Jan 2022 - May 2022
Inclusive 3D Immersive Event Experience for the Deaf Community
I enhanced Microsoft’s digital event experience for the Deaf community by integrating 3D sign language avatars, powered by AI/ML models, into Teams through Microsoft Mesh. The solution provided real-time translation for multiple sign languages (e.g., ASL, BSL, and JSL), reducing reliance on live interpreters and fostering greater autonomy for Deaf participants. By applying advanced AI to diverse communication needs, it significantly improved accessibility and promoted inclusiveness, ensuring meaningful engagement for users across regions.


NextGen Innovations: Cutting-Edge Tech

THE PROBLEM
While working on the end-to-end experience of Microsoft’s Event Management system, I discovered a significant gap, and an opportunity, for the Deaf user community. Traditional accessibility in digital events relies heavily on human sign language interpreters, which introduces several challenges: high costs, limited availability, scheduling complexity, and inconsistent contextual accuracy. Supporting multiple sign languages, such as American Sign Language (ASL), British Sign Language (BSL), and Japanese Sign Language (JSL), is essential for inclusiveness, yet scaling interpreter services across digital platforms and regions is increasingly difficult.
To overcome these challenges, I needed a scalable, real-time solution capable of handling linguistic variation across sign languages, minimizing dependence on human interpreters, and significantly reducing operational costs, all while enhancing accessibility and inclusiveness for every participant.

Problem Validation
To ensure success, I conducted interviews with Deaf users at Microsoft, validating the concept with native signers of ASL, BSL, and other sign languages. Users expressed excitement about the multilingual support and the convenience of consistent, real-time translation for improving their event experience. This project built on earlier work, such as the Kinect Sign Language Translator (2013), which laid the groundwork for AI-powered sign language translation.
The interviews surfaced four recurring pain points with interpreter-based accessibility:
- Limited Availability: Interpreters may not always be available for specialized events, leading to missed communication opportunities.
- Support for Multiple Sign Languages: Users emphasized the need for native sign language options (e.g., ASL, BSL, JSL) to avoid exclusion.
- Contextual Accuracy Issues: Interpreters often struggle with technical jargon and event-specific terms.
- Inconsistent Quality: Varying skill levels and contextual understanding among interpreters can impact communication clarity.
THE SOLUTION
I conceptualized a system where 3D avatars perform real-time sign language translation across multiple sign languages, powered by AI/ML models trained on gesture recognition. These avatars would be integrated into platforms like Microsoft Teams, enabling seamless accessibility for any hosted event, irrespective of the sign language needed.
Key technologies envisioned (a simplified pipeline sketch follows below):
- Microsoft Mesh for creating 3D avatars
- AI/ML for multilingual sign language translation
- Teams for event integration
This concept is currently under patent review with Microsoft.
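To make the envisioned architecture concrete, here is a minimal sketch in Python of how the three pieces could be wired together. It is illustrative only: names such as GlossTranslator, AvatarRenderer, GestureFrame, and on_caption are hypothetical placeholders I introduce for this sketch, not Microsoft Mesh or Teams APIs, and a production system would run each stage as a streaming service.

# Hypothetical sketch of the real-time translation pipeline. None of
# these classes correspond to actual Mesh/Teams APIs; they stand in for
# the three envisioned stages: caption capture, multilingual gloss
# translation, and 3D avatar animation.

from dataclasses import dataclass

@dataclass
class GestureFrame:
    """One unit of avatar animation: a named sign plus its timing."""
    sign_id: str
    duration_ms: int

class GlossTranslator:
    """Maps spoken-language text to a sequence of sign glosses for a
    target sign language (e.g., 'asl', 'bsl', 'jsl')."""
    def __init__(self, target_language: str):
        self.target_language = target_language

    def translate(self, text: str) -> list:
        # A real system would run a trained translation model here;
        # this placeholder just tokenizes the caption text into glosses.
        return [token.strip(",.?!").upper() for token in text.split()]

class AvatarRenderer:
    """Drives a 3D avatar (e.g., a Microsoft Mesh avatar) by emitting
    one gesture frame per gloss."""
    def play(self, frames):
        for frame in frames:
            print(f"avatar -> {frame.sign_id} ({frame.duration_ms} ms)")

def on_caption(caption, translator, renderer):
    """Called for each live caption segment from the event audio feed."""
    glosses = translator.translate(caption)
    frames = [GestureFrame(sign_id=g, duration_ms=400) for g in glosses]
    renderer.play(frames)

# Each attendee picks a sign language; the same caption stream fans out
# to one translator per requested language.
translators = {lang: GlossTranslator(lang) for lang in ("asl", "bsl", "jsl")}
renderer = AvatarRenderer()
on_caption("Welcome to the keynote", translators["asl"], renderer)

The per-language fan-out is the key design point: supporting an additional sign language means adding a translation model rather than scheduling another interpreter, which is what allows the approach to scale across regions.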

IMPACT
- Cost Reduction: Removed the need for live interpreters across different languages.
- Scalability: Enabled real-time support for multiple sign languages in a single system.
- User Delight: Deaf users across regions praised the accessibility improvements and the inclusion of their native sign languages.
LESSONS LEARNED
This project highlighted the importance of contextual accuracy when translating between different sign languages. Training large AI models across ASL, BSL, and other sign languages is essential to improving the technology further. Building on earlier Microsoft accessibility innovations like the Kinect Sign Language Translator, I extended that legacy into the future of multilingual accessibility.
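As an illustration of that multi-language training direction, the sketch below shows one common pattern from sign language recognition research: a shared encoder over body and hand keypoint sequences with a separate classification head per sign language, so ASL, BSL, and JSL data all improve the shared gesture representation. The architecture, dimensions, and vocabulary sizes here are invented for illustration and do not describe the production model.

# Illustrative sketch (not the actual model): shared keypoint encoder
# with per-language classification heads. All sizes are assumptions.

import torch
import torch.nn as nn

class MultiSignLanguageModel(nn.Module):
    def __init__(self, keypoint_dim: int = 150, hidden: int = 256,
                 vocab_sizes: dict | None = None):
        super().__init__()
        # Hypothetical vocabulary sizes: distinct signs per language.
        vocab_sizes = vocab_sizes or {"asl": 2000, "bsl": 1800, "jsl": 1500}
        # Shared GRU encoder over per-frame keypoint vectors, shaped
        # (batch, time, keypoint_dim), e.g., 75 (x, y) landmarks = 150.
        self.encoder = nn.GRU(keypoint_dim, hidden, batch_first=True)
        # One classification head per sign language.
        self.heads = nn.ModuleDict(
            {lang: nn.Linear(hidden, size) for lang, size in vocab_sizes.items()}
        )

    def forward(self, keypoints: torch.Tensor, lang: str) -> torch.Tensor:
        _, last_hidden = self.encoder(keypoints)         # (1, batch, hidden)
        return self.heads[lang](last_hidden.squeeze(0))  # (batch, vocab)

# Toy usage: a batch of 4 clips, 60 frames each, classified as ASL signs.
model = MultiSignLanguageModel()
clips = torch.randn(4, 60, 150)
logits = model(clips, lang="asl")
print(logits.shape)  # torch.Size([4, 2000])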