Alexa, Google Assistant, Cortana: voice assistants are changing the way we shop, search, communicate, and even live. At least for most people. But what about those without a voice? What about those who cannot hear? Around 466 million people worldwide have disabling hearing loss. With the SIGNS Project, we are raising awareness of digital accessibility and inclusion.
SIGNS is the world's first smart voice assistant solution for people with hearing loss. It's an innovative smart tool that recognizes and translates sign language in real-time and then communicates directly with a selected voice assistant service (e.g. Amazon Alexa, Google Assistant or Microsoft Cortana). SIGNS is reinventing voice – one gesture at a time. Many people with hearing loss use their hands to speak. And that's all they need to talk to SIGNS. How's the weather tomorrow? Change lights to blue. Find an Italian restaurant. Just sign, and SIGNS will answer.
Many people with hearing loss use their hands to speak. This is their natural language; their hands are their voice. Voice assistants, however, use natural language processing to decipher and react only to audible commands. No sound means no reaction. SIGNS bridges the gap between deaf people and voice assistants by recognizing gestures and communicating directly with existing voice assistant services (e.g. Amazon Alexa, Google Assistant or Microsoft Cortana).

SIGNS is based on an intelligent machine learning framework that is trained to identify body gestures with the help of an integrated camera. These gestures are converted into a data format that the selected voice assistant understands. The voice assistant processes the data in real-time and replies back to SIGNS, and the answer is immediately displayed either as text or as visual feedback.

SIGNS thus replaces the voice assistant's typical audio-based communication with a visual one, and not only by displaying words. The visual interface of SIGNS fulfills the various requirements of an intuitive experience and follows the basic principles of sign language. To that end, the SIGNS dictionary was developed: a set of symbols inspired by the corresponding hand movements. As with other voice assistant devices, the user interacts naturally with the device.
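The pipeline described above (camera input, gesture recognition, translation into a text command the assistant understands, and a visual reply) can be sketched in a few lines. This is a minimal illustration only: every name below (Frame, recognize_gesture, ask_assistant, the SIGN_TO_TEXT mapping) is a hypothetical stand-in, not the actual SIGNS implementation or any real assistant API.

```python
from dataclasses import dataclass


@dataclass
class Frame:
    """A single camera frame (placeholder for real image data)."""
    data: bytes


def recognize_gesture(frame: Frame) -> str:
    """Stand-in for the trained ML model that classifies a sign.

    A real system would run a gesture-recognition model on the
    camera frame here; we return a fixed label for illustration.
    """
    return "weather_tomorrow"


# The "SIGNS dictionary" idea: recognized signs are mapped to text
# commands in a format the voice assistant can process.
SIGN_TO_TEXT = {
    "weather_tomorrow": "How's the weather tomorrow?",
    "lights_blue": "Change lights to blue",
    "find_italian": "Find an Italian restaurant",
}


def ask_assistant(command: str) -> str:
    """Stand-in for the call to Alexa / Google Assistant / Cortana."""
    return f"(assistant reply to: {command})"


def handle_frame(frame: Frame) -> str:
    """Camera frame -> recognized sign -> text command -> visual reply."""
    sign = recognize_gesture(frame)
    command = SIGN_TO_TEXT[sign]
    reply = ask_assistant(command)
    # In SIGNS, the reply is shown as text or visual feedback, not audio.
    return reply


if __name__ == "__main__":
    print(handle_frame(Frame(data=b"")))
```

The key design point this sketch reflects is that the gesture recognizer and the assistant never talk directly: the dictionary in the middle translates visual language into the text commands existing assistants already accept, which is what lets SIGNS plug into those services unchanged.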