
Apple, Google training their voice assistants to understand people with speech disabilities


According to the National Institute on Deafness and Other Communication Disorders, approximately 7.5 million people in the U.S. have trouble using their voices. This group is at risk of being left behind by voice-recognition technology. But in 2021, the push to make technology accessible to everyone is gathering pace, and tech firms, including Apple and Google, are working to improve their voice assistants so they can understand atypical speech. They are now trying to train voice assistants to understand everyone.

“For someone who has cerebral palsy and is in a wheelchair, being able to control their environment with their voice could be super useful to them,” said Julie Cattiau, a product manager at Google. The company is collecting atypical speech data as part of an initiative to train its voice-recognition tools. Training voice assistants such as Siri and Google Assistant could improve the voice-recognition experience for a number of groups, including seniors with degenerative diseases.

Apple is working to help Siri automatically detect if someone speaks with a stutter

Apple debuted its Hold to Talk feature on handheld devices in 2015. It gives users control over how long Siri listens to them, which prevents the assistant from interrupting users who stutter before they have finished speaking. Now, Apple is working to help Siri automatically detect whether someone speaks with a stutter. The company has built a bank of 28,000 audio clips from podcasts featuring stuttering to help its assistant recognize atypical speech.

Google’s Project Euphonia is the company’s initiative to train its software to understand unique speech patterns. As part of it, Google is testing a prototype app that lets people with atypical speech communicate with Google Assistant and Google Home smart devices. The company hopes the speech samples it collects will help train its artificial intelligence on the full spectrum of speech.

Amazon isn’t far behind with its Alexa voice assistant. The company has announced Alexa integration with Voiceitt, which lets people with speech impairments train an algorithm to recognize their own unique vocal patterns.

Source: WSJ


Prakhar Khanna

I’ve been associated with the tech industry since 2014, when I built my first blog. I’ve worked with Digit, one of India’s largest tech publications. Currently, I work as a News Editor at Pocketnow, where I get paid to use and write about cutting-edge tech. You can reach out to me at [email protected]
