
Here Are The Ways AI Is Helping To Improve Accessibility

Today marks the seventh Global Accessibility Awareness Day, a celebration of inclusion and digital access for people with disabilities. Microsoft took the opportunity to unveil the Xbox Adaptive Controller, a gaming controller designed to accommodate a range of special needs, and Apple announced that its Everyone Can Code curricula for the Swift programming language will come to schools serving vision- and hearing-impaired students.

Neither of those announcements has much to do with artificial intelligence, but increasingly, tech firms are enlisting the help of AI to build accessible, inclusive products. Last week, Microsoft committed $25 million to its AI for Accessibility program with the aim to “assist people with disabilities with work, life, and human connections,” and Facebook recently said it’s collecting data from disabled users to inform its design decisions.

That’s just the tip of the iceberg. Already, text-to-speech and object recognition AI is improving the lives of the roughly 40 million people in the U.S. with vision and speech impairments. And in the not-too-distant future, self-driving cars will afford house- and wheelchair-bound folks the freedom to travel without the assistance of a caregiver, friend, or family member — some for the first time in their lives.

Smart home speakers and voice assistants

Smart home speakers like Google Home, Amazon’s Echo series, and Apple’s HomePod have given digital voice assistants a new lease on life — and they aren’t just a lazy way to queue up your favorite podcasts. For people with certain disabilities, they’re a godsend.

“The Echo Dot makes me feel included,” Ellie Southwood, chair of the Royal National Institute of Blind People, said at TechShare Pro, a U.K.-based conference focused on AI, disability, and inclusive design, held in November 2017. “I spend far less time searching for things online; I can multitask while online and be more.”

When paired with smart home appliances, smart home speakers become even more powerful. People with sight loss and physical ailments can switch on lights without having to fumble around for wall switches, and adjust the temperature with a voice command.

Developers have engineered even niftier uses for home speakers and voice assistants. One hobbyist paired a Raspberry Pi development board with the Alexa Voice Service, Amazon’s platform for embedding the Alexa assistant in third-party hardware, to add voice controls to a motorized wheelchair.
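
That project ran the Alexa Voice Service client on the Pi itself. As a rough illustration of the voice-to-motor plumbing, here is a minimal sketch using the Alexa Skills Kit’s Python SDK instead, with a hypothetical MoveForwardIntent relaying drive commands over MQTT to the chair’s Pi. The intent name, topic, and broker address are all assumptions, not details from the original project.

```python
# Minimal sketch (not the hobbyist's actual code): a custom Alexa skill
# handler that publishes drive commands to an MQTT broker. A Raspberry Pi
# on the wheelchair would subscribe to the same topic and toggle its motor
# controller's GPIO pins. Intent name, topic, and broker are hypothetical.
import paho.mqtt.publish as publish
from ask_sdk_core.skill_builder import SkillBuilder
from ask_sdk_core.dispatch_components import AbstractRequestHandler
from ask_sdk_core.utils import is_intent_name

MQTT_BROKER = "broker.example.com"  # hypothetical broker address
MQTT_TOPIC = "wheelchair/drive"     # hypothetical command topic


class MoveForwardHandler(AbstractRequestHandler):
    """Handles 'Alexa, tell my chair to move forward.'"""

    def can_handle(self, handler_input):
        return is_intent_name("MoveForwardIntent")(handler_input)

    def handle(self, handler_input):
        # Relay the command; a Pi-side subscriber drives the motors.
        publish.single(MQTT_TOPIC, payload="forward", hostname=MQTT_BROKER)
        return handler_input.response_builder.speak("Moving forward.").response


sb = SkillBuilder()
sb.add_request_handler(MoveForwardHandler())
lambda_handler = sb.lambda_handler()  # entry point when hosted on AWS Lambda
```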

Speech-to-text and text-to-speech

Smart home devices just scratch the surface of voice recognition’s potential.

Enter Voiceitt, an app for people with speech impediments — specifically those recovering from strokes and brain injuries, and people affected by cerebral palsy, Parkinson’s, Down syndrome, and other chronic health conditions. It learns each speaker’s pronunciation over time and converts nonstandard speech into normalized audio and text that can be exported.
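
Voiceitt’s per-speaker models are proprietary, but the gap they close is easy to see with a generic recognizer. Below is a minimal sketch of an off-the-shelf speech-to-text pass using the open-source SpeechRecognition library; the audio file path is a placeholder.

```python
# A minimal sketch of a generic speech-to-text pipeline using the
# open-source SpeechRecognition library. This is not Voiceitt's
# proprietary per-speaker model, just the standard recognition step
# such tools build on.
import speech_recognition as sr

recognizer = sr.Recognizer()

# "sample.wav" is a placeholder path to a recorded utterance.
with sr.AudioFile("sample.wav") as source:
    audio = recognizer.record(source)  # read the entire file

try:
    text = recognizer.recognize_google(audio)  # free Google Web Speech API
    print("Transcript:", text)
except sr.UnknownValueError:
    # Nonstandard speech often lands here; that is the gap
    # personalized models like Voiceitt's aim to close.
    print("Speech was unintelligible to the generic recognizer.")
```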

Google’s DeepMind division, meanwhile, is using AI to generate closed captions for deaf users. In a 2016 joint study with researchers at the University of Oxford, DeepMind trained a lip-reading algorithm on more than 5,000 hours of television spanning a vocabulary of 17,500 unique words. The resulting model significantly outperformed a professional lip-reader, transcribing 46.8 percent of words without error in 200 randomly selected clips, compared to the professional’s 12.4 percent.

“Everyone wins when we harness AI,” Kiran Kaja, technical program manager for search accessibility at Google, told the audience at last year’s TechShare Pro. “Voice recognition was developed for disabled people, but it’s the hot item at the moment and is useful for everyone. The same with speech-to-text technology, which is completely based on [neural] networks.”

Automatic image recognition

Screen-reading programs help blind and vision-impaired people navigate websites, but most websites contain images, and not every image has an appropriate title or alt text.

One solution is AI that can classify photographs automatically. Facebook has developed captioning tools that describe photos to visually impaired users, and Google’s Cloud Vision API can understand the context of objects in photos. It might label a picture of a jack-o’-lantern with “pumpkin,” “carving,” “Halloween,” and “holiday,” for example.
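
For a concrete sense of how that labeling works, here is a minimal sketch using the Cloud Vision API’s official Python client (the file name is a placeholder, and the client-library surface has shifted across versions). An accessibility tool could turn the top labels into alt text.

```python
# A minimal sketch of label detection with Google's Cloud Vision API,
# via the official google-cloud-vision Python client. Requires a GCP
# project with the Vision API enabled; "pumpkin.jpg" is a placeholder.
from google.cloud import vision

client = vision.ImageAnnotatorClient()

with open("pumpkin.jpg", "rb") as f:
    image = vision.Image(content=f.read())

response = client.label_detection(image=image)
for label in response.label_annotations:
    # e.g. "pumpkin", "carving", "halloween" with confidence scores
    print(f"{label.description}: {label.score:.2f}")
```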

Another powerful computer vision platform, Microsoft’s Seeing AI, can read handwritten text, describe colors and scenes, and more. In a memorable demo at Microsoft’s Build 2016 keynote, Saqib Shaikh, tech lead for Microsoft’s AI and research division, used a pair of smart glasses running the underlying vision AI to recognize colleagues’ faces and emotions.
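
Seeing AI is built on Microsoft’s Cognitive Services, whose Computer Vision REST API exposes a similar scene-description capability. The sketch below is an assumption-laden illustration, not Seeing AI’s actual code; the endpoint region, API version, key, and file name are placeholders.

```python
# A minimal sketch of the scene-description capability underlying
# Seeing AI, via Microsoft's Cognitive Services Computer Vision REST
# API. Endpoint region, API version, key, and file are placeholders.
import requests

ENDPOINT = "https://westus.api.cognitive.microsoft.com/vision/v2.0/describe"
headers = {
    "Ocp-Apim-Subscription-Key": "YOUR_KEY_HERE",  # placeholder key
    "Content-Type": "application/octet-stream",
}

with open("scene.jpg", "rb") as f:
    response = requests.post(ENDPOINT, headers=headers, data=f.read())
response.raise_for_status()

# The service returns ranked natural-language captions, e.g.
# "a group of people sitting around a table".
for caption in response.json()["description"]["captions"]:
    print(f'{caption["text"]} (confidence {caption["confidence"]:.2f})')
```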

Abstractive summarization

People with cognitive impairments such as attention deficit disorders, as well as people with low literacy skills, stand to benefit from AI, too.

In 2016, the Google Brain team published a model for TensorFlow, Google’s open source machine learning framework, that generates single-sentence summaries of news articles. And just last year, researchers at Salesforce developed a machine learning algorithm that can distill an article, email, or lengthy document into a single succinct paragraph.
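
Neither of those specific models is shown here, but the open-source Hugging Face transformers library offers a quick way to see abstractive summarization in action. The sketch below uses its default summarization model as a stand-in.

```python
# A minimal sketch of abstractive summarization using the open-source
# Hugging Face transformers library, a stand-in for (not a
# reimplementation of) the Google Brain and Salesforce models above.
from transformers import pipeline

summarizer = pipeline("summarization")  # downloads a default seq2seq model

article = (
    "Long news article text goes here. The model reads the full passage "
    "and generates a short abstractive summary rather than copying lines."
)

summary = summarizer(article, max_length=40, min_length=10, do_sample=False)
print(summary[0]["summary_text"])
```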

Self-driving cars

Autonomous cars and other forms of self-driving transportation promise unprecedented freedom for house-bound disabled people. Hearing- and vision-impaired folks are among that group, but so are the elderly and the more than 400,000 people in the U.S. with Down syndrome.

The obvious benefit of the self-driving technologies developed by Google’s Waymo, Uber, Drive.ai, Toyota, GM, and others is increased mobility for people who’d otherwise be confined to their homes. One in four disabled people experience loneliness on a typical day as a result of physical and social isolation, according to the U.K. disability charity Sense, and autonomous transportation could promote more social lifestyles.

It could also help those folks find jobs. According to the Ruderman Family Foundation, autonomous cars could help as many as 2 million disabled people get to work.

Google’s Waymo, which recently kicked off public tests of its self-driving technology in Phoenix, Arizona, is already incorporating elements of accessible design into its cars. Project manager Juliet Rothenberg told The Washington Post that the team is experimenting with audible signals for blind users, as well as dashboard buttons marked in Braille.

A long way to go

Despite encouraging signs of progress in AI for accessibility, there’s still a long road ahead.

“Developers really need to consider accessibility,” Jennison Asuncion, accessibility engineering manager at LinkedIn, said in a phone interview. “They need to be good engineers and incorporate accessibility. They need to remember that they’re not just designing things for themselves […] but for the broader population.”

Inaccessible design is one reason disabled people are about three times as likely as those without a disability to say they never go online, according to a 2016 Pew Research Center survey. They’re also roughly 20 percentage points less likely to subscribe to home broadband or to own a computer, smartphone, or tablet.

“We need more tools to help automate accessibility,” Asuncion said. “People with disabilities want to have fun and do the stuff that everyone else can do, [and] we’re starting to see the benefits of inclusive design. More companies are beginning to come on board.”

Source: Here Are The Ways AI Is Helping To Improve Accessibility
