
Thursday, September 12, 2019

Chieko Asakawa: “AI is going to allow blind people to see the world.”






C. Asakawa lost her sight at 14.
At 24 she began programming for the blind.
At 26 she joined IBM's research team.
She has done much important work on accessibility
for the blind in every field you can think of
(a word processor for Braille, a digital library, etc.)
One of the world’s premier accessibility researchers 
provides insight into how AI is beginning to transform 
the lives of the visually impaired.
IBM Fellow Dr. Chieko Asakawa has dedicated her career to developing technology that makes the world more accessible for people with disabilities. Blind since the age of 14, Chieko has helped develop several pioneering accessibility technologies, including the 
earliest practical voice browser in the 1990s, which further opened the Internet to the
visually impaired. As a visiting faculty member at Carnegie Mellon University, she is now leading an effort to develop an artificial intelligence-powered navigation system for the 
blind and other disabled populations.
What AI technologies are you most excited about?
AI is going to allow blind people to “see” the world—and explore it. Right now, we are working on the 
Cognitive Assistance Project for Visual Impairment. People with vision see
the things around them, so they always have some context. For us [the blind], we
don't have contextual information. We can get it from AI, but only when technology like computer vision is connected to knowledge and when the Internet of Things provides location information. Also, vision and knowledge need to reach the human very quickly
and interact with them, something that is very important when an AI system is,
for example, giving directions and providing context about the things around you while 
you are walking. We are working on this in our open source NavCog app.
In your 2015 TED talk, you gave examples of accessibility innovations 
that ultimately went on to serve uses for other populations. 
Do you think NavCog will follow this same path?
Helping the blind will be one of the hardest challenges for researchers, but the 
technology will help others, too. NavCog can be useful for people in wheelchairs. 
Recently, when I was travelling in New York City with a big suitcase, I couldn’t find 
an elevator anywhere and I had to carry the suitcase upstairs by myself. 
Think of the same situation for someone in a wheelchair who needs a route to 
avoid steps. And the elderly, too. They need elevators and maybe help finding 
shops or recognizing items in a store if their vision isn’t good. GPS doesn’t help 
when you’re indoors.
Even people without accessibility issues can benefit. Think about when you travel to
a foreign country and you can't read the signs or food labels or find a type of shop
because you don’t know the language.
What technology allows the NavCog app to work indoors?
We use machine learning to teach the system to leverage sensors in smartphones as
well as Bluetooth radio waves from beacons to determine your location.
To provide the detailed information that the visually impaired need to explore the real world, beacons have to be placed every 5 to 10 meters. These can be built into
building structures pretty easily today.
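As an illustration of how beacon signals can be turned into a position, one simple approach (a sketch only, not NavCog's actual algorithm) is to convert each beacon's received signal strength (RSSI) into a rough distance with a log-distance path-loss model, then weight each beacon's known position by proximity. The beacon map, `tx_power`, and path-loss exponent below are all assumed values for illustration.

```python
# Hypothetical beacon map: beacon ID -> (x, y) position in meters.
BEACONS = {"b1": (0.0, 0.0), "b2": (8.0, 0.0), "b3": (4.0, 6.0)}

def rssi_to_distance(rssi, tx_power=-59, path_loss_exp=2.0):
    """Log-distance path-loss model: estimate distance (m) from RSSI (dBm).
    tx_power is the expected RSSI at 1 m; both constants are illustrative."""
    return 10 ** ((tx_power - rssi) / (10 * path_loss_exp))

def estimate_position(readings):
    """Weight each beacon's known position by the inverse of its
    estimated distance, so closer (louder) beacons dominate."""
    wx = wy = wsum = 0.0
    for beacon_id, rssi in readings.items():
        d = rssi_to_distance(rssi)
        w = 1.0 / max(d, 0.1)  # avoid division by ~0 at very close range
        x, y = BEACONS[beacon_id]
        wx += w * x
        wy += w * y
        wsum += w
    return (wx / wsum, wy / wsum)

# A phone hearing b1 loudly and b2/b3 faintly should land near b1 at (0, 0).
print(estimate_position({"b1": -55, "b2": -75, "b3": -78}))
```

In practice raw RSSI is far noisier than this, which is exactly why the interview's next answer turns to machine learning.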
How does machine learning optimize the beacons?
Radio waves aren't always the same – they move. So we have to use machine learning
to help the system calculate your most likely location. Thanks to machine learning,
we can achieve accuracy of location within 1–2 meters, which is important for the 
system to be effective for the user. We are continuing to work on the machine learning algorithms to improve accuracy and reduce the number of beacons that need to 
be installed.
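A common machine-learning technique for this kind of radio positioning is fingerprinting: record RSSI readings at surveyed positions, then match a live reading against them with k-nearest neighbors. The sketch below is a hypothetical illustration of that general technique, not NavCog's implementation; the fingerprint database and beacon count are invented.

```python
# Hypothetical fingerprint database: surveyed (x, y) positions -> typical
# RSSI readings (dBm) from three beacons. A real survey has many more points.
FINGERPRINTS = [
    ((0.0, 0.0), [-50, -75, -80]),
    ((5.0, 0.0), [-70, -55, -78]),
    ((2.5, 5.0), [-72, -74, -52]),
    ((2.5, 2.5), [-62, -63, -65]),
]

def estimate(reading, k=2):
    """k-nearest-neighbor fingerprinting: average the k surveyed positions
    whose recorded RSSI vectors are closest (squared Euclidean distance)
    to the live reading."""
    ranked = sorted(
        FINGERPRINTS,
        key=lambda fp: sum((a - b) ** 2 for a, b in zip(reading, fp[1])),
    )
    nearest = [pos for pos, _ in ranked[:k]]
    return (sum(x for x, _ in nearest) / k, sum(y for _, y in nearest) / k)

print(estimate([-52, -73, -79]))  # → (1.25, 1.25), between the two closest survey points
```

Averaging over k neighbors smooths out the radio-wave fluctuation the interview mentions; denser fingerprints and better models are how accuracy improves while fewer beacons are needed.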
When AI is taken to the furthest extreme in our lifetimes,
what will that look like?
Thirty years from now, we can expect many disabilities to be augmented by
AI—not just for the blind and people in wheelchairs—because
AI-based cognitive assistants will supplement any of your senses. So imagine in 
the future, you will be able to access information any time without vision or 
without hearing. With AI, many disabilities will no longer be as big of an issue. 
I wish I was born 30 years later. But maybe even before then, self-driving cars may 
become available for the blind. I cannot wait until that time comes.
https://www.ibm.com/watson/advantage-reports/future-of-artificial-intelligence/chieko-asakawa.html
