There’s no doubt that smartphones make our lives easier. As smartphone technology becomes more sophisticated with each passing year, however, it’s worth asking: are these devices truly serving the needs of everyone who requires assistance in their daily lives? Could mobile technology reach more people who genuinely need it? Aipoly Vision is a remarkable app for people with poor vision and no access to vision correction. Restoring independence to the visually impaired? There’s an app for that.

The Aipoly app uses artificial intelligence to recognize its surroundings and speak the names of the objects it sees, as an aid to visually impaired users. The app’s makers claim the software improves the more it’s used, and it can describe objects aloud in seven languages.


How does it work?

When a user snaps a picture in Aipoly, the image is automatically uploaded to Aipoly’s servers, where it is analyzed and tagged. A description is then sent back to the user and read aloud using text-to-speech. From reading street signs to letting blind parents answer their children’s questions about their surroundings, Aipoly can offer convenience and an improved quality of life to its users.
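The round trip described above can be sketched in a few lines. This is purely illustrative: the function names, the tag format, and the stubbed-out server step are assumptions for the sake of the example, not Aipoly’s real API.

```python
# Hypothetical sketch of Aipoly's snap -> upload -> describe -> speak loop.
# analyze_image() stands in for the server-side recognition step; a real
# client would upload the image bytes and parse the server's reply.

def analyze_image(image_bytes):
    # Placeholder: pretend the server returned these tags for the photo.
    return {"tags": ["car", "silver", "shiny"]}

def build_description(tags):
    # Turn the returned tags into a sentence for the text-to-speech engine.
    return "This looks like: " + ", ".join(tags) + "."

result = analyze_image(b"...jpeg data...")
print(build_description(result["tags"]))
# This looks like: car, silver, shiny.
```

In the real app this final string would be handed to the phone’s text-to-speech engine rather than printed.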

Who is behind Aipoly?

The app was developed by Marita Cheng and Alberto Rizzoli, technologists who collaborated at Singularity University to create something that would be useful to the 285 million vision-impaired people around the world. According to Cheng and Rizzoli, two-thirds of this population will become smartphone users in the next five years.

Inspiration struck when they attended a presentation by IBM Watson Group CTO Robert High. High demonstrated some of the celebrity supercomputer’s capabilities—show it a picture and it can provide a semantic, conversational description of what’s happening in it. “We started looking into technologies to recognize images,” Rizzoli told the Daily Dot. “We learned about neural networks and integrated this into an application. It’s the simplest possible process for a user to identify an image: press a button, receive an audio description.”

The Aipoly software works by dividing an image into sections and running reverse image searches on them. It identifies the nouns in a picture—“car,” “battery,” “dog”—as well as the adjectives, like “silver” or “shiny.” Then artificial intelligence steps up to the plate to turn the computer’s understanding of the image into something for a human to digest. Audio playback might tell a visually impaired user that he is looking at “a shiny, silver car.”
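The final composition step—joining detected adjectives and a noun into a natural phrase—might look like the sketch below. The function name and phrasing rules are assumptions for illustration, not Aipoly’s actual logic.

```python
# Illustrative sketch: combine detected adjectives and a noun into the
# kind of spoken phrase described above, e.g. "a shiny, silver car".

def compose_phrase(nouns, adjectives):
    # Use the first detected noun; fall back to a generic word if none.
    noun = nouns[0] if nouns else "object"
    if adjectives:
        return "a " + ", ".join(adjectives) + " " + noun
    return "a " + noun

print(compose_phrase(["car"], ["shiny", "silver"]))  # a shiny, silver car
print(compose_phrase(["dog"], []))                   # a dog
```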

This is still an experimental technology. Once perfected, a visually impaired individual might be able to use this app to recognize what’s on a plate of food or to take pictures of their children to identify how they’re dressed. Rizzoli told us about one user who was passionate about cars, so they walked around a parking lot together until they successfully identified a Tesla using the app.

For now, there is some human help taking place behind the scenes to help Aipoly accurately identify images, but Rizzoli tells us it will soon be 100 percent software-based. He has big ambitions for the future as well, and imagines using Aipoly to create something of a Google Street View for the blind. “We can build a virtual model of the world so that users don’t have to keep scanning their environment,” he said. “The info is already there, and Aipoly would one day provide them with real-time feedback.”

Rizzoli is proud of the autonomy that the app might afford to those with vision impairments. “It makes the visually impaired more independent, and it enables them to explore the world.”

You can download the app on the App Store.

The demo video shows the app in action.