(Note: This article appeared in the Winter 2018 edition of the TAVIP Newsletter.)
Visual Assistance, an evolving space within the field of Assistive Technology (AT), has been gaining traction in recent years, with many describing it as a “game-changing” revolution for the sight loss sector. In brief, the sector consists of two distinct approaches: Artificial Intelligence (AI), or utilising a third-party sighted assistant via an internet-connected device’s camera, to aid in interpreting and understanding the world around you from a non-tangible perspective. That is to say, excluding FIY (feel it yourself), the only remaining option is to have someone, or something, tell you what it is you are looking at.
There is a range of “smart” apps available; most are cross-platform, with a few restricted to either Apple’s iOS or Google’s Android devices. These apps rely on a range of AI-style feature sets to interpret and define what the camera can see. Uses range from OCR (the reading of text) and colour and currency recognition, through to more complex tasks such as object identification and facial recognition. Whilst the majority of these apps are found within the smartphone ecosystem, a few are device-independent platforms opting for hands-free usage, such as Orcam and CyberEyes, which utilise headset/smart-glasses technology.
One of the big advantages of these solutions is cost: with a fixed purchase price and principally operating offline, you can buy these products/apps and get pretty good results with little to no training or assistance. Where they often fall short is in two respects. Firstly, spatial awareness: knowing where to point a camera when you cannot “see” is often a challenge for our community. Secondly, filtering exactly what the user actually wishes to know about can be a complex conundrum, with a tendency to deliver excessive detail about a whole scene or visual panorama when in fact you just wished to know about something immediately in front of you.
Another solution sees users, via either smartphone or smart glasses, assisted through live interaction with a person. Beginning with Be My Eyes (BME), a free service powered by a global network of volunteers, blind users can request assistance with a variety of tasks from someone viewing their smartphone’s camera feed. Where the BME product has been found lacking is in several key areas: notably, the consistency of volunteer availability; security concerns as to whom you are sharing your camera feed with; and the lack of supporting context for the volunteer, which is to say they may be asked to assist with a task requiring more than your camera feed alone to complete.
This is where the latest iteration within this space finds its feet especially well. Aira, pronounced “Ira”, is a paid subscription service whereby users are connected to highly trained, resourceful and security-vetted individuals who are paid, so service standards are both maintained and assured. Aira’s principal advantages are its availability; the background/security checks of its staff; the dashboard that agents use, which features not only your camera feed but also a map showing your location to aid in navigation; and a profile giving the agent information to better assist you in a customised way. This might include how you like directions provided, through to whether or not you are a cane or guide dog user.
The downside to Aira, of course, is the ongoing monthly subscription fee, although Aira would be quick to highlight that it is working hard to forge relationships with organisations to provide Aira at certain locations as a complimentary service to the sight loss community; targeted locations include transit hubs, grocery stores, shopping malls and academic institutions. One would be optimistic that the future may see a situation where a blind person can receive a service such as Aira in many of the public locations they frequent regularly. This, though, does not address the requirements of home life.
So where does this ultimately leave us? Assume you have not established what you need to know via the traditional FIY approach, and you are considering reaching for a tech solution. Do you first try AI to solve your needs, or jump straight to the human who can filter out the noise and leap right in at the point where you needed a little support? The likely truth is that a mixed approach is where the sector will end up; indeed, Aira is making moves into this space, where users will first be given the option of solving their visual assistance task with Chloe, the firm’s AI engine, only contacting an agent if and when that fails. One would logically assume that products such as CyberEyes will improve their filtering and their capacity to reduce the noise, focusing more on what a user is likely to want to know about rather than calling out everything in the field of view.
One thing is for sure: this sector has, in its limited life-cycle, already produced some amazing advancements and options for our community, and the future certainly seems exciting to this commentator.
Produced for TAVIP by Neil Barnfather – the author asserts and reserves all rights.
To read more about Neil and make contact, please visit www.NeilBarnfather.com