In a recent study, researchers evaluated the effectiveness of artificial intelligence-based assistive technologies in improving the ability of participants with vision impairment to perform daily tasks. They assessed both objective task performance and user satisfaction across smart glasses and mobile applications and found that AI-based technologies offered significant benefits. However, not all tools performed equally across tasks.
The cross-sectional, counterbalanced, crossover study, recently published in Translational Vision Science & Technology, enrolled 25 participants with varying levels of vision impairment, from no light perception to moderate vision impairment (best-corrected visual acuity = 20/26 to 20/200). Causes included age-related macular degeneration (9 participants), glaucoma (6 participants), diabetic retinopathy (5 participants), retinitis pigmentosa (3 participants), and optic neuropathy (2 participants). Participants ranged in age from 22 to 78 years, with a mean age of 54.3 years.
The participants completed tasks across three categories:
- Text-based tasks – Reading documents, invoices, handwritten lists, medication labels, banknotes, and street signs.
- Text-in-column tasks – Extracting information from a table of contents and a TV guide.
- Searching and identifying tasks – Identifying objects, recognizing colors, scanning barcodes, matching faces, and describing scenes.
They used four AI-driven assistive technologies:
- Smart Glasses: OrCam MyEye 2 Pro and Envision Glasses
- Mobile Applications: Seeing AI (iOS) and Google Lookout (Android)
Each participant performed tasks both with and without AI assistance to determine improvements in task completion rates and time efficiency. Researchers also collected data on usability and participant experience using standardized scales.
Led by William Seiple, PhD, of the Department of Ophthalmology at the New York University Grossman School of Medicine, the investigators found that AI-assisted tools significantly improved participants' ability to complete text-based tasks. The likelihood of completing tasks successfully was highest with Seeing AI and Envision Glasses.
“We expected that participants with eye diseases characterized by central vision loss would benefit most from an AAII [assistive artificial intelligence implementation],” the investigators wrote. “However, diagnosis, along with sex and age, did not predict the performance at baseline or outcomes in our current study. Although we found no statistical relationship between clinical and demographic variables and outcomes, perceived task difficulty assessed with the PAI [Participatory Activity Inventory] at baseline significantly predicted the objective performance of the same tasks at baseline.”
They also found that AI tools reduced the time required for various tasks, particularly those involving text recognition. Seeing AI and Google Lookout were the fastest in reading printed text, while Envision Glasses were more effective in recognizing barcodes.
Participants reported high satisfaction with the AI-assisted technologies, rating Seeing AI highest for usability.
The researchers highlighted discrepancies between AI accuracy and actual task success. Specifically, users often relied on contextual knowledge to compensate for errors in AI-generated outputs, though “participants were only successful to the extent that the AAII reported the information required to complete the tasks,” the investigators explained. Other limitations included possible selection bias.
The researchers opened their article with a discussion on assistive technologies that manipulate the appearance of text on screens to assist with reading tasks, and they concluded by suggesting that AI-based technologies could be a viable alternative for magnification, especially because most people with visual impairment already own a smartphone. “Our past study on head-mounted displays found that an average of 51% of participants with acquired vision loss and visual acuity better than 20/800 who could not complete the tasks at baseline could complete reading tasks using magnification,” they wrote. “In our current study, 73% of the participants within this visual acuity range who could not complete the tasks at baseline could complete the text tasks when using AI.”
The benefit of an AAII for nontext tasks depended on the specific device: “90% of those unable at baseline gain[ed] the ability to complete the searching and identifying tasks when using Seeing AI, 71% when using Envision, but only 37% when using OrCam, and 27% when using Lookout,” the researchers concluded.
A full list of author disclosures can be found in the published research.