Microsoft Seeing AI 
Redesigning the Seeing AI app for the low vision community 
Challenge
Seeing AI is an intelligent app developed by Microsoft for iOS. It uses the device camera to identify people and objects, then audibly describes them for people with visual impairment. While the app provides many valuable functions, low-vision users still face many frustrations when navigating it.
My Role
User Research, Wireframing, Prototyping
Duration
Five weeks, Independent Project
Tools
Figma, Miro, Lucidchart, Adobe CC
_________________________________________________________________________________________
Problem Statement
How might we improve the overall experience and accessibility of the Seeing AI app for users with low vision?
_________________________________________________________________________________________
01 The Solution
New features, such as more descriptive error messages, can help low-vision users recover from the errors they encounter while scanning. Accessible gestures and guided VoiceOver can make navigating Seeing AI easier.
02 The Research
Why This Topic Matters
Low vision is vision impairment that makes everyday activities difficult and cannot be corrected with glasses, contact lenses, or other standard treatments such as medicine or surgery. As of 2012, 4.2 million Americans aged 40 and older had uncorrectable vision impairment; this number is predicted to more than double to 8.96 million by 2050, driven by rising rates of diabetes and other chronic diseases and the rapidly aging U.S. population. Individuals with low vision may have difficulty reading, driving, recognizing faces, telling colors apart, and seeing television or computer screens.
Understanding The Demographic
Why Seeing AI?
Research shows that most products on the market each address only one aspect of the challenges people with low vision face. Users must constantly switch between products for different scenarios, which makes the experience time-consuming.
Seeing AI is one of the few assistive technologies explicitly designed for the low-vision and blind community, and it tackles multiple user needs within a single platform. The product shows excellent initiative and has much potential to improve.
Microsoft Seeing AI press release in 2017
Competitor Analysis - LEMErS Framework
The most commonly used products for people with low vision are magnifiers. These traditional assistive devices come in different forms and sizes; however, they are stationary and limiting. A more accessible alternative is smartphone and tablet apps that serve similar purposes. Some of these apps were not initially designed for people with low vision, but this demographic later found their functionality helpful. I assessed the magnifier and Seeing AI using the LEMErS framework below.
Conversations with Users
It is interesting to note that Seeing AI scored relatively low on errors and learnability. To find out why, I reached out to low-vision groups on Facebook and asked whether anyone who had been using Seeing AI would be willing to share their experiences. In these conversations, my goal was to uncover users' thought processes while using the app: specifically, which aspects of the app did they feel worked well, and which needed the most improvement? I also observed how users interacted with the app.
400+
Reviews analyzed from the App Store
5
User interviews, with 1 participant taking on a co-design role
Highlights from the user interviews: 
Formulating insights
The primary research identified two major challenges that participants encounter recurrently:
⚠️ Hard to Recover From Errors

Without any guidance, it is challenging for current users to recover from errors that occur while scanning short text, documents, and products.
🤯 Buttons Close to Each Other

It is hard for users with low vision to navigate between features by relying solely on buttons.
_______________________________________
Hypothesis
If we add more descriptive error messages, such as "too dark," "too close," "hold steady," and "move more to the right," then users with low vision will understand why the app can't recognize their documents and objects and can make adjustments based on the guidance.
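A minimal sketch of how such guidance might be surfaced in an iOS app: the camera pipeline classifies why a scan failed, and the matching message is spoken through VoiceOver. The `ScanCondition` cases and message strings are illustrative assumptions, not from the actual Seeing AI codebase; only `UIAccessibility.post` is a real API.

```swift
import UIKit

// Hypothetical failure states a scanning pipeline might detect.
enum ScanCondition {
    case tooDark
    case tooClose
    case cameraShaking
    case subjectOffFrame(direction: String)
}

// Map each condition to a short, actionable message.
func guidanceMessage(for condition: ScanCondition) -> String {
    switch condition {
    case .tooDark:       return "Too dark. Try turning on more light."
    case .tooClose:      return "Too close. Move the phone farther away."
    case .cameraShaking: return "Hold steady."
    case .subjectOffFrame(let direction):
        return "Move more to the \(direction)."
    }
}

// Speak the guidance so users hear why the scan failed and how to fix it.
func announce(_ condition: ScanCondition) {
    UIAccessibility.post(notification: .announcement,
                         argument: guidanceMessage(for: condition))
}
```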
If we enable accessible gestures, such as shaking the phone to cancel scanning and swiping left or right to move between features, then we will ease the frustration users feel when relying only on buttons.
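The hypothesized gestures could be wired up in UIKit roughly as below. The class, property, and helper names are hypothetical; `motionEnded(_:with:)` and `UISwipeGestureRecognizer` are standard APIs.

```swift
import UIKit

class ScanViewController: UIViewController {
    var isScanning = true

    override func viewDidLoad() {
        super.viewDidLoad()
        // Swiping left or right moves between channels instead of
        // requiring users to locate small on-screen buttons.
        for direction: UISwipeGestureRecognizer.Direction in [.left, .right] {
            let swipe = UISwipeGestureRecognizer(target: self,
                                                 action: #selector(handleSwipe(_:)))
            swipe.direction = direction
            view.addGestureRecognizer(swipe)
        }
    }

    // Shaking the device cancels an in-progress scan.
    override func motionEnded(_ motion: UIEvent.EventSubtype, with event: UIEvent?) {
        guard motion == .motionShake, isScanning else { return }
        isScanning = false
        UIAccessibility.post(notification: .announcement, argument: "Scan canceled.")
    }

    @objc func handleSwipe(_ gesture: UISwipeGestureRecognizer) {
        switchChannel(forward: gesture.direction == .left)
    }

    func switchChannel(forward: Bool) {
        // Hypothetical helper: navigate to the next or previous channel.
    }
}
```

One design caveat: when VoiceOver is running, it intercepts single-finger swipes for its own navigation, so in practice the app would need to expose this behavior through VoiceOver-compatible gestures (such as three-finger swipes or a custom rotor) rather than plain swipe recognizers.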
_______________________________________
03 Bonus 
What If...
I took this opportunity to assess whether there was any potential to improve the current design of the Seeing AI app. Current users generally had no outstanding concerns regarding the architecture of the existing app and said the user flow for each feature was straightforward. However, the interface design could use some help. I then dove into the existing experience below:
Above is the current app interface. Note the small, confusing camera icon at the middle left and the pause button: all buttons are transparent and close to one another, making them challenging for users with low vision to see.
04 Prototyping 
Wireframes
I brainstormed and sketched various wireframe options based on the secondary and field research, then digitized my sketches to show them to users and gather their feedback.
User Feedback
Test users thought that option 2 was clean and that the feature onboarding was helpful, especially for first-time users. They mentioned that the old experience required multiple clicks to find the demonstration video for each feature.
One user suggested, "As a returning user, even though there are many channels in this app, how we interact with each channel is relatively the same. It feels a little redundant to click on a video each time I land on a new channel, and I would prefer to view them at once and have the option to pause or even skip some once I get the logistics." I iterated on the wireframes after this preliminary user feedback.
High-Fidelity Prototype
Feature Onboarding 
First-time users get familiar with different channels in Seeing AI through a series of quick onboarding videos.
More Accessible with Guided VoiceOver
I referenced the WCAG guidelines and Apple's Human Interface Guidelines to make the UI more accessible for users with low vision. Users are guided by VoiceOver throughout the Seeing AI app instead of relying solely on text.
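In practice, guided VoiceOver means every control carries an explicit spoken label and hint, as Apple's accessibility guidance recommends. A small sketch, assuming hypothetical capture and pause buttons; the label and hint strings are illustrative:

```swift
import UIKit

// Give camera controls explicit VoiceOver labels and hints so users
// hear what each button is and what it will do before activating it.
func configureAccessibility(captureButton: UIButton, pauseButton: UIButton) {
    captureButton.isAccessibilityElement = true
    captureButton.accessibilityLabel = "Take picture"
    captureButton.accessibilityHint = "Scans the document in front of the camera."
    captureButton.accessibilityTraits = .button

    pauseButton.isAccessibilityElement = true
    pauseButton.accessibilityLabel = "Pause reading"
    pauseButton.accessibilityHint = "Pauses the spoken description."
    pauseButton.accessibilityTraits = .button
}
```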
What's Next 
I would think more about ways to validate the hypothesis and measure success, including:
User feedback: What are users' behaviors and opinions?
A/B testing: Does the descriptive-error-messages feature effectively help users complete scanning tasks? Do the accessible gestures and VoiceOver guidance make navigation easier?
Reflections - Co-designing with Users with Low Vision
When I conducted field research, participants brought up many exciting ideas for improving existing features or adding new ones they wished the app included. It occurred to me that users were unintentionally co-designing with me. As experts on the low-vision experience, these users bring different perspectives that can inform design and innovation whose ultimate goal is to best serve their own community. For future projects, I would like to formally invite users to co-design with me throughout the ideation and prototyping process and to explore different co-design methods.
Illustration credits: Marina Podrez, Irina Strelnikova
Thanks for visiting! Let's chat 💖