Opportunities for AI in Accessibility

Published on March 23, 2026

In reading Joe Dolson’s recent piece on the intersection of AI and accessibility, I absolutely appreciated the skepticism that he has for AI in general as well as for the ways that many have been using it. In fact, I’m very skeptical of AI myself, despite my role at Microsoft as an accessibility innovation strategist who helps run the AI for Accessibility grant program. As with any tool, AI can be used in very constructive, inclusive, and accessible ways; and it can also be used in destructive, exclusive, and harmful ones. And there are a ton of uses somewhere in the mediocre middle as well.

I’d like you to consider this a “yes… and” piece to complement Joe’s post. I’m not trying to refute any of what he’s saying but rather provide some visibility to projects and opportunities where AI can make meaningful differences for people with disabilities. To be clear, I’m not saying that there aren’t real risks or pressing issues with AI that need to be addressed—there are, and we’ve needed to address them, like, yesterday—but I want to take a little time to talk about what’s possible in hopes that we’ll get there one day.

Computer vision models generating alternative text is a prominent concern in Joe’s piece. He discusses the significant shortcomings of current image analysis, which remains largely inadequate, particularly for complex or context-dependent images. AI systems often perform poorly because they treat images in isolation, without understanding the contexts in which they appear. This is exacerbated by the separation of foundation models for text and image analysis, which neglects the intrinsic relationship between the two. Still, there is real potential for improvement in this area.

Human-in-the-loop authoring of alt text should absolutely be a common practice. If AI can offer a starting point for alt text generation, even if that starting point prompts users to reconsider and refine what the AI has suggested, it could signify a positive step forward. Furthermore, if models are trained specifically to analyze image usage in context, they could help us identify which images may need descriptions versus which ones are purely decorative. This will reinforce awareness of contexts that require descriptions and improve efficiency for content creators in making their pages more accessible.

Complex images, such as graphs and charts, pose a particular challenge: it’s hard to describe them succinctly. However, recent advancements in AI, such as those shown in the GPT-4 announcement, point to emerging opportunities. Consider a scenario where a chart is underwhelmingly described only by its title and type, such as: “Pie chart comparing smartphone usage to feature phone usage among US households making under $30,000 a year.” While this is an inadequate description, what if users could interrogate the data represented in that pie chart? Imagine being able to ask questions like, “Do more people use smartphones or feature phones?” or “How many more?”

Setting aside the reality of large language model (LLM) hallucinations, this type of interaction would be tremendously beneficial for blind and low-vision users, as well as for people with various cognitive disabilities. It could also help sighted users in educational settings grasp the underlying data more effectively. Going further, what if users could ask their browser to simplify complex charts, or to alter their colors to accommodate different types of color blindness?
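To make the idea concrete, here is a minimal sketch of the Q&A scenario above, assuming the chart ships with (or a model has extracted) its underlying data. The chart structure, field names, and numbers are all illustrative, not from any real product or dataset:

```python
# Hypothetical: a chart accompanied by structured data, so even simple
# logic (no LLM required) can answer the questions posed in the article.
chart = {
    "title": "Smartphone vs. feature phone usage, US households under $30,000/yr",
    "type": "pie",
    "data": {"smartphones": 71, "feature phones": 29},  # illustrative numbers
}

def which_is_larger(chart):
    """Answers: 'Do more people use smartphones or feature phones?'"""
    return max(chart["data"], key=chart["data"].get)

def difference(chart):
    """Answers: 'How many more?' (in percentage points here)."""
    values = sorted(chart["data"].values())
    return values[-1] - values[-2]
```

A real system would route a natural-language question to helpers like these; the point is that once the data is structured, the answers become trivially checkable rather than hallucinated.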

Another intriguing possibility involves purpose-built models that could convert visual data formats into accessible formats. For instance, transforming a pie chart into a structured spreadsheet could dramatically improve the accessibility and usability of that information.
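As a sketch of that pie-chart-to-spreadsheet idea, the snippet below renders extracted chart data as CSV using only the standard library. The input dict stands in for whatever a purpose-built model might extract from the image; the labels and values are assumptions for illustration:

```python
import csv
import io

def pie_chart_to_csv(slices):
    """Render {label: value} pie-chart data as spreadsheet-friendly CSV text."""
    buffer = io.StringIO()
    writer = csv.writer(buffer)
    writer.writerow(["Category", "Share (%)"])  # header row
    for label, value in slices.items():
        writer.writerow([label, value])
    return buffer.getvalue()

# Illustrative data, as if extracted from the pie chart described earlier.
csv_text = pie_chart_to_csv({"Smartphones": 71, "Feature phones": 29})
```

The resulting CSV opens in any spreadsheet application, where screen reader users can navigate the data cell by cell instead of relying on a one-line description.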

The impact of algorithms, particularly those with broad reach, cannot be overlooked. Safiya Umoja Noble highlights in her book “Algorithms of Oppression” how search engines can exacerbate bias and discrimination. But while many models amplify conflict and intolerance, algorithms designed with inclusivity from the start have genuine potential to support and empower people with disabilities.

Take Mentra, for example: a job-matching network for neurodivergent individuals that uses an algorithm to connect job seekers with potential employers based on over 75 data points. This approach considers both candidates’ strengths and required accommodations, actively working to reduce the emotional labor job seekers typically face.

When people with disabilities are involved in the development of algorithms, the likelihood of these systems inflicting harm on their communities diminishes. This highlights the essential nature of diversity in tech development. Imagine a recommendation engine in a social media context that promotes connections to diverse voices rather than echo chambers, encouraging a more nuanced understanding of complex subjects.

Beyond these specific examples, AI presents numerous possibilities to assist people with disabilities. Voice preservation technology, such as that demonstrated in the VALL-E paper or available in products from Apple and Microsoft, can be life-changing for people at risk of losing the ability to speak.

Additionally, ongoing research through initiatives like the Speech Accessibility Project aims to improve voice recognition for individuals with atypical speech patterns. This initiative is helping create more inclusive datasets, enhancing the usability of voice recognition tools for everyone.

Similarly, text transformation using AI could enable people with cognitive disabilities to access information more easily through summarization or simplification of complex texts.

To realize these benefits, it’s crucial to acknowledge and understand the value of diverse perspectives in tech development. Our distinct lived experiences, shaped by our identities, are vital contributions to the systems we build. Inclusive datasets yield more robust models and more equitable outcomes.

The journey to harnessing AI for accessibility is fraught with both promise and peril. While there are valid concerns regarding the impact of AI on marginalized communities, we must embrace the potential for positive transformation. Intentional, thoughtful approaches to AI development can help us move closer to a future where technology serves as a tool for inclusion and empowerment for all.

The acknowledgment that AI can harbor risks is essential as we proceed. However, there is also a path forward where, with a focus on accessibility and inclusivity, we can effect meaningful change.

In closing