More Than Meets the Eye – Accessibility and AI at WWDC and Google I/O

July marks Disability Pride Month, a time to celebrate the richness of disabled identity, and to reflect on how far we have come and how far we still need to go.
Though its origins lie in the United States, first marked in 1990 to commemorate the signing of the Americans with Disabilities Act (ADA), its message has taken root globally. More countries each year are recognising July as a moment for visibility, dignity, and pride. At its heart, Disability Pride is about challenging the idea that disability is something to hide, overcome, or fix. It is a celebration of identity, diversity, and human difference.
As technology increasingly shapes every aspect of daily life, the ways in which disabled people are included in, or excluded from, these conversations matter deeply. Against this backdrop, two of the world’s biggest tech companies, Apple and Google, unveiled their latest innovations at their annual developer conferences: WWDC and Google I/O.
While the headline features span far beyond disability, both companies have continued to build on their accessibility work, reflecting a recognition that inclusive design is central to innovation, not separate from it.

WWDC 2025
Apple kicked off WWDC with its most sweeping software update in years. For the first time, it aligned system version numbers with the calendar year, so iOS, iPadOS, macOS, watchOS, tvOS, and visionOS all jumped to version 26. The leap from iOS 18 straight to iOS 26 reflects this shift to year-based numbering rather than a series of skipped releases. Apple described the move as a way to simplify versioning, reduce confusion across platforms, and reflect a more unified, ecosystem-wide approach to innovation.
Liquid Glass: A Unified New Look Across Apple Platforms

One of the most innovative and visually striking changes was the introduction of Liquid Glass, Apple’s new cross-platform design language. It will roll out across iOS 26, iPadOS 26, macOS Tahoe, watchOS 26, tvOS 26, and visionOS, bringing a consistent, layered visual aesthetic to the entire Apple ecosystem.
The design features subtle translucency, depth effects, and dynamic lighting to give interfaces a sense of dimensionality and responsiveness. Panels and navigation elements now appear as if they are crafted from softly frosted glass, floating, refracting, and shifting with the user’s movement and input. It is the most substantial visual update since the flat design introduced with iOS 7.
Apple describes Liquid Glass as more than just a visual refresh. It is intended to create a more immersive and cohesive experience across devices, from iPhones to Vision Pro.
Reactions so far have been mixed. Some users and designers are excited by the fresh aesthetic, while others, particularly within the accessibility community, have voiced concerns. The first beta version prompted considerable feedback, particularly around readability in areas like Control Center, where high transparency made text and icons difficult to distinguish. In response, Apple adjusted blur levels, increased contrast, and added background frosting in later developer betas.
These changes suggest that Apple is aiming to strike a balance between aesthetic ambition and day-to-day usability. The visual richness of Liquid Glass reflects a broader move toward interface expressiveness, but the company’s willingness to respond to accessibility concerns during testing reinforces its ongoing commitment to inclusive design.
iPadOS 26: A More Capable, Flexible iPad

iPadOS 26 brings some of the most meaningful changes we’ve seen to the iPad in years. While it shares the new Liquid Glass visual language with the rest of the Apple ecosystem, this update is as much about function as it is about form.
The most significant change is the introduction of a more flexible windowing system, giving users the ability to resize, move, and layer app windows. It’s a shift that brings the iPad closer to desktop-style multitasking, with more control over how apps behave on screen. Apple has also introduced a slide-down menu bar for easier access to app controls, alongside an improved Exposé-style overview of open windows. For users who rely on the iPad as a primary device, especially with a keyboard or trackpad, these updates will likely be a welcome refinement.
There are also updates aimed at productivity and creative work. The Files app now supports a new list view and docked folders, making it easier to organise and navigate documents. A new Preview app brings annotation tools and file inspection capabilities, and Apple has added background audio support and local capture for more flexible content creation. Together, these updates broaden what the iPad can do without changing what it is.
iPadOS 26 also integrates the same Apple Intelligence features coming to iPhone and Mac, including Live Translation, Genmoji, and smart Shortcuts. These tools have potential value across many use cases, from creative tasks to communication support.
Finally, many of the accessibility updates introduced in iOS 26 carry over here too, such as Accessibility Nutrition Labels, Braille support, and Accessibility Reader, reinforcing Apple’s ongoing focus on inclusive design.
Altogether, iPadOS 26 moves the platform forward in practical ways. It doesn’t reinvent the iPad, but it makes it more capable, more adaptable, and better suited to a wider range of users.
Accessibility Highlights from WWDC 2025
Accessibility has long been a core part of Apple’s design philosophy, and this year’s WWDC brought a number of meaningful updates across iOS, iPadOS, macOS, watchOS, and visionOS. While not headline announcements, these features reflect steady progress in expanding options for disabled users and supporting more diverse ways of interacting with technology.
Accessibility Nutrition Labels

Apple introduced a new labelling system in the App Store called Accessibility Nutrition Labels, designed to help users quickly see which accessibility features an app supports, such as VoiceOver, Dynamic Type, Captions, or Switch Control. Much like hashtags, these labels act as quick signposts, helping users filter and discover apps that align with their access needs. The system adds a layer of transparency to app listings and encourages developers to be more deliberate about inclusive design. It’s a relatively small addition, but one that could make a meaningful difference in how disabled users navigate and evaluate the App Store.
Accessibility Reader

Apple introduced a new system-wide Accessibility Reader, designed to simplify on-screen content. It offers adjustable fonts, spacing, colour themes, and the option to have text read aloud. It’s particularly helpful for users with low vision, dyslexia, or cognitive fatigue, and builds on existing tools like Speak Screen and Safari Reader.
Magnifier for Mac

Mac users now have access to a standalone Magnifier app, providing on-screen zooming with custom filters, contrast settings, and image enhancements. It works with external cameras and integrates with other macOS accessibility tools.
Braille Access

Support for braille displays has been extended, with more robust options for navigation, input, and note-taking across Apple devices. The update also includes support for Nemeth code, used in mathematical notation.
Assistive Access Integration
Apple is expanding Assistive Access, the simplified interface for cognitive accessibility, with new developer tools. Apps can now integrate with Assistive Access directly, allowing for more tailored layouts, reduced complexity, and larger touch targets.
Voice Control Enhancements
Voice Control has seen some incremental improvements, including better integration with Xcode and multi-device syncing of custom vocabulary. While helpful, feedback suggests there is still work to do around support for atypical speech patterns and fatigue management during extended use.
Live Captions on Apple Watch

Live Captions are now available during calls routed through an iPhone or AirPods, with remote control via the Apple Watch. This adds more flexibility for Deaf and hard-of-hearing users in everyday conversations.
Other Updates

There were also updates to Eye and Head Tracking, new audio modes for clearer sound in noisy environments, and features like Vehicle Motion Cues to support users who experience motion sickness. A limited but notable mention was made of Brain-Computer Interface (BCI) support through Switch Control. While BCIs have been around for some time, particularly in the research and development space, Apple’s inclusion of BCI support is significant and points to future possibilities.
These updates may not be front-page announcements, but they reflect a broader commitment to embedding accessibility across the platform, not just in how devices work, but in how developers are supported to design inclusively. Tools like Accessibility Reader and Assistive Access integration show an understanding that access needs are diverse and often layered. While there’s still room to grow, particularly around speech and cognitive flexibility, WWDC 2025 showed that accessibility remains part of the conversation, not an afterthought.
Apple Intelligence: Quietly Present, Practically Useful

AI was always going to be part of the conversation at WWDC 2025. With so much of the industry focused on artificial intelligence, many expected Apple to make a bold, headline-grabbing move. Instead, the company took a more measured approach, weaving AI into the fabric of the operating system rather than putting it centre stage.
Apple Intelligence appears across iOS, iPadOS, macOS and visionOS, with features like Live Translation in Messages and FaceTime, smarter Shortcuts, and new tools like Genmoji and Image Playground. These additions are largely practical, designed to support everyday use rather than reinvent it. Translation, image generation, summarising long messages, and context-aware replies are all helpful, but they’re not presented as revolutionary.
One area with clear potential is automation. Shortcuts now allow more complex actions to be triggered and adapted intelligently, whether that’s summarising notes, adjusting phrasing in a message, or suggesting a follow-up task. For users who experience fatigue or cognitive load, this kind of contextual support could offer meaningful benefit, though it will depend on how well these features perform in day-to-day use.
Apple has also opened up its foundation models to developers via a new framework, making it easier to build AI-powered features directly into apps, on-device, and with user privacy in mind. This is consistent with Apple’s broader approach: avoid overpromising, focus on trust and usability.
It’s still early days for Apple Intelligence, and not all features will be available at launch. But the direction is clear. Rather than positioning AI as the star of the show, Apple is embedding it where it’s useful, quietly expanding what devices can do, while keeping the user in control.
More Than It Seems
WWDC 2025 may not have been the most attention-grabbing event Apple has hosted in recent years. There were fewer big reveals or headline-grabbing product announcements, and for some, it might have felt like a quieter year. But on closer reflection, there’s more going on beneath the surface.
From the shift to year-based OS versioning, to the introduction of Liquid Glass, to continued investment in accessibility and the quiet rollout of Apple Intelligence, this year’s announcements feel less about immediate impact and more about laying the groundwork. It’s a year that seems to be about consolidation, alignment, and setting the stage for what’s coming next.
With that in mind, it’s interesting to compare how Google approached its own developer conference just a few weeks earlier. If Apple took a more understated path, Google I/O leaned more heavily into AI, offering a different perspective on how technology might evolve in the months ahead.
Google I/O 2025: AI Takes Centre Stage, Accessibility in View

Google I/O 2025 carried one clear message: AI is now at the heart of Google’s ecosystem. Unlike Apple’s quieter roll-out, Google pulled out all the stops, with an event packed full of AI-centred announcements and tools. From search and development to wearables and XR, artificial intelligence featured across nearly every corner of the platform, and accessibility was very much part of that conversation.
Many of the announcements may not have immediate day-to-day impact, but they reveal where Google is heading – toward a platform shaped by context-aware, generative, and increasingly multimodal AI. And while not everything was framed explicitly around accessibility, several tools have clear relevance for disabled users and inclusive design.
Gemini Everywhere: AI Across the Google Ecosystem

At the centre of it all was Gemini 2.5, Google’s latest AI model, now integrated across Android, Chrome, Search, Workspace, and beyond. Designed to handle complex, multi-input queries, whether text, voice, images, or video, Gemini is intended to be more adaptable, responsive, and practical for everyday use.
This year’s announcements weren’t just about putting AI into apps; they were about reimagining how AI can support creativity, communication, and access across the entire Google ecosystem.
Creative AI: Imagen 4, Veo 3, Flow and Lyria RealTime

Some of the most talked-about tools were centred on content creation:
- Imagen 4 sharpens image generation, improving how text is handled, enhancing detail, and allowing more control over layout and style.
- Veo 3 steps into AI-generated video, capable of producing short clips with synchronised audio, including dialogue, music, and ambient effects, all from a prompt.
- These models come together in Flow, a new web-based video studio. Users can create scenes, adjust dialogue, tweak camera angles, and guide edits using plain language. It’s pitched at creators, but the lower technical threshold opens the door for more people to express themselves, including those who may find traditional editing software inaccessible.
- Lyria RealTime, Google’s interactive music model, complements this suite. Available through Gemini’s API (a tool developers use to plug AI into their apps) and AI Studio, it allows live responsive music composition. Users can shift style, tempo, mood, or instruments on the fly. It’s the kind of tool that could support not just musicians, but educators, therapists, and disabled creators alike.
Together, these tools mark a shift towards more flexible, multimodal creative expression and a future where storytelling is less about what software you know, and more about what you want to say.
Hands-Free Help: Gemini Live, AI Search, and Android XR

Gemini also powers some of the most practical updates for day-to-day use:
- Gemini Live is a real-time conversational assistant built into Android phones and Wear OS. It can see through the camera, listen, and respond, offering translation, object recognition, or guidance without needing to type or tap. For users with low vision, cognitive fatigue, or physical access needs, this kind of hands-free support could be especially powerful.
- AI Mode in Google Search reframes how information is delivered. Instead of static results, users get conversational summaries, follow-up options, and support for image-based queries. This could significantly reduce cognitive load and improve navigation, especially when used with screen readers or other assistive tools.
- Perhaps most compelling was the on-stage demo of Android XR, Google’s extended reality platform. Worn as glasses, the system used Gemini to identify people and objects, translate signs, and deliver real-time prompts via audio. The demo, involving live translation and environmental description, hinted at how XR might become a kind of assistive tech: ambient, responsive, and quietly helpful in the background. For people with vision loss, sensory sensitivity, or mobility restrictions, the implications are substantial.
Project Astra and AI Ultra

Looking further ahead, Google previewed Project Astra, an AI agent designed to proactively interpret the world. In demos, Astra responded to what it saw and heard, offering help without being asked. While still early, it reflects a vision of AI that’s always-on, context-aware, and designed to quietly assist in the background.
Alongside that, Google introduced a new premium AI Ultra subscription tier. For £234.99/month, users get access to tools like Flow, Veo 3, early Gemini agents, and priority support. As AI tools become more central to how we work, create, and communicate, these kinds of tiers will raise important questions about who gets access and who’s left behind.
Implications and Impact
Google I/O 2025 wasn’t just about showing off what’s possible with AI; it was about laying the foundation for how these systems will be used. Imagen, Flow, Veo, Lyria: these tools suggest a future where expression becomes more fluid and multimodal. Gemini Live and Android XR offer hints of a hands-free, more contextual approach to assistance, one that could prove deeply valuable for many disabled users.
Accessibility wasn’t always the headline, but it was there, baked into demos, embedded in product decisions, and increasingly part of the conversation. As always, the real test will be how these tools perform in the hands of the people who use them – will they feel intuitive, helpful, and empowering, or will they raise new barriers? Either way, it’s clear that accessibility is no longer something added after the fact; it’s part of where technology is heading.
The Path Ahead
Taken together, WWDC and Google I/O 2025 show just how central accessibility, design, and AI are becoming to the future of technology. Not everything launched this year was bold or showy, but beneath the surface were some significant shifts: more inclusive defaults, quieter forms of support, and new creative possibilities that weren’t imaginable even a few years ago. The challenge now is to ensure these tools evolve in ways that support everyone – not just the average user, but those whose needs are often left at the edge of innovation.
As always, I’m keen to hear how you’re using mobile technology, AI, and anything else that’s helping (or hindering) your digital experience. If there’s a topic you’d like to see covered in a future newsletter, or if you have a question or need technical support, please don’t hesitate to get in touch.
Martin Pistorius
Karten Network Technology Advisor
Article meta data
Clicking on any of the links in this section will take you to other articles that have been tagged in the same category.
- Featured in the Karten Summer 2025 Newsletter
- This article is listed in the following subject areas: Technology, Update from Technology Advisor
