I had the privilege of attending the Royal Society’s launch of its Disability Technology report last month, a thought-provoking and, at times, deeply personal event that brought together policymakers, technologists, researchers, and disabled people to reflect on where we are and where we need to go.
The evening opened with powerful remarks from Professor Alison Noble, who reminded us that disability is a universal human experience, something every person will encounter in some form over their lifetime. That perspective was carried through the evening: disability not as an exception, but as part of what it means to be human.
Before the formal programme, there was time to explore a range of fascinating exhibits, from the Google Accessibility Discovery Centre, EyeHarp, and Waymap, to Auracast, Blind Ambition, and an immersive installation by artist Christopher Samuel. There was also space to connect with others working across disability and technology, and to pause and reflect on the scale and depth of the challenges ahead.
This was followed by the formal launch of the report, beginning with a short film and a presentation outlining its key findings. A powerful panel discussion rounded out the evening, offering diverse perspectives on the current state of disability technology and where we need to go next.
At the heart of it all was the report itself: Disability Technology, the result of two years of research, including interviews with 800 disabled people, 2,000 members of the public, and insights drawn from the UK, US, India, and Kenya. If you missed the event, I encourage you to watch the recording, or explore the full report. For those who find listening easier, an audio version is also available.
The report makes a clear and compelling case that disability technology isn't, nor should it be, an afterthought. It is a core part of digital inclusion, economic participation, and innovation. It calls for better data, more inclusive design, and greater investment in assistive technology.
One recommendation that especially stood out to me was the call to recognise smartphones as assistive technology, on par with hearing aids or white canes. As someone who uses mobile tech every day to communicate, access information, and navigate the world, I was pleased to see this acknowledged. I'd love to see this extended to tablets as well. For many people, especially those using devices like the iPad with alternative input methods or larger screens, tablets are no less vital.
Another theme that resonated was the role of policy. One panellist observed that digital exclusion is not just a failure of design, it is a failure of policy too. Designing for inclusion is not enough if the systems around that design don’t support access, funding, or awareness.
Professor Annalu Waller closed the evening with words that lingered long after the panel ended: “We need to inculcate in every person the understanding that disability is not abnormal, but part of being human. Everyone, at some stage, will be disabled. So we need to give them a voice and not write them off.”
There is still a long road ahead, but this felt like a significant and hopeful moment, not just in highlighting challenges, but in pushing the conversation toward action. The Royal Society and others have stressed that we must now reframe how we value assistive technology, recognising its role in everyday life, not just as specialist tools, and ensuring equitable access through inclusive research and policy. The report also emphasises that disabled people must be meaningfully involved from the outset of any design process. The challenge now is to ensure that momentum carries forward, that disability technology is not just discussed, but prioritised, invested in, and embedded across the digital future.
Martin Pistorius, Karten Network Technology Advisor
More Than Meets the Eye – Accessibility and AI at WWDC and Google I/O
July marks Disability Pride Month, a time to celebrate the richness of disabled identity, and to reflect on how far we have come and how far we still need to go.
Though its origins lie in the United States, first marked in 1990 to commemorate the signing of the Americans with Disabilities Act (ADA), its message has taken root globally. More countries each year are recognising July as a moment for visibility, dignity, and pride. At its heart, Disability Pride is about challenging the idea that disability is something to hide, overcome or fix. It is a celebration of identity, diversity, and human difference.
As technology increasingly shapes every aspect of daily life, the ways in which disabled people are included in, or excluded from, these conversations matter deeply. Against this backdrop, two of the world’s biggest tech companies, Apple and Google, unveiled their latest innovations at their annual developer conferences: WWDC and Google I/O.
While the headline features span far beyond disability, both companies have continued to build on their accessibility work, reflecting a recognition that inclusive design is central to innovation, not separate from it.
WWDC 2025
Apple kicked off WWDC with its most sweeping software update in years. For the first time, it aligned system version numbers with the calendar year, so iOS, iPadOS, macOS, watchOS, tvOS, and visionOS all jumped to version 26 (the previous version of iOS, for example, was iOS 18). The leap reflects a shift to year-based numbering rather than a series of skipped releases. Apple described the move as a way to simplify versioning, reduce confusion across platforms, and reflect a more unified, ecosystem-wide approach to innovation.
Liquid Glass: A Unified New Look Across Apple Platforms
One of the most innovative and visually striking changes was the introduction of Liquid Glass, Apple's new cross-platform design language. It will roll out across iOS 26, iPadOS 26, macOS Tahoe, watchOS 26, tvOS 26, and visionOS 26, bringing a consistent, layered visual aesthetic to the entire Apple ecosystem.
The design features subtle translucency, depth effects, and dynamic lighting to give interfaces a sense of dimensionality and responsiveness. Panels and navigation elements now appear as if they are crafted from softly frosted glass, floating, refracting, and shifting with the user’s movement and input. It is the most substantial visual update since the flat design introduced with iOS 7.
Apple describes Liquid Glass as more than just a visual refresh. It is intended to create a more immersive and cohesive experience across devices, from iPhones to Vision Pro.
Reactions so far have been mixed. Some users and designers are excited by the fresh aesthetic, while others, particularly within the accessibility community, have voiced concerns. The first beta version prompted considerable feedback, particularly around readability in areas like Control Center, where high transparency made text and icons difficult to distinguish. In response, Apple adjusted blur levels, increased contrast, and added background frosting in later developer betas.
These changes suggest that Apple is aiming to strike a balance between aesthetic ambition and day-to-day usability. The visual richness of Liquid Glass reflects a broader move toward interface expressiveness, but the company’s willingness to respond to accessibility concerns during testing reinforces its ongoing commitment to inclusive design.
iPadOS 26: A More Capable, Flexible iPad
iPadOS 26 brings some of the most meaningful changes we’ve seen to the iPad in years. While it shares the new Liquid Glass visual language with the rest of the Apple ecosystem, this update is as much about function as it is about form.
The most significant change is the introduction of a more flexible windowing system, giving users the ability to resize, move, and layer app windows. It’s a shift that brings the iPad closer to desktop-style multitasking, with more control over how apps behave on screen. Apple has also introduced a slide-down menu bar for easier access to app controls, alongside an improved Exposé-style overview of open windows. For users who rely on the iPad as a primary device, especially with a keyboard or trackpad, these updates will likely be a welcome refinement.
There are also updates aimed at productivity and creative work. The Files app now supports a new list view and docked folders, making it easier to organise and navigate documents. A new Preview app brings annotation tools and file inspection capabilities, and Apple has added background audio support and local capture for more flexible content creation. Together, these updates broaden what the iPad can do without changing what it is.
iPadOS 26 also integrates the same Apple Intelligence features coming to iPhone and Mac, including Live Translation, Genmoji, and smart Shortcuts. These tools have potential value across many use cases, from creative tasks to communication support.
Finally, many of the accessibility updates introduced in iOS 26 carry over here too, such as Accessibility Nutrition Labels, Braille support, and Accessibility Reader, reinforcing Apple’s ongoing focus on inclusive design.
Altogether, iPadOS 26 moves the platform forward in practical ways. It doesn’t reinvent the iPad, but it makes it more capable, more adaptable, and better suited to a wider range of users.
Accessibility Highlights from WWDC 2025
Accessibility has long been a core part of Apple’s design philosophy, and this year’s WWDC brought a number of meaningful updates across iOS, iPadOS, macOS, watchOS, and visionOS. While not headline announcements, these features reflect steady progress in expanding options for disabled users and supporting more diverse ways of interacting with technology.
Accessibility Nutrition Labels
Apple introduced a new labelling system in the App Store called Accessibility Nutrition Labels, designed to help users quickly see which accessibility features an app supports, such as VoiceOver, Dynamic Text, Captions, or Switch Control. Much like hashtags, these labels act as quick signposts, helping users filter and discover apps that align with their access needs. The system adds a layer of transparency to app listings and encourages developers to be more deliberate about inclusive design. It’s a relatively small addition, but one that could make a meaningful difference in how disabled users navigate and evaluate the App Store.
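For developers, the labels themselves are declared in App Store Connect rather than in code, but the support they describe has to exist in the app itself. As a purely illustrative sketch (a hypothetical SwiftUI control, not taken from any real app), this is roughly what basic VoiceOver and Dynamic Type support looks like:

```swift
import SwiftUI

// Hypothetical example: a playback control written so that VoiceOver and
// Dynamic Type (two of the features the new labels can list) are supported.
struct PlayButton: View {
    @State private var isPlaying = false

    var body: some View {
        Button {
            isPlaying.toggle()
        } label: {
            Image(systemName: isPlaying ? "pause.fill" : "play.fill")
        }
        .accessibilityLabel(isPlaying ? "Pause" : "Play")        // spoken by VoiceOver
        .accessibilityHint("Plays or pauses the current track")  // extra context when focused
        .font(.title)                                            // scales with Dynamic Type
    }
}
```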
Accessibility Reader
Apple introduced a new system-wide Accessibility Reader, designed to simplify on-screen content. It offers adjustable fonts, spacing, colour themes, and the option to have text read aloud. It’s particularly helpful for users with low vision, dyslexia, or cognitive fatigue, and builds on existing tools like Speak Screen and Safari Reader.
Magnifier for Mac
Mac users now have access to a standalone Magnifier app, providing on-screen zooming with custom filters, contrast settings, and image enhancements. It works with external cameras and integrates with other macOS accessibility tools.
Braille Access
Support for braille displays has been extended, with more robust options for navigation, input, and note-taking across Apple devices. The update also includes support for Nemeth code, used in mathematical notation.
Assistive Access Integration
Apple is expanding Assistive Access, the simplified interface for cognitive accessibility, with new developer tools. Apps can now integrate with Assistive Access directly, allowing for more tailored layouts, reduced complexity, and larger touch targets.
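Apple demonstrated this with a dedicated SwiftUI scene that presents a simplified version of the app when the device is running Assistive Access. The sketch below follows that pattern, but the exact scene name and setup are assumptions based on the WWDC sessions rather than final documentation:

```swift
import SwiftUI

@main
struct JukeboxApp: App {
    var body: some Scene {
        // The app's normal interface.
        WindowGroup {
            FullContentView()
        }
        // A simplified interface offered when the device runs Assistive Access:
        // fewer choices, reduced complexity, larger touch targets.
        // Scene name as shown at WWDC 2025; treat the exact API as an assumption.
        AssistiveAccess {
            SimpleContentView()
        }
    }
}

struct FullContentView: View {
    var body: some View { Text("Full app UI") }
}

struct SimpleContentView: View {
    var body: some View {
        Button("Play music") { /* one clear action */ }
            .font(.largeTitle)
    }
}
```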
Voice Control Enhancements
Voice Control has seen some incremental improvements, including better integration with Xcode and multi-device syncing of custom vocabulary. While helpful, feedback suggests there is still work to do around support for atypical speech patterns and fatigue management during extended use.
Live Captions on Apple Watch
Live Captions are now available during calls routed through an iPhone or AirPods, with remote control via the Apple Watch. This adds more flexibility for Deaf and hard-of-hearing users in everyday conversations.
Other Updates
There were also updates to Eye and Head Tracking, new audio modes for clearer sound in noisy environments, and features like Vehicle Motion Cues to support users who experience motion sickness. A limited but notable mention was made of BCI (Brain-Computer Interface) support through Switch Control. While BCIs have been around for some time, particularly in the research and development space, Apple's inclusion of BCI support is significant and points to future possibilities.
These updates may not be front-page announcements, but they reflect a broader commitment to embedding accessibility across the platform, not just in how devices work, but in how developers are supported to design inclusively. Tools like Accessibility Reader and Assistive Access integration show an understanding that access needs are diverse and often layered. While there’s still room to grow, particularly around speech and cognitive flexibility, WWDC 2025 showed that accessibility remains part of the conversation, not an afterthought.
Apple Intelligence: Quietly Present, Practically Useful
AI was always going to be part of the conversation at WWDC 2025. With so much of the industry focused on artificial intelligence, many expected Apple to make a bold, headline-grabbing move. Instead, the company took a more measured approach, weaving AI into the fabric of the operating system rather than putting it centre stage.
Apple Intelligence appears across iOS, iPadOS, macOS and visionOS, with features like Live Translation in Messages and FaceTime, smarter Shortcuts, and new tools like Genmoji and Image Playground. These additions are largely practical, designed to support everyday use rather than reinvent it. Translation, image generation, summarising long messages, and context-aware replies are all helpful, but they're not presented as revolutionary.
One area with clear potential is automation. Shortcuts now allow more complex actions to be triggered and adapted intelligently, whether that's summarising notes, adjusting phrasing in a message, or suggesting a follow-up task. For users who experience fatigue or cognitive load, this kind of contextual support could offer meaningful benefit, though it will depend on how well these features perform in day-to-day use.
Apple has also opened up its foundation models to developers via a new framework, making it easier to build AI-powered features directly into apps, on-device, and with user privacy in mind. This is consistent with Apple’s broader approach: avoid overpromising, focus on trust and usability.
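To give a sense of what that looks like in practice, here is a short sketch of calling the on-device model from an app. The framework and type names follow Apple's WWDC 2025 presentation, but signatures and availability may change before release, so treat this as illustrative rather than definitive:

```swift
import FoundationModels

// A hedged sketch of Apple's on-device foundation model: the prompt and the
// generated text stay on the device, and availability depends on Apple
// Intelligence support on the user's hardware.
func summariseNote(_ note: String) async throws -> String {
    let session = LanguageModelSession(
        instructions: "Summarise text into three short, plain-language bullet points."
    )
    let response = try await session.respond(to: note)
    return response.content
}
```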
It’s still early days for Apple Intelligence, and not all features will be available at launch. But the direction is clear. Rather than positioning AI as the star of the show, Apple is embedding it where it’s useful, quietly expanding what devices can do, while keeping the user in control.
More Than It Seems
WWDC 2025 may not have been the most attention-grabbing event Apple has hosted in recent years. There were fewer big reveals or headline-grabbing product announcements, and for some, it might have felt like a quieter year. But on closer reflection, there’s more going on beneath the surface.
From the shift to year-based OS versioning, to the introduction of Liquid Glass, to continued investment in accessibility and the quiet rollout of Apple Intelligence, this year's announcements feel less about immediate impact and more about laying the groundwork. It's a year that seems to be about consolidation, alignment, and setting the stage for what's coming next.
With that in mind, it’s interesting to compare how Google approached its own developer conference just a few weeks earlier. If Apple took a more understated path, Google I/O leaned more heavily into AI, offering a different perspective on how technology might evolve in the months ahead.
Google I/O 2025: AI Takes Centre Stage, Accessibility in View
Google I/O 2025 carried one clear message: AI is now at the heart of Google's ecosystem. Unlike Apple's quieter roll-out, Google pulled out all the stops, with an event packed full of AI-centred announcements and tools. From search and development to wearables and XR, artificial intelligence featured across nearly every corner of the platform, and accessibility was very much part of that conversation.
Many of the announcements may not have immediate day-to-day impact, but they reveal where Google is heading – toward a platform shaped by context-aware, generative, and increasingly multimodal AI. And while not everything was framed explicitly around accessibility, several tools have clear relevance for disabled users and inclusive design.
Gemini Everywhere: AI Across the Google Ecosystem
At the centre of it all was Gemini 2.5, Google's latest AI model, now integrated across Android, Chrome, Search, Workspace, and beyond. Designed to handle complex, multi-input queries, whether text, voice, images, or video, Gemini is intended to be more adaptable, responsive, and practical for everyday use.
This year's announcements weren't just about putting AI into apps; they were about reimagining how AI can support creativity, communication, and access across the entire Google ecosystem.
Creative AI: Imagen 4, Veo 3, Flow and Lyria RealTime
Some of the most talked-about tools were centred on content creation:
Imagen 4 sharpens image generation, improving how text is handled, enhancing detail, and allowing more control over layout and style.
Veo 3 steps into AI-generated video, capable of producing short clips with synchronised audio, including dialogue, music, and ambient effects, all from a prompt.
These models come together in Flow, a new web-based video studio. Users can create scenes, adjust dialogue, tweak camera angles, and guide edits using plain language. It’s pitched at creators, but the lower technical threshold opens the door for more people to express themselves, including those who may find traditional editing software inaccessible.
Lyria RealTime, Google’s interactive music model, complements this suite. Available through Gemini’s API (a tool developers use to plug AI into their apps) and AI Studio, it allows live responsive music composition. Users can shift style, tempo, mood, or instruments on the fly. It’s the kind of tool that could support not just musicians, but educators, therapists, and disabled creators alike.
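Lyria RealTime itself works over a live streaming session, which I won't attempt to reproduce here, but to make the idea of "plugging AI into an app" concrete, here is a minimal, hedged sketch of sending a simple text prompt to the Gemini API. The endpoint shape follows Google's published REST interface; the model name is an assumption, and a real app would parse the JSON reply rather than return raw data:

```swift
import Foundation

// Hedged sketch: a plain REST call to the Gemini API's generateContent endpoint.
// The model identifier below is an assumption; substitute one your API key can use.
struct GeminiRequest: Codable {
    struct Part: Codable { let text: String }
    struct Content: Codable { let parts: [Part] }
    let contents: [Content]
}

func askGemini(prompt: String, apiKey: String) async throws -> Data {
    let url = URL(string:
        "https://generativelanguage.googleapis.com/v1beta/models/gemini-2.5-flash:generateContent?key=\(apiKey)")!
    var request = URLRequest(url: url)
    request.httpMethod = "POST"
    request.setValue("application/json", forHTTPHeaderField: "Content-Type")
    request.httpBody = try JSONEncoder().encode(
        GeminiRequest(contents: [.init(parts: [.init(text: prompt)])])
    )
    let (data, _) = try await URLSession.shared.data(for: request)
    return data   // JSON containing the model's generated candidates
}
```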
Together, these tools mark a shift towards more flexible, multimodal creative expression and a future where storytelling is less about what software you know, and more about what you want to say.
Hands-Free Help: Gemini Live, AI Search, and Android XR
Gemini also powers some of the most practical updates for day-to-day use:
Gemini Live is a real-time conversational assistant built into Android phones and Wear OS. It can see through the camera, listen, and respond, offering translation, object recognition, or guidance without needing to type or tap. For users with low vision, cognitive fatigue, or physical access needs, this kind of hands-free support could be especially powerful.
AI Mode in Google Search reframes how information is delivered. Instead of static results, users get conversational summaries, follow-up options, and support for image-based queries. This could significantly reduce cognitive load and improve navigation, especially when used with screen readers or other assistive tools.
Perhaps most compelling was the on-stage demo of Android XR, Google's extended reality platform. Worn as glasses, the system used Gemini to identify people and objects, translate signs, and deliver real-time prompts via audio. The demo, involving live translation and environmental description, hinted at how XR might become a kind of assistive tech: ambient, responsive, and quietly helpful in the background. For people with vision loss, sensory sensitivity, or mobility restrictions, the implications are substantial.
Project Astra and AI Ultra
Looking further ahead, Google previewed Project Astra, an AI agent designed to proactively interpret the world. In demos, Astra responded to what it saw and heard, offering help without being asked. While still early, it reflects a vision of AI that's always-on, context-aware, and designed to quietly assist in the background.
Alongside that, Google introduced a new premium AI Ultra subscription tier. For £234.99/month, users get access to tools like Flow, Veo 3, early Gemini agents, and priority support. As AI tools become more central to how we work, create, and communicate, these kinds of tiers will raise important questions about who gets access and who’s left behind.
Implications and Impact
Google I/O 2025 wasn't just about showing off what's possible with AI; it was about laying the foundation for how these systems will be used. Imagen, Flow, Veo, Lyria: these tools suggest a future where expression becomes more fluid and multimodal. Gemini Live and Android XR offer hints of a hands-free, more contextual approach to assistance, one that could prove deeply valuable for many disabled users.
Accessibility wasn't always the headline, but it was there, baked into demos, embedded in product decisions, and increasingly part of the conversation. As always, the real test will be how these tools work in the hands of the people who use them – will they feel intuitive, helpful, and empowering? Or will they raise new barriers? Either way, it's clear that accessibility is no longer something added after the fact; it's part of where technology is heading.
The Path Ahead
Taken together, WWDC and Google I/O 2025 show just how central accessibility, design, and AI are becoming to the future of technology. Not everything launched this year was bold or showy, but beneath the surface were some significant shifts: more inclusive defaults, quieter forms of support, and new creative possibilities that weren't imaginable even a few years ago. The challenge now is to ensure these tools evolve in ways that support everyone: not just the average user, but those whose needs are often left at the edge of innovation.
As always, I’m keen to hear how you’re using mobile technology, AI, and anything else that’s helping (or hindering) your digital experience. If there’s a topic you’d like to see covered in a future newsletter, or if you have a question or need technical support, please don’t hesitate to get in touch.
Martin Pistorius, Karten Network Technology Advisor
Advancing Accessibility: iOS 18 and Android 16’s New Features
Global Accessibility Awareness Day (GAAD) was established in 2012 to promote understanding and raise awareness of digital access and inclusion for the more than one billion people worldwide living with disabilities or impairments. It takes place annually on the third Thursday of May – 15th May this year. The day encourages conversation, learning, action, and celebration to help make digital products and services more inclusive.
Ahead of GAAD, both Apple and Google have continued to roll out updates focused on accessibility. Apple’s iOS 18, released in September 2024, introduced a suite of powerful features designed to support users with a wide range of needs, from mobility and speech to sensory processing. The recent 18.4 update, released on 31 March 2025, builds on that foundation with a handful of refinements and smaller but meaningful improvements. Meanwhile, Google’s upcoming Android 16 is expected to bring its own set of enhancements later this year.
New Accessibility Features in iOS 18.4
iOS 18.4 adds a small set of updates that continue Apple’s long-standing commitment to accessibility and inclusive design. While these updates may seem subtle at first, they contribute to more seamless and customisable experiences for users who rely on assistive technologies.
Braille Display Enhancements
This update introduces the ability to perform an indefinite double-tap and hold gesture using a braille display. This may be particularly useful for tasks like recording audio messages or navigating audio content. The gesture is performed by pressing Space with dots 3-6-7-8, and released by repeating the same combination. It may be a small update, but it gives users who rely on braille displays more fluid control and a richer, more intuitive interaction with audio-based features.
VoiceOver Improvements
Another small but thoughtful improvement in iOS 18.4 is to VoiceOver's verbosity settings. You now have more control over how and when VoiceOver announces the type of control currently in focus, such as a heading, link, or button. This can be especially helpful for tailoring the amount of spoken feedback to suit different preferences or contexts. The new setting is found under Settings > Accessibility > VoiceOver > Verbosity > Controls, where you can choose whether control types are spoken before the content, after it, or not at all.
Building on iOS 18’s Accessibility Foundation
While iOS 18.4 introduces several meaningful updates, it builds on a much broader foundation laid with the release of iOS 18 and its predecessors. That update, mentioned in a previous article, marked a significant step forward in how users with disabilities can interact with their devices, introducing tools that are more adaptive, intuitive, and responsive to individual needs. In particular, Eye Tracking, Vocal Shortcuts, Listen for Atypical Speech, and Music Haptics demonstrate Apple's continued focus on inclusive design across different types of impairment.
Eye Tracking
Eye Tracking enables users to control their iPhone or iPad using only their eyes. It uses the front-facing camera and on-device machine learning to allow for navigation, selection, and gesture-like actions. While this may not be as good as dedicated eye tracking systems, it offers an alternative means of interaction that’s integrated directly into the operating system, with no extra hardware required. The feature is available on devices with an A15 Bionic chip or later. This includes models such as the iPhone 12 and newer, as well as the iPad mini (6th generation), iPad Air (5th generation), and iPad Pro models released in 2021 or later.
Vocal Shortcuts
Vocal Shortcuts enable users to assign personalised, recorded sounds to trigger actions on their device—ideal for those with atypical speech patterns. It adds a layer of flexibility and independence for interacting with iOS using voice.
Listen for Atypical Speech
Another important addition in iOS 18 was enhanced support for recognising atypical speech. Similar to the work done through the Nuvoic project, this feature uses on-device machine learning to better understand speech patterns that fall outside typical voice profiles, enabling more reliable voice control for individuals with non-standard speech. While Apple’s approach is distinct, the inclusion of atypical speech recognition at the system level is a welcome move – reflecting a growing awareness of speech diversity within mainstream tech design.
Music Haptics
This new feature allows users who are deaf or hard of hearing to experience music through vibrations, using the iPhone’s Taptic Engine to reflect rhythm, tone, and the dynamic elements of a track. Rather than relying on audio output, Music Haptics translates the essence of a song into tactile feedback, offering a multisensory way to connect with music.
It works across a range of audio content, including streaming services, downloaded tracks, and third-party apps, and requires no additional hardware. With this feature, users can feel the beat drop, rhythm shift, or swell of a chorus directly through their device. It's particularly valuable not only for enjoyment, but also for enabling greater access to cultural and creative content for those who may not experience it through sound alone.
Music Haptics is currently available only on iPhones, specifically on iPhone 12 and newer models (excluding the third-generation iPhone SE). It’s not supported on iPad, Apple Watch, or Mac, as these devices don’t have the Taptic Engine technology required.
To enable the feature, users can go to Settings > Accessibility > Music Haptics, where it can be switched on with a simple toggle. Once activated, the iPhone will use subtle vibrations to mirror the structure and feel of the music, including tracks played through apps like Apple Music, Apple Music Classical, and Shazam.
While still in its early stages, Music Haptics is a promising step toward more inclusive, sensory-rich media experiences.
Accessibility in Android 16
While Apple continues to expand its accessibility features, Google is preparing to launch Android 16, which is expected to bring its own set of improvements later this year. Although the update hasn’t been released yet, developer previews and early announcements suggest that accessibility is also a key focus for the Android team.
One of the most anticipated additions is Auracast support for Bluetooth LE Audio hearing aids. This feature will allow users to stream audio from public spaces, such as announcements in transport hubs or presentations, directly to their compatible hearing aids, creating a more inclusive experience in shared environments.
In addition to Auracast, Android 16 is also expected to bring enhancements to TalkBack and Live Captions. TalkBack, Android’s built-in screen reader, is set to receive performance and customisation improvements aimed at providing smoother navigation and more responsive interaction. Live Captions, which automatically generate on-screen subtitles for spoken content, will likely see expanded language support and better integration across apps and media sources. Together, these updates reflect Google’s continued commitment to making Android more usable and adaptable for people with vision, hearing, and language processing differences.
Final Thoughts
As Global Accessibility Awareness Day approaches, it’s encouraging to see both Apple and Google making continuous strides toward more inclusive technology. From hands-free navigation to haptic music, and from enhanced screen readers to better support for atypical speech, these updates reflect a welcome shift in how mainstream platforms are considering the needs of all users.
Whether refining existing features or introducing entirely new ones, each change marks a step forward. While there's always more to be done, it's worth acknowledging the growing momentum to develop more accessible and inclusive systems – continuously seeking to answer the question of how our technologies can be made more inclusive, adaptable, and empowering for everyone.
As always, I am keen to hear how you are using mobile and other technology, or which accessibility features you find useful. If you would like to have a particular topic covered in the next newsletter, please let me know. Finally, please feel free to contact me if you have a question or need technical help and support.
The Evolution of AI-Powered Personal Assistants: Balancing Innovation, Privacy, and Social Interaction
Artificial Intelligence (AI) continues to make its way into many of our conversations. The recent launch of the Chinese AI model DeepSeek, and the subsequent loss of almost $600 billion from Nvidia's market value, illustrate the profound impact AI can have. Against this backdrop, I thought it would be fitting to pause for a moment to reflect on and explore the evolution of AI-powered assistants, their impact on privacy and human interaction, and how to balance these aspects as we look to the future.
AI-powered personal assistants have become an essential part of our everyday lives. Whether it’s Siri helping us set reminders, Alexa managing our home automation, or ChatGPT providing personalised conversational experiences, these tools are revolutionising how we interact with technology. What started as basic voice recognition systems has evolved into highly sophisticated digital assistants capable of understanding complex commands, predicting needs, and even engaging in human-like conversations.
The rapid innovation behind AI personal assistants has created exciting possibilities, especially in terms of accessibility and efficiency. However, this progress has raised important questions about privacy, data security, and the impact on human social interaction. As we continue to integrate AI into our lives, it becomes increasingly crucial to address these concerns while embracing the benefits of innovation.
The Evolution of AI-Powered Personal Assistants
Early AI Assistants
In the early days, AI assistants were rudimentary tools that performed simple, rule-based tasks. Early systems like the cute animated Clippy in Microsoft Office, or the basic voice recognition functions found in early mobile phones, were limited in capability and scope. These assistants could recognise basic commands and execute simple actions, but their interaction with users was typically very basic and not contextually aware.
The first breakthroughs in AI-assisted technology emerged from rule-based systems that relied on pre-defined logic. For example, if you asked your phone to “call mum,” it would respond with a preset action that was determined by simple keyword recognition. These systems were often static and could not learn or adapt to new information over time.
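To make that concrete, here is a tiny, purely illustrative sketch (not any real assistant's code) of rule-based matching: the command works only if it contains the exact pre-defined phrase, and anything else simply fails:

```swift
// Hypothetical sketch of early rule-based command handling: a fixed phrase
// either matches a pre-defined rule or the assistant gives up.
let contacts = ["mum": "+44 7700 900123"]

func handle(_ command: String) -> String {
    let lowered = command.lowercased()
    for (name, number) in contacts where lowered.contains("call \(name)") {
        return "Dialling \(name) on \(number)"
    }
    return "Sorry, I didn't understand that."   // no learning, no context, no adaptation
}

print(handle("Call Mum"))         // Dialling mum on +44 7700 900123
print(handle("Ring my mother"))   // Sorry, I didn't understand that.
```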
The Rise of Machine Learning & Natural Language Processing (NLP)
The next major leap came with the introduction of machine learning (ML) and natural language processing (NLP). These technologies allowed personal assistants to go beyond rigid, predefined responses and begin to understand and interpret complex human language in real-time. Siri, launched by Apple in 2011, marked a turning point in how we interacted with digital assistants. Instead of relying on simple keywords, Siri used NLP to understand context, sentence structure, and nuances in language.
With machine learning, assistants began to learn from user interactions, improving over time. As they were exposed to more data, they grew more adept at understanding a broader range of requests and offering more personalised responses. This marked the beginning of AI assistants becoming more autonomous and capable of handling multifaceted tasks. They could now perform actions like sending texts, providing weather updates, and even navigating traffic, all while adapting to users’ preferences.
Integration with IoT & Smart Devices
As the AI assistant ecosystem matured, it expanded beyond smartphones and became integrated into a wide variety of devices through the Internet of Things (IoT). With smart home systems like Amazon's Alexa and Google Home, AI assistants could now control everything from lighting and heating to security cameras and kitchen appliances. This integration significantly expanded the utility of these tools, turning them into central hubs for managing everyday life.
AI assistants now play an essential role in making our homes smarter. For example, you can ask Alexa to adjust your thermostat, turn off lights in a room, or even order groceries, all through voice commands. The ability to control various devices through a single assistant has transformed how we live, offering convenience and efficiency at an unprecedented scale.
Moreover, personal assistants are now available across multiple platforms, from smart speakers to smart TVs, cars, and even wearables. This shift has made AI assistants ubiquitous, with more people interacting with them on a daily basis.
Adaptation and Emotional Intelligence
As we move into a new era of AI technology, we are seeing assistants become even more adaptive and context-aware. For example, some assistants now have the ability to understand user emotions, detect sentiment in conversations, and adjust their responses accordingly. This evolving technology is often referred to as “affective computing” and is driving the development of AI that is more emotionally intelligent.
Beyond emotion recognition, AI assistants are learning to adapt to user behaviour and routines. If you typically ask your assistant for weather updates every morning at 7am, the assistant will begin to predict that need and provide the information proactively. These innovations are pushing AI personal assistants toward even more seamless and intuitive user experiences.
Innovation & Accessibility
One of the most exciting aspects of AI-powered assistants is their potential to transform accessibility. AI technology has the power to make everyday tasks more manageable for people with disabilities, providing greater autonomy and independence.
AI-powered personal assistants have played a critical role in creating assistive technology for people with disabilities. For example, voice-activated commands can help individuals with mobility impairments control various devices, reducing the need for physical interaction with the device. Similarly, AI-driven speech-to-text tools and screen readers have made it easier for individuals with visual impairments to navigate the internet and communicate effectively.
For people with cognitive disabilities, AI assistants are increasingly becoming companions that can help with memory, organisation, and communication. These assistants can remind users of important events, help them make decisions, and even serve as virtual companions, offering comfort and reducing loneliness.
The expansion of accessibility features in AI assistants has been a transformative development, opening new possibilities for those who might otherwise face barriers in navigating their environments. As these technologies continue to evolve, we can expect them to play an even greater role in fostering inclusivity.
Context-Awareness & Personalisation
AI assistants are becoming more personalised and aware of their users’ habits, preferences, and routines. This level of contextual awareness allows the assistants to provide tailored recommendations and anticipate needs. For example, a personal assistant can suggest your favourite playlist based on the time of day, or offer travel recommendations based on previous destinations.
This ability to learn from users and adapt to their behaviour is a cornerstone of the future of AI-powered assistants. By offering more personalised experiences, these technologies can become even more integral to everyday life, further enhancing convenience and efficiency.
Multimodal Interaction
Another significant trend is the shift toward multimodal interaction. While voice remains the primary mode of communication with AI assistants, many are now incorporating text-based interactions, gestures, and even visual displays. For instance, in smart TVs, users can speak to their assistants, but they can also interact via touchscreens or keyboards when necessary. Similarly, in smart cars, drivers can control their assistant via both voice and physical touchscreens.
The integration of multimodal interactions improves accessibility, making it easier for people with varying abilities to interact with technology in the way that is most comfortable for them.
Privacy & Ethical Challenges
While the benefits of AI-powered personal assistants are clear, they also raise significant concerns around privacy, data security, and ethical issues. AI assistants often require access to large amounts of personal data to provide personalised experiences, but this raises concerns about how that data is stored, processed, and used.
Data Collection & User Surveillance
At the heart of privacy concerns is the amount of personal data that AI assistants collect. Many of these systems constantly listen for activation commands, which means they often capture conversations and other personal information without the user’s explicit knowledge. While companies claim that AI assistants only activate after hearing a wake word (such as “Hey Siri” or “Alexa”), there have been cases where devices mistakenly recorded and transmitted private conversations.
Additionally, many AI assistants rely on cloud-based processing, meaning that user queries and interactions are sent to remote servers for analysis. This presents risks such as data breaches, unauthorised access, and potential misuse of sensitive information. Users may not always be fully aware of the extent to which their data is being stored, analysed, and shared.
On-Device Processing
One promising solution to privacy concerns is on-device processing, where AI assistants perform tasks and process data directly on the user's device instead of sending it to external servers. By keeping data local, on-device processing enhances privacy, reduces latency (the time it takes the assistant to respond), and limits exposure to potential cyber threats.
A prime example of this approach is Apple’s Neural Engine, which powers on-device AI features for Siri, Face ID, and predictive text. Unlike cloud-reliant AI assistants that transmit data to remote servers, Apple’s on-device Siri processes many common commands (such as setting reminders, launching apps, and adjusting settings) without needing an internet connection. This means that sensitive user data remains stored locally on the iPhone, iPad, or Mac, reducing the risk of unauthorised access.
Google has also made strides in this area with on-device AI models for Google Assistant, particularly in features like Gboard’s Smart Reply, which suggests responses based on locally stored data rather than sending keystrokes to the cloud. Similarly, some Android devices now support offline voice processing, allowing users to interact with Google Assistant even when not connected to the internet.
While on-device processing is still limited in its capabilities compared to cloud-based AI, it represents a major step toward balancing innovation with user privacy. As hardware improvements allow for more powerful AI computations on personal devices, we can expect a future where digital assistants become smarter while keeping more of our data private.
User Control & Transparency
To further address privacy concerns, tech companies must prioritise user control and transparency. Providing clear privacy policies and giving users the ability to manage and delete their data is essential for building trust. AI assistant developers should also offer granular privacy settings, enabling users to customise their preferences around data sharing.
For example, Apple allows users to review and delete their Siri interactions in their device settings, while Google provides an auto-delete option for Assistant activity, letting users erase their data after a set period. Similarly, Alexa users can manage their data through the Amazon Alexa app, where they can review voice recordings, delete individual interactions, or even set up automatic deletion for recordings after a set time. However, these features must be easy to understand and accessible to all users to truly empower individuals to make informed privacy decisions.
Regulatory Frameworks
As AI assistants become more integrated into our lives, governments around the world are working to establish regulatory frameworks to govern their use. In the EU, the General Data Protection Regulation (GDPR) provides strict rules on how companies can handle personal data, including data collected by AI assistants. Similarly, the UK’s Data Protection Act outlines how companies should manage personal information.
These regulations are an important step toward ensuring that personal data is handled ethically and securely. However, as AI technology continues to evolve, policymakers will need to stay ahead of emerging challenges and ensure that privacy protections remain strong.
Social Interaction & The Human-AI Relationship
Impact on Human Connection
As AI-powered personal assistants become more conversational and responsive, they are not only tools for productivity but also sources of companionship. Many users engage with AI assistants for more than just setting reminders or checking the weather – they interact with them in ways that resemble casual conversation. This is particularly significant for people who experience social isolation, including individuals with disabilities, the elderly, and those living alone.
For people with disabilities, AI assistants can offer a sense of connection and engagement when human interaction is limited. Whether through voice-based conversations, reminders to stay connected with friends and family, or just a simple greeting in the morning, these assistants can provide consistent social presence that some users find comforting.
AI as a Digital Companion
There have been numerous accounts of people talking to AI assistants as though they were friends. A study conducted by the Massachusetts Institute of Technology (MIT) found that users who frequently interact with AI assistants often develop a sense of trust and emotional connection with them. While AI is not truly sentient, its ability to listen, respond, and even offer encouragement can make it feel like a reliable presence in a user’s daily life.
Ethical Considerations & Mindful AI Usage
While AI companionship can offer valuable support, there are ethical concerns about over-reliance on digital assistants for emotional support. Some experts warn that as AI assistants become more human-like, they may unintentionally discourage users from seeking real-life interactions. For individuals who are already socially isolated, excessive reliance on AI could potentially deepen loneliness rather than alleviate it.
To address these concerns, AI developers are incorporating features that encourage human interaction. For example:
Some AI assistants now prompt users to call friends or family if they detect a pattern of prolonged loneliness-related queries.
AI chatbots designed for companionship, such as Replika, emphasise that they are not substitutes for real-life relationships but tools to help users practise conversations and emotional expression.
Socially assistive AI is being developed with the goal of fostering real-world social connections rather than replacing them.
The key to a positive human-AI relationship is mindful usage. AI assistants should be designed to support social interaction, not replace it. They can remind users to reach out to loved ones, offer entertainment and engagement, and provide a sense of presence, but human relationships should always remain central.
For individuals with disabilities, AI-powered assistants offer both practical and emotional benefits, bridging gaps in accessibility while providing meaningful interactions. However, as AI continues to evolve, the challenge will be ensuring that these tools are developed ethically, fostering connection rather than unintentional isolation. AI should enhance human relationships by making them more accessible and easier to maintain, not substitute them entirely.
The Future of AI Assistants: What’s Next?
Hyper-Personalisation vs. Privacy Trade-offs
As AI assistants continue to become more personalised, we face an important trade-off between customisation and privacy. The more data an assistant collects, the better it can serve individual needs, but this raises questions about how much personal information is appropriate for an assistant to access.
The future will likely see further advancements in AI personalisation, but it will also require careful consideration of user privacy and autonomy. The goal should be to find a balance that allows for highly personalised experiences while respecting the user’s right to privacy.
Advancements in Emotional Intelligence & Ethical AI
As AI becomes more emotionally intelligent, it will be capable of providing more nuanced responses that take into account the user’s mood and needs. This shift raises important ethical questions about how emotionally aware AI should be and how it should behave in sensitive situations.
To ensure that AI systems remain ethical, it is critical that developers prioritise transparency, fairness, and accountability in their design. Emotional AI should enhance human interaction, not manipulate or exploit vulnerabilities.
AI-Human Collaboration in Work & Daily Life
AI-powered assistants are no longer just tools for convenience; they are becoming essential collaborators in both professional and personal settings. As AI continues to evolve, it holds the potential to reshape work and daily life in even more profound ways, particularly for people with disabilities and neurodivergent individuals. By automating repetitive tasks, enhancing accessibility, and adapting to individual needs, AI is paving the way for a more inclusive future.
AI is already playing a crucial role in helping people with disabilities and neurodivergent individuals contribute more effectively in the workplace. Features like AI-powered speech-to-text, predictive text, and adaptive user interfaces enable individuals with motor impairments, visual impairments, and neurodivergent conditions such as autism or ADHD to engage more seamlessly with their work.
Today, AI-powered tools such as Microsoft’s Seeing AI assist visually impaired employees by describing text, objects, and people in real time, making workplace documents and presentations more accessible. Similarly, AI-driven transcription tools like Otter.ai and Google Live Transcribe provide real-time captions, ensuring that employees who are Deaf or hard of hearing can fully participate in meetings.
Looking ahead, next-generation AI assistants could go even further by offering real-time sign language translation, improved natural language processing (NLP) for non-standard speech patterns, such as those the Nuvoic Project focused on, and hyper-personalised AI coaching that adapts to an individual's unique work style. Imagine an AI assistant that learns how a neurodivergent employee processes information best and tailors their workflow accordingly, suggesting focus-friendly environments, structuring complex tasks into manageable steps, and even detecting when they might need a break to prevent burnout.
By reducing barriers and creating more inclusive digital workspaces, AI-powered assistants will allow individuals with diverse abilities to not just participate in the workforce, but thrive in it.
AI in Daily Life
Beyond the workplace, AI assistants are already making daily routines smoother, smarter, and more accessible. Voice-activated AI assistants enable greater independence for people with physical disabilities, allowing them to control smart home devices, navigate digital interfaces, and complete everyday tasks without requiring physical interaction.
For individuals with cognitive disabilities, AI-driven assistants are providing personalised daily reminders, adaptive learning support, and even companionship. Future AI assistants could become even more intuitive, using contextual awareness to predict needs before they arise, such as suggesting a break if it detects signs of cognitive overload or automatically adjusting a user’s environment for sensory comfort.
Looking to the Future: AI That Truly Understands and Empowers
The next evolution of AI-powered assistants is likely to move beyond simply responding to commands to being able to proactively assist, adapt, and even advocate for users.
We could potentially see:
More Human-Like Interactions: Future AI assistants will be able to engage in more natural, empathetic conversations, offering meaningful emotional support and social interaction. For individuals who experience isolation, whether due to disability, neurodivergence, or ageing, AI could serve as a trusted companion that not only listens but also encourages human connection.
AI as a Digital Advocate: Imagine an AI assistant that understands an individual’s accessibility needs and advocates for them in real-world interactions, such as automatically requesting accessible accommodations when booking travel or ensuring workplace software adapts to a user’s needs.
On-Device AI for Greater Privacy: AI will become more private and secure, shifting toward on-device processing where user data remains securely on personal devices rather than being stored in the cloud. This would be particularly beneficial for individuals who rely on AI for sensitive tasks, such as managing medical information or personal care routines.
Conclusion
AI-powered personal assistants have come a long way since their humble beginnings, offering both transformative potential and new challenges. As these technologies continue to evolve, it’s crucial to ensure they are used responsibly. The balance between innovation, privacy, and human connection will shape the future of AI-powered assistants. The focus must remain on inclusivity, ethics, and personalisation; ensuring these systems are built with accessibility at their core. By prioritising transparency, user control, and ethical design, we can create AI assistants that enhance our lives without compromising privacy or social well-being, ultimately reshaping the future of work and daily life not just for a select few, but for everyone.
As always, I am keen to hear about how you are using AI, mobile, and other technology. If you would like to have a particular topic covered in the next newsletter, please let me know. Finally, please feel free to contact me if you have a question or need technical help and support.
Never Miss a Word – A Guide to Teams Transcription and Accessibility
These days the use of Microsoft Teams has become quite common. It has evolved from a simple communication tool into a powerful platform for meetings, collaboration, teaching, and more.
Microsoft Teams has made significant strides in accessibility and inclusivity by introducing transcription and live captioning features. These features are particularly beneficial for individuals who are deaf or hard of hearing, those with language barriers, or anyone who simply prefers to follow along visually – helping to ensure that information shared during a Teams call isn't missed.
In this article, I will guide you through how to enable and use transcription and Live Captions in Microsoft Teams, including key technical details, accessibility features, and what to do if you forget to start transcription but have recorded the meeting.
What is Transcription in Microsoft Teams?
Transcription in Microsoft Teams allows meeting content, especially the audio, to be converted into text in real-time. This is beneficial for people with hearing impairments, those who prefer reading to listening, or those who simply want to refer to meeting details at a later time. Teams can transcribe spoken content and display it alongside the meeting video feed, allowing people to follow the conversation in both audio and text formats. The meeting transcription can also be downloaded or shared after the meeting.
How to Enable Transcription in Microsoft Teams
For Microsoft 365 Admins
Before users can take advantage of transcription in Microsoft Teams, there are several steps a Microsoft 365 administrator needs to follow to enable this feature.
Ensure Microsoft Teams is Up to Date: The transcription feature is available to users with an up-to-date version of Teams. Admins should ensure that all users are on the latest version of Microsoft Teams.
Verify Licensing: Transcription in Teams is included in Microsoft 365 business and enterprise plans (such as Business Standard, Business Premium, or Enterprise E3/E5). Admins need to verify that the organisation has the necessary licences to access this feature.
Enable Cloud Recording: Transcription works with cloud-based recording. Admins should ensure that cloud recording is enabled in the Teams Admin Center:
Under Recording & transcription, ensure that the "Allow transcription" option is turned on. Note: Recording & transcription is typically found within the "Global (Org-wide default)" policy. Tip: Provided you have sufficient permissions, this can also be enabled in the Teams app under Admin > Settings > Meetings, where "Allow transcription" can be toggled to On.
Set Up Permissions for Recording and Transcription: Ensure that the appropriate permissions are granted to users who need to record meetings. The “Allow Cloud Recording” setting must be enabled for users to record meetings and access transcription features.
Activate Live Captions and Transcription: In the Teams Admin Center, make sure the live captions and transcription setting is enabled globally or for specific user groups.
Go to Teams Admin Center > Meetings > Meeting Policies > Live Captions.
Set the default language for captions and transcriptions.
Ensure “Allow transcription” is toggled to On.
Compliance Considerations: If your organisation is subject to legal or privacy regulations or policies, please review and consider the compliance implications of transcription. Transcriptions are stored in the Microsoft 365 cloud, and sensitive information might be captured. Admins should communicate any relevant privacy policies to users.
Please note that the exact location and name of each setting may differ slightly depending on your Microsoft tenancy and the version of the interface being used.
Enable Microsoft Teams Transcription with PowerShell
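If you prefer to script the change, or want to apply it to several meeting policies at once, the same settings can be switched on with the Microsoft Teams PowerShell module. The snippet below is a minimal sketch, assuming the MicrosoftTeams module is installed, that you hold a Teams administrator role, and that you want to change the org-wide default (“Global”) policy – your policy names may differ.

# Install the module first with Install-Module MicrosoftTeams if it is not already present
# Sign in with an account that has Teams administration rights
Connect-MicrosoftTeams
# Allow transcription (and cloud recording) in the org-wide default meeting policy
Set-CsTeamsMeetingPolicy -Identity Global -AllowTranscription $true -AllowCloudRecording $true
# Confirm the change
Get-CsTeamsMeetingPolicy -Identity Global | Select-Object Identity, AllowTranscription, AllowCloudRecording

As with changes made in the Teams Admin Center, an updated meeting policy can take a little while to filter through to users.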
For Meeting Participants
Once the feature has been enabled by your Microsoft 365 administrator, people can easily start and use transcription during meetings. As with recording a meeting, it is good practice at the start of the meeting to let participants know that a transcript will be generated.
Starting Transcription in a Meeting
Schedule or Join a Meeting: You can either schedule a Teams meeting in advance or join an existing meeting.
Start Transcription:
Once in the meeting, click the three dots (More options) in the meeting control bar.
Select Start transcription. This will immediately begin transcribing the conversation in real-time. The transcription will appear in a side panel (for desktop and web clients) or as captions for mobile devices.
Note: You can transcribe the meeting without needing to record it. However, if transcriptions have been enabled by the admin and you start recording your meeting, transcriptions are typically automatically created too.
Review Transcription: After the meeting, the transcript will be available in the meeting chat or under the meeting details, and is typically accessible to all participants, depending on the settings. Users can download the transcript as either a Microsoft Word document (.docx) or a Web Video Text Tracks (.vtt) file, or review it directly in Teams (a scripted way of fetching the transcript is sketched after these steps). Note: as transcripts are automatically generated, they may not be 100% accurate, so you may wish to edit the document before sharing it.
Stopping Transcription: To stop the transcription, click the three dots (More options) again and select Stop transcription. The transcript is saved automatically once the meeting ends. You can also stop and restart the transcription if you do not want a part of the meeting included in the transcript, e.g. while discussing a sensitive topic.
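For completeness, once a meeting has ended the transcript can also be fetched programmatically. The sketch below uses the Microsoft Graph PowerShell SDK and is illustrative only: it assumes your tenancy exposes the online meeting transcript endpoints, that you know the meeting’s ID, and that the signed-in account has been granted the relevant permission (for example OnlineMeetingTranscript.Read.All). Most people will simply download the transcript from the meeting chat instead.

# Connect to Microsoft Graph (the scope name here is an assumption for illustration)
Connect-MgGraph -Scopes "OnlineMeetingTranscript.Read.All"
# List the transcripts for a given online meeting ($meetingId is a placeholder you would need to look up)
$meetingId = "<your-online-meeting-id>"
$transcripts = Invoke-MgGraphRequest -Method GET -Uri "https://graph.microsoft.com/v1.0/me/onlineMeetings/$meetingId/transcripts"
# Download the first transcript as a WebVTT file
$transcriptId = $transcripts.value[0].id
Invoke-MgGraphRequest -Method GET -Uri "https://graph.microsoft.com/v1.0/me/onlineMeetings/$meetingId/transcripts/$transcriptId/content?`$format=text/vtt" -OutputFilePath "meeting-transcript.vtt"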
What if You Forgot to Start Transcription, But Recorded the Meeting?
If you forgot to start transcription during a meeting but did record it, all is not lost! Microsoft Teams saves the recording automatically and, in some cases, you may be able to generate a transcript after the meeting.
For Cloud Recordings: When the meeting is recorded, the video and audio are stored in Microsoft Stream or OneDrive/SharePoint (depending on the organisation’s settings). Once the recording is processed, Teams may automatically generate a transcription of the meeting if the transcription feature was previously enabled.
Manually Start Post-Meeting Transcription: If transcription was not enabled during the meeting but the recording is available, the meeting organiser can start transcription manually after the meeting ends by accessing the recording in the meeting chat. From there, the organiser can turn on transcription if the organisation settings allow it.
Note: This feature may take a few minutes to process after the meeting ends, so users should be patient while the transcription is generated.
If, however, you find that the transcription option is not showing when accessing the recording of the meeting, contact your Microsoft 365 administrator, as they may be able to access the recording directly and generate a transcription for you.
Live Captions in Microsoft Teams
Live captions in Microsoft Teams are another accessibility feature designed to improve the meeting experience for people who are hard of hearing or in noisy environments. Similar to transcription, live captions display real-time text of spoken content as the participants speak. However, unlike transcripts, live captions are not saved and will disappear after the meeting.
How to Use Live Captions
Enable Live Captions in a Meeting:
During a meeting, click the three dots (More options) in the meeting control bar.
Select Turn on live captions. This will display captions for all spoken content in the meeting, including the speaker’s name and their dialogue.
Language Options: Currently, Microsoft Teams supports live captions in several languages. The meeting organiser can set the preferred language for captions in the Teams settings (see the Teams Admin section for this). Participants can also select a preferred language for captions during the meeting.
Editing Captions: In some cases, users may be able to edit captions for accuracy. However, this is typically done at the admin level, and users should follow the compliance guidelines in place for their organisation.
Viewing Captions on Different Devices: Live captions are supported across desktop, web, and mobile devices, allowing participants to view captions wherever they are.
Customising Captions: You can customise the appearance of the caption bar, including font size, colour, and background.
Accessibility Benefits of Transcription and Live Captions
Transcription and live captions in Microsoft Teams are essential tools for ensuring meetings are accessible to everyone, regardless of hearing ability or language proficiency. These features help:
Individuals with Hearing Impairments: Transcriptions and captions provide equal access to meeting content for people with hearing loss, allowing them to follow along with the discussion.
Non-Native Language Speakers: By enabling captions in multiple languages, Teams helps bridge language barriers during international meetings.
Meeting Recording and Reference: Transcriptions can be referenced later, making it easier for participants to recall key points or follow up on action items discussed during the meeting.
Conclusion
Transcription and live captions in Microsoft Teams are transformative features that improve accessibility, productivity, and collaboration for all users. With a few simple steps, both Microsoft 365 admins and individual users can unlock the power of these features to enhance the meeting experience. Whether you’re using it for note-taking, accessibility, or record-keeping, transcription and captions ensure that everyone has the opportunity to fully participate and benefit from the meeting, no matter their hearing abilities or language skills.
By leveraging these tools, organisations can create more inclusive and efficient virtual meeting environments, ensuring no one misses out on important discussions.
As always, I am keen to hear about how you are using mobile, and other technology, and AI too. If you would like to have a particular topic covered in the next newsletter, please let me know. Finally, please feel free to contact me if you have a question or need technical help and support.
In early summer, people from all over the world gathered to attend two of the major developer conferences – Apple’s Worldwide Developers Conference (WWDC) and Google’s Google I/O. These events serve as platforms to announce what new advances we can expect to see on our devices in the near future. Perhaps unsurprisingly, advances in, and integration of, artificial intelligence (AI) dominated both conferences. In this article I have highlighted some of the more interesting announcements.
WWDC 2024
Apple’s Worldwide Developers Conference (WWDC) 2024 showcased an impressive array of technological advancements, with a clear emphasis on artificial intelligence (AI). However, Apple’s commitment to creating technology that is not only cutting-edge but also inclusive and adaptive to the needs of all users continues.
Accessibility Innovations
Accessibility has long been a cornerstone of Apple’s design philosophy, and WWDC 2024 was no exception. This year, Apple introduced several groundbreaking features aimed at enhancing the user experience for individuals with disabilities. Below are some of these features. I have included some that appeared in press releases prior to WWDC.
Eye Tracking
This revolutionary feature empowers users with limited mobility by enabling complete device control through eye movements. The iPad or iPhone’s front camera tracks eye positions, allowing users to navigate the interface, interact with apps, and even type using their eyes. This is a significant leap forward in providing independent device access for individuals with physical disabilities. In keeping with Apple’s emphasis on privacy, all data used to set up and control this feature is kept securely on device and is not shared with Apple. How well this compares to dedicated eye tracking systems remains to be seen, but it certainly opens up another exciting way to interact with your device, assuming your device supports the feature.
Music Haptics
Designed to broaden the musical experience for those who are deaf or hard of hearing, Music Haptics leverages the iPhone’s Taptic Engine to translate music into a series of vibrations. These vibrations correspond to the music’s rhythm and intensity, creating a new way to feel the music and appreciate its nuances. This innovative approach opens up music enjoyment for a wider audience.
Vocal Shortcuts
Going beyond traditional touch or voice commands, Vocal Shortcuts cater to users who might find these challenging, e.g. those with atypical speech. This feature allows people to create custom sounds that trigger specific actions on their device. Imagine snapping your fingers to take a photo or uttering a sound of your choice to activate Voice Control. Vocal Shortcuts open the door to a hands-free, and potentially voice-free, interaction method, empowering users in unique ways.
Vehicle Motion Cues
Depiction of Apple’s Vehicle Motion Cues
Vehicle Motion Cues aim to counteract motion sickness while using your iPhone or iPad in the car. This feature utilizes the device’s sensors to detect motion and subtly adjusts display settings to combat nausea and dizziness. By reducing on-screen motion, Vehicle Motion Cues creates a more comfortable in-car experience for passengers prone to motion sickness, allowing them to enjoy games, movies, or reading without feeling unwell.
VisionOS Advancements
While specifics remain undisclosed, Apple indicated upcoming improvements to VisionOS, the operating system powering its Vision Pro headset. These enhancements aim to further empower users with visual impairments, with advancements anticipated in areas like screen narration, object recognition, and voice control. This will make the Vision Pro an even more valuable tool for daily living, allowing users with visual impairments to navigate their surroundings, access information, and perform everyday tasks with greater ease and independence.
Apple Vision Pro, now available in the UK, is reported by some to be one of the most accessible devices Apple has produced yet, and a testament to Apple’s commitment to accessible and inclusive design.
The Dawn of Apple Intelligence
Perhaps the most intriguing announcement was Apple Intelligence. While Apple has used its own AI in other forms (machine learning powered by Apple’s Neural Engine) for years, it has been slow to join the other major tech companies in the AI boom. However, legal issues may mean it could be a while before Apple Intelligence appears on supported devices in Europe.
Apple has also taken the approach of working with partners, in particular OpenAI, to bring AI to its systems. It has been reported that this approach could allow people to choose which AI (e.g. Google’s Gemini) they wish to use in future.
Irrespective of which LLM (large language model) Apple Intelligence is integrated with, it is an ambitious AI system designed to be more than just a digital helper. While specifics are still under development, Apple promises an AI experience that goes beyond basic tasks. Imagine an assistant that anticipates your needs, proactively suggests actions, and seamlessly connects tasks across your Apple devices. This personalised approach to AI has the potential to significantly alter how we interact with technology in our daily lives.
Unlike virtual assistants that respond to specific commands, Apple Intelligence aspires to be proactive and anticipate your needs. Imagine an AI that scans your emails for upcoming travel plans and proactively suggests creating a packing list or downloading a currency converter app. It might interact with your smart fridge, analysing your supplies and recommending items to add to your shopping list.
A major concern with AI assistants is privacy. In keeping with Apple’s drive to protect your privacy, Apple Intelligence addresses this by prioritising on-device processing. This means your data stays on your iPhone or iPad, with only anonymised or encrypted information sent to Apple’s secure servers for more complex tasks. This focus on privacy allows you to leverage the power of AI with peace of mind.
Apple Intelligence goes beyond simply understanding your words; it aims to grasp your world. By analysing your emails, photos, messages, and even browsing history, it can build a contextual understanding of your life. Imagine asking “What time is mum’s train arriving?” Apple Intelligence, having gleaned “Mum” from your contacts and the train details from your inbox, can provide the answer without you needing to specify where the information can be found. This contextual awareness could make interacting with your devices feel more natural and intuitive.
Apple Intelligence is not just about managing tasks; it aspires to be a creative partner. It boasts writing tools powered by AI that can help you rewrite sentences for clarity, summarise lengthy articles, or even generate different creative text formats such as poems or code. This could be an advantage for students, writers, or anyone who wants to explore different creative avenues.
While specifics are still under development, Apple Intelligence is slated for a developer beta later in 2024, with a full launch in 2025. This glimpse into the future of AI assistants suggests a more personalised and helpful way to interact with technology. Apple Intelligence has the potential to become an indispensable partner in our daily lives, streamlining tasks, understanding our needs, and even fostering creativity.
At WWDC 2024, Apple unveiled several AI-driven features designed to enhance the user experience across its ecosystem. They include:
Image Playground
Apple’s Image Playground is an AI-powered tool that lets you create playful images directly within Apple’s existing apps. Describe a concept, choose a theme, or reference people in your photos, and Image Playground generates unique illustrations, animations, or sketches. This user-friendly feature prioritises fun and personalisation, offering a range of artistic styles to match your creative vision. With Apple’s on-device processing for privacy, Image Playground empowers you to add a spark of AI-generated flair to your messages, notes, presentations, and more.
Genmoji
While not a standalone app, the Genmoji feature is expected to be included in the Messages app and possibly elsewhere. It will allow you to generate your own custom emoji by entering a descriptive prompt, for example, “a t-rex wearing a tutu on a surfboard”.
AI-Enhanced Photos and Videos
The Photos app will now include advanced AI capabilities that automatically enhance images and videos, making them clearer and more vibrant. This feature is particularly useful for users with visual impairments, as it adjusts the content to be more distinguishable and enjoyable.
Siri 2.0
The latest iteration of Apple’s voice assistant, Siri 2.0, leverages advanced AI to provide more contextually aware and conversational interactions. Siri can now understand and process more complex queries, offering more accurate and relevant responses. This upgrade makes Siri not only more useful but also more accessible to users with varying needs.
Other announcements
While there were many more improvements and innovations announced at WWDC, the last two I would like to mention are:
Calculator app for iPad
Apple’s Calculator for iPad in action, including Maths Notes
For years, there wasn’t a native Calculator app for iPad. It is reported that Steve Jobs was never satisfied with the calculator app for iPad, feeling it lacked something. However, Apple has finally announced that iPadOS 18 boasts a built-in Calculator app!
This addition is a game-changer for students, professionals, and anyone who needs to crunch numbers on the go. No more hunting for third-party apps or relying on web-based solutions. The built-in Calculator app puts essential calculations at your fingertips, seamlessly integrated into the iPadOS experience.
Apple is not simply porting a phone app to a larger screen. The Calculator app is designed to take advantage of the iPad’s spacious display. Expect a well-organized layout with clear buttons and ample space for calculations. This makes it easier to see what you are doing, reducing errors and improving overall usability.
While the core functionality focuses on addition, subtraction, multiplication, and division, the Calculator app offers additional features:
Scientific Mode: For those who need more advanced functions, a scientific mode could be included, providing access to trigonometry, logarithms, and other complex calculations.
Unit Conversion: Imagine easily converting between units of measurement like temperature, length, or currency right within the app. This eliminates the need for separate conversion tools, simplifying everyday tasks.
History Tape: Keep track of your calculations with a history tape feature. This allows you to review previous calculations, double-check your work, or pick up where you left off on a complex problem.
The built-in Calculator app might integrate with other iPadOS apps, allowing you to seamlessly copy and paste calculations between them. Imagine performing calculations in the Calculator app and then easily pasting the results into a spreadsheet or a notes document. This streamlines workflows and eliminates the need for manual data entry.
To complement the Calculator app, Apple announced the innovative Math Notes feature introduced in iPadOS 18. This built-in Calculator function goes beyond basic calculations. Simply write out your maths problems with your Apple Pencil on the iPad screen and watch as Math Notes recognises your handwriting and solves them in real time! No more clunky typing or struggling with equations. Math Notes can handle everything from basic arithmetic to complex functions. It even understands variables, allowing you to explore different scenarios within your equations. Plus, the ability to solve problems directly in your notes keeps your work organised and eliminates the need for separate scrap paper. The experience is further enhanced by the new Smart Script feature, which smooths and straightens your handwriting as you write, making it instantly neater and easier to read.
Standalone Passwords App
Managing passwords securely across a multitude of websites and apps can be a constant struggle. Apple addressed this with the introduction of a standalone Passwords app, a significant improvement on the previously buried functionality within Settings.
No more digging through menus! The Passwords app offers a centralized location to view, manage, and store all your login information. This includes website usernames and passwords, Wi-Fi network passwords, and potentially even passkeys, a new emerging secure login method.
The app categorizes your logins clearly, making it easy to find the specific credentials you need. Imagine separate sections for frequently accessed websites, social media accounts, and email logins, allowing for quick retrieval and organization.
Building on Apple’s existing security features like iCloud Keychain, the Passwords app is designed to keep your data safe. Features like strong password generation and automatic filling of login information across apps streamline the process while maintaining security.
The Passwords app integrates with other Apple products. You can expect features like:
Cross-device Syncing: Access your passwords from any Apple device, be it your iPhone, iPad, or Mac. Your login information stays up-to-date and readily available, no matter which device you’re using.
AutoFill on Browsers: The app integrates with Safari and other browsers, automatically filling in login information when you visit a website. This eliminates the need to remember complex passwords or manually type them in, saving you time and frustration.
Windows Compatibility: Even if you use a Windows PC alongside your Apple devices, you’re not left out. The Passwords app can be accessed through the iCloud for Windows app, ensuring you have your logins at your fingertips regardless of platform.
The Passwords app directly challenges third-party password managers like 1Password and LastPass. With its focus on simplicity, security, and integration within the Apple ecosystem, it has the potential to become a go-to solution for Apple users who want a secure and convenient way to manage their login credentials.
Google I/O
Similar to WWDC, Google’s annual developer conference, I/O, focused heavily on artificial intelligence and its integration into Google products. The announcements focused more on the evolution of Google’s AI than on brand-new developments. That said, there have been significant advances to Google’s AI, Gemini. In fact, Gemini seemed to dominate the conference.
Gemini 1.5 signifies Google’s continued commitment to pushing the boundaries of AI. Like other powerful AI systems, it is built on a large language model (LLM), meaning the model is fed massive amounts of text data so that it can understand and generate human language. The latest version has a context window of up to 2 million tokens. In simple terms, a “token” is a small unit of information the model works with (roughly a short word or part of a word), and the “context window” is the amount of data the LLM can consider when generating a response or completing a task. Imagine it like a window that the LLM uses to peek at the surrounding information to understand the current prompt or question – and a 2 million token window is large enough to take in many books’ worth of text in one go.
One of the key strengths of Gemini 1.5 is its ability to understand and process information within a much larger context. Compared to its predecessor, Gemini 1.0, it boasts a significantly longer context window, allowing it to grasp the nuances of information spread across vast amounts of text, code, audio, or video. Unlike many AI models focused solely on text, Gemini 1.5 is a true multimodal powerhouse. It can process and understand information presented in various formats, including images, audio, and video. This versatility allows it to tackle a wider range of tasks. For example, imagine describing a scene you want to create in a video; Gemini 1.5 could analyse your description and generate visuals based on your input.
Gemini 1.5 is not a single entity, but rather a family of models with varying capabilities. Google offers a mid-sized “Pro” version optimized for a wide range of tasks and a “Flash” version focused on speed and efficiency. This allows developers to choose the Gemini model best suited for their specific needs. The Gemini family also includes Gemini Nano. This lightweight version allows Gemini to be used in the Chrome browser and could significantly enhance web browsing experiences by offering advanced capabilities like real-time translation, content summarisation, and code generation. It also allows for Gemini to be included on mobile devices.
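For developers who would like to experiment, the Gemini models can also be called through a simple web API using a key from Google AI Studio. The snippet below is a rough sketch only – the endpoint, model name and request shape reflect the API at the time of writing and may well change, and you would need your own API key.

# Illustrative call to the Gemini API (model name and endpoint may change over time)
$apiKey = "<your-google-ai-studio-key>"
$body = @{ contents = @(@{ parts = @(@{ text = "Summarise the benefits of live captions in three bullet points." }) }) } | ConvertTo-Json -Depth 5
$uri = "https://generativelanguage.googleapis.com/v1beta/models/gemini-1.5-flash:generateContent?key=$apiKey"
$response = Invoke-RestMethod -Method Post -Uri $uri -ContentType "application/json" -Body $body
# The generated text sits inside the first candidate returned
$response.candidates[0].content.parts[0].text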
In fact, Gemini will be integrated throughout Google’s products such as Gmail and Docs.
Revamped Search Engine
The advances also mean a revamped search engine built using this AI, which could be a major game-changer in how people find information online. Google is also working on Gemini agents to complete tasks like meal or trip planning. You would be able to type a query like “Plan a meal for a family of four for three days”, and the AI will then provide you with recipes and links for the three days.
Ask Photos
Gemini is also making its way into Google Photos. While still in the experimental phase, the Ask Photos feature will allow users to search across their Google Photos collection using natural language queries that leverage the AI’s understanding of their photos’ content and other metadata. While it has already been possible to search for specific people, places, or things in photos, the AI upgrade will make finding the right content more intuitive and less of a manual search process.
Imagen 3
Imagen 3 is Google’s latest and most advanced text-to-image generation model. It builds upon its predecessors, offering a significant leap in image quality. It can generate incredibly realistic and detailed images that closely resemble photographs. Imagine describing a fantastical landscape with waterfalls cascading down mountains shrouded in mist, and Imagen 3 could generate an image that captures the scene with breathtaking detail.
Google would like this advanced AI model to be a tool that empowers everyone to unleash their creativity. By simply describing your concept in text, you can generate unique and visually captivating images. This opens possibilities for:
Storytelling and illustration: Bring your stories and ideas to life with stunning visuals. Generate illustrations for your blog post, create storyboards for your animation project, or visualize your next marketing campaign.
Design and Prototyping: Imagen 3 can be a valuable tool for designers and product developers. Quickly generate mock-ups and prototypes of your design ideas without needing to spend hours crafting them manually.
Education and Exploration: Imagine exploring historical events or scientific concepts through AI-generated visuals. Imagen 3 has the potential to revolutionize the learning experience by making abstract concepts more tangible and engaging.
Imagen 3 goes beyond just generating images based on simple text descriptions. Imagen 3 allows you to take an existing picture and add elements to it, change the background, or adjust the overall style. Imagine taking a vacation photo and adding a fantastical creature into the scene for a touch of whimsy.
Google has also emphasised safety and responsible design in Imagen 3, with the aim of keeping you in control of your creative process and protecting your data and privacy.
One of the more exciting announcements was Veo. Google DeepMind’s Veo is a groundbreaking text-to-video generation model. This innovative AI tool takes your textual descriptions and transforms them into dynamic and visually stunning videos.
While other AI models excel at generating realistic images, Veo goes a step further by creating videos complete with motion, lighting effects, and even camera movements. Describe a bustling city street at night, and Veo might generate a video displaying the neon lights, moving cars, and bustling crowds.
This technology holds immense potential. You could bring your stories and ideas to life with captivating animated sequences. Imagine creating storyboards for your animation project or generating explainer videos for your blog post.
While details about Veo’s public availability are limited, its development signifies a significant leap in AI-powered video creation. As this technology continues to evolve, we can expect even more sophisticated and user-friendly tools that will revolutionize the way we create video content.
Google’s Veo paves the way for a future where creating videos becomes more accessible and intuitive. With the power of AI-powered text-to-video generation, anyone with a creative vision will have the potential to bring their ideas to life on screen.
LearnLM
Google unveiled LearnLM, an interesting use of AI to support education that allows questions to be asked about a YouTube video, or a quiz to be created from it. While still in the experimental phase, it is already powering features across Google products, including YouTube, Google’s Gemini apps, Google Search and Google Classroom.
Project Astra
Finally, Google’s Project Astra is aimed at developing Google’s future vision for AI: one that combines multiple sensory inputs (sight, sound, voice, text) and has the potential to revolutionise human-computer interaction. It is well worth taking a moment to watch the videos showing Gemini Live. What is impressive about the video is not only the speed of processing but the fact that the system is able to capture, store and use the information to answer a question like, “Do you remember where you saw my glasses?”. This shows the huge potential of future digital assistants.
Whether from Google, Apple, Microsoft, Amazon or elsewhere it is clear that AI will continue to permeate our lives. As always, I would like to hear about how you are using mobile, and other technology, and AI too. If you would like to have a particular topic covered in the next newsletter, please let me know. Finally, please feel free to contact me if you have a question or need technical help and support.
The world has been all abuzz with talk of the increasing use of artificial intelligence (AI). The field of AI has been around for many years: the famous computer scientist and mathematician Alan Turing wrote about the “imitation game” in 1950. This later became known as the Turing test – a test to establish whether a machine could be so good at mimicking human responses that you can’t tell if you are interacting with a human or a computer.
AI has been quietly making its way into our lives. Apple’s Neural Engine, a form of AI, was first introduced in the A11 Bionic chip found in the iPhone 8 in 2017. Today AI is found in mobile and other devices we use without even thinking about it.
However, it has only been since the wider release of OpenAI’s ChatGPT in 2022 that AI has come to the foreground of our awareness. We have since seen a plethora of AIs emerge, some built on OpenAI’s models, others using their own.
Despite some fears and concerns, AIs can be very useful and fun to interact with. In this article I will focus on some generative AIs used to create images.
A generative AI, as the name suggests, uses its trained model to create something based on the data it is given – in the case of the AIs listed in this article, a piece of descriptive text (a prompt).
Midjourney
Midjourney quickly earned a reputation for producing rich, coherent, interesting and visually appealing images. Initially you were able to use Midjourney for free; however, the free trial option is currently suspended and you now require a subscription to use the service. Subscriptions start from $10 a month, or $8 a month for an annual subscription, which equates to being able to generate approximately 200 images a month.
Currently, you can only interact with Midjourney through Discord, making the interface a little tricky to use. You generate an image by typing the prompt /imagine followed by a description of whatever you would like to create. The AI will then generate four images, which you can choose to download, upscale, or re-edit.
Your generated images are public, so they can be viewed by anyone who is connected to Midjourney’s Discord server. People can also see which images you have created by looking at your profile. This, and the fact that you can access other Discord servers, is something to be mindful of, as there are eSafety concerns.
Midjourney are currently testing a web app which means the AI image generator will soon be easier to access and use.
OpenAI’s DALL·E 3
OpenAI’s DALL·E 3 is perhaps the biggest and most popular AI image generator. DALL·E 3 is a significant improvement on the popular previous version, DALL·E 2. It uses GPT-4’s understanding of language to expand your prompts and, as a result, produces more interesting, realistic, and consistent results.
The biggest advantage of using DALL·E 3 is that it is easy to use, particularly if you are familiar with ChatGPT. Currently DALL·E 3 is only available to ChatGPT Plus subscribers; a subscription starts from $20 a month. However, you can access DALL·E 3 for free if you have a Microsoft account and use Microsoft’s Image Creator. See the DALL·E 3 web page for more information.
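If you would rather generate images from a script than through the ChatGPT interface, DALL·E 3 can also be reached through OpenAI’s API, which is billed separately from a ChatGPT Plus subscription. A minimal sketch, assuming you have your own OpenAI API key:

# Illustrative request to OpenAI's image generation endpoint
$apiKey = "<your-openai-api-key>"
$body = @{ model = "dall-e-3"; prompt = "A watercolour painting of a lighthouse at sunset"; n = 1; size = "1024x1024" } | ConvertTo-Json
$response = Invoke-RestMethod -Method Post -Uri "https://api.openai.com/v1/images/generations" -Headers @{ Authorization = "Bearer $apiKey" } -ContentType "application/json" -Body $body
# The response contains a temporary URL pointing to the generated image
$response.data[0].url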
Microsoft’s Image Creator
Microsoft Designer is a feature-rich, AI-powered graphic design tool. Currently, Microsoft Designer is in preview (i.e. testing phase) and is free to use. Microsoft states that once Microsoft Designer is officially released it will remain free, but more advanced features will then require a Microsoft 365 subscription.
Within Microsoft Designer is Image Creator, which uses OpenAI’s DALL-E 3 to generate images. Image Creator is incredibly simple to use and produces images for each prompt you enter. Depending on the current server load, it can take some time for the images to be generated. These images can then be downloaded, edited, or opened in Microsoft Designer to create things using the image. See the Microsoft Image Creator page for more information.
DALL-E 3 is also available through Microsoft’s AI chatbot, Copilot. Simply ask Copilot to create an image for you. You will, however, need to be signed into a Microsoft account to do this.
DreamStudio
DreamStudio is a powerful image-generating AI and is the official Stable Diffusion web app. DreamStudio allows you to enter various parameters for the images you would like to create. While this gives you greater control, some people may find the additional options confusing. DreamStudio does, however, provide a good user guide. Currently, DreamStudio requires you to purchase credits in order to generate images. However, you are given 25 credits for free when you create an account, which is enough to get a good sense of what DreamStudio is like. See the DreamStudio website for more information.
Adobe Firefly
Adobe is a company that has quietly been working on AI for more than a decade. This is evident with their image generator AI, Firefly. Firefly can be accessed through a web browser. However, it is also making its way into Adobe’s products like Photoshop.
A lot of the current generative image AIs struggle with text generation (i.e. text within an image). Firefly, however, seems to cope really well with this, making it a useful tool for creating images that need to include text, e.g. a logo.
Similar to DreamStudio, the Firefly interface is packed with options that stem from Adobe’s image creation and editing heritage, making it a truly powerful tool.
Adobe Firefly can be accessed through a free individual account that includes 25 credits a month. If you require more, there are various payment plans available, including discounted rates for students and teachers. See the Adobe Firefly website for more information.
Google’s ImageFX
Google, a bit late to the AI image generator space, has produced an AI that is capable of generating high-quality, realistic images, including objects that are difficult to render, such as hands.
Google’s ImageFX interface is filled with features that make it easier to refine your prompts or generate new ones via dropdowns. ImageFX also provides style suggestions, for example photorealistic, 35mm film, minimal, sketch, handmade, and more. This combination of features makes ImageFX perfect for beginners who want to experiment.
Google’s ImageFX is free to use but does require a Google account. It can be accessed through a web browser at Google’s AI Test Kitchen. While you are there, be sure to check out MusicFX.
As always, I would like to hear about how you are using mobile, and other technology, and AI too. If you would like to have a particular topic covered in the next newsletter, please let me know. Finally, please feel free to contact me if you have a question or need tech help and support.
With the festive season upon us, I thought I would provide a sprinkling of stocking-filler tips to bring some cheer while using the technology you rely on regularly.
Microsoft Teams
Muting
Anyone who has used Microsoft Teams will be all too familiar with “You’re on mute”. What you may not know is that you can help the speaker out by unmuting them. You will, however, need either to be the meeting organiser or to have been assigned as a presenter. If you have either of these roles, you will be able to unmute a fellow participant by clicking on their name and selecting Unmute participant. Similarly, you can choose to mute them too.
Note: Microsoft Teams will by default mute anyone joining a meeting in progress when there are 5 or more participants. This is aimed at reducing distraction.
Rich-text Messages
There may be times when it would help to add formatting and structure to messages. To do this, simply click on the ‘A’ icon/button at the bottom left of the text entry window. This will expand the text window and provide options to format and structure the text before sending it.
Keyboard Shortcuts
Keyboard shortcuts can be a quick and easy way to use an application. They can also be used to set up assistive technology (e.g. The Grid) to perform various actions. Three useful ones in Teams are Ctrl+Shift+M to toggle your microphone mute, Ctrl+Shift+O to toggle your camera, and Ctrl+Full stop (Ctrl+.) to display the full list of available shortcuts.
Immersive Reader
Microsoft Immersive Reader enables you to adjust how text is displayed, removing distractions, as well as having the text spoken aloud. You can launch the Immersive Reader option by hovering over a message with your cursor and clicking the ellipsis. Note that Immersive Reader may be found under “More actions” the first time you use it.
Live Transcriptions
The live transcriptions feature, as the name suggests, generates a transcript of everything said during a meeting/call. This can make the meeting more accessible and help others who missed part of the meeting. It will also make captions available in the post-meeting recording.
This feature, however, can only be activated from the desktop (i.e. not in a browser) version of Teams. It also typically needs to be enabled from within your Microsoft 365 Teams Admin section. This may require asking IT or the person who administers your Microsoft 365 tenancy to do so.
To start Live Transcriptions:
Go to the meeting controls and select “More actions”
Choose “Record and transcribe”, and select “Start transcription”.
All the participants will see a notification that the meeting is being transcribed. The transcript appears on the right side of the screen.
Note: If you also want to record the meeting, select More options again and this time choose Start recording.
Live Captions
Live captions will display all the words spoken as text on the screen. The font size and position can be customised to suit.
To start Live Captions:
During a meeting, go to the meeting controls and select “More actions”.
Select “Language and speech”, and then “Turn on live captions”.
Live captions can be toggled off at any time during the meeting by repeating the process.
Set status duration
By default, Teams changes your status after 5 minutes (e.g. from Busy to Away). You can, however, set a status duration that suits your needs.
To do this, click on your name and set your status. From within the status options, select “Duration”, then set your desired status and how long that status should remain active.
iPad
Scan documents
An iPad or iPhone can be used to scan documents through the Apple Notes app.
To scan a document:
Open the Notes app and select a note or create a new one.
Tap the Camera button, and then tap “Scan Documents”.
Place your document in view of the camera.
If your device is in Auto mode, your document will automatically scan. If you need to manually capture a scan, tap the Shutter button or press one of the Volume buttons. Then drag the corners to adjust the scan to fit the page and tap Keep Scan.
Tap Save or add additional scans to the document.
Virtual trackpad
It can sometimes be useful to be able to move the cursor within a section of text, for example when writing an e-mail. Touch two fingers on the on-screen keyboard and move them across it; the cursor will move as your fingers move, effectively turning the keyboard into a trackpad. Note: this does not work if you are using a keyboard app other than Apple’s default one.
Voice assistants
In the festive season here are some fun things to try with your voice assistant.
Ask Siri “I see a little silhouetto of a man”
Playing a game and can’t find or use a dice? Simply ask, “Alexa, roll a dice.”
Say, “Alexa, Beatbox for me”
Say, “Alexa, We don’t talk about Bruno”
Say, “Alexa, drum roll, please.”
Say, “Alexa, rap for me.”
Say, “Alexa, meow.”
Ask, “Alexa, how many sleeps until Christmas?”
Wishing you a very happy and peaceful festive season, and may 2024 be a good year for you. As always, I am available to provide support and help where I can, whether that be with Microsoft Teams, Microsoft 365 in general, mobile technology, smart home technology or something else.
Each year we are given a peek into the near future of mobile technology at the Google I/O and Apple’s WWDC conferences.
This year Apple joined the augmented reality (AR) and mixed reality (MR) space with the launch of Apple Vision Pro. Google, Magic Leap and Microsoft have released AR/MR devices over the past few years with varying degrees of success. Apple is known for only releasing technology when it feels it is refined and functional enough to meet Apple’s high standards. It is fair to say Apple’s iPhone, iPad and Watch revolutionised mobile computing. Apple describes the Vision Pro as a new era in spatial computing. Once you put on the headset you are able either to augment your view of the world with photos, videos or apps, or to completely immerse yourself in another reality. Unfortunately, the cost of the Vision Pro, expected to be in excess of £3000 when released in the UK in 2024, will limit its adoption. Only time will tell the impact the Vision Pro will have. Nevertheless, the Apple Vision Pro looks incredible and very exciting! I can see huge potential to enhance and enrich the lives of people with disabilities. Apple’s Vision Pro introductory video is well worth a watch.
The Vision Pro was not the only announcement at WWDC 2023. Plenty of new hardware was revealed, including new Mac models, and new Apple Silicon chips. As customary, Apple also unveiled the next iteration of its mobile software platforms – iOS 17, iPadOS 17, and watchOS 10.
iOS 17
Many of the improvements in iOS 17 can be described as enhancements to the user experience.
Contact Posters
The Phone app in iOS 17 has received a big update and now features Contact Posters, which enable you to create a personalised image of how you would like your name to appear on another person’s device when making a call, using FaceTime (or other third-party apps), or sharing your contact details.
Live Voicemail
The new Live Voicemail feature enables you to see a transcript in real time as someone leaves you a voicemail. Apple also uses this feature to help identify and decline spam calls. To ensure that your information is kept private, all the data remains on the device and is processed there by the Neural Engine.
iMessage and Check-in
iMessage has also received an update, with a redesigned menu system and the addition of a new sticker experience allowing you to create Live Stickers from your photos or GIFs. But most significant is the new Check-in feature.
Check-in enables you to alert someone that you have arrived at a particular location. What makes Check-in more advanced than simply sending a message when you get there is the built-in intelligence. Once Check-in is initiated it will calculate the estimated travel time, and if for some reason the journey takes longer than expected it will send an alert to your specified contact. This alert will include your current location, the battery level of your device and the signal quality. If there is no signal or your phone dies, the person will be able to access your last known location. When you do arrive at your destination, Check-in will automatically send an alert to inform the person that you have reached your destination.
All data used in this feature is encrypted helping to safeguard your privacy. The Check-in feature could be useful for travel training and other scenarios.
Updated AirDrop and new NameDrop
Apple’s wireless sharing feature, AirDrop, has also been updated. It now allows the transfer of files to continue over an internet connection, meaning you no longer need to remain in close proximity to the device sharing files with you. AirDrop has been extended too, with the addition of NameDrop. This enables you to easily share your contact details with someone just by bringing the devices close to each other. While a similar feature has been available in the past, NameDrop refines this, making it far easier to use. NameDrop is also supported by Apple Watch, making it possible to share contacts through the Watch too.
Journal app
An unexpected addition to iOS is the new Journal app. As the name suggests, this diary app allows you to capture your thoughts and feelings in your own personal digital journal. It uses machine learning to prompt you to add the details of your day and the Journal app integrates with photos and maps allowing you to create rich entries about your day. In keeping with Apple’s commitment to privacy, all processing is done securely on the device.
Autocorrect, prediction and speech recognition
An updated “transformer language model” is included in iOS 17. This means you will see an improvement in Apple’s autocorrect and prediction. Dictation has also been updated with a new speech recognition model, making speech recognition more accurate.
Siri
Siri, Apple’s voice assistant, has also received a number of updates, including the option to simply say “Siri” rather than “Hey Siri”. A nice new accessibility feature is that Siri will now be able to read the content of a web page to you. This can be done while the phone is locked too, meaning you could set it to read the page, put your phone down and just listen.
iPadOS 17
iPadOS 17 includes many of the updates included in iOS 17. It also adds a new lock screen feature similar to what, until now, has only been available on iOS. This allows you to create custom iPad lock screens using photos, changing layouts, fonts and how the clock is displayed. Clocks can also be intelligently hidden in the background.
Widgets have been added to iPadOS 17. These widgets can be placed on the lock and home screens. With the bigger screen size of iPads, these widgets are slightly larger than the ones seen on iPhone. These widgets are also interactive, allowing you to actively use them, rather than merely displaying information.
Health app
The Health app has now been added to iPad too. This is not merely an addition from iOS but has been specifically designed for iPad and features larger and more detailed displays of the health data.
Support for PDF
Support for PDF has been improved dramatically in iPadOS 17, making it even easier to view and work with PDFs. It is now possible for text entry sections of PDFs to be detected automatically allowing you to easily make edits and send the file. PDF files can now be stored within the Notes app, even allowing you to store multiple PDFs within a single note and/or work with someone else on the document using Live Collaboration.
Countering Myopia
Over recent years Apple has devoted resources to addressing various health-related issues. This year Apple focused on trying to reduce myopia (short-sightedness). Studies have indicated that if children spend between 80 and 120 minutes a day outdoors, the chance of developing myopia is reduced.
Apple watchOS 10 will introduce daylight tracking to determine how much time is actually spent outside. In addition to this, a new feature in iOS 17 and iPadOS 17 will be able to measure the distance between the person’s face and their iPad or iPhone screen. This can be used as a key indicator of potential myopia.
New Accessibility Features
Apple will also be releasing some exciting new accessibility features.
Assistive Access
Assistive Access is aimed at reducing cognitive load, making using iPhone and iPad simpler by focusing on a limited number of tasks, e.g. taking photos, listening to music, or calling someone.
Once Assistive Access is enabled the entire interface is transformed. The simple interface has high contrast buttons and large text labels. The Phone and FaceTime apps get combined into a single Calls app. Tools enable the interface to be further customised, for example Messages can include an emoji-only keyboard and the option to record a video message.
Live Speech
Live Speech is effectively an AAC system built into Apple’s platforms, and will be available on iPhone, iPad, and Mac. Live Speech will enable people to type what they want to say and have it spoken out loud during phone and FaceTime calls as well as in-person conversations. It will include the option to save commonly used phrases that can be accessed and used.
Personal Voice
Personal Voice adds voice banking to the iPhone, iPad, and Mac. It is a simple way to create a personal synthetic voice. This can be done in 15 minutes – reading a randomised set of text prompts while recording the audio on your iPhone or iPad. Personal Voice uses on-device machine learning ensuring that the information remains private and secure. It is not clear yet if Personal Voice can be used with third-party AAC apps but it will integrate seamlessly with Live Speech so users can speak with their Personal Voice.
Live Caption
The new Live Caption feature, as the name suggests, will generate captions from audio in real time, whether that be from a phone or FaceTime call, social media content or a video stream. When used in FaceTime, the captions will automatically be attributed to the person speaking, making it easier to follow the conversation. As with most of Apple’s technology, all the processing happens on the device and the data remains there, ensuring that the person’s information stays private.
Point and Speak
A new feature, Point and Speak, is to be added to Detection Mode in Magnifier. This feature enables you to interact with physical objects that have several text labels, e.g. a microwave. The person can hold up their iPhone or iPad with the Magnifier app open and, as they move their finger across the appliance, the device will read out whatever their finger is pointing to. Point and Speak requires a device with a camera and LiDAR Scanner – most new iPhones and iPads have these.
Phonetic suggestions
For people who use Voice Control for text editing and as an alternative to typing, Voice Control will now be able to provide phonetic suggestions so you can choose the right word out of several that might sound alike, for example “do”, “due”, and “dew”.
Virtual game controller
The Switch Control accessibility feature can now also be used to turn any switch into a virtual game controller, allowing the person to play their favourite games on iPhone and iPad.
Google I/O 2023
For Google, similar to Apple, there were a number of new hardware announcements. These included additions to the Pixel range of devices, namely the Pixel Fold, Pixel Tablet, and the Pixel 7a. But really, it was all about the AI (artificial intelligence).
PaLM 2
Google unveiled PaLM 2, the latest version of Google’s large language model (LLM) AI, and a rival to systems like OpenAI’s GPT-4.
It was stated that PaLM 2 is stronger in logic and reasoning, thanks to its broad training. It is much better at a range of text-based tasks, such as reasoning, coding, and translation. It was trained on multilingual text spanning over 100 languages.
PaLM 2 is a significant improvement on PaLM 1, which was unveiled in 2022. There are several variants of PaLM 2, with the PaLM 2 Gecko version reported to be small enough to run on mobile phones. Google revealed that the new model (PaLM 2) is in fact already powering 25 Google services, including Google’s chatbot, Bard.
Google Bard
Google Bard will now be available to everyone, and is no longer limited to those signed up to the waiting list. Google will also be adding a host of new features to Bard, including an easier way to export generated text to Google Docs and Gmail.
Google plans on adding even more functionality to Bard in the future such as AI image generation using Adobe’s AI image generator, Firefly. Bard will also be integrated with third-party services like OpenTable and Instacart.
AI in Android
AI will make its way into Android too. One of these new features, Magic Compose, will enable you to reply to text messages using responses suggested by AI.
AI powered search – snapshots
PaLM 2 lies behind Google’s new AI-powered search, “snapshots”. Once you opt in to the new feature, called Search Generative Experience (SGE), AI-powered answers to your search query will appear at the top of the results. You can then further refine the answers returned with follow-up questions.
No doubt we will be seeing more AI-powered features across Google’s products and services as it tries to narrow the “AI gap” between itself and competitors like Microsoft, which already offers AI features that help you to write e-mails, summarise documents, and even generate slides for presentations.
Get in touch
Finally, I am always interested to hear about how you are using mobile and other smart technology. If you would like a particular topic covered in the next newsletter, or would like to know how to use some of the new features mentioned in this article, please get in touch. I am also available at any time to offer support and help where I can.
The ongoing evolution of mobile devices and computers and the changing ways organisations use that technology present both opportunities and challenges. It is now commonplace for mobile devices to be used both within the organisation’s premises and externally. This creates a need for organisations to ensure that these devices are managed and secure. While this can be achieved by setting up, managing and updating devices on an individual basis, it is often useful to use a mobile device management system (MDM), particularly if you have more than 10 – 15 devices.
There are a range of MDM systems on the market today, e.g. Meraki, JAMF, JumpCloud, VMware Workspace ONE, etc.; however, in this article I will focus on the MDM solutions offered by Microsoft.
Microsoft offers two options: Basic Mobility and Security, which is included with Microsoft 365, and Microsoft Intune. It is important to note that you can’t start using Basic Mobility and Security if you’re already using Microsoft Intune. However, you can start using Basic Mobility and Security and then add the additional capabilities of Microsoft Intune later.
For the remainder of this article, I will focus on Microsoft’s Basic Mobility and Security. It enables you to manage and secure mobile devices that are connected to your Microsoft 365 organisation, allowing you to set access rules and device security policies, and to wipe mobile devices if they’re lost or stolen.
Basic Mobility and Security supports many mobile devices, including Android, iPhone and iPad. However, each person associated with a device must have an applicable Microsoft 365 licence, and their device must be enrolled in Basic Mobility and Security.
Setting up Basic Mobility and Security
To set up Basic Mobility and Security you will need to sign in to your Microsoft 365 account as a global administrator.
It can take some time to activate Basic Mobility and Security. When it finishes, you should receive an email that explains the next steps to take. If the service has already been activated, you will see a link to “Manage Devices” rather than the activation steps.
Once the service is ready, the following steps need to be completed:
Configure your domain(s) for Basic Mobility and Security.
To do this you will need to add DNS records at your DNS host. If you are using a custom domain, the chances are that you have already done this during your initial Microsoft 365 set up. This step, while recommended, is only strictly required if you intend to manage Windows devices (an illustrative way to check the records is shown below).
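As an illustration, once the records have been added you can check that they resolve using PowerShell’s Resolve-DnsName cmdlet. The host names below are the commonly used device-enrolment records – treat them as an example and confirm the exact values shown for your tenancy in the Microsoft 365 admin center.

# Check the device-enrolment CNAME records for your domain (replace contoso.com with your own domain)
Resolve-DnsName -Name "enterpriseenrollment.contoso.com" -Type CNAME
Resolve-DnsName -Name "enterpriseregistration.contoso.com" -Type CNAME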
Note: some Microsoft documents say to “go back to the Security & Compliance Center and go to Data loss prevention > Device management to complete the next step.” The Security & Compliance Center has been migrated to Microsoft Purview and can be found under Settings > Device onboarding.
Configure an APNs Certificate for iOS devices
To manage iPads and iPhones, you need to create an Apple Push Notification service (APNs) certificate. For this you will need to be signed in to Microsoft 365 as a global administrator.
Navigate to the Microsoft 365 admin center, and choose APNs Certificate for iOS. (note: this page can be slow to load and appear blank at first.)
On the Apple Push Notification Certificate Settings page, check the “I agree” box and select “Next”.
Download your CSR file and save the Certificate signing request – make sure to note where that file is being saved on your computer. Select “Next”.
On the Create an APNs certificate page:
Select Apple APNS Portal to open the Apple Push Certificates Portal. This opens in a new tab.
Sign in with an Apple ID. Important: Use an Apple ID associated with an email account that will remain with your organisation even if the user who manages the account leaves. Save this ID because you’ll need to use the same ID when it’s time to renew the certificate.
Select “Create a Certificate”, read and check the “I have read and agree to these terms and conditions” checkbox, and accept the Terms of Use.
Select “Choose file” to browse to the Certificate signing request you downloaded to your computer from Microsoft 365 earlier, and select Upload.
Download the APN certificate you created in the Apple Push Certificates Portal to your computer. Tip: If you’re having trouble downloading the certificate, refresh your browser, or try uploading the Certificate signing request again.
Go back to Microsoft 365 and select “Next”.
Enter your Apple ID
Browse to the APN certificate you downloaded from the Apple Push Certificates Portal and upload it.
Select “Finish”.
Set up multi-factor authentication
Multi-factor authentication (MFA) helps secure sign-in to Microsoft 365 for mobile device enrolment by requiring a second form of authentication. Users are required to acknowledge a phone call, text message, or app notification on their mobile device after correctly entering their work account password. They can enrol their device only after this second form of authentication is completed. If MFA is not already enabled, it can be turned on in the Azure AD portal.
After user devices are enrolled in Basic Mobility and Security, users can access Microsoft 365 resources with only their work account.
Manage device security policies
It is good practice to create and deploy device security policies to help protect your organisation’s Microsoft 365 data, for example, policies that lock a device after five minutes of inactivity and wipe the device after three sign-in failures.
To create device security policies:
Sign in to Microsoft 365 as a global administrator.
When creating a new policy, it can be useful to first set the policy to allow access and report policy violations when a user’s device isn’t compliant with the policy. This allows you to see how many mobile devices are impacted by the policy without blocking access to Microsoft 365.
It is also advisable to test a new policy on the devices used by a small number of users before you deploy to everyone in your organisation.
Before enrolling a device in Basic Mobility and Security and creating and implementing policies, it is strongly advisable to consider the potential impacts. For example, a non-compliant device may hold personal apps, photos, and other information that could be deleted if the device is wiped. Please see this Microsoft article about wiping a mobile device in Basic Mobility and Security.
Enrolling devices
After everything has been set up and you have created and deployed a mobile device management policy, each licensed Microsoft 365 user in your organisation that the device policy applies to receives an enrolment message the next time they sign into Microsoft 365 from their mobile device. They must now complete the enrolment and activation steps before they can access Microsoft 365 email and documents.
Note: Users with Android or iOS devices will need to install the Company Portal app as part of the enrolment process.
The Karten Network, in association with TechAbility, intends to offer free support for Microsoft 365 (previously called Office 365) to Karten Network member organisations. To help us plan for this, if you have not already done so, please complete this very short online survey: https://survey.karten-network.org.uk
Lastly, I am always interested to hear about how you are using mobile and other smart technology too. If you would like to have a particular topic covered in the next newsletter, please let me know. I am also available at any time to offer support and help where I can.
Amazon’s Echo devices have become increasingly common in our lives. They are primarily known for Amazon’s artificial intelligence voice assistant, Alexa. However, the power of these relatively cheap devices extends far beyond asking Alexa what the weather is like outside. Apart from some of the earlier versions of the Echo dots, they can be used as a hub to connect, control, and manage smart devices – although some smart devices will require an additional hub.
With the list of “Works with Alexa” devices increasing, I will not go into detail about the devices available, but these include smart plugs, smart switches, smart lights, cameras, smart blinds, small appliances, etc.
Many of the newer versions of the Echo Dot include ultrasonic and temperature sensors too. Some of the Echo Show devices allow you to use the built-in camera as a sensor. There are also a multitude of third-party motion, temperature, and other sensors available.
To harness the power of these you can create a “Routine”. In simple terms a routine is a set of instructions that get triggered by something e.g. time of day, movement, a voice command etc.
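If it helps to picture it, the sketch below (purely illustrative, written in Python, and not Alexa’s actual internal format) shows the shape of a routine: a name, one trigger, and a list of actions carried out in order, which may include delays.

from dataclasses import dataclass, field

# Purely illustrative: not Alexa's internal format, just a way to picture
# what a routine is made of - a trigger plus an ordered list of actions.
@dataclass
class Routine:
    name: str
    trigger: str                                  # e.g. a phrase, a time of day, or a sensor event
    actions: list = field(default_factory=list)   # carried out in order; may include delays

evening_lights = Routine(
    name="Evening lights",
    trigger="40 minutes after sunset",
    actions=["turn on hallway light", "wait 60 minutes", "turn off hallway light"],
)
print(evening_lights)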
Creating a routine
To create a routine, you will need the Alexa app (available for both iOS and Android) and the Amazon account associated with the Echo device/s.
First, ensure that the Alexa app is installed and you have signed in.
To create a routine from scratch, in the Alexa app:
1. From the Alexa app home screen, tap on “More”.
2. Tap on “Routines”.
3. Tap the plus sign.
4. Tap the plus sign next to “Enter routine name”.
5. Type a name for your routine. You can currently have up to 200 routines per Amazon account, so I do recommend choosing a name that is quite descriptive.
6. Tap “Next”.
7. Tap the plus sign next to “When this happens”. This is what triggers the routine. If it’s a voice command, you can add up to 7 variations of the phrase. As an example, I have chosen to have the routine triggered by time, specifically 40 minutes after sunset.
8. Tap the plus sign next to “Add action”. Actions can range from a simple spoken response to playing music and controlling devices. You can also launch a Skill. This step can be repeated multiple times to build up complex routines. One nice feature is the option to add a delay in the routine. This allows you to, for example, turn on a light, wait an hour, play some music, and then turn the light off.
9. If you have multiple devices you can set the “From” option to control which Echo device responds to the routine – either “The device you speak to” or a specific Echo device. Unfortunately, routine names and the phrases used to trigger them must be unique. Routines are also global, associated with the Alexa Amazon account, and not specific to a particular device. This means, for example, that you can only set up one routine that turns on the lights in a specific room when you say “Alexa, turn on the lights”. If you want to set up the same function again you will need to choose a different routine name and phrase.
10. Tap “Save” to save your routine. Wait a few moments for the devices to update.
Pre-made templates
You can choose to use one of the pre-made templates such as “Begin my day”. These are shown on the Alexa home screen when you first start using the app. They can also be found by:
From the Alexa app home screen, tap on “More”
Tap on “Routines”
Tap on the “Featured” tab
These templates can be edited to suit your needs.
Editing a routine
To edit a routine that you have created:
From the Alexa app home screen, tap on “More”
Tap on “Routines”
Tap on the routine you want to edit
Tap on either “Change”, “View/Edit”, or the plus or minus icons. The order of multiple actions can also be changed.
Copy Actions to New Routine
If you have a routine that performs a particular action, for example turning lights on, and you want to create another routine to turn lights off, you can copy and edit that routine. To do this:
From the Alexa app home screen, tap on “More”
Tap on “Routines”
Tap on the routine you want to edit
Tap the 3 dots at the top right of the screen
Tap “Copy Actions to New Routine”
Enter a name for the routine and make the changes
Tap “Save”
Sharing a routine
Routines can also be shared. To do this:
From the Alexa app home screen, tap on “More”
Tap on “Routines”
Tap on the routine you want to share
Tap the 3 dots at the top right of the screen
Tap “Share Routine”
You will be presented with a warning informing you that you will be sharing information. While it’s important to always be responsible when sharing information, you can rest easy knowing that all network and account details will be removed. Device names will be anonymised, e.g. “Hallway Motion Sensor” will be changed to “motion sensor”. If you are still happy to share your routine, tap “Continue”.
You will now be presented with various ways to share the routine. Select the one that best suits your needs. Effectively, a URL is created.
Once the person receives the link, they will need to open it on a device that has the Alexa app installed. A screen will appear asking to either reject (“No, Thanks”) the Routine or “View Routine”. If you are not expecting a routine to be shared with you always tap “No, Thanks”.
Tap “View Routine”, edit or remove the fields highlighted by orange text, and tap “Save”.
There are countless possibilities that can be created using routines and I hope you enjoy experimenting with them.
Microsoft 365 Support Survey
The Karten Network, in association with TechAbility, intends to offer free support for Microsoft 365 (previously called Office 365) to Karten Network member organisations. To help us plan for this we kindly request that you complete this short online survey: https://survey.karten-network.org.uk
As always, I am keen to hear about how you are using mobile and other smart technology. If you would like to have a particular topic covered in the next newsletter, please let me know. Finally, I am available to provide help, support and advice to any of the Karten Centres.
Martin Pistorius, Karten Network Technology Advisor
Technology has permeated almost every aspect of our lives. Access to that technology and being able to use it enables us, especially those of us with additional needs, to participate in society to a greater extent. Over the past decade an enormous effort has gone into providing built-in accessibility features in many of the devices, applications and operating systems we use every day.
While there are far too many features to cover in just one article, I have highlighted some of the accessibility features that you may find useful.
Microsoft 365
Microsoft underwent a dramatic shift when Satya Nadella became CEO in 2014, placing accessibility at the top of Microsoft’s agenda. There are now many built-in accessibility features across their suite of products. Two features worth noting within Microsoft 365 (previously called Office 365) are Dictation and Immersive Reader.
Dictation
Dictation is available in all versions (web, desktop and mobile) of the Microsoft 365 edition of Word. To access Dictation in the web (online) version of Microsoft Word, log into your Microsoft 365 account, and open a new Word Document. Select “Home” then the “Dictate” icon.
Note: Depending on which web browser you are using and its security settings, you may need to enable access to the microphone for this to work.
Dictate works in the desktop version too. Simply launch Microsoft Word, open a document, click on “Home” and then the “Dictate” icon.
Dictate is also available on mobile devices. Tap on the Microsoft Word app, open a Document and you will notice a microphone icon in the bottom right of the screen, just above the keyboard. Tapping on the icon starts dictation.
Note: There can sometimes be a short delay before the microphone becomes active. You may also need to grant access to the microphone.
You can now speak what you want to have typed into the document.
As with all speech-to-text systems, it isn’t 100% accurate. However, it does provide a great way to create a text document with minimal keyboard input. For a full list of Dictation’s features and how to use it, please see the Microsoft Dictation help pages:
Microsoft’s Immersive Reader is a real gem. If you are not aware of it, I urge you to have a look at it. In keeping with Microsoft’s “on every device” principle, Immersive Reader is available on web, iPad and desktop. However, there are some small differences in the exact features available depending on the platform.
Immersive Reader allows you to make adjustments to the text to best support your needs. These include adjusting the size and spacing of the font, breaking words up into syl·la·bles, highlighting parts of speech, changing the background colour, speaking the text and more. Of particular note is Boardmaker PCS symbol support – unfortunately, this is only available on the web version.
To access Immersive Reader on the web, open a web browser, log in to Microsoft 365 and launch Microsoft Word. Click “View” then “Immersive Reader”. This will open the document in Immersive Reader view, which can also be expanded to full screen. The display preferences can be set by clicking on the three icons located in the top right corner of the screen. The text can be spoken by clicking on the “Play” icon at the bottom middle of the screen. Clicking the gears icon next to the play button allows you to adjust the speed and voice used to read the text out loud.
To access Immersive Reader on an iPad, tap on the Microsoft Word app, tap “View” then tap “Immersive Reader”.
To access Immersive Reader on a desktop, launch the Microsoft Word application click on “View” then “Immersive Reader”.
In the interest of brevity, I will use “iOS” to refer to both iOS and iPadOS.
Display & Text Size
Sometimes, you just need to adjust the size of things. Tap on “Settings”, then “Accessibility”, then, within the vision group, tap “Display & Text Size”. From within the Display & Text Size settings you can choose to bold text, increase text size, adjust button shapes, turn labels on or off, and reduce transparency. You can also increase the contrast, differentiate without colour, invert display colours and add colour filters.
Zoom
If you find you need to enlarge things on the screen, then enabling the Zoom feature may be useful. To do this, tap on “Settings”, then “Accessibility”, then, within the vision group, tap “Zoom”, and tap to turn it on. Once enabled, double-tapping with three fingers anywhere on the screen will open the magnifier (Zoom). Depending on your version of iOS you will either get a menu with options, or a magnifier window.
If you find Zoom useful, then I suggest also turning on the Smart Typing Function – also found within the Zoom options under Accessibility. This feature automatically magnifies any text you type in an input field, e.g. when you write a message.
Magnifier
Depending on your version of iOS and device, the built-in Magnifier app can be a powerful tool to view and identify objects in your environment. If you can’t find the Magnifier app, either search for it using Spotlight search or look in the App Library. Alternatively, you can enable an accessibility shortcut. To do this, tap on “Settings”, then “Accessibility”, then, within the general group, tap “Accessibility Shortcut”, scroll to “Magnifier” and tap on it – a check (tick) will appear next to it. Now triple-clicking the Side button or Home button will open the Magnifier app.
Within the Magnifier app there are a number of settings and features, including filters, torch and detection. Detection can identify objects, people and doors – providing visual, audible and spoken feedback about the object, and in the case of doors and people, how far away they are.
The detection feature, as mentioned in the previous newsletter, is only supported on the newer Apple devices with the LiDAR Scanner. While it may not be as good as dedicated assistive devices (e.g. OrCam) and other apps (e.g. Seeing AI), from my experimenting with it I found it does a decent job.
Sound Recognition
Apple devices have fairly decent microphones, and these can be used to help alert you to specific sounds such as a doorbell, a kettle boiling, running water, etc., with the option to add custom sounds too. To enable this feature, tap on “Settings”, then “Accessibility”, then, within the hearing group, tap “Sound Recognition”, and tap to turn it on. If this feature has not previously been turned on, your device will download some additional files before enabling it.
Background Sounds
Background Sounds is one of those lesser-known features that can be very useful. This feature is designed to play a sound to block out noises within your environment, helping you to focus. To enable this feature, tap on “Settings”, then “Accessibility”, then, within the hearing group, tap “Audio/Visual”, and tap “Background Sounds” to turn it on. There are currently six sounds to choose from.
If you do find this useful, I suggest adding “Hearing” to the Control Centre. This can be done by tapping on “Settings”, then scrolling down to “Control Centre” and adding it.
Back Tap
Back Tap is one of those features that has a multitude of possible uses. In short, Back Tap allows you to assign a particular function to either double or triple tapping on the back of your iPhone (it is not available on iPad) to trigger an action, e.g. launch the camera, take a screenshot, turn on the torch, etc. Combining this with Apple Shortcuts makes even more complex actions possible.
To enable this feature, tap on “Settings”, then “Accessibility”, then, within the physical and motor group, tap “Touch”, scroll down to the bottom and tap “Back Tap” to turn it on. You can now assign an action to a double and/or triple tap.
Note: if you have a case on your iPhone, this may affect the responsiveness of this feature, although personally, even with a rugged case, I have not experienced any problems.
Guided Access
Guided Access restricts the use of an iPhone/iPad to a single app. You can also opt to disable the buttons on the device. This feature can be useful if you want to ensure that someone doesn’t navigate away from a particular app, whether accidentally or intentionally.
To activate Guided Access, tap on “Settings”, then “Accessibility”, then, within the general group, tap “Guided Access” to turn it on.
From within the Guided Access settings you can set a passcode, choose what happens if a time limit is set, and set how long before the device locks, including preventing the device from locking. You can also turn on an Accessibility Shortcut – if you are going to use Guided Access I recommend turning this on. When enabled, you can triple-click the side button to launch Guided Access.
To use Guided Access, navigate to and launch the app you want to restrict use to, then start Guided Access (triple-click the side button if you enabled the shortcut). You now have the option to set more specific restrictions, e.g. disabling the volume buttons. Then tap “Start”. You will be prompted to enter a passcode. This passcode is unique to Guided Access and can be different to the passcode used to unlock the device. Use of the device is now restricted to the chosen app.
Assistive Touch
Assistive Touch is designed to help people who either have difficulty using the touch screen (or part thereof) or require an adaptive accessory. To enable this feature, tap on “Settings”, then “Accessibility”, then, within the physical and motor group, tap “Touch”, and tap “Assistive Touch” to turn it on.
Once enabled, a floating virtual button will appear on the screen. Selecting it will open a menu with a multitude of options, from controlling the device to viewing notifications, and the options can be customised to suit your needs.
Spoken Content
Personally, Spoken Content is one of my favourite features. While it may not have all the features of an app like Speechify, it is extremely effective at reading content. To enable this feature, tap on “Settings”, then “Accessibility”, then, within the vision group, tap “Spoken Content”. Within the Spoken Content settings you have the options to “Speak Selection” or “Speak Screen” – unless you have a particular need for the entire screen to be spoken, I recommend only enabling “Speak Selection”.
“Typing Feedback”, a subsection of the Spoken Content settings, allows you to enable spoken feedback for what is being typed and to speak predictions.
If you have Spoken Content enabled you can now select any text, and from the context menu select “Speak” to have it read to you. Note: The “Speak” option may be hidden further along the context menu – tap the right arrow to view it.
More Accessibility features
I have only scratched the surface of the accessibility features in Microsoft 365 and iOS, not to mention the fact that many of these accessibility features are also available for Android-based devices.
For more information, please visit the following web pages:
As always, I am keen to hear about how you are using mobile and other smart technology. If you would like to have a particular topic covered in the next newsletter, please let me know. Finally, I am available to provide help, support and advice to any of the Karten Centres.
In late spring, developers from around the world gathered to attend the two major developer conferences – Google I/O and Apple’s WWDC. These conferences typically serve as platforms for major announcements and glimpses into the near future, and this year was no exception.
Google I/O
Google I/O, held in early May, incorporated many announcements. These included four new mobile devices (Google Pixel 6A, Pixel 7, Pixel Watch and Pixel Tablet), Android 13, and, excitingly, the return of Google Glass.
Android 13
The latest iteration of the Android operating system, Android 13, will include a host of improvements and refinements. The most noteworthy improvements have been made to the user interface through Google’s “Material You” theme.
Google will also be relaunching Google Wallet. This is expected to go beyond just Google Pay and will support a variety of digital IDs – much like the features currently offered by Apple Wallet.
Android 13 could be considered more of a refinement of Android 12 than a significant jump forward. Android 13 is available as a public beta for those who wish to explore it, and is expected to be released later in the year.
Google Pixels
Google will be launching three new mobile phones. The Pixel 6A is a mid-range phone expected to be available at the end of July this year. Unlike previous models, where the cost of the device was reduced by using a less powerful processor, the 6A will feature the same Tensor chip and design as the Pixel 6 but will only have a 12-megapixel camera compared to the 50-megapixel camera in the standard Pixel 6.
Google also provided a glimpse into their new flagship phones, the Pixel 7 and Pixel 7 Pro. These new devices will include a newer version of Google’s Tensor chip and improved cameras. However, the full specifications of the new devices will only be known when they become available in the autumn.
Pixel Watch
As with the Pixel 7 and Pixel 7 Pro, not much detail was revealed other than that Google will be releasing its own smartwatch, the Pixel Watch. Google acquired Fitbit a little over two years ago for their health and wellness tracking technology. The Pixel Watch will be fully integrated with the Fitbit system and run on Google’s Tensor chip. It is expected to include emergency SOS features as well as work with the Google Wallet, Google Maps, and Google Smart Home apps. The Pixel Watch is expected to be released in the autumn.
Pixel Tablet
While Google was very sketchy with the details of the new Pixel Tablet, they did confirm that it will run on Tensor, like Google’s other devices, and will be released next year.
The return of Google Glass
Perhaps the most exciting announcement was Google’s next-generation augmented reality (AR) glasses. Gone is the futuristic look of the first generation of Google Glass – the new device looks more like regular glasses. Despite Google not really providing any details, they are clearly keen to join the likes of Meta (i.e. Facebook/Instagram), Snap and Magic Leap in the augmented reality space. In Google’s demonstration, the glasses were shown projecting a real-time translation of what someone was saying, including the ability to translate American Sign Language into text.
WWDC 2022
During the WWDC keynote Apple announced the new MacBook Air, and excitingly the next generation of Apple Silicon – the M2 Chip, a major advance on the M1 processor. In keeping with tradition Apple announced a plethora of new software updates for the iPhone, iPad, Apple Watch, and Mac. Here are just a few highlights of what will be coming to a supported device later in the year.
iOS16
As with every release of iOS there is a raft of improvements, refinements, and new features, many not even making the news headlines. Below are some of the changes coming in iOS 16.
All-New Lock Screen
The new lock screen is now highly customizable with different styles, colour filters, and fonts. New widgets can now also be added to display information such as calendars, weather and even live updates from various sporting events. You can now also use photo shuffle to display different photos on the lock screen throughout the day.
Dictation
Major updates to Dictation will now allow you to swap seamlessly between voice dictation and the touchscreen keyboard. Along with improvements to Dictation itself, it will automatically add punctuation to the text and even supports dictating emoji.
Live Text
Live Text, which enables text to be extracted from either the camera or images, has now been extended to video too. Now you can pause on any frame and interact with or grab text from the video.
This technology has also been expanded to allow you to lift a subject out of an image’s background and paste it into other apps.
Safety Check Privacy Settings
In recent years Apple has put a lot of effort into improving your privacy when using their devices, and has now announced Safety Check. This new privacy setting allows you to quickly review and revoke access, sign out of iCloud on all devices, and limit Messages to a single device. This feature is aimed at supporting people who find themselves in an abusive relationship.
Security updates
Starting with iOS 16, security updates can be installed automatically as they become available and will no longer require a full new version of iOS. This will help ensure that your devices are kept as secure as possible without you needing to think about it.
This new feature will be enabled by default. However, should you wish to turn it off (not recommended), you can do so in the Settings app under “General > Software Update > automatic security updates”.
Medication tracking
While only available in the US for now, this new feature will enable you to set reminders and log when medication was taken. It will also notify you of any potential negative interactions of the medication, for example, if it’s not advisable to consume alcohol while taking a particular medication.
Matter Smart Home App
Apple have redeveloped their Home app, incorporating the Matter standard. Matter is a connectivity standard that emerged from an industry-led working group (Amazon, Apple, Google, Samsung SmartThings, the Zigbee Alliance) started in 2019. Matter aims to allow smart home devices to work together seamlessly.
Apple’s new Home app allows better control and navigation of smart home devices. You can now get an overview of your smart home status in a single view, and the app has new categories such as lights and climate controls. You can now add a home widget to the lock screen too, making it possible to keep an eye on your smart home without needing to unlock your phone.
Fitness app
Until now, Apple’s Fitness app was only available to Apple Watch users. Starting with iOS 16, the Fitness app will be available to all iPhone users.
Accessibility improvements
Apple has often led the way by embedding accessibility into every aspect of their technology. With the advancements in hardware, machine learning and software, iOS16 will include even more accessibility features. These include:
Door Detection
This feature will assist someone with a visual impairment to navigate by identifying a door. Door Detection can then provide the person with information about how far they are from the door, whether it is open or closed, and whether it can be opened by pushing, turning a knob, or pulling a handle. Additionally, Door Detection can read signs, door numbers and symbols around the door.
Door Detection requires an iPhone or iPad with the LiDAR Scanner, for instance the iPhone 12 Pro and Pro Max, iPhone 13 Pro and Pro Max, or the iPad Pro.
Live Captions
Live Captions will now be available on iPhone, iPad, and Mac. Live Captions are generated in real time on the device, ensuring they are private and secure. With Live Captions enabled, any audio content will appear as text captions too. This could be a phone or FaceTime call, a video conferencing or social media app, streaming media content, or a conversation with someone next to you. You can adjust the font size to suit your needs too. When using this feature with FaceTime on a Mac you have the option to type a response and have it spoken aloud in real time to others who are part of the conversation.
Live Captions will be available on the iPhone 11 and later, iPad models with A12 Bionic and later, and Macs with Apple silicon.
Buddy Controller
Buddy Controller combines any two game controllers into one, meaning multiple controllers can drive the input for a single player. So now someone can ask a care provider or friend to help them play a game.
Siri Pause Time
For people with speech disabilities, you can now adjust how long Siri waits before responding to a request.
Sound Recognition
Sound Recognition now allows you to effectively teach your device to recognise custom sounds, for example a home’s unique alarm, doorbell, or appliances.
Apple Watch Mirroring
For people who have difficulty interacting with Apple Watch, Apple has introduced Apple Watch Mirroring allowing you to control a watch paired with your iPhone. This allows you to then use the iPhone’s assistive features such as Voice Control and Switch Control as alternatives to tapping the Apple Watch display.
iPadOS 16
Many of the new features included in iOS16 will also appear in iPadOS.
One new feature coming to iPad is Stage Manager. This feature automatically organises open apps and windows, allowing you to focus on your task while still being able to see everything at a glance. Unfortunately, due to the memory requirements, Stage Manager will only be available on iPads with an M1 or newer chip.
A new digital whiteboard app will also be introduced. The Freeform app enables you to add notes, include photos, draw, and even FaceTime someone directly from the app. Freeform supports collaboration, so it is possible to work together with others on the digital whiteboard with changes happening in real time.
Apple Passkeys
Currently a lot of effort is being put into creating a more secure way of logging into systems. Apple is working with industry partners such as Microsoft and Google, the FIDO alliance and developers to create a next-generation credential that’s more secure and easier to use. While there is still a long way to go, the aim is to create passwordless logins across mobile, desktop, apps, and browsers.
Passkeys, which Apple announced during a WWDC presentation on updates to Safari (Apple’s web browser), aim to make this possible. In simple terms, Apple Passkeys use the biometric features built into Apple devices, such as Touch ID or Face ID, together with cryptographic techniques to generate a unique and secure key. This is then stored on your Apple devices and shared through Apple’s iCloud Keychain, which uses end-to-end encryption. This, in theory, means that your credential can’t be stolen because it only exists securely on your devices. In time, you will be able to sign into websites and apps on non-Apple devices using an iPhone or iPad by scanning a QR code and then using Touch ID or Face ID to authenticate.
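For readers curious about what “cryptographic techniques” means in practice, the sketch below is a conceptual illustration only, not Apple’s implementation. It assumes the third-party Python “cryptography” package is installed, and simply shows the underlying idea: a website stores only a public key, and the matching private key, which never leaves the device, signs a fresh challenge at each sign-in.

# Conceptual sketch only - not Apple's implementation.
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives import hashes

# At registration: the device creates a key pair and sends only the public key
# to the website; the private key stays on the device.
private_key = ec.generate_private_key(ec.SECP256R1())
public_key = private_key.public_key()

# At sign-in: the website sends a random challenge, the device signs it after a
# Face ID / Touch ID check, and the website verifies the signature.
challenge = b"random-challenge-from-website"
signature = private_key.sign(challenge, ec.ECDSA(hashes.SHA256()))
public_key.verify(signature, challenge, ec.ECDSA(hashes.SHA256()))  # raises an error if invalid
print("Challenge verified - no shared password was ever sent or stored.")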
This is really exciting, not only as it provides a more secure means to login but will make it easier for those who have difficulty logging into systems. Many more announcements covering other products were made during WWDC. It remains thrilling to see the ongoing advances in technology and its potential to improve people’s lives.
Here to help
As always, I am interested to hear about how you are using mobile and other smart technology. If you would like to have a particular topic covered in the next newsletter, please let me know. I am also available at any time to support and help where I can.
With the release of each new mobile device or operating system, the line between a tablet (or mobile phone) and a traditional laptop computer becomes increasingly blurred. As with traditional computers, it is possible to do many of the same things with files on mobile devices.
What are files anyway?
A computer file, like the traditional paper-based file from which the name is derived, is a collection of information, or data. This data contains information about the file itself as well as its content. File data is then deciphered by an application and displayed as an image, audio, video, text, or a combination thereof.
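A minimal Python sketch makes the distinction visible (it assumes a file named holiday_photo.jpg exists in the current folder; any file you have will do):

import os
from datetime import datetime

path = "holiday_photo.jpg"  # hypothetical file name - substitute any file you have

# Information about the file itself (its metadata)
info = os.stat(path)
print("Size in bytes:", info.st_size)
print("Last modified:", datetime.fromtimestamp(info.st_mtime))

# The content is just bytes; an application (here, a photo viewer)
# interprets those bytes and displays them, in this case as an image
with open(path, "rb") as f:
    first_bytes = f.read(4)
print("First bytes:", first_bytes)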
Working with files on Mobile devices
Most of us work with files without thinking about them, e.g. sending someone a photo. Many mobile apps make use of cloud-based storage, making it easy to access files on multiple devices or transfer them between devices.
However, it is possible to use external storage devices with mobile devices allowing you to move or copy files, either to free-up storage space on the mobile device, create a backup, or move files to another device.
External Storage devices
External storage devices are typically either a flash drive (a.k.a. memory stick, thumb drive) or external hard drive. These can be connected either directly to the mobile device or using an adapter. Not all external storage devices will work though as they may require more power than the mobile device is able to provide, or the file system may not be supported by the mobile device.
Tip: As you have a USB connection to the mobile device, a USB SD card reader could be used to download photos taken on a digital camera onto the mobile device.
Android devices
To use an external storage device, the mobile device must be running Android 7.0 (Nougat) or later.
Most modern Android devices have a USB-C port, and a USB-A to USB-C adapter is often included with the device. If you do not have an adapter, an OTG USB adapter (sometimes called an OTG cable or OTG connector) is inexpensive and easily purchased online. This makes it relatively easy to connect a flash drive directly to the device. Some external hard drives may require too much power to work directly with the device – it is possible to power the hard drive separately though.
Once the external device is connected, locate the file explorer app (often called “My Files”); tapping on it will show you the storage options available, one of which should be the external storage device (other locations are likely to be “Internal storage” and “SD Card”). From here you can select to either move or copy files from various locations.
Despite Apple’s involvement in the development of USB-C, most iOS and iPadOS devices use Apple’s Lightning connector, which means that you will either require a Lightning to USB adaptor or a device fitted with a Lightning connector, for example the SanDisk® iXpand® Flash Drive.
If you are going to use an adaptor, I do recommend opting for Apple’s Lightning to USB 3 Camera Adapter. While often more expensive than third-party adaptors, it has more reliable power support, enabling you to plug a Lightning cable into the adaptor to supplement the power needs of some USB devices.
On a side note, in September 2021 the European Commission released a proposal that would require smartphone manufacturers to standardise on USB-C as the charging port on all devices. It remains to be seen what the outcome of this proposal will be for future devices.
To connect an external storage device to an iPhone or iPad the device will need to be running iOS 13 or iPadOS 13 or later. The external storage device will also need to use one of the following file systems, either macOS Extended journaled, FAT32, exFAT (FAT64) or APFS. Support for Windows default NTFS file system was only added to the current versions (version 15) of iOS and iPadOS. However, currently you can only read from devices formatted as NTFS. This means that if you want to copy or move files your external storage device will need to be using one of the other file systems.
Once the external storage device is connected, you can import photos and videos to your iPad or iPhone directly through the Photos app. You can’t, however, view or watch video files directly from the external device through the Photos app. For detailed instructions on how to import photos and videos, please follow one of the links below:
Tip: Once photos and/or videos are imported, you will be prompted to “keep or delete” the files from the external device. Always select “keep” unless you are absolutely sure you want to permanently delete them from the external device.
To view or watch video files directly from an external device you will either need a third-party app or you will need to use the Files app included in iOS/iPadOS. The Files app will also enable you to access other files as well as move or copy files between your iPhone/iPad and the external device.
Other ways of moving files
Files can be moved or copied between devices in other ways too. Mobile devices can be directly connected to a computer and accessed through the computer. However, depending on the security settings on the mobile device, this may be limited.
Apple Mac computers will require additional software to be able to access the files on Android devices. Basic software (Android File Transfer) to enable you to do this can be downloaded for free, or more advanced software (e.g. Dr.Fone) can be purchased – free trials are available.
Apple devices also offer wireless transfer of files between Apple devices using AirDrop. AirDrop needs to be enabled and the devices need to be within Bluetooth and Wi-Fi range of each other. Please follow this link for more details on how to use AirDrop.
The Western Digital My Passport Wireless SSD is an external hard drive that allows wireless transfer of files between devices and the hard drive. This (in theory) will work with all devices, eliminating the complexity of cable connections.
There are various cloud-based solutions too; however, these will not be covered in this article.
Due diligence
Transferring files between devices can offer a viable solution for creating backups, making more space on mobile devices and moving files for use on other devices. However, care needs to be taken to ensure that data protection and privacy policies are adhered to.
In this article I have highlighted some of the ways to work with files on mobile devices, should you have questions, or need support with anything mentioned please contact me. Please also let me know if there is a particular topic that you would like me to cover in a future newsletter.
As always, I am available to provide support, advice and help Karten Centres where I can.
With 2022 well on its way, we look forward to discovering new mobile devices, smart home technologies, and, in the not-too-distant future, Amazon’s household robot Astro. To get a glimpse of Astro, have a look at Amazon’s introductory video:
If you like your mobile device, I always recommend putting a case on it. Mobile device manufacturers have gone to great lengths to improve the robustness of their devices, with most now incorporating either Gorilla Glass, Ceramic Shield or a hybrid of the two. However, accidents happen: mobile devices get dropped, knocked off tables, or end up at the wrong end of an outburst of emotion.
Screen protector
I personally also like adding a screen protector to a mobile device. There are typically four types of screen protector available – Nano Liquid; Thermoplastic Polyurethane (TPU); Polyethylene Terephthalate (PET); and Tempered Glass.
Nano Liquid
Nano liquid, or liquid screen protectors, as the name implies, are a protective liquid that is applied to the screen. I suggest avoiding these: it is difficult to determine the level of protection they provide, they tend to wear off over time, and, unlike a physical screen protector, they can’t be replaced.
Polyethylene Terephthalate (PET)
PET protectors are made from a plastic often found in food containers and plastic bottles. They are cheap and thin, making them less noticeable. However, they do not offer the same degree of scratch and impact protection as TPU and tempered glass.
Thermoplastic Polyurethane (TPU)
TPU is a flexible plastic that offers a greater degree of scratch and impact protection, and limited “self-healing” properties. TPU is applied to the device using a spray solution – this can make the screen protector tricky to install. Once attached to the device, they do have a slight “orange peel” feel to them.
Tempered Glass
Tempered glass is generally considered the best type of screen protector. While tempered glass doesn’t have the self-healing properties of TPU, it offers the maximum scratch and impact protection. Tempered glass protectors are often advertised with an H rating, usually H9, derived from the ASTM hardness scale. The Mohs scale is considered a better indication of the hardness of the tempered glass. It can sometimes be useful to do an online search for the product that you are interested in to see if someone has tested it with a Mohs kit.
Whichever type of screen protection you opt for, I would recommend reading the reviews first. Often some of the more reputable (and expensive) ones do not work as well as the cheaper lesser-known brands.
If you like it, put a case on it!
For the best protection of your device, you need a case. There are two general categories of case: those that simply offer a degree of protection, and those that incorporate other technology, e.g. a speaker, keyboard or extra battery.
When considering a case, it is important to take into account the needs of the person using the device. Cases can add bulk and weight to the device. Some have a handle and/or a carry strap – and while protection outweighs aesthetics, the case may be available in a colour the person likes.
It is also worth considering the practical aspects of the case. The added thickness may mean the devices don’t fit in the charging trolley, or people may struggle to remove a protective flap to plug a cable into the device.
Generally, you get what you pay for when it comes to cases. Unlike screen protectors you are better off choosing a case from the more reputable manufacturers e.g. OtterBox, Griffin, Spigen, Targus, Mous, Big Grips etc.
While this may seem obvious, it is important to check that any case you are considering is compatible with your device; for example, there is a 7.8mm difference in size between the iPad mini 5 and iPad mini 6.
Big Grips make colourful, fun, light weight, and functional cases. They provide decent impact resistance and come in four variants: a simple case (Big Grips Frame), one with handles (Big Grips Lift), a slim case (Big Grips Slim), and one with a carry case (Big Grips Hipster).
OtterBox produce cases for both mobile phones and iPads. The OtterBox Defender series of cases for iPad is one of the best tough cases. Their Kids Antimicrobial EasyGrab Tablet Case also offers a nice option.
Griffin, like OtterBox, produce cases for both mobile phones and iPads. Similarly, the Griffin Survivor case is one of the best tough cases on the market. Griffin also have other cases that may not provide as much protection but have other features, such as an easy-grip hand strap.
Spigen, probably known more for their mobile phone cases, also offer a range of tough cases for iPad. While the Spigen tough cases may not look as military-grade tough as the Griffin Survivor and OtterBox Defender, they offer a high level of impact protection.
Mous, pronounced “mouse”, offer a selection of cases. Similar to Spigen, they are probably better known for their mobile phone cases, but also offer a number of iPad cases. Mous cases tend to look good, and less like your typical tough case. However, despite the looks, they provide a decent level of impact protection.
Mophie, one of Zagg’s product ranges, offers cases that incorporate a battery pack to extend the use of your device while also offering a degree of protection. Mophie also produce a space pack case for iPad, which includes 32GB of external storage and a battery pack.
iAdapter™ by AMDI is a specialist case that not only offers protection, but is designed to convert an iPad into a more traditional augmentative and alternative communication (AAC) device. The case incorporates front and rear facing speakers and a battery pack.
Many more
These are just a handful of cases currently on the market. As technology progresses, more and new cases become available. I would be interested to know what your experiences with cases are.
I am keen to hear about how you are using mobile and other smart technology too. If you would like to have a particular topic covered in the next newsletter, please let me know. I am also available at any time to support and help where I can.
In my previous newsletter article, I mentioned that Apple announced the latest versions of their mobile operating systems (iOS15 and iPadOS 15), highlighting some of the new features. With iOS15 and iPadOS 15 now available I thought it would be fitting to provide some tips on how to better use the mobile devices running this operating system.
Manage Home Screen Pages
It is possible to rearrange or delete entire home screens.
To rearrange your Home Screen:
Touch and hold an empty space on the Home Screen to enter edit (wiggle) mode.
Tap on the row of dots at the bottom of the screen indicating the pages of your Home Screen.
All the pages of your Home Screen appear in a grid. Touch and drag a page to rearrange it in relation to your other pages. The other pages will move in response to your drag action.
Tap “Done” in the top-right corner of the screen when you have finished editing.
To delete a Home Screen page:
Touch and hold an empty space on the Home Screen to enter edit (wiggle) mode.
Tap on the row of dots at the bottom of the screen indicating the pages of your Home Screen.
All the pages of your Home Screen appear in a grid. Tap on the tick under the page that you want to delete.
Tap the minus (-) icon in the top-left corner of the screen to delete it.
A message will be displayed asking you to confirm, tap “Remove”.
Tap “Done” in the top-right corner of the screen when you have finished editing.
When you delete a Home Screen page, the apps remain in the App Library. If you want to add an app back to a Home Screen, you will need to drag it from the App Library onto the Home Screen.
Use the camera to scan any text
This is a very useful feature; it can be accessed directly through the Camera app or from within another app. For me personally, I find this very handy as I can scan text from within my AAC app (Proloquo4Text) and then “speak” the text.
I do find it can take a moment for the device to process the text, though. The text recognition, while not perfect, is extremely good!
To access this feature from the camera app:
Open the Camera app.
Point the camera at the text you would like to scan.
When the device is ready, an icon will appear on the right-hand-side of the display, and a yellow bounding box is shown around the text being scanned.
Tap on that icon.
Once the text has been scanned you can choose how you want to use that text.
To access the text scan feature from within an app:
Open your app, it will need to be an app that supports text input.
Double tap to bring up the context menu.
Tap the scan text icon. A view of what is visible by the camera is displayed. A yellow bounding box is shown around the text being scanned with an “Insert” button.
When you are ready, tap the “Insert” button.
Scan/extract text from any photo
Using the same underlying technology, you can copy any text in a photo. As with the real-time text scanning, it takes a moment for the text to be analysed.
To extract text from a photo:
Open the photos app.
Open the photo containing text.
Triple tap, or long press, on the text to bring up a menu with options. From here you can use the text as you wish. If the photo contains a phone number, you will have the option to call that number.
Safari tweaks
The Safari browser received a major update. For those who like using a lot of browser tabs (such as myself) it is now possible to create tab groups. This way your tabs can be organised into categories of your choosing.
To organise your tabs into groups:
Tap the tab button at the bottom right of the screen.
A grid of tabs you already have open is displayed.
Hold down the tab number at the bottom of the screen to create a group with the current tabs, or you can create a completely new group of empty tabs to start browsing a particular topic.
Alternatively, long press on the URL/address bar. This will open a menu with an option to “Move to Tab Group” – this includes an option to create a new tab group.
In the new version of Safari the URL/address bar has been moved to the bottom of the screen. If you prefer having it at the top of the screen that can be changed.
To move the URL/address bar to top of the screen:
Tap on “Settings”.
Scroll down to locate and tap on “Safari”. Top tip: pulling down on the Settings screen will display a “Search” box at the top. Typing in there will enable you to quickly find an app or setting you are looking for.
Scroll down to the “Tabs” section and tap “Single Tab”.
While you are in Safari’s settings you may want to enable “Hide IP Address” (although it is likely to be enabled by default). This is yet one more feature in Apple’s efforts to improve your privacy.
To enable “Hide IP Address” :
Tap on “Settings”.
Scroll down to locate and tap on “Safari”.
Scroll down to the “Privacy & Security” section and tap “Hide IP Address”.
Tap “From Trackers”.
iCloud Private Relay
For those concerned about privacy, Apple now offers a Private Relay option to paid iCloud subscribers – soon to be called iCloud+. Private Relay is a stripped-down virtual private network (VPN) – in effect this means your IP address and browsing activity is hidden from other parties, including Apple.
To enable iCloud Private Relay:
Tap on “Settings”.
Tap on your account name.
Tap on “iCloud”.
Tap on “Private Relay”.
Tap on the Private Relay switch to enable it.
Recovery contacts
Were you to lose or be locked out of your device, Recovery Contacts may prove to be a great help. A Recovery Contact enables you to select a person you trust as your “phone a friend for help”. They will then be able to provide you with a recovery key to access your account and recover your data. They will not be able to access your data, merely verify your identity. They will need a device running iOS 15 or iPadOS 15 or later. Apple also requires the person to be 13 years or older.
To enable Recovery Contacts:
Tap on “Settings”.
Tap on your account name.
Tap on “Password & Security”.
Tap on “Account Recovery”.
Tap on “Add Recovery Contact”.
Tap “Add Recovery Contact” again; you will be asked to authenticate with Face ID or Touch ID.
If you’re using “Family Sharing”, one of the group members is recommended. Alternatively, you can choose one of your contacts. A message will be sent to your contact asking them to accept or decline your request. Note: if a family member is selected, they’re added automatically. After they’ve accepted your request, you will see a message that they have been added as your account recovery contact. Should they decline or remove themselves as your recovery contact, you will receive a notification.
Now, should you ever be unable to access your Apple account/device for whatever reason, your recovery contact will be able to provide you with a 6-digit code that you enter on your device to log back in.
Set Background Sounds
Amongst the raft of accessibility features you will find background sounds. The six sounds to choose from are similar to third-party noise generating apps, and are aimed at masking unwanted environmental noise.
To enable Background Sounds:
Tap on “Settings”.
Scroll down and tap on “Accessibility”.
Scroll down and tap on “Audio/Visual”.
Tap on “Background Sounds”.
Tap the Background Sounds switch to enable and set your preferences.
Customise accessibility settings per app
iOS 15 and iPadOS 15 also introduced the ability to customise accessibility settings for individual apps. This means that if there is one particular app that needs some adjustments, these can be applied only to that app, rather than to the entire system.
To set per app accessibility settings:
Tap on “Settings”.
Scroll down and tap on “Accessibility”.
Scroll down and tap on “Per-App Settings”.
Tap “Add App”.
Browse through the list of apps and select the app you wish to apply the settings to.
From the list of added apps, tap the app and apply the required settings.
There are many more features and tips to enable you to use your iPhone or iPad more effectively. Should you require support or have a question please feel free to contact me.
Lastly, if you would like to have a particular topic covered in the next newsletter, please let me know.
Google and Apple, the two major mobile technology competitors held their annual development conferences in May and June respectively. While aimed at developers, Google I/O and WWDC are often the platform for major announcements and this year was no exception.
Google I/O
With Google cancelling Google I/O last year it was good to see the event back. Some of the noteworthy announcements were:
Project Starline
Project Starline provides a glimpse into the future of video calling. The system builds on three research areas – depth sensors and cameras; compression and streaming algorithms; and Light field display. These are combined to produce an extremely detailed 3D image that is rendered in real-time, without the need for 3D glasses to be worn.
Google is quoted as saying that it’s applying its research in machine learning, computer vision, spatial audio, and real-time compression to build the futuristic system. The result creates the effect of a person sitting across from you.
Currently, the system is only being used internally at Google and there are no plans to release the system commercially. However, access to the technology has been given to some of Google’s enterprise partners.
LaMDA
LaMDA (Language Model for Dialogue Applications) is the next generation of artificial intelligence (AI) conversational bots. LaMDA is based on Google’s transformer architecture which analyses how words relate to each other in order to predict what to say. However, unlike previous systems LaMDA can manage the open-ended nature of human conversations.
Natural conversations are derived by connecting topics, often in unexpected ways. LaMDA makes a major step towards being able to cope with this. This would mean that conversational bots could engage in natural conversations with people.
In simple terms, LaMDA makes it possible for computers to better understand natural language. This means in time the technology will make its way into search and voice assistants creating a better and more “human” interaction.
Google Wear
Google have failed to really capture the smartwatch market, with the Apple Watch proving more popular. Google will attempt to change this through their acquisition of Fitbit and the merger of Samsung’s Tizen operating system with Google’s Wear OS. The new operating system will now simply be called Wear. This promises to deliver a wider range of smartwatches with better capabilities. It is reported that the next Samsung Galaxy Watch will run this new software.
Android 12
Perhaps the main announcement at Google I/O was Android 12, reported to be the biggest change to Android since the implementation of Google’s “Material Design” in 2014. The new “Material You” theme transforms the device interface, creating a personalised and clearer interface with new widgets, a simpler settings menu, and larger, bolder quick settings tiles. With a 22% reduction in processing time, the new interface is more responsive, with smoother animations.
Notifications have been revamped to present a clearer at-a-glance view. There is also a new snooze feature which allows you to snooze specific notifications for a set amount of time.
There is a new lock screen featuring a large digital clock that adapts, reducing in size to show any notifications.
A new fresh look to the PIN code keypad has been included with large round buttons.
The new one-handed mode, as the name suggests, makes it easier to use the phone with just one hand, particularly on devices with larger screens.
While picture-in-picture is not new, new controls will make it possible to enlarge the window without going full-screen.
Specifically for Google Pixel phones, Android 12 will enable you to double-tap the back of the phone to perform a programmed action, such as taking a screenshot, launching Google Assistant, opening the recent apps, or pausing and resuming media playback.
Privacy is a hot topic in mobile technology at the moment, with the somewhat controversial release of iOS 14.5. Google has joined the party with its new “Privacy Dashboard” in Android 12. The dashboard will allow you to see which apps have accessed certain permissions. It also includes the option to quickly disable all app access to your camera and microphone.
When an app first requests access to your location, you can now choose to give it access to only an approximate rather than a precise location.
Behind the scenes, the Private Compute Core ensures that all audio and language processing is done on the device and can’t be shared over the network.
A new built-in app will also soon be available that will enable Android phones to be used as a remote control for any television running Android TV.
Android 12 includes many more improvements and features than those mentioned above. The public beta is currently available for download, with an official release expected in September this year.
WWDC
iOS 14.5
While most major announcements typically happen at WWDC, Apple released iOS 14.5 in April. This caused some controversy, notably with Facebook, because of Apple’s App Tracking Transparency privacy feature. App Tracking Transparency allows you to control which apps are able to track your activity across other companies’ apps and websites. This data is typically used to display personalised advertisements or shared with data brokers.
Other notable changes in iOS 14.5 include support for Apple’s new AirTag. This tracker, roughly the size of a £2 coin, can be attached to objects like your keys, wallet, or bag. You can then use the Find My app to locate the object, with visual, audible, and haptic feedback guiding you directly to the AirTag.
iOS 15
At WWDC 2021 Apple announced the coming release of iOS 15. As with every new version of the operating system this includes a host of improvements and new features. Some of these are:
FaceTime
FaceTime has received a significant update. For the first time, FaceTime is supported across platforms, making it possible to join FaceTime calls on Windows and Android through a browser. Similar to Zoom, it is now possible to schedule individual FaceTime calls and send a link to join the call. FaceTime now also supports portrait mode to blur backgrounds, and a grid view to speak to multiple people at the same time. The new spatial audio feature creates a 3D audio experience, allowing you to get a sense of where each person is on the screen during group calls.
SharePlay enables users to share music or their screen during a FaceTime call.
iMessage
iMessage has been redesigned: photos received in iMessage are now grouped into galleries, and links sent to you are automatically saved in “Shared with You” so they are kept in one place and can be accessed later. This works with Apple Music, Safari, Apple Podcasts, Apple TV and Apple News.
While notifications continue to be displayed on the lock screen, a new feature now collects the notifications and displays them in a custom summary, ordered by priority.
If “Do Not Disturb” or the new “Focus” mode is enabled, this status will now be shared with other users, like an away message.
Focus mode
The new Focus mode filters and hides notifications and apps based on specific user preferences. Focus also uses on-device intelligence to suggest which people and apps should be allowed to send notifications. These suggestions take into account the person’s context, for example work hours or winding down for bed. Once Focus is set on one Apple device, it is automatically applied to any other Apple devices the person may have.
Live Text
A new feature in the Camera app, called Live Text, can automatically identify and scan text in photographs. This text can then be copied and used in other apps.
Wallet
The Wallet app has been updated to support corporate ID badges, hotel room keys, and keys for homes with smart locks.
Safari
Apple’s Safari browser on the iPhone has received a major update with a redesigned tab interface and support for the same extensions used in the desktop version.
iPadOS 15
Similar to iOS 15 for the iPhone, the iPad operating system will also be updated, to iPadOS 15. The new version includes new ways to rearrange apps, the ability to place widgets on the home screen, and the App Library – a feature that until now was only available on the iPhone.
A new multitasking interface makes it easier to place two apps side by side on the iPad screen.
The Apple Notes app is now able to better interface with other apps. Swiping up from the bottom corner of the iPad will now launch the “Quick Note” feature, which enables you to quickly make notes using the Apple Pencil.
A new version of the Translate app has been added to iPadOS 15. This app enables people to have a conversation and have it translated on-screen in real time.
Apple’s Swift Playgrounds app, which is designed to help people learn how to code (program), has been updated: it is now possible to create full apps that can be submitted to the App Store.
Both iOS 15 and iPadOS 15 are expected to be released in September this year.
Many more announcements covering other products were made during WWDC. It remains exciting to see the ongoing advances in technology and its potential to improve people’s lives.
As always, I am interested to hear about how you are using mobile and other smart technology. If you would like to have a particular topic covered in the next newsletter, please let me know. I am also available at any time to support and help where I can.
“You’re on mute”… a phrase that has become familiar to many of us over the past year, and a sign of how the digital age has transformed how we live, work and interact with each other. Mobile devices have put a computer in our hands, one that is able to capture images and video, contributing to the zettabytes of data created each year. In this data era, how valuable is your data to you, and to your organisation?
The 31st of March is World Backup Day! (http://www.worldbackupday.com/en/) A day to emphasise and remind you how important it is to back up your data.
Despite the increased reliability of modern devices, hardware can, and does, fail; devices get damaged, stolen, or infected by viruses and ransomware. Hardware can be relatively easily repaired or replaced (at a cost); lost data, however, can be priceless and irreplaceable. A simple rm -rf * command executed on the wrong directory at Pixar deleted 90% of Toy Story 2! Fortunately, a copy of that data was able to be recovered. The incident also transformed Pixar’s backup policy.
Simply put, a backup is a copy of all your important files which is stored on another device in a safe place.
Typically, backups are made either to an external device (e.g. an external hard drive, NAS, etc.), to an internet-based service, or both. Each option has its advantages: external devices in most cases don’t have any ongoing costs, and data transfer rates are higher, meaning backups (and crucially restores) take less time to complete. Internet-based services offer off-site backups and greater data integrity, as service providers have their own backup procedures to keep your data safe. They may also be included in a service that you are already paying for, e.g. Microsoft 365.
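To make the idea of a backup concrete, here is a minimal sketch in Python that copies a Documents folder to a dated folder on an external drive. The folder locations and drive letter are assumptions purely for illustration; in practice most people will rely on the backup tools built into their operating system or a commercial backup service rather than a script.

```python
# Illustrative only: copy a folder to a dated backup folder on an external
# drive. The paths below are example assumptions – adjust them to match your
# own device.
import shutil
from datetime import date
from pathlib import Path

SOURCE = Path.home() / "Documents"                              # folder to protect
DESTINATION = Path("E:/Backups") / f"Documents-{date.today()}"  # dated copy

# dirs_exist_ok=True lets the script be re-run on the same day without failing
shutil.copytree(SOURCE, DESTINATION, dirs_exist_ok=True)
print(f"Backed up {SOURCE} to {DESTINATION}")
```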
The exact backup solution will depend on your requirements. However, some key aspects to consider when determining a backup procedure or policy are:
What data should be included in the backup?
How often should backups be done?
How many copies will be made?
Where is the backup data stored? If this is internet-based, where are the servers located?
Who has access to the backup data?
How long is backup data retained?
While having a backup procedure is good practice, if backups include personal data they are also a General Data Protection Regulation (GDPR) requirement. The GDPR states that organisations “…must have the ability to restore the availability and access to personal data in the event of a physical or technical incident in a ‘timely manner’.”
For personal data included in backups, the GDPR could also influence where backups are stored, who has access to them, and how long they are retained for. Trickier aspects include the anonymisation of data and the individual’s “right to be forgotten”. The GDPR does not make any exceptions for personal data contained in backups, i.e. personal data should be deleted from backups too. The guidance from the Information Commissioner’s Office (ICO) is that the steps needed to remove an individual’s personal data depend “…on your particular circumstances, your retention schedule (particularly in the context of its backups), and the technical mechanisms that are available”. The ICO stresses that “You must be absolutely clear with individuals as to what will happen to their data when their erasure request is fulfilled, including in respect of backup systems.”
While it is context specific, the guidance acknowledges that for technical reasons it may be difficult to erase an individual’s data from a backup. If this is the case, the backup data should be marked as ‘beyond use’. The ICO states: “You must ensure that you do not use the data within the backup for any other purpose, ie that the backup is simply held on your systems until it is replaced in line with an established schedule.” More information is available on the ICO website.
Most mobile devices will, by default, automatically back up to cloud-based storage – Google Drive for Android devices (Samsung devices can also be backed up to a Samsung account) and iCloud for Apple devices. This is dependent on the available storage space and usually only happens when the device is connected to Wi-Fi and is charging. These backups may not include all the data on the device. Some apps, e.g. WhatsApp, offer their own backup service specific to that app’s data.
Depending on the context in which these devices are being used, these backup options may need to be reviewed and disabled.
Whatever your backup procedure is, backups should be checked for integrity – most backup software allows for this. After all, a backup that can’t be used to restore your data is not much use.
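For the curious, the sketch below shows what a simple integrity check involves: each original file is compared, by checksum, with its copy in the backup. The paths carry over the assumptions from the earlier example, and dedicated backup software performs this kind of verification for you.

```python
# Illustrative integrity check: compare SHA-256 checksums of the original
# files with the copies held in the backup. Paths are example assumptions.
import hashlib
from pathlib import Path

SOURCE = Path.home() / "Documents"
BACKUP = Path("E:/Backups/Documents-2021-03-31")

def sha256(path: Path) -> str:
    """Return the SHA-256 checksum of a file, read in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

for original in SOURCE.rglob("*"):
    if original.is_file():
        copy = BACKUP / original.relative_to(SOURCE)
        if not copy.exists():
            print(f"MISSING from backup: {original}")
        elif sha256(original) != sha256(copy):
            print(f"MISMATCH (backup copy differs): {original}")
```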
Whether you choose to take the backup pledge or not, on the 31st of March please give some thought to backing up your data. To quote the World Backup Day website: “Don’t be an April Fool. Backup your data.”
Finally, as always, I am interested to hear about how you are using mobile and other smart technology. I am also available to support and help where I can.
With a new year, comes renewed hope and enthusiasm for the potential that 2021 holds. Unfortunately, we still find ourselves facing many challenges due to the Coronavirus (COVID‑19) restrictions. Our Home Learning Support resource is still available on our website. This collection of information, links and resources can be accessed at: https://karten-network.org.uk/home-learning-support/
An app included in the Home Learning Support resources, BorrowBox, allows you to access audiobooks as well as eBooks from your local library for free. If you are not already registered with your local library, this can be done through the app.
You may find, however, depending on your local library service, that you get stuck in an error loop when trying to register on an iOS or iPadOS device. This is caused by Apple’s strong password feature, which automatically generates a password when you are filling in forms. While you should be able to use this password or create your own, the library service’s web-form validation keeps flagging an error, preventing you from completing registration.
I would therefore suggest registering with your library on a computer first. The government website provides a handy postcode lookup to find your nearest library: https://www.gov.uk/join-library
Alternatively, you can choose to disable the strong password feature.
To do this:
Tap on the Settings icon
Scroll down and tap “Passwords”
Tap “AutoFill Passwords”
Turn off AutoFill Passwords
When devices misbehave
Thankfully, most mobile devices are very reliable, but when things do go wrong, what can you do, particularly under the Coronavirus restrictions? For Apple devices, Apple Repair (https://support.apple.com/en-gb/repair) offers a variety of options, including phone and online chat support, and the option to have the device collected by a courier, repaired, and returned to you.
It is best to know the Apple ID and password before beginning the repair request process. It may also be handy to have your device’s serial number and IMEI/MEID or ICCID. To find these on an iPhone, iPod touch, or iPad, please see Apple’s guide available at: https://support.apple.com/en-us/HT204073
Some other providers/suppliers may offer a similar service, including the option for a technician to visit your premises, conducting the repairs outside.
For software issues where remote assistance is appropriate, my go-to application is TeamViewer. TeamViewer is free for private and non-commercial use. Various licensing and pricing options are available, including a free 14-day trial.
A possible challenge with the use of TeamViewer is that the app needs to be installed on the mobile device. Some devices, such as Samsung devices, also require an add-on TeamViewer app. If this is the case, the user will be prompted to install the add-on app when they install the TeamViewer QuickSupport app.
To establish a remote session, the app needs to be launched and the TeamViewer ID sent to the person providing remote support – for some this may be difficult and require extra assistance. Thankfully, this process has been made more user-friendly.
Once the connection is established, you can view the screen and, in the case of Android devices and computers, control the device as if you had physical access to it, including restarting it.
Unfortunately, iOS and iPadOS devices currently only allow screen sharing, with a text and audio chat facility. You will therefore need to guide the person, instructing them what actions to take.
A word of caution: do not accept TeamViewer support from people or organisations you do not know and trust.
As mentioned [earlier in the newsletter], the Ian Karten Charitable Trust website was launched in December 2020 to mark the centenary of the Trust’s founder, Ian Karten MBE. To coincide with this, quick access tabs have been added to the Karten Network website. These allow access to the Nuvoic, Techability and Ian Karten Charitable Trust websites.
I would like to take this opportunity to thank Jo Healy for her valuable feedback. Thanks to this I have improved the Centre Edit form.
If you have any other suggestions, comments or requests regarding any of the Karten websites, please contact me.
As always, I am available to support and help where I can. Please also feel free to let me know if there is a particular topic you would like covered in future newsletters.
The Karten Network website aims not only to provide information to the general public but also to serve the Karten Network itself. A prime example of this is our Home Learning Support section; if you have not already done so, please take a look at the resource: https://karten-network.org.uk/home-learning-support/
We also aim to facilitate collaboration by providing information about what services and areas of expertise a centre has. We would therefore kindly request that the information on your centre page is kept up-to-date. This can be done by your Karten Centre manager, or the relevant person within your organisation.
While every effort is made to ensure that the current person responsible for the Karten Centre information has a Karten Network account with the necessary privileges, we know that things change. If you don’t have an account, please contact me.
If you have forgotten your password, it can be reset by:
Clicking on the “Login” link, located at the top left of every page on the website.
Clicking on the “Lost your password?” link below the login form.
Entering either your username or e-mail address and clicking the “Get New Password” button. You should then receive an e-mail enabling you to reset your password.
Allow a few minutes for the email to reach you. Please check your junk/spam folder.
If you don’t receive an e-mail, please contact me.
To update your centre page:
Please log in to the Karten Network website. The login link can be found at the top left of each page or by visiting: https://karten-network.org.uk/login/
Enter your username or e-mail address, and your password.
Once logged in, navigate to your centre page. Below the page title an “Edit Centre Information” link should now be visible. If this is not the case, please contact me [martin@karten-network.org.uk]
Click on the “Edit Centre Information” link. This will take you to a form where you can update and add information.
Edit the information as necessary. Then scroll down to the bottom of the page, and click the “Save Changes” button. Should you wish to exit the edit form without saving any changes, a cancel link is also available at the bottom of the page.
Should you have any questions, comments or suggestions about your centre page, please contact me: martin@karten-network.org.uk
It wasn’t that long ago when the idea of talking to and interacting with a computer by speaking was the stuff of science fiction. Now we “Hey Google…”, “Siri…”, and “Alexa…” without giving a second thought to it.
While intelligent virtual assistants (IVAs) are still maturing, they already offer an interface to many who would otherwise find traditional computer interfaces difficult to use. However, for some, accessing virtual assistants is still challenging. Thankfully, built-in accessibility features may make this easier. As of iOS 11, you are able to type rather than speak to Siri.
Google Home accessibility features are largely dependent on the device. On mobile devices, the app relies on Android’s accessibility features. On Google Nest smart speakers and displays, accessibility features are controlled through the Google Home app. To access these features, ensure your mobile device is connected to the same Wi-Fi network as your smart speaker or display, open the Google Home app, tap your speaker or Smart Display, then tap “Device settings”, followed by “Accessibility”. Currently the options are limited; they mainly include additional audio feedback and cues. For smart displays, in addition to auditory options, including closed captioning, it is possible to adjust the colours and the amount of contrast, as well as magnify the screen.
Amazon’s Alexa has a large number of accessibility features and, similar to Google Home, some of these are device-specific. They can be accessed either through the Alexa app or directly through the device. The features include audio instructions for configuring Amazon Alexa devices; customisable sound cues; adjustable text size and contrast; screen reader support for the Alexa app; support for keyboard navigation in the app and on some Alexa devices; screen magnification; and the ability to adjust the rate at which Alexa speaks.
The “wake word” can be changed, although this is currently limited to four options – “Alexa,” “Amazon,” “Echo,” and “Computer.”
On supported devices (e.g. the Amazon Echo Show) you can interact with Alexa without speaking. This includes using a keyboard during video calls made using the supported device. The Real Time Text (RTT) feature adds a live, real-time chat feed during calls and “Drop Ins”. When RTT is enabled, a keyboard pops up on the screen (external Bluetooth keyboards are also supported), enabling you to type text which appears in real time on both parties’ screens.
Ongoing efforts promise to expand access to virtual assistants for people with disabilities. Google recently announced a partnership with Tobii Dynavox to integrate Google’s virtual assistant into Tobii Dynavox augmentative and alternative communication devices.
The Karten Network is excited to be a partner in the European Union funded Nuvoic Project, led by specialist app developer Voiceitt, to further develop the Voiceitt app. The app is designed to translate impaired or unclear (“dysarthric”) speech into intelligible speech, as well as to control other voice-driven technologies such as virtual assistants (see the Nuvoic Project article for more information).
While privacy and data protection concerns exist, intelligent virtual assistants are here to stay and have the potential to make all our lives, particularly those of people with disabilities, a little easier.
As always, I am interested to hear about how you are using mobile and other smart technology. I am also available to support and help where I can.
Martin Pistorius, Karten Network Mobile Technology Advisor