Android XR is fixing everything that was wrong with Google Glass
The glasses aren't dreadfully ugly and Gemini AI should make them a lot more useful.
Another year, another Google I/O keynote filled to the brim with AI. The search giant today previewed a future that's equally cool and anxiety-inducing. From a supercharged Gemini Pro that's morphing into the de facto search experience to new generative audio and visual tools — including ones that can code apps and games or mix full music tracks and cinematic films with a prompt — it’s scary just how close AI is to replicating human creativity.
It’s no secret Google wants Gemini in everything, starting with your smartphone and web experiences. But later this year, it’s expanding to smart glasses with the Android XR platform, which has been in preview for months.
Although Google was one of the first at bat with a pair of smart glasses known as Google Glass, in a way, it's playing catchup to Meta and its stylish AI-powered Ray-Ban glasses, which are already on the market and integrate a camera, microphone, and speakers to deliver a multimodal AI experience. But Google's Android XR platform seems poised to raise the bar for what's possible by not only incorporating each of those elements but also adding augmented reality through an overlaid display. (Meta is said to be working on an updated pair with similar mixed reality features, which may launch in 2025.)
Google’s edge lies in the increasingly impressive and ambitious Gemini. Not to discount Meta AI, but Google seems to be developing its platforms with much broader and deeper capabilities and at a much more rapid pace.
For now, Android XR's toolkit seems mostly similar to Meta's. The glasses can view the world through their cameras and respond to queries about the objects, text, places, and even people you see. You can also capture POV photos and video, have Gemini translate your conversations in real time, get guided navigation directions with visual cues, and ask just about any general question you want. Meta AI has picked up more useful features in recent months, such as the ability to remember where you've parked and other life-changing conveniences.
Imagining what all of this could look like after a few years of maturation, I see creatives using the cameras on these devices to visualize the world in new ways. One idea that comes to mind is a director of photography using the camera at potential shoot locations to conceptualize and visualize scenes based on real-world locations.
With Gemini’s 3D modeling feature, you could perhaps use your voice to snap captures on-device and ask Gemini to recreate the scene you’re looking at as a stylized backdrop for an AI-generated film, or even recreate it as a 3D environment that you can use for projects like games or animated films. It’s unlikely that those types of operations can happen on-device, but theoretically, you could snap the photo or video and conveniently feed it to Gemini to play with later.
Unlike with the original Google Glass I fell in love with back in 2013, this time Google is partnering with fashion-forward eyewear brands such as Warby Parker and Gentle Monster to develop stylish frames that you won't be uncomfortable wearing, solving one of the biggest pain points that stymied the original. Tech brands such as Samsung and Xreal are also working on hardware, with the latter freshly announcing a pair called Project Aura.

The bulk and general geekiness of the original Google Glass made it difficult and embarrassing to use in public. Not only was it annoying to wear, but people always looked silly as they constantly strained their eyeballs mid-conversation to view whatever filled the tiny off-center heads-up display.
I vaguely remember some Google Glass wearers using them while driving, and pedestrians sometimes strolling haphazardly into busy crosswalks. Granted, this was during a trip to Google I/O featuring a heavy concentration of affluent developers and journalists who'd just bought a $1,500 pair. The design wasn't inherently unsafe, and the rectangular prism it used for a display didn't block your frontal or peripheral vision enough for concern, but no amount of foolproofing can account for human nature's curiosity and wandering attention.
Google Glass also prompted contentious debates about personal privacy in public spaces. It wasn’t always obvious whether someone was recording you, and that angered some folks to the point of violence. Some even coined a term for those who used the glasses so much or so invasively that they actually became a nuisance to society — meet the Google Glassholes.
Eventually, I lost interest in Google Glass. As much as I liked them, they were dreadfully ugly, exorbitantly expensive, and not useful enough for people who were neither ready nor willing to live 90 percent of their lives surrounded by cameras. Joke's on them: everyone is blatantly recording everything now, so society has collectively given up on that fight.
Needless to say, I’m lusting hard for a pair of Android XR glasses. I also still want whatever Meta’s cooking for its next magic trick. The race for consumer-ready wearables is firmly between the two right now, and I’m eager to see how each plans to expand. Meanwhile, top competitors Microsoft and Apple are still happy to toil away on mixed reality platforms for headsets that are a lot more powerful, but that no one should be caught walking around in public with.
The future of smart glasses and other XR wearables interests me so much that I’m willing to try wearing the contacts I’ve been deathly afraid of all my life to use them. (My weirdly strong prescription is seemingly impossible for Ray-Ban to fill, and I fear meeting the same fate with Google’s partners.)