Geek Hands-On: Android XR Headset Signals Strong AR/VR Return For Google

Artificial intelligence (AI) may have been front and centre at Google I/O 2025, with a whopping 95 mentions in its keynote, but a brief live demo of Android XR on stage piqued the interest of many, especially those familiar with the search giant’s history in the virtual and augmented reality (VR and AR, respectively) space.

After all, it has been 13 years since the introduction of Google Glass, a brand of smart glasses developed by its X Development team (formerly Google X), and a decade since the product’s discontinuation. A surprise announcement in 2024 reaffirmed the company’s commitment to exploring extended reality (XR) tech, now packaged into an Android-based operating system designed to support XR devices, including Samsung’s Project Moohan headset and a pair of smart glasses developed by AI research laboratory Google DeepMind – both of which we got to try during a hands-on preview held at Google Partner Plex in Mountain View, California.

As the older reveal of the two, Project Moohan carried a significantly shorter wait than the smart glasses, but the queue was hardly an accurate indicator of its potential. The prototype headgear stood out as the stronger of the pair, making a better impression in the areas that matter and offering a glimpse of a cautiously optimistic future (if all the right checkboxes continue to be ticked, that is). Compared to the smart glasses, which are styled like normal thick-rimmed sunglasses, Samsung’s answer to the Apple Vision Pro delivers a greater sense of immersion through a larger display and the ability to add a 3D effect to 2D visuals.

Those who don’t want to be bogged down by the extra heft will find a more lightweight alternative in the eyewear, which sports a discreet display on the right lens and a nice fit on the nose bridge. That’s not to say that the headset is unwieldy – its ergonomic design wrapped comfortably around the head from start to finish.

The basic workings of Project Moohan are straightforward: twist the dial on the back to adjust the fit, and press the top button to bring up Gemini, Google’s AI voice assistant that shares the same name as its large language model. Once the headset is on, the screen dims while the eye-tracking cameras measure the user’s interocular distance and mechanically move the lenses, with the whole process over in a jiffy. If the battery is running low, a portable battery pack can be attached.

Our brief stint with the headset prototype comprised three main segments, starting with a stereoscopic video experience on YouTube. Turning the right palm inwards pulls up the launcher, and a quick pinch-and-release selects an app or backs out of the menu. Swipe gestures flip back and forth between content, and the navigation proved to be an intuitive, pleasant affair, particularly for users who have tried similar offerings like Meta’s Quest (formerly Oculus) headsets (not to brag, but I got a “You’re really good at this” a few minutes in, and the rep decided I could skip the tutorial).

Still, first-timers should be able to pick up the controls easily, though the hover sensitivity takes some getting used to – it may be higher than some are accustomed to, and it took a few tries to gauge the tracking distance correctly.

As with all things VR/AR/XR, seeing is believing, making it difficult to fully translate the visual experience into words. The aforementioned video, shot in New Zealand, took me through a hiking trail surrounded by lush greenery and across a suspension bridge, delivering a sense of dimensionality that feeds into the immersion. On the bridge, the height felt real, and the virtual hand outline responded to my actual hand’s movements with little to no latency. While display specifics were not shared, the 4K footage was accompanied by vibrant colours, rich contrast, and crisp detail.

Next up was a normal 2D video, enhanced with AI for 3D depth. Referred to as spatialised video, the effect is fairly well-integrated, though users who look hard enough will spot a bit of an unnatural sheen during some parts. Another mechanic was introduced here – hover outside the right edge of the window, wait for a small animation pop-up, then quick-pinch and release to move it to the desired location. It’s normal not to nail it on the first attempt, and fortunately, it gets easier once you get into the groove.

The same manufactured depth also applies to still images, demonstrated through the “Go Immersive” feature on a photograph of Horseshoe Bend in Arizona. This time, Gemini was along for the ride, so after it dropped the name of the place, I asked two follow-up questions: one on the site’s brief history, and another on how its geographical formation came to be. As expected with real-time interactions, there’s a slight latency between each response, but not long enough to feel awkward or uncomfortable. More notably, the voice assistant impressed with its ability to pick up the Singaporean accent, which isn’t the easiest to understand upon first listen, keeping the need to repeat queries at bay.

A cool moment came when I asked Gemini if it could take me to my home country, and it opened Google Maps with a bird’s-eye view of Singapore. Pinching the ground, aiming both hands at an area on the map, and pulling them apart brought up Little Guilin, or Bukit Batok Town Park (albeit at a lower resolution), after which I enquired about the rock structure, the history of the quarry, and videos that had been taken there. Gemini redirected to YouTube smoothly, marking the end of the 10-minute demo.

As for the smart glasses, it should be noted that the build we got our hands on was a prototype – the device will only arrive after Project Moohan’s planned 2025 launch – so glitches, errors, and the like are to be expected. Even so, its performance was more inconsistent and underwhelming than expected, especially after more than an hour’s wait for a five-minute session. Slip it on, and the current time and weather details pop up, immediately revealing two observations: the translucent screen is small, and the field of view is narrow.

Naturally, the controls work differently from the headset, too. Tapping on the right side activates or pauses Gemini, while a button on the top serves as a manual shutter for taking photos. The positioning is a little difficult to nail down, and an indicator of sorts, such as haptic feedback, would be a welcome addition, but otherwise, Gemini proved to be as responsive as it was on the headset. It’s also more conversational now, bringing an organic touch to the flow of verbal communication.

When asked about two paintings in the room, the AI gave the right answer for both, alongside a brief overview of each. It did, however, twice fail to register my request to photograph a flower decoration, prompting me to press the button on top of the smart glasses and snap a picture manually. Doing so brings up a neat little preview that allows users to frame their shots – a leg-up over the Ray-Ban Meta, arguably its biggest competitor on the current market. The inconsistency seems to be a constant, as a separate session at the AI Sandbox, an on-site space with demo booths, saw Gemini wrongly identifying an artwork even after multiple attempts.

Conversely, the live translation feature worked like a charm. While the exchange wasn’t a translation per se, a conversation in Mandarin showed that the glasses are able to pick up foreign languages fairly accurately and quickly; here, I asked Gemini to describe the contents of a book and list the writer’s other works, and it passed both tests with flying colours. There are just two nitpicks to highlight: the inflection of some words sounded a little unnatural and stiff to a native speaker, and the accompanying text was in Traditional Chinese (Singapore uses Simplified Chinese), though that can likely be changed in the settings.

On a related note, it’d be interesting to see if the AI will recognise the many Chinese dialects, including Cantonese, Hokkien, Teochew, Hakka, and Foochow, when the eyewear eventually releases – a capability that would certainly benefit communication across generations, across Southeast Asia, and beyond. As it stands, the Android XR glasses should prove nifty for breaking down language barriers and for real-time navigation – the demo included a short segment with walking directions in AR, displaying a small section of the map on the ground and a notification of the next turn at around eye level.

“The UI is not just you in the environment, but it’s you and your companion AI Gemini in the environment,” said Sameer Samat, president of the Android ecosystem at Google, on the impact of an AI tagalong on everyday life, during the “Under the hood of Google AI” panel held at Google Partner Plex. “You can invoke Gemini, it can see what you see, it can hear what you hear, and it can actually take actions inside the UI with you… it can do all of these things.”

He added, “It reminds me a lot of the old sci-fi movies like Iron Man, where you could put on the helmet and you had Jarvis with you to assist you. That was a vision that we’ve had for a long time, and I think with the advancements in AI, it’s truly becoming possible.”

The jury’s out on whether users will indeed feel like Iron Man with Project Moohan or the Android XR glasses, but there’s no denying that both devices can serve specific purposes and use cases well. While the glasses, again, remain in the early stages and need more polish, and pricing continues to be the biggest concern for the headset, Google’s return to the fold is looking hopeful – more so now that it already has a strong Gemini ecosystem in place.