The headlining event of Google I/O 2025, the live keynote, is officially in the rearview. However, if you've followed I/O before, you may know there's much more happening behind the scenes than what you'll find live-streamed on YouTube. There are demos, hands-on experiences, Q&A sessions, and more happening at Shoreline Amphitheatre near Google's Mountain View headquarters.
We've recapped the Google I/O 2025 keynote and given you hands-on scoops about Android XR glasses, Android Auto, and Project Moohan. For those interested in the nitty-gritty demos and experiences happening at I/O, here are five of my favorite things I saw at the annual developer conference today.
Controlling robots with your voice using Gemini
Google briefly mentioned during its main keynote that its long-term goal for Gemini is to make it a "universal AI assistant," and robotics has to be part of that. The company says that its Gemini Robotics division "teaches robots to understand, follow instructions and adjust on the fly." I got to try out Gemini Robotics myself, using voice commands to direct two robotic arms and move objects hands-free.
The demo uses a Gemini model, a camera, and two robotic arms to move things around. The multimodal capabilities — like a live camera feed and microphone input — make it easy to control Gemini robots with simple instructions. In one instance, I asked the robot to move the yellow brick, and the arm did exactly that.

It felt responsive, although there were some limitations. At one point, I tried to tell Gemini to move the yellow piece back where it was before, and quickly found that this version of the AI model doesn't have a memory. But considering Gemini Robotics is still an experiment, that's not exactly surprising.
I wish Google would've focused a bit more on these applications during the keynote. Gemini Robotics is exactly the kind of AI we should want. There's no need for AI to replace human creativity, like art or music, but there's an abundance of potential for Gemini Robotics to eliminate the mundane work in our lives.
Trying on clothes using Shop with AI Mode

As someone who refuses to try on clothes in dressing rooms — and hates returning clothes from online stores that don't fit as expected just as much — I was skeptical but excited about Google's announcement of Shop with AI Mode. It uses a custom image generation model that understands "how different materials fold and stretch according to different bodies."
In other words, it should give you an accurate representation of how clothes will look on you, rather than just superimposing an outfit with augmented reality (AR). I'm a glasses-wearer who frequently tries on glasses virtually using AR, hopeful that it'll give me an idea of how they'll look on my face, only to be disappointed by the result.
I'm happy to report that Shop with AI Mode's virtual try-on experience is nothing like that. It quickly takes a full-length photo of yourself and uses generative AI to add an outfit in a way that looks shockingly realistic. In the gallery below, you can see each part of the process — the finished result, the marketing photo for the outfit, and the original picture of me used for the edit.
Is it going to be perfect? Probably not. With that in mind, this virtual try-on tool is easily the best I've ever used. I'd feel far more confident buying something online after trying this tool — especially if it's an outfit I wouldn't typically wear.
Creating an Android Bot of myself using Google AI

A lot of the demos at Google I/O are really fun, simple activities with plenty of technical stuff going on in the background. There's no better example of that than Androidify, a tool that turns a photo of yourself into an Android Bot. To get the result you see below, a complex Android app flow used AI and image processing. It's a glimpse of how an app developer might use Google AI in their own apps to offer new features and tools.

Androidify starts with an image of a person, ideally a full-length photo. Then, it analyzes the image and generates a text description of it using the Firebase AI Logic SDK. From there, that description is sent to a custom Imagen model optimized specifically for creating Android Bots. Finally, the image is generated.
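That two-step flow — photo in, text description out, description in, generated image out — can be sketched very loosely in Python. The function names, prompt format, and return values below are stand-ins for illustration only, not the real Firebase AI Logic or Imagen APIs:

```python
# Loose sketch of an Androidify-style pipeline. The stub functions below
# stand in for real model calls (a multimodal Gemini call via the
# Firebase AI Logic SDK, then a custom Imagen model); they only
# illustrate the data flow, not actual APIs.

def describe_person(photo_bytes: bytes) -> str:
    """Stand-in for a multimodal model call that turns a photo
    into a text description of the person in it."""
    return "person with round glasses wearing a green jacket"

def generate_android_bot(description: str) -> bytes:
    """Stand-in for a custom image model tuned to produce Android Bots
    from a text description."""
    prompt = f"Android Bot in the style of: {description}"
    return prompt.encode()  # a real call would return image bytes

def androidify(photo_bytes: bytes) -> bytes:
    # Step 1: photo -> text description of the subject
    description = describe_person(photo_bytes)
    # Step 2: description -> generated Android Bot image
    return generate_android_bot(description)
```

The interesting design choice is the intermediate text description: rather than doing image-to-image translation directly, the app reduces the photo to language first, which the specialized image model then renders in its own style.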
That's a lot of AI processing to get from a real-life photo to a custom Android Bot. It's a neat preview of how developers can use tools like Imagen to offer new features, and the good news is that Androidify is open-source. You can learn more about everything that goes into it here.
Making music with Lyria 2

Music isn't my favorite medium for incorporating AI, but alas, the Lyria 2 demo station at Google I/O was pretty neat. For those unfamiliar, Lyria RealTime "leverages generative AI to produce a continuous stream of music controlled by user actions." The idea is that developers can incorporate Lyria into their apps using an API to add soundtracks to them.
At the demo station, I tried a lifelike representation of the Lyria API in action. There were three music control knobs, only they were as big as chairs. You could sit down and spin a dial to adjust the percentage of influence each genre had on the sound being created. As you changed the genres and their prominence, the audio playing changed in real time.
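The knob mechanic boils down to something simple: each knob position is a raw weight, and the weights are normalized into the influence percentages that steer the generation. Here's a toy sketch of that mixing step — the function and its dict-of-weights interface are hypothetical, not the actual Lyria RealTime API:

```python
# Toy sketch of the "genre knobs" idea: each knob sets a raw weight,
# and the weights are normalized into influence percentages. A realtime
# music model would then be re-steered with the new mix. This is an
# illustration only, not the Lyria RealTime API.

def mix_genres(knobs: dict[str, float]) -> dict[str, float]:
    """Normalize raw knob positions into per-genre influence shares
    that sum to 1.0 (or all zeros if every knob is at zero)."""
    total = sum(knobs.values())
    if total == 0:
        return {genre: 0.0 for genre in knobs}
    return {genre: value / total for genre, value in knobs.items()}
```

Because only the weights change between updates, the model can keep streaming audio continuously while the mix shifts underneath it — which is what made the demo feel instantaneous.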

The cool part about Lyria RealTime is that, as the name suggests, there's no delay. Users can change the music generation right away, giving people who aren't musicians more control over sound than ever before.
Generating custom videos with Flow and Veo

Finally, I used Flow — an AI filmmaking tool — to create custom video clips using Veo video-generation models. Compared to basic video generators, Flow is designed to let creators keep themes and styles consistent and seamless across clips. After making a clip, you can reuse the video's characteristics as "ingredients," and use them as prompting material to keep generating.

I gave Veo 2 (I couldn't try Veo 3 because it takes longer to generate) a challenging prompt: "generate a video of a Mets player hitting a home run in comic style." In some ways, it missed the mark — one of my videos had a player with two heads, and none of them actually showed a home run being hit. But setting Veo's struggles aside, it was clear that Flow is a useful tool.
The ability to edit, splice, and add to AI-generated videos is nothing short of a breakthrough for Google. The very nature of AI generation is that every creation is unique, and that's a bad thing if you're a storyteller using multiple clips to create a cohesive work. With Flow, Google seems to have solved that problem.
If you found the AI talk during the main keynote boring, I don't blame you. The word Gemini was spoken 95 times, and AI was uttered slightly less often at 92. The cool thing about AI isn't what it can do, but how it can change the way you complete tasks and interact with your devices. So far, the demo experiences at Google I/O 2025 did a solid job of showing that "how" to attendees at the event.



