Image Courtesy: Google
At Google I/O 2025 last week, Sundar Pichai used his keynote to walk through Google's trajectory in the AI landscape. The past year, 2024, was a defining one for AI: models matured considerably, and the emergence of models like DeepSeek earlier this year created a new equation. Many went back to the drawing board as DeepSeek upended assumptions about the GPU power needed to deliver AI outcomes. GenAI today is at a critical evolutionary inflection point.
Against this backdrop, when Pichai took the stage at Google's flagship I/O 2025 event, his message was clear: Gemini is now at the heart of Google's product strategy. The keynote was packed with key updates, gave a top-level view of where Google is heading, and signaled how the company is well on its way to becoming AI-first. For instance, Pichai and other Google leaders showcased how the company's flagship AI model is being infused across Search, Workspace, Android, Photos, and more.
Google’s AI Innovations: Project Astra, Beam, and Real-Time Speech Translation
Unlike previous years, when major updates were reserved for the keynote, Google now rolls out its innovations and products throughout the year. In the past year alone, Google has launched over a dozen AI models and research milestones, along with more than 20 new AI-powered products and features. Pichai underscored how research is rapidly turning into real-world impact, singling out three compelling examples that bridge the lab and the product: Project Astra, Project Starline (now Beam), and real-time speech translation in Google Meet.
First, Pichai introduced Beam, the next-generation evolution of Project Starline, built in partnership with HP. For the uninitiated, Beam is a video communication device that uses six cameras to capture users in 2D and reconstruct them as lifelike 3D avatars. It also supports millimeter-accurate head tracking and 60-frames-per-second rendering, making remote conversations feel immersive and real. This AI-first communication platform is set to reach early customers later this year, a launch that could reshape video communication.
Second, Pichai showcased Google Meet’s new AI-powered speech translation. Borrowing technology from Starline’s development, this feature allows real-time translation across languages while preserving the speaker’s tone, emotion, and rhythm. In a live demo on stage, English and Spanish speakers were able to converse seamlessly, with the translated speech sounding expressive and human—not robotic. This feature is already available for premium subscribers, with enterprise deployment and support for additional languages on the horizon.
The third, and perhaps most forward-looking, example was Project Astra, a prototype of what Pichai described as the future of universal AI agents. Astra blends visual perception with conversational intelligence, allowing users to interact with the world through an AI that can see, remember, and respond contextually. Whether identifying a part of a circuit board or recalling where someone last left their glasses, Astra hints at a future where AI assistance feels more like a natural companion than a chatbot.
How Gemini 2.5 Pro is Powering Real-World AI and Developer Tools
These examples sit atop Google’s rapidly advancing Gemini models, led by Gemini 2.5 Pro. This model now leads the LM Arena in accuracy and speed, and dominates benchmarks like the WebDev Arena with major improvements in coding. Gemini is also powering applications like Cursor, where developers are accepting AI-generated code at scale. In a playful but telling milestone, Gemini even completed the full Pokémon Blue game autonomously, an interesting example of its growing reasoning and multimodal capabilities.
Tensor Traction with Google’s 7th Gen ‘Ironwood’
With all this development, the foundation that enables everything is infrastructure. Google is responding with a new 7th-gen Tensor Processing Unit, codenamed Ironwood. It brings a 10x performance boost over previous generations and will be available on Google Cloud later this year. With the explosion of AI usage, the data Pichai shared was mind-boggling: from 9.7 trillion tokens processed a year ago to 480 trillion now. Google is taking this head-on, optimizing for both performance and cost and pushing the Pareto frontier in AI compute. (In AI development, engineers constantly weigh trade-offs between competing goals such as accuracy, speed, and energy efficiency; the set of best-achievable balances among those goals is what the Pareto frontier describes.)
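To make the Pareto-frontier idea concrete, here is a minimal illustrative sketch in Python. The model names, accuracy scores, and costs below are entirely hypothetical and are not taken from Google's keynote; the point is only the selection rule: a configuration sits on the frontier if no other configuration is at least as accurate and at least as cheap while being strictly better on one of the two axes.

```python
# Minimal sketch of Pareto-frontier selection over hypothetical model configs.
# All names and numbers are illustrative, not Google's actual data.

configs = [
    {"name": "model-small",  "accuracy": 0.78, "cost_per_m_tokens": 0.10},
    {"name": "model-medium", "accuracy": 0.85, "cost_per_m_tokens": 0.40},
    {"name": "model-large",  "accuracy": 0.91, "cost_per_m_tokens": 1.50},
    {"name": "model-legacy", "accuracy": 0.80, "cost_per_m_tokens": 0.60},  # dominated
]

def dominates(a, b):
    """True if config `a` is at least as accurate and at least as cheap as `b`,
    and strictly better on at least one of the two axes."""
    at_least_as_good = (a["accuracy"] >= b["accuracy"]
                        and a["cost_per_m_tokens"] <= b["cost_per_m_tokens"])
    strictly_better = (a["accuracy"] > b["accuracy"]
                       or a["cost_per_m_tokens"] < b["cost_per_m_tokens"])
    return at_least_as_good and strictly_better

# Keep only configurations that no other configuration dominates.
pareto_frontier = [
    c for c in configs
    if not any(dominates(other, c) for other in configs if other is not c)
]

for c in pareto_frontier:
    print(f'{c["name"]}: accuracy={c["accuracy"]}, cost per M tokens=${c["cost_per_m_tokens"]}')
```

In this toy example, the "legacy" option drops out because another option beats it on both accuracy and cost; "pushing the Pareto frontier" means moving the whole set of non-dominated options toward better quality at lower cost.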
In conclusion, Pichai remarked that AI is now deeply integrated into Google’s product ecosystem. The Gemini app, for instance, now serves over 400 million monthly users, Gemini Pro usage is up 45%, and Search is undergoing its biggest transformation yet, with AI Overviews reaching over 1.5 billion people every month. Pichai also hinted that an “AI Mode” in Search is coming soon, marking a bold step toward the next era of the search experience, primed by AI.