NVIDIA, Google, Alphabet Double Down on Agentic and Physical AI

NVIDIA, Google, and Alphabet expanded their AI partnership at GTC, spanning Blackwell infrastructure, agentic AI, robotics, open models, and real-world applications in healthcare and energy.

Manisha Sharma

NVIDIA, Alphabet, and Google deepened their long-running technology partnership as they unveiled a fresh set of joint initiatives aimed at advancing agentic and physical artificial intelligence, signalling a broader push to move AI from digital workflows into real-world systems.


Announced at NVIDIA’s GTC conference, the collaboration spanned AI infrastructure, open model optimisation, robotics, healthcare research, and energy systems. The effort underscored how hyperscalers and chipmakers are increasingly co-designing AI stacks, from silicon to simulation, to accelerate adoption across complex industries.

Sundar Pichai, CEO of Google and Alphabet, said, “I’m proud of our ongoing and deep partnership with NVIDIA, which spans the early days of Android and our cutting-edge AI collaborations across Alphabet. I’m really excited about the next phase of our partnership as we work together on agentic AI, robotics and bringing the benefits of AI to more people around the world.”

Jensen Huang, Founder and CEO, NVIDIA, added, “Alphabet and NVIDIA have a longstanding partnership that extends from building AI infrastructure and software to advancing the use of AI in the largest industries. It’s a great joy to see Google and NVIDIA researchers and engineers collaborate to solve incredible challenges, from drug discovery to robotics.”

From Models to Machines

Across Alphabet, engineers and researchers have been working closely with NVIDIA’s technical teams to apply AI and simulation to problems that extend beyond conventional software. These efforts ranged from training robots with grasping skills to rethinking drug discovery pipelines and optimising power grids.

The collaborations leveraged NVIDIA Omniverse, NVIDIA Cosmos, and NVIDIA Isaac platforms, with teams from Google DeepMind, Isomorphic Labs, Intrinsic, and X’s moonshot project Tapestry sharing progress at GTC.

The work reflected a broader industry shift toward agentic AI systems: models that can perceive environments, reason over context, and take action in dynamic settings.


Cloud Infrastructure Takes Center Stage

To support AI research and production workloads, Google Cloud emerged as one of the first platforms to adopt NVIDIA’s latest Blackwell-based systems. The cloud provider said it would deploy the NVIDIA GB300 NVL72 rack-scale solution and the NVIDIA RTX PRO™ 6000 Blackwell Server Edition GPU.

The move positioned Google Cloud to serve customers building large-scale AI and simulation workloads, particularly those requiring tight integration between compute, networking, and AI software.

With earlier previews of A4 and A4X virtual machines, Google Cloud also became the first cloud provider to offer instances based on NVIDIA B200 and GB200 GPUs, expanding access to Blackwell-powered infrastructure.

Responsible AI and Open Model Optimization

Beyond infrastructure, the partnership extended into responsible AI development and open models. NVIDIA became the first external industry partner to adopt SynthID, a Google DeepMind technology that embeds digital watermarks directly into AI-generated images, audio, text, and video.

The integration aimed to support content transparency while preserving output quality, particularly for NVIDIA Cosmos world foundation models.
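
SynthID Text has also been released as an open integration in Hugging Face Transformers, which gives a rough sense of how the watermark is applied at generation time. The sketch below assumes a Transformers release that ships SynthIDTextWatermarkingConfig; the model choice and key values are illustrative, and this is distinct from the Cosmos integration described above.

```python
# Minimal sketch of SynthID text watermarking via the open Hugging Face
# Transformers integration (requires a release that ships
# SynthIDTextWatermarkingConfig). Model ID and key values are illustrative;
# real watermarking keys must be generated and kept private.
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          SynthIDTextWatermarkingConfig)

model_id = "google/gemma-2-2b-it"  # assumed example model
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

watermark_cfg = SynthIDTextWatermarkingConfig(
    keys=[654, 400, 836, 123, 340, 443, 597, 160, 57, 29],  # illustrative keys
    ngram_len=5,                                            # token context per watermark step
)

inputs = tokenizer("Write one sentence about robotics.", return_tensors="pt")
output = model.generate(
    **inputs,
    do_sample=True,                      # the watermark is applied during sampling
    max_new_tokens=40,
    watermarking_config=watermark_cfg,
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```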

Google DeepMind and NVIDIA also collaborated on optimising Gemma, Google’s family of lightweight open models, to run efficiently on NVIDIA GPUs. With the launch of Gemma 3, NVIDIA made the models available as optimised NVIDIA NIM™ microservices using TensorRT-LLM for improved inference performance. The companies also extended this work to optimising Gemini-based workloads via Vertex AI.
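
Because NIM microservices expose an OpenAI-compatible inference API, an optimised Gemma model can be queried with a standard client. The sketch below assumes NVIDIA's hosted API catalog endpoint and an illustrative model ID; a self-hosted NIM container would substitute its own local URL.

```python
# Minimal sketch: querying a Gemma NIM endpoint through NVIDIA's
# OpenAI-compatible API catalog. The model ID and environment variable name
# are assumptions; a self-hosted NIM would use its own base_url.
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://integrate.api.nvidia.com/v1",  # NVIDIA API catalog endpoint
    api_key=os.environ["NVIDIA_API_KEY"],            # assumed env var holding the key
)

response = client.chat.completions.create(
    model="google/gemma-3-27b-it",  # assumed catalog model ID; check the listing
    messages=[{"role": "user", "content": "In one line, what is a NIM microservice?"}],
    max_tokens=64,
    temperature=0.2,
)
print(response.choices[0].message.content)
```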


Robotics Moves Toward Adaptability

In robotics, Alphabet’s Intrinsic worked with NVIDIA to simplify how industrial robots are trained and deployed. Most industrial robots today rely on manual programming, a process that can be costly and inflexible.

By integrating Intrinsic Flowstate with NVIDIA Isaac Manipulator foundation models, the teams aimed to reduce development time while improving adaptability. At GTC, Intrinsic also demonstrated an early OpenUSD streaming connection between Flowstate and NVIDIA Omniverse, enabling real-time visualisation of robot workcells across platforms.
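
The Flowstate and Omniverse link builds on OpenUSD, which describes 3D scenes as stages of prims that different tools can read and stream. As a rough illustration of the format (not Intrinsic's or NVIDIA's actual workcell schema), the sketch below authors a minimal workcell stage with the pxr Python bindings; prim paths and the file name are hypothetical.

```python
# Minimal OpenUSD sketch: author a tiny "workcell" stage with the pxr bindings.
# Prim paths and the output file are hypothetical and not Intrinsic's or
# NVIDIA's actual scene schema.
from pxr import Usd, UsdGeom, Gf

stage = Usd.Stage.CreateNew("workcell.usda")            # new USD layer on disk

cell = UsdGeom.Xform.Define(stage, "/Workcell")         # root transform for the cell
robot = UsdGeom.Xform.Define(stage, "/Workcell/Robot")  # placeholder robot transform
robot.AddTranslateOp().Set(Gf.Vec3d(0.0, 0.0, 0.0))

part = UsdGeom.Cube.Define(stage, "/Workcell/Part")     # stand-in geometry for a part
part.AddTranslateOp().Set(Gf.Vec3d(0.5, 0.0, 0.1))

stage.SetDefaultPrim(cell.GetPrim())                    # consumers open at /Workcell
stage.GetRootLayer().Save()                             # write workcell.usda
```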

In parallel, NVIDIA and Google DeepMind announced a collaboration with Disney Research to develop Newton, an open-source physics engine accelerated by NVIDIA Warp and compatible with MuJoCo. The effort was expected to significantly accelerate robotics machine learning workloads.
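
Warp compiles Python-decorated kernels into GPU (or CPU) code, which is what makes a Python-friendly, accelerated physics engine like Newton practical. The sketch below is a minimal Warp example of an explicit Euler position update, intended only to illustrate the programming model, not Newton's actual solver.

```python
# Minimal NVIDIA Warp sketch: a GPU-compiled kernel performing an explicit
# Euler position update. Illustrates the Warp programming model only; this is
# not Newton's actual solver code.
import numpy as np
import warp as wp

wp.init()

@wp.kernel
def integrate(pos: wp.array(dtype=wp.vec3),
              vel: wp.array(dtype=wp.vec3),
              dt: float):
    i = wp.tid()                    # one thread per particle
    pos[i] = pos[i] + vel[i] * dt   # explicit Euler step

n = 1024
pos = wp.array(np.zeros((n, 3), dtype=np.float32), dtype=wp.vec3)
vel = wp.array(np.ones((n, 3), dtype=np.float32), dtype=wp.vec3)

wp.launch(kernel=integrate, dim=n, inputs=[pos, vel, 1.0 / 60.0])
wp.synchronize()
print(pos.numpy()[0])               # first particle position after one step
```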


AI Applied to Science and Energy

Isomorphic Labs, founded by Google DeepMind CEO Demis Hassabis, continued to apply AI to drug discovery using a drug design engine built on Google Cloud with NVIDIA GPUs. The platform was designed to support large-scale AI models aimed at advancing human health.

Meanwhile, X’s Tapestry project worked with NVIDIA to explore faster and more accurate electric grid simulations. The collaboration focused on integrating new energy sources and expanding grid capacity to meet rising demands from data centres and AI workloads while maintaining grid stability.