How Snowflake and Anthropic Are Taking Agentic AI Into Enterprises

Snowflake's $200M Anthropic deal embeds Claude AI in governed data, tackling trust in regulated sectors. Baris Gultekin explains the path of governance-first AI from pilots to production.

Manisha Sharma

Enterprise AI is entering a more demanding phase. The debate has shifted from whether generative AI works to whether it can be trusted, particularly inside banks, healthcare providers, and other regulated environments where data errors carry real consequences.


That shift underscores the significance of Snowflake and Anthropic’s expanded, nine-figure partnership. The $200 million, multi-year deal brings Anthropic’s Claude models deeper into Snowflake’s AI Data Cloud, positioning agentic AI not as an experimental layer but as an operational capability embedded directly within governed enterprise data environments.

For Snowflake, the strategic bet is clear. The future of enterprise AI will be shaped less by who builds the largest model and more by who can deploy AI safely, transparently, and at production scale.

In an interaction with CiOL, Baris Gultekin, VP of AI, Snowflake, outlined why governance, explainability, and data architecture, rather than model performance alone, are emerging as the key fault lines in real-world enterprise AI adoption.

Interview Excerpts:

The $200M Snowflake-Anthropic partnership promises to embed Claude agents across enterprise data. What unique challenges have you faced in integrating autonomous AI into highly regulated environments like BFSI or healthcare?

Bringing AI into highly regulated industries like financial services and healthcare requires an intentional approach where trust and governance are at the forefront. These industries face unique challenges: fragmented data, stringent compliance requirements, and the need for airtight security and governance.

That’s exactly why our strategy is to bring the AI to the data, not the other way around. Through Snowflake Cortex AI, Claude operates inside the customer’s existing security boundary with the full RBAC, lineage, masking policies, and auditability that Snowflake Horizon Catalog provides.

In industries where a single misstep can have regulatory consequences, enterprises cannot afford to introduce additional risks associated with moving data to disparate AI tools. By unifying structured and unstructured data, enforcing governance automatically at every step, and ensuring that every insight is traceable back to source, Snowflake enables banks, insurers, and healthcare providers to adopt AI safely, confidently, and at production scale.
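The "bring the AI to the data" pattern described above can be sketched in miniature: a masking policy is enforced inside the data boundary before any value is ever placed in a model prompt. The names below (`MaskingPolicy`, `apply_policy`, the sample rows) are hypothetical illustrations of the idea, not Snowflake's actual APIs:

```python
# Illustrative sketch (not Snowflake's API): redact governed columns
# inside the security boundary before any text reaches a model.
from dataclasses import dataclass


@dataclass
class MaskingPolicy:
    masked_columns: set  # columns whose values must be redacted for this role


def apply_policy(rows, policy):
    """Return copies of the rows with governed columns redacted."""
    return [
        {col: ("***MASKED***" if col in policy.masked_columns else val)
         for col, val in row.items()}
        for row in rows
    ]


patients = [{"name": "A. Rao", "diagnosis": "flu", "region": "EU"}]
policy = MaskingPolicy(masked_columns={"name"})

safe_rows = apply_policy(patients, policy)
# Only the masked view is ever interpolated into a model prompt.
prompt = f"Summarise trends in: {safe_rows}"
```

The design point is that masking happens before prompt construction, so raw identifiers never cross into the model layer regardless of what the model later does with the text.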

AI claims multi-step reasoning over structured and unstructured data. Can you share a concrete example where the AI’s recommendations materially changed a business decision, and how the outcome was measured?

One of the clearest places we’re seeing AI reshape outcomes is in how business teams use Snowflake Intelligence to unlock insights that were previously buried in dashboards or documents.

One real-world example is Toyota Motor Europe. Their product planners were working across 100+ fragmented systems and had spent months trying to build a custom AI solution to make sense of it. When they moved to Snowflake Intelligence, product planners gained the ability to query structured and unstructured data—vehicle telemetry, sales, warranty data, customer feedback—in natural language and translate it into actionable insights that improve real cars on the road. These insights directly inform decisions around product design and customer-driven feature prioritisation.

Another example is Fanatics, which accelerated the speed and quality of its merchandising, marketing, and advertising decisions by putting FanGraph—which includes billions of daily fan interactions—behind a conversational AI interface in Snowflake Intelligence. Snowflake Intelligence enables employees to analyse billions of structured data points (transactions, browsing, conversions) plus unstructured data points (comments, session text, sentiment markers) in natural language. This reduces months of segmentation and data modelling work to days, enabling faster product decisions, sharper fan targeting, and entirely new monetisation streams through their advertising business.

Across our customer base, we see countless stories just like these. When AI can reason over both structured and unstructured business data together within a governed environment, it doesn’t just make people more productive; it materially improves the quality of business decisions.

Enterprise AI adoption often falters due to organisational or data constraints. Based on your deployments, what are the unseen bottlenecks preventing full-scale production use, and how are they addressed?

Most enterprise AI programmes don’t stall because of the models themselves. More often, they stall because organisations underestimate the operational complexity of their data. Fragmented systems, inconsistent semantics, and a lack of unified governance create a hidden tax that makes AI unreliable at scale. Without high-quality retrieval and a shared semantic layer, even the best model will hallucinate.

This is why Snowflake Cortex AI integrates retrieval, orchestration, governance, and evaluation directly into the platform. When those fundamentals are solved, enterprises can move from pilots to production in a matter of weeks, not quarters.
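The "shared semantic layer" point above can be made concrete with a toy sketch: if every business term resolves to a single governed definition, a query generator cannot drift between teams' conflicting notions of, say, "revenue". Everything here (`SEMANTIC_LAYER`, `resolve_metric`, the definitions) is a hypothetical illustration of the concept, not a Snowflake interface:

```python
# Hypothetical sketch of a shared semantic layer: each business term has
# exactly one governed definition, so generated queries can't improvise.
SEMANTIC_LAYER = {
    "revenue": "SUM(order_total - refunds)",
    "active_customer": "last_purchase_date >= DATEADD(day, -90, CURRENT_DATE)",
}


def resolve_metric(term: str) -> str:
    """Look up the single governed definition of a business term."""
    try:
        return SEMANTIC_LAYER[term]
    except KeyError:
        # Refusing to guess is the anti-hallucination property:
        # an undefined term is an error, not an invitation to invent SQL.
        raise ValueError(f"'{term}' has no governed definition")


sql = f"SELECT {resolve_metric('revenue')} FROM orders"
```

The failure mode matters as much as the happy path: an ungoverned term raises an error rather than letting the generator fabricate a plausible-looking expression.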

With AI agents mediating critical knowledge and decisions, accountability and governance become vital. How are Snowflake and Anthropic ensuring explainability, auditability, and regulatory compliance in real-world deployments?

When AI agents are mediating real business decisions, the bar for accuracy and trust is dramatically higher. Snowflake and Anthropic share a belief that trust is a cornerstone of enterprise AI, and we’ve anchored our partnership around delivering on this promise.

Claude runs entirely within Snowflake’s governed environment, meaning customer data never leaves their security perimeter, is never used to train shared models, and is always subject to their existing RBAC, masking, and access policies.

Additionally, every agent response in Snowflake Intelligence is traceable: we show the SQL generated, the documents referenced, and the reasoning pathway. Our Agent GPA evaluation framework helps customers monitor quality, detect hallucinations, and meet regulatory standards for explainability. For enterprises, this level of transparency is table stakes for deploying AI agents in production with confidence.
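The traceability requirement described above can be sketched as a data shape: every answer travels with the SQL it ran and the documents it cited, so an auditor can replay the chain. The structure below (`TracedResponse`, `audit_record`, the sample values) is a hypothetical illustration of the pattern, not Snowflake's actual response format:

```python
# Hypothetical shape of a traceable agent response: the answer is never
# separated from the evidence that produced it.
from dataclasses import dataclass, field


@dataclass
class TracedResponse:
    answer: str
    generated_sql: str
    cited_documents: list = field(default_factory=list)

    def audit_record(self) -> dict:
        """Flatten the trace into a loggable, reviewable record."""
        return {
            "answer": self.answer,
            "sql": self.generated_sql,
            "citations": self.cited_documents,
        }


resp = TracedResponse(
    answer="Q3 churn rose 2% in the EU segment.",
    generated_sql="SELECT region, churn_rate FROM churn WHERE quarter = 'Q3'",
    cited_documents=["q3_board_memo.pdf"],
)
record = resp.audit_record()
```

Because the evidence is part of the response object itself, downstream logging and compliance review get it for free rather than reconstructing it after the fact.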

The partnership is positioned as a co-innovation initiative. Beyond marketing, how are responsibilities, risks, and rewards shared between Snowflake and Anthropic, and what lessons have emerged from this collaboration that the broader industry can learn from?

The central goal of this partnership is to help our joint enterprise customers deploy AI agents at scale. Snowflake brings the governed data foundation, the retrieval and orchestration layer, and the enterprise distribution. Anthropic brings frontier reasoning models purpose-built for safety, accuracy, and reliability.

Snowflake ensures that every model operates within a secure, observable, enterprise-grade environment; Anthropic ensures that Claude continues to advance the frontier of transparent, multi-step reasoning. The output for customers is a platform where they can iterate quickly, deploy confidently, and scale AI without compromising trust.

One of the biggest lessons we’ve learnt together is that the AI race won’t be won by the biggest model; it will be won by the platform that delivers trusted, context-rich reasoning directly where enterprise data lives. That’s the value Snowflake and Anthropic are jointly creating, and it’s resonating across every regulated industry we serve.