Anthropic and Salesforce have broadened their collaboration to position Claude as a “trusted” model for regulated industries, the companies said. The multi-pronged tie-up makes Claude a preferred foundational model inside Salesforce’s Agentforce platform, brings Claude Code to Salesforce’s engineering teams, and deepens Claude’s use across Slack with an explicit focus on healthcare, financial services, cybersecurity, and life sciences.
The deal signals a shift in how enterprise buyers will evaluate agentic AI: not only on raw capability but on whether the model can run inside a vendor-controlled trust boundary and meet stringent data and compliance requirements.
Anthropic will be the first large language model provider fully integrated within Salesforce’s trust boundary, with Claude’s traffic contained in Salesforce’s virtual private cloud. That design is central to the partnership’s promise: regulated firms can leverage generative AI while keeping sensitive data inside a controlled environment.
“Regulated industries need frontier AI capabilities, but they also need the appropriate safeguards before they can deploy in sensitive systems. We’ve built Claude to deliver both the performance and the safeguards,” said Dario Amodei, CEO and Co-Founder, Anthropic.
For enterprises handling regulated data, the ability to constrain model inference and logging inside a trusted environment addresses two immediate concerns: data residency and auditability. That makes the collaboration attractive to compliance teams that otherwise baulk at third-party API calls across public infrastructure.
Industry-Specific Agents
A central plank of the expanded partnership is industry-focused AI solutions. The companies want to adapt Claude to domain specifics — for example, finance agents that understand instruments and consent mechanisms, or healthcare agents that can summarise clinical notes while preserving audit trails.
“Because of Anthropic on Amazon Bedrock and Agentforce, we're able to help our advisors with their most time-consuming task: meeting prep. This has saved them significant time, allowing them to focus on what matters most—client relationships,” said Rohit Gupta, Head of Digital Advisor Platforms, RBC Wealth Management, whom the companies cite as an existing user of Claude in Agentforce.
That use case captures the promise: agents that stitch CRM context, live industry updates and compliance checks into a single workflow, replacing manual research and document juggling.
Making Agentic Workflows Practical
Anthropic and Salesforce are deepening Claude’s integration with Slack via the Model Context Protocol (MCP) server. That allows Claude to access channels, messages and files to produce summaries, extract decisions and draft updates — effectively turning chat threads into action-ready insights.
The Slack tie-in aims to reduce friction between conversation and execution: instead of copy-pasting context across tools, employees can invoke Claude inside Slack and move from discussion to task in one flow.
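Under the hood, MCP clients talk to servers over JSON-RPC 2.0, invoking server-exposed tools through a `tools/call` method. The sketch below shows what such a request envelope looks like; the tool name and arguments are hypothetical placeholders for illustration, as the actual Slack MCP server defines its own tool schema.

```python
import json

def build_tool_call(request_id: int, tool_name: str, arguments: dict) -> str:
    """Serialize an MCP tools/call request as a JSON-RPC 2.0 message.

    The tool name and arguments are supplied by the caller; a real Slack
    MCP server would publish its available tools via tools/list.
    """
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    })

# Hypothetical example: ask a Slack-backed MCP server to summarise a channel.
request = build_tool_call(1, "summarize_channel",
                          {"channel": "#deal-desk", "days": 7})
print(request)
```

The point of the protocol layer is that Claude never scrapes Slack directly; it calls named tools whose inputs and outputs the server controls, which is what makes the access auditable.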
Developer Productivity: Claude Code at Scale
Salesforce will deploy Claude Code across its engineering org to help developers build and ship faster. Anthropic, meanwhile, will increase its use of Slack internally — a two-way validation that each partner will run the other’s tools at scale.
“Salesforce and Anthropic share a vision for a trusted AI ecosystem that puts customers at the center,” said Marc Benioff, Chair and CEO, Salesforce. The companies say Agentforce powered by Anthropic is available today for select customers, while broader industry integrations are under development.
The partnership answers several enterprise questions, but it also highlights trade-offs. Operating a model inside a vendor's trust boundary mitigates part of the third-party risk, yet it raises questions of vendor lock-in and of how businesses preserve model and data portability. Industry teams will also need visibility into model training, logging, retention policies and red-team testing, all areas that are still maturing across the AI vendor landscape.
For regulated industries, the calculus will come down to weighing productivity gains against legal and audit overhead. The Claude-Salesforce model, which pairs agentic capabilities with a controlled deployment environment and domain-specific agents, offers a more practical path. Yet CIOs and compliance officers will still want clear SLAs, breach protocols and visibility into how the model handles edge cases.
Anthropic and Salesforce are packaging agentic AI for the real world, not as a demo, but as a controlled, enterprise-grade workflow service. For regulated industries, the move could accelerate adoption, provided buyers get the transparency and controls they need to trust a new class of AI agents.