Hewlett Packard Enterprise (HPE) has announced a comprehensive set of new AI factory solutions, developed in collaboration with NVIDIA, to accelerate the adoption and management of AI across the entire lifecycle. Designed for enterprises, service providers, sovereign bodies, and model builders, these solutions aim to eliminate complexity and speed up deployment for modern, AI-ready data centers.
HPE’s expanded NVIDIA AI Computing by HPE portfolio includes the latest NVIDIA Blackwell GPUs and introduces modular, composable solutions to streamline AI infrastructure integration. At the heart of this portfolio is the next-generation HPE Private Cloud AI, a fully integrated AI factory designed to simplify enterprise adoption through a turnkey architecture.
Ushering in Intelligence at the Infrastructure Level
Speaking at the launch, HPE President and CEO Antonio Neri emphasized that infrastructure and data are foundational to realizing AI's promise. “Generative, agentic, and physical AI have the potential to transform global productivity and create lasting societal change—but AI is only as good as the infrastructure and data behind it,” Neri said. “HPE and NVIDIA are delivering the most comprehensive approach, combining industry-leading AI infrastructure and services to help organizations realize their ambitions and deliver sustainable business value.”
NVIDIA CEO Jensen Huang echoed this sentiment: “We are entering a new industrial era—one defined by the ability to generate intelligence at scale. Together, HPE and NVIDIA are delivering full-stack AI factory infrastructure to drive this transformation, empowering enterprises to harness their data and accelerate innovation with unprecedented speed and precision.”
Private Cloud AI: Secure, Scalable, and Enterprise-Ready
The centerpiece of the announcement is the upgraded HPE Private Cloud AI. Built on NVIDIA accelerated computing, networking, and software, it adds support for NVIDIA Blackwell GPUs through HPE ProLiant Compute Gen12 servers. These servers currently lead in more than 23 AI performance benchmarks and come equipped with secure enclaves, post-quantum cryptography, and a trusted supply chain for end-to-end security.
The new architecture introduces a federated model that enables unified resource pooling, allowing GPU resources to be shared across workloads. This ensures seamless scalability and investment protection as customers transition between GPU generations, including NVIDIA H200 NVL and the new RTX PRO 6000 Server Edition. The system also includes multi-tenancy features, air-gapped management capabilities for high-security environments, and support for the latest NVIDIA AI Blueprints, including the AI-Q Blueprint for agent creation.
Enterprises will also benefit from a new “try-and-buy” program in partnership with Equinix, allowing them to test Private Cloud AI across a global network of high-performance data centers before committing to full-scale deployment.
Addressing Service Providers and Sovereign Needs
To meet the needs of different markets, HPE has unveiled a range of validated, modular solutions. Designed with scalability in mind, they draw on more than five decades of HPE innovation in areas such as liquid cooling, with unified control provided through HPE Morpheus software.
For large-scale model builders and service providers, the solutions include the HPE ProLiant Compute XD servers, NVIDIA AI Enterprise software, air and liquid cooling capabilities, and expert advisory services. HPE is also introducing specialized sovereign AI factory configurations tailored for governments and public sector organizations. These offer features like air-gapped management, sovereign data control, and compliance-ready operational frameworks.
HPE OpsRamp Software, now validated for the NVIDIA Enterprise AI Factory, plays a critical role in providing full-stack observability across all AI factory environments, ensuring transparency, monitoring, and operational resilience.
Scaling Compute, Storage, and Data Orchestration
Complementing the compute stack is the new HPE Compute XD690, built to support eight NVIDIA Blackwell Ultra GPUs. Managed through the HPE Performance Cluster Manager, the system enables real-time infrastructure monitoring and alerting across thousands of nodes, ensuring high performance in complex environments.
On the storage front, the HPE Alletra Storage MP X10000 now includes support for the Model Context Protocol (MCP), allowing seamless processing of AI-ready unstructured data. It also integrates with the NVIDIA AI Data Platform and includes a software development kit (SDK) to streamline unstructured data pipelines for ingestion, inferencing, training, and continuous learning.
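MCP is an open, model-agnostic protocol for exposing tools and data sources to AI applications. As a rough illustration of what connecting to any MCP-capable data source looks like, the minimal sketch below uses the open-source `mcp` Python SDK; the server command (`storage_mcp_server.py`) and the `search_objects` tool name are hypothetical placeholders for illustration only, not part of HPE’s announced SDK.

```python
# Minimal sketch of a generic MCP client using the open-source `mcp` Python SDK.
# The server script and tool name below are hypothetical; substitute the MCP
# server exposed by your own data platform.
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client


async def main() -> None:
    # Hypothetical MCP server that exposes unstructured-data tools over stdio.
    server = StdioServerParameters(command="python", args=["storage_mcp_server.py"])

    async with stdio_client(server) as (read_stream, write_stream):
        async with ClientSession(read_stream, write_stream) as session:
            await session.initialize()

            # Discover which tools the server advertises (e.g. search, metadata queries).
            tools = await session.list_tools()
            print("Available tools:", [tool.name for tool in tools.tools])

            # Call a hypothetical tool by name with JSON-style arguments.
            result = await session.call_tool("search_objects", {"query": "invoices 2024"})
            print(result.content)


if __name__ == "__main__":
    asyncio.run(main())
```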
Real-World Use Cases with an Expanded AI Ecosystem
HPE’s Unleash AI ecosystem has been significantly expanded, now supporting more than 75 validated use cases. The latest additions include 26 new partners offering solutions across a wide range of domains—agentic AI, sovereign AI, smart cities, industrial automation, data governance, video analytics, and responsible AI practices. The combined ecosystem is engineered to drive real-world impact at enterprise scale.
In a move aimed at tightly regulated sectors, HPE has partnered with Accenture to develop agentic AI solutions specifically for financial services and procurement. The collaboration leverages Accenture’s AI Refinery platform, built on NVIDIA AI Enterprise, and is deployed on HPE Private Cloud AI.
HPE has already begun piloting the solution within its own finance department to support strategic sourcing, spend analysis, relationship management, and compliance tracking. The joint go-to-market initiative is expected to help financial institutions navigate AI transformation while meeting regulatory requirements.
Services and Financing to Accelerate AI Readiness
To support enterprise customers from planning to production, HPE has introduced a suite of new services covering the full AI journey—from design and deployment to operations and education. These services also include consulting around sustainability, model migration, and business value analysis.
In parallel, HPE Financial Services (HPEFS) is enabling faster onboarding with flexible financing options. Customers can take advantage of a six-month reduced payment plan for Private Cloud AI, or use existing tech assets to fund new AI projects. Lifecycle and refresh services are also available to help manage long-term infrastructure strategy.
HPE ProLiant DL380a Gen12 servers featuring NVIDIA RTX PRO 6000 Blackwell GPUs are available now, along with the latest AI factory solutions and services. The next-generation HPE Private Cloud AI with RTX PRO 6000 is expected to be released in the second half of 2025. HPE Alletra Storage MP X10000 with MCP support will also become available in H2 2025. Meanwhile, the HPE Compute XD690 is set to ship in October 2025.