Dell Technologies and NVIDIA today updated the Dell AI Factory with integrated storage, automation, server and networking components designed to reduce complexity and accelerate enterprise AI deployments. The release pairs Dell ObjectScale and PowerScale with NVIDIA Dynamo, adds validated PowerEdge server offers, expands the Dell Automation Platform, and brings new networking and rack-scale density for high-performance workloads. Dell is also positioning professional services to run pilots that validate real customer outcomes before scaling.
Why this matters for enterprise AI adoption
Enterprises run many AI pilots, but few make it to production because stitching together hardware, software and operational processes is hard. The Dell AI Factory with NVIDIA packages tested components and automation to reduce that integration burden. The claim from Dell is straightforward: by offering validated stacks, automation and pilot services, organisations should shorten the path from concept to production-grade AI, contain infrastructure costs and reduce operational risk.
A key technical update is the integration of Dell ObjectScale and PowerScale, the storage engines in the Dell AI Data Platform, with the NIXL library from NVIDIA Dynamo. Dell says this enables a scalable KV cache and shared-storage approach that can achieve a 1-second Time to First Token (TTFT) at a full context window of 131K tokens, which it reports is 19× faster than standard vLLM implementations. The approach aims to reuse embeddings and reduce GPU memory pressure, lowering compute cost for large-context inference workloads.
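The arithmetic behind that claim is easy to reason about. The sketch below is a toy model, not Dell or NVIDIA code: the prefill rate, cache-load time and cached-prefix size are hypothetical placeholders, chosen only to show why reusing a stored KV cache instead of recomputing a 131K-token prefill collapses TTFT.

```python
# Illustrative back-of-envelope sketch (not Dell's or NVIDIA's code): how reusing a
# stored KV cache for an already-processed prefix changes Time to First Token (TTFT).
# All throughput and latency figures below are hypothetical placeholders.

def ttft_seconds(context_tokens: int,
                 cached_prefix_tokens: int,
                 prefill_tokens_per_sec: float,
                 cache_load_seconds: float) -> float:
    """TTFT = time to load the reusable KV cache + time to prefill the uncached remainder."""
    uncached = max(context_tokens - cached_prefix_tokens, 0)
    return cache_load_seconds + uncached / prefill_tokens_per_sec

CONTEXT = 131_072        # ~131K-token context window cited in the announcement
PREFILL_RATE = 7_000.0   # hypothetical prefill throughput (tokens/sec) on a given GPU
CACHE_LOAD = 0.3         # hypothetical seconds to pull a stored KV cache from shared storage

cold = ttft_seconds(CONTEXT, 0, PREFILL_RATE, 0.0)                       # recompute everything
warm = ttft_seconds(CONTEXT, CONTEXT - 4_096, PREFILL_RATE, CACHE_LOAD)  # reuse most of the prefix

print(f"cold-start TTFT : {cold:5.1f} s")
print(f"cache-hit TTFT  : {warm:5.1f} s  ({cold / warm:.0f}x faster in this toy model)")
```

With these placeholder numbers the cache-hit path lands near one second and roughly twenty times faster than a cold prefill, which is the shape of the improvement Dell is describing; real results depend entirely on model, hardware and storage throughput.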
Servers and automation: validated stack for production use
Dell expanded validated offers that pair PowerEdge XE7740/XE7745 servers with NVIDIA RTX PRO 6000 Blackwell Server Edition GPUs and NVIDIA Hopper family GPUs. These configurations target large-scale multimodal models, agentic AI and enterprise inferencing and training requirements.
The Dell Automation Platform is now extended to the Dell AI Factory. By automating validated full-stack deployments, Dell says customers can achieve repeatable outcomes, reduce configuration errors and accelerate time to value. Software-driven tools, including an AI code assistant with Tabnine and an agentic AI platform with Cohere North, are integrated to streamline the development-to-production workflow.
Rack-scale density and networking for next-gen compute
Dell announced the PowerEdge XE8712 server, which offers a rack-level design that supports very high GPU density. Dell notes the XE8712 will enable up to 144 NVIDIA Blackwell GPUs per Dell IR7000 rack and includes rack-level automation and thermal controls via iDRAC, OpenManage Enterprise and an Integrated Rack Controller.
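At rack-level GPU density, thermal telemetry becomes an operational concern in its own right. The snippet below is a minimal sketch of pulling chassis temperature readings over the DMTF Redfish API that iDRAC exposes; the hostname, credentials and exact resource path are placeholders and are not taken from the announcement, and newer firmware may expose the ThermalSubsystem schema instead.

```python
# Minimal sketch: read chassis temperature sensors from an iDRAC via Redfish.
# Host, credentials and resource path are placeholders for illustration only.
import requests

IDRAC = "https://idrac.example.com"   # hypothetical iDRAC address
AUTH = ("monitor_user", "password")   # hypothetical read-only credentials

resp = requests.get(
    f"{IDRAC}/redfish/v1/Chassis/System.Embedded.1/Thermal",
    auth=AUTH,
    verify=False,   # lab-only shortcut; use the iDRAC's CA certificate in production
    timeout=10,
)
resp.raise_for_status()

# Print each temperature sensor the chassis reports (inlet, exhaust, GPU, CPU, etc.).
for sensor in resp.json().get("Temperatures", []):
    print(f'{sensor.get("Name", "?"):30s} {sensor.get("ReadingCelsius")} °C')
```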
On networking, Enterprise SONiC Distribution by Dell Technologies now supports NVIDIA Spectrum-X platforms alongside Cumulus OS. SmartFabric Manager will extend to Dell’s Enterprise SONiC on Spectrum-X, aiming to simplify network setup and reduce deployment time with fewer manual steps. These elements are intended to help enterprises deploy hyperscale-like networking in multi-vendor, standards-based environments.
Dell validated Red Hat OpenShift for the Dell AI Factory on more PowerEdge platforms, including the PowerEdge R760xa and PowerEdge XE9680 with NVIDIA H100 and H200 Tensor Core GPUs. The broader validation aims to help enterprises operationalise AI with controls, governance and containerised platforms. Dell also extended AI PC support to include NVIDIA RTX Blackwell and RTX Ada GPUs to offer more silicon choices across endpoints.
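Once OpenShift is running on GPU-equipped PowerEdge nodes, operators typically want to confirm that GPU capacity is actually schedulable. The following is a minimal sketch under the assumption that the NVIDIA GPU Operator (or device plugin) is installed, so nodes advertise an "nvidia.com/gpu" allocatable resource; nothing in it is Dell-specific.

```python
# Minimal sketch: list nodes that advertise allocatable NVIDIA GPUs on an
# OpenShift/Kubernetes cluster. Assumes the NVIDIA device plugin is installed
# and a local kubeconfig grants read access to node objects.
from kubernetes import client, config

config.load_kube_config()   # or config.load_incluster_config() when running in a pod
v1 = client.CoreV1Api()

for node in v1.list_node().items:
    gpus = node.status.allocatable.get("nvidia.com/gpu", "0")
    if gpus != "0":
        print(f"{node.metadata.name}: {gpus} allocatable GPU(s)")
```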
Dell Professional Services will run turnkey pilots using customer data to validate use cases and KPIs before larger investments. These expert-led pilots are positioned to demonstrate business value and reduce the time and uncertainty associated with building proof-of-concept projects into production systems.
Jeff Clarke, vice chairman and chief operating officer, Dell Technologies, said: “The Dell AI Factory with NVIDIA solves the problem every enterprise is facing: how to move from AI pilots to production without rebuilding their infrastructure. We've done the integration work so customers don't have to, which means they can deploy faster and scale with confidence.”
Justin Boitano, vice president, Enterprise AI products, NVIDIA, said: “Enterprise AI is shifting from experimentation to transformation—advancing at unprecedented speed and redefining how businesses operate. Together, Dell and NVIDIA are driving this evolution with a fully integrated platform that unites advanced infrastructure, intelligent automation, and powerful data engines to help organisations deploy AI at scale and realise measurable impact.”
Considerations to keep in mind
Performance in production: Look for independent benchmarks and customer case studies that validate the 1-second TTFT claim and cost improvements (a simple measurement sketch follows this list).
Pilot outcomes: Early customer pilots and measurable KPIs will indicate whether automation and validated stacks reduce time to production.
Operational complexity: The practical burden of managing rack-scale systems, thermal constraints and lifecycle updates will test the promise of simplified deployment.
Network and software interoperability: Enterprises will evaluate whether SONiC and OpenShift integrations truly reduce vendor lock-in and accelerate orchestration.
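For teams that want to check TTFT themselves, the sketch below measures time to the first streamed chunk from an OpenAI-compatible streaming endpoint, which is the interface vLLM serves by default. The URL and model name are placeholders; a real benchmark would repeat this across many prompts and context lengths and report percentiles rather than a single reading.

```python
# Minimal sketch: measure Time to First Token (TTFT) against an OpenAI-compatible
# streaming completions endpoint. Endpoint URL and model name are placeholders.
import time
import requests

URL = "http://inference.example.com:8000/v1/completions"   # hypothetical endpoint
payload = {
    "model": "my-model",                                    # placeholder model name
    "prompt": "Summarise the quarterly report:",
    "max_tokens": 64,
    "stream": True,
}

start = time.perf_counter()
with requests.post(URL, json=payload, stream=True, timeout=120) as resp:
    resp.raise_for_status()
    for line in resp.iter_lines():
        if line and line != b"data: [DONE]":
            ttft = time.perf_counter() - start   # first streamed chunk ~ first token
            print(f"TTFT: {ttft:.3f} s")
            break
```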
Availability and timeline
Dell says workload blueprints are in tech preview now, services pilots are globally available, and several hardware updates are available immediately. The PowerEdge XE8712 server will ship in December, while broader Enterprise SONiC and SmartFabric Manager support on NVIDIA Spectrum-X will be available in the first half of 2026.
Dell’s updates target a clear enterprise need: reduce the friction between AI experimentation and production scale. By combining validated storage strategies, server and networking density, automation, and services, Dell and NVIDIA are betting enterprises will prefer integrated stacks that reduce engineering overhead. The key test will be customer outcomes in the field, where performance, operational simplicity and demonstrable ROI must align to justify large-scale AI investments.