Supermicro launches global shipments of NVIDIA Blackwell Ultra systems and plug-and-play AI racks, offering exaFLOPS-level performance for next-gen data centers.
Supermicro begins global deployment of NVIDIA Blackwell Ultra AI racks and systems, boosting AI infrastructure with liquid cooling and plug-and-play scalability. Image: Supermicro/CH
SAN JOSE, United States — September 14, 2025:
Supermicro has officially begun worldwide bulk shipments of its NVIDIA Blackwell Ultra-powered systems and rack-scale plug-and-play solutions, marking a significant leap in next-generation AI infrastructure delivery. The company is now shipping fully integrated NVIDIA HGX B300 and GB300 NVL72 systems to customers across the globe—pre-validated at the system, rack, and data center levels for rapid, large-scale deployment.
Designed for maximum scalability and performance, these systems offer the industry’s highest compute density and are engineered to support the most demanding AI workloads—from trillion-parameter model training and real-time reasoning to multimodal inference and edge-level physical AI.
“Supermicro has an unmatched record in deploying NVIDIA technologies at scale,” said Charles Liang, President and CEO of Supermicro. “Our Data Center Building Block Solutions allow customers to build and scale AI factories faster than ever, with pre-integrated, power-efficient infrastructure ready for immediate deployment.”
The GB300 NVL72 rack-scale system delivers 1.1 exaFLOPS of FP4 compute, while the HGX B300 platforms, in 8U air-cooled and 4U liquid-cooled configurations, reach up to 144 petaFLOPS of FP4 compute per system, along with 270 GB of HBM3e memory per GPU. The systems support up to 1,400W per GPU, thanks to a hybrid cooling approach that combines advanced air cooling with Direct Liquid Cooling (DLC-2) technology.
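To relate the rack-level and system-level figures, the back-of-the-envelope check below simply divides the quoted aggregates by the publicly stated GPU counts (72 GPUs per GB300 NVL72 rack, 8 GPU packages per HGX B300 board); the results are implied averages, not official per-GPU ratings.

```python
# Back-of-the-envelope check on the quoted aggregate FP4 figures.
# The GPU counts are the commonly cited configurations; the per-GPU
# results are implied averages, not official NVIDIA per-GPU ratings.

GB300_NVL72_FP4_EXAFLOPS = 1.1   # quoted rack-scale FP4 compute
GB300_NVL72_GPU_COUNT = 72       # GPUs per NVL72 rack

HGX_B300_FP4_PETAFLOPS = 144     # quoted system-level FP4 compute
HGX_B300_GPU_COUNT = 8           # GPU packages per HGX B300 board (assumed)

per_gpu_rack = GB300_NVL72_FP4_EXAFLOPS * 1000 / GB300_NVL72_GPU_COUNT
per_gpu_hgx = HGX_B300_FP4_PETAFLOPS / HGX_B300_GPU_COUNT

print(f"Implied FP4 per GPU (GB300 NVL72): ~{per_gpu_rack:.1f} petaFLOPS")
print(f"Implied FP4 per GPU (HGX B300):    ~{per_gpu_hgx:.1f} petaFLOPS")
# The two aggregates appear to be quoted on different bases (e.g. dense
# vs. sparse FP4 throughput), so the implied per-GPU values differ.
```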
Supermicro claims its DLC-2 implementation can reduce power consumption by up to 40%, cut data center space usage by 60%, and lower water usage by 40%, translating into a 20% reduction in total cost of ownership (TCO).
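To make those percentages concrete, the sketch below applies them to a purely hypothetical deployment; the baseline power draw, floor area, and electricity price are illustrative assumptions, not Supermicro figures.

```python
# Purely illustrative: apply the quoted DLC-2 savings claims to a
# hypothetical deployment. Baseline values and the electricity price
# are assumptions for the sake of arithmetic, not vendor data.

baseline_power_kw = 1000        # hypothetical 1 MW AI deployment
baseline_floor_m2 = 500         # hypothetical floor space
power_saving = 0.40             # "up to 40%" power reduction claim
space_saving = 0.60             # "60%" space reduction claim
electricity_usd_per_kwh = 0.10  # assumed energy price

power_saved_kw = baseline_power_kw * power_saving
annual_energy_saved_kwh = power_saved_kw * 24 * 365
annual_cost_saved_usd = annual_energy_saved_kwh * electricity_usd_per_kwh

print(f"Power saved:        {power_saved_kw:.0f} kW")
print(f"Energy saved/year:  {annual_energy_saved_kwh:,.0f} kWh")
print(f"Cost saved/year:    ${annual_cost_saved_usd:,.0f}")
print(f"Floor space needed: {baseline_floor_m2 * (1 - space_saving):.0f} m² "
      f"(vs. {baseline_floor_m2} m² baseline)")
```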
These systems are built to integrate seamlessly with high-bandwidth fabrics such as NVIDIA Quantum-X800 InfiniBand and Spectrum-X Ethernet, supporting up to 800Gb/s with NVIDIA ConnectX-8 SuperNICs. Enterprises can deploy them as standalone racks or scale to full clusters using reference architectures validated for NVIDIA’s next-gen AI stack.
Supermicro’s Blackwell Ultra systems ship with full support for NVIDIA AI Enterprise, Blueprints, and NIM microservices, enabling enterprises to unlock optimal performance for AI model development, inference, and deployment—without complex custom integration.
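As an illustration of the integration path this enables, NIM microservices expose an OpenAI-compatible HTTP API; the minimal client sketch below assumes a NIM container is already running locally on port 8000 and serving a model named `meta/llama-3.1-8b-instruct` (both the endpoint and the model name are placeholders, not part of the announcement).

```python
# Minimal sketch of calling a locally hosted NIM microservice through its
# OpenAI-compatible endpoint. The base URL and model name are placeholder
# assumptions; substitute the values from your own deployment.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",  # assumed local NIM endpoint
    api_key="not-used-for-local-nim",     # local NIM typically ignores the key
)

response = client.chat.completions.create(
    model="meta/llama-3.1-8b-instruct",   # example NIM model name
    messages=[{"role": "user", "content": "Summarize what an AI factory is."}],
    max_tokens=128,
)

print(response.choices[0].message.content)
```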
With in-house design and manufacturing facilities across the U.S., Taiwan, and the Netherlands, Supermicro continues to push forward with its commitment to green computing, while delivering cutting-edge AI platforms tailored to customer-specific workloads.
As global demand for AI compute accelerates, Supermicro’s launch of the Blackwell Ultra AI Factory portfolio positions the company as a key enabler in the AI infrastructure race, helping customers rapidly deploy future-ready data centers with plug-and-play ease.