NVIDIA DGX Spark AI Supercomputer 4TB
PRODUCT DESCRIPTION
NVIDIA DGX Spark AI Supercomputer 4TB (940-54242-0007-000) - 1 Year Local Warranty
NVIDIA DGX Spark – Enterprise-Grade AI Supercomputer
The NVIDIA DGX Spark AI Supercomputer is a compact yet powerful on-premises AI system built for enterprises, research labs, and advanced developers.
It is designed to deliver high-density AI compute for machine learning, deep learning, generative AI, and data-intensive workloads.
As part of the NVIDIA DGX platform, it provides enterprise-grade reliability, optimized performance, and a production-ready AI infrastructure in a desktop form factor.
This makes it ideal for organizations that require secure, on-site AI computing without relying solely on the public cloud.
Built for Generative AI, LLM Training, and High-Performance Inference
DGX Spark is optimized for large language model training, fine-tuning, and high-throughput AI inference.
It enables faster experimentation, shorter training cycles, and real-time AI model deployment for business-critical workloads.
With support for modern AI frameworks such as PyTorch, TensorFlow, and NVIDIA CUDA-accelerated libraries, it accelerates end-to-end AI development pipelines.
This makes it ideal for generative AI, multimodal AI, computer vision, natural language processing, and advanced data science projects.
4TB High-Speed NVMe Storage for Data-Intensive AI Workflows
The 4TB ultra-fast NVMe storage is engineered for high-speed data access, rapid dataset loading, and accelerated model checkpointing.
It allows teams to work with massive training datasets, large AI models, and continuous experimentation without I/O bottlenecks.
This improves performance across data preprocessing, training, validation, and model deployment stages.
It is ideal for organizations handling large volumes of training data, synthetic data generation, and real-time AI pipelines.
Scalable, Secure, and Ready for Enterprise AI Deployment
DGX Spark integrates seamlessly with NVIDIA AI Enterprise software, providing stability, security, and long-term production support.
It supports secure AI operations with enterprise-grade system management, monitoring, and lifecycle tools.
The system is designed for continuous 24/7 workloads with optimized cooling and stable power delivery.
It also supports scaling across multiple systems, making it suitable for growing AI teams and expanding workloads.
Ideal for On-Prem AI, Edge AI, and Confidential Workloads
DGX Spark is well suited for organizations that require local AI processing for data privacy, compliance, and intellectual property protection.
It enables on-premises AI development for regulated industries such as finance, healthcare, research, defense, and government.
It is also suitable for edge AI environments, enabling real-time inference and low-latency AI deployment at the source of data.
This makes it a strong choice for enterprises building secure, high-performance AI infrastructure without cloud dependency.
Technical Specification – NVIDIA DGX Spark AI Supercomputer (940-54242-0007-000)
Model: NVIDIA DGX Spark AI Supercomputer
GTIN: 812674029197
Category: Compact Desktop AI Supercomputer
Chip Architecture
- NVIDIA GB10 Grace Blackwell Superchip (CPU + GPU unified architecture)
- Built for advanced AI development, LLM training, generative AI, and enterprise inference workloads
CPU
- 20-core Arm CPU:
  - 10 × Cortex-X925 high-performance cores
  - 10 × Cortex-A725 efficiency cores
- Optimized for parallel compute, AI preprocessing, and mixed workload efficiency
GPU / AI Acceleration
- Integrated NVIDIA Blackwell GPU
- 5th-generation Tensor Cores (AI acceleration, FP4/FP8 optimized)
- 4th-generation RT Cores for graphics, simulation, and rendering acceleration
- Ideal for LLM workloads, multimodal AI, diffusion models, and inference pipelines
AI Performance
- Up to 1 PFLOPS FP4 AI compute
- Supports next-generation generative AI, RAG systems, HPC simulations, and enterprise fine-tuning
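The "up to 1 PFLOPS FP4" figure can be put in perspective with a quick roofline-style estimate. This is only an illustrative sketch: the 2-FLOPs-per-parameter-per-token rule of thumb, the 70B example model, and the 30% utilization factor are assumptions for the sake of arithmetic, not NVIDIA-published benchmarks.

```python
# Back-of-envelope compute budget for the "up to 1 PFLOPS FP4" spec figure.
# Model size, utilization, and the 2*N FLOPs-per-token rule are assumptions.

PEAK_FLOPS_FP4 = 1e15  # 1 PFLOPS peak FP4 throughput (from the spec above)

def flops_per_token(params: float) -> float:
    """~2 FLOPs per parameter per generated token (forward pass only)."""
    return 2.0 * params

def compute_bound_tokens_per_second(params: float, utilization: float = 0.3) -> float:
    """Theoretical token-rate ceiling at a given fraction of peak compute."""
    return PEAK_FLOPS_FP4 * utilization / flops_per_token(params)

# Hypothetical 70B-parameter model at an assumed 30% of peak
rate = compute_bound_tokens_per_second(70e9)
print(f"~{rate:,.0f} tokens/s compute-bound ceiling")
```

In practice, decode-phase throughput is usually bounded by memory bandwidth rather than raw compute, which the memory section below addresses.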
Unified Memory System
- 128GB LPDDR5X unified system memory (shared CPU–GPU)
- Memory bandwidth: up to 273 GB/s
- Unified coherent memory model via NVLink-C2C, eliminating CPU–GPU bottlenecks
- Excellent for training/fine-tuning LLMs, embeddings, and large transformer models
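The 128GB pool and 273 GB/s bandwidth translate directly into model-size and decode-speed ceilings. The sketch below is a rough sizing aid under stated assumptions: the bytes-per-parameter values, the 10% memory reserve, and the 70B example model are illustrative, not NVIDIA guidance.

```python
# Rough sizing of the 128GB unified memory pool and 273 GB/s bandwidth.
# Precision sizes, the 10% reserve, and the example model are assumptions.

MEMORY_BYTES = 128e9   # 128GB unified CPU-GPU pool (from the spec above)
BANDWIDTH_BPS = 273e9  # 273 GB/s peak memory bandwidth (from the spec above)
BYTES_PER_PARAM = {"fp16": 2.0, "fp8": 1.0, "fp4": 0.5}

def max_params_billions(precision: str, reserve: float = 0.10) -> float:
    """Largest weight-only model (billions of params) after reserving headroom."""
    return MEMORY_BYTES * (1 - reserve) / BYTES_PER_PARAM[precision] / 1e9

def bandwidth_bound_tokens_per_second(params: float, precision: str) -> float:
    """Decode is roughly memory-bound: one full weight read per token."""
    return BANDWIDTH_BPS / (params * BYTES_PER_PARAM[precision])

for p in ("fp16", "fp8", "fp4"):
    print(f"{p}: fits ~{max_params_billions(p):.0f}B params")
print(f"70B @ fp4 decode ceiling: "
      f"~{bandwidth_bound_tokens_per_second(70e9, 'fp4'):.1f} tokens/s")
```

The estimate ignores KV cache, activations, and OS overhead, so real limits are lower; it mainly shows why low-precision formats like FP4 matter on a unified-memory system of this size.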
Storage
- 4TB NVMe M.2 Self-Encrypting SSD
- High-speed read/write performance for datasets, checkpoints, and enterprise workflows
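Checkpoint latency is one concrete way NVMe speed shows up in AI workflows. The sketch below assumes a 3 GB/s sustained sequential write speed, which is a typical Gen4-class M.2 figure and not a published number for this unit; the 8B example model is likewise hypothetical.

```python
# Quick estimate of checkpoint write time on the 4TB NVMe drive.
# The 3 GB/s sustained write speed is an assumed Gen4-class figure,
# not a throughput number published in this spec.

ASSUMED_WRITE_BPS = 3e9  # assumed sustained sequential write speed (bytes/s)

def checkpoint_write_seconds(params: float, bytes_per_param: float = 2.0) -> float:
    """Seconds to flush a full FP16 weight checkpoint to disk."""
    return params * bytes_per_param / ASSUMED_WRITE_BPS

# Hypothetical 8B-parameter model: 16GB of FP16 weights
print(f"~{checkpoint_write_seconds(8e9):.1f} s per checkpoint")
```

At that rate the 4TB drive also holds on the order of 250 such checkpoints, which is why frequent-checkpoint training loops benefit from local NVMe rather than network storage.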
Networking & Interconnect
- 10GbE RJ-45 Ethernet (standard networking)
- NVIDIA ConnectX-7 SmartNIC:
  - Up to 200Gbps high-speed interconnect
  - RDMA / InfiniBand capable
  - Supports multi-node AI clusters and distributed training
- Wireless: Wi-Fi 7 + Bluetooth 5.x
I/O Ports
- 4 × USB-C (USB 3.x high-bandwidth connectivity)
- HDMI 2.1a video output
- Supports modern displays and remote-access console workflows
Video Encode / Decode
- 1 × NVENC (hardware video encoder)
- 1 × NVDEC (hardware video decoder)
- Suitable for video AI, streaming, and real-time media workloads
Software Platform
- NVIDIA DGX OS (Ubuntu LTS-based enterprise AI OS)
- Full NVIDIA AI software stack:
  - NVIDIA AI Enterprise Suite
  - CUDA Toolkit
  - TensorRT
  - PyTorch & TensorFlow optimized containers
  - NGC-certified AI containers
Cooling & Power
- Enterprise-grade active cooling system for sustained high-load performance
- External desktop-class power adapter
- Efficient thermal management for long-duration AI compute tasks
Form Factor
- Ultra-compact desktop AI supercomputer
- Dimensions: ~150 mm × 150 mm × 50.5 mm
- Weight: ~1.2 kg
- Designed for lab environments, R&D, on-prem AI, and edge AI deployment
Workload Suitability
- Primary:
  - LLM training
  - LLM fine-tuning
  - Large-scale inference
- Secondary:
  - Generative AI (image, video, multimodal)
  - Data science & vector embeddings
  - Scientific computing & HPC research
  - Enterprise AI development
Security
- Secure Boot (UEFI)
- TPM-based security and encryption protections
- Self-encrypting SSD for data-at-rest protection
Accessories & Power Cord
- Included: US Type-B power cord (factory sealed)
- UK power cord provided separately
