NVIDIA DGX Spark Setup and Enterprise Deployment Guide
Posted by Wei Fei
Step-by-step enterprise guide to NVIDIA DGX Spark hardware, setup, SSH access, Docker GPU configuration, and AI deployment for Singapore organisations.
AI teams today require compact yet powerful systems that can run advanced models locally without relying entirely on cloud GPUs.
NVIDIA DGX Spark is purpose-built for AI development, fine-tuning, and inference in a desktop-class form factor. This guide follows the practical lifecycle of the system from hardware overview and unboxing to system validation, SSH access, and Docker GPU configuration, tailored for enterprise users in Singapore.
1. DGX Spark Hardware Specifications
DGX Spark is engineered as a compact AI workstation measuring approximately 150 mm × 150 mm × 50.5 mm and weighing around 1.2 kg.
Despite its size, it delivers enterprise-grade AI capability.
Core Architecture
- Powered by the NVIDIA GB10 Grace Blackwell Superchip
- NVIDIA Grace 20-core Arm CPU
- Blackwell-based GPU acceleration
- 128GB unified coherent memory shared between CPU and GPU
Unified memory eliminates traditional host-to-device copying, improving performance efficiency for large AI workloads.
Performance and Scalability
- Supports local inference for models up to 200B parameters
- Two interconnected units can scale up to approximately 405B parameters
- NVIDIA ConnectX high-performance networking for direct interconnect
- Wi-Fi 7 and enterprise network support
- Up to 4TB NVMe local storage
- Expandable via USB Type-C external storage
For Singapore enterprises handling sensitive datasets, local inference reduces dependency on external cloud environments.
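As a back-of-envelope check on the 200B-parameter figure, the weight footprint can be estimated in a few lines of shell. This sketch assumes weights quantized to roughly 4 bits (about 0.5 bytes per parameter), which is our assumption rather than a stated specification:

```shell
# Rough estimate of model weight footprint in unified memory.
# Assumption: ~4-bit quantized weights, i.e. about 0.5 bytes per parameter.
params=200000000000            # 200B parameters
half_bytes=$(( params / 2 ))   # 0.5 bytes per parameter
gb=$(( half_bytes / 1000000000 ))
echo "Approximate weight footprint: ${gb} GB"
```

At roughly 100 GB, such a model leaves headroom within the 128GB unified memory for activations, KV cache, and runtime overhead.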
2. Unboxing & Device Connections
What’s Included
- 1 × DGX Spark unit
- 1 × AC power cable
- 1 × USB-C power adapter
- 1 × Quick start documentation

The system supports two operating modes:
- Standalone mode with display and peripherals
- Headless mode managed over network
This section outlines standalone deployment.
Standalone Setup Steps
- Connect an HDMI monitor
- Connect USB or Bluetooth keyboard and mouse
- Connect the power adapter using the designated USB-C port
- Power on the system
If no input device is detected, the system prompts for Bluetooth pairing.

Connecting Two Units
For larger model support, two DGX Spark units can be interconnected using supported high-speed networking.
This configuration is suitable for advanced AI research labs and government innovation units.

3. First Boot & Initial Configuration
Upon first startup, a guided setup wizard appears.

Configuration Flow
1. Select language and timezone
2. Choose keyboard layout
3. Accept software license terms
4. Create user account credentials
5. Decide on optional telemetry participation
6. Connect to Wi-Fi or Ethernet
Once internet connectivity is established, the system downloads and installs the official software image.

Do not interrupt power during this process, as multiple automated reboots occur.

After completion, the system presents the DGX Spark desktop environment.
4. Checking System Information
After login, open Terminal using Ctrl + Alt + T to validate system configuration.
4.1 CPU Verification
Run: lscpu
This confirms Grace CPU architecture and core configuration.

4.2 Disk Partitions & Capacity
Run: lsblk
Reported capacity may appear slightly below 4TB because lsblk reports in binary units (TiB), while drive capacity is marketed in decimal terabytes; some space is also reserved for system partitions.

4.3 GPU Information
Run: nvidia-smi
This displays GPU status, driver version, and memory usage.

4.4 Docker Engine Version
Run: docker -v
Confirms Docker installation.

4.5 CUDA Version
Run: nvcc -V
Validates CUDA toolkit version.

For enterprise IT teams, documenting these outputs supports internal asset compliance and configuration management.
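The checks above can be gathered into a single script for those asset records. A minimal sketch (the report filename is our own choice), which notes any tool that happens to be unavailable rather than failing:

```shell
#!/usr/bin/env bash
# Capture baseline system information into a timestamped report file.
# Each command falls back to a note if the tool is unavailable.
set -u
report="dgx-spark-baseline-$(date +%Y%m%d).txt"
{
  echo "== CPU ==";     lscpu      2>/dev/null || echo "lscpu not available"
  echo "== Storage =="; lsblk      2>/dev/null || echo "lsblk not available"
  echo "== GPU ==";     nvidia-smi 2>/dev/null || echo "nvidia-smi not available"
  echo "== Docker ==";  docker -v  2>/dev/null || echo "docker not available"
  echo "== CUDA ==";    nvcc -V    2>/dev/null || echo "nvcc not available"
} > "$report"
echo "Baseline written to $report"
```

Archiving the resulting file per unit gives IT teams a point-in-time configuration record for audits.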
5. SSH Remote Access
DGX Spark runs a standard Ubuntu-based environment with OpenSSH server enabled.
Step 1: Identify IP Address
Run: ip addr
Step 2: Connect via SSH Client
Use an SSH client such as OpenSSH or PuTTY.
- Enter IP address
- Default port 22
- Authenticate with configured username and password

Once connected, administrators can manage the system remotely.
This is particularly relevant for university AI labs, government AI sandboxes, and enterprise development clusters in Singapore.
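For repeat access, OpenSSH users can store the connection details in a client-side config entry. A minimal sketch, where the host alias, username, and address are placeholders for your own values:

```
# ~/.ssh/config — placeholder values; substitute your own user and IP
Host dgx-spark
    HostName 192.168.1.50
    User dgxuser
    Port 22
```

With this entry in place, the session opens with: ssh dgx-spark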
6. Docker Configuration
6.1 NVIDIA Container Runtime
DGX Spark includes NVIDIA Container Runtime and NVIDIA Container Toolkit pre-installed.
This enables GPU acceleration inside Docker containers without manual driver setup.
Key advantages:
- Direct GPU access within containers
- Automatic CUDA library integration
- Multi-GPU workload support
- Compatibility with orchestration platforms
This is essential for AI and ML teams using containerised pipelines.
6.2 Docker User Permission Configuration
By default, Docker commands require elevated privileges.
To allow a standard user to run Docker commands without sudo, add the user to the docker group, then refresh group membership in the current session:
Run: sudo usermod -aG docker $USER
Then run: newgrp docker
This improves operational convenience while maintaining system governance.
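Once the user has been added to the docker group and the session refreshed, the membership can be confirmed with a quick check:

```shell
# List the current user's group memberships.
# "docker" should appear once the user has been added and the session refreshed.
id -nG
```

If "docker" appears in the list, Docker commands no longer require sudo for that user.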
6.3 GPU Validation Inside Docker
To test GPU functionality within a container, run a supported AI container image, such as a PyTorch environment from the NVIDIA NGC catalogue.
Inside the container, execute:
nvcc -V
If the output reports the expected CUDA version, GPU passthrough and CUDA integration are functioning correctly.
For enterprise AI engineers, this confirms container-ready GPU acceleration.
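One way to script that check is to assemble the validation command as below. This is a sketch only: the NGC image tag shown is an assumption, so confirm the current PyTorch release tag in the NGC catalogue before use.

```shell
# Build the in-container CUDA validation command (sketch).
# "--gpus all" exposes the GPU via the NVIDIA container runtime;
# "--rm" removes the container after the check completes.
# The image tag is an assumption — check the NGC catalogue for the current release.
image="nvcr.io/nvidia/pytorch:24.09-py3"
validate_cmd="docker run --rm --gpus all ${image} nvcc -V"
echo "Run on the DGX Spark: ${validate_cmd}"
```

Running the assembled command on the unit itself should print the CUDA toolkit version from inside the container, confirming passthrough end to end.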
Enterprise Deployment Considerations
Before procurement, Singapore organisations should evaluate:
- Model size requirements
- Data sovereignty and PDPA compliance
- Power and space constraints
- Integration with CI/CD pipelines
- Kubernetes or container orchestration needs
- Future scaling via dual-unit configuration
DGX Spark is particularly suitable for AI prototyping teams, secure inference environments, education AI curriculum labs, pre-sales demonstration units, and government research initiatives.
Frequently Asked Questions
1. Can DGX Spark be used for model training?
It supports development and fine-tuning workloads. Large-scale distributed training may require higher-tier infrastructure.
2. Is it cloud-dependent?
No. It operates fully on-premises but can complement hybrid cloud strategies.
3. Can it integrate into enterprise networks?
Yes. It supports Ethernet, Wi-Fi 7, and SSH-based remote administration.
4. Does it require external GPU configuration?
No. NVIDIA Container Runtime and CUDA are pre-configured.
5. Is dual-unit scaling complex?
Interconnection is supported via high-performance networking but should be architected properly for enterprise environments.
6. Is it suitable for government projects?
Yes. Local deployment supports secure and compliant AI experimentation.
Deploy NVIDIA DGX Spark with SourceIT
SourceIT supports enterprise and public sector AI infrastructure projects across Singapore.
Our team provides technical consultation, architecture guidance, bulk procurement support, and enterprise integration assistance.
Discuss your AI infrastructure strategy with our specialists today.
