Promo ends April 30, 2024
4x V100 (4x 32 GB, NVLink™), 2x Xeon Gold 6138, 384 GB RAM
Server Configuration:
GPU:
4x NVIDIA® Tesla® V100 (5120 CUDA® cores each), NVLink™
GPU RAM:
128 GB (4x 32 GB) HBM2
CPU:
2 x Intel® Xeon® Gold 6138 Processor 3.70 GHz
RAM:
384 GB RAM
SSD:
3200 GB SSD
Internal network:
10 Gbps Port
Best for:
- Deep Learning
- AI Training
- AI Inference
- HPC
OS:
- Ubuntu 18.04
- Ubuntu 20.04 Server
- Ubuntu 22.04 Server
GPU NVIDIA Tesla® V100
NVIDIA® Tesla® V100 is the most efficient GPU based on the NVIDIA® Volta architecture. NVIDIA® Tesla® V100 accelerators connected via NVLink™ technology provide a capacity of 160 Gb/s, which allows a whole host of problems to be solved, from rendering and HPC to training AI algorithms.
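As a rough illustration of what the 160 Gb/s NVLink™ capacity quoted above means in practice, the back-of-the-envelope sketch below (plain Python; the 2 GB gradient buffer is a hypothetical payload, not a figure from this page) estimates a GPU-to-GPU transfer time:

```python
# Back-of-the-envelope estimate of a GPU-to-GPU transfer over NVLink.
# Assumptions: the 160 Gb/s capacity quoted in the description above,
# and a hypothetical 2 GB gradient buffer as the payload.
NVLINK_GBIT_PER_S = 160                       # Gb/s, from the product page
nvlink_gbyte_per_s = NVLINK_GBIT_PER_S / 8    # bits -> bytes: 20 GB/s

payload_gb = 2.0                              # hypothetical gradient buffer, GB
transfer_s = payload_gb / nvlink_gbyte_per_s

print(f"~{transfer_s:.2f} s to move {payload_gb:g} GB over NVLink")
```

At that rate, synchronizing a few gigabytes of gradients between cards takes a fraction of a second, which is why NVLink-connected GPUs suit multi-GPU training.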
Tesla® V100 GPUs can be used for any purpose.
Tesla® V100 GPUs are great for:
- Deep Learning
- Rendering
- High-performance computing (HPC)
- GPU: Volta GV100 architecture
- Memory: 16/32 GB HBM2 (32 GB per card in this server)
- NVIDIA CUDA cores: 5120
- Memory Bandwidth: 900 GB/s
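For a sense of scale, the spec figures above can be combined in a short calculation (plain Python; it assumes the 32 GB-per-card configuration of this server and the 900 GB/s peak bandwidth listed above):

```python
# How long does one full sweep of a V100's HBM2 take at peak bandwidth?
# Assumptions: 32 GB per card (as configured in this server) and the
# 900 GB/s peak memory bandwidth from the spec list above.
hbm2_capacity_gb = 32       # GB per card in this configuration
hbm2_bandwidth_gb_s = 900   # GB/s, peak

sweep_s = hbm2_capacity_gb / hbm2_bandwidth_gb_s
print(f"~{sweep_s * 1000:.1f} ms to read the full card memory once")
```

In other words, a kernel that streams the entire 32 GB of card memory once is bounded at roughly 36 ms, which is the kind of headroom memory-bound deep learning and HPC workloads benefit from.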
Frequently bought together