NVIDIA announces TESLA V100 with 5120 CUDA cores

Published: May 10th 2017, 16:00 GMT

NVIDIA has just announced its first Volta-based compute card, the Tesla V100.

TESLA V100 has 5120 CUDA Cores

The world's first 12 nm FFN GPU has just been announced by Jensen Huang at GTC 2017. The new Tesla features second-generation NVLink with a bandwidth of 300 GB/s, and 16 GB of HBM2 memory operating at 900 GB/s.

The card is powered by the new Volta GPU, which features 5120 CUDA cores and 21 billion transistors. It is the biggest GPU ever made, with a die size of 815 mm².

Volta GV100 features a new type of computing core called the Tensor Core, designed for the matrix arithmetic at the heart of deep learning workloads.
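Each Tensor Core performs a fused multiply-add on small matrix tiles (D = A×B + C) with FP16 inputs and FP32 accumulation. As a rough illustration of how this is exposed to developers, here is a minimal sketch using the WMMA API introduced with CUDA 9 for Volta; the kernel name and the single 16×16×16 tile layout are illustrative assumptions, not part of NVIDIA's announcement.

```cuda
#include <cuda_fp16.h>
#include <mma.h>
using namespace nvcuda;

// Illustrative kernel: one warp computes D = A*B + C on a single
// 16x16x16 tile using Tensor Cores (FP16 inputs, FP32 accumulation).
__global__ void wmma_16x16x16(const half *a, const half *b, float *d) {
    wmma::fragment<wmma::matrix_a, 16, 16, 16, half, wmma::row_major> a_frag;
    wmma::fragment<wmma::matrix_b, 16, 16, 16, half, wmma::col_major> b_frag;
    wmma::fragment<wmma::accumulator, 16, 16, 16, float> c_frag;

    wmma::fill_fragment(c_frag, 0.0f);               // start with C = 0
    wmma::load_matrix_sync(a_frag, a, 16);           // load A tile (leading dim 16)
    wmma::load_matrix_sync(b_frag, b, 16);           // load B tile
    wmma::mma_sync(c_frag, a_frag, b_frag, c_frag);  // D = A*B + C on Tensor Cores
    wmma::store_matrix_sync(d, c_frag, 16, wmma::mem_row_major);
}
```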

Jensen Huang said that the development of Tesla V100 cost 3 billion dollars.

NVIDIA Tesla V100 vs Tesla P100

VideoCardz.com               NVIDIA Tesla V100      NVIDIA Tesla P100
Die Size                     815 mm²                610 mm²
FP32 Computing Performance   15.0 TFLOPS            10.6 TFLOPS
CUDA Cores                   5120                   3584
Core Clock                   1455 MHz               1480 MHz
Memory Type                  4096-bit 16 GB HBM2    4096-bit 16 GB HBM2
Interface                    NVLink 2.0             NVLink 1.0 / PCI-e 3.0
Memory Bandwidth             900 GB/s               720 GB/s
TDP                          300 W                  300 W

Key features of Tesla V100:

  • New Streaming Multiprocessor (SM) Architecture Optimized for Deep Learning
  • Second-Generation NVLink™
  • HBM2 Memory: Faster, Higher Efficiency
  • Volta Multi-Process Service
  • Enhanced Unified Memory and Address Translation Services
  • Cooperative Groups and New Cooperative Launch APIs (see the sketch after this list)
  • Maximum Performance and Maximum Efficiency Modes
  • Volta Optimized Software
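
Cooperative Groups, mentioned in the feature list above, is the CUDA 9 programming model for defining and synchronizing groups of threads at flexible granularities. As a hypothetical sketch of how a kernel might use it (the kernel, its arguments, and the per-warp reduction pattern are illustrative, not taken from NVIDIA's materials):

```cuda
#include <cooperative_groups.h>
namespace cg = cooperative_groups;

// Illustrative reduction: each 32-thread tile sums its own values and
// contributes one partial sum to the global result (1D launch assumed).
__global__ void tile_reduce(const float *in, float *out) {
    cg::thread_block block = cg::this_thread_block();
    cg::thread_block_tile<32> tile = cg::tiled_partition<32>(block);

    float v = in[block.group_index().x * block.size() + block.thread_rank()];

    // Halve the number of active lanes each step via intra-tile shuffles.
    for (int offset = tile.size() / 2; offset > 0; offset /= 2)
        v += tile.shfl_down(v, offset);

    if (tile.thread_rank() == 0)
        atomicAdd(out, v);  // one partial sum per 32-thread tile
}
```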
NVIDIA Tesla

Tesla Product                Tesla K40        Tesla M40         Tesla P100       Tesla V100
GPU                          GK110 (Kepler)   GM200 (Maxwell)   GP100 (Pascal)   GV100 (Volta)
SMs                          15               24                56               80
TPCs                         15               24                28               40
FP32 Cores / SM              192              128               64               64
FP32 Cores / GPU             2880             3072              3584             5120
FP64 Cores / SM              64               4                 32               32
FP64 Cores / GPU             960              96                1792             2560
Tensor Cores / SM            NA               NA                NA               8
Tensor Cores / GPU           NA               NA                NA               640
GPU Boost Clock              810/875 MHz      1114 MHz          1480 MHz         1455 MHz
Peak FP32 TFLOP/s*           5.04             6.8               10.6             15
Peak FP64 TFLOP/s*           1.68             2.1               5.3              7.5
Peak Tensor Core TFLOP/s*    NA               NA                NA               120
Texture Units                240              192               224              320
Memory Interface             384-bit GDDR5    384-bit GDDR5     4096-bit HBM2    4096-bit HBM2
Memory Size                  Up to 12 GB      Up to 24 GB       16 GB            16 GB
L2 Cache Size                1536 KB          3072 KB           4096 KB          6144 KB
Shared Memory Size / SM      16/32/48 KB      96 KB             64 KB            Configurable up to 96 KB
Register File Size / SM      256 KB           256 KB            256 KB           256 KB
Register File Size / GPU     3840 KB          6144 KB           14336 KB         20480 KB
TDP                          235 W            250 W             300 W            300 W
Transistors                  7.1 billion      8 billion         15.3 billion     21.1 billion
GPU Die Size                 551 mm²          601 mm²           610 mm²          815 mm²
Manufacturing Process        28 nm            28 nm             16 nm FinFET+    12 nm FFN

NVIDIA Volta GV100

With Tesla V100, NVIDIA introduces the GV100 graphics processor. This is the biggest GPU ever made, with 5376 FP32 CUDA cores (of which 5120 are enabled on Tesla V100). It brings a new type of Streaming Multiprocessor, the Volta SM, equipped with mixed-precision Tensor Cores and offering improved power efficiency, higher clock speeds and an enhanced L1 data cache.

In Tesla V100 this GPU is clocked at 1455 MHz, delivering a peak of 15 TFLOPS in 32-bit (FP32) operations.
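
The headline figures follow from the configuration in the table above: 80 enabled SMs with 64 FP32 cores each give 5120 cores, and each core can issue one fused multiply-add (two floating-point operations) per clock. A back-of-the-envelope sketch of that arithmetic, assuming the 1455 MHz boost clock quoted above:

```cuda
#include <cstdio>

// Rough check of the quoted peak FP32 rate.
// Assumptions: 80 enabled SMs, 64 FP32 cores per SM, 1455 MHz boost clock,
// 2 FLOPs per core per clock (one fused multiply-add).
int main() {
    const int    sms           = 80;
    const int    fp32_per_sm   = 64;
    const double boost_ghz     = 1.455;
    const int    flops_per_clk = 2;

    const int    cores       = sms * fp32_per_sm;                       // 5120 CUDA cores
    const double peak_tflops = cores * flops_per_clk * boost_ghz / 1000.0;

    printf("FP32 cores: %d\n", cores);                 // 5120
    printf("Peak FP32:  %.1f TFLOPS\n", peak_tflops);  // ~14.9, rounded to 15
    return 0;
}
```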



