NVIDIA H100 PCIe Tensor Core Workstation Graphics Card, 80GB HBM2e Memory (5120-bit bus), 2.0 TB/s Memory Bandwidth, 14592 CUDA Cores, 456 Tensor Cores, PCIe Gen5 x16 | 900-21010-0000-000

Model: NVIDIA H100
SKU: 900-21010-0000-000
In Stock
Original price: AED 242,000.00
Current price: AED 152,200.00

Brand Name: NVIDIA
Product Name: NVIDIA H100
Standard Memory: 80 GB
Memory Technology: HBM2e
Power Supply Wattage: 350W

Description

NVIDIA H100 Tensor Core GPU

Extraordinary performance, scalability, and security for every data center.

An Order-of-Magnitude Leap for Accelerated Computing

The NVIDIA H100 Tensor Core GPU delivers exceptional performance, scalability, and security for every workload. H100 uses breakthrough innovations based on the NVIDIA Hopper architecture to deliver industry-leading conversational AI, speeding up large language models (LLMs) by up to 30X. H100 also includes a dedicated Transformer Engine to handle trillion-parameter language models.

Securely Accelerate Workloads From Enterprise to Exascale

H100 features fourth-generation Tensor Cores and a Transformer Engine with FP8 precision that provide up to 4X faster training over the prior generation for GPT-3 (175B) models. The combination of fourth-generation NVLink, which offers 900 gigabytes per second (GB/s) of GPU-to-GPU interconnect; NDR Quantum-2 InfiniBand networking, which accelerates communication for every GPU across nodes; PCIe Gen5; and NVIDIA Magnum IO software delivers efficient scalability from small enterprise systems to massive, unified GPU clusters.

Real-Time Deep Learning Inference

AI solves a wide array of business challenges, using an equally wide array of neural networks. A great AI inference accelerator has to not only deliver the highest performance but also the versatility to accelerate these networks.

Exascale High-Performance Computing

The NVIDIA data center platform consistently delivers performance gains beyond Moore’s law. And H100’s new breakthrough AI capabilities further amplify the power of HPC+AI to accelerate time to discovery for scientists and researchers working on solving the world’s most important challenges.

Accelerated Data Analytics

Data analytics often consumes the majority of time in AI application development. Since large datasets are scattered across multiple servers, scale-out solutions with commodity CPU-only servers get bogged down by a lack of scalable computing performance.

Exceptional Performance for Large-Scale AI and HPC

The Hopper Tensor Core GPU will power the NVIDIA Grace Hopper CPU+GPU architecture, purpose-built for terabyte-scale accelerated computing and providing 10X higher performance on large-model AI and HPC. The NVIDIA Grace CPU leverages the flexibility of the Arm architecture to create a CPU and server architecture designed from the ground up for accelerated computing. The Hopper GPU is paired with the Grace CPU using NVIDIA’s ultra-fast chip-to-chip interconnect, delivering 900 GB/s of bandwidth, 7X faster than PCIe Gen5. This innovative design will deliver up to 30X higher aggregate system memory bandwidth to the GPU compared to today’s fastest servers, and up to 10X higher performance for applications processing terabytes of data.
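The "7X faster than PCIe Gen5" comparison can be sanity-checked with back-of-envelope arithmetic. The sketch below assumes PCIe Gen5 signaling of 32 GT/s per lane with 128b/130b encoding and compares the bidirectional total of an x16 link against the 900 GB/s NVLink-C2C figure from the text; real-world throughput is lower once protocol overhead is included.

```python
# Rough comparison: NVLink-C2C (900 GB/s bidirectional, from the text)
# vs. a PCIe Gen5 x16 link. Assumptions: 32 GT/s raw rate per lane,
# 128b/130b line encoding, no protocol overhead beyond encoding.

PCIE5_RAW_GTS = 32.0      # GT/s per lane (PCIe Gen5 signaling rate)
ENCODING = 128 / 130      # 128b/130b encoding efficiency
LANES = 16

# Per-direction payload bandwidth in GB/s, then the bidirectional total
pcie5_per_dir = PCIE5_RAW_GTS * ENCODING * LANES / 8   # ~63 GB/s
pcie5_bidir = 2 * pcie5_per_dir                        # ~126 GB/s

NVLINK_C2C_GBS = 900.0    # GB/s, bidirectional total

print(f"PCIe Gen5 x16: {pcie5_bidir:.0f} GB/s bidirectional")
print(f"NVLink-C2C advantage: {NVLINK_C2C_GBS / pcie5_bidir:.1f}x")
```

The ratio works out to roughly 7x, matching the claim in the paragraph above.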

FP64: 26 TFLOPS
FP64 Tensor Core: 51 TFLOPS
FP32: 51 TFLOPS
TF32 Tensor Core: 756 TFLOPS (with sparsity)
BFLOAT16 Tensor Core: 1,513 TFLOPS (with sparsity)
FP16 Tensor Core: 1,513 TFLOPS (with sparsity)
FP8 Tensor Core: 3,026 TFLOPS (with sparsity)
INT8 Tensor Core: 3,026 TOPS (with sparsity)
GPU Memory: 80GB HBM2e
GPU Memory Bandwidth: 2.0 TB/s
Maximum Power Consumption: 350 W
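The headline memory-bandwidth figure follows directly from the bus width and the per-pin data rate. The sketch below uses the 5120-bit bus from the product title and assumes an effective HBM2e data rate of about 3.2 Gb/s per pin, a value chosen to be consistent with the listed 2.0 TB/s spec rather than taken from the listing itself.

```python
# How the ~2 TB/s memory bandwidth figure arises (sketch).
# Assumption: ~3.2 Gb/s effective data rate per pin for HBM2e,
# consistent with the 2.0 TB/s figure in the spec list.

BUS_WIDTH_BITS = 5120   # memory bus width, from the product title
PIN_RATE_GBPS = 3.2     # assumed effective per-pin data rate (Gb/s)

# bytes transferred per cycle across the bus, times the per-pin rate
bandwidth_gbs = BUS_WIDTH_BITS / 8 * PIN_RATE_GBPS
print(f"Theoretical memory bandwidth: {bandwidth_gbs:.0f} GB/s")  # 2048 GB/s
```

A slightly lower per-pin rate yields the 1935 GB/s figure quoted in some listings; both round to the marketed "2 TB/s" class.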

Reviews

There are no reviews yet.

Be the first to review “NVIDIA H100 PCIe Tensor Core Workstation Graphics Card, 80GB HBM2e Memory (5120-bit bus), 2.0 TB/s Memory Bandwidth, 14592 CUDA Cores, 456 Tensor Cores, PCIe Gen5 x16 | 900-21010-0000-000”

Your email address will not be published. Required fields are marked *