Use Cases

Real-world applications

Training Large Transformer Models

Developers training large-scale Transformer models on FP8-capable NVIDIA GPUs (such as the Hopper architecture and newer) can use FP8 precision to roughly halve the memory footprint of weights and activations relative to FP16 and to accelerate matrix multiplications on FP8 Tensor Cores.
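The memory saving comes from storing each value in 8 bits. Below is a minimal pure-Python sketch of the E4M3 quantization step with per-tensor scaling, the core idea behind FP8 training; the function names `quantize_e4m3` and `fp8_quantize_tensor` are illustrative, not part of any real library API.

```python
import math

E4M3_MAX = 448.0  # largest representable E4M3 magnitude

def quantize_e4m3(x):
    """Round x to the nearest FP8 E4M3 value (1 sign, 4 exponent, 3 mantissa bits)."""
    if x == 0.0:
        return 0.0
    sign = math.copysign(1.0, x)
    mag = min(abs(x), E4M3_MAX)       # saturate at the format's max
    if mag < 2.0 ** -6:               # subnormal range: fixed step of 2^-9
        step = 2.0 ** -9
    else:
        e = math.floor(math.log2(mag))
        step = 2.0 ** (e - 3)         # 3 mantissa bits -> 8 steps per binade
    return sign * round(mag / step) * step

def fp8_quantize_tensor(values):
    """Scale so the per-tensor amax maps onto E4M3_MAX, then quantize each element."""
    amax = max(abs(v) for v in values)
    scale = E4M3_MAX / amax if amax > 0 else 1.0
    quantized = [quantize_e4m3(v * scale) for v in values]
    return quantized, scale           # dequantize with q_i / scale
```

Per-tensor scaling is what keeps the narrow E4M3 dynamic range usable: the largest value in each tensor is mapped to the format's maximum before rounding, and the scale is stored alongside the 8-bit data for dequantization.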

Inference Optimization

Deploying Transformer models for inference on FP8-capable NVIDIA GPUs benefits from optimized FP8 kernels and a lower memory footprint: 8-bit weights halve weight storage relative to FP16, leaving room for larger models or batch sizes on the same device.
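A quick back-of-the-envelope calculation shows the footprint difference. The sketch below estimates weight memory only (ignoring activations and KV cache); the 7B parameter count is an arbitrary illustrative choice, and `weight_memory_gib` is a hypothetical helper, not a library function.

```python
def weight_memory_gib(num_params, bytes_per_param):
    """Approximate weight memory in GiB; ignores activations and KV cache."""
    return num_params * bytes_per_param / 2**30

params = 7e9  # e.g. a 7B-parameter model (illustrative)
for name, nbytes in [("FP32", 4), ("FP16", 2), ("FP8", 1)]:
    print(f"{name}: {weight_memory_gib(params, nbytes):.1f} GiB")
```

At 1 byte per parameter, a 7B model's weights fit in about 6.5 GiB, versus roughly 13 GiB in FP16 and 26 GiB in FP32.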