## Llama-3-70B-Instruct-FP8-v2
* Weights and activations are per-tensor quantized to float8_e4m3.
* Quantized with AutoFP8, using the updated activation scaling factor names (see the sketch after this list).
* Calibration dataset: Ultrachat (mgoin/ultrachat_2k)
* Samples: 1024
* Sequence length: 4096
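
The exact quantization script is not included here; the following is a minimal sketch of how the recipe above could be reproduced with AutoFP8 (static per-tensor FP8 scales, Ultrachat calibration data). The class and method names (`AutoFP8ForCausalLM`, `BaseQuantizeConfig`, `quantize`, `save_quantized`) follow AutoFP8's published examples and may differ across library versions; the model and output paths are illustrative.

```python
from datasets import load_dataset
from transformers import AutoTokenizer

from auto_fp8 import AutoFP8ForCausalLM, BaseQuantizeConfig

pretrained_model_dir = "meta-llama/Meta-Llama-3-70B-Instruct"  # assumed base model
quantized_model_dir = "Llama-3-70B-Instruct-FP8-v2"

tokenizer = AutoTokenizer.from_pretrained(pretrained_model_dir)
tokenizer.pad_token = tokenizer.eos_token

# Calibration data: 1024 Ultrachat samples, truncated/padded to 4096 tokens,
# rendered with the model's chat template before tokenization.
ds = load_dataset("mgoin/ultrachat_2k", split="train_sft").select(range(1024))
texts = [tokenizer.apply_chat_template(ex["messages"], tokenize=False) for ex in ds]
examples = tokenizer(
    texts, padding=True, truncation=True, max_length=4096, return_tensors="pt"
).to("cuda")

# Static scheme: per-tensor float8_e4m3 quantization of both weights and
# activations, with activation scales computed from the calibration pass.
quantize_config = BaseQuantizeConfig(quant_method="fp8", activation_scheme="static")

model = AutoFP8ForCausalLM.from_pretrained(
    pretrained_model_dir, quantize_config=quantize_config
)
model.quantize(examples)
model.save_quantized(quantized_model_dir)
```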
## Evaluation
TBA