omost-llama-3-8b is Omost's llama-3 model with 8k context length in fp16.
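
Below is a minimal sketch of loading the checkpoint with the standard Hugging Face transformers API. The chat message, dtype choice, and generation settings are illustrative assumptions, not Omost's official inference setup.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "lllyasviel/omost-llama-3-8b"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # checkpoint tensors are BF16; float16 also works on most GPUs
    device_map="auto",           # requires the `accelerate` package
)

# Illustrative prompt; Omost's own system prompt and settings are not reproduced here.
messages = [{"role": "user", "content": "a cat sitting on a wooden table"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=512)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```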

Safetensors · Model size: 8.03B params · Tensor type: BF16
Inference API (serverless) has been turned off for this model.

Model tree for lllyasviel/omost-llama-3-8b
Finetunes: 3 models
Quantizations: 10 models

Spaces using lllyasviel/omost-llama-3-8b: 15