# mukulp/Qwen2.5-VL-72B-Instruct-bf16

This model was converted to MLX format from [Qwen/Qwen2.5-VL-72B-Instruct](https://huggingface.co/Qwen/Qwen2.5-VL-72B-Instruct) using [mlx-vlm](https://github.com/Blaizzy/mlx-vlm) version 0.1.13. Refer to the [original model card](https://huggingface.co/Qwen/Qwen2.5-VL-72B-Instruct) for more details on the model.
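For reference, conversions like this one are typically produced with mlx-vlm's conversion entry point. The invocation below is a hedged sketch rather than the exact command used for this checkpoint: the `--mlx-path` value and the `--dtype` flag are assumptions and may differ between mlx-vlm releases.

```bash
pip install mlx-vlm==0.1.13

# Convert the original Hugging Face weights to MLX format in bf16.
# --mlx-path and --dtype are illustrative; check
# `python -m mlx_vlm.convert --help` for the flags your version supports.
python -m mlx_vlm.convert \
  --hf-path Qwen/Qwen2.5-VL-72B-Instruct \
  --mlx-path Qwen2.5-VL-72B-Instruct-bf16 \
  --dtype bfloat16
```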

## Use with mlx

```bash
pip install -U mlx-vlm
python -m mlx_vlm.generate \
  --model mukulp/Qwen2.5-VL-72B-Instruct-bf16 \
  --max-tokens 100 \
  --temp 0.0 \
  --prompt "Describe this image." \
  --image <path_to_image>
```
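The model can also be called from Python. The snippet below is a minimal sketch following mlx-vlm's documented `load`/`apply_chat_template`/`generate` API; argument names and return types can vary between releases, and the image path is a placeholder.

```python
from mlx_vlm import load, generate
from mlx_vlm.prompt_utils import apply_chat_template
from mlx_vlm.utils import load_config

model_path = "mukulp/Qwen2.5-VL-72B-Instruct-bf16"

# Load the converted weights together with the matching processor and config.
model, processor = load(model_path)
config = load_config(model_path)

# Local paths and URLs are both accepted; replace with your own image.
images = ["/path/to/image.jpg"]
prompt = "Describe this image."

# Wrap the raw prompt in the model's chat template before generating.
formatted_prompt = apply_chat_template(processor, config, prompt, num_images=len(images))

output = generate(model, processor, formatted_prompt, images, max_tokens=100, verbose=False)
print(output)
```

Note that 73.4B parameters in BF16 occupy roughly 147 GB on disk, so running this model locally requires a machine with substantially more unified memory than that.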
## Model details

- Format: Safetensors
- Parameters: 73.4B
- Tensor type: BF16