Splitting the ONNX vision and text models
#19 opened by hawkeye217
For v1, the provided ONNX models were split in two - one for vision and one for text embeddings (here: https://huggingface.co/jinaai/jina-clip-v1/tree/main/onnx). Are there any plans to do the same for v2?
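For context, a minimal sketch of why the split is handy: each modality can be loaded and run on its own with onnxruntime, so a text-only or image-only pipeline does not have to carry the other encoder. The file names, input names, and tensor shapes below are assumptions based on the v1 layout, not confirmed details of the v2 release.

```python
# Sketch: using split ONNX encoders independently with onnxruntime.
# File names, input names, and shapes are assumptions and may differ
# from the actual repo contents.
import numpy as np
import onnxruntime as ort

# Load only the encoder you need (hypothetical file names).
text_session = ort.InferenceSession("text_model.onnx")
vision_session = ort.InferenceSession("vision_model.onnx")

# Text embeddings from tokenized input (dummy ids; shape and input
# name "input_ids" are assumptions).
input_ids = np.zeros((1, 77), dtype=np.int64)
text_emb = text_session.run(None, {"input_ids": input_ids})[0]

# Image embeddings from a preprocessed image tensor (dummy pixels;
# shape and input name "pixel_values" are assumptions).
pixel_values = np.zeros((1, 3, 224, 224), dtype=np.float32)
image_emb = vision_session.run(None, {"pixel_values": pixel_values})[0]

print(text_emb.shape, image_emb.shape)
```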