This is the current best model from our experiments fine-tuning CLIP on 5,000 archaeological record photos plus another 300 from the Met.
See blog post 1 and a companion post at Open Context.
The goal is to use it with LLM, Simon Willison's package for working with large language models (https://llm.datasette.io/en/stable/), and in particular the LLM-CLIP plugin (https://github.com/simonw/llm-clip), to make our own embeddings-powered search engine. LLM-CLIP requires LLM, so:
$ pip install llm
$ llm install llm-clip
Then, assuming you are doing this in a dedicated environment (I create mine with conda), find the site-packages directory and the llm_clip.py file inside it:
/Users/username/mambaforge/envs/clip/lib/python3.10/site-packages
is where mine hides.
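If you're not sure where yours is, this little snippet (standard library only) prints the site-packages directory of whatever environment is currently active:

    import sysconfig

    # site-packages for the currently active environment; llm_clip.py
    # should be in here once the plugin is installed.
    print(sysconfig.get_paths()["purelib"])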
Change
    if self._model is None:
        self._model = SentenceTransformer('clip-ViT-B-32')
to point to your new model, like so:
    def embed_batch(self, items):
        # Embeds a mix of text strings and binary images
        if self._model is None:
            self._model = SentenceTransformer('/path/to/your/retrained-model')
The folder with your model should contain a pytorch_model.bin and config.json inside a subfolder called 0_CLIPModel. You will also need the extra tokenizer and config files from https://huggingface.co/sentence-transformers/clip-ViT-B-32/tree/main, arranged the same way as in that repository. And since you're not otherwise futzing with the basic CLIP-ness, the stock files should work fine.
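As a quick sanity check that the folder is arranged correctly, you can try loading it directly with sentence-transformers before pointing llm-clip at it (a minimal sketch; the path is a placeholder, and 512 is the embedding width expected for a ViT-B/32 CLIP):

    from sentence_transformers import SentenceTransformer

    # Load the retrained model straight from disk; a wrong folder layout
    # will fail here rather than deep inside the llm-clip plugin.
    model = SentenceTransformer('/path/to/your/retrained-model')

    # A ViT-B/32 CLIP produces 512-dimensional embeddings.
    vectors = model.encode(["a test sentence"])
    print(vectors.shape)  # expect (1, 512)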
Once you create your embeddings, they will be stored in your ~/Library/Application Support/io.datasette.llm folder (on macOS).
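For reference, a small sketch of that default location on macOS (llm keeps the embeddings in a SQLite database inside this folder); other platforms store it elsewhere:

    from pathlib import Path

    # Default llm data directory on macOS, as noted above.
    llm_dir = Path.home() / "Library" / "Application Support" / "io.datasette.llm"
    print(llm_dir)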