Current best model from our experiments fine-tuning CLIP on 5,000 archaeological record photos, plus another 300 from the Met.

See [blog post 1](https://carleton.ca/xlab/2023/archaeclip-or-building-a-visual-search-engine-for-archaeology/) and a companion post at [Open Context](https://alexandriaarchive.org/2023/10/08/artificial-intelligence-ai-and-open-context/).

The goal is to use this model with `LLM`, Simon Willison's package for working with large language models, and in particular the `llm-clip` plugin, to make our own embeddings-powered search engine.

https://github.com/simonw/llm-clip

The plugin requires LLM: https://llm.datasette.io/en/stable/

So:

```
$ pip install llm
$ llm install llm-clip
```
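
To confirm the plugin registered its embedding model, you can list the available embedding models; if the install worked, a `clip` entry should appear (exact output varies by `llm` version):

```
$ llm embed-models
```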

Then, assuming you are doing this in a virtual environment (I create mine with conda), find the site-packages directory and the `llm_clip.py` file inside it:

`/Users/username/mambaforge/envs/clip/lib/python3.10/site-packages` is where mine hides.
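
If you're not sure where site-packages lives for your environment, `pip show` will report the plugin's install location (assuming the same environment is active; `llm install` uses pip under the hood):

```
$ pip show llm-clip
# the 'Location:' line is the site-packages directory
```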

Change

```
if self._model is None:
    self._model = SentenceTransformer('clip-ViT-B-32')
```
to point to your new model, like so:
```
    def embed_batch(self, items):
        # Embeds a mix of text strings and binary images
        if self._model is None:
            self._model = SentenceTransformer('/path/to/your/retrained-model')
```
The folder with your model should contain a `pytorch_model.bin` and `config.json` inside a subfolder called `0_CLIPModel`. You will also need the extra .json files from [https://huggingface.co/sentence-transformers/clip-ViT-B-32/tree/main](https://huggingface.co/sentence-transformers/clip-ViT-B-32/tree/main), arranged in the same layout as that repository. Since you're not otherwise changing the underlying CLIP architecture, reusing those configuration files as-is should be fine.
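
As a quick sanity check that the plugin is now loading your retrained weights, you can embed a single string from the command line (the `clip` model alias comes from the plugin; the query text below is just an example):

```
$ llm embed -m clip -c 'roman amphora sherd'
```

This should print a JSON array of 512 floats; an error here usually means the model path or folder layout above is wrong.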

Once you create your embeddings, they will be stored in your `~/Library/Application Support/io.datasette.llm` folder (on macOS).
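
For reference, building and querying a collection from a folder of images looks something like this (the collection name `photos`, the folder, and the query text are placeholders; the flags follow the llm-clip README):

```
$ llm embed-multi photos --files photos/ '*.jpg' --binary -m clip
$ llm similar photos -c 'roman amphora sherd'
```

The first command stores the image embeddings in an SQLite database (by default `embeddings.db`) in that folder; the second returns the stored images closest to the text query.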