This is the current best model from our experiments fine-tuning CLIP on 5k archaeological record photos.
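
If you just want to poke at the model directly with `sentence-transformers`, something like this should work (a minimal sketch; the model path, image filename, and caption are placeholders):

```
from PIL import Image
from sentence_transformers import SentenceTransformer, util

# Load the fine-tuned checkpoint (placeholder path - point it at the model folder)
model = SentenceTransformer('/path/to/your/retrained-model')

# CLIP embeds images and text into the same vector space
img_emb = model.encode(Image.open('record-photo.jpg'))
txt_emb = model.encode('a photograph of a decorated pottery sherd')

# Cosine similarity between the photo and the caption
print(util.cos_sim(img_emb, txt_emb))
```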

See [blog post 1](https://carleton.ca/xlab/2023/archaeclip-or-building-a-visual-search-engine-for-archaeology/) and a companion post at [Open Context](https://alexandriaarchive.org/2023/10/08/artificial-intelligence-ai-and-open-context/).

The goal is to use it with `LLM`, Simon Willison's package for working with large language models, and in particular `llm-clip`, to make our own embeddings-powered search engine.

https://github.com/simonw/llm-clip

Requires LLM: https://llm.datasette.io/en/stable/

So:

```
$ pip install llm
$ llm install llm-clip
```

Then, assuming you are doing this in a virtual environment (I create mine with conda), find the site-packages directory and the `llm_clip.py` file:

`/Users/username/mambaforge/envs/clip/lib/python3.10/site-packages` is where mine hides.
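
If you'd rather not hunt for it by hand, you can ask Python for the path of the installed plugin file (assuming the plugin installs as a module named `llm_clip`; run this inside the same environment):

```
# Print the full path of the installed llm_clip.py
import llm_clip
print(llm_clip.__file__)
```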

Change

```
if self._model is None:
    self._model = SentenceTransformer('clip-ViT-B-32')
```

to point to your new model, like so:

```
def embed_batch(self, items):
    # Embeds a mix of text strings and binary images
    if self._model is None:
        self._model = SentenceTransformer('/path/to/your/retrained-model')
```

The folder with your model should contain a `pytorch_model.bin` and `config.json` inside a subfolder called `0_CLIPModel`. You will also need the extra `.json` files from [sentence-transformers/clip-ViT-B-32](https://huggingface.co/sentence-transformers/clip-ViT-B-32/tree/main), arranged the same way. And since you're not otherwise futzing with the basic CLIP-ness, it should be ok.
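
As a quick sanity check of that layout, something like this can confirm the expected files are in place (the path is a placeholder, and the file list follows the clip-ViT-B-32 repo linked above; adjust it if your export differs):

```
from pathlib import Path

# Placeholder path - point it at your retrained model folder
model_dir = Path('/path/to/your/retrained-model')

# Files sentence-transformers expects, following the clip-ViT-B-32 layout
expected = [
    'modules.json',
    '0_CLIPModel/config.json',
    '0_CLIPModel/pytorch_model.bin',
]
for rel in expected:
    print(rel, 'ok' if (model_dir / rel).exists() else 'MISSING')
```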

Once you create your embeddings, these will be in your `~/Library/Application Support/io.datasette.llm` folder.
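
For reference, a rough sketch of the search-engine idea from Python, going through `LLM`'s embedding API rather than the CLI (the `photos/` folder and the query text are made up; `clip` is the model ID the llm-clip plugin registers, and this assumes `embed()` accepts raw image bytes the same way the CLI's `--binary` flag does):

```
from pathlib import Path

import llm

# Load the embedding model registered by the llm-clip plugin; after the edit
# above it will use the retrained checkpoint instead of stock clip-ViT-B-32
clip = llm.get_embedding_model('clip')

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = (sum(x * x for x in a) ** 0.5) * (sum(y * y for y in b) ** 0.5)
    return dot / norm

# Embed a text query, embed each photo, and rank the photos by similarity
query = clip.embed('a photograph of a decorated pottery sherd')
scores = []
for photo in Path('photos').glob('*.jpg'):
    vector = clip.embed(photo.read_bytes())
    scores.append((cosine(query, vector), photo.name))

for score, name in sorted(scores, reverse=True)[:5]:
    print(f'{score:.3f}  {name}')
```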