Update README.md
README.md CHANGED
@@ -1,6 +1,14 @@
 ---
 license: apache-2.0
 pipeline_tag: image-classification
+library_name: transformers
+tags:
+- image-detection
+- ai-image-generation
+- anime
+- ai-anime
+- human-detection
+- art
 ---

 # AI Anime Image Detector ViT

@@ -15,7 +23,9 @@ Each checkpoint was evaluated on 500-500 real and AI images.

 It seems like using random crops helped the model generalize better; however, the training dataset only contained 512x512 images, which meant that every cropped image had to be resized with bilinear interpolation. Training the model on 1024x1024 images could probably further improve its performance.

-
+## Performance comparison
+
+We did a small comparison with the currently available AI image detectors. Note that these models were not specifically trained on anime images.

 | Image | Nahrawy/AIorNot | umm-maybe/AI-image-detector | Organika/sdxl-detector | Ours |
 |--------------------|-----------------|-----------------------------|------------------------|------------|
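The metadata added in the first hunk (`library_name: transformers`, alongside the existing `pipeline_tag: image-classification`) means the checkpoint can be loaded through the standard `transformers` image-classification pipeline. A minimal usage sketch, assuming that API; the repo id and image path below are placeholders, not values from the card:

```python
from transformers import pipeline

# Load the detector via the standard image-classification pipeline.
# "<this-repo-id>" is a placeholder for the model's Hub repo id.
detector = pipeline("image-classification", model="<this-repo-id>")

# Classify a local image (path is a placeholder); the pipeline also accepts URLs
# and PIL images.
result = detector("example_anime_image.png")
print(result)  # e.g. [{"label": ..., "score": ...}, ...]
```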
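For the random-crop point in the second hunk, a rough augmentation sketch may make the trade-off concrete. This is only an assumption about the general recipe, not the authors' training code; the crop size (384) and ViT input resolution (224) are invented for illustration:

```python
from torchvision import transforms

# Sketch of random-crop augmentation as described in the card: crops are taken
# from the 512x512 training images, then resized to the model's input resolution
# with bilinear interpolation. Crop size and target resolution are assumptions.
train_transform = transforms.Compose([
    transforms.RandomCrop(384),
    transforms.Resize(
        (224, 224), interpolation=transforms.InterpolationMode.BILINEAR
    ),
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
])
```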
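A comparison like the one introduced in the new "Performance comparison" section could be reproduced by running each listed detector on the same image through the image-classification pipeline. This is a hedged sketch, not the authors' evaluation script; `<this-repo-id>` and the image path are placeholders:

```python
from transformers import pipeline

# Detectors from the comparison table, plus this model (placeholder repo id).
model_ids = [
    "Nahrawy/AIorNot",
    "umm-maybe/AI-image-detector",
    "Organika/sdxl-detector",
    "<this-repo-id>",
]

image_path = "example_image.png"  # placeholder test image
for model_id in model_ids:
    detector = pipeline("image-classification", model=model_id)
    top = detector(image_path)[0]  # highest-scoring label for this model
    print(f"{model_id}: {top['label']} ({top['score']:.3f})")
```

Note that each model reports its own label names, so scores are not directly comparable without first mapping the labels to a common real/AI convention.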