Phips committed on
Commit e7f6b6d · verified · 1 Parent(s): c98b9f5

Update README.md

Files changed (1)
  1. README.md +42 -3
README.md CHANGED
@@ -1,3 +1,42 @@
- ---
- license: cc-by-4.0
- ---
+ ---
+ license: cc-by-4.0
+ pipeline_tag: image-to-image
+ tags:
+ - pytorch
+ - super-resolution
+ ---
+
+ [Link to GitHub Release](https://github.com/Phhofm/models/releases/tag/2xHFA2kShallowESRGAN)
+
+ # 2xHFA2kShallowESRGAN
+
+ Name: 2xHFA2kShallowESRGAN
+ Author: Philip Hofmann
+ Release Date: 04.01.2024
+ License: CC BY 4.0
+ Network: Shallow ESRGAN (6 Blocks)
+ Scale: 2
+ Purpose: 2x anime upscaler
+ Iterations: 180'000
+ epoch: 167
+ batch_size: 12
+ HR_size: 128
+ Dataset: hfa2k
+ Number of train images: 2568
+ OTF Training: Yes
+ Pretrained_Model_G: None
+
+ Description:
+ A 2x shallow ESRGAN version of the HFA2kCompact model.
+ This model should be usable with [FAST_Anime_VSR](https://github.com/Kiteretsu77/FAST_Anime_VSR) using TensorRT for fast inference, as should my [2xHFA2kReal-CUGAN](https://drive.google.com/file/d/1wqlK-rQjPGKJ5pNoVgnK9gcNF1tA8EjV/view?usp=drive_link) model.
+
+ Slow Pics examples:
+ [Example 1](https://slow.pics/c/RZj6GMwS)
+ [Example 2](https://slow.pics/c/Q3DHaU45)
+ [Ludvae1](https://slow.pics/c/fJi4IphY)
+ [Ludvae2](https://slow.pics/c/iIhgHokD)
+
+ ![Example1](https://github.com/Phhofm/models/assets/14755670/367a6b77-a31a-4784-8a09-aca23596fc9d)
+ ![Example2](https://github.com/Phhofm/models/assets/14755670/4c8a688a-8689-421c-a995-847d4de78e3f)
+ ![Example3](https://github.com/Phhofm/models/assets/14755670/c0981f1c-6650-4604-9cc7-1869bfd8a91d)
+ ![Example4](https://github.com/Phhofm/models/assets/14755670/9d14cdb4-829d-4fad-9887-7ff9780ea200)
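
As a usage illustration (not part of the commit above): a minimal single-image inference sketch for an ESRGAN-style `.pth` checkpoint such as this release. It assumes the [spandrel](https://github.com/chaiNNer-org/spandrel) model loader and OpenCV for image I/O, neither of which is mentioned in the model card, and the file names are placeholders.

```python
import cv2
import numpy as np
import torch
from spandrel import ImageModelDescriptor, ModelLoader

# Load the checkpoint; spandrel auto-detects the ESRGAN architecture from the state dict.
# The path is a placeholder for the downloaded release file.
model = ModelLoader().load_from_file("2xHFA2kShallowESRGAN.pth")
assert isinstance(model, ImageModelDescriptor) and model.scale == 2

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = model.to(device).eval()

# Read an image (BGR, uint8) and convert to a normalized RGB float tensor in NCHW layout.
bgr = cv2.imread("input.png", cv2.IMREAD_COLOR)
rgb = cv2.cvtColor(bgr, cv2.COLOR_BGR2RGB)
tensor = torch.from_numpy(rgb).permute(2, 0, 1).unsqueeze(0).float().div(255.0).to(device)

with torch.no_grad():
    upscaled = model(tensor)  # output is 2x the input resolution

# Convert back to uint8 BGR and write the result.
out = upscaled.squeeze(0).clamp(0, 1).permute(1, 2, 0).cpu().numpy()
cv2.imwrite("output.png", cv2.cvtColor((out * 255.0).round().astype(np.uint8), cv2.COLOR_RGB2BGR))
```

For TensorRT-accelerated video upscaling, the card instead points to FAST_Anime_VSR linked above.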