qaihm-bot committed
Commit 69636ff · verified · 1 Parent(s): fbe127f

Upload README.md with huggingface_hub

Files changed (1):
  1. README.md +43 -19
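The commit message says the README was pushed with `huggingface_hub`. As a rough, hypothetical sketch of how such an upload is usually done (the `repo_id` below is inferred from the asset links in the README; the exact tooling behind this commit is not shown anywhere on this page):

```python
# Hypothetical reconstruction of a bot-style README push via huggingface_hub.
# repo_id is inferred from the asset URLs in this README; authentication is
# assumed to come from a prior `huggingface-cli login`.
from huggingface_hub import HfApi

api = HfApi()
api.upload_file(
    path_or_fileobj="README.md",        # local file to upload
    path_in_repo="README.md",           # destination path inside the repo
    repo_id="qualcomm/XLSR-Quantized",  # model repository on the Hub
    commit_message="Upload README.md with huggingface_hub",
)
```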
README.md CHANGED

@@ -15,7 +15,7 @@ tags:

  XLSR is designed for lightweight real-time upscaling of images.

- This model is an implementation of XLSR-Quantized found [here](https://github.com/quic/aimet-model-zoo/tree/develop/aimet_zoo_torch/xlsr).
+ This model is an implementation of XLSR-Quantized found [here]({source_repo}).
  This repository provides scripts to run XLSR-Quantized on Qualcomm® devices.
  More details on model performance across various devices, can be found
  [here](https://aihub.qualcomm.com/models/xlsr_quantized).
@@ -30,15 +30,35 @@ More details on model performance across various devices, can be found
  - Number of parameters: 22.0K
  - Model size: 39.0 KB

- | Device | Chipset | Target Runtime | Inference Time (ms) | Peak Memory Range (MB) | Precision | Primary Compute Unit | Target Model
- | ---|---|---|---|---|---|---|---|
- | Samsung Galaxy S23 Ultra (Android 13) | Snapdragon® 8 Gen 2 | TFLite | 1.06 ms | 0 - 3 MB | INT8 | NPU | [XLSR-Quantized.tflite](https://huggingface.co/qualcomm/XLSR-Quantized/blob/main/XLSR-Quantized.tflite)
- | Samsung Galaxy S23 Ultra (Android 13) | Snapdragon® 8 Gen 2 | QNN Model Library | 0.654 ms | 0 - 3 MB | INT8 | NPU | [XLSR-Quantized.so](https://huggingface.co/qualcomm/XLSR-Quantized/blob/main/XLSR-Quantized.so)
-
-
+ | Model | Device | Chipset | Target Runtime | Inference Time (ms) | Peak Memory Range (MB) | Precision | Primary Compute Unit | Target Model
+ |---|---|---|---|---|---|---|---|---|
+ | XLSR-Quantized | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 | TFLITE | 1.076 ms | 0 - 1 MB | INT8 | NPU | [XLSR-Quantized.tflite](https://huggingface.co/qualcomm/XLSR-Quantized/blob/main/XLSR-Quantized.tflite) |
+ | XLSR-Quantized | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 | QNN | 0.652 ms | 0 - 3 MB | INT8 | NPU | [XLSR-Quantized.so](https://huggingface.co/qualcomm/XLSR-Quantized/blob/main/XLSR-Quantized.so) |
+ | XLSR-Quantized | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 | ONNX | 0.678 ms | 0 - 1 MB | INT8 | NPU | [XLSR-Quantized.onnx](https://huggingface.co/qualcomm/XLSR-Quantized/blob/main/XLSR-Quantized.onnx) |
+ | XLSR-Quantized | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 | TFLITE | 0.878 ms | 0 - 22 MB | INT8 | NPU | [XLSR-Quantized.tflite](https://huggingface.co/qualcomm/XLSR-Quantized/blob/main/XLSR-Quantized.tflite) |
+ | XLSR-Quantized | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 | QNN | 0.454 ms | 0 - 15 MB | INT8 | NPU | [XLSR-Quantized.so](https://huggingface.co/qualcomm/XLSR-Quantized/blob/main/XLSR-Quantized.so) |
+ | XLSR-Quantized | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 | ONNX | 0.499 ms | 0 - 24 MB | INT8 | NPU | [XLSR-Quantized.onnx](https://huggingface.co/qualcomm/XLSR-Quantized/blob/main/XLSR-Quantized.onnx) |
+ | XLSR-Quantized | RB3 Gen 2 (Proxy) | QCS6490 Proxy | TFLITE | 2.437 ms | 0 - 16 MB | INT8 | NPU | [XLSR-Quantized.tflite](https://huggingface.co/qualcomm/XLSR-Quantized/blob/main/XLSR-Quantized.tflite) |
+ | XLSR-Quantized | RB3 Gen 2 (Proxy) | QCS6490 Proxy | QNN | 1.076 ms | 0 - 7 MB | INT8 | NPU | Use Export Script |
+ | XLSR-Quantized | RB5 (Proxy) | QCS8250 Proxy | TFLITE | 16.048 ms | 4 - 28 MB | INT8 | GPU | [XLSR-Quantized.tflite](https://huggingface.co/qualcomm/XLSR-Quantized/blob/main/XLSR-Quantized.tflite) |
+ | XLSR-Quantized | QCS8550 (Proxy) | QCS8550 Proxy | TFLITE | 1.06 ms | 0 - 12 MB | INT8 | NPU | [XLSR-Quantized.tflite](https://huggingface.co/qualcomm/XLSR-Quantized/blob/main/XLSR-Quantized.tflite) |
+ | XLSR-Quantized | QCS8550 (Proxy) | QCS8550 Proxy | QNN | 0.426 ms | 0 - 2 MB | INT8 | NPU | Use Export Script |
+ | XLSR-Quantized | SA8255 (Proxy) | SA8255P Proxy | TFLITE | 1.054 ms | 0 - 3 MB | INT8 | NPU | [XLSR-Quantized.tflite](https://huggingface.co/qualcomm/XLSR-Quantized/blob/main/XLSR-Quantized.tflite) |
+ | XLSR-Quantized | SA8255 (Proxy) | SA8255P Proxy | QNN | 0.429 ms | 0 - 2 MB | INT8 | NPU | Use Export Script |
+ | XLSR-Quantized | SA8775 (Proxy) | SA8775P Proxy | TFLITE | 1.065 ms | 0 - 1 MB | INT8 | NPU | [XLSR-Quantized.tflite](https://huggingface.co/qualcomm/XLSR-Quantized/blob/main/XLSR-Quantized.tflite) |
+ | XLSR-Quantized | SA8775 (Proxy) | SA8775P Proxy | QNN | 0.433 ms | 0 - 1 MB | INT8 | NPU | Use Export Script |
+ | XLSR-Quantized | SA8650 (Proxy) | SA8650P Proxy | TFLITE | 1.077 ms | 2 - 14 MB | INT8 | NPU | [XLSR-Quantized.tflite](https://huggingface.co/qualcomm/XLSR-Quantized/blob/main/XLSR-Quantized.tflite) |
+ | XLSR-Quantized | SA8650 (Proxy) | SA8650P Proxy | QNN | 0.424 ms | 0 - 1 MB | INT8 | NPU | Use Export Script |
+ | XLSR-Quantized | QCS8450 (Proxy) | QCS8450 Proxy | TFLITE | 1.399 ms | 0 - 23 MB | INT8 | NPU | [XLSR-Quantized.tflite](https://huggingface.co/qualcomm/XLSR-Quantized/blob/main/XLSR-Quantized.tflite) |
+ | XLSR-Quantized | QCS8450 (Proxy) | QCS8450 Proxy | QNN | 0.716 ms | 0 - 13 MB | INT8 | NPU | Use Export Script |
+ | XLSR-Quantized | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite | TFLITE | 0.854 ms | 0 - 16 MB | INT8 | NPU | [XLSR-Quantized.tflite](https://huggingface.co/qualcomm/XLSR-Quantized/blob/main/XLSR-Quantized.tflite) |
+ | XLSR-Quantized | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite | QNN | 0.404 ms | 0 - 10 MB | INT8 | NPU | Use Export Script |
+ | XLSR-Quantized | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite | ONNX | 0.381 ms | 0 - 16 MB | INT8 | NPU | [XLSR-Quantized.onnx](https://huggingface.co/qualcomm/XLSR-Quantized/blob/main/XLSR-Quantized.onnx) |
+ | XLSR-Quantized | Snapdragon X Elite CRD | Snapdragon® X Elite | QNN | 0.536 ms | 0 - 0 MB | INT8 | NPU | Use Export Script |
+ | XLSR-Quantized | Snapdragon X Elite CRD | Snapdragon® X Elite | ONNX | 0.794 ms | 3 - 3 MB | INT8 | NPU | [XLSR-Quantized.onnx](https://huggingface.co/qualcomm/XLSR-Quantized/blob/main/XLSR-Quantized.onnx) |

  ## Installation

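The table added above points at downloadable TFLite, QNN, and ONNX assets. As a minimal sketch of exercising the `.tflite` asset outside this repository's own scripts (input layout, dtype, and preprocessing are assumptions; the README does not state them), the stock TensorFlow Lite interpreter can run a smoke test:

```python
# Smoke test of the downloadable XLSR-Quantized.tflite asset with the standard
# TensorFlow Lite interpreter. Shapes/dtypes are read from the model itself;
# real use would feed a preprocessed low-resolution image instead of zeros.
import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="XLSR-Quantized.tflite")
interpreter.allocate_tensors()

inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

dummy = np.zeros(tuple(inp["shape"]), dtype=inp["dtype"])  # placeholder input
interpreter.set_tensor(inp["index"], dummy)
interpreter.invoke()

sr = interpreter.get_tensor(out["index"])  # upscaled output tensor
print("output shape:", sr.shape)
```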
@@ -94,16 +114,16 @@ device. This script does the following:
  ```bash
  python -m qai_hub_models.models.xlsr_quantized.export
  ```
-
  ```
- Profile Job summary of XLSR-Quantized
- --------------------------------------------------
- Device: Snapdragon X Elite CRD (11)
- Estimated Inference Time: 0.56 ms
- Estimated Peak Memory Range: 0.05-0.05 MB
- Compute Units: NPU (16) | Total (16)
-
-
+ Profiling Results
+ ------------------------------------------------------------
+ XLSR-Quantized
+ Device : Samsung Galaxy S23 (13)
+ Runtime : TFLITE
+ Estimated inference time (ms) : 1.1
+ Estimated peak memory usage (MB): [0, 1]
+ Total # Ops : 19
+ Compute Unit(s) : NPU (16 ops) CPU (3 ops)
  ```

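Besides profiling through the export script shown above, the source model can also be loaded directly in Python. The `Model` alias, the argument-free `from_pretrained()`, and the 1x3x128x128 input shape below follow the usual `qai_hub_models` layout and are assumptions rather than details stated in this diff:

```python
# Hedged sketch: load the pretrained quantized source model via qai_hub_models
# and run a dummy forward pass. The class alias, default arguments, and input
# shape are assumptions about the package, not facts taken from this README.
import torch
from qai_hub_models.models.xlsr_quantized import Model

model = Model.from_pretrained()   # fetches pretrained weights/quantization encodings
lr = torch.rand(1, 3, 128, 128)   # dummy low-resolution RGB input (assumed shape)
with torch.no_grad():
    sr = model(lr)                # super-resolved output
print(sr.shape)
```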
@@ -142,15 +162,19 @@ provides instructions on how to use the `.so` shared library in an Android appl
  Get more details on XLSR-Quantized's performance across various devices [here](https://aihub.qualcomm.com/models/xlsr_quantized).
  Explore all available models on [Qualcomm® AI Hub](https://aihub.qualcomm.com/)

+
  ## License
- - The license for the original implementation of XLSR-Quantized can be found
- [here](https://github.com/quic/aimet-model-zoo/blob/develop/LICENSE.pdf).
- - The license for the compiled assets for on-device deployment can be found [here](https://qaihub-public-assets.s3.us-west-2.amazonaws.com/qai-hub-models/Qualcomm+AI+Hub+Proprietary+License.pdf)
+ * The license for the original implementation of XLSR-Quantized can be found [here](https://github.com/quic/aimet-model-zoo/blob/develop/LICENSE.pdf).
+ * The license for the compiled assets for on-device deployment can be found [here](https://qaihub-public-assets.s3.us-west-2.amazonaws.com/qai-hub-models/Qualcomm+AI+Hub+Proprietary+License.pdf)
+
+

  ## References
  * [Extremely Lightweight Quantization Robust Real-Time Single-Image Super Resolution for Mobile Devices](https://arxiv.org/abs/2105.10288)
  * [Source Model Implementation](https://github.com/quic/aimet-model-zoo/tree/develop/aimet_zoo_torch/xlsr)

+
+
  ## Community
  * Join [our AI Hub Slack community](https://aihub.qualcomm.com/community/slack) to collaborate, post questions and learn more about on-device AI.
  * For questions or feedback please [reach out to us](mailto:[email protected]).
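For the ONNX asset listed in the performance table, a quick off-device check is ONNX Runtime plus `hf_hub_download`. The repo and file names come from the asset links above; the float32 dummy input and the single-output assumption are guesses, not statements from the README:

```python
# Sketch of running the downloadable XLSR-Quantized.onnx asset with ONNX Runtime.
# repo_id/filename come from the asset links above; input dtype and output count
# are assumptions made only for this smoke test.
import numpy as np
import onnxruntime as ort
from huggingface_hub import hf_hub_download

path = hf_hub_download(repo_id="qualcomm/XLSR-Quantized", filename="XLSR-Quantized.onnx")
session = ort.InferenceSession(path, providers=["CPUExecutionProvider"])

inp = session.get_inputs()[0]
shape = [d if isinstance(d, int) else 1 for d in inp.shape]  # fill dynamic dims with 1
dummy = np.zeros(shape, dtype=np.float32)

outputs = session.run(None, {inp.name: dummy})
print("output shape:", outputs[0].shape)
```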