Upload README.md with huggingface_hub
ConvNextTiny is a machine learning model that can classify images from the ImageNet dataset. It can also be used as a backbone in building more complex models for specific use cases (see the sketch below).

This model is an implementation of ConvNext-Tiny-w8a16-Quantized found [here]({source_repo}).
This repository provides scripts to run ConvNext-Tiny-w8a16-Quantized on Qualcomm® devices.
More details on model performance across various devices can be found [here](https://aihub.qualcomm.com/models/convnext_tiny_w8a16_quantized).
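As an illustration of the backbone use case, here is a minimal sketch built on the floating-point source model from torchvision (the implementation listed under References below); the `features` attribute and the weights enum are torchvision's, not part of this repository:

```python
# Minimal sketch: ConvNeXt-Tiny from torchvision as a feature-extraction
# backbone. Illustrative only; the quantized on-device assets in this repo
# are exported from this source model.
import torch
from torchvision.models import ConvNeXt_Tiny_Weights, convnext_tiny

model = convnext_tiny(weights=ConvNeXt_Tiny_Weights.DEFAULT)
backbone = model.features                # drop the avgpool + classifier head
backbone.eval()

x = torch.rand(1, 3, 224, 224)           # NCHW, ImageNet-sized input
with torch.no_grad():
    feats = backbone(x)                  # feature map, roughly (1, 768, 7, 7)
print(feats.shape)
```

A detection or segmentation head can then consume `feats` in place of the ImageNet classifier.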

- Model size: 28 MB
- Precision: w8a16 (8-bit weights, 16-bit activations)

| Model | Device | Chipset | Target Runtime | Inference Time (ms) | Peak Memory Range (MB) | Precision | Primary Compute Unit | Target Model |
|---|---|---|---|---|---|---|---|---|
| ConvNext-Tiny-w8a16-Quantized | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 | QNN | 3.622 ms | 0 - 116 MB | INT8 | NPU | [ConvNext-Tiny-w8a16-Quantized.so](https://huggingface.co/qualcomm/ConvNext-Tiny-w8a16-Quantized/blob/main/ConvNext-Tiny-w8a16-Quantized.so) |
| ConvNext-Tiny-w8a16-Quantized | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 | QNN | 2.61 ms | 0 - 35 MB | INT8 | NPU | [ConvNext-Tiny-w8a16-Quantized.so](https://huggingface.co/qualcomm/ConvNext-Tiny-w8a16-Quantized/blob/main/ConvNext-Tiny-w8a16-Quantized.so) |
| ConvNext-Tiny-w8a16-Quantized | RB3 Gen 2 (Proxy) | QCS6490 Proxy | QNN | 13.298 ms | 0 - 8 MB | INT8 | NPU | Use Export Script |
| ConvNext-Tiny-w8a16-Quantized | QCS8550 (Proxy) | QCS8550 Proxy | QNN | 3.178 ms | 0 - 1 MB | INT8 | NPU | Use Export Script |
| ConvNext-Tiny-w8a16-Quantized | SA8255 (Proxy) | SA8255P Proxy | QNN | 3.204 ms | 0 - 2 MB | INT8 | NPU | Use Export Script |
| ConvNext-Tiny-w8a16-Quantized | SA8775 (Proxy) | SA8775P Proxy | QNN | 3.204 ms | 0 - 2 MB | INT8 | NPU | Use Export Script |
| ConvNext-Tiny-w8a16-Quantized | SA8650 (Proxy) | SA8650P Proxy | QNN | 3.198 ms | 0 - 1 MB | INT8 | NPU | Use Export Script |
| ConvNext-Tiny-w8a16-Quantized | QCS8450 (Proxy) | QCS8450 Proxy | QNN | 4.241 ms | 0 - 41 MB | INT8 | NPU | Use Export Script |
| ConvNext-Tiny-w8a16-Quantized | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite | QNN | 2.406 ms | 0 - 35 MB | INT8 | NPU | Use Export Script |
| ConvNext-Tiny-w8a16-Quantized | Snapdragon X Elite CRD | Snapdragon® X Elite | QNN | 3.505 ms | 0 - 0 MB | INT8 | NPU | Use Export Script |
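
For rows that link a `.so` target, the compiled asset can also be profiled directly with the Qualcomm® AI Hub Python client rather than through the export script. A hedged sketch, assuming `qai-hub` is installed, an AI Hub API token is configured, and the asset has been downloaded locally (the local path and result key below are illustrative):

```python
# Hedged sketch: submit a profile job for the downloaded QNN model library
# on a cloud-hosted device, mirroring the measurements in the table above.
import qai_hub as hub

job = hub.submit_profile_job(
    model="ConvNext-Tiny-w8a16-Quantized.so",  # illustrative local path to the asset
    device=hub.Device("Samsung Galaxy S23"),   # device names as in the table
)
profile = job.download_profile()               # waits for the job to finish
print(profile["execution_summary"])            # illustrative key; inspect the dict
```

Entries marked "Use Export Script" are generated on demand by the export script shown below.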
## Installation

```bash
python -m qai_hub_models.models.convnext_tiny_w8a16_quantized.export
```
```
Profiling Results
------------------------------------------------------------
ConvNext-Tiny-w8a16-Quantized
Device                          : Samsung Galaxy S23 (13)
Runtime                         : QNN
Estimated inference time (ms)   : 3.6
Estimated peak memory usage (MB): [0, 116]
Total # Ops                     : 215
Compute Unit(s)                 : NPU (215 ops)
```
118 |
|
119 |
|
|
|
152 |
Get more details on ConvNext-Tiny-w8a16-Quantized's performance across various devices [here](https://aihub.qualcomm.com/models/convnext_tiny_w8a16_quantized).
|
153 |
Explore all available models on [Qualcomm® AI Hub](https://aihub.qualcomm.com/)

## License
* The license for the original implementation of ConvNext-Tiny-w8a16-Quantized can be found [here](https://github.com/pytorch/vision/blob/main/LICENSE).
* The license for the compiled assets for on-device deployment can be found [here](https://qaihub-public-assets.s3.us-west-2.amazonaws.com/qai-hub-models/Qualcomm+AI+Hub+Proprietary+License.pdf).

## References
* [A ConvNet for the 2020s](https://arxiv.org/abs/2201.03545)
* [Source Model Implementation](https://github.com/pytorch/vision/blob/main/torchvision/models/convnext.py)

## Community
* Join [our AI Hub Slack community](https://aihub.qualcomm.com/community/slack) to collaborate, post questions and learn more about on-device AI.
* For questions or feedback, please [reach out to us](mailto:[email protected]).