---
library_name: pytorch
license: mit
pipeline_tag: depth-estimation
tags:
- android

---

![](https://qaihub-public-assets.s3.us-west-2.amazonaws.com/qai-hub-models/models/depth_anything_v2/web-assets/model_demo.png)

# Depth-Anything-V2: Optimized for Mobile Deployment
## Deep Convolutional Neural Network model for depth estimation

Depth Anything is designed for estimating depth at each point in an image.

This model is an implementation of Depth-Anything-V2 found [here](https://github.com/huggingface/transformers/tree/main/src/transformers/models/depth_anything).

This repository provides scripts to run Depth-Anything-V2 on Qualcomm® devices.
More details on model performance across various devices can be found
[here](https://aihub.qualcomm.com/models/depth_anything_v2).

### Model Details

- **Model Type:** Depth estimation
- **Model Stats:**
  - Model checkpoint: DepthAnything_V2_Small
  - Input resolution: 518x518
  - Number of parameters: 24.8M
  - Model size: 94 MB

| Model | Device | Chipset | Target Runtime | Inference Time (ms) | Peak Memory Range (MB) | Precision | Primary Compute Unit | Target Model |
|---|---|---|---|---|---|---|---|---|
| Depth-Anything-V2 | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 | TFLITE | 329.135 ms | 0 - 86 MB | FP16 | NPU | [Depth-Anything-V2.tflite](https://huggingface.co/qualcomm/Depth-Anything-V2/blob/main/Depth-Anything-V2.tflite) |
| Depth-Anything-V2 | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 | QNN | 378.932 ms | 3 - 77 MB | FP16 | NPU | [Depth-Anything-V2.so](https://huggingface.co/qualcomm/Depth-Anything-V2/blob/main/Depth-Anything-V2.so) |
| Depth-Anything-V2 | Samsung Galaxy S23 | Snapdragon® 8 Gen 2 | ONNX | 227.297 ms | 0 - 63 MB | FP16 | NPU | [Depth-Anything-V2.onnx](https://huggingface.co/qualcomm/Depth-Anything-V2/blob/main/Depth-Anything-V2.onnx) |
| Depth-Anything-V2 | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 | TFLITE | 251.124 ms | 0 - 248 MB | FP16 | NPU | [Depth-Anything-V2.tflite](https://huggingface.co/qualcomm/Depth-Anything-V2/blob/main/Depth-Anything-V2.tflite) |
| Depth-Anything-V2 | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 | QNN | 287.78 ms | 3 - 262 MB | FP16 | NPU | [Depth-Anything-V2.so](https://huggingface.co/qualcomm/Depth-Anything-V2/blob/main/Depth-Anything-V2.so) |
| Depth-Anything-V2 | Samsung Galaxy S24 | Snapdragon® 8 Gen 3 | ONNX | 187.808 ms | 0 - 986 MB | FP16 | NPU | [Depth-Anything-V2.onnx](https://huggingface.co/qualcomm/Depth-Anything-V2/blob/main/Depth-Anything-V2.onnx) |
| Depth-Anything-V2 | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite | TFLITE | 236.199 ms | 1 - 271 MB | FP16 | NPU | [Depth-Anything-V2.tflite](https://huggingface.co/qualcomm/Depth-Anything-V2/blob/main/Depth-Anything-V2.tflite) |
| Depth-Anything-V2 | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite | QNN | 240.939 ms | 3 - 282 MB | FP16 | NPU | Use Export Script |
| Depth-Anything-V2 | Snapdragon 8 Elite QRD | Snapdragon® 8 Elite | ONNX | 156.392 ms | 0 - 519 MB | FP16 | NPU | [Depth-Anything-V2.onnx](https://huggingface.co/qualcomm/Depth-Anything-V2/blob/main/Depth-Anything-V2.onnx) |
| Depth-Anything-V2 | QCS8550 (Proxy) | QCS8550 Proxy | TFLITE | 329.447 ms | 0 - 47 MB | FP16 | NPU | [Depth-Anything-V2.tflite](https://huggingface.co/qualcomm/Depth-Anything-V2/blob/main/Depth-Anything-V2.tflite) |
| Depth-Anything-V2 | QCS8550 (Proxy) | QCS8550 Proxy | QNN | 240.33 ms | 4 - 5 MB | FP16 | NPU | Use Export Script |
| Depth-Anything-V2 | SA7255P ADP | SA7255P | TFLITE | 1138.667 ms | 0 - 268 MB | FP16 | NPU | [Depth-Anything-V2.tflite](https://huggingface.co/qualcomm/Depth-Anything-V2/blob/main/Depth-Anything-V2.tflite) |
| Depth-Anything-V2 | SA7255P ADP | SA7255P | QNN | 1011.986 ms | 2 - 12 MB | FP16 | NPU | Use Export Script |
| Depth-Anything-V2 | SA8255 (Proxy) | SA8255P Proxy | TFLITE | 331.644 ms | 0 - 48 MB | FP16 | NPU | [Depth-Anything-V2.tflite](https://huggingface.co/qualcomm/Depth-Anything-V2/blob/main/Depth-Anything-V2.tflite) |
| Depth-Anything-V2 | SA8255 (Proxy) | SA8255P Proxy | QNN | 231.243 ms | 4 - 5 MB | FP16 | NPU | Use Export Script |
| Depth-Anything-V2 | SA8295P ADP | SA8295P | TFLITE | 388.927 ms | 1 - 274 MB | FP16 | NPU | [Depth-Anything-V2.tflite](https://huggingface.co/qualcomm/Depth-Anything-V2/blob/main/Depth-Anything-V2.tflite) |
| Depth-Anything-V2 | SA8295P ADP | SA8295P | QNN | 280.479 ms | 6 - 12 MB | FP16 | NPU | Use Export Script |
| Depth-Anything-V2 | SA8650 (Proxy) | SA8650P Proxy | TFLITE | 329.235 ms | 1 - 56 MB | FP16 | NPU | [Depth-Anything-V2.tflite](https://huggingface.co/qualcomm/Depth-Anything-V2/blob/main/Depth-Anything-V2.tflite) |
| Depth-Anything-V2 | SA8650 (Proxy) | SA8650P Proxy | QNN | 229.608 ms | 4 - 5 MB | FP16 | NPU | Use Export Script |
| Depth-Anything-V2 | SA8775P ADP | SA8775P | TFLITE | 368.9 ms | 1 - 269 MB | FP16 | NPU | [Depth-Anything-V2.tflite](https://huggingface.co/qualcomm/Depth-Anything-V2/blob/main/Depth-Anything-V2.tflite) |
| Depth-Anything-V2 | SA8775P ADP | SA8775P | QNN | 264.09 ms | 3 - 13 MB | FP16 | NPU | Use Export Script |
| Depth-Anything-V2 | QCS8450 (Proxy) | QCS8450 Proxy | TFLITE | 361.044 ms | 1 - 261 MB | FP16 | NPU | [Depth-Anything-V2.tflite](https://huggingface.co/qualcomm/Depth-Anything-V2/blob/main/Depth-Anything-V2.tflite) |
| Depth-Anything-V2 | QCS8450 (Proxy) | QCS8450 Proxy | QNN | 439.422 ms | 2 - 278 MB | FP16 | NPU | Use Export Script |
| Depth-Anything-V2 | Snapdragon X Elite CRD | Snapdragon® X Elite | QNN | 221.375 ms | 3 - 3 MB | FP16 | NPU | Use Export Script |
| Depth-Anything-V2 | Snapdragon X Elite CRD | Snapdragon® X Elite | ONNX | 272.662 ms | 62 - 62 MB | FP16 | NPU | [Depth-Anything-V2.onnx](https://huggingface.co/qualcomm/Depth-Anything-V2/blob/main/Depth-Anything-V2.onnx) |

## Installation

This model can be installed as a Python package via pip.

```bash
pip install "qai-hub-models[depth_anything_v2]"
```

## Configure Qualcomm® AI Hub to run this model on a cloud-hosted device

Sign in to [Qualcomm® AI Hub](https://app.aihub.qualcomm.com/) with your
Qualcomm® ID. Once signed in, navigate to `Account -> Settings -> API Token`.

With this API token, you can configure your client to run models on
cloud-hosted devices.
```bash
qai-hub configure --api_token API_TOKEN
```
See the [docs](https://app.aihub.qualcomm.com/docs/) for more information.

## Demo off target

The package contains a simple end-to-end demo that downloads pre-trained
weights and runs this model on a sample input.

```bash
python -m qai_hub_models.models.depth_anything_v2.demo
```

The above demo runs a reference implementation of pre-processing, model
inference, and post-processing.

**NOTE**: If you want to run this in a Jupyter Notebook or Google Colab-like
environment, add the following to your cell instead of the above command.
```
%run -m qai_hub_models.models.depth_anything_v2.demo
```

### Run model on a cloud-hosted device

In addition to the demo, you can also run the model on a cloud-hosted Qualcomm®
device. This script does the following:
* Profiles the model on a cloud-hosted device.
* Downloads compiled assets that can be deployed on-device for Android.
* Checks accuracy between PyTorch and on-device outputs.

```bash
python -m qai_hub_models.models.depth_anything_v2.export
```
```
Profiling Results
------------------------------------------------------------
Depth-Anything-V2
Device                          : Samsung Galaxy S23 (13)
Runtime                         : TFLITE
Estimated inference time (ms)   : 329.1
Estimated peak memory usage (MB): [0, 86]
Total # Ops                     : 635
Compute Unit(s)                 : NPU (635 ops)
```

## How does this work?

This [export script](https://aihub.qualcomm.com/models/depth_anything_v2/qai_hub_models/models/Depth-Anything-V2/export.py)
leverages [Qualcomm® AI Hub](https://aihub.qualcomm.com/) to optimize, validate, and deploy this model
on-device. Let's go through each step below in detail:

Step 1: **Compile model for on-device deployment**

To compile a PyTorch model for on-device deployment, we first trace the model
in memory using `torch.jit.trace` and then call the `submit_compile_job` API.

```python
import torch

import qai_hub as hub
from qai_hub_models.models.depth_anything_v2 import Model

# Load the model
torch_model = Model.from_pretrained()

# Device
device = hub.Device("Samsung Galaxy S23")

# Trace model
input_shape = torch_model.get_input_spec()
sample_inputs = torch_model.sample_inputs()

pt_model = torch.jit.trace(torch_model, [torch.tensor(data[0]) for _, data in sample_inputs.items()])

# Compile model on a specific device
compile_job = hub.submit_compile_job(
    model=pt_model,
    device=device,
    input_specs=input_shape,
)

# Get target model to run on-device
target_model = compile_job.get_target_model()
```


Step 2: **Performance profiling on cloud-hosted device**

After compiling the model in Step 1, it can be profiled on-device using the
`target_model`. Note that this script runs the model on a device automatically
provisioned in the cloud. Once the job is submitted, you can navigate to the
provided job URL to view a variety of on-device performance metrics.
```python
profile_job = hub.submit_profile_job(
    model=target_model,
    device=device,
)
```


Step 3: **Verify on-device accuracy**

To verify the accuracy of the model on-device, you can run on-device inference
on sample input data on the same cloud-hosted device.
```python
input_data = torch_model.sample_inputs()
inference_job = hub.submit_inference_job(
    model=target_model,
    device=device,
    inputs=input_data,
)
on_device_output = inference_job.download_output_data()
```
With the output of the model, you can compute metrics like PSNR and relative
error, or spot-check the output against the expected output.

**Note**: This on-device profiling and inference requires access to Qualcomm®
AI Hub. [Sign up for access](https://myaccount.qualcomm.com/signup).

## Run demo on a cloud-hosted device

You can also run the demo on-device.

```bash
python -m qai_hub_models.models.depth_anything_v2.demo --on-device
```

**NOTE**: If you want to run this in a Jupyter Notebook or Google Colab-like
environment, add the following to your cell instead of the above command.
```
%run -m qai_hub_models.models.depth_anything_v2.demo -- --on-device
```

## Deploying compiled model to Android

The models can be deployed using multiple runtimes:
- TensorFlow Lite (`.tflite` export): [This
  tutorial](https://www.tensorflow.org/lite/android/quickstart) provides a
  guide to deploy the .tflite model in an Android application.

- QNN (`.so` export): This [sample
  app](https://docs.qualcomm.com/bundle/publicresource/topics/80-63442-50/sample_app.html)
  provides instructions on how to use the `.so` shared library in an Android application.

## View on Qualcomm® AI Hub
Get more details on Depth-Anything-V2's performance across various devices [here](https://aihub.qualcomm.com/models/depth_anything_v2).
Explore all available models on [Qualcomm® AI Hub](https://aihub.qualcomm.com/).

## License
* The license for the original implementation of Depth-Anything-V2 can be found [here](https://github.com/huggingface/transformers/blob/main/LICENSE).
* The license for the compiled assets for on-device deployment can be found [here](https://qaihub-public-assets.s3.us-west-2.amazonaws.com/qai-hub-models/Qualcomm+AI+Hub+Proprietary+License.pdf).

## References
* [Depth Anything V2](https://arxiv.org/abs/2406.09414)
* [Source Model Implementation](https://github.com/huggingface/transformers/tree/main/src/transformers/models/depth_anything)

## Community
* Join [our AI Hub Slack community](https://aihub.qualcomm.com/community/slack) to collaborate, post questions, and learn more about on-device AI.
* For questions or feedback, please [reach out to us](mailto:[email protected]).