langgz committed
Commit c9453e7 · Parent(s): fbd3bc9

Update README.md

Files changed (1): README.md +15 -4
README.md CHANGED
@@ -4,13 +4,17 @@ license: apache-2.0
 
 ## Install the `funasr_onnx`
 
-install from pip
 ```shell
 pip install -U funasr_onnx
 # For the users in China, you could install with the command:
 # pip install -U funasr_onnx -i https://mirror.sjtu.edu.cn/pypi/web/simple
 ```
 
+## Download the model
+
+```shell
+git clone https://huggingface.co/funasr/paraformer-large
+```
+
 ## Inference with runtime
 
@@ -27,9 +31,16 @@ pip install -U funasr_onnx
 result = model(wav_path)
 print(result)
 ```
-- Model_dir: the model path, which contains `model.onnx`, `config.yaml`, `am.mvn`
-- Input: wav formt file, support formats: `str, np.ndarray, List[str]`
-- Output: `List[str]`: recognition result
+- `model_dir`: the model path, which contains `model.onnx`, `config.yaml`, `am.mvn`
+- `batch_size`: `1` (default), the batch size used during inference
+- `device_id`: `-1` (default), infer on CPU; to infer on GPU, set it to the GPU id (make sure you have installed onnxruntime-gpu)
+- `quantize`: `False` (default), load `model.onnx` from `model_dir`; if set to `True`, load `model_quant.onnx` from `model_dir`
+- `intra_op_num_threads`: `4` (default), the number of threads used for intra-op parallelism on CPU
+
+Input: wav file, supported types: `str, np.ndarray, List[str]`
+
+Output: `List[str]`: recognition result
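For readers skimming the commit, the parameter list added above corresponds to the runtime's constructor options. The following is a minimal sketch of how they fit together, assuming the `Paraformer` class shipped by `funasr_onnx` and a local clone of this repo; the paths are illustrative placeholders:

```python
# Minimal sketch of the documented runtime options (paths are placeholders).
from funasr_onnx import Paraformer

# The cloned repo serves as model_dir; per the parameter list above it
# contains model.onnx, config.yaml, and am.mvn.
model_dir = "./paraformer-large"

model = Paraformer(
    model_dir,
    batch_size=1,            # default batch size for inference
    device_id=-1,            # -1 = CPU; a GPU id requires onnxruntime-gpu
    quantize=False,          # True loads model_quant.onnx instead of model.onnx
    intra_op_num_threads=4,  # threads for intra-op parallelism on CPU
)

# Input may be a path string, an np.ndarray, or a list of path strings.
wav_path = "./example.wav"

result = model(wav_path)  # List[str] with the recognition result
print(result)
```

Switching to the quantized model is a one-argument change (`quantize=True`); everything else, including the output format, stays the same.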