---
language:
- ja
---

Please see [dahara1/Qwen2.5-3B-Instruct-gguf-japanese-imatrix-128K](https://huggingface.co/dahara1/Qwen2.5-3B-Instruct-gguf-japanese-imatrix-128K).

Using this model together with a new technique called speculative decoding, you can speed up inference of larger models.

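To build intuition for why this helps, here is a toy calculation. The acceptance count used below is an illustrative assumption, not a measurement from this setup:

```shell
# Speculative decoding: the small draft model proposes a batch of tokens
# (up to --draft-max of them) and the large target model verifies them in
# one forward pass, accepting the longest matching prefix.
# If, say, 3 drafted tokens are accepted on average per verification step,
# each expensive target-model pass yields about 4 tokens instead of 1.
accepted=3                            # illustrative assumption
tokens_per_pass=$((accepted + 1))     # accepted drafts + 1 token from the verify pass
echo "~${tokens_per_pass} tokens per target-model forward pass"
```

The real acceptance rate depends on how closely the draft model's predictions match the target model; the timings below show the net effect on my hardware.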
## CUDA example

### Example command to start the server with speculative decoding

```
CUDA_VISIBLE_DEVICES=0 ./llama.cpp/llama.cpp/build/bin/llama-server \
    -m ./llama.cpp/qwen/32B/Qwen2.5-32B-Instruct-Q8_0-f16.gguf \
    -md ./llama.cpp/qwen/Qwen2.5-0.5B-Instruct-Q8_0-f16.gguf \
    -ngl 10 -ngld 10 -e --temp 0 -fa -c 4096 \
    --draft-max 16 --draft-min 5
```

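For reference, here is my understanding of the key flags in the command above; check `llama-server --help` in your own build, since options occasionally change:

```shell
# Flag glossary (my reading of the llama.cpp documentation, not authoritative):
#   -m           path to the target (large) model
#   -md          path to the draft (small) model used for speculation
#   -ngl         target-model layers to offload to the GPU
#   -ngld        draft-model layers to offload to the GPU
#   --draft-max  maximum number of tokens to draft per step
#   --draft-min  minimum number of tokens to draft per step
#   -fa          enable Flash Attention
#   -c           context size in tokens
#   --temp 0     greedy sampling (tends to help the draft and target agree)
```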
My test prompt execution time: 2520.65 seconds

### Normal server command (no speculative decoding)

```
CUDA_VISIBLE_DEVICES=0 ./llama.cpp/llama.cpp/build/bin/llama-server \
    -m ./llama.cpp/qwen/32B/Qwen2.5-32B-Instruct-Q8_0-f16.gguf \
    -ngl 10 -e --temp 0 -fa -c 4096
```

My test prompt execution time: 3240.36 seconds

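Comparing the two measured times above, a quick one-liner gives the speedup from speculative decoding on this prompt:

```shell
# 3240.36 s (normal) vs. 2520.65 s (speculative decoding), same prompt.
awk 'BEGIN { printf "%.2fx\n", 3240.36 / 2520.65 }'   # prints 1.29x
```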
### Server command without Flash Attention

```
CUDA_VISIBLE_DEVICES=0 ./llama.cpp/llama.cpp/build/bin/llama-server \
    -m ./llama.cpp/qwen/32B/Qwen2.5-32B-Instruct-Q8_0-f16.gguf \
    -ngl 10 -e --temp 0 -c 4096
```

My test prompt execution time: 3285.17 seconds

### A version using Qwen2.5-0.5B-Instruct-Q4_K_L as the draft model, with further GPU-memory optimization

```
CUDA_VISIBLE_DEVICES=0 ./llama.cpp/llama.cpp/build/bin/llama-server \
    -m ./llama.cpp/qwen/32B/Qwen2.5-32B-Instruct-Q8_0-f16.gguf \
    -md ./llama.cpp/qwen/Qwen2.5-0.5B-Instruct-Q4_K_L.gguf \
    -ngl 20 -ngld 99 -e --temp 0 -fa -c 4096 \
    --draft-max 16 --draft-min 5
```

My test prompt execution time: 2173.36 seconds

### CUDA device not specified

```
./llama.cpp/llama.cpp/build/bin/llama-server \
    -m ./llama.cpp/qwen/32B/Qwen2.5-32B-Instruct-Q8_0-f16.gguf \
    -e --temp 0 -fa -c 4096
```

My test prompt execution time: 3787.47 seconds

### Current fastest on an RTX 4060 Ti (16 GB)

```
CUDA_VISIBLE_DEVICES=0 ./llama.cpp/llama.cpp/build/bin/llama-server \
    -m ./llama.cpp/qwen/32B/Qwen2.5-32B-Instruct-Q8_0-f16.gguf \
    -md ./llama.cpp/qwen/Qwen2.5-0.5B-Instruct-IQ3_XXS.gguf \
    -ngl 25 -ngld 99 -e --temp 0 -fa -c 1800 \
    --draft-max 16 --draft-min 5
```

My test prompt execution time: 2130.14 seconds

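Against the slowest run above (no CUDA device specified, 3787.47 seconds), the fastest configuration works out to:

```shell
# 3787.47 s (no CUDA device specified) vs. 2130.14 s (fastest 4060 Ti run).
awk 'BEGIN {
    slow = 3787.47; fast = 2130.14
    printf "%.2fx faster, %.0f seconds saved\n", slow / fast, slow - fast
}'   # prints: 1.78x faster, 1657 seconds saved
```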
Note that even at temperature 0, I have confirmed cases where the output differs slightly from running the model on its own, so be careful if reproducibility is your top priority.

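When you need runs to be as repeatable as possible, pinning the sampling parameters in the request can help, though as noted above small differences may remain. A sketch of such a request follows; it assumes llama-server is listening on its default `127.0.0.1:8080`, and the field names follow the llama.cpp server's `/completion` API:

```shell
# Hypothetical reproducibility-oriented request: greedy sampling (temperature 0)
# plus an explicit seed. Adjust host/port if you changed the server defaults.
PAYLOAD='{"prompt": "こんにちは", "n_predict": 64, "temperature": 0, "seed": 42}'
echo "$PAYLOAD" | python3 -c 'import json, sys; json.load(sys.stdin)'  # validate JSON
curl -s http://127.0.0.1:8080/completion \
     -H 'Content-Type: application/json' \
     -d "$PAYLOAD" || echo "server not reachable"
```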
That said, even with the IQ3 draft model the differences were limited to slightly different sentence endings; nothing was large enough to change the conclusions.

See [dahara1/Qwen2.5-3B-Instruct-gguf-japanese-imatrix-128K](https://huggingface.co/dahara1/Qwen2.5-3B-Instruct-gguf-japanese-imatrix-128K) for an example client script.

For more details on these command options, see the official [llama.cpp pull request](https://github.com/ggerganov/llama.cpp/pull/10455).