namespace-Pt committed
Commit ba248bd · verified · Parent(s): 0066775

Upload folder using huggingface_hub

Files changed (1): README.md (+0 −2)
README.md CHANGED
@@ -13,8 +13,6 @@ We extend the context length of Llama-3-8B-Instruct to 80K using QLoRA and 3.5K
 
 **NOTE**: This repo contains the quantized model of [namespace-Pt/Llama-3-8B-Instruct-80K-QLoRA-Merged](https://huggingface.co/namespace-Pt/Llama-3-8B-Instruct-80K-QLoRA-Merged). The quantization is conducted with [llama.cpp](https://github.com/ggerganov/llama.cpp) (Q4_K_M and Q8_0).
 
-# Evaluation
-
 All the following evaluation results are based on the [UNQUANTIZED MODEL](https://huggingface.co/namespace-Pt/Llama-3-8B-Instruct-80K-QLoRA-Merged). They can be reproduced following instructions [here](https://github.com/FlagOpen/FlagEmbedding/tree/master/Long_LLM/longllm_qlora).
 
 **NOTE**: After quantization, you may observe quality degradation.
 
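For reference, the Q4_K_M GGUF produced by the llama.cpp quantization mentioned above can be loaded with the llama-cpp-python bindings. This is a minimal sketch, not part of the repo; the file name and `n_ctx` value are illustrative assumptions.

```python
# Minimal sketch (assumption): load the Q4_K_M GGUF with llama-cpp-python.
# The file name and n_ctx are illustrative, not taken from the repo.
from llama_cpp import Llama

llm = Llama(
    model_path="Llama-3-8B-Instruct-80K-QLoRA-Merged.Q4_K_M.gguf",  # local GGUF path (assumed name)
    n_ctx=32768,  # long-context window; the unquantized model targets 80K tokens
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize the key points of the attached report."}],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```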