---
base_model: BrunoGR/Just_HEAR_Me
datasets:
- BrunoGR/HEAR-Hispanic_Emotional_Accompaniment_Responses
- BrunoGR/HRECPW-Hispanic_Responses_for_Emotional_Classification_based_on_Plutchik_Wheel
language:
- es
library_name: transformers
license: mit
pipeline_tag: text-generation
tags:
- Emotional
- Emotional Support
- Emotional Accompaniment
- chatbot
- llama-cpp
- gguf-my-repo
---

# BrunoGR/Just_HEAR_Me-Q4_K_M-GGUF
This model was converted to GGUF format from [`BrunoGR/Just_HEAR_Me`](https://huggingface.co/BrunoGR/Just_HEAR_Me) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/BrunoGR/Just_HEAR_Me) for more details on the model.

## Use with llama.cpp
Install llama.cpp via Homebrew (works on macOS and Linux):

```bash
brew install llama.cpp
```
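To confirm the install succeeded, a quick check (assuming the brew formula puts `llama-cli` on your `PATH`):

```bash
# Print the llama.cpp build info; any output means the binary is reachable.
llama-cli --version
```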
Invoke the llama.cpp server or the CLI.

### CLI:
```bash
llama-cli --hf-repo BrunoGR/Just_HEAR_Me-Q4_K_M-GGUF --hf-file just_hear_me-q4_k_m.gguf -p "The meaning to life and the universe is"
```
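Since this model is a Spanish emotional accompaniment chatbot, an interactive chat session is likely a better fit than the one-shot English prompt above. A minimal sketch, assuming a recent llama.cpp build where `-cnv` enables conversation mode and `-p` supplies the system prompt; the Spanish prompt itself is illustrative, not the model's official one:

```bash
# Start an interactive chat (conversation mode) with an illustrative Spanish system prompt.
llama-cli --hf-repo BrunoGR/Just_HEAR_Me-Q4_K_M-GGUF --hf-file just_hear_me-q4_k_m.gguf \
  -cnv -p "Eres un acompañante emocional empático que escucha y responde en español."
```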

### Server:
```bash
llama-server --hf-repo BrunoGR/Just_HEAR_Me-Q4_K_M-GGUF --hf-file just_hear_me-q4_k_m.gguf -c 2048
```
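Once the server is up, you can query its OpenAI-compatible chat endpoint. A minimal sketch with `curl` against the default port (8080); the message content is illustrative:

```bash
# Send one chat turn to the llama-server OpenAI-compatible endpoint (default port 8080).
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "messages": [
      {"role": "user", "content": "Hoy me siento un poco triste y necesito hablar con alguien."}
    ],
    "temperature": 0.7
  }'
```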

Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.

Step 1: Clone llama.cpp from GitHub.
```bash
git clone https://github.com/ggerganov/llama.cpp
```

Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any hardware-specific flags (e.g., `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```bash
cd llama.cpp && LLAMA_CURL=1 make
```
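If `make` fails, note that newer llama.cpp revisions have moved to a CMake build. A rough equivalent, with flag names assumed from the current llama.cpp build docs (binaries land under `build/bin/`):

```bash
# CMake-based build; -DGGML_CUDA=ON is the assumed CUDA switch in recent revisions.
cmake -B build -DLLAMA_CURL=ON
cmake --build build --config Release
```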

Step 3: Run inference through the main binary.
```bash
./llama-cli --hf-repo BrunoGR/Just_HEAR_Me-Q4_K_M-GGUF --hf-file just_hear_me-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```bash
./llama-server --hf-repo BrunoGR/Just_HEAR_Me-Q4_K_M-GGUF --hf-file just_hear_me-q4_k_m.gguf -c 2048
```

## Citation

If you use Sólo Escúchame (BrunoGR/Just_HEAR_Me) in your research, please cite the following paper:

```bibtex
@misc{ramírez2024soloescuchamespanishemotional,
      title={S\'olo Esc\'uchame: Spanish Emotional Accompaniment Chatbot}, 
      author={Bruno Gil Ramírez and Jessica López Espejel and María del Carmen Santiago Díaz and Gustavo Trinidad Rubín Linares},
      year={2024},
      eprint={2408.01852},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2408.01852}, 
}
```