---
tags:
- autotrain
- text-generation-inference
- text-generation
- peft
library_name: transformers
base_model: meta-llama/Meta-Llama-3-8B-Instruct
widget:
  - messages:
      - role: user
        content: What is your favorite condiment?
license: other
---

# Model Trained Using AutoTrain

This model was trained using AutoTrain. For more information, please visit [AutoTrain](https://hf.co/docs/autotrain).

# Usage

```python
from transformers import pipeline

# Load the model as a chat-style text-generation pipeline
pipe = pipeline("text-generation", model="MadKFC/CPsyCode", device_map="auto")

messages = [
    {"role": "user", "content": "I'm stupid!"},
    {"role": "assistant", "content": "I'm so sorry to hear you're feeling that way! It's completely normal to have moments where we doubt ourselves or feel uncertain. But here's the thing: being 'stupid' is not a fact, it's an opinion. And opinions are subjective and can be influenced by many factors, including our own self-talk and the opinions of others.\n\nYou are not stupid. You are capable, unique, and deserving of kindness and compassion - from yourself and others. It's okay to make mistakes, and it's okay to not know something. That's how we learn and grow!\n\nWould you like to talk about what's going on"},
    {"role": "user", "content": "I made a big mistake in the lab"},
]

answer = pipe(messages)

# Print the assistant's latest response
print(answer[0]["generated_text"][-1])
```
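
If you prefer to work with the model and tokenizer directly rather than through `pipeline`, a sketch along the following lines should also work. It assumes the repository either contains merged weights or a PEFT adapter that recent `transformers` versions can resolve automatically when `peft` is installed; the generation parameters are illustrative and not taken from the training configuration.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "MadKFC/CPsyCode"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # assumed dtype; adjust to your hardware
    device_map="auto",
)

messages = [
    {"role": "user", "content": "I made a big mistake in the lab"},
]

# Build the prompt with the model's chat template and generate a reply
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(
    input_ids,
    max_new_tokens=256,      # illustrative values, not from the training config
    do_sample=True,
    temperature=0.7,
)

# Decode only the newly generated tokens
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```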