Update README.md
README.md CHANGED

@@ -122,7 +122,7 @@ while True:
         num_return_sequences=1,
         do_sample=True,
         top_k=50,
-        temperature=0.
+        temperature=0.2,
         top_p=0.90
     )

@@ -149,14 +149,3 @@ While the model performs well in general chat scenarios, it may encounter challe
 ## Additional Disclaimer

 Please note that this model has not been specifically aligned using techniques such as Direct Preference Optimization (DPO) or similar methodologies. While the model has been fine-tuned to perform well in chat-based tasks, its responses are not guaranteed to reflect human-aligned preferences or ethical guidelines. Use with caution in sensitive or critical applications.
-
-
-## Citation
-If you use this model in your work, please cite as:
-
-@article{ChatGPT2.V2,
-  title={ChatGPT-2.V2: Fine-Tuned Conversational AI Model},
-  author={Aquilax Team},
-  year={2024},
-  note={https://huggingface.co/suriya7/ChatGPT-2.V2}
-}
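The parameter fix in the first hunk is worth noting: with `do_sample=True`, sampling divides the logits by `temperature` before the softmax, so the old `temperature=0.` is degenerate (a straightforward implementation divides by zero), which is presumably why the diff changes it to `0.2`. A toy sketch of the temperature → top-k → top-p pipeline these parameters configure (a simplified stand-in for illustration, not the `transformers` internals; the function name and logits are made up):

```python
import math
import random

def sample_next_token(logits, temperature=0.2, top_k=50, top_p=0.90, seed=None):
    """Toy temperature -> top-k -> top-p (nucleus) sampling over raw logits.

    Illustrates why temperature must be > 0: the old value 0. from the
    diff would divide by zero in the scaling step below.
    """
    if temperature <= 0:
        raise ValueError("temperature must be > 0")
    # Temperature scaling: lower temperature sharpens the distribution.
    scaled = [l / temperature for l in logits]
    # Numerically stabilized softmax.
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [(i, e / total) for i, e in enumerate(exps)]
    # Top-k: keep only the k most probable tokens.
    probs.sort(key=lambda t: t[1], reverse=True)
    probs = probs[:top_k]
    # Top-p: keep the smallest prefix whose cumulative mass reaches top_p.
    kept, cum = [], 0.0
    for i, p in probs:
        kept.append((i, p))
        cum += p
        if cum >= top_p:
            break
    # Renormalize over the surviving tokens and draw one.
    z = sum(p for _, p in kept)
    r = random.Random(seed).random() * z
    for i, p in kept:
        r -= p
        if r <= 0:
            return i
    return kept[-1][0]
```

With a sharply peaked logit vector and `temperature=0.2`, the nucleus collapses to the top token, which is the intended effect of a low temperature in the chat loop above.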