---
license: llama2
language:
- en
pipeline_tag: text-generation
inference: false
tags:
- pytorch
- storywriting
- finetuned
- not-for-all-audiences
base_model: KoboldAI/LLaMA2-13B-Psyfighter2
model_type: llama
prompt_template: |
  Below is an instruction that describes a task. Write a response that appropriately completes the request.

  ### Instruction:
  {prompt}

  ### Response:
---
# Model Card for Psyfighter2-13B-vore
This model is a version of [KoboldAI/LLaMA2-13B-Psyfighter2](https://huggingface.co/KoboldAI/LLaMA2-13B-Psyfighter2) fine-tuned to better understand vore context. Its primary purpose is to serve as a storywriting assistant, a conversational model in a chat, and an interactive choose-your-own-adventure text game.

The Adventure Mode is still a work in progress and will be added later.

This is the FP16-precision version of the model, intended for merging and fine-tuning. To use the model, please see the quantized version and the instructions here: [SnakyMcSnekFace/Psyfighter2-13B-vore-GGUF](https://huggingface.co/SnakyMcSnekFace/Psyfighter2-13B-vore-GGUF)
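The prompt template declared in the metadata above follows the Alpaca instruct format. A minimal sketch of filling it in is shown below; the `build_prompt` helper is purely illustrative and not part of any published inference code.

```python
def build_prompt(instruction: str) -> str:
    """Wrap a user instruction in the Alpaca-style template from the model card metadata."""
    return (
        "Below is an instruction that describes a task. "
        "Write a response that appropriately completes the request.\n\n"
        "### Instruction:\n"
        f"{instruction}\n\n"
        "### Response:\n"
    )

print(build_prompt("Write a short story about a curious snake."))
```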
## Model Details
The model behaves similarly to [KoboldAI/LLaMA2-13B-Psyfighter2](https://huggingface.co/KoboldAI/LLaMA2-13B-Psyfighter2), from which it was derived. Please see that model's README.md to learn more.
## Updates
- 06/02/2024 - fixed errors in training and merging, significantly improving the overall prose quality
- 05/25/2024 - updated training process, making the model more coherent and improving the writing quality
- 04/13/2024 - uploaded the first version of the model
## Bias, Risks, and Limitations
By design, this model has a strong vorny bias. It is not intended for use by anyone under 18 years of age.
## Training Details
The model was fine-tuned using a rank-stabilized QLoRA adapter. Training was performed with the Unsloth AI library on Ubuntu 22.04.4 LTS with CUDA 12.1 and PyTorch 2.3.0.

The total training time on an NVIDIA GeForce RTX 4060 Ti is about 24 hours.
After training, the adapter weights were merged into the dequantized model as described in ChrisHayduk's GitHub gist.
The quantized version of the model was prepared using llama.cpp.
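For illustration, the adapter-merge step could look roughly like the simplified sketch below, which merges the LoRA weights into an FP16 copy of the base model with PEFT. The linked gist describes the full dequantize-then-merge procedure for the 4-bit base, which this sketch does not reproduce; the adapter and output paths are placeholders.

```python
# Simplified merge sketch (assumes an FP16 base model rather than the
# dequantized 4-bit base described in the gist).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

BASE = "KoboldAI/LLaMA2-13B-Psyfighter2"
ADAPTER = "path/to/qlora-adapter"      # hypothetical local adapter path
OUTPUT = "Psyfighter2-13B-vore-fp16"   # hypothetical output directory

base_model = AutoModelForCausalLM.from_pretrained(BASE, torch_dtype=torch.float16)
merged = PeftModel.from_pretrained(base_model, ADAPTER).merge_and_unload()

merged.save_pretrained(OUTPUT)
AutoTokenizer.from_pretrained(BASE).save_pretrained(OUTPUT)
```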
### LoRA adapter configuration
- Rank: 128
- Alpha: 16
- Dropout rate: 0.1
- Target weights: `["q_proj", "k_proj", "o_proj", "gate_proj", "up_proj"]`
- Rank stabilization: `use_rslora=True`
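These settings correspond roughly to the Unsloth configuration sketch below. The model name and sequence length are taken from this card; the exact call is an approximation, not the actual training script.

```python
from unsloth import FastLanguageModel

# Load the 4-bit quantized base model (QLoRA setup).
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="KoboldAI/LLaMA2-13B-Psyfighter2",
    max_seq_length=4096,
    load_in_4bit=True,
)

# Attach the rank-stabilized LoRA adapter with the parameters listed above.
model = FastLanguageModel.get_peft_model(
    model,
    r=128,                      # adapter rank
    lora_alpha=16,
    lora_dropout=0.1,
    target_modules=["q_proj", "k_proj", "o_proj", "gate_proj", "up_proj"],
    use_rslora=True,            # rank-stabilized LoRA
)
```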
### Domain adaptation
The initial training phase consists of fine-tuning the adapter on ~55 MiB of free-form text containing stories focused on the vore theme. The text is broken into paragraphs, which are aggregated into training samples of 4096 tokens or less, without crossing document boundaries. Each sample starts with a BOS token (with its `attention_mask` set to 0) and ends with an EOS token. Paragraph breaks are normalized to always consist of two line breaks.
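A sketch of this packing scheme, assuming a Hugging Face tokenizer and plain-text documents, is shown below; it approximates the described procedure rather than reproducing the actual data pipeline.

```python
import re
from transformers import AutoTokenizer

MAX_LEN = 4096
tokenizer = AutoTokenizer.from_pretrained("KoboldAI/LLaMA2-13B-Psyfighter2")

def pack_document(text: str) -> list[dict]:
    """Pack one story's paragraphs into samples of at most MAX_LEN tokens,
    never crossing the document boundary."""
    # Normalize paragraph breaks to exactly two line breaks.
    paragraphs = [p.strip() for p in re.split(r"\n{2,}", text) if p.strip()]
    sep_ids = tokenizer.encode("\n\n", add_special_tokens=False)

    samples = []
    ids, mask = [tokenizer.bos_token_id], [0]   # BOS present but not attended to
    for para in paragraphs:
        para_ids = tokenizer.encode(para, add_special_tokens=False)
        added = (sep_ids if len(ids) > 1 else []) + para_ids
        # Start a new sample if this paragraph would overflow the token budget.
        # (Paragraphs longer than MAX_LEN on their own are not handled here.)
        if len(ids) + len(added) + 1 > MAX_LEN and len(ids) > 1:
            samples.append({"input_ids": ids + [tokenizer.eos_token_id],
                            "attention_mask": mask + [1]})
            ids, mask = [tokenizer.bos_token_id], [0]
            added = para_ids
        ids += added
        mask += [1] * len(added)
    if len(ids) > 1:
        samples.append({"input_ids": ids + [tokenizer.eos_token_id],
                        "attention_mask": mask + [1]})
    return samples
```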
#### Dataset pre-processing
The raw-text stories in the dataset were edited as follows (a rough sketch of the mechanical steps appears after the list):
- titles, forewords, tags, and anything else not comprising the text of the story are removed
- non-ASCII characters and chapter separators are removed
- stories mentioning underage personas in any context are deleted
- names of private characters are randomized
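The mechanical parts of this clean-up (separator removal, non-ASCII stripping, name substitution) might look like the sketch below. The replacement-name pool and separator pattern are illustrative, and the content-based filtering steps are handled separately.

```python
import random
import re

# Illustrative values; the real pipeline uses its own name pool and patterns.
REPLACEMENT_NAMES = ["Alex", "Robin", "Casey", "Jordan"]
CHAPTER_SEPARATOR = re.compile(r"^\s*(?:\*{3,}|-{3,})\s*$", re.MULTILINE)

def clean_story(text: str, private_names: list[str]) -> str:
    # Drop chapter separator lines such as "***" or "---".
    text = CHAPTER_SEPARATOR.sub("", text)
    # Strip non-ASCII characters.
    text = text.encode("ascii", errors="ignore").decode("ascii")
    # Replace each private character's name with one randomly chosen
    # substitute, applied consistently throughout the story.
    for name in private_names:
        text = re.sub(rf"\b{re.escape(name)}\b", random.choice(REPLACEMENT_NAMES), text)
    return text
```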
#### Training parameters
- Max. sequence length: 4096 tokens
- Samples per epoch: 5085
- Number of epochs: 2
- Learning rate: 1e-4
- Warmup: 64 steps
- LR Schedule: linear
- Batch size: 1
- Gradient accumulation steps: 1
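Under these settings, the trainer configuration could look roughly like the sketch below, using Hugging Face `TrainingArguments` and `Trainer` with a causal-LM collator; the actual script may instead use TRL's `SFTTrainer` or Unsloth's wrappers, and the output directory and precision flag are assumptions.

```python
from transformers import (DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

tokenizer.pad_token = tokenizer.eos_token  # Llama tokenizers have no pad token by default

training_args = TrainingArguments(
    output_dir="psyfighter2-13b-vore-qlora",  # hypothetical output directory
    num_train_epochs=2,
    learning_rate=1e-4,
    lr_scheduler_type="linear",
    warmup_steps=64,
    per_device_train_batch_size=1,
    gradient_accumulation_steps=1,
    fp16=True,                                # assumed mixed-precision setting
)

trainer = Trainer(
    model=model,                  # PEFT model from the adapter configuration sketch
    args=training_args,
    train_dataset=train_dataset,  # packed samples from the domain-adaptation step
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```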
### Adventure mode SFT
TBD
### Adventure mode KTO
TBD