---
license: llama3.1
---

# Llama-3.1-8B-ArliAI-RPMax-v1.3

## RPMax Series Overview

v1.1 = [2B](https://huggingface.co/ArliAI/Gemma-2-2B-ArliAI-RPMax-v1.1) | [3.8B](https://huggingface.co/ArliAI/Phi-3.5-mini-3.8B-ArliAI-RPMax-v1.1) | [8B](https://huggingface.co/ArliAI/Llama-3.1-8B-ArliAI-RPMax-v1.1) | [9B](https://huggingface.co/ArliAI/Gemma-2-9B-ArliAI-RPMax-v1.1) | [12B](https://huggingface.co/ArliAI/Mistral-Nemo-12B-ArliAI-RPMax-v1.1) | [20B](https://huggingface.co/ArliAI/InternLM2_5-20B-ArliAI-RPMax-v1.1) | [22B](https://huggingface.co/ArliAI/Mistral-Small-22B-ArliAI-RPMax-v1.1) | [70B](https://huggingface.co/ArliAI/Llama-3.1-70B-ArliAI-RPMax-v1.1)

v1.2 = [8B](https://huggingface.co/ArliAI/Llama-3.1-8B-ArliAI-RPMax-v1.2) | [12B](https://huggingface.co/ArliAI/Mistral-Nemo-12B-ArliAI-RPMax-v1.2) | [70B](https://huggingface.co/ArliAI/Llama-3.1-70B-ArliAI-RPMax-v1.2)

v1.3 = [8B](https://huggingface.co/ArliAI/Llama-3.1-8B-ArliAI-RPMax-v1.3) | [32B](https://huggingface.co/ArliAI/Qwen2.5-32B-ArliAI-RPMax-v1.3)

RPMax is a series of models trained on a diverse set of curated creative-writing and RP datasets, with a focus on variety and deduplication. The models are designed to be highly creative and non-repetitive: no two entries in the dataset share repeated characters or situations, so the model does not latch on to a single personality and remains able to understand and respond appropriately to any character or situation.

Many RPMax users have mentioned that these models do not feel like other RP models, having a different writing style and generally not feeling in-bred.

You can access the model at https://arliai.com, and we also have a models ranking page at https://www.arliai.com/models-ranking

Ask questions in our new Discord server https://discord.com/invite/t75KbPgwhk or on our subreddit https://www.reddit.com/r/ArliAI/

## Model Description

Llama-3.1-8B-ArliAI-RPMax-v1.3 is a variant based on the Llama-3.1-8B-Instruct model. Let us know what you think of the model! The different parameter versions are built on different base models, so each may behave slightly differently in its own way.

The v1.3 models are trained with updated software and configs, such as the updated transformers library that fixes the gradient-checkpointing bug, which should help the model learn better. This version also uses RSLoRA+ for training, which helps the model learn even better.

### Specs

* **Context Length**: 128K
* **Parameters**: 8B

### Training Details

* **Sequence Length**: 8192
* **Training Duration**: Approximately 10 hours on 2x3090Ti
* **Epochs**: 1 epoch of training to minimize repetition sickness
* **RS-QLoRA+**: 64-rank, 64-alpha, resulting in ~2% trainable weights
* **Learning Rate**: 0.00001
* **Gradient Accumulation**: Very low at 32 for better learning

## Quantization

The model is available in the following formats:

* **FP16**: https://huggingface.co/ArliAI/Llama-3.1-8B-ArliAI-RPMax-v1.3
* **GGUF**: https://huggingface.co/ArliAI/Llama-3.1-8B-ArliAI-RPMax-v1.3-GGUF

## Suggested Prompt Format

Meta Llama 3 Instruct Format
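
For reference, a single turn in the Meta Llama 3 Instruct format looks like the following; the placeholder text in braces is illustrative, while the special tokens come from the Llama 3 tokenizer:

```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>

{system prompt / character card}<|eot_id|><|start_header_id|>user<|end_header_id|>

{user message}<|eot_id|><|start_header_id|>assistant<|end_header_id|>

```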
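If you run the FP16 weights locally with the transformers library, the tokenizer's built-in chat template produces this format automatically. Below is a minimal sketch assuming the FP16 repo linked above; the system prompt, user message, and sampling settings are illustrative, not an official recommendation:

```python
# Minimal sketch: load the FP16 weights and build a prompt with the
# tokenizer's Llama 3 Instruct chat template. Values are illustrative.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ArliAI/Llama-3.1-8B-ArliAI-RPMax-v1.3"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [
    {"role": "system", "content": "You are a creative roleplay partner."},  # illustrative
    {"role": "user", "content": "Describe the tavern we just walked into."},  # illustrative
]

# apply_chat_template emits the <|start_header_id|>/<|eot_id|> structure shown above
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(input_ids, max_new_tokens=512, do_sample=True, temperature=0.8)
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```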