# RT Finetuning Scripts

> ⚠️ Clear the notebook before use.
This repository contains the training and fine-tuning scripts for the following models and adapters:
- Llama
- Qwen
- SmolLM
- DeepSeek
- Other Adapters
## Overview

These scripts are designed to help you fine-tune various language models and adapters, making it easy to train or adapt models to new datasets and tasks. Whether you want to improve a model's general performance or specialize it for a specific domain, these scripts streamline the process.
## Features
- Training Scripts: Easily train models on your own dataset.
- Fine-Tuning Scripts: Fine-tune pre-trained models with minimal setup.
- Support for Multiple Models: The scripts support a variety of models including Llama, Qwen, SmolLM, and DeepSeek.
- Adapter Support: Fine-tune adapters for flexible deployment and specialization.
## Requirements

Before running the scripts, make sure you have the following dependencies:

- Python 3.x
- `transformers` library
- `torch` (CUDA recommended for GPU acceleration)
- Additional dependencies (see `requirements.txt`)
## Installation

Clone the repository and install dependencies:

```bash
git clone https://github.com/your-repo/rt-finetuning-scripts.git
cd rt-finetuning-scripts
pip install -r requirements.txt
```
## Usage

### Fine-Tuning a Model
1. **Choose a model**: Select from Llama, Qwen, SmolLM, or DeepSeek.
2. **Prepare your dataset**: Ensure your dataset is formatted correctly for fine-tuning.
3. **Run the fine-tuning script**: Execute the script for your chosen model.
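As an illustration of step 2, fine-tuning datasets are commonly stored as JSONL, with one prompt/response pair per line. The field names below (`prompt`, `response`) are an assumption for this sketch — check the script for your chosen model to see which keys it expects. A minimal example using only the standard library:

```python
import json
from pathlib import Path

# Hypothetical example records; the real field names depend on the
# fine-tuning script you choose.
records = [
    {"prompt": "Translate to French: Hello", "response": "Bonjour"},
    {"prompt": "What is 2 + 2?", "response": "4"},
]

# Write one JSON object per line (the JSONL convention).
path = Path("train.jsonl")
with path.open("w", encoding="utf-8") as f:
    for rec in records:
        f.write(json.dumps(rec, ensure_ascii=False) + "\n")

# Read it back to confirm the format round-trips cleanly.
loaded = [
    json.loads(line)
    for line in path.read_text(encoding="utf-8").splitlines()
]
print(len(loaded))            # → 2
print(loaded[0]["response"])  # → Bonjour
```

JSONL is convenient here because it streams line by line, so large datasets never need to fit in memory at once.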
## Contributing
Contributions are welcome! If you have improvements or bug fixes, feel free to submit a pull request.