# SmallLM
SmallLM is a specialized transformer-based language model with fewer than 50 million parameters. It uses a tokenizer derived from [minbpe](https://github.com/karpathy/minbpe), developed by Andrej Karpathy, and has been trained on custom datasets. The model is in active development, and we are continuously working to improve its performance. Thanks to its compact size, it can run on small end devices, including CPU-only hardware. The model has not been released yet, but we are working to make it available soon. Stay tuned!
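Since the model has not been released, the following is a minimal sketch of how a sub-50M-parameter, GPT-style decoder can be sized. All dimensions are illustrative assumptions, not SmallLM's actual configuration:

```python
# Hypothetical sizing for a sub-50M-parameter GPT-style decoder.
# Every dimension below is an illustrative assumption.

vocab_size = 8192   # small BPE vocabulary (e.g. trained with minbpe)
n_layer    = 8      # transformer blocks
n_embd     = 512    # embedding / hidden size
block_size = 1024   # context length

# Embeddings: token table + learned positional table.
embed = vocab_size * n_embd + block_size * n_embd

# Per block: attention (QKV + output projection), a 4x MLP,
# and two LayerNorms (weights and biases).
attn  = 4 * n_embd * n_embd
mlp   = 2 * 4 * n_embd * n_embd
norms = 2 * 2 * n_embd
per_block = attn + mlp + norms

# Final LayerNorm; output head assumed tied to the token embedding.
total = embed + n_layer * per_block + 2 * n_embd
print(f"~{total / 1e6:.1f}M parameters")  # ~29.9M with these settings
```

Budgets like this show why a sub-50M model stays practical on CPU-only hardware: at these dimensions the weights fit in roughly 120 MB in float32.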
## Features
- Can be specialized for specific tasks
- Compact model size (less than 50M parameters)
- Custom tokenizer from minbpe (see the sketch after this list)
- Trained on custom datasets
- Can run on CPU-based devices
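A minimal sketch of deriving a custom tokenizer with minbpe. The corpus file name and vocabulary size are illustrative assumptions; SmallLM's actual training corpus and settings are not published:

```python
# Sketch of training a byte-level BPE tokenizer with minbpe
# (clone https://github.com/karpathy/minbpe to make it importable).
# "corpus.txt" and vocab_size are illustrative assumptions.
from minbpe import BasicTokenizer

text = open("corpus.txt", "r", encoding="utf-8").read()

tokenizer = BasicTokenizer()
tokenizer.train(text, vocab_size=8192)  # 256 byte tokens + learned merges

ids = tokenizer.encode("SmallLM runs on CPU-based devices.")
assert tokenizer.decode(ids) == "SmallLM runs on CPU-based devices."

tokenizer.save("smalllm")  # writes smalllm.model (and smalllm.vocab)
```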
## Limitations
Due to its relatively small size, limited training data, and shorter training duration, the model may occasionally underperform compared to larger language models.
## Contact
For any questions or feedback, please reach out to us through Discussions.
## Resources
- Tokenizer Source: [minbpe](https://github.com/karpathy/minbpe)