Model Card for MicroBOB-python

MicroBOB-python is a new, from-scratch micro model based on RWKV x051a that does not require a special kernel for training or inference. It was developed and trained with a modified version of nanoRWKV.

Model Details

Model Description

MicroBOB-python is a new, from-scratch micro model based on RWKV x051a that does not require a special kernel for training or inference. It was developed and trained with a modified version of nanoRWKV. The base model was trained on tens of thousands of lines of open-source and internal Python code, then finetuned in 5 rounds on kejian/codesearchnet-python-raw, AdapterOcean/python-code-instructions-18k-alpaca-standardized_cluster_1_alpaca, and 3 others in the same series.

Developed for an in-house Python code editor to act as a simple autocomplete, it has become smart enough for its extremely small size (30 million parameters) that I thought I should share it. The model weights alone are licensed under MIT.

  • Developed by: BalrogBob
  • Model type: Custom implementation of RWKV x051a
  • License: MIT (Model Weights)
  • Finetuned from model: MicroBOB

Uses

Simple autocompletion of Python or Python-syntax-like code.

Direct Use

The sample.py script from https://github.com/BlinkDL/nanoRWKV is sufficient for inference. The included training script also works with these model weights, at a slight cost in training speed and memory usage, but it should produce functionally identical results.
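As a sketch of how an editor might wrap the model for single-line autocomplete, the helper below accepts any `generate` callable mapping a prompt and a token budget to generated text. The `generate` interface and `toy_generate` stand-in are hypothetical illustrations, not part of nanoRWKV; in practice `generate` would wrap the sampling loop from sample.py.

```python
def autocomplete(generate, prefix, max_new_tokens=24, stop="\n"):
    """Return a single-line completion for the code before the cursor.

    `generate` is any callable mapping (prompt, n_tokens) -> generated text;
    in practice it would wrap nanoRWKV's sampling loop.
    """
    completion = generate(prefix, max_new_tokens)
    # Keep only the first line: an editor autocomplete should not
    # insert multi-line blocks uninvited.
    return completion.split(stop, 1)[0]


def toy_generate(prompt, n):
    # Toy generator used only to demonstrate the interface.
    canned = "n - 1) + fib(n - 2)\nprint(fib(10))"
    return canned[: n * 4]


print(autocomplete(toy_generate, "def fib(n):\n    return fib("))
```

The stop condition is the part worth tuning: stopping at the first newline keeps suggestions small enough to accept or reject at a glance.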

Downstream Use

Code replacement and re-formatting: with a small amount of finetuning and some clever Python glue code, the model can be used to replace words, functions, and variables in Python code.
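One stdlib-only way to do the surrounding glue-code part is to rewrite identifiers with Python's `tokenize` module, which leaves strings, comments, and layout untouched. A minimal sketch, assuming the replacement names come from somewhere else (here a hard-coded mapping; in practice they would come from the model):

```python
import io
import tokenize


def rename_identifiers(source, mapping):
    """Rewrite NAME tokens per `mapping`, leaving everything else intact."""
    tokens = tokenize.generate_tokens(io.StringIO(source).readline)
    out = []
    for tok in tokens:
        # Only rename identifier tokens; strings and comments are untouched.
        if tok.type == tokenize.NAME and tok.string in mapping:
            tok = tok._replace(string=mapping[tok.string])
        out.append(tok)
    return tokenize.untokenize(out)


src = "def add(a, b):\n    return a + b\n"
print(rename_identifiers(src, {"add": "total", "a": "x", "b": "y"}))
```

Note this renames every matching identifier regardless of scope; for scope-aware replacement you would want the `ast` module instead.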

Out-of-Scope Use

RLHF training can be used to instruction-tune the model, with limited success. RLHF blunts the model's intelligence due to the limited number of available parameters: the RLHF signal overwrites information from the training datasets in the model weights.

Bias, Risks, and Limitations

The model has no bias training or safety guardrails. It was trained on code from open web sources and may incidentally produce malicious or insecure code. Use at your own risk! You are fully responsible for any generations produced with these model weights.

How to Get Started with the Model

Clone https://github.com/BlinkDL/nanoRWKV. All of the code there is compatible with this model. While the code that generated the model is optimized and customized, the base nanoRWKV package can finetune the MicroBOB-python weights without issue, at some cost in memory usage.

Training Details

Training Data

https://huggingface.co/datasets/AdapterOcean/python-code-instructions-18k-alpaca-standardized_cluster_1_alpaca https://huggingface.co/datasets/AdapterOcean/python-code-instructions-18k-alpaca-standardized_cluster_2_alpaca https://huggingface.co/datasets/AdapterOcean/python-code-instructions-18k-alpaca-standardized_cluster_3_alpaca https://huggingface.co/datasets/AdapterOcean/python-code-instructions-18k-alpaca-standardized_cluster_4_alpaca https://huggingface.co/datasets/kejian/codesearchnet-python-raw My personal python code folder with 40+ projects and 30k lines of code

Training Procedure

Standard nanoRWKV data preparation with a custom training loop.

Preprocessing

All datasets were tokenized with the GPT-2 encoding for simplicity. A version of MicroBOB with a custom BPE encoder is in development.
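The usual nanoRWKV/nanoGPT preparation pattern is to encode every document, concatenate the token ids, and dump them as a flat 16-bit binary file (GPT-2's vocabulary of 50,257 ids fits in uint16). Below is a stdlib-only sketch of that pattern; the whitespace-splitting `encode` is a hypothetical stand-in for the real GPT-2 BPE, which in practice comes from `tiktoken.get_encoding("gpt2")`:

```python
import array
import os
import tempfile


def encode(text, vocab):
    # Stand-in for the GPT-2 BPE: maps whitespace-split words to ids.
    return [vocab.setdefault(w, len(vocab)) for w in text.split()]


def write_bin(docs, path, vocab):
    """Concatenate token ids from all docs into one uint16 binary file."""
    ids = array.array("H")  # "H" = unsigned 16-bit, like nanoGPT's train.bin
    for doc in docs:
        ids.extend(encode(doc, vocab))
    with open(path, "wb") as f:
        ids.tofile(f)
    return len(ids)


vocab = {}
docs = ["def add(a, b):", "return a + b"]
path = os.path.join(tempfile.gettempdir(), "train.bin")
n = write_bin(docs, path, vocab)
print(n, "tokens written to", path)
```

The training loop then memory-maps this file and samples random contiguous windows from it, so no per-example padding or batching metadata is needed.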
