Uploaded models

  • Developed by: Sweaterdog
  • License: apache-2.0
  • Fine-tuned from model: unsloth/Qwen2.5-7B-bnb-4bit

The MindCraft LLM tuning CSV file can be found in the MindCraft-LLM repository; it can be tweaked as needed.
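If you want to tweak the tuning data before a run, a minimal sketch of inspecting and filtering the CSV with Python's standard `csv` module is below. The column names (`instruction`, `response`) and the sample rows are assumptions for illustration; check the actual header of the MindCraft-LLM CSV and adjust accordingly.

```python
import csv
import io

# Hypothetical rows standing in for the real file; the actual MindCraft-LLM
# CSV may use different column names, so inspect its header first.
sample = io.StringIO(
    "instruction,response\n"
    '"Mine some iron ore","!collectBlocks(""iron_ore"", 3)"\n'
    '"Say hello","Hello! What do you need?"\n'
)

# DictReader maps each row to a dict keyed by the header line.
rows = list(csv.DictReader(sample))
for row in rows:
    print(row["instruction"], "->", row["response"])
```

For a real tweak, open the downloaded CSV with `open("MindCraft-LLM.csv", newline="")` in place of the `StringIO` object, filter or edit the rows, and write them back out with `csv.DictWriter`.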

This is a very early-access beta model.

This model is NOT a final version; it is a test of how well models can perform with a small dataset, and of how much smaller models can be improved by training data that is extremely high quality and as close to real-world scenarios as possible.

This small dataset finally allows the model to code and to store history, though of course the crux of the dataset is the playing part.

The memory-storage examples are real examples taken from in-game interactions.

The coding examples are artificial and were generated by GPT-o1, with the instruction to include reasoning and thinking in the code comments.

The playing examples are artificial and were written by me, a human, using prompts focused on points where some models fail, such as mining.

This model should not be taken as a reflection of how well smaller models can play Minecraft. If it performs well, and better than Andy-v2-qwen, then yay! If not, I wasn't expecting it to be better (and neither should you!).

You are totally allowed to test the beta model.

I hope this model performs well for you!

ALSO

The models are going to change: I am adjusting tuning hyperparameters to (hopefully) increase performance and decrease hallucinations.

BTW, if you want to download this model, I suggest using llama.cpp to make a quantization of it. I would have done this during tuning, but I ran out of GPU time on Google Colab.
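As a rough sketch, quantizing with llama.cpp is a two-step process: convert the Hugging Face checkpoint to GGUF, then quantize the GGUF. The helper below just builds those two command lines; the tool names (`convert_hf_to_gguf.py`, `llama-quantize`) match recent llama.cpp releases but may differ in older checkouts, and the model/output paths here are placeholders.

```python
def build_llama_cpp_cmds(model_dir: str, out_stem: str, quant: str = "Q4_K_M"):
    """Build the two llama.cpp command lines: HF -> GGUF conversion, then quantization.

    Assumes a llama.cpp checkout with convert_hf_to_gguf.py and a built
    llama-quantize binary; adjust the paths to your setup.
    """
    f16 = f"{out_stem}-f16.gguf"
    quantized = f"{out_stem}-{quant.lower()}.gguf"
    convert = ["python", "convert_hf_to_gguf.py", model_dir,
               "--outfile", f16, "--outtype", "f16"]
    quantize = ["./llama-quantize", f16, quantized, quant]
    return convert, quantize

# Placeholder paths for illustration only.
convert_cmd, quant_cmd = build_llama_cpp_cmds("Andy-v3.5-Beta", "andy-v3.5-beta")
print(" ".join(convert_cmd))
print(" ".join(quant_cmd))
```

From a llama.cpp checkout you could then run each step with `subprocess.run(convert_cmd, check=True)` and `subprocess.run(quant_cmd, check=True)`, or simply paste the printed commands into a shell.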

Attempt 7 failed; trying again today with fixed settings and possibly more prompts (~3000).

GGUF

  • Model size: 7.62B params
  • Architecture: qwen2
  • Quantizations available: 2-bit, 4-bit, 5-bit, 8-bit, 16-bit


Model tree for Sweaterdog/Andy-v3.5-Beta

  • Base model: Qwen/Qwen2.5-7B
  • Quantized versions: 3 (including this model)

Dataset used to train Sweaterdog/Andy-v3.5-Beta