L3-70B-Nova-Fabula

L3-70B-Nova-Fabula is a fine-tuned version of Meta's LLaMA 3 70B model, optimized for roleplay and general knowledge tasks.
This is a test model for the “Fabula” series. The last two models, “Qwen2.5-72B-Fabula” and “L3.1-70B-Fabula”, were deleted because they apparently had a lot of problems.
Until I figure out the best parameters and datasets, the “Nova-Fabula” series will be my playground for experimenting.


Human review (18/01/2024)

I finally managed to test the model, and from this first test it appears to be significantly better than the broken ones, “Qwen2.5-72B-Fabula” and “L3.1-70B-Fabula.”
That said, from the experience I gathered testing the old models, quality really depends on where you host the model: with one provider it might be low quality that "just works," while with another it might actually be high quality.
Anyway, on to the actual model, starting with its flaws:

  1. Model is stuck on the LLaMA 3 template.
    • From my testing, it does not fully follow the ChatML template and falls back to the LLaMA 3 template instead. I still went with ChatML, but added "<|eot_id|>" as a stop string to ensure generation doesn't run on until it hits the token limit.
  2. Model instruction following.
    • Format following mostly appears to be fine.
      • (It seems to require detailed instructions to do better, and even then it tends to copy the format used in the examples, even when asked not to.)
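Since the model tends to emit LLaMA 3's end-of-turn token even under a ChatML prompt, one workaround beyond the inference backend's stop-string setting is to trim completions client-side. A minimal sketch, assuming a generic text-completion API; the helper names here are illustrative and not part of the model card:

```python
# Build a ChatML prompt and trim generations at LLaMA 3's end-of-turn token.
# Stop markers: "<|eot_id|>" (LLaMA 3) and "<|im_end|>" (ChatML).
STOP_STRINGS = ["<|eot_id|>", "<|im_end|>"]

def build_chatml_prompt(messages):
    """Render a list of {role, content} dicts in ChatML format,
    leaving the final assistant turn open for the model to complete."""
    parts = [f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>" for m in messages]
    parts.append("<|im_start|>assistant\n")
    return "\n".join(parts)

def trim_at_stop(text, stop_strings=STOP_STRINGS):
    """Cut the completion at the first occurrence of any stop string."""
    for stop in stop_strings:
        idx = text.find(stop)
        if idx != -1:
            text = text[:idx]
    return text
```

For example, `trim_at_stop("Sure, here you go.<|eot_id|>assistant")` returns `"Sure, here you go."`, so stray template tokens never reach the chat frontend even if the provider ignores custom stop strings.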

Further testing would be useful, as I didn't go into much detail, but that's everything I noticed so far.

Model size: 70.6B params · Safetensors · BF16

Model tree for BusRune/L3-70B-Nova-Fabula
