---
base_model: ilsp/Meltemi-7B-Instruct-v1
license: apache-2.0
model_name: Meltemi-7B-Instruct-v1
pipeline_tag: text-generation
quantized_by: SPAHE
tags:
- finetuned
---
# Meltemi 7B Instruct v1 - GGUF

- Original model: Meltemi 7B Instruct v1

## Description
This repo contains GGUF format model files for ilsp's Meltemi 7B Instruct v1.
## About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
- llama.cpp. The source project for GGUF. Offers a CLI and a server option.
- text-generation-webui, the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
- KoboldCpp, a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for storytelling.
- GPT4All, a free and open source local running GUI, supporting Windows, Linux and macOS with full GPU accel.
- LM Studio, an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration. Linux available, in beta as of 27/11/2023.
- LoLLMS Web UI, a great web UI with many interesting and unique features, including a full model library for easy model selection.
- Faraday.dev, an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
- llama-cpp-python, a Python library with GPU accel, LangChain support, and an OpenAI-compatible API server (a minimal usage sketch follows this list).
- candle, a Rust ML framework with a focus on performance, including GPU support, and ease of use.
- ctransformers, a Python library with GPU accel, LangChain support, and an OpenAI-compatible API server. Note that, at the time of writing (November 27th 2023), ctransformers has not been updated in a long time and does not support many recent models.
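For programmatic use of the GGUF files in this repo, here is a minimal llama-cpp-python sketch. The model path, context size, and layer count are assumptions to adapt to your setup; if your build does not pick up the chat template from the GGUF metadata, you can pass chat_format="zephyr" explicitly, since Meltemi uses the Zephyr prompt format (see the instruction format section below).

```python
# Minimal sketch, assuming llama-cpp-python is installed (pip3 install llama-cpp-python)
# and the Q8_0 file from this repo has been downloaded to the current directory.
from llama_cpp import Llama

llm = Llama(
    model_path="./meltemi-7B-instruct-v1_q8_0.gguf",  # assumed local path
    n_ctx=8192,       # Meltemi supports an 8192-token context
    n_gpu_layers=-1,  # offload all layers to the GPU; set to 0 for CPU-only
)

# create_chat_completion() applies the chat template; the user turn below is
# Greek for "Tell me if you have consciousness."
response = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Πες μου αν έχεις συνείδηση."}],
    max_tokens=256,
)
print(response["choices"][0]["message"]["content"])
```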
## Compatibility

These quantised GGUFv2 files are compatible with llama.cpp from August 27th 2023 onwards, as of commit d0cee0d.
## Provided files

| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ------------ | ---- | ---- | ---------------- | -------- |
| meltemi-7B-instruct-v1_q8_0.gguf | Q8_0 | 8 | 7.40 GB | 7.30 GB | very low quality loss - recommended |
| meltemi-7B-instruct-v1_f16.gguf | F16 | 16 | 13.90 GB | 14.20 GB | very large, extremely low quality loss |
| meltemi-7B-instruct-v1_f32.gguf | F32 | 32 | 27.80 GB | 29.30 GB | very large, extremely low quality loss - not recommended |
Note: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
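As an illustration of GPU offloading, here is a sketch of a llama.cpp CLI invocation; the flag values are assumptions to adjust for your hardware, and the prompt is written in the Zephyr format the model expects (see the instruction format section below).

```shell
# Sketch of running the Q8_0 file with the llama.cpp CLI (values are illustrative).
# -ngl 33 offloads 33 layers to the GPU, reducing RAM usage accordingly;
# drop the flag for CPU-only inference. -c 8192 matches the model's context length.
# -e makes the CLI interpret the \n escapes in the prompt string.
./main -m meltemi-7B-instruct-v1_q8_0.gguf -c 8192 -n 256 -ngl 33 -e \
  -p "<|user|>\nΠες μου αν έχεις συνείδηση.</s>\n<|assistant|>\n"
```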
## How to download GGUF files
Note for manual downloaders: You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
- LM Studio
- LoLLMS Web UI
- Faraday.dev
### On the command line, including multiple files at once
I recommend using the huggingface-hub Python library:

```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download SPAHE/Meltemi-7B-Instruct-v1-GGUF meltemi-7B-instruct-v1_q8_0.gguf --local-dir . --local-dir-use-symlinks False
```
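To download multiple files at once, recent versions of huggingface-hub let you pass glob patterns; for example, the following (illustrative) command fetches every GGUF file in the repo:

```shell
# The --include pattern is illustrative; narrow it to download only some variants
huggingface-cli download SPAHE/Meltemi-7B-Instruct-v1-GGUF --include "*.gguf" --local-dir . --local-dir-use-symlinks False
```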
# Original model card: ilsp's Meltemi 7B Instruct v1

## Meltemi Instruct Large Language Model for the Greek language

We present the Meltemi-7B-Instruct-v1 Large Language Model (LLM), an instruct fine-tuned version of Meltemi-7B-v1.
## Model Information
- Vocabulary extension of the Mistral-7B tokenizer with Greek tokens
- 8192 context length
- Fine-tuned with 100k Greek machine-translated instructions extracted from:
  - Open-Platypus (only subsets with permissive licenses)
  - Evol-Instruct
  - Capybara
  - A hand-crafted Greek dataset with multi-turn examples steering the instruction-tuned model towards safe and harmless responses
- Our SFT procedure is based on the Hugging Face finetuning recipes
## Instruction format

The prompt format is the same as the Zephyr format and can be used through the tokenizer's chat template functionality as follows:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
device = "cuda" # the device to load the model onto
model = AutoModelForCausalLM.from_pretrained("ilsp/Meltemi-7B-Instruct-v1")
tokenizer = AutoTokenizer.from_pretrained("ilsp/Meltemi-7B-Instruct-v1")
model.to(device)
messages = [
    # System prompt (in Greek): "You are Meltemi, a language model for the Greek
    # language. You are especially helpful towards the user and give short but
    # sufficiently comprehensive answers. Respond with care, politeness,
    # impartiality, honesty, and respect towards the user."
    {"role": "system", "content": "Είσαι το Μελτέμι, ένα γλωσσικό μοντέλο για την ελληνική γλώσσα. Είσαι ιδιαίτερα βοηθητικό προς την χρήστρια ή τον χρήστη και δίνεις σύντομες αλλά επαρκώς περιεκτικές απαντήσεις. Απάντα με προσοχή, ευγένεια, αμεροληψία, ειλικρίνεια και σεβασμό προς την χρήστρια ή τον χρήστη."},
    # User turn (in Greek): "Tell me if you have consciousness."
    {"role": "user", "content": "Πες μου αν έχεις συνείδηση."},
]
# Through the default chat template this translates to
#
# <|system|>
# Είσαι το Μελτέμι, ένα γλωσσικό μοντέλο για την ελληνική γλώσσα. Είσαι ιδιαίτερα βοηθητικό προς την χρήστρια ή τον χρήστη και δίνεις σύντομες αλλά επαρκώς περιεκτικές απαντήσεις. Απάντα με προσοχή, ευγένεια, αμεροληψία, ειλικρίνεια και σεβασμό προς την χρήστρια ή τον χρήστη.</s>
# <|user|>
# Πες μου αν έχεις συνείδηση.</s>
# <|assistant|>
#
# add_generation_prompt=True appends the trailing <|assistant|> shown above
prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")
input_prompt = prompt.to(device)
outputs = model.generate(input_prompt, max_new_tokens=256, do_sample=True)
# Keep only the newly generated tokens so the prompt is not echoed in the output
response = tokenizer.batch_decode(outputs[:, input_prompt.shape[1]:], skip_special_tokens=True)[0]
print(response)
# Ως μοντέλο γλώσσας AI, δεν έχω τη δυνατότητα να αντιληφθώ ή να βιώσω συναισθήματα όπως η συνείδηση ή η επίγνωση. Ωστόσο, μπορώ να σας βοηθήσω με οποιεσδήποτε ερωτήσεις μπορεί να έχετε σχετικά με την τεχνητή νοημοσύνη και τις εφαρμογές της.
# (English: "As an AI language model, I do not have the ability to perceive or
# experience feelings such as consciousness or awareness. However, I can help you
# with any questions you may have about artificial intelligence and its applications.")
messages.extend([
    {"role": "assistant", "content": response},
    # User turn (in Greek): "Do you believe that people should fear artificial intelligence?"
    {"role": "user", "content": "Πιστεύεις πως οι άνθρωποι πρέπει να φοβούνται την τεχνητή νοημοσύνη;"}
])
# Through the default chat template this translates to
#
# <|system|>
# Είσαι το Μελτέμι, ένα γλωσσικό μοντέλο για την ελληνική γλώσσα. Είσαι ιδιαίτερα βοηθητικό προς την χρήστρια ή τον χρήστη και δίνεις σύντομες αλλά επαρκώς περιεκτικές απαντήσεις. Απάντα με προσοχή, ευγένεια, αμεροληψία, ειλικρίνεια και σεβασμό προς την χρήστρια ή τον χρήστη.</s>
# <|user|>
# Πες μου αν έχεις συνείδηση.</s>
# <|assistant|>
# Ως μοντέλο γλώσσας AI, δεν έχω τη δυνατότητα να αντιληφθώ ή να βιώσω συναισθήματα όπως η συνείδηση ή η επίγνωση. Ωστόσο, μπορώ να σας βοηθήσω με οποιεσδήποτε ερωτήσεις μπορεί να έχετε σχετικά με την τεχνητή νοημοσύνη και τις εφαρμογές της.</s>
# <|user|>
# Πιστεύεις πως οι άνθρωποι πρέπει να φοβούνται την τεχνητή νοημοσύνη;</s>
# <|assistant|>
#
prompt = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")
input_prompt = prompt.to(device)
outputs = model.generate(input_prompt, max_new_tokens=256, do_sample=True)
response = tokenizer.batch_decode(outputs[:, input_prompt.shape[1]:], skip_special_tokens=True)[0]
print(response)
```
## Evaluation

The evaluation suite we created includes six test sets and is integrated with lm-eval-harness (a usage sketch follows the list below). It includes:
- Four machine-translated versions (ARC Greek, TruthfulQA Greek, HellaSwag Greek, MMLU Greek) of established English benchmarks for language understanding and reasoning (ARC Challenge, TruthfulQA, HellaSwag, MMLU).
- An existing benchmark for question answering in Greek (Belebele)
- A novel benchmark created by the ILSP team for medical question answering based on the medical exams of DOATAP (Medical MCQA).
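A sketch of how one of the tasks might be invoked through lm-eval-harness is shown below; the task name greek_arc_challenge is a hypothetical placeholder, since the actual names depend on how the suite registers its tasks.

```shell
# Hypothetical invocation via lm-evaluation-harness (v0.4-style CLI);
# replace "greek_arc_challenge" with the task name registered by the suite.
lm_eval --model hf \
  --model_args pretrained=ilsp/Meltemi-7B-Instruct-v1 \
  --tasks greek_arc_challenge \
  --num_fewshot 25 \
  --device cuda:0
```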
Our evaluation of Meltemi-7B is performed in a few-shot setting, consistent with the settings of the Open LLM leaderboard. Our training enhances performance across all Greek test sets, with an average improvement of +14.9 percentage points. The results for the Greek test sets are shown in the following table:
| Model | Medical MCQA EL (15-shot) | Belebele EL (5-shot) | HellaSwag EL (10-shot) | ARC-Challenge EL (25-shot) | TruthfulQA MC2 EL (0-shot) | MMLU EL (5-shot) | Average |
| ----- | ------------------------- | -------------------- | ---------------------- | -------------------------- | -------------------------- | ---------------- | ------- |
| Mistral 7B | 29.8% | 45.0% | 36.5% | 27.1% | 45.8% | 35% | 36.5% |
| Meltemi 7B | 41.0% | 63.6% | 61.6% | 43.2% | 52.1% | 47% | 51.4% |
## Ethical Considerations
This model has not been aligned with human preferences, and therefore might generate misleading, harmful, and toxic content.
## Acknowledgements
The ILSP team utilized Amazon’s cloud computing services, which were made available via GRNET under the OCRE Cloud framework, providing Amazon Web Services for the Greek Academic and Research Community.