
Model Card for GRAG-LLAMA-3.1-8B-ORPO-HESSIAN-AI

GRAG (German Retrieval Augmented Generation) models are designed for the German-speaking market, with the goal of enabling innovation and AI solutions that drive German research collaboration in business-focused generative AI by 2025.

Our GRAG-LLAMA-ORPO model is trained on the GRAG-ORPO dataset.

Model Details

The core models released in this batch are the following:

| Model | Training Tokens |
|---|---|
| GRAG-LLAMA-CPT | 507.47 million |
| GRAG-LLAMA-SFT | 2.03 billion |
| GRAG-LLAMA-ORPO | 2.0577 billion |

Model Description

  • Developed by: Avemio AI Team
  • Supported by: Hessian AI
  • Model type: a Transformer-style autoregressive language model.
  • Language(s) (NLP): German, English
  • License: The code and model are released under Apache 2.0.
  • Contact: [email protected]

Model Sources

  • Project Page:
  • Repositories:
    • Training:
    • Evaluation code:
  • Technical blog post:

Uses

Inference

To get inference running quickly, install the required dependencies, then proceed as usual with Hugging Face:

from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the model described by this card (the ORPO checkpoint).
model_name = "avemio/GRAG-LLAMA-3.1-8B-ORPO-HESSIAN-AI"
 
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)
# The chat template uses ChatML-style markers; generation is stopped at <|im_end|>.
im_end_token_id = tokenizer.convert_tokens_to_ids('<|im_end|>')
im_start_token_id = tokenizer.convert_tokens_to_ids('<|im_start|>')
 
messages = [
    # System prompt (German): "Follow the user's instructions. Before giving your final answer, describe your reasoning for solving the problem."
    {"role": "system", "content": "Folge den Anweisungen des Benutzers. Bevor du deine finale Antwort gibst, schildere deine Überlegungen zur Lösung des Problems."},
    # User prompt (German): Ferdinand must draft a fair visitation plan for his three children within one week, balancing each child's needs with the legal requirements, before discussing it with his lawyer.
    {"role": "user", "content": "Ferdinand steht vor der Herausforderung, eine faire Besuchsregelung für seine drei Kinder zu finden, die den Bedürfnissen jedes einzelnen Kindes gerecht wird. Jedes Kind hat unterschiedliche Vorlieben und Bedürfnisse, die in den Besuchsplan integriert werden müssen. Er muss sicherstellen, dass die Regelung sowohl den Interessen der Kinder als auch den rechtlichen Vorgaben entspricht. Ferdinand hat eine Woche Zeit, um einen Vorschlag zu erarbeiten, den er mit seinem Anwalt besprechen kann."}
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=False
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
 
generated_ids = model.generate(
    **model_inputs,
    max_length=2024,           # total length cap (prompt + generated tokens)
    temperature=0.01,
    do_sample=False,           # greedy decoding; temperature/top_k/top_p have no effect here
    #bos_token_id=im_start_token_id,
    eos_token_id=im_end_token_id,
    pad_token_id=tokenizer.eos_token_id,
    repetition_penalty=1.1,
    num_return_sequences=1,
    top_k=40,
    top_p=0.95,
)
# Keep only the newly generated tokens by stripping the prompt tokens.
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]
 
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
 

Fine-tuning

We are providing a comprehensive Google Colab notebook to guide users through the process of fine-tuning our model, complete with detailed instructions, essential dependencies, and configurable settings. Colab-Notebook.
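
The Colab notebook is the reference procedure. Purely as a hedged sketch of what ORPO-style preference fine-tuning can look like with the TRL library (the dataset path, column names, batch-size settings, and the choice of the SFT checkpoint as the starting point below are assumptions, and a recent TRL version providing ORPOTrainer is assumed):

# Minimal ORPO fine-tuning sketch using TRL (illustrative only; see the Colab
# notebook for the actual, supported procedure). Dataset path and several
# settings below are assumptions.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import ORPOConfig, ORPOTrainer

base_model = "avemio/GRAG-LLAMA-3.1-8B-SFT-HESSIAN-AI"  # assumption: ORPO starts from the SFT checkpoint
tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForCausalLM.from_pretrained(base_model, torch_dtype="auto")

# Placeholder dataset id; ORPO training expects "prompt", "chosen" and "rejected" columns.
train_ds = load_dataset("path/to/GRAG-ORPO-dataset", split="train")

config = ORPOConfig(
    output_dir="grag-llama-orpo",
    learning_rate=5.0e-7,          # peak LR reported in this card
    warmup_steps=50,               # warmup steps reported in this card
    weight_decay=0.1,              # weight decay reported in this card
    lr_scheduler_type="linear",    # LR schedule reported in this card
    per_device_train_batch_size=1, # assumption
    gradient_accumulation_steps=8, # assumption
    bf16=True,
)

trainer = ORPOTrainer(
    model=model,
    args=config,
    train_dataset=train_ds,
    processing_class=tokenizer,    # `tokenizer=` in older TRL versions
)
trainer.train()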

Evaluation

The evaluation was performed using seven subsets, focusing on extraction recall, question answering (QA) with multiple references, and time difference reasoning. Relevant context and summarization were treated as distinct subsets, each playing a crucial role in the evaluation process. For relevant context, the model's ability to identify and extract pertinent information from the source material was assessed. In contrast, the summarization subset evaluated the model's capability to generate concise and accurate summaries based on the relevant context.

Four evaluation metrics were employed across all subsets: language quality, overall correctness, instruction following, and an overall score.

  • Language quality: This metric focused on the overall linguistic quality of the outputs, considering factors such as grammar, fluency, and clarity.
  • Overall correctness: The accuracy and correctness of the content were evaluated under this metric.
  • Instruction following: This metric assessed the model's ability to follow specific instructions provided for each task.
  • Overall score: This metric combined the results from the previous three metrics, offering a comprehensive evaluation of the model's capabilities across all subsets.

| Metric | Vanilla-llama-3.1-8B-Instruct | GRAG-LLAMA-SFT | GRAG-LLAMA-ORPO | GRAG-LLAMA-MERGED | GPT-3.5-TURBO |
|---|---|---|---|---|---|
| Average Language Quality | 87.78 | 88.93 | 88.93 | 86.93 | 87.58 |
| Overall scores (weighted): | | | | | |
| extraction_recall | 66.1 | 73.2 | 66.3 | 61.8 | 66.9 |
| qa_multiple_references | 74.7 | 91.5 | 90.9 | 84.8 | 90.3 |
| qa_without_time_difference | 83.5 | 90.7 | 91.4 | 88.0 | 89.9 |
| qa_with_time_difference | 86.7 | 91.4 | 91.8 | 89.1 | 90.6 |
| relevant_context | 87.9 | 90.3 | 89.6 | 84.4 | 88.5 |
| summarizations | 88.6 | 90.7 | 82.7 | 84.9 | 87.7 |
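
The exact weighting behind the overall scores is not given in this card. Purely as an illustration, an overall score of the kind described above could be computed as a weighted combination of the three per-metric scores; the weights below are placeholders, not the ones actually used.

# Illustrative only: combine the three per-metric scores into an overall score.
# The weights used for the table above are not specified in this card.
def overall_score(language_quality: float,
                  correctness: float,
                  instruction_following: float,
                  weights: tuple = (0.2, 0.5, 0.3)) -> float:  # placeholder weights
    w_lang, w_corr, w_instr = weights
    return w_lang * language_quality + w_corr * correctness + w_instr * instruction_following

# Example: a hypothetical response scoring 90 / 85 / 95 on the three metrics.
print(round(overall_score(90.0, 85.0, 95.0), 2))  # 89.0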

Model Details

Data

For training data details, please see the GRAG-ORPO-Dataset documentation.

The ORPO Tasks Dataset represents a specialized collection for fine-tuning language models with a focus on RAG-specific capabilities.

The subsets used for this training step are derived from three different sources:

  • SauerkrautLM Preference Datasets:
    • SauerkrautLM-Fermented-GER-DPO: a high-quality German instruction-response dataset specifically designed for Preference Optimization training. The dataset consists of 3,305 instruction-response pairs. Rather than being merged from existing German datasets, it was carefully created through a sophisticated augmentation process, transforming curated English instructions and responses into culturally adapted German content. Each pair includes comprehensive quality metrics and rejected responses for preference training.
    • SauerkrautLM-Fermented-Irrelevance-GER-DPO: a specialized dataset designed for training language models in function-calling irrelevance detection using Preference Optimization. The dataset consists of 2,000 carefully evaluated instruction-response pairs, specifically curated to help models recognize situations where function calls are unnecessary and direct responses are more appropriate.
  • Hard Reasoning DE & EN: synthetic generation inspired by Tencent's paper “Scaling Synthetic Data Creation with 1,000,000,000 Personas”.
  • Multi-Turn-QA: Developed by Avemio AG, this dataset builds upon and enhances the German Wikipedia dump provided by Cohere (wikipedia-22-12-de-embeddings), expanding it with synthetic examples and structured tasks to create a robust training resource.

Data Subsets

| Subset | Examples per Task |
|---|---|
| SauerkrautLM-Fermented-GER-DPO | 3.31k |
| SauerkrautLM-Fermented-Irrelevance-GER-DPO | 2k |
| hard-reasoning-de | 3.19k |
| hard-reasoning-en | 1.97k |
| multi-turn-qa | 3.2k |

Source Data: SauerkrautLM

SauerkrautLM-Fermented-GER-DPO

SauerkrautLM-Fermented-Irrelevance-GER-DPO

Source Data: Hard-Reasoning DE & EN

  • Base: (proj-Persona/PersonaHub)
  • Enhancement: Synthetic data generation by Avemio AG
  • Quality: Automatic validation and curation of examples by open-source LLMs

Methodology: Reasoning-DE & Reasoning-EN

  • Providing persona descriptions and rewriting them in a similar style, with a different focus area and name, in German or English
  • Generating simple logical problems out of persona-specific views and language
  • Generating approaches, thinking steps, and solutions, each verified separately by Llama-3.1-405B-Instruct
  • Quality assurance and validation (a rough prompt sketch follows below)
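
A minimal sketch of the prompt scaffolding implied by these steps; the template wording and the toy persona are assumptions for illustration, not Avemio's actual pipeline, and the model calls (including the Llama-3.1-405B-Instruct verification) are omitted.

# Illustrative prompt templates only; the real pipeline, wording and
# verification calls are not published in this card.
def build_reasoning_prompts(persona: str, language: str = "German") -> dict:
    return {
        # Step 1: rewrite the persona with a new name and focus area.
        "rewrite_persona": (
            f"Rewrite the following persona in {language}, keeping the style "
            f"but changing the name and focus area:\n{persona}"
        ),
        # Step 2: derive a simple logical problem from the persona's view.
        "generate_problem": (
            f"Write a short logical problem in {language} from the perspective "
            f"of this persona:\n{persona}"
        ),
        # Step 3: generate approach, thinking steps and solution separately.
        "generate_approach": "Outline an approach for solving the problem above.",
        "generate_steps": "List the intermediate thinking steps for the problem above.",
        "generate_solution": "Give the final solution for the problem above.",
    }

# Example usage with a toy persona description.
prompts = build_reasoning_prompts("A family-court mediator who enjoys chess.")
print(prompts["generate_problem"])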

Source Data: Multi-Turn-QA

Methodology: Multi-Turn-QA

  1. Extraction of base content from German Wikipedia
  2. Enhancement through synthetic example generation
  3. Structure addition for specific task types
  4. Quality assurance and validation

Architecture

| Parameter | GRAG-LLAMA-ORPO |
|---|---|
| d_model | 4096 |
| num heads | 32 |
| num layers | 32 |
| MLP ratio | 3.5 |
| LayerNorm type | RMSNorm |
| pos embeddings | RoPE |
| attention variant | Standard Multi-Head Self-Attention |
| biases | none |
| block type | sequential |
| activation | SiLU |
| sequence length | 131072 |
| weight dtype | bfloat16 |
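
These values can be cross-checked against the published configuration. The snippet below simply reads the standard Hugging Face Llama config fields; the attribute names are the generic LlamaConfig ones, not anything specific to this card.

# Print the architecture fields from the published config for comparison
# with the table above (standard Llama config attribute names).
from transformers import AutoConfig

cfg = AutoConfig.from_pretrained("avemio/GRAG-LLAMA-3.1-8B-ORPO-HESSIAN-AI")
print("d_model (hidden_size):", cfg.hidden_size)
print("num heads:            ", cfg.num_attention_heads)
print("num layers:           ", cfg.num_hidden_layers)
print("MLP ratio:            ", cfg.intermediate_size / cfg.hidden_size)
print("max sequence length:  ", cfg.max_position_embeddings)
print("activation:           ", cfg.hidden_act)
print("weight dtype:         ", cfg.torch_dtype)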

Hyperparameters

| Parameter | GRAG-LLAMA-ORPO |
|---|---|
| warmup steps | 50 |
| peak LR | 5.0E-07 |
| weight decay | 0.1 |
| LR schedule | linear |
| gradient reduce dtype | FP32 |
| optimizer state dtype | FP32 |

Environmental Impact

GRAG-LLAMA-ORPO, trained on NVIDIA A100 (80 GB) GPUs for 4 days, has an approximate power consumption as follows:

It's important to note that the actual power consumption may vary depending on the specific workload and operational conditions. For accurate power consumption measurements, using dedicated power monitoring tools is recommended.

| Model | GPU Type | Power Consumption From GPUs |
|---|---|---|
| GRAG-LLAMA-ORPO | A100 (Hessian AI supercomputer) | 0.01843 MWh |
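
For rough context, the reported energy corresponds to an average draw of about 0.19 kW over the 4-day run; this is simple arithmetic on the figures above, not an additional measurement.

# Average power implied by the reported energy over the 4-day run.
energy_mwh = 0.01843   # reported GPU energy consumption
hours = 4 * 24         # 4-day training run
avg_power_kw = energy_mwh * 1000 / hours
print(f"~{avg_power_kw:.2f} kW average GPU power over {hours} h")  # ~0.19 kW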

Bias, Risks, and Limitations

Like any base or fine-tuned language model without safety filtering, these models can be prompted relatively easily into generating harmful or otherwise sensitive content. Such content can also be produced unintentionally, especially in the case of bias, so we recommend that users consider the risks of applying this technology.

In addition, statements produced by GRAG-LLAMA-ORPO, as by any LLM, may be factually incorrect, so they should be verified.

Model Card Contact

For errors in this model card, please contact [email protected].
