This is a pretrained model intended to be fine-tuned for downstream tasks. By accessing it, you agree not to use the model in experiments that could harm human subjects, or for any medical-related task.


Igea-7B-v0.1 ⚕️🩺

Igea is a biomedical Large Language Model (LLM) for Italian, continually pretrained from Minerva on machine-translated (NMT) PubMed abstracts.

🔓: Access to the model is only granted after explicitly acknowledging that you have read the 'Bias, Risks, and Limitations' section of this model card.

This is ongoing research. Do not use it for any medical-related tasks.

Preprint: Igea: a Decoder-Only Language Model for Biomedical Text Generation in Italian.

How to use Igea with Hugging Face transformers

```python
import transformers
import torch

model_id = "bmi-labmedinfo/Igea-7B-v0.1"

# Initialize the text-generation pipeline.
pipeline = transformers.pipeline(
    "text-generation",
    model=model_id,
    model_kwargs={"torch_dtype": torch.bfloat16},
    device_map="auto",
)

# Input text for the model.
input_text = "Il fegato è "

# Compute the outputs.
output = pipeline(
    input_text,
    max_new_tokens=128,
)

# Output:
# [{'generated_text': "Il fegato è una ghiandola fondamentale per il metabolismo umano, la più [...]"}]
```
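As a rough, illustrative back-of-the-envelope estimate (not an official hardware requirement), the memory needed just to hold the 7.4B weights can be derived from the dtype: 4 bytes per parameter in full precision (F32, as the checkpoint is stored) versus 2 bytes in bfloat16, as requested via `torch_dtype` above.

```python
# Back-of-the-envelope weight-memory estimate for a 7.4B-parameter model.
params = 7.4e9
gib = 2**30  # bytes per GiB

mem_f32 = params * 4 / gib   # F32 checkpoint: 4 bytes per parameter
mem_bf16 = params * 2 / gib  # torch.bfloat16: 2 bytes per parameter

print(f"F32 weights:  ~{mem_f32:.1f} GiB")   # ~27.6 GiB
print(f"BF16 weights: ~{mem_bf16:.1f} GiB")  # ~13.8 GiB
```

Loading in bfloat16 roughly halves the weight footprint compared to full precision; actual usage will be higher once activations and the KV cache are accounted for.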

🚨⚠️🚨 Bias, Risks, and Limitations 🚨⚠️🚨

This section identifies foreseeable harms and misunderstandings.

This model is a continued pretraining of a foundation model and has not undergone alignment. The model may:

  • Overrepresent some viewpoints and underrepresent others
  • Contain stereotypes
  • Contain personal information
  • Generate:
    • Racist and sexist content
    • Hateful, abusive, or violent language
    • Discriminatory or prejudicial language
    • Content that may not be appropriate for all settings, including sexual content
  • Make errors, including presenting incorrect or fabricated information as if it were factual
  • Generate irrelevant or repetitive outputs

We are aware of the biases and potential problematic/toxic content that current pretrained large language models exhibit: more specifically, as probabilistic models of (Italian and English) languages, they reflect and amplify the biases of their training data.

The biomedical setting poses additional threats, including:

  • Disparities in research focus, demographic representation, and reporting standards
  • Reinforcement of existing medical paradigms, overlooking emerging or alternative viewpoints and thereby hindering innovation and comprehensive care
  • Generation of incorrect information and false claims, potentially leading to incorrect medical decisions

This model is therefore not intended to be used as-is for any medical-related task.

Training and evaluation data

Work in progress

Evaluation

Work in progress

Credits

Developed by Tommaso M. Buonocore and Simone Rancati.

Model size: 7.4B parameters (F32, safetensors)