JordiBayarri committed on
Commit b647616 (verified)
1 Parent(s): 31eec4c

Update README.md

Files changed (1)
  1. README.md +2 -1
README.md CHANGED
@@ -26,7 +26,7 @@ base_model:
 ---
 <p align="center">
  <picture>
-  <source media="(prefers-color-scheme: dark)" srcset="https://cdn-uploads.huggingface.co/production/uploads/6620f941eba5274b5c12f83d/McxwesdwA45FqWJyO7evz.png">
+  <source media="(prefers-color-scheme: dark)" srcset="https://cdn-uploads.huggingface.co/production/uploads/6620f941eba5274b5c12f83d/aFx4k7UaJqvD-cVGvoHlL.png">
   <img alt="aloe_70b" src="https://cdn-uploads.huggingface.co/production/uploads/6620f941eba5274b5c12f83d/udSFjP3wdCu3liH_VXhBk.png" width=50%>
  </picture>
 </p>
@@ -37,6 +37,7 @@ Aloe: A Family of Fine-tuned Open Healthcare LLMs
 ---
 
 
+
 Llama3.1-Aloe-Beta-70B is an **open healthcare LLM** achieving **state-of-the-art performance** on several medical tasks. Aloe Beta is made available in two model sizes: [8B](https://huggingface.co/HPAI-BSC/Llama3.1-Aloe-Beta-8B) and [70B](https://huggingface.co/HPAI-BSC/Llama3.1-Aloe-Beta-70B). Both models are trained using the same recipe.
 
 Aloe is trained on 20 medical tasks, resulting in a robust and versatile healthcare model. Evaluations show Aloe models to be among the best in their class. When combined with a RAG system ([also released](https://github.com/HPAI-BSC/prompt_engine)), the 8B version approaches the performance of closed models such as MedPalm-2 and GPT-4. With the same RAG system, Aloe-Beta-70B outperforms those private alternatives, producing state-of-the-art results.
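
The README excerpt above links to the published checkpoints on the Hugging Face Hub. As a minimal sketch of how one might load the 70B model through the standard `transformers` chat interface: this assumes the repository ships the usual Llama 3.1 chat template, and the dtype/device settings and example prompt are illustrative choices, not taken from the commit.

```python
# Minimal sketch: loading HPAI-BSC/Llama3.1-Aloe-Beta-70B with transformers.
# Assumes a standard Llama 3.1 chat template; bf16 and device_map="auto"
# are illustrative choices, not prescribed by the model card diff above.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "HPAI-BSC/Llama3.1-Aloe-Beta-70B"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # halves memory vs. fp32 for 70B weights
    device_map="auto",           # shard layers across available GPUs
)

# Build a chat prompt with the model's own template.
messages = [
    {"role": "system", "content": "You are a helpful medical assistant."},
    {"role": "user", "content": "What are common symptoms of anemia?"},
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256, do_sample=False)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```

The RAG setup mentioned in the README lives in the linked prompt_engine repository; the sketch above covers only plain model loading and generation.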