---
license: cc-by-nc-4.0
language:
- en
---

# AlpaCare: Instruction-tuned Large Language Models for Medical Applications

<p align="center">
<img src="https://raw.githubusercontent.com/XZhang97666/AlpaCare/master/plots/logo.png" alt="AlpaCare logo" width="200" height="200">
</p>

This repository contains the model weights of *AlpaCare*-LLaMA2-13B. *AlpaCare* models are LLMs instruction-tuned on medical instruction data.
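
Below is a minimal sketch of loading these weights with the Hugging Face `transformers` library. The repo id `xz97/AlpaCare-llama2-13b` and the Alpaca-style prompt template are assumptions for illustration; substitute the id of this card and check the GitHub repo for the exact prompt format.

```python
# Minimal sketch: load AlpaCare-LLaMA2-13B and run one medical instruction.
# NOTE: the repo id below is an assumption -- replace it with the id of this
# model card if it differs.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "xz97/AlpaCare-llama2-13b"  # assumed placeholder repo id

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID,
    torch_dtype=torch.float16,  # half precision so a 13B model fits on one large GPU
    device_map="auto",          # requires `accelerate` to place/shard the weights
)

# Alpaca-style prompt template (an assumption; see the GitHub repo for the exact format).
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nWhat are common side effects of metformin?\n\n"
    "### Response:\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=False)
# Decode only the newly generated tokens, skipping the echoed prompt.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```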

GitHub page: [https://github.com/XZhang97666/AlpaCare/](https://github.com/XZhang97666/AlpaCare/)

## Citation

If you find this repository useful, please cite the paper:

```bibtex
@misc{zhang2023alpacareinstructiontuned,
      title={AlpaCare: Instruction-tuned Large Language Models for Medical Application},
      author={Xinlu Zhang and Chenxin Tian and Xianjun Yang and Lichang Chen and Zekun Li and Linda Ruth Petzold},
      year={2023},
      eprint={2310.14558},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```