---
base_model: Qwen/Qwen2.5-7B-Instruct
library_name: peft
license: apache-2.0
datasets:
- medalpaca/medical_meadow_medical_flashcards
language:
- en
pipeline_tag: text-generation
---

# Model Card for FlowerTune-Qwen2.5-7B-Instruct-Medical-PEFT

This PEFT adapter has been trained using [Flower](https://flower.ai/), a friendly federated AI framework.

The adapter and benchmark results have been submitted to the [FlowerTune LLM Medical Leaderboard](https://flower.ai/benchmarks/llm-leaderboard/medical/).

## Model Details

Please check the following GitHub project for model details and evaluation results:

[https://github.com/mrs83/FlowerTune-Qwen2.5-7B-Instruct-Medical](https://github.com/mrs83/FlowerTune-Qwen2.5-7B-Instruct-Medical)

## Training procedure

The following `bitsandbytes` quantization config was used during training:

- quant_method: bitsandbytes
- _load_in_8bit: False
- _load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
- bnb_4bit_quant_storage: uint8
- load_in_4bit: True
- load_in_8bit: False

### Framework versions

- PEFT 0.6.2
- Flower 1.12.0
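
## How to use

Below is a minimal sketch of loading the base model with the 4-bit `bitsandbytes` settings listed above and attaching this PEFT adapter for inference. The `adapter_id` is a placeholder (this card does not specify the adapter's Hub id), and the prompt is illustrative only.

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

base_model_id = "Qwen/Qwen2.5-7B-Instruct"
adapter_id = "<this-adapter-hub-id-or-local-path>"  # placeholder, not specified in this card

# Mirror the quantization config used during training:
# 4-bit fp4 quantization, no double quantization, float32 compute dtype.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="fp4",
    bnb_4bit_use_double_quant=False,
    bnb_4bit_compute_dtype=torch.float32,
)

tokenizer = AutoTokenizer.from_pretrained(base_model_id)
model = AutoModelForCausalLM.from_pretrained(
    base_model_id,
    quantization_config=bnb_config,
    device_map="auto",
)
# Attach the trained LoRA/PEFT adapter on top of the quantized base model.
model = PeftModel.from_pretrained(model, adapter_id)

# Qwen2.5-Instruct models expect chat-formatted input.
messages = [{"role": "user", "content": "What is the mechanism of action of metformin?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```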