---
license: apache-2.0
base_model:
- black-forest-labs/FLUX.1-dev
pipeline_tag: text-to-image
tags:
- lora
widget:
- text: >-
    A romantic scene featuring a couple embracing under a full moon, with a
    serene lake and mountains in the background, surrounded by red flowers.
  output:
    url: gallary/image_1_enhance.jpg
library_name: diffusers
---

# FLUX Aesthetics Enhancement LoRA

## Introduction

This is a LoRA model trained for FLUX.1-dev that enhances the aesthetic quality of the images the model generates. The improvements include, but are not limited to, richer details, more beautiful lighting and shadows, more aesthetic composition, and clearer visuals. This model does not require any trigger words.

* Paper: https://arxiv.org/abs/2412.12888
* GitHub: https://github.com/modelscope/DiffSynth-Studio
* Model: [ModelScope](https://www.modelscope.cn/models/DiffSynth-Studio/ArtAug-lora-FLUX.1dev-v1), [HuggingFace](https://huggingface.co/ECNU-CILab/ArtAug-lora-FLUX.1dev-v1)
* Demo: [ModelScope](https://modelscope.cn/aigc/imageGeneration?tab=advanced&versionId=7228&modelType=LoRA&sdVersion=FLUX_1&modelUrl=modelscope%3A%2F%2FDiffSynth-Studio%2FArtAug-lora-FLUX.1dev-v1%3Frevision%3Dv1.0), HuggingFace (coming soon)

## Methodology

![](workflow.jpg)

The ArtAug project is inspired by reasoning approaches such as GPT-o1, which rely on model interaction and self-correction. We developed a framework that enhances image generation models through interaction with image understanding models. The training process of ArtAug consists of the following steps:

1. **Synthesis-Understanding Interaction**: After generating an image with the image generation model, we employ a multimodal large language model (Qwen2-VL-72B) to analyze the image content and suggest modifications, which are then used to regenerate a higher-quality image.
2. **Data Generation and Filtering**: Interactive generation involves long inference times and sometimes produces poor image content. We therefore generate a large batch of image pairs offline, filter them, and use them for subsequent training.
3. **Differential Training**: We apply differential training techniques to train a LoRA model, enabling it to learn the differences between images before and after enhancement, rather than training directly on the dataset of enhanced images (a toy sketch of this idea appears at the end of this section).
4. **Iterative Enhancement**: The trained LoRA model is fused into the base model, and the entire process is repeated with the fused model until the interaction algorithm no longer yields significant enhancements. The LoRA models produced in each iteration are combined into this final model.

This model integrates the aesthetic understanding of Qwen2-VL-72B into FLUX.1[dev], improving the quality of generated images.
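The exact differential-training objective is described in the paper; the snippet below is only a minimal, self-contained sketch of the idea: contrast a flow-matching loss on the enhanced image against one on the base image, so that the trainable (LoRA) parameters capture the enhancement difference rather than the image content itself. The `TinyDenoiser` model, the rectified-flow parameterization, and the `0.1` weighting are illustrative assumptions, not the ArtAug implementation.

```python
# Toy illustration of the differential-training idea, NOT the ArtAug code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyDenoiser(nn.Module):
    """Stand-in for the (LoRA-augmented) FLUX transformer."""
    def __init__(self):
        super().__init__()
        self.net = nn.Conv2d(3, 3, kernel_size=3, padding=1)

    def forward(self, x, t):
        # A real model would also condition on the timestep t.
        return self.net(x)

def differential_loss(model, x_base, x_enhanced, t, noise):
    # Rectified-flow interpolation between each clean image and the noise.
    xt_enh = (1 - t) * x_enhanced + t * noise
    xt_base = (1 - t) * x_base + t * noise
    # Velocity targets point from the image toward the noise.
    loss_enh = F.mse_loss(model(xt_enh, t), noise - x_enhanced)
    loss_base = F.mse_loss(model(xt_base, t), noise - x_base)
    # Fit the enhanced image while down-weighting the base image, so the
    # trainable parameters encode the enhancement difference.
    # The 0.1 weighting is an illustrative assumption.
    return loss_enh - 0.1 * loss_base

model = TinyDenoiser()
x_base, x_enh = torch.randn(1, 3, 64, 64), torch.randn(1, 3, 64, 64)
t, noise = torch.rand(1, 1, 1, 1), torch.randn(1, 3, 64, 64)
differential_loss(model, x_base, x_enh, t, noise).backward()
```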
## Usage

This model was trained with [DiffSynth-Studio](https://github.com/modelscope/DiffSynth-Studio). We recommend using DiffSynth-Studio for inference.

```shell
git clone https://github.com/modelscope/DiffSynth-Studio.git
cd DiffSynth-Studio
pip install -e .
```

```python
import torch
from diffsynth import ModelManager, FluxImagePipeline, download_customized_models

# Download the LoRA weights.
lora_path = download_customized_models(
    model_id="DiffSynth-Studio/ArtAug-lora-FLUX.1dev-v1",
    origin_file_path="merged_lora.safetensors",
    local_dir="models/lora"
)[0]

# Load FLUX.1-dev and apply the LoRA at full strength.
model_manager = ModelManager(torch_dtype=torch.bfloat16, device="cuda",
                             model_id_list=["FLUX.1-dev"])
model_manager.load_lora(lora_path, lora_alpha=1.0)
pipe = FluxImagePipeline.from_model_manager(model_manager)

image = pipe(prompt="a house", seed=0)
image.save("image_artaug.jpg")
```

Since this model is saved in the standard FLUX LoRA format, it can be loaded by most LoRA loaders, allowing you to integrate it into your own workflow (see the sketch after the examples below).

## Examples

|FLUX.1-dev|FLUX.1-dev + ArtAug LoRA|
|-|-|
|![](gallary/image_1_base.jpg)|![](gallary/image_1_enhance.jpg)|
|![](gallary/image_2_base.jpg)|![](gallary/image_2_enhance.jpg)|
|![](gallary/image_3_base.jpg)|![](gallary/image_3_enhance.jpg)|
|![](gallary/image_4_base.jpg)|![](gallary/image_4_enhance.jpg)|
|![](gallary/image_5_base.jpg)|![](gallary/image_5_enhance.jpg)|
|![](gallary/image_6_base.jpg)|![](gallary/image_6_enhance.jpg)|
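As noted in the Usage section, the LoRA uses the standard FLUX LoRA format. As one example of an alternative workflow, the following is a minimal sketch of loading it with Hugging Face `diffusers`; the repository id and `weight_name` are assumptions taken from the links in this card, and compatibility of the merged weights with the `diffusers` loader has not been verified here.

```python
# Hedged sketch: loading the LoRA with Hugging Face diffusers instead of
# DiffSynth-Studio. The repo id and weight_name are assumptions taken from
# this card; verify them against the actual repository contents.
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")
pipe.load_lora_weights(
    "ECNU-CILab/ArtAug-lora-FLUX.1dev-v1",
    weight_name="merged_lora.safetensors",  # assumed file name (see Usage above)
)

image = pipe(
    prompt="a house",
    num_inference_steps=30,
    guidance_scale=3.5,
    generator=torch.Generator("cuda").manual_seed(0),
).images[0]
image.save("image_artaug_diffusers.jpg")
```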