---
tags:
- text-to-image
- flux
- lora
- diffusers
- template:sd-lora
- ai-toolkit
widget:
- text: >-
    a model standing against a white background wearing a navy hooded
    sweatshirt with the text "cgp" printed on it. [trigger]
  output:
    url: samples/1728739100740__000001000_0.jpg
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: hood
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
---
# test-hood
## Trigger words
You should use `hood` to trigger the image generation.
## Download model and use it with ComfyUI, AUTOMATIC1111, SD.Next, Invoke AI, etc.
Weights for this model are available in Safetensors format. Download them in the **Files & versions** tab.
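If you prefer to fetch the weights programmatically instead of through the web UI, here is a minimal sketch using `huggingface_hub`. It assumes the LoRA file is named `test-hood.safetensors`, matching the `weight_name` used in the diffusers example below.

```py
from huggingface_hub import hf_hub_download

# Download the LoRA weights file from this repository into the local HF cache.
# The filename is assumed to match the Safetensors file in the Files & versions tab.
lora_path = hf_hub_download(
    repo_id="seawolf2357/test-hood",
    filename="test-hood.safetensors",
)
print(lora_path)  # local path to the downloaded .safetensors file
```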
## Use it with the 🧨 diffusers library
```py
from diffusers import AutoPipelineForText2Image
import torch

# Load the FLUX.1-dev base model and apply the test-hood LoRA weights
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.bfloat16).to('cuda')
pipeline.load_lora_weights('seawolf2357/test-hood', weight_name='test-hood.safetensors')

# Include the trigger word "hood" in the prompt to activate the LoRA
image = pipeline('a model standing against a white background wearing a navy hooded sweatshirt with the text "cgp" printed on it. hood').images[0]
image.save("my_image.png")
```
For more details, including weighting, merging, and fusing LoRAs, check the documentation on loading LoRAs in diffusers.
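As one example of weighting, the LoRA can be fused into the base model at a reduced strength. A minimal sketch, assuming the pipeline from the snippet above is already loaded; the `0.8` scale is an illustrative value, not a recommendation from this repository.

```py
# Fuse the loaded LoRA into the base model weights at a reduced strength,
# so generation no longer needs the adapter as a separate module.
pipeline.fuse_lora(lora_scale=0.8)
image = pipeline('a model standing against a white background wearing a navy hooded sweatshirt with the text "cgp" printed on it. hood').images[0]

# Revert to the plain base model if needed.
pipeline.unfuse_lora()
```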