---
tags:
- text-to-image
- flux
- lora
- diffusers
- template:sd-lora
- ai-toolkit
widget:
- text: An ornate green faberge egg faberge_egg
  output:
    url: samples/1733410745801__000001000_0.jpg
- text: A royal red faberge egg with gold highlights on a sturdy base faberge_egg
  output:
    url: samples/1733410764369__000001000_1.jpg
- text: Four differently-styled unique faberge eggs next to each other faberge_egg
  output:
    url: samples/1733410782939__000001000_2.jpg
- text: >-
    Four faberge eggs next to each other on a wooden desk. Each egg is a
    different color than the others. Each egg is of a completely different
    style than the others. faberge_egg
  output:
    url: images/example_pahmsogiv.png
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: faberge_egg
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
---
# fabergeegg

Model trained with AI Toolkit by Ostris.
## Trigger words

You should use `faberge_egg` to trigger the image generation.
## Download model and use it with ComfyUI, AUTOMATIC1111, SD.Next, Invoke AI, etc.

Weights for this model are available in Safetensors format.

Download them in the Files & versions tab.
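If you prefer to fetch the weights programmatically rather than through the web UI, the `huggingface_hub` client can download the LoRA file directly. This is a minimal sketch, assuming the file name `fabergeegg.safetensors` used in the diffusers example below:

```python
from huggingface_hub import hf_hub_download

# Download the LoRA weights from this repository into the local HF cache.
# The filename is taken from the diffusers example below; adjust it if the
# repo layout differs.
lora_path = hf_hub_download(
    repo_id="sohvren/fabergeegg",
    filename="fabergeegg.safetensors",
)

print(lora_path)  # point ComfyUI / AUTOMATIC1111 / SD.Next / Invoke AI at this file
```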
## Use it with the 🧨 diffusers library

```python
from diffusers import AutoPipelineForText2Image
import torch

pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.bfloat16).to('cuda')
pipeline.load_lora_weights('sohvren/fabergeegg', weight_name='fabergeegg.safetensors')
image = pipeline('An ornate green faberge egg faberge_egg').images[0]
image.save("my_image.png")
```
For more details, including weighting, merging, and fusing LoRAs, check the documentation on loading LoRAs in diffusers.
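As a rough illustration of the weighting mentioned above (a sketch under the assumption that the pipeline from the example has already loaded this LoRA, and that you are on a recent diffusers release), one common pattern is to fuse the LoRA into the base weights at a chosen scale:

```python
# Assumes `pipeline` was created and load_lora_weights() was called as in the
# example above. Method names follow recent diffusers releases; check your
# installed version.

# Fuse the LoRA into the base weights at reduced strength.
pipeline.fuse_lora(lora_scale=0.8)

image = pipeline(
    "A royal red faberge egg with gold highlights on a sturdy base faberge_egg"
).images[0]
image.save("faberge_scaled.png")

# Undo the fusion to return to the plain base model.
pipeline.unfuse_lora()
```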