FLUX LoRA Training

A few days ago, I started experimenting with local AI image generation using ComfyUI - building workflows via a web interface and running cutting-edge models like FLUX.1-dev entirely on my own hardware.

That worked surprisingly well.

But I wanted more personalized results.

👉 That’s where LoRA training comes in.

LoRA (Low-Rank Adaptation) is a lightweight approach for fine-tuning large image generation models. Instead of retraining the full model, you train a small add-on that teaches the base model a specific style, subject, or concept - which can then be stacked on top of models like Flux inside ComfyUI.
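The core idea is easy to see in a few lines of code. Here is a toy NumPy sketch (the dimensions and rank are illustrative, not FLUX's actual layer sizes): instead of updating a full weight matrix W, LoRA learns two small matrices A and B whose scaled product is added on top of the frozen base weights.

```python
import numpy as np

# Toy illustration of Low-Rank Adaptation (LoRA).
# Dimensions and rank are made up for demonstration.
d_out, d_in, rank = 1024, 1024, 16
alpha = 16  # common scaling convention: update is scaled by alpha / rank

W = np.random.randn(d_out, d_in)        # frozen base weights
A = np.random.randn(rank, d_in) * 0.01  # small trainable matrix
B = np.zeros((d_out, rank))             # B starts at zero, so the adapter is a no-op initially

# The adapted layer: base weights plus the low-rank update
W_adapted = W + (alpha / rank) * (B @ A)

full_params = W.size
lora_params = A.size + B.size
print(f"full fine-tune: {full_params:,} params, "
      f"LoRA adapter: {lora_params:,} params "
      f"({100 * lora_params / full_params:.1f}%)")
```

With these toy numbers the adapter is about 3% of the layer's parameters - which is why a trained LoRA is a small file you can stack onto the base model rather than a full checkpoint.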

For training, I used FluxGym:

  • A simple web UI for training FLUX LoRAs
  • Designed for low-VRAM setups (12–20 GB)

The workflow is refreshingly straightforward:

  • Name your LoRA
  • Define trigger words
  • Upload images (automatic captioning supported)
  • Train
  • Drop the resulting LoRA into ComfyUI and start generating images in your own style.
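The dataset-preparation step above can be sketched in a few lines. FluxGym builds on kohya-style training scripts, where each image is paired with a same-named .txt caption containing the trigger word. This is a minimal illustration with made-up file names and a hypothetical trigger word, not FluxGym's internal code:

```python
from pathlib import Path

# Sketch of a LoRA training dataset layout:
# each image gets a same-named .txt caption file
# that starts with the trigger word.
dataset = Path("dataset")
dataset.mkdir(exist_ok=True)

trigger = "carli_cat"  # hypothetical trigger word
captions = {
    "img_001.jpg": "a cat sleeping on a sofa",
    "img_002.jpg": "a cat playing with a toy in the garden",
}

for image_name, description in captions.items():
    caption_file = dataset / Path(image_name).with_suffix(".txt").name
    caption_file.write_text(f"{trigger}, {description}\n")

print(sorted(p.name for p in dataset.glob("*.txt")))
```

At generation time, including the trigger word in your prompt is what activates the trained concept.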


Personal project: I trained a LoRA using ~150 images of my cat Carli:

  • Sleeping, playing, sitting, running
  • Different lighting and environments

The results are honestly impressive - have a look at the attached video!
Side note on hardware

  • With FluxGym’s low-VRAM mode, training with 20–30 images works on my local RTX 3080 (12 GB)
  • For the full 150-image dataset, training would have taken far too long locally

➡️ I used an NVIDIA A100 (40 GB) on an HPC cluster

➡️ Training time: ~5 hours

This experiment really highlights how far local, customizable generative AI has come - from inference to personalization.