Training a full model like Stable Diffusion takes millions of dollars and weeks of compute. LoRA (Low-Rank Adaptation) cheats: instead of retraining all the model weights, it freezes them and trains a pair of small low-rank matrices whose product approximates the weight change needed for the new concept. Think of it like putting a filter on a camera lens—the main lens stays the same, but the filter changes the output. In ComfyUI, you add a Load LoRA node between your checkpoint loader and the sampler, and adjust its strength (usually 0.5–1.0) to control how strongly it applies. Want to generate in the style of Studio Ghibli or add a specific character? There's probably a LoRA for that—usually just 10–200MB instead of the 4–6GB base model.
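The "sidecar" idea can be sketched in a few lines of numpy. This is an illustration of the math, not the ComfyUI or diffusers API: the frozen weight `W` stays untouched, while the small matrices `A` and `B` carry the learned change, scaled by the strength slider. All names here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Frozen base weight (think: one attention projection in the model).
d_out, d_in, rank = 64, 64, 4
W = rng.standard_normal((d_out, d_in))

# The LoRA pair: a (rank x d_in) and B (d_out x rank).
# Only these are trained; B starts at zero so the LoRA
# initially has no effect on the output.
A = rng.standard_normal((rank, d_in)) * 0.01
B = np.zeros((d_out, rank))

strength = 0.8  # the per-LoRA weight you set in the UI

def forward(x):
    # Base path plus the scaled low-rank correction: x @ (W + s*B@A).T
    return x @ W.T + strength * (x @ A.T @ B.T)

x = rng.standard_normal((1, d_in))
y = forward(x)

# Why LoRA files are small: you ship A and B, not a new W.
full_params = W.size          # 64 * 64 = 4096
lora_params = A.size + B.size # 4*64 + 64*4 = 512
```

With rank 4 the pair holds 512 numbers versus 4096 for the full matrix, an 8x saving per layer—and real LoRAs only attach to a subset of layers, which is why files come in at megabytes rather than gigabytes.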
