LoRA finetuning

LoRA (Low-Rank Adaptation) is a popular and effective technique for fine-tuning large language models (LLMs) and diffusion models. Instead of updating all of a model's weights, it injects small trainable low-rank matrices into specific layers, most commonly the attention layers, while the original weights stay frozen.
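
To make the idea concrete, here is a minimal sketch in PyTorch of how a low-rank adapter can wrap an existing linear layer (for example, an attention projection). The class name, rank, and scaling defaults below are illustrative assumptions, not any particular library's API.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Wraps a frozen linear layer and adds a trainable low-rank update.

    The effective weight becomes W + (alpha / r) * B @ A, where A and B
    are small matrices of rank r. Only A and B are trained.
    """

    def __init__(self, base_linear: nn.Linear, r: int = 8, alpha: int = 16):
        super().__init__()
        self.base = base_linear
        self.base.weight.requires_grad_(False)  # freeze the original weights
        if self.base.bias is not None:
            self.base.bias.requires_grad_(False)

        in_features = base_linear.in_features
        out_features = base_linear.out_features
        # Low-rank factors: A projects down to rank r, B projects back up.
        # B starts at zero so the adapter initially leaves the model unchanged.
        self.lora_A = nn.Parameter(torch.randn(r, in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(out_features, r))
        self.scaling = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Original output plus the scaled low-rank correction.
        return self.base(x) + self.scaling * (x @ self.lora_A.T @ self.lora_B.T)
```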

A significant advantage of LoRA is that it enables efficient fine-tuning while keeping the resulting files small: only the injected weights are trained and saved, so a fine-tuned model can be shared as a lightweight adapter instead of a full copy of the model.
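
Because only the adapter parameters change, sharing a fine-tune can be as simple as saving those parameters on their own. The snippet below is a sketch under that assumption: `model` is presumed to contain modules like the `LoRALinear` wrapper above, and the filename is just an example.

```python
import torch

# `model` is assumed to be a network whose attention projections were
# replaced with LoRALinear wrappers (see the sketch above).
lora_state = {
    name: param.detach().cpu()
    for name, param in model.named_parameters()
    if "lora_" in name  # keep only the trainable low-rank factors
}
# The frozen base weights are excluded, so the saved file stays small.
torch.save(lora_state, "lora_adapter.pt")  # hypothetical filename
```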

Fine-tuning diffusion models can take several forms, including Dreambooth, textual inversion, and LoRA. LoRA has emerged as the most prevalent approach because it is easy to use, fine-tunes quickly, and is compatible with the Dreambooth approach.