LoRA
LoRA (Low-Rank Adaptation) is a technique for efficiently fine-tuning large models, such as language models. Instead of updating all parameters during training, LoRA freezes the pretrained weights and adds a small number of trainable low-rank matrices whose product approximates the weight update, which greatly reduces the number of parameters that must be trained and stored. This lets a model be adapted to a new task without the memory and compute cost of traditional full fine-tuning, making it especially attractive for large models where updating every weight would be resource-intensive.
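As a rough illustration of the idea, the sketch below implements a LoRA-style linear layer in PyTorch: the pretrained weight is frozen, and only two small matrices (named `lora_A` and `lora_B` here, with an assumed rank of 8 and a scaling factor `alpha`) are trained. The names, initialization, and hyperparameters are illustrative assumptions, not a definitive implementation.

```python
import torch
import torch.nn as nn


class LoRALinear(nn.Module):
    """Linear layer with a frozen base weight plus a trainable low-rank update."""

    def __init__(self, in_features: int, out_features: int, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        # Frozen pretrained weight W: not updated during fine-tuning.
        self.weight = nn.Parameter(torch.empty(out_features, in_features), requires_grad=False)
        nn.init.normal_(self.weight, std=0.02)

        # Trainable low-rank factors: the effective update is delta_W = B @ A,
        # where rank is much smaller than in_features and out_features.
        self.lora_A = nn.Parameter(torch.randn(rank, in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(out_features, rank))  # zero init: training starts from W unchanged
        self.scaling = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # y = x W^T + scaling * (x A^T) B^T
        base = x @ self.weight.T
        lora = (x @ self.lora_A.T) @ self.lora_B.T
        return base + self.scaling * lora


if __name__ == "__main__":
    layer = LoRALinear(in_features=512, out_features=512, rank=8)
    x = torch.randn(2, 512)
    print(layer(x).shape)  # torch.Size([2, 512])

    # Only the low-rank factors require gradients, so the trainable
    # parameter count is a small fraction of the total.
    trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
    total = sum(p.numel() for p in layer.parameters())
    print(f"trainable params: {trainable} / {total}")
```

In this sketch only `lora_A` and `lora_B` receive gradients, so the trainable parameter count scales with the chosen rank rather than with the full weight matrix, which is the source of LoRA's memory and compute savings.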