A survey on LoRA of large language models
Higher Education Press
Image: Illustration of full fine-tuning, LoRA and its variants for improving downstream adaptation
Credit: HIGHER EDUCATION PRESS
Low-Rank Adaptation (LoRA), which updates dense neural-network layers with pluggable low-rank matrices, is one of the best-performing parameter-efficient fine-tuning paradigms. It also offers significant advantages in cross-task generalization and privacy preservation. LoRA has therefore attracted considerable attention recently, and the volume of related literature has grown exponentially. To provide a comprehensive overview of this progress, a research team led by Yuren Mao from Zhejiang University published a survey on LoRA of large language models. The survey was published on 15 July 2025 in Frontiers of Computer Science, co-published by Higher Education Press and Springer Nature.
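At its core, LoRA freezes a pretrained weight matrix W and learns only an additive low-rank update ΔW = BA, so the number of trainable parameters is a small fraction of the original layer. The sketch below illustrates this idea in PyTorch; the class name, rank r, and scaling factor are illustrative assumptions for this article, not details taken from the survey.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Illustrative sketch: a frozen linear layer with a pluggable low-rank update."""

    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        # The pretrained dense weights stay frozen.
        for p in self.base.parameters():
            p.requires_grad = False
        # Only the low-rank factors A (r x in) and B (out x r) are trained.
        self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, r))
        self.scaling = alpha / r

    def forward(self, x):
        # y = base(x) + scaling * x (BA)^T, i.e. the frozen output plus the low-rank update.
        return self.base(x) + self.scaling * (x @ self.A.T @ self.B.T)
```

Because the update BA is kept separate from the frozen weights, it can be merged into the base layer for inference or detached and swapped like a plugin, which is what makes LoRA modules easy to share and compose.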
In this paper, the researchers categorize and review the progress of LoRA from several perspectives, including (1) downstream adaptation improving variants that enhance LoRA’s performance on specific tasks; (2) cross-task generalization methods that combine multiple LoRA plugins to achieve generalization across different tasks; (3) efficiency-improving techniques that boost the computational efficiency of LoRA; (4) data privacy-preserving methods that utilize LoRA in federated learning; and (5) various applications of LoRA. This comprehensive analysis provides valuable background knowledge, research trends, and technical insights for researchers and practitioners working with large language models, helping them navigate the rapidly growing body of literature on LoRA.
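For the cross-task generalization direction (point 2 above), multiple task-specific LoRA plugins are combined into a single model. As a rough illustration only, and not any particular method reviewed in the survey, the sketch below merges several plugins by taking a weighted sum of their low-rank updates; the function name and weighting scheme are assumptions made for this example.

```python
import torch

def merge_lora_plugins(base_weight, plugins, weights):
    """Illustrative sketch: merge task-specific LoRA plugins into one dense weight.

    base_weight: (out, in) frozen pretrained weight matrix
    plugins:     list of (A, B, scaling) with A of shape (r, in) and B of shape (out, r)
    weights:     per-plugin mixing coefficients
    """
    merged = base_weight.clone()
    for (A, B, scaling), w in zip(plugins, weights):
        # Each plugin contributes its scaled low-rank update scaling * B @ A.
        merged += w * scaling * (B @ A)
    return merged
```

More sophisticated composition strategies covered in the survey learn the mixing coefficients or route between plugins per input, rather than using fixed weights as in this simple sketch.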
Looking ahead, the survey proposes several future directions for LoRA. In Generative-as-a-Service (GaaS), LoRA’s pluggable nature can facilitate efficient construction and execution of diverse functions, enabling rapid adaptation to service updates. For continued pre-training, enhancing LoRA can reduce computational costs in domain-specific model training. Additionally, in LLM-based autonomous agents, LoRA can be used to assign roles and manage memory, improving their adaptability and efficiency. These future directions highlight the potential of LoRA to expand its applications and improve its effectiveness in various scenarios.