By combining image outpainting with knowledge distillation, the system can identify weeds that lie partially or fully outside the frame and still recommend where to spray. In tests across three public datasets, KDOSS-Net outperformed state-of-the-art approaches in accuracy while remaining lightweight enough to run on the embedded hardware used in agricultural robots.
Conventional crop–weed segmentation models, typically built on deep learning, classify every pixel in an image as crop, weed, or background. These systems support targeted herbicide application and automated weeding, but they have a critical blind spot: they only “see” what is currently inside the camera’s field of view (FOV). Weeds just outside that view go undetected, leading to under-spraying, weed escape, and repeat interventions. Existing outpainting methods can in principle reconstruct the missing regions, but they are too computationally heavy for mobile platforms such as field robots. Because of these limitations, real-world spraying systems risk both inefficiency and chemical overuse. Accurate segmentation therefore needs to incorporate out-of-FOV information without sacrificing speed or deployability.
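To make the pixel-wise formulation concrete, the short Python sketch below shows how a generic segmentation network’s per-pixel scores become a crop/weed/background label map and a spray mask. It is illustrative only: the tensor shapes, the class indexing, and the random logits are assumptions, not code from the study.

```python
import torch

# Hypothetical per-pixel scores from a segmentation network for one image:
# shape (classes=3, height, width); the class order 0=background, 1=crop,
# 2=weed is an assumption for illustration.
logits = torch.randn(3, 256, 256)

# Pixel-wise classification: each pixel takes the class with the highest score.
labels = logits.argmax(dim=0)        # (256, 256) map of values in {0, 1, 2}

# A targeted sprayer only needs the weed pixels.
spray_mask = labels == 2             # True where spraying is recommended
print(f"weed pixels: {int(spray_mask.sum())} / {spray_mask.numel()}")
```

Outpainting would extend the scores beyond the captured frame before this classification step, which is exactly the gap KDOSS-Net targets.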
A study (DOI: 10.1016/j.plaphe.2025.100098) published in Plant Phenomics on 20 August 2025 by Kang Ryoung Park’s team at Dongguk University advances precision agriculture by enabling lightweight AI systems to accurately detect weeds beyond the camera’s field of view, improving herbicide efficiency and crop yield management.
The proposed two-part framework, KDOSS-Net, was trained with the Adam optimizer and a teacher–student learning strategy, followed by systematic evaluation of its training dynamics, architecture, and real-world usability. Both the teacher model (OPOSS-Net) and the lightweight student model (SSWO-Net) demonstrated smooth convergence with steadily decreasing training and validation losses, confirming effective learning without overfitting. During training, knowledge distillation successfully transferred semantic understanding from the teacher to the student: cross-entropy loss dropped rapidly at first, while the knowledge distillation losses declined more gradually, indicating progressive absorption of high-level structural information.

Ablation studies revealed that restoring areas beyond the limited camera FOV through outpainting in OPOSS-Net significantly improved segmentation, boosting mean intersection over union (mIOU) by up to 2.02%. Object prediction and outpainting together yielded stronger results than segmentation alone, and sequential training of OPOSS-Net’s sub-networks enhanced both speed and stability. The student model gained a further 2.16% mIOU from knowledge distillation, and techniques such as channel expansion and nonlinear multilayer perceptron transformations enhanced feature transfer.

When benchmarked on three public datasets (Rice seedling and weed, CWFID, and BoniRob), KDOSS-Net outperformed state-of-the-art models including U-Net, SegNet, and DeepLabv3+, achieving the best scores in mIOU, F1, and weed detection accuracy beyond the FOV. Notably, the lightweight SSWO-Net ran efficiently on embedded systems such as the NVIDIA Jetson TX2 and even mobile devices, confirming its suitability for real-time, on-field agricultural applications.
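The release does not give the exact loss formulation, but the behavior it describes (a fast-dropping cross-entropy term alongside a more slowly declining distillation term) matches a standard temperature-softened distillation objective. The PyTorch sketch below is a minimal illustration under that assumption; the temperature T, the weight alpha, and the per-pixel averaging are hypothetical choices, not values from the paper.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, targets, T=4.0, alpha=0.5):
    """Hard-label cross-entropy plus a softened teacher/student KL term.

    student_logits, teacher_logits: (batch, classes, H, W) per-pixel scores
    targets: (batch, H, W) integer ground-truth labels
    T: temperature softening the distributions (assumed value)
    alpha: weight between the hard-label and distillation terms (assumed)
    """
    # Hard-label term: ordinary per-pixel cross-entropy against ground truth.
    ce = F.cross_entropy(student_logits, targets)

    # Soft-label term: KL divergence between temperature-softened class
    # distributions at every pixel; the T*T factor keeps gradient magnitudes
    # comparable across temperatures.
    log_p_student = F.log_softmax(student_logits / T, dim=1)
    p_teacher = F.softmax(teacher_logits / T, dim=1)
    kd = F.kl_div(log_p_student, p_teacher, reduction="batchmean") * (T * T)

    return alpha * ce + (1.0 - alpha) * kd

# Smoke test with random tensors (3 classes: background, crop, weed).
student_logits = torch.randn(2, 3, 64, 64, requires_grad=True)
teacher_logits = torch.randn(2, 3, 64, 64)
targets = torch.randint(0, 3, (2, 64, 64))
loss = distillation_loss(student_logits, teacher_logits, targets)
loss.backward()
print(float(loss))
```

In this setup the cross-entropy term fits the hard labels quickly, while the KL term keeps pulling the student’s per-pixel distributions toward the teacher’s, which is consistent with the loss curves described above.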
In conclusion, KDOSS-Net achieved the highest mIOU on three public crop/weed datasets: Rice seedling and weed (0.6315), CWFID (0.7101), and BoniRob (0.7524), surpassing models like U-Net and DeepLabv3+. Its lightweight SSWO-Net ran efficiently on both GPUs and low-power devices such as the Jetson TX2 and smartphones, enabling real-time field use. Moreover, integration with the vision-language model LLaVA allowed automatic herbicide recommendations, demonstrating the framework’s potential for intelligent, accessible weed management systems.
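The release does not detail how the LLaVA integration works. As a rough illustration, a publicly available LLaVA checkpoint can be queried through Hugging Face Transformers with a segmented field image and a prompt; everything here (the checkpoint, the file name, and the prompt text) is an assumption, not the authors’ pipeline.

```python
import torch
from PIL import Image
from transformers import AutoProcessor, LlavaForConditionalGeneration

# Public LLaVA checkpoint on the Hugging Face Hub (an assumption; the study's
# exact model and serving setup are not described in this release).
model_id = "llava-hf/llava-1.5-7b-hf"
processor = AutoProcessor.from_pretrained(model_id)
model = LlavaForConditionalGeneration.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

# Hypothetical input: a field photo with the weed segmentation overlaid.
image = Image.open("segmented_field.png")
prompt = (
    "USER: <image>\nThe highlighted regions are weeds detected by a crop/weed "
    "segmentation model. Recommend a suitable herbicide and where to apply it. "
    "ASSISTANT:"
)

inputs = processor(text=prompt, images=image, return_tensors="pt").to(
    model.device, torch.float16
)
output = model.generate(**inputs, max_new_tokens=128)
print(processor.decode(output[0], skip_special_tokens=True))
```

Pairing the lightweight on-device segmenter with an off-board vision-language model in this way is one plausible route to the behavior described above.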
###
References
DOI: 10.1016/j.plaphe.2025.100098
Original URL: https://doi.org/10.1016/j.plaphe.2025.100098
Funding information
This work was supported in part by the Ministry of Science and ICT (MSIT), Korea, through the Information Technology Research Center (ITRC) Support Program under Grant IITP-2025-RS-2020-II201789, and in part by the Artificial Intelligence Convergence Innovation Human Resources Development Supervised by the Institute of Information & Communications Technology Planning & Evaluation (IITP) under Grant IITP-2025-RS-2023-00254592.
About Plant Phenomics
Plant Phenomics is dedicated to publishing novel research that advances all aspects of plant phenotyping, from the cell to the plant population level, using innovative combinations of sensor systems and data analytics. The journal also aims to connect phenomics to other science domains, such as genomics, genetics, physiology, molecular biology, bioinformatics, statistics, mathematics, and computer science. Plant Phenomics thus contributes to advancing plant sciences, agriculture, forestry, and horticulture by addressing key scientific challenges in plant phenomics.
Journal
Plant Phenomics
Method of Research
Experimental study
Subject of Research
Not applicable
Article Title
KDOSS-net: Knowledge distillation-based outpainting and semantic segmentation network for crop and weed images
Article Publication Date
20-Aug-2025
COI Statement
The authors declare that they have no competing interests.