PodNet: a deep learning breakthrough for real-time soybean pod detection in the field
Nanjing Agricultural University, The Academy of Science
By combining a low-cost, video-based dataset construction workflow with modern deep learning, PodNet extracts pod regions from complex, real-world crop images in real time. Achieving a mean average precision (mAP@50) of 0.786, the model overcomes the limitations of postharvest and indoor phenotyping methods.
Soybean (Glycine max) is the world's most important legume crop, prized for its high protein and oil content. Pods, as critical organs, directly influence seed number, size, and quality, which are key determinants of yield. Pod dimensions affect nutrient partitioning and seed weight, while pod count and color inform yield potential and maturity. However, phenotypic measurement of pods in breeding programs has traditionally required extensive manual labor across large populations and growth stages; these efforts are costly, time-consuming, and prone to subjective bias. Given these challenges, there is an urgent need for automated, accurate, and scalable pod perception methods that can be deployed in dynamic preharvest field environments.
A study (DOI: 10.1016/j.plaphe.2025.100052) published in Plant Phenomics on 19 May 2025 by Xiujuan Chai's team at the Agricultural Information Institute, Chinese Academy of Agricultural Sciences, provides a scalable, low-cost solution for high-throughput pod analysis, paving the way for more efficient soybean breeding and selection.
In this study, the researchers adopted a cost-effective, video-based data collection methodology to construct a robust dataset for soybean pod instance segmentation in preharvest fields. Instead of relying solely on static images, they recorded 1,402 video clips across varying light conditions, plant orientations, and cultivars, totaling more than 127 minutes of footage. This strategy minimized site visits and operational costs while capturing real-world complexities such as wind, leaf occlusion, and lighting variation. From the raw videos, more than 5,000 high-quality images were filtered using no-reference quality assessment indicators; after manual checks, 488 images were selected for annotation, yielding more than 20,000 pod masks. Annotation was assisted by the Segment Anything Model (SAM), which reduced annotation time per image from 10.2 to 6.5 minutes.

Building on this dataset, the team developed PodNet, a lightweight instance segmentation model based on YOLOv8-nano and designed for small-object detection in resource-limited agricultural settings. PodNet incorporates a hierarchical prototype aggregation (HPA) strategy and a U-EMA Protonet module, which together enhance feature fusion and small-object recognition.

In experimental evaluations, PodNet achieved a mean average precision (mAP@50) of 0.786, outperforming baseline models while maintaining real-time inference speeds of 8.8 ms per image on a GPU and 32 ms on an edge device. Visualization results confirmed that PodNet segmented pods effectively under challenging conditions such as low light, occlusion, and complex backgrounds. Although limitations remain for diseased or heavily occluded pods, performance was further strengthened by data augmentation and backdrop-free training, confirming the model's transferability. Overall, the integrated pipeline offers a scalable, low-cost, high-precision solution for in-field soybean phenotyping.
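To make the frame-filtering step concrete: the release does not name the specific no-reference quality indicators used, but variance of the Laplacian is a widely used no-reference sharpness proxy. The following is a minimal sketch of filtering sharp frames out of a field video, assuming OpenCV; the sampling step and threshold are illustrative, not the authors' values.

```python
import cv2

def extract_sharp_frames(video_path, sharpness_threshold=100.0, step=10):
    """Sample every `step`-th frame; keep frames whose variance-of-Laplacian
    score (a common no-reference sharpness proxy) exceeds the threshold."""
    keep = []
    cap = cv2.VideoCapture(video_path)
    idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:  # end of video
            break
        if idx % step == 0:
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            score = cv2.Laplacian(gray, cv2.CV_64F).var()
            if score > sharpness_threshold:
                keep.append((idx, frame))
        idx += 1
    cap.release()
    return keep
```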
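SAM-assisted annotation of the kind described can be approximated with the official segment-anything package. The sketch below uses SAM's automatic mask generator to propose candidate pod masks for a human annotator to accept or reject; the authors' actual workflow (for example, interactive point prompts) is not detailed in this release, and the checkpoint path and area bounds here are illustrative.

```python
import cv2
from segment_anything import sam_model_registry, SamAutomaticMaskGenerator

# Load a pretrained SAM checkpoint (ViT-B variant; file path is illustrative).
sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b_01ec64.pth")
mask_generator = SamAutomaticMaskGenerator(sam)

# SAM expects an RGB uint8 image.
image = cv2.cvtColor(cv2.imread("field_frame.jpg"), cv2.COLOR_BGR2RGB)
masks = mask_generator.generate(image)  # list of dicts: 'segmentation', 'area', ...

# Keep small-to-medium regions as candidate pods for manual review
# (area bounds are hypothetical, not from the paper).
candidates = [m for m in masks if 200 < m["area"] < 20_000]
```

Pre-generating candidate masks this way is what shrinks per-image annotation time: the annotator curates proposals instead of drawing every polygon from scratch.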
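PodNet's HPA strategy and U-EMA Protonet module are custom additions and are not part of the off-the-shelf Ultralytics release, but the YOLOv8-nano segmentation baseline it builds on can be trained, validated, and timed in a few lines. A sketch, assuming the ultralytics package and a hypothetical dataset config soybean_pods.yaml:

```python
from ultralytics import YOLO

# Start from the stock nano segmentation weights (the PodNet baseline).
model = YOLO("yolov8n-seg.pt")

# Fine-tune on a pod instance-segmentation dataset (YAML name is hypothetical).
model.train(data="soybean_pods.yaml", imgsz=640, epochs=100)

# Validation reports mask mAP@50 among its metrics, the headline figure here.
metrics = model.val()

# Time single-image inference to approximate the per-image latency numbers.
results = model.predict("field_frame.jpg")
print(results[0].speed)  # preprocess/inference/postprocess times in ms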
PodNet provides a powerful tool for accelerating soybean breeding. By enabling precise, real-time extraction of pod traits directly from field images, it reduces labor costs, minimizes errors, and scales up phenotyping capacity. Researchers can now assess pod morphology, count, and distribution across large breeding populations with greater efficiency, improving selection for high-yield, high-quality cultivars. Moreover, PodNet lays a foundation for cross-scale phenotyping that links pod-level insights with whole-plant and seed-level traits, offering a more complete picture of genotype-phenotype relationships.
###
References
DOI: 10.1016/j.plaphe.2025.100052
Original URL: https://doi.org/10.1016/j.plaphe.2025.100052
Funding information
This work was funded in part by the China Postdoctoral Science Foundation [grant number 2023M743821]; the Beijing Smart Agriculture Innovation Consortium Project [grant number BAIC10-2024]; the Innovation Program of the Chinese Academy of Agricultural Sciences [grant number CAAS-ASTIP-2025-AII]; and the Central Public-interest Scientific Institution Basal Research Fund [grant number JBYW-AII-2025-04].
About Plant Phenomics
Plant Phenomics is dedicated to publishing novel research that advances all aspects of plant phenotyping, from the cell to the plant-population level, using innovative combinations of sensor systems and data analytics. The journal also aims to connect phenomics to other science domains, such as genomics, genetics, physiology, molecular biology, bioinformatics, statistics, mathematics, and computer science. Plant Phenomics thus contributes to advancing plant science and agriculture, forestry, and horticulture by addressing key scientific challenges in plant phenomics.