Journal of Real-Time Image Processing, Journal Year: 2025, Issue: 22(2)
Published: April 1, 2025
Language: English
Scientific Reports, Journal Year: 2025, Issue: 15(1)
Published: March 29, 2025
To address the challenges encountered by safflower filament harvesting robots in detecting and localizing picking points in unstructured environments, this study proposes a picking-point detection and localization model based on the DSOE (Detect-Segment-OpenCV Extraction) method, integrated with a localization system using a depth camera. Firstly, YOLO-SaFi was employed to optimize the classification of the dataset, identifying harvestable filaments for further study. Secondly, a novel lightweight segmentation head (LSDH) was introduced into the model to efficiently segment the fruit balls. Using the OpenCV toolkit, contour information of the fruit balls was extracted, and the intersection of the centroid connection lines was used to determine the 2D picking points. Finally, a control system was developed that integrates the Delta robotic arm with the depth camera to precisely acquire spatial locations. Experimental results indicate that the improved YOLO-SaFi-LSDH reduces model size by 30.2%, while achieving accuracy, recall rate, and average precision of 95.0% and 96.8%, respectively, significantly outperforming conventional segmentation heads. Additionally, the system demonstrated an overall success rate of 91.0%, with errors controlled within 2.42 mm on the x-axis, 2.86 mm on the y-axis, and 3.18 mm on the z-axis. These results show that the proposed method exhibits superior performance in complex environments, providing a solid theoretical foundation for the development of intelligent harvesting robots.
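The OpenCV-based contour extraction, 2D point selection, and depth back-projection described in this abstract can be illustrated with a short sketch. The function name `extract_picking_point`, the use of the mean of contour centroids as the 2D point, and the camera intrinsics `fx`, `fy`, `cx`, `cy` are illustrative assumptions, not the authors' implementation.

```python
import cv2
import numpy as np

def extract_picking_point(mask, depth_map, fx, fy, cx, cy):
    """Illustrative sketch: derive a 2D picking point from a fruit-ball
    segmentation mask via contour centroids, then back-project it to 3D
    using a depth map and pinhole camera intrinsics (assumed values)."""
    # Find contours of the segmented fruit balls (binary mask from the model).
    contours, _ = cv2.findContours(mask.astype(np.uint8),
                                   cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None

    # Compute the centroid of each contour from its image moments.
    centroids = []
    for c in contours:
        m = cv2.moments(c)
        if m["m00"] > 0:
            centroids.append((m["m10"] / m["m00"], m["m01"] / m["m00"]))
    if not centroids:
        return None

    # Use the mean of the centroids as a stand-in for the intersection of
    # the centroid connection lines mentioned in the abstract.
    u, v = np.mean(np.array(centroids), axis=0)

    # Back-project the 2D point into camera coordinates using the depth value.
    z = float(depth_map[int(round(v)), int(round(u))])  # depth, e.g. in metres
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.array([x, y, z])
```

The returned camera-frame coordinates would still need a hand-eye transform before being sent to the Delta arm controller; that step is omitted here.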
Language: English
Cited: 0
Plant Methods, Journal Year: 2025, Issue: 21(1)
Published: March 3, 2025
As an important economic crop, the growth status of the root system of cabbage directly affects its overall health and yield. To monitor cabbage seedlings during their growth period, this study proposes a new network architecture called Swin-Unet++. This network integrates the Swin-Transformer module with residual networks and uses attention mechanisms to replace traditional convolution operations for feature extraction. It also adopts the concept of fusing contextual information from different levels, addressing the issue of insufficient extraction of the thin, mesh-like roots of seedlings. Compared with other backbone high-precision semantic segmentation networks, Swin-Unet++ achieves superior results. The results show that the accuracy of Swin-Unet++ in segmentation tasks reached as high as 98.19%, with a model parameter size of 60 M and an average response time of 29.5 ms. Compared with the classic Unet network, mIoU increased by 1.08%, verifying that the model can accurately extract fine-grained features of the roots. Furthermore, when the segmented images are used to locate the root position through contours, Swin-Unet++ has the best positioning effect. On the basis of the pixels obtained from segmentation, the maximum length, extension width, and thickness of the roots were calculated and compared against actual measurements, with resulting goodness-of-fit R² values of 94.82%, 94.43%, and 86.45%, respectively, verifying the effectiveness of the framework in extracting phenotypic traits of seedling roots. The framework developed here provides a technique for the monitoring and analysis of root systems, ultimately leading to the development of an automated platform that offers technical support for intelligent agriculture and efficient planting practices.
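The pixel-based trait measurement step described above can be sketched as follows. The trait definitions used here (bounding extents and an area-per-length thickness proxy on the largest contour), the function name `root_trait_estimates`, and the `mm_per_pixel` scale are assumptions for illustration, not the authors' exact method.

```python
import cv2
import numpy as np

def root_trait_estimates(mask, mm_per_pixel):
    """Illustrative sketch: estimate root phenotypic traits from a binary
    segmentation mask, roughly following the pixel-based measurements the
    abstract describes (maximum length, extension width, thickness)."""
    contours, _ = cv2.findContours(mask.astype(np.uint8),
                                   cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None

    # Take the largest connected root region.
    root = max(contours, key=cv2.contourArea)

    # Maximum length and extension width from the axis-aligned bounding box.
    x, y, w, h = cv2.boundingRect(root)
    max_length_mm = h * mm_per_pixel
    extension_width_mm = w * mm_per_pixel

    # Rough thickness proxy: mean root area per unit length (assumed definition).
    area_px = cv2.contourArea(root)
    thickness_mm = (area_px / max(h, 1)) * mm_per_pixel

    return {
        "max_length_mm": max_length_mm,
        "extension_width_mm": extension_width_mm,
        "thickness_mm": thickness_mm,
    }
```

Such pixel-derived traits would then be regressed against manual measurements to obtain goodness-of-fit values like the R² figures reported in the abstract.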
Language: English
Cited: 0
Journal of Real-Time Image Processing, Journal Year: 2025, Issue: 22(2)
Published: April 1, 2025
Language: English
Cited: 0