One high-value crop that requires careful handling, both throughout the growing season and during post-harvest care, is broccoli. Broccoli heads are still harvested by hand because they are easily damaged. Moreover, to harvest broccoli plants when they are in peak condition, human scouting is first needed to locate the field segments where multiple plants have reached the requisite maturity level.
Hand harvesting is therefore very time-consuming, both for the harvesting itself and for the preliminary scouting needed to locate suitably mature field segments. Additionally, scouting is carried out on foot, because agricultural vehicles driving across the fields compact the soil, which is highly unfavorable in horticulture, particularly in organic systems.
The aim of the experiment was to automate this process using state-of-the-art object detection architectures trained on georeferenced, orthomosaic-derived RGB images captured during low-altitude Unmanned Aerial Vehicle (UAV) flights, and to assess their capacity to detect and classify broccoli heads by maturity level. UAVs are widely used in precision agriculture for image capture and for detecting specific conditions in the field, as they can quickly provide high-resolution imagery. Combined with computer vision techniques, this imagery can reveal the problems or conditions that need to be addressed.
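Since detectors consume fixed-size inputs while an orthomosaic covers the whole field, the mosaic is typically sliced into patches before inference. The following is a minimal sketch of that tiling step; the `tile_grid` helper, the 1024-pixel tile size, and the 128-pixel overlap are illustrative assumptions, not details from the paper:

```python
def tile_grid(width, height, tile=1024, overlap=128):
    """Compute top-left corners of overlapping tiles covering a mosaic.

    Overlap reduces the chance that a broccoli head is cut in half at a
    tile border. Extra border tiles are added so every tile is a full
    `tile` x `tile` crop ending exactly at the image edge.
    """
    step = tile - overlap
    xs = list(range(0, max(width - tile, 0) + 1, step))
    ys = list(range(0, max(height - tile, 0) + 1, step))
    if xs[-1] + tile < width:
        xs.append(width - tile)
    if ys[-1] + tile < height:
        ys.append(height - tile)
    return [(x, y) for y in ys for x in xs]

# Example: tile a mosaic matching the camera's 4096x2160 frame size.
corners = tile_grid(4096, 2160)
```

Detections from overlapping tiles would then be merged back into mosaic coordinates, typically with non-maximum suppression to remove duplicates along tile seams.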
In recent years, several object detectors have emerged, each with unique benefits and drawbacks. Some are faster but less accurate, while others achieve higher accuracy at the cost of greater computational resources, which can be prohibitive depending on the deployment platform.
For this experiment, data acquisition was performed with a custom quadcopter drone equipped with a 20-megapixel (4096×2160 resolution) mechanical-shutter CMOS RGB camera. Before image capture, 45 ground-truth targets were placed inside the broccoli field to facilitate the annotation process. Five different object detectors were compared on the resulting dataset: Faster R-CNN, SSD, CenterNet, RetinaNet, and EfficientDet-D1.
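Because the orthomosaic is georeferenced, a surveyed ground target's map coordinates can be converted into pixel positions to seed the annotations. A minimal sketch of that conversion is shown below; the function name, the UTM-style coordinates, and the 2 cm ground sampling distance are assumptions for illustration, following the standard north-up affine geotransform convention:

```python
def world_to_pixel(geo_x, geo_y, origin_x, origin_y, px_size):
    """Convert map coordinates to (col, row) in a north-up orthomosaic.

    Assumes a GDAL-style affine geotransform with no rotation:
    geo_x = origin_x + col * px_size;  geo_y = origin_y - row * px_size
    (rows increase southward, hence the sign flip on the y axis).
    """
    col = int((geo_x - origin_x) / px_size)
    row = int((origin_y - geo_y) / px_size)
    return col, row

# Hypothetical target 10 m east and 10 m south of the mosaic origin,
# at a 2 cm ground sampling distance.
col, row = world_to_pixel(500010.0, 4199990.0, 500000.0, 4200000.0, 0.02)
```

Annotators would then only need to refine bounding boxes around these seeded pixel locations rather than search the full mosaic.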
The results of the experiment clearly demonstrated that the models performed very well on the automated maturity-detection task. All experimental runs maintained mAP@50 above 80% and mAP@75 above 70%. Overall, Faster R-CNN and CenterNet were the best broccoli maturity detectors. Additionally, geometric transformations used for data augmentation yielded improvements, whereas colour distortions were counterproductive. RetinaNet in particular showed a noteworthy performance improvement when augmentations were used.
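The mAP@50 and mAP@75 metrics average precision over detections matched to ground truth at IoU thresholds of 0.5 and 0.75, respectively. A minimal sketch of the underlying matching step (PASCAL-VOC style, one match per ground-truth box) is given below; the helper names are illustrative, not from the paper's code:

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def match_detections(preds, gts, thr=0.5):
    """Match score-sorted predictions to ground truth at IoU >= thr.

    Returns (true_positives, false_positives); each ground-truth box
    may be matched at most once. Raising thr to 0.75 gives the
    stricter matching used for mAP@75.
    """
    matched, tp, fp = set(), 0, 0
    for box, _score in sorted(preds, key=lambda p: -p[1]):
        best, best_iou = None, thr
        for i, gt in enumerate(gts):
            if i not in matched and iou(box, gt) >= best_iou:
                best, best_iou = i, iou(box, gt)
        if best is None:
            fp += 1
        else:
            matched.add(best)
            tp += 1
    return tp, fp
```

From the per-threshold true/false positive counts, precision and recall are computed across confidence levels and averaged into the reported AP values.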
In conclusion, the created UAV broccoli dataset, combined with object-detection methods, automated the process of scouting and identifying the maturity level of broccoli in open-field conditions. The developed technique can drastically reduce labor and increase the efficiency of scouting operations, while ensuring crop quality through optimal harvest timing and avoiding potential parasitic infection and quality degradation issues.
Psiroukis, V.; Espejo-Garcia, B.; Chitos, A.; Dedousis, A.; Karantzalos, K.; Fountas, S. Assessment of Different Object Detectors for the Maturity Level Classification of Broccoli Crops Using UAV Imagery. Remote Sens. 2022, 14, 731.