
Assignment: Object detection model comparison for detecting sugar beets applied in a mechanical weeder


Problem statement

The world population is growing steadily and has now reached around 7.9 billion people. Feeding this growing population will require a substantial increase in global food production, estimated at 70 percent by 2050. Agriculture therefore faces a major challenge, especially in combination with labor shortages and consumers demanding more sustainable food. Taking on the food and labor challenges in the agricultural sector requires significant interventions in arable farming and greenhouse horticulture. One such intervention is reducing the use of chemical herbicides, which is driving a growing demand for mechanical weeders. These machines hoe between rows of plants to remove weeds. However, compared with chemical herbicides, mechanical weeders have one key limitation: herbicides can also be applied within a row of plants instead of only between rows. Mechanical weeder producers are therefore looking into innovative solutions for hoeing within the row without destroying the crop. In this project, we focus on detecting sugar beets in order to determine the spots that should not be weeded. We have already developed an object detection algorithm that can detect these sugar beets, but we are not sure whether it is the most suitable object detection model. We therefore want to compare different object detection models implemented in different Python libraries (PyTorch, TensorFlow, TFLite, TensorRT/TorchScript/Lightning, etc.) with respect to their FPS, mAP, memory requirements, and GFLOPS.
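To make the mAP criterion concrete: detection accuracy metrics such as mAP are built on the intersection-over-union (IoU) between predicted and ground-truth boxes. The sketch below is a minimal, framework-independent illustration of IoU for axis-aligned boxes; the function name `iou` and the `(x1, y1, x2, y2)` corner convention are our own assumptions, not something prescribed by the assignment.

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes.

    Boxes are (x1, y1, x2, y2) with x1 < x2 and y1 < y2.
    Returns a value in [0, 1]; 1 means identical boxes.
    """
    # Corners of the intersection rectangle (may be empty).
    xa, ya = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    xb, yb = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, xb - xa) * max(0.0, yb - ya)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0
```

A detection typically counts as a true positive when its IoU with a ground-truth box exceeds a threshold (COCO-style mAP averages over thresholds from 0.5 to 0.95).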


Investigate which object detection models perform well on common datasets such as COCO. Next, implement or clone some of the well-performing models in different Python libraries and train them on the existing, annotated dataset of sugar beets. Finally, add a table comparing these models to the report, together with a discussion of the conclusions.
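For the runtime side of the comparison, each model's FPS can be measured the same way regardless of which library it comes from by timing an inference callable. The sketch below is one possible harness, assuming only that each framework's model can be wrapped as a Python function taking a single frame; the names `benchmark_fps`, `warmup`, and `runs` are our own illustration.

```python
import statistics
import time

def benchmark_fps(infer, frames, warmup=5, runs=50):
    """Time an inference callable and report mean latency and FPS.

    `infer` is any function taking one frame (e.g. a wrapped PyTorch,
    TFLite, or TensorRT model); `frames` is a list of preloaded inputs,
    cycled through so I/O is excluded from the measurement.
    """
    frames = list(frames)
    # Warm-up iterations let caches, JIT compilation, etc. settle first.
    for i in range(warmup):
        infer(frames[i % len(frames)])
    latencies = []
    for i in range(runs):
        start = time.perf_counter()
        infer(frames[i % len(frames)])
        latencies.append(time.perf_counter() - start)
    mean_s = statistics.mean(latencies)
    return {"mean_latency_ms": mean_s * 1e3, "fps": 1.0 / mean_s}
```

Running the same harness over every model keeps the FPS column of the final comparison table consistent; memory and GFLOPS would need framework-specific tooling on top of this.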


20% theory, 10% data collection, 50% implementation, 20% report writing


Le Viet Duc, v.d.le@utwente.nl

About Track32:

At Track32 we produce innovative computer vision and AI software. Making the technology easily accessible, so that it becomes part of your organization’s natural intelligence.

Track32 provides state-of-the-art computer vision and AI algorithms, and we integrate them into existing or new hardware and software systems. We are experts in processing all types of visual and non-visual data, using deep learning and other methods. Track32 excels at analyzing the user’s requirements and matching them with technically feasible and cost-effective solutions. Using our computer vision and AI solutions leads to huge cost savings and massively increased operational efficiency for our customers.

Computer vision and AI are generic technologies that can be used in any application domain. We serve a wide spectrum of customers in the agricultural supply chain, but also players in other markets such as post-harvest, material handling, spatial planning, healthcare, and the life sciences. We serve commercial companies as well as research institutes and government.