A Two-Phased Fruits Image Detection Based on Colors as an Auto-Vision Object Annotation Technique
DOI: https://doi.org/10.65781/actbg841

Keywords: Object Detection, Computer Vision, Dual-Mode System, Automated Annotation, Quality Assessment, Real-Time Systems

Abstract
Automatic image detection systems are computer vision solutions that use various algorithmic approaches to identify and localize objects within digital images. Current systems rely primarily on either traditional computer vision techniques or deep learning methods, both of which present significant limitations in real-world applications: they often struggle to balance processing speed against detection accuracy, lack adaptability across different object types, and require substantial computational resources. Additionally, existing solutions typically operate in a single mode, making them inefficient for scenarios requiring different levels of detection complexity. This paper presents AutoVision, a novel semi-automated system for detecting and annotating objects that employs an enhanced detection approach based on multiple color space analysis with adaptive thresholding. The system implements a comprehensive two-phase methodology: preprocessing for image organization and quality assessment, followed by an advanced object detection algorithm utilizing combined threshold masks and morphological operations. In an experimental evaluation on a dataset of 5000 fruit images, the system achieved a processing throughput of 757.06 images per second with an overall detection success rate of 74.68%. The system also maintained high image quality standards, with 98.76% of processed images passing quality assessment, while sustaining real-time performance of 0.003 seconds per image. This research contributes to the field of automated fruit detection by presenting a scalable, efficient solution that balances processing speed with detection accuracy.
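The two-phase pipeline described above (a preprocessing quality gate, then color-space thresholding with morphological cleanup) can be sketched roughly as follows. This is an illustrative reconstruction, not the authors' implementation: the function names, the choice of HSV as the color space, the 3x3 structuring element, and the brightness thresholds are all assumptions made for the sketch.

```python
import numpy as np

def rgb_to_hsv(img):
    """Vectorised RGB -> HSV. img is float in [0, 1]; returns H in degrees
    [0, 360) and S, V in [0, 1]."""
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    maxc = img.max(axis=-1)
    minc = img.min(axis=-1)
    delta = maxc - minc
    safe = np.maximum(delta, 1e-12)
    h = np.zeros_like(maxc)
    rm = (delta > 0) & (maxc == r)
    gm = (delta > 0) & (maxc == g) & ~rm
    bm = (delta > 0) & (maxc == b) & ~rm & ~gm
    h[rm] = (60.0 * (g - b) / safe)[rm] % 360.0
    h[gm] = ((60.0 * (b - r) / safe) + 120.0)[gm]
    h[bm] = ((60.0 * (r - g) / safe) + 240.0)[bm]
    s = np.where(maxc > 0, delta / np.maximum(maxc, 1e-12), 0.0)
    return h, s, maxc

def quality_gate(img, lo=0.05, hi=0.98):
    """Phase 1 (illustrative): reject images whose mean brightness suggests
    severe under- or over-exposure. Thresholds are assumed, not from the paper."""
    return lo <= img.mean() <= hi

def _shift_combine(mask, op):
    """Apply a 3x3 neighbourhood min (erosion) or max (dilation) via padded shifts."""
    p = np.pad(mask, 1, constant_values=False)
    h, w = mask.shape
    out = None
    for dy in (0, 1, 2):
        for dx in (0, 1, 2):
            win = p[dy:dy + h, dx:dx + w]
            out = win.copy() if out is None else op(out, win)
    return out

def binary_open(mask):
    """Morphological opening (erosion then dilation) removes speckle noise
    from the threshold mask."""
    eroded = _shift_combine(mask, np.logical_and)
    return _shift_combine(eroded, np.logical_or)

def detect_fruit(img, hue_lo, hue_hi, s_min=0.3, v_min=0.2):
    """Phase 2 (illustrative): combine hue/saturation/value threshold masks,
    clean with opening, and return a bounding box (x_min, y_min, x_max, y_max)
    or None if nothing passes the mask."""
    h, s, v = rgb_to_hsv(img)
    if hue_lo <= hue_hi:
        hmask = (h >= hue_lo) & (h <= hue_hi)
    else:  # hue range wraps around 0 degrees (e.g. red fruit)
        hmask = (h >= hue_lo) | (h <= hue_hi)
    mask = binary_open(hmask & (s >= s_min) & (v >= v_min))
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return None
    return (int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max()))
```

On a synthetic image with a red patch on a green background, `detect_fruit(img, 340, 20)` returns the patch's bounding box; the wrap-around hue branch is what makes red (which straddles 0 degrees in HSV) detectable with a single mask, which is one plausible reading of the "combined threshold masks" the abstract mentions.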
License
Copyright (c) 2025 Ankita Chaurasia, Mubeen Ahmed Khan, Akshit Harsol, Kalyani Tiwari (Author)

This work is licensed under a Creative Commons Attribution 4.0 International License.