Automatic detection and counting of fisheries using fish images

Authors

M. M. Tall, I. Ngom, O. Sadio, A. Coulibaly, I. Diagne, M. Ndiaye

DOI:

https://doi.org/10.31763/businta.v7i2.655

Keywords:

Fish, Fishbase, YOLO v8, Bounding Box, Detection

Abstract

In Senegal, stock recovery and fish classification rely on manual data collection, and the fish caught by the fishery are often not declared. Moreover, data collection suffers from a lack of tools for monitoring and counting fish landed at fishing docks. Researchers have studied the fishery in Senegal, but data collection remains almost non-existent: there is no local fisheries database and no automatic detection and counting algorithm. In this paper, a semantic segmentation algorithm based on intelligent systems is proposed for collecting fishery catches and building a local database. The data were collected by photographing fish at the Soumbédioune fishing wharf in Senegal and supplemented with the Fishbase database. Applying the algorithm to these data produced a segmented dataset with masks, which constitutes our local database. This database is then used with YOLO v8, which detects fish with bounding boxes in order to train the model. The results obtained are very promising for the proposed automatic fish detection and counting model; for example, the recall-confidence scores reflect the bounding-box performance, with values ranging from 0.01 to 0.75, confirming the performance of this model with bounding boxes.
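As a minimal sketch of the detection-and-counting step described above, the Python snippet below uses the Ultralytics YOLOv8 API to run inference on a landing-dock photo and count the predicted bounding boxes. The weight file fish_yolov8.pt, the image path, and the data file fish_local.yaml are illustrative assumptions for this sketch, not artifacts released with the paper.

    from ultralytics import YOLO

    # Hypothetical checkpoint fine-tuned on the local Soumbédioune/Fishbase dataset.
    model = YOLO("fish_yolov8.pt")

    # Optional fine-tuning on the local dataset, described by a YOLO-format data file
    # (hypothetical "fish_local.yaml" listing image/label paths and class names):
    # model = YOLO("yolov8n.pt")
    # model.train(data="fish_local.yaml", epochs=100, imgsz=640)

    # Run inference on one image (path is illustrative); conf filters low-confidence boxes.
    results = model.predict("soumbedioune_catch.jpg", conf=0.25)

    # Each result holds the predicted bounding boxes; counting them gives the fish count.
    boxes = results[0].boxes
    print(f"Detected {len(boxes)} fish")
    for box in boxes:
        cls_id = int(box.cls)                   # predicted class index
        score = float(box.conf)                 # confidence score
        x1, y1, x2, y2 = box.xyxy[0].tolist()   # bounding-box corners in pixels
        print(results[0].names[cls_id], round(score, 2), (x1, y1, x2, y2))

Counting is done simply by taking the number of boxes returned per image; per-species counts could be obtained by grouping the loop above by class index.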

Published

2023-11-29

How to Cite

Tall, M. M., Ngom, I., Sadio, O., Coulibaly, A., Diagne, I., & Ndiaye, M. (2023). Automatic detection and counting of fisheries using fish images. Bulletin of Social Informatics Theory and Application, 7(2), 150–162. https://doi.org/10.31763/businta.v7i2.655
