Innovative CNN approach for reliable chicken meat classification in the poultry industry
DOI: https://doi.org/10.31763/businta.v8i2.686
Keywords: Chicken, Deep Learning, CNN, Confusion Matrix
Abstract
In response to the growing need for automated object recognition and classification, this study develops a Convolutional Neural Network (CNN) model for a task with direct bearing on food safety and quality control in the poultry industry: distinguishing fresh from rotten chicken meat. The objective is a robust CNN-based classifier that discriminates reliably between these two classes. Our methodology begins with careful data collection: high-resolution images of chicken meat are acquired and curated into a dataset used to both train and test the CNN. The model is optimized with the Adam optimizer, and its effectiveness is evaluated with standard performance metrics: accuracy, precision, recall, and the F1-score. Experimental results show that the model achieves 94% accuracy, precision, and recall, demonstrating strong performance in classifying chicken meat as fresh or rotten. We conclude that the CNN model recognizes and classifies objects effectively: its high accuracy indicates reliable predictions, while its high precision and recall confirm its ability to separate the two classes.
Consequently, the CNN model provides a solid foundation for further work in object classification. Future research can extend the model beyond chicken meat to other classification domains, and refine or adapt it for specific tasks, promising improved performance across a broader range of object recognition problems.
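As a minimal sketch of how the reported metrics relate to the confusion matrix named in the keywords, the snippet below derives accuracy, precision, recall, and F1 from a 2x2 matrix. The class layout and the example counts are hypothetical (chosen only so that every metric comes out to the paper's reported 94%); they are not the study's actual test results.

```python
import numpy as np

def binary_metrics(cm):
    """Compute accuracy, precision, recall, and F1 from a 2x2 confusion
    matrix laid out as [[TN, FP], [FN, TP]], treating 'rotten' as the
    positive class (an assumed convention)."""
    tn, fp, fn, tp = cm.ravel()
    accuracy = (tp + tn) / cm.sum()
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return accuracy, precision, recall, f1

# Hypothetical counts for a balanced 100-image test set.
cm = np.array([[47, 3],
               [3, 47]])
acc, prec, rec, f1 = binary_metrics(cm)
print(f"accuracy={acc:.2f} precision={prec:.2f} recall={rec:.2f} f1={f1:.2f}")
# each metric evaluates to 0.94
```

With equal class sizes and symmetric errors, all four metrics coincide, which is consistent with the paper reporting the same 94% figure for accuracy, precision, and recall.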