Applying Image Processing techniques for Automated Dairy Feeding Robots

Authors

  • Amornthep Sonsilphong, Faculty of Engineering, Rajamangala University of Technology Isan, Khon Kaen Campus
  • Chutchai Kaewta, Faculty of Computer Science, Ubon Ratchathani Rajabhat University
  • Sarayut Gonwirat, Faculty of Engineering and Industrial Technology, Kalasin University
  • Ronnachai Sangmuenmao, Faculty of Engineering and Industrial Technology, Kalasin University

DOI:

https://doi.org/10.14456/jeit.2023.18

Keywords:

Smart Farm, Image Processing, Robot and Automation

Abstract

At present, farmers face rising production costs, agricultural labor shortages, limited access to technology, and products that often fall short of standard quality. This research studied and developed a smart farm system for the automatic feeding of dairy cows, combining information technology with intelligent electronics to build an in-barn automatic feeding system. AI-based image processing was applied to analyze the current status of the cows and to position the robot within the cow house. The main objectives are to improve feeding efficiency and to minimize the energy consumption of the smart farm system for the automatic feeding of dairy cows. Experimental evaluation of the image processing models for analyzing the current cattle state showed that MobileNet was the most suitable technique for this application: it had the smallest model size at 14 megabytes, the fastest response time at 0.001 seconds, and an accuracy of 97.22 percent.
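The abstract reports that MobileNet was selected as the image classification model for analyzing the current cattle state. The sketch below illustrates how such a MobileNet-based classifier could be set up for inference, assuming a TensorFlow/Keras environment; the class labels, input resolution, and image file name are illustrative assumptions and are not taken from the paper.

    import numpy as np
    import tensorflow as tf

    IMG_SIZE = (224, 224)                          # assumed MobileNet input resolution
    CLASS_NAMES = ["eating", "standing", "lying"]  # hypothetical cow-status labels

    # Transfer-learning setup: ImageNet-pretrained MobileNet backbone plus a
    # small softmax head sized to the (assumed) cow-status classes.
    backbone = tf.keras.applications.MobileNet(
        input_shape=IMG_SIZE + (3,),
        include_top=False,
        weights="imagenet",
        pooling="avg",
    )
    model = tf.keras.Sequential([
        backbone,
        tf.keras.layers.Dense(len(CLASS_NAMES), activation="softmax"),
    ])

    def classify(image_path):
        # Load one camera frame, preprocess it for MobileNet, and return the
        # most probable cow-status label.
        img = tf.keras.utils.load_img(image_path, target_size=IMG_SIZE)
        x = tf.keras.utils.img_to_array(img)
        x = tf.keras.applications.mobilenet.preprocess_input(x[np.newaxis, ...])
        probs = model.predict(x, verbose=0)[0]
        return CLASS_NAMES[int(np.argmax(probs))]

    print(classify("barn_camera_frame.jpg"))       # hypothetical input image

In a real deployment the dense head would first be trained on labelled barn images; the sketch only shows the model structure and the inference path, which is where MobileNet's small size and fast response time matter.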

Published

2023-08-30

How to Cite

[1] A. Sonsilphong, C. Kaewta, S. Gonwirat, and R. Sangmuenmao, “Applying Image Processing techniques for Automated Dairy Feeding Robots”, JEIT, vol. 1, no. 4, pp. 33–43, Aug. 2023.