
Dialnet


Computer vision classification detection of chicken parts based on optimized Swin-Transformer

  • Xianhui Peng [1]; Chenchen Xu [1]; Peng Zhang [1]; Dandan Fu [1]; Yan Chen [1]; Zhigang Hu [1]
    1. [1] Wuhan Polytechnic University, China

  • Published in: CyTA: Journal of Food, ISSN 1947-6337, ISSN-e 1947-6345, Vol. 22, No. 1, 2024
  • Language: English
  • Abstract
    • To achieve real-time classification and detection of chicken parts, this study introduces an optimized Swin-Transformer method for classifying and detecting multiple chicken parts. The method first leverages the Transformer’s self-attention structure to capture more comprehensive high-level visual semantic information from chicken-part images. Image enhancement was applied during preprocessing to strengthen the feature information of the images, and transfer learning was used to train and fine-tune the Swin-Transformer model on the enhanced chicken-parts dataset for classification and detection of chicken parts. The model was then compared with four models commonly used in object detection tasks: YOLOV3-Darknet53, YOLOV3-MobileNetv3, SSD-MobileNetv3, and SSD-VGG16. The results indicate that the Swin-Transformer model outperforms these models, with a higher mAP by 1.62%, 2.13%, 5.26%, and 4.48%, and a reduction in detection time of 16.18 ms, 5.08 ms, 9.38 ms, and 23.48 ms, respectively. The proposed method meets production-line requirements while exhibiting superior performance and greater robustness compared with existing conventional methods.

