
Dialnet


Supported Decision-Making by Explainable Predictions of Ship Trajectories

    1. [1] University of Stuttgart, Stuttgart, Germany
    2. [2] Karlsruhe Institute of Technology, Karlsruhe, Germany
    3. [3] Fraunhofer Institute for Manufacturing Engineering and Automation, Stuttgart, Germany
    4. [4] Fraunhofer IOSB (Karlsruhe, Germany)
  • Published in: 15th International Conference on Soft Computing Models in Industrial and Environmental Applications (SOCO 2020): Burgos, Spain; September 2020 / coordinated by Álvaro Herrero Cosío, Carlos Cambra Baseca, Daniel Urda Muñoz, Javier Sedano Franco, Héctor Quintián Pardo, Emilio Santiago Corchado Rodríguez, 2021, ISBN 978-3-030-57802-2, pp. 44-54
  • Language: English
  • Full text not available
  • Abstract
    • Machine Learning and Deep Learning models make accurate predictions on the tasks they are trained for; for instance, a model can classify vessel types based on their trajectories and other features. Such predictions can support human experts as they gather information on ships, e.g., to control illegal fishing. Beyond predicting a certain ship type, there is a need to explain the decision-making behind the classification, for example, which features contributed most to the predicted ship type. This paper introduces existing explanation approaches to the task of ship classification. The underlying model is based on a Residual Neural Network and was trained on an AIS data set. Further, we illustrate the explainability approaches by means of an explanatory case study and conduct a first experiment with a human expert.
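The kind of feature attribution the abstract describes can be sketched with a simple perturbation (occlusion) approach: a feature's importance is measured by how much the model's score drops when that feature is replaced by a neutral baseline. The classifier below is a hypothetical linear stand-in for the paper's Residual Neural Network, and the feature set and weights are assumptions made purely for illustration:

```python
# Sketch of perturbation-based (occlusion) feature attribution.
# Importance of feature i = model score on the full input minus the
# score when feature i is replaced by a baseline value.
def occlusion_attribution(predict, features, baseline=0.0):
    base_score = predict(features)
    attributions = []
    for i in range(len(features)):
        occluded = list(features)
        occluded[i] = baseline            # knock out one feature at a time
        attributions.append(base_score - predict(occluded))
    return attributions

# Hypothetical stand-in classifier: a linear score over three assumed
# trajectory features (e.g., mean speed, course variance, mean draught).
WEIGHTS = [0.8, 0.1, 0.1]
def predict(features):
    return sum(w * v for w, v in zip(WEIGHTS, features))

scores = occlusion_attribution(predict, [1.0, 1.0, 1.0])
# The first feature dominates the attribution, mirroring its weight.
```

This is the simplest member of the family of explanation methods the paper surveys; gradient-based attributions follow the same idea but use the model's derivatives instead of explicit occlusion.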

