Building a Classification Model Using Affinity Propagation

    1. [1] Georgia Southern University, United States

  • Published in: Hybrid Artificial Intelligent Systems. 14th International Conference, HAIS 2019: León, Spain, September 4–6, 2019. Proceedings / edited by Hilde Pérez García, Lidia Sánchez González, Manuel Castejón Limas, Héctor Quintián Pardo, Emilio Santiago Corchado Rodríguez, 2019, ISBN 978-3-030-29858-6, pp. 275-286
  • Language: English
  • Abstract
    • Conventional classification uses a training set and a test set. Classifiers such as Naïve Bayes, Artificial Neural Networks, and Support Vector Machines each train on the whole training set. This study explores whether a condensed form of the training set can yield comparable classification accuracy. The technique explored here uses a clustering algorithm to examine how the data can be compressed. For example, is it possible to represent 50 records as a single record? Can this single record train a classifier as effectively as all 50 records? This study explores how data compression can be achieved through clustering, which concepts capture the qualities of a compressed dataset, and how to check the information gain to ensure the integrity and quality of the compression algorithm. Specifically, it performs compression through Affinity Propagation on categorical data, examines entropy within cluster sets to measure integrity and quality, and tests the compressed dataset against the uncompressed dataset with a classifier based on Cosine Similarity.
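The pipeline the abstract describes (cluster the training set with Affinity Propagation, keep only the exemplars as a compressed training set, measure cluster purity with entropy, and classify test points by Cosine Similarity to the nearest exemplar) can be sketched roughly as follows. This is a minimal illustration with scikit-learn on numeric toy data, not the paper's implementation: the paper works with categorical data, and helper names such as `cluster_entropy` are assumptions for this sketch.

```python
import numpy as np
from sklearn.cluster import AffinityPropagation
from sklearn.datasets import make_blobs
from sklearn.metrics.pairwise import cosine_similarity

# Toy numeric data standing in for the paper's categorical records.
X, y = make_blobs(n_samples=200, centers=4, n_features=5, random_state=0)
X_train, y_train = X[:150], y[:150]
X_test, y_test = X[150:], y[150:]

# Affinity Propagation chooses exemplars from the data itself; the
# exemplars become the compressed training set.
ap = AffinityPropagation(random_state=0).fit(X_train)
exemplar_idx = ap.cluster_centers_indices_
exemplars = X_train[exemplar_idx]

# Label each exemplar with the majority class of its cluster members.
exemplar_labels = np.array([
    np.bincount(y_train[ap.labels_ == k]).argmax()
    for k in range(len(exemplar_idx))
])

def cluster_entropy(labels):
    """Shannon entropy (bits) of the class labels inside one cluster.
    0.0 means the cluster is pure, i.e. the exemplar loses no class
    information when it replaces its members."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return float(-np.sum(p * np.log2(p)))

entropies = [cluster_entropy(y_train[ap.labels_ == k])
             for k in range(len(exemplar_idx))]

# Classify each test record by its most cosine-similar exemplar.
sims = cosine_similarity(X_test, exemplars)
y_pred = exemplar_labels[np.argmax(sims, axis=1)]
acc = float(np.mean(y_pred == y_test))
```

The compression ratio here is `len(exemplars) / len(X_train)`, and the per-cluster entropies give the integrity check the abstract mentions: high entropy in a cluster warns that one exemplar is standing in for records of mixed classes.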

