The Effect of the Dataset Size on the Accuracy of Software Defect Prediction Models: An Empirical Study

    1. [1] King Fahd University of Petroleum and Minerals, Saudi Arabia

    2. [2] University of Ha'il, Saudi Arabia
  • Published in: Inteligencia artificial: Revista Iberoamericana de Inteligencia Artificial, ISSN-e 1988-3064, ISSN 1137-3601, Vol. 24, No. 68, 2021, pp. 72-88
  • Language: English
  • Abstract
    • The ongoing development of computer systems requires massive software projects. Running the components of these huge projects for testing purposes can be costly; therefore, parameter estimation can be used instead. Software defect prediction models are crucial for software quality assurance. This study investigates the impact of dataset size and feature selection algorithms on software defect prediction models. We use two approaches to build the models: a statistical approach and a machine learning approach based on support vector machines (SVMs). The fault prediction models were built on four datasets of different sizes, and four feature selection algorithms were applied. We found that applying the SVM defect prediction model to datasets with a reduced set of metrics as features may enhance the accuracy of the fault prediction model; it also directs the testing effort toward the most influential set of metrics. We also found that the running time of the SVM fault prediction model does not scale consistently with dataset size, so having fewer metrics does not guarantee a shorter execution time. The experiments show that dataset size has a direct influence on the SVM fault prediction model; however, the reduced datasets performed the same as, or slightly worse than, the original datasets.
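To make the abstract's setup concrete, below is a minimal sketch of an SVM defect prediction model combined with feature selection, written with scikit-learn. It is not the paper's actual pipeline: the synthetic dataset, the number of metrics, the univariate ANOVA F-test selector, and the value of k are all illustrative assumptions, since the study's four datasets, four selection algorithms, and statistical approach are not detailed in the abstract.

```python
# Illustrative sketch only: SVM defect prediction with feature selection.
# The data, metric count, and selector below are assumptions, not the
# study's actual datasets or its four feature selection algorithms.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Synthetic stand-in: 500 modules, 20 static code metrics, binary defect label.
X = rng.normal(size=(500, 20))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=500) > 0).astype(int)

# Pipeline: scale the metrics, keep the k most predictive ones, train an SVM.
reduced_model = make_pipeline(
    StandardScaler(),
    SelectKBest(score_func=f_classif, k=8),  # reduced metric set (k assumed)
    SVC(kernel="rbf"),
)
full_model = make_pipeline(StandardScaler(), SVC(kernel="rbf"))

# Compare accuracy with and without feature reduction, mirroring the
# abstract's finding that reduced metric sets may match or slightly
# trail the full set.
scores_reduced = cross_val_score(reduced_model, X, y, cv=5, scoring="accuracy")
scores_full = cross_val_score(full_model, X, y, cv=5, scoring="accuracy")

print(f"reduced features: {scores_reduced.mean():.3f}")
print(f"all features:     {scores_full.mean():.3f}")
```

In this sketch the feature selector lives inside the pipeline, so it is refit on each cross-validation fold; that avoids leaking information from the held-out fold into the metric ranking, which matters when comparing reduced and full feature sets fairly.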

