A Comparative Study of Iterative Feature Selection and Boosting for Software Quality Estimation  
Author: Kehan Gao


Co-Author(s): Taghi M. Khoshgoftaar; Randall Wald


Abstract Software quality prediction models are effective tools for helping achieve high-quality software products. These models utilize software metrics and defect data to build classification models that identify potentially problematic program modules, thereby enabling effective allocation of project resources. The predictive accuracy of these classification models can be affected by the quality of the input data. Two main problems can degrade accuracy: high dimensionality (too many independent attributes in a dataset) and class imbalance (many more members of one class than of the other in a binary classification problem). In this study, we present an iterative feature selection approach that addresses both problems. It consists of two basic steps: (1) use a sampling technique to balance the dataset, and (2) apply a filter-based feature ranking technique to the balanced dataset, ranking all features according to their predictive capability. We repeat these two steps k times (k = 10 in this study) and finally aggregate the results. Following feature selection, models are built either with a plain learner or with a boosting algorithm that incorporates sampling. The main purpose of this paper is to investigate the impact of various balancing, filter, and learning techniques in the feature selection and model-building process on software quality prediction. We use two sampling techniques, random undersampling (RUS) and the synthetic minority oversampling technique (SMOTE), and two ensemble boosting approaches, RUSBoost and SMOTEBoost (in which RUS and SMOTE, respectively, are integrated into a boosting technique), along with six feature ranking techniques. We apply the iterative feature selection techniques to two groups of software datasets and use two simple learners to build classification models.
The experimental results demonstrate that RUS yields better prediction than SMOTE, and that boosting improves classification performance over the plain learners. In addition, some feature ranking techniques, such as chi-squared and information gain, exhibit better and more stable classification behavior than the other filters.
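The iterative feature selection procedure summarized in the abstract can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the function names are my own, random undersampling stands in for the two sampling techniques, a simple correlation-based score stands in for the six filter rankers studied in the paper, and the k rankings are aggregated by mean rank.

```python
import numpy as np

def random_undersample(X, y, rng):
    # Step 1: balance a binary dataset by randomly discarding
    # majority-class instances until both classes are the same size.
    idx_pos = np.flatnonzero(y == 1)
    idx_neg = np.flatnonzero(y == 0)
    minority, majority = sorted([idx_pos, idx_neg], key=len)
    keep = rng.choice(majority, size=len(minority), replace=False)
    idx = np.concatenate([minority, keep])
    return X[idx], y[idx]

def filter_ranks(X, y):
    # Step 2: score each feature with a filter; here, absolute
    # correlation with the class label is a stand-in for the
    # paper's six filter-based rankers (e.g., chi-squared).
    scores = np.abs([np.corrcoef(X[:, j], y)[0, 1]
                     for j in range(X.shape[1])])
    order = np.argsort(-scores)           # best feature first
    ranks = np.empty_like(order)
    ranks[order] = np.arange(len(order))  # rank 0 = most predictive
    return ranks

def iterative_feature_selection(X, y, k=10, n_select=5, seed=0):
    # Repeat the balance-then-rank steps k times (k = 10 in the paper)
    # and aggregate the k rankings by mean rank per feature.
    rng = np.random.default_rng(seed)
    all_ranks = [filter_ranks(*random_undersample(X, y, rng))
                 for _ in range(k)]
    mean_rank = np.mean(all_ranks, axis=0)
    return np.argsort(mean_rank)[:n_select]  # selected feature indices
```

The selected feature subset would then be passed to a plain learner or to a sampling-aware booster such as RUSBoost or SMOTEBoost for model building.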


Article #: 19104
Proceedings of the 19th ISSAT International Conference on Reliability and Quality in Design
August 5-7, 2013 - Honolulu, Hawaii, U.S.A.