Abstract
In spite of its simplicity, naive Bayesian learning has been widely used in many data mining applications. However, its unrealistic assumption that all features are equally important can degrade its performance. In this paper, we propose a new method that uses a Kullback-Leibler measure to compute feature weights for naive Bayesian learning. Its performance is compared with that of other state-of-the-art methods on a number of datasets.
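To make the idea concrete, the following is a minimal sketch of one way a Kullback-Leibler measure can yield per-feature weights: each feature is scored by the prior-weighted KL divergence between its class-conditional distribution P(x_i | c) and its marginal distribution P(x_i), so features whose distribution shifts strongly across classes receive larger weights. This is an illustrative formulation only; the paper's exact weighting scheme may differ.

```python
import numpy as np

def kl_feature_weights(X, y):
    """Illustrative KL-based feature weighting for categorical data.

    weight_i = sum_c P(c) * KL( P(x_i | c) || P(x_i) )

    (Assumed formulation for illustration; not necessarily the
    paper's exact definition.)
    """
    n_samples, n_features = X.shape
    classes, class_counts = np.unique(y, return_counts=True)
    priors = class_counts / n_samples
    weights = np.zeros(n_features)
    for i in range(n_features):
        values, counts = np.unique(X[:, i], return_counts=True)
        p_marginal = counts / n_samples  # P(x_i)
        kl = 0.0
        for c, prior in zip(classes, priors):
            mask = y == c
            # Laplace smoothing avoids log(0) for unseen values
            cond_counts = np.array([(X[mask, i] == v).sum() for v in values])
            p_cond = (cond_counts + 1) / (mask.sum() + len(values))
            kl += prior * np.sum(p_cond * np.log(p_cond / p_marginal))
        weights[i] = kl
    return weights

# Toy example: feature 0 perfectly separates the classes,
# feature 1 carries no class information.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 0, 1, 1])
w = kl_feature_weights(X, y)
```

In a weighted naive Bayes classifier, such weights typically enter the decision rule as exponents, e.g. P(c) * prod_i P(x_i | c)^{w_i}, so uninformative features are discounted.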
- Publication date: 2014-8