
Volume 3, Issue 11, November 2014 Edition



International Journal of Scientific & Technology Research

Website: http://www.ijstr.org

ISSN 2277-8616



A Review Of Fast Clustering-Based Feature Subset Selection Algorithm


AUTHOR(S)

Pawan Gupta, Susheel Jain, Anurag Jain


KEYWORDS

Keywords: Feature subset selection, filter method, feature clustering.


ABSTRACT

Abstract: In this paper we survey a number of reference papers and compare different feature subset selection algorithms on the basis of their performance and their choice of data sets. Efficiency concerns the time required to select a subset of features, while effectiveness concerns the quality of the selected subset. We analyze feature subset selection algorithms published between 1997 and 2013 and summarize their results.
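
To make the filter criterion in the abstract concrete, the sketch below ranks discrete features by their symmetric uncertainty with the class label, the entropy-based measure used by the fast correlation-based filter of [20] and by the FAST algorithm reviewed here [19]. This is a minimal Python illustration under our own assumptions, not code from any of the surveyed papers; the function names and the 0.1 threshold are hypothetical choices.

import math
from collections import Counter

def entropy(values):
    """Shannon entropy H(X) of a sequence of discrete values."""
    n = len(values)
    return -sum((c / n) * math.log2(c / n) for c in Counter(values).values())

def conditional_entropy(x, y):
    """H(X | Y): entropy of x within each group defined by y, weighted by group size."""
    n = len(y)
    groups = {}
    for xi, yi in zip(x, y):
        groups.setdefault(yi, []).append(xi)
    return sum((len(g) / n) * entropy(g) for g in groups.values())

def symmetric_uncertainty(x, y):
    """SU(X, Y) = 2 * IG(X | Y) / (H(X) + H(Y)), in [0, 1]; 1 means x and y fully determine each other."""
    hx, hy = entropy(x), entropy(y)
    if hx + hy == 0:
        return 0.0
    return 2.0 * (hx - conditional_entropy(x, y)) / (hx + hy)

def filter_select(features, labels, threshold=0.1):
    """Keep indices of feature columns whose SU with the class exceeds a threshold.
    The threshold is a hypothetical tuning parameter, not a value from the paper."""
    return [i for i, col in enumerate(features)
            if symmetric_uncertainty(col, labels) > threshold]

# Toy usage: feature 0 tracks the class, feature 1 is mostly noise.
features = [[0, 0, 1, 1, 0, 1], [1, 0, 1, 0, 0, 1]]
labels = [0, 0, 1, 1, 0, 1]
print(filter_select(features, labels))  # -> [0]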


REFERENCES

[1] Guyon I. and Elisseeff A., “An introduction to variable and feature selection,” The Journal of Machine Learning Research, vol. 3, pp. 1157-1182, 2003.

[2] Yu L. and Liu H., “Efficient feature selection via analysis of relevance and redundancy,” The Journal of Machine Learning Research, vol. 5, pp. 1205-1224, 2004.

[3] Peng H. C., Long F. H., and Ding C., “Feature selection based on mutual information criteria of max-dependency, max-relevance, and min-redundancy,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 27, no. 8, pp. 1226-1238, 2005.

[4] Banks A., Vincent J., and Anyakoha C., “A review of particle swarm optimization. Part i: background and development,” Natural Computing, vol. 6, no. 4, pp. 467-484, 2007.

[5] Azevedo G. L. F. B. G., Cavalcanti G. D. C., and Filho E. C. B. C., “An approach to feature selection for keystroke dynamics systems based on PSO and feature weighting,” in Proc. IEEE Congress on Evolutionary Computation (CEC’07), pp. 3577-3584, 2007.

[6] Wang X., Yang J., Teng X., Xia W., and Jensen R., “Feature selection based on rough sets and particle swarm optimization,” Pattern Recognition Letters, vol. 28, no. 4, pp. 459-471, 2007.

[7] Chakraborty B., “Feature subset selection by particle swarm optimization with fuzzy fitness function,” in Proc. 3rd International Conference on Intelligent System and Knowledge Engineering (ISKE’08), vol. 1, pp. 1038-1042, 2008.

[8] Chuang L., Chang H., Tu C., and Yang C., “Improved binary PSO for feature selection using gene expression data,” Computational Biology and Chemistry, vol. 32, no. 1, pp. 29-38, 2008.

[9] Li A. and Wang B., “Feature subset selection based on binary particle swarm optimization and overlap information entropy,” in International Conference on Computational Intelligence and Software Engineering (CiSE’09), pp. 1-4, 2009.

[10] Hall M., Frank E., Holmes G., Pfahringer B., Reutemann P., and Witten I. H., “The WEKA data mining software: An update,” SIGKDD Explorations, vol. 11, no. 1, pp. 10-18, 2009.

[11] Clerc M., Particle swarm optimization, Wiley-ISTE, 2010.

[12] Frank A. and Asuncion A., “UCI machine learning repository,” 2010.

[13] Wang J., Zhao Y., and Liu P., “Effective feature selection with particle swarm optimization based one-dimension searching,” in 3rd International Symposium on Systems and Control in Aeronautics and Astronautics (ISSCAA), pp. 702-705, 2010.

[14] Dejaeger K., Verbeke W., Martens D., and Baesens B., “Data Mining Techniques for Software Effort Estimation: A Comparative Study,” IEEE Transactions on Software Engineering, vol. 38, no. 2, pp. 375-397, 2012.

[15] Javed K., Babri H. A., and Saeed M., “Feature Selection Based on Class-Dependent Densities for High-Dimensional Binary Data,” IEEE Transactions on Knowledge and Data Engineering, vol. 24, no. 3, pp. 465-477, 2012.

[16] Kohavi R. and John G. H., “A survey on particle swarm optimization in feature selection,” in Global Trends in Information Systems and Software Applications, Springer, pp. 192-201, 2012.

[17] Xue B., Zhang M., and Browne W. N., “Multi-objective particle swarm optimization (PSO) for feature selection,” in Proc. 14th Annual Conference on Genetic and Evolutionary Computation (GECCO’12), pp. 81-88, 2012.

[18] Cervante L., Xue B., Zhang M., and Shang L., “Binary particle swarm optimization for feature selection: A filter based approach,” in Proc. IEEE Congress on Evolutionary Computation (CEC’12), pp. 1-8, 2012.

[19] Song Q., Ni J., and Wang G., “A Fast Clustering-Based Feature Subset Selection Algorithm for High-Dimensional Data,” IEEE Transactions on Knowledge and Data Engineering, vol. 25, no. 1, pp. 1-14, January 2013.

[20] Yu L. and Liu H., “Feature Selection for High-Dimensional Data: A Fast Correlation-Based Filter Solution,” in Proc. 20th International Conference on Machine Learning (ICML’03), pp. 856-863, 2003.

[21] Krier C., Francois D., Rossi F., and Verleysen M., “Feature Clustering and Mutual Information for the Selection of Variables in Spectral Data,” in Proc. European Symposium on Artificial Neural Networks: Advances in Computational Intelligence and Learning, pp. 157-162, 2007.

[22] Van Dijck G. and Van Hulle M. M., “Speeding Up the Wrapper Feature Subset Selection in Regression by Mutual Information Relevance and Redundancy Analysis,” in Proc. International Conference on Artificial Neural Networks, 2006.

[23] Dhillon I. S., Mallela S., and Kumar R., “A Divisive Information Theoretic Feature Clustering Algorithm for Text Classification,” The Journal of Machine Learning Research, vol. 3, pp. 1265-1287, 2003.

[24] Butterworth R., Piatetsky-Shapiro G., and Simovici D. A., “On Feature Selection through Clustering,” in Proc. 5th IEEE International Conference on Data Mining, pp. 581-584, 2005.

[25] Kohavi R. and John G. H., “Wrappers for Feature Subset Selection,” Artificial Intelligence, vol. 97, no. 1-2, pp. 273-324, 1997.

[26] Guyon I. and Elisseeff A., “An Introduction to Variable and Feature Selection,” The Journal of Machine Learning Research, vol. 3, pp. 1157-1182, 2003.

[27] Yu L. and Liu H., “Feature Selection for High-Dimensional Data: A Fast Correlation-Based Filter Solution,” in Proc. 20th International Conference on Machine Learning (ICML’03), pp. 856-863, 2003.

[28] Li F. and Yang Y., “Using recursive classification to discover predictive features,” in Proc. 2005 ACM Symposium on Applied Computing, pp. 1054-1058, 2005.

[29] Saeys Y., Inza I., and Larranaga P., “A Review of Feature Selection Techniques in Bioinformatics,” Bioinformatics, vol. 23, no. 19, pp. 2507-2517, 2007.

[30] Robnik-Sikonja M. and Kononenko I., “Theoretical and Empirical Analysis of ReliefF and RReliefF,” Machine Learning, vol. 53, no. 1-2, pp. 23-69, 2003.

[31] Bramer M., Principles of Data Mining, Springer, 2007.

[32] Dash M., Liu H., and Motoda H., “Consistency Based Feature Selection,” in Proc. 4th Pacific-Asia Conference on Knowledge Discovery and Data Mining: Current Issues and New Applications, Springer, pp. 98-109, 2000.

[33] Dash M. and Liu H., “Consistency-based search in feature selection,” Artificial Intelligence, vol. 151, no. 1-2, pp. 155-176, 2003.

[34] Yu L. and Liu H., “Efficient Feature Selection via Analysis of Relevance and Redundancy,” The Journal of Machine Learning Research, vol. 5, pp. 1205-1224, 2004.

[35] Blum A. L. and Langley P., “Selection of Relevant Features and Examples in Machine Learning,” Artificial Intelligence, vol. 97, pp. 245-271, 1997.