Comparison of decision trees used in data mining
DOI: https://doi.org/10.14527/pegegog.2019.039

Abstract
The purpose of this study is to compare, according to different criteria, the decision trees produced by data mining algorithms that have been used in various fields in recent years. Using 12 independent variables from the PISA 2015 student survey, the study identifies the similarities and differences among the decision trees obtained by different methods for classifying students as successful or unsuccessful in terms of science literacy. Data collected across Turkey from a total of 5,895 students in the 15-year-old age group were analyzed with Weka, an open-source, Java-based software package. The analysis showed that the most successful algorithms in terms of correct classification rate were, in order, Logistic Model Tree, Hoeffding Tree, J48, REPTree, and Random Tree. In addition, the variables that were effective in the classification differed across the decision trees produced by the different learning algorithms. Since the independent variables found to be effective in classifying the students differed from one algorithm's decision tree to another's, it is recommended that the findings of more than one algorithm be reported rather than those of a single algorithm.
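To make the comparison procedure concrete, the sketch below shows how such an experiment could be set up with Weka's Java API: it loads a prepared data set, runs the five tree learners named above with their default settings under 10-fold cross-validation, and prints the correct classification rate and kappa statistic for each. The file name pisa2015_science.arff, the use of the last attribute as the binary success/failure label, the fixed random seed, and the default learner parameters are illustrative assumptions; the study itself does not report these details.

import java.util.Random;
import weka.classifiers.Classifier;
import weka.classifiers.Evaluation;
import weka.classifiers.trees.HoeffdingTree;
import weka.classifiers.trees.J48;
import weka.classifiers.trees.LMT;
import weka.classifiers.trees.REPTree;
import weka.classifiers.trees.RandomTree;
import weka.core.Instances;
import weka.core.converters.ConverterUtils.DataSource;

public class DecisionTreeComparison {
    public static void main(String[] args) throws Exception {
        // Hypothetical input file: the prepared PISA 2015 data with the
        // 12 predictors and a binary success/failure class as the last attribute.
        Instances data = DataSource.read("pisa2015_science.arff");
        data.setClassIndex(data.numAttributes() - 1);

        // The five tree learners compared in the study, with Weka's default settings.
        Classifier[] learners = {
            new LMT(), new HoeffdingTree(), new J48(), new REPTree(), new RandomTree()
        };

        for (Classifier learner : learners) {
            // 10-fold cross-validation with a fixed seed so runs are reproducible.
            Evaluation eval = new Evaluation(data);
            eval.crossValidateModel(learner, data, 10, new Random(1));
            System.out.printf("%-13s accuracy = %6.2f%%  kappa = %.3f%n",
                    learner.getClass().getSimpleName(), eval.pctCorrect(), eval.kappa());
        }
    }
}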
License
Copyright (c) 2019 Gökhan Aksu, Nuri Doğan (Author)

This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.