An Extensive Analysis of Various Machine Learning Algorithms for Performance Comparison
Abstract
Machine learning (ML) is becoming an essential tool for many industries, supporting tasks such as automated decision-making and predictive analytics. To achieve the best performance, it is essential to select the most suitable algorithm for a given task from the wide range available. This research thoroughly examines and compares several ML methods, including neural networks, decision trees, logistic regression, random forests, k-nearest neighbors (KNN), and support vector machines (SVM). The algorithms are evaluated on both synthetic and real-world datasets using several criteria, including accuracy, precision, recall, F1 score, and computational efficiency. Taking factors such as dataset size, feature complexity, and noise into account, the study demonstrates the strengths and weaknesses of each approach across different scenarios. With a focus on accuracy, interpretability, and computational demands, the results are intended to help practitioners choose the method best suited to their requirements.
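To illustrate the kind of side-by-side evaluation described above, the sketch below trains the listed classifiers on a synthetic dataset and reports accuracy, precision, recall, F1 score, and a rough training-plus-prediction time. It is a minimal sketch assuming a scikit-learn workflow; the library choice, dataset parameters, and model settings are illustrative assumptions and are not taken from the paper itself.

# Minimal sketch (assumed scikit-learn workflow, not the paper's own code):
# compare several classifiers on synthetic data using the metrics named in the abstract.
import time

from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

# Synthetic two-class data with a small amount of label noise (flip_y),
# echoing the noise factor considered in the study. All parameters are illustrative.
X, y = make_classification(n_samples=2000, n_features=20, n_informative=10,
                           flip_y=0.05, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3,
                                                    random_state=42)

models = {
    "Logistic Regression": LogisticRegression(max_iter=1000),
    "Decision Tree": DecisionTreeClassifier(random_state=42),
    "Random Forest": RandomForestClassifier(random_state=42),
    "KNN": KNeighborsClassifier(),
    "SVM": SVC(),
    "Neural Network": MLPClassifier(max_iter=1000, random_state=42),
}

for name, model in models.items():
    start = time.perf_counter()          # crude proxy for computational cost
    model.fit(X_train, y_train)
    y_pred = model.predict(X_test)
    elapsed = time.perf_counter() - start
    print(f"{name}: acc={accuracy_score(y_test, y_pred):.3f} "
          f"prec={precision_score(y_test, y_pred):.3f} "
          f"rec={recall_score(y_test, y_pred):.3f} "
          f"f1={f1_score(y_test, y_pred):.3f} "
          f"time={elapsed:.2f}s")

In practice the same loop would also be run on the real-world datasets and repeated with cross-validation, so that the reported scores reflect more than a single train/test split.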