ENHANCING TRUST IN MACHINE LEARNING INTERPRETABLE MODELS THROUGH EXPLAINABLE AI TECHNIQUES
Keywords:
Black Box Models, Explainable Artificial Intelligence (XAI), Model Transparency, Interpretability, Model-Agnostic Methods, Feature Importance, Ethical AI, Trustworthy AI

Abstract
Trust and transparency are of the utmost importance in light of the ever-growing influence of machine learning (ML) systems on autonomous systems, healthcare, and financial decisions. Classic black-box models are frequently accurate; however, they fail to explain the methodology behind their predictions.
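As a concrete illustration of the model-agnostic, feature-importance techniques the keywords refer to, the sketch below implements permutation feature importance from scratch on synthetic data. Everything here is an illustrative assumption (the toy data, the least-squares "black box", the function names), not a method from the paper: the idea is simply that shuffling a feature column and measuring the drop in accuracy reveals how much the model relies on that feature, without inspecting the model's internals.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data (illustrative assumption): the label depends strongly on
# feature 0, weakly on feature 1, and not at all on feature 2.
X = rng.normal(size=(500, 3))
y = (3.0 * X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

# Stand-in "black box": a least-squares linear model with an intercept.
Xb = np.column_stack([X, np.ones(len(X))])
w, *_ = np.linalg.lstsq(Xb, y.astype(float), rcond=None)

def predict(A):
    # Threshold the linear score at 0.5 to get a 0/1 prediction.
    return (np.column_stack([A, np.ones(len(A))]) @ w > 0.5).astype(int)

def permutation_importance(X, y, predict, n_repeats=10, seed=0):
    """Mean accuracy drop when one feature column is shuffled at a time.

    Model-agnostic: it only calls predict(), never looks inside the model.
    """
    rng = np.random.default_rng(seed)
    baseline = (predict(X) == y).mean()
    drops = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        for _ in range(n_repeats):
            Xp = X.copy()
            rng.shuffle(Xp[:, j])  # destroy the feature/label association
            drops[j] += baseline - (predict(Xp) == y).mean()
    return drops / n_repeats

importance = permutation_importance(X, y, predict)
# Feature 0 should dominate; feature 2 should score near zero.
```

Because the procedure treats the model as an opaque prediction function, the same loop works unchanged for a neural network or gradient-boosted ensemble, which is what makes it a model-agnostic transparency tool.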

This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.