Explainable Artificial Intelligence for Academic Performance Prediction. An Experimental Study on the Impact of Accuracy and Simplicity of Decision Trees on Causability and Fairness Perceptions

Abstract

The rising adoption of learning analytics and academic performance prediction technologies in higher education highlights the urgent need for transparency and explainability. This demand, rooted in ethical concerns and fairness considerations, converges with Explainable Artificial Intelligence (XAI) principles. Despite the recognized importance of transparency and fairness in learning analytics, empirical studies examining students' fairness perceptions, particularly within academic performance prediction, remain limited. We conducted a pre-registered factorial survey experiment involving 1,047 German students to investigate how decision tree features (simplicity and accuracy) influence perceived distributive and informational fairness, mediated by causability (i.e., the self-assessed understandability of a machine learning model's cause-effect linkages). Additionally, we examined the moderating role of institutional trust in these relationships. Our results indicate that decision tree simplicity positively affects fairness perceptions, mediated by causability. In contrast, prediction accuracy influences these perceptions neither directly nor indirectly. Although the hypothesized effects of interest are minor or absent, the results show that the medium-sized positive effect of causability on the distributive fairness assessment depends on institutional trust. These findings inform the design of transparent machine learning models in educational settings. We discuss their implications for fairness and transparency in implementing academic performance prediction systems.
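
To make the moderated mediation structure described above concrete, the sketch below estimates the two regression models such a design implies: a mediator model (causability regressed on the manipulated tree features) and an outcome model (distributive fairness regressed on the features, causability, and a causability × trust interaction). This is a minimal illustration, not the authors' analysis code: the variable names (`simplicity`, `accuracy`, `causability`, `inst_trust`, `dist_fairness`), the simulated data, and the assumed effect sizes are hypothetical.

```python
# Illustrative moderated mediation sketch with simulated data.
# All variable names and effect sizes are hypothetical assumptions.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 1047  # sample size reported in the abstract

# Simulate the 2x2 factorial manipulation (0 = low, 1 = high)
# and a standardized institutional trust score.
df = pd.DataFrame({
    "simplicity": rng.integers(0, 2, n),
    "accuracy": rng.integers(0, 2, n),
    "inst_trust": rng.normal(0, 1, n),
})
# Causability driven by simplicity (mirroring the reported finding).
df["causability"] = 0.5 * df["simplicity"] + rng.normal(0, 1, n)
# Distributive fairness: causability effect moderated by trust.
df["dist_fairness"] = (0.4 * df["causability"]
                       + 0.2 * df["causability"] * df["inst_trust"]
                       + rng.normal(0, 1, n))

# Mediator model: do the manipulated tree features shift causability?
m_model = smf.ols("causability ~ simplicity + accuracy", data=df).fit()

# Outcome model: the causability:inst_trust term tests moderation.
y_model = smf.ols(
    "dist_fairness ~ simplicity + accuracy + causability * inst_trust",
    data=df,
).fit()

print(m_model.summary())
print(y_model.summary())
```

An indirect (mediated) effect would show up as a simplicity coefficient in the mediator model combined with a causability coefficient in the outcome model; the interaction term captures the trust-dependent effect the abstract reports.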

Marco Lünich