Explainable Learning Analytics: Assessing the stability of student success prediction models by means of explainable AI
Decision Support Systems ( IF 7.5 ) Pub Date : 2024-04-26 , DOI: 10.1016/j.dss.2024.114229
Elena Tiukhova , Pavani Vemuri , Nidia López Flores , Anna Sigridur Islind , María Óskarsdóttir , Stephan Poelmans , Bart Baesens , Monique Snoeck

Beyond managing student dropout, higher education stakeholders need decision support to consistently influence the student learning process and keep students motivated, engaged, and successful. At the course level, combining predictive analytics with self-regulation theory can help instructors determine the best study advice and allow learners to better self-regulate and decide how they want to learn. The best-performing techniques are often black-box models that favor performance over interpretability and are heavily influenced by course context. In this study, we argue that explainable AI has the potential not only to uncover the reasons behind model decisions, but also to reveal their stability across contexts, effectively bridging the gap between predictive and explanatory learning analytics (LA). In contributing to decision support systems research, this study (1) leverages traditional techniques, such as concept drift and performance drift detection, to investigate the stability of student success prediction models over time; (2) uses Shapley Additive exPlanations (SHAP) in a novel way to explore the stability of the feature importance rankings extracted from these models; and (3) derives new insights from features that remain stable across cohorts, enabling teachers to formulate study advice. We believe this study makes a strong contribution to education research at large and expands the field of LA by augmenting the interpretability and explainability of prediction algorithms and ensuring their applicability in changing contexts.
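The ranking-stability idea in contribution (2) can be illustrated with a small sketch. Here, two synthetic "cohorts" (one with mild drift in a feature distribution) are used to train separate classifiers, feature importances are extracted per cohort, and the agreement between the two importance rankings is measured with a Spearman rank correlation. Note this is a hypothetical illustration: permutation importance stands in for the SHAP values used in the paper, and the feature names, data-generating process, and model choice are invented for the example.

```python
# Hypothetical sketch of cross-cohort feature-importance ranking stability.
# Permutation importance is used here as a stand-in for SHAP values.
import numpy as np
from scipy.stats import spearmanr
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)

def make_cohort(n=400, shift=0.0):
    """Simulate one cohort with three illustrative LMS features:
    weekly logins, quiz score, forum posts (all invented for this sketch)."""
    X = rng.normal(loc=[5 + shift, 60, 2], scale=[2, 15, 1], size=(n, 3))
    # Success driven mainly by quiz score, then by logins; forum posts are noise.
    y = (0.05 * X[:, 1] + 0.3 * X[:, 0] + rng.normal(0, 1, n) > 4.5).astype(int)
    return X, y

importances = []
for shift in (0.0, 1.0):  # cohort 2 has drifted login behavior
    X, y = make_cohort(shift=shift)
    model = GradientBoostingClassifier(random_state=0).fit(X, y)
    result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
    importances.append(result.importances_mean)

# Spearman correlation of the importance values compares the two rankings:
# a value near 1 suggests the model relies on the same features in both cohorts.
rho, _ = spearmanr(importances[0], importances[1])
print("importance rankings agree with Spearman rho =", rho)
```

In the paper's setting, the per-cohort importance vectors would come from aggregated SHAP values rather than permutation importance, but the stability question, whether the ranking survives a change of cohort, is measured the same way.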

Updated: 2024-04-26