Trustworthy AI in Education is an under-researched area. When it comes to the use of AI in society, academics focus on responsibility, accountability, and trust (e.g., in AI systems and ethics), while the political focus is on social acceptance; trust itself is a complex social phenomenon. Thus, when looking at the trustworthy use of AI in the educational sector, we need research that takes both perspectives: it is not enough to have transparent, interpretable, and FAIR AI systems; they also need to be trusted by stakeholders and accepted by society at large.
Trustworthiness is gained and earned. The trustworthiness of AI systems lies not only in the explainability and interpretability of the data and the algorithms, the FAIRness of the system, and the accountability relationship between the AI system developer and its users; it is more complex than that. Trust lies within the complex web of interactions between these AI systems, their intentions and their developers, the actors, the entities, and the regulatory system that comprises the eco-system of the educational sector. The values of human rights, democracy, and the rule of law require thoughtfulness from the conception of these AI systems through their development and implementation in schools and universities.
The primary objective is to develop research excellence in the area of Trustworthy AI in Education and provide a framework, multi-disciplinary insights, materials and tools for building trust in the use of AI in the educational sector.
O1. Map stakeholder motives and interests in the educational eco-system, map accountability relationships in the educational eco-system, and develop a conceptual framework of the layers of trust for the responsible and trustworthy use of AI in education.
O2. Explore and analyse the relevant EU/EEA (GDPR/AI Act) and national legal frameworks (e.g., opplæringslova & universitets- og høyskoleloven) that regulate the processing of personal data and AI, with a view to verifying whether they safeguard trustworthy AI in education and, if necessary, propose amendments at both the EU/EEA and national level, as called for by Personvernkommisjonen (the Norwegian Privacy Commission).
O3. Analyse a variety of AI systems in education against the ethical guidelines for trustworthy AI (lawful, ethical, robust) to identify the key requirements and competencies that should be addressed in building trust between the stakeholders as identified in the conceptual framework.
O4. Develop a repository of communication processes, guidelines, materials, and tools (e.g., games) to address trust in AI in education for multiple stakeholders (parents, students, teachers, privacy officers, EdTech companies, etc.), thus increasing their knowledge of the responsible use of AI in education.
O5. Contribute to national and European work on legal guidelines for AI and Education.
O6. Identify competence needs and new educational and training offerings for a variety of stakeholders on the responsible and trustworthy use of AI in Education.
This work includes the Norwegian education and higher education laws and the Council of Europe's ongoing work on binding legal guidelines for AI and Education.
Thus, through an interdisciplinary collaboration between the Centre for the Science of Learning & Technology (SLATE), the Faculty of Psychology, and the Faculty of Law at the University of Bergen, EduTrust AI contributes scientific value by creating new knowledge, methods, guidelines (educational, technological, and regulatory), and tools, and gives input to a practicable framework addressing the challenging questions around the use of student data and AI systems in education, relevant to the fields of law, information and computer science, the learning sciences, and the social sciences.
The Scientific Advisory Committee (SAC) comprises leading expertise in children's human rights, AI and Education challenges, the strategic use of digital tools in Norwegian schools, national data protection and regulations, ethical and legal challenges in AI-driven practices in education, and learning analytics/AI for reading.
1 November 2023 – 31 October 2027
Trond Mohn Foundation
SLATE, University of Bergen: Barbara Wasson (PI), Anja Salzmann, Mohammad Khalil, Cathrine Tømte (Professor II), Ingunn Ness, Jeanette Samuelsen, Qinyi Lui. Faculty of Law, University of Bergen: Malgorzata Cyndecka (PI); UCL: Wayne Holmes