Artificial Intelligence in Education: Layers of Trust (EduTrust AI)

Trustworthy AI in Education is an under-researched area. In debates on the use of AI in society, academics focus on responsibility, accountability, and trust (e.g., in AI systems and ethics), while the political focus is on social acceptance; trust itself, however, is a complex social phenomenon. Research on the trustworthy use of AI in the educational sector therefore needs to take both perspectives: it is not enough for AI systems to be transparent, interpretable, and FAIR; they must also be trusted by stakeholders and accepted by society at large.

Trust is something that must be gained and earned. The trustworthiness of AI systems lies not only in the explainability and interoperability of the data and the algorithms, the FAIRness of the system, and the accountability relationship between AI system developers and users; it is more complex than that. Trust lies within the complex web of interactions between these AI systems, their intentions, and their developers, and the actors, entities, and regulatory system that together comprise the ecosystem of the educational sector. The values of human rights, democracy, and the rule of law demand thoughtfulness from the conception of these AI systems through their development and implementation in schools and universities.

Primary Objective   

The primary objective is to develop research excellence in the area of Trustworthy AI in Education and provide a framework, multi-disciplinary insights, materials and tools for building trust in the use of AI in the educational sector.

Secondary Objectives

O1.  Map stakeholder motives, interests, and accountability relationships in the educational ecosystem, and develop a conceptual framework of the layers of trust for the responsible and trustworthy use of AI in education.

O2.  Explore and analyse the relevant EU/EEA (GDPR/AI Act) and national legal frameworks (e.g., opplæringslova and universitets- og høyskoleloven, the Norwegian Education Act and the Act relating to universities and university colleges) that regulate the processing of personal data and AI, with a view to verifying whether they safeguard trustworthy AI in education and, if necessary, propose amendments at both the EU/EEA and national levels, as called for by Personvernkommisjonen (the Norwegian Privacy Commission).

O3.  Analyse a variety of AI systems in education against the ethics guidelines for trustworthy AI (lawful, ethical, robust) to identify the key requirements and competencies that should be addressed in building trust between the stakeholders identified in the conceptual framework.

O4.  Develop a repository of communication processes, guidelines, materials, and tools (e.g., games) to address trust in AI in education for multiple stakeholders (parents, students, teachers, privacy officers, EdTech companies, etc.), thus increasing their knowledge of the responsible use of AI in education.

O5.  Contribute to national and European work[1] on legal guidelines for AI and Education.

O6.  Identify competence needs and new educational and training offerings for a variety of stakeholders on the responsible and trustworthy use of AI in Education.

[1] The Norwegian education and higher education laws and the Council of Europe's ongoing work on binding legal guidelines for AI and Education.

Through an interdisciplinary collaboration between the Centre for the Science of Learning & Technology (SLATE) at the Faculty of Psychology and the Faculty of Law, University of Bergen, EduTrust AI thus contributes scientific value by creating new knowledge, methods, guidelines (educational, technological, and regulatory), and tools. It also provides input to a practicable framework for the challenging questions around the use of student data and AI systems in education, relevant to the fields of law, information and computer science, the learning sciences, and the social sciences.

Scientific Advisory Committee

The Scientific Advisory Committee (SAC) comprises leading expertise in children’s human rights, the challenges of AI in education, the strategic use of digital tools in Norwegian schools, national data protection and regulations, ethical and legal challenges in AI-driven practices in education, and learning analytics/AI for reading.

Project Period:

1 November 2023 – 31 October 2027

Funded By:

Trond Mohn Foundation

Project Leader:

Barbara Wasson

Project Members:

SLATE, University of Bergen: Barbara Wasson (PI), Anja Salzmann, Mohammad Khalil, Fride Klykken, Cathrine Tømte (Professor II), Ingunn Ness, Qinyi Lui. Faculty of Law, University of Bergen: Malgorzata Cyndecka (PI); University College London: Wayne Holmes

Project Partners:
