With the rise of affordable VR, its relevance for learning is also increasing. We want to understand how people learn in VR and how we can optimise that learning through multimodal learning analytics.
This project aims to use interaction data, such as eye-tracking and positional data, to understand what happens when subjects are immersed in Virtual Reality Learning Environments (VRLEs). As VR scenarios are complex and offer advanced ways of interacting, we want to investigate the subjects' interaction with the VRLEs using multimodal data. By combining data such as position, gaze, and game events, we hope to understand how we learn and how we can learn better in VR.
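A core step in combining such modalities is aligning them on a common timeline, since each stream (gaze, position, game events) is typically logged at its own rate. The following is a minimal sketch of that idea, assuming hypothetical timestamped samples; the sample values, field names, and nearest-neighbour alignment strategy are illustrative assumptions, not a description of any specific VRLE logging pipeline.

```python
from bisect import bisect_left

# Hypothetical samples: (timestamp_seconds, value). In a real VRLE, gaze
# would come from the headset's eye tracker and position from the engine.
gaze = [(0.00, (0.10, 0.20)), (0.02, (0.12, 0.21)), (0.04, (0.50, 0.50))]
position = [(0.00, (1.0, 0.0, 2.0)), (0.03, (1.1, 0.0, 2.0))]

def nearest(samples, t):
    """Return the sample whose timestamp is closest to t.

    Assumes `samples` is sorted by timestamp and non-empty.
    """
    times = [ts for ts, _ in samples]
    i = bisect_left(times, t)
    candidates = samples[max(i - 1, 0):i + 1]
    return min(candidates, key=lambda s: abs(s[0] - t))

# Align each gaze sample with the closest positional sample, producing
# one multimodal record per gaze timestamp.
fused = [
    {"t": t, "gaze": g, "position": nearest(position, t)[1]}
    for t, g in gaze
]
```

Nearest-neighbour matching is only one option; interpolating position between samples, or resampling all streams to a fixed rate, are common alternatives when sampling rates differ widely.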