Learning analytics promises insight into learning processes through the analysis of educational big data. Measuring learning with data points, however, raises many challenges, three of which stand out. First, obtaining "good quality" datasets, that is, datasets from which something about learning can actually be inferred. Second, selecting appropriate analysis methods, so that the algorithms are not merely tuned until they reproduce the expected patterns. Third, assigning pedagogical meaning to the results of the data analysis.
These challenges accompany this PhD project, which comprises four case studies that use learning analytics to explore different aspects of peer assessment (three of which involve data collected at one of two Norwegian University Colleges). The first study draws solely on a big dataset (10 000+ students from 1 000+ institutions) and uses no context data. The remaining three studies, however, supplement their big datasets with context data. The second study explores the relationship between students' grades and their performance. The third study compares students' drafts of assignments with their final submissions. The fourth study investigates how students' peer assessment skills change over time.
The main data source for each case study is an online platform, Peergrade, which facilitates peer assessment. In three of the studies this is supplemented by additional sources, such as focus group interviews, information about the course (i.e., its learning design), questionnaires, or student grades. The datasets include both numerical and text data, and are analysed using various machine learning techniques, such as natural language processing (NLP), clustering, and classification.
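To illustrate the kind of text analysis such studies rely on, the sketch below groups short peer-feedback comments by lexical similarity. It is a minimal, self-contained example, not the project's actual pipeline: the comments are invented for illustration, and the bag-of-words vectors with cosine similarity stand in for the richer NLP and clustering methods mentioned above.

```python
from collections import Counter
import math

def tokenize(text):
    """Lowercase and split a comment into bare word tokens."""
    return [w.strip(".,!?").lower() for w in text.split()]

def vectorize(tokens):
    """Build a simple term-frequency (bag-of-words) vector."""
    return Counter(tokens)

def cosine(a, b):
    """Cosine similarity between two term-frequency vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Hypothetical peer-feedback comments (invented for this example).
comments = [
    "Great structure and clear argument, well done",
    "Clear structure, great examples, well organised",
    "The argument is weak and the structure is confusing",
    "Confusing structure, weak evidence throughout",
]

vectors = [vectorize(tokenize(c)) for c in comments]

def most_similar(i):
    """Index of the comment most lexically similar to comment i."""
    sims = [(cosine(vectors[i], vectors[j]), j)
            for j in range(len(vectors)) if j != i]
    return max(sims)[1]

# The two praising comments pair up, as do the two critical ones.
print(most_similar(0))  # → 1
print(most_similar(2))  # → 3
```

In practice one would replace the raw term frequencies with TF-IDF weighting and feed the vectors to a proper clustering or classification algorithm, but the pairing of similar comments shown here captures the core idea.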