Looking at the current chatter around big data in higher education, there’s a definite hint of Big Brother.
“Data as a blunt instrument to replace qualitative understanding.” “Data misappropriated by a government intent on ‘shaking up’ the university sector.” “Fears of data security threats or selling on of data for commercial purposes.”
Actually, learning analytics can provide so much more than quick answers to standard questions. It’s about ongoing learning, evolving understanding and applying that incremental understanding in a variety of ways to support the student experience.
At Edinburgh, for example, we are using data to combine personalised and automated feedback to undergraduate science and technology students, as part of our OnTask Learning project.
Each student is associated with a set of indicators (or markers) that outline their learning progression. For each level of these indicators – there may be around four or five levels of expertise within each given project – students are then assigned personalised feedback. It's a bit like completing a level in a computer game: the feedback you receive differs depending on your starting point and your overall score.
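In code terms, this kind of indicator-and-level scheme amounts to a lookup from each indicator's current level to a feedback message. The sketch below is purely illustrative: the indicator names, levels and messages are invented for this example and are not the OnTask project's actual rules.

```python
# Hypothetical sketch: map each indicator level to a feedback message.
# Indicator names, levels and wording are invented for illustration;
# they are not taken from the OnTask Learning project.

FEEDBACK = {
    "lab_reports": {
        1: "Start with the worked example before attempting the exercises.",
        2: "Your structure is solid; focus next on justifying your method choices.",
        3: "Strong work. Try extending your analysis to the optional dataset.",
    },
    "forum_participation": {
        1: "Posting one question a week is a good way to get unstuck early.",
        2: "Consider answering a peer's question to consolidate your own understanding.",
    },
}

def personalised_feedback(student_indicators):
    """Return one message per indicator, matched to the student's current level."""
    messages = []
    for indicator, level in student_indicators.items():
        levels = FEEDBACK.get(indicator, {})
        if level in levels:
            messages.append(levels[level])
    return messages

# A student at level 2 on lab reports and level 1 on forum participation
# receives two messages, one per indicator:
for msg in personalised_feedback({"lab_reports": 2, "forum_participation": 1}):
    print(msg)
```

As the student's levels change over the course, the same lookup produces different messages, which is the "completing a level" effect described above.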
It’s important to emphasise that we are not trying to label students. Rather, we are treating students as multifaceted subjects characterised by a set of indicators. As they progress in their learning, some of the indicators will change along with the topics they are working on, resulting in different types of feedback.
So, where do the data come from? Institutions can gather data from several sources, including digital and real-life interactions – learning management systems, student information systems, discussion forums, library use, assessments and observation – to build a model of each student.
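Mechanically, building such a model means merging records from each source, keyed by student, into a single profile. The sketch below is a hypothetical illustration: the source names and fields are invented, not a description of any institution's actual systems.

```python
# Hypothetical sketch: merging records from several institutional sources
# into one per-student profile. Source and field names are invented.

from collections import defaultdict

def build_student_models(lms_logins, library_visits, assessment_scores):
    """Combine per-source records (keyed by student id) into one profile each."""
    profiles = defaultdict(dict)
    for sid, count in lms_logins.items():
        profiles[sid]["lms_logins"] = count
    for sid, count in library_visits.items():
        profiles[sid]["library_visits"] = count
    for sid, scores in assessment_scores.items():
        profiles[sid]["mean_score"] = sum(scores) / len(scores)
    return dict(profiles)

models = build_student_models(
    lms_logins={"s1": 42, "s2": 7},
    library_visits={"s1": 5},
    assessment_scores={"s1": [65, 72], "s2": [48]},
)
print(models["s1"])  # {'lms_logins': 42, 'library_visits': 5, 'mean_score': 68.5}
```

Note that the profile is deliberately sparse: a student simply lacks a field for any source with no record of them, rather than being assigned a default.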
As well as supporting learning gains through personalised feedback on each assignment, the same student data drive automated messages: students who, for example, miss several classes, get a low score on an assessment or don’t complete the week’s reading receive targeted advice for getting back on track.
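A rule set like this can be sketched as a series of threshold checks, each mapped to one of the tutor-written scripts. The thresholds and wording below are invented for illustration, not taken from any institution's system.

```python
# Hypothetical sketch of rule-based nudges. Thresholds and message text
# are invented for illustration only.

def automated_message(record):
    """Pick a targeted nudge from a student's engagement record, or None."""
    if record.get("classes_missed", 0) >= 3:
        return ("You've missed several classes recently - the lecture "
                "recordings and tutorial drop-ins can help you catch up.")
    if record.get("last_assessment_score", 100) < 40:
        return ("Your last assessment score suggests revisiting the core "
                "material; a study plan template is linked in the course hub.")
    if not record.get("weekly_reading_done", True):
        return ("This week's reading underpins the next assignment - "
                "try the summary questions to check your understanding.")
    return None  # on track: no nudge needed

print(automated_message({"classes_missed": 4}))
```

The rules fire in priority order, so a student triggering several conditions gets one focused message rather than a barrage.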
For tutors, this means an initial time investment to write the series of targeted email scripts. But the benefits for students are far reaching, giving them specific advice and actions to consistently improve the quality of their learning and output: the elusive learning gain.
Targeted support for higher-achieving students is generally poor, with countries such as the UK and the Netherlands focusing most effort on the lowest-achieving quartile of students (compared with Australia, where mid- to low-range students receive specific support). Our data-informed feedback project, in contrast, would offer support to the 2:1 student who aspires to achieve a first as well as to those who are underperforming.
Challenging student satisfaction surveys
We know that dissatisfaction with the National Student Survey (NSS) is widespread, with concerns across the sector about its influence in, for example, generating teaching excellence framework scores.
Can learning analytics provide more accurate data? I believe so. When students complete a survey such as the NSS, their responses are based on their perceptions. Learning analytics gives us evidence of their actual experience, including different facets of engagement and learning gains from one assignment to the next.
The University of Sydney last month began a pilot project using feedback analytics to measure student satisfaction and has already seen satisfaction with feedback grow from 3.35 to 3.85 on a five-point Likert scale. We’ll begin our own pilot in Edinburgh in September.
Understanding what satisfaction means in the context of learning is also key. Happiness is not necessarily an indicator of “good” learning. Consider the concept of “desirable difficulties”, well established in educational psychology: effortful strategies such as spaced practice and self-testing that improve learning precisely because they are demanding, and that can be encouraged through in-course expectations. Learning analytics can offer a way to understand the complex interplay between learning gains, effective study and teaching practices, and student satisfaction.
However academics and students choose to use the data that are increasingly available on their student journey, context is key. You can’t separate the data from the organisational culture, national and regional differences, pedagogical differences between courses, and the political context. Rather than applying the same data rules across all providers and courses to reach a neat set of answers, we must look at the context in which we’re operating.
Even within an institution, one size absolutely does not fit all. The approach we’ve taken with our science and technology students would not work with, say, English or history students, although we’re following with interest the development of writing analytics that some of our global partners are working on, where analytics are being used to gather evidence of, for example, the coherence of an argument or the quality of a summary.
Dragan Gasevic is chair in learning analytics and informatics at the University of Edinburgh. He is making a keynote presentation at the QAA International Enhancement Conference taking place in Glasgow from 6-8 June.