LAK 2016 – personalisation 

Unfortunately I had to skip the first day of LAK due to some other work commitments, but I’m back today and looking forward to the discussions.

The first talk in today’s session on personalisation presented a dashboard used in a flipped first-year engineering course, which let students see how their level of activity and performance compared with the class average. What I found interesting was the analysis of the dashboard itself: using cluster analysis of student performance as well as interactions with the dashboard, the authors tried to identify groups of students. They looked for behaviours such as whether students changed their behaviour after using the dashboard – did they watch more videos, or fewer? Did they engage in more activities?
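As a rough illustration of the kind of clustering described here, a minimal k-means sketch over invented features (the actual features and data in the talk are not mine to guess – the numbers and feature names below are entirely hypothetical):

```python
# Hypothetical sketch: cluster students by activity/dashboard features.
# Feature names and data are invented for illustration only.
import random

def kmeans(points, k, iters=20, seed=0):
    """A tiny, plain-Python k-means (Euclidean distance)."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            # assign each student to the nearest centroid
            i = min(range(k),
                    key=lambda c: sum((a - b) ** 2
                                      for a, b in zip(p, centroids[c])))
            clusters[i].append(p)
        # recompute centroids as the mean of each cluster
        centroids = [
            tuple(sum(vals) / len(cl) for vals in zip(*cl)) if cl else centroids[i]
            for i, cl in enumerate(clusters)
        ]
    return centroids, clusters

# Hypothetical features: (videos watched per week, dashboard views per week)
students = [(1, 0), (2, 1), (1, 1), (9, 6), (10, 5), (8, 7)]
centroids, clusters = kmeans(students, k=2)
```

On such clearly separated toy data the two clusters recover the low-engagement and high-engagement groups; the interesting step in the actual study is then comparing behaviour before and after dashboard use within each group.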

Snapshot of the dashboard

There was one quite controversial issue in the dashboard usage: the academics had elected to set an initial dashboard state that showed higher engagement for each student, essentially lying to the students about their initial performance (given the lack of data at the start of the course) in order to manipulate them into behaving in a certain way. This is highly problematic, as there has to be an element of trust in the classroom setting. The authors did explain that they were transparent about this process, but it still concerns me.

The second talk explored emotion analysis, using a sequence of surveys in which students were classified according to a model of twelve emotions within an academic setting, along with a visualisation that fed the information back to the students. The authors conducted correlation analysis to see which emotions were associated most strongly with student performance, and also analysed the impact of the visualisations on student reflection. It would be interesting to contrast this with sentiment analysis of student contributions, as we have used in our team health dashboard.
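The correlation step is conceptually simple; a sketch with Pearson's r on made-up numbers (the emotion label and all scores below are invented, not from the paper):

```python
# Hypothetical sketch: correlate a self-reported emotion score with grades.
# All data and the "anxiety" label are invented for illustration.
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical survey scores (1-5) and final grades (0-100)
anxiety = [1, 2, 2, 3, 4, 5]
grades = [88, 84, 80, 70, 65, 55]
r = pearson(anxiety, grades)
```

In this toy example the correlation is strongly negative; in practice one would repeat this across all twelve emotion dimensions and check significance before reporting any association.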

It also reminds me that we still have several continuation projects planned in that space, including student reflections on the usefulness of emotion feedback, teacher intervention planning, and social network analysis (SNA) and cluster analysis to explore potential groups and networks.

An interesting question – how does a student react when they see their emotions in comparison to others in their class? If they are identified as more negative, do they become even more negative? How mature are students in their ability to reflect on emotional feedback?

This last point is quite critical, not just for this area of analytics, but more broadly – we can measure all sorts of things, and we can provide this information to our students, but are they able to use it in a meaningful way to address their self-regulated learning? Do we understand, as yet, how students take on these forms of feedback, and what feedback can actually be used easily? I think we are starting to explore this, but we have to be careful we don’t get caught up in the desire to show as much data as possible, or that we analyse only learner perception rather than learning impact.

The final talk in this session presented a learning analytics framework, but what I found interesting was its specific focus on goal-directed rather than data-directed analytics. In this example, learners construct learning goals, and the analytics presented would be relevant to that goal. This is difficult, though: how would we know which analytics suit a specific goal? Unless we adopt an overly simplified view of goals, in which case I don't see the point. But I think this is a very useful aim to keep in mind as we develop these kinds of systems.


“Data2U: Scalable Real time Student Feedback in Active Learning Environments”, Imran Khan and Abelardo Pardo

“Supporting learning by considering emotions: Tracking and Visualization. A case study”, Samara Ruiz, Sven Charleer, Maite Urretavizcaya, Joris Klerkx, Isabel Fernandez-Castro and Erik Duval

“A Rule-Based Indicator Definition Tool for Personalized Learning Analytics”, Arham Muslim and Mohamed Amine Chatti

