A recurring theme in our discussions at LATICE was the idea of process over outcomes. We talked a lot at this conference about understanding student behaviours and how students approach problems, and about how we are not really supporting the development of good process because we do not focus our assessment on it.
This is something we have become more aware of in our use of collaborative learning. How do we know whether students are collaborating well, and whether they have developed good collaboration skills? Not by assessing the outcomes of the collaboration. That places the wrong emphasis on learning, and can really confuse students, as we discussed in our SIGCSE paper.
However, that tends to be what we do: we assess the project that is produced at the end.
We are looking at good ways of assessing collaboration, and I think that as a discipline we need to spend more time thinking about, and structuring our curricula around, ways to better assess process in general. I think we know how to do this, but the ways we know are manual and time consuming. It may be that we have no choice, but it strikes me that, as computer scientists currently teaching our students how to cope with the emerging problems of big data, we could do better.
In the CSER group we are moving more and more of our research into learning analytics, integrating pedagogy, social science and computer science to think about the best ways of analysing and understanding the massive amounts of data that our students generate. This is not just a nicety to give our educators and students pretty graphs; it is crucial to developing better assessment and pedagogical models that focus on process as well as outcomes.