And what would “collaborating better” mean anyway?
Collaborative activities serve many purposes in our teaching. Firstly, they provide opportunities for discussion, which enable us to focus on the development of thought or changing opinions, rather than the transmission of knowledge, which is our common focus in lectures. Secondly, they provide opportunities for knowledge construction – i.e. they allow students to develop their abilities and knowledge beyond what they currently know, building upon the skills, knowledge, expertise, synergy and serendipity that can arise from the interactions of the group. And thirdly, they provide opportunities for the development of professional skills in communication and collaboration.
The first two are intrinsically related, although they are separate objectives – one targets the structure of our curriculum, the other the enhancement of learning. Viewed as a tool to enhance learning, collaborative activities become more than simply something that we can use within a specific course or topic. They become a lifelong skill that students can adopt and adapt to help them learn new concepts and material throughout their careers and lives. If they can use collaboration as an effective learning tool, then they have developed a valuable skill.
But how do we know whether our students are actually building collaboration skills? How do we know whether the discussions that they are having in our collaborative activities are actually building knowledge?
The most immediate and obvious way of assessing whether an educational intervention or change has made an impact is to look at whether the students have improved in their assessments. This tells us whether students have learnt more – or, more accurately, whether they have learnt more of what is relevant to the type of assessment we are applying. But it often does not tell us whether our students have developed better problem-solving skills, or whether they can work more effectively in a group.
I have been reading some work by Chernobilsky and Hmelo-Silver on assessing knowledge construction, and one particular idea struck home as a valuable evaluation exercise that we can apply when experimenting with collaborative learning in our courses. In this study, they evaluate transcripts from online collaborative sessions in their online PBL system, eSTEP, applying transcript coding to analyse the student conversations. The idea is to identify whether students are simply sharing knowledge, by independently stating ideas, concepts, facts, etc., or whether they are using what has been shared, combined with their own knowledge, to extend or construct – by modifying, agreeing or disagreeing (with justification), summarising, transforming, and so on.
By analysing the results of a coded transcript, we can identify whether students actually collaborated or whether they worked in parallel, with surface-only discussion.
Their coding system is quite complex, with several categories (content, collaboration, questioning, complexity, justification, monitoring), each broken down into sub-categories to help identify the contribution made by each utterance from a student. For example, content is broken down into task-related utterances, tool-related utterances, concept-related utterances, and personal talk, while collaboration is broken down into new ideas, modifications, agreement, disagreement, and summaries. And so on. Although the scheme is relatively complex, and the coding would take a considerable amount of time, I think this would be a valuable tool for assessing our activities.
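To make the idea concrete, here is a minimal sketch of how a coded transcript might be analysed. The collaboration sub-category labels are paraphrased from the description above, but the transcript data, the function names, and the "construction ratio" measure itself are my own invention, not the authors' instrument:

```python
from collections import Counter

# Illustrative subset of the collaboration sub-categories described above.
COLLABORATION_CODES = {"new_idea", "modification", "agreement",
                       "disagreement", "summary"}

# A hypothetical coded transcript: (speaker, collaboration code) pairs.
coded_transcript = [
    ("A", "new_idea"),
    ("B", "new_idea"),
    ("C", "new_idea"),
    ("A", "modification"),
    ("B", "agreement"),
    ("C", "summary"),
]

def construction_ratio(transcript):
    """Share of utterances that build on others' contributions,
    i.e. anything other than independently stating a new idea."""
    counts = Counter(code for _, code in transcript
                     if code in COLLABORATION_CODES)
    total = sum(counts.values())
    building = total - counts["new_idea"]
    return building / total if total else 0.0

print(f"knowledge-construction ratio: {construction_ratio(coded_transcript):.2f}")
# → knowledge-construction ratio: 0.50
```

A group working in parallel with surface-only discussion would show a transcript dominated by new ideas and a ratio near zero, while genuine knowledge construction would show a healthy mix of modifications, (dis)agreements, and summaries.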
In this study, they also discuss a visualisation tool that combines the results of their coding analysis with access to online resources – something similar could be gathered through observation of collaborative sessions, i.e. access to the whiteboard, writing something new, modifying an existing entry, etc. Their visualisation tool gives a straightforward way of “seeing” the temporal relationships between different occurrences. However, I am also wondering whether social network analysis techniques could be useful here to explore the make-up of our groups – what roles do the different group members have? Are there particular group structures that result in better collaboration?
(Chernobilsky, Nagarajan and Hmelo-Silver, “Problem-based Learning Online: Multiple Perspectives on Collaborative Knowledge Construction”, Proceedings of the 2005 Conference on Computer Support for Collaborative Learning: Learning 2005: The Next 10 Years!)