Apologies for the delay in writing this, but I wanted to share some of my experiences from attending the first CS Education Research Dagstuhl on Assessing Learning in Introductory Computer Science.
The Dagstuhl was an interesting combination of invited talks and semi-structured workshop sessions, providing opportunities for us to develop new ideas through discussion. The invited talks came primarily from other disciplines, designed to prompt new ways of looking at assessment and assessment structures by learning from others. Our pre-work for the event (brainstorming ideas!) was used to form a series of breakout workshop groups, some of which lasted one or two sessions, while others, such as the corpus creation group that I was part of, ran for the entire event.
This was a fantastic format: the right mixture of opportunities to sit back, listen, and think, and times where we had to push through and work, challenging our preconceptions about assessment and exploring the diversity of what assessment means within Computer Science. One of the aspects that I enjoyed about this event (which I wasn't sure I would!) was its intensive nature – almost all of our time not spent sleeping was spent together, working, talking, and learning more about each other and our community.
Each participant was also asked to present a summary of their research work in a series of poster sessions. This was a nice icebreaker, which enabled us to find out more about each other's work and institutions. I presented a series of posters on the work of the CSER group, which I have included below.
The first poster focussed on our work on Learning Analytics and Teamwork, starting with a summary of our work in Collaborative Learning, analysing how we might assess collaboration and teamwork skills rather than focusing on project outcomes. I also talked about the new work we have developed here on automated analysis of teamwork behaviours and sentiment analysis to guide instructor interventions. Our team health dashboard will soon be available as open source for anyone to use, but if you are interested now, please let me know!
Our second poster covered more recent work on automated extraction of concept maps, led by Thushari Atapattu in her PhD studies. Thushari has continued with us since her PhD as a postdoc working on a Google-funded project on personalised learning at scale. This work explores topic modelling and automated topic labelling in large-scale discussion forums, with the aim of providing dashboards that can help educators see which topics are gathering the most discussion at any point in time, and help students find relevant discussion, particularly when courses are self-paced.
The third poster covered our work in Media Computation. We started exploring Media Computation when we revised our first-year curriculum several years ago. While we were happy with the results, we wanted to explore whether we could do Media Computation at scale in a MOOC, and as a consequence, also explore a blended learning model for our face-to-face courses. Last year, we launched our Think Create Code MOOC, which uses Media Computation in Processing, with some interesting outcomes. In particular, we observed that the MOOC version of this course attracted quite a high percentage of women. We also noted the difficulties of genuine collaboration in MOOC environments.
Our final poster covered our work in K-12 Computer Science education, including our work on developing new professional learning models for online learning and our experiences in running the CSER MOOC program. We were also able to share the good news about our new K-12 professional learning program with the Department of Education and Training!
It was actually quite enjoyable preparing these, as it was a chance to reflect on all of the work that we have completed (and the new projects that we have started!) since we formed the group in 2010. While we had been working on CS Education Research for a number of years beforehand, it was only in 2010 that we officially formed the CSER group, and in 2012, when we were able to employ our first postdoc, that we started to see more significant research outcomes.
With Digital Technologies and Computational Thinking reaching greater prominence in K-12 education, and with increased awareness in Australia of the need for research into CS K-12 pedagogy, it's a fantastic opportunity to establish new research programs and explore how we can apply what we know about CS pedagogy in this new space.
Mark Guzdial recently wrote a post about his future plans as an academic. One of the statements that really resonated with me was the idea that you need a level of critical mass to do impactful research work. I agree with this, but I also think that the current degree of community interest and political investment in Computer Science education provides a strong incentive and motivation that can help drive the creation of this 'mass'.