I have a new Masters student who is looking at the construction of a visualisation system to help students learn programming. There has been quite a lot of work in this area, building extensive visualisations of algorithms and code so that students can learn the flow of control in programming, and learn about scope and variable usage. My student will be exploring memory usage in programming more deeply, although he is still finalising his ideas at this point.
One of the interesting points that came out here was that in their analysis, it was possible for a student to receive full marks for a substantial portion of the assessment exercise while still having misconceptions. The issue is that they used random input data for their exercises – if the data is not carefully chosen, it might not exercise the parts of the algorithm related to the misconception. This meant that many of the misconceptions were not observable in the exercises set for the students. They are now revising this to ensure that it is not possible to achieve full marks while a misconception remains. Of course, this means that they need to know all of the possible misconceptions, which probably needs some degree of manual supervision or analysis of student work to observe recurring incorrect solutions, which indicate common misconceptions.
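A minimal sketch of how this can happen (a hypothetical exercise, not one from the work discussed): a student solution that harbours a classic misconception – initialising a running maximum to zero – will pass any number of randomly generated non-negative test cases, but a deliberately chosen all-negative input exposes the fault immediately.

```python
import random

def student_max(values):
    """Hypothetical student solution with a common misconception:
    the running maximum starts at 0 rather than the first element."""
    best = 0  # misconception: implicitly assumes values are non-negative
    for v in values:
        if v > best:
            best = v
    return best

# Random test data drawn from non-negative integers never reaches the
# faulty behaviour, so the solution earns full marks.
random_case = [random.randint(0, 100) for _ in range(10)]
assert student_max(random_case) == max(random_case)

# A carefully chosen input exercises the path the random data missed.
chosen_case = [-5, -2, -9]
print(student_max(chosen_case))  # prints 0, but the true maximum is -2
```

The point is that test inputs have to be designed around the known misconceptions, rather than sampled blindly – which is exactly why cataloguing the misconceptions first matters.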
James made a very interesting point in the discussion time – perhaps it would be better, or easier, to get the students to talk aloud about what they think the algorithms are doing, to help identify misconceptions. Another interesting point was whether the students are made aware of the common misconceptions when they are taught the topic. This made me think about mental model development – is it possible for a student to understand a misconception if they don't have a fully formed mental model to contrast it with? Is it possible to form a mental model through negation?