And now the sessions begin, first with a talk by Keith Trigwell, University of Sydney, on models of teaching–learning relationships, and whether there is a relationship between variation in teaching and variation in learning (which we would hope). This talk is an overview of several years of work into teaching inventory creation, with the relationship established through several large-scale studies. Interestingly, the studies have indeed found a correlation between students reporting that they have adopted deeper learning approaches and teachers adopting concept change approaches. (good!)
Although this is a quick summary, it represents extensive work reviewing discipline-based differences, time allocation for teaching, and so on, within this model for examining the correlation between teaching intention and learning intention. Reading list: http://www.clt.uts.edu.au/Scholarship/A.Model.html/
There were some interesting implications that they emphasised from their research – one being that teacher development and experience result in teaching focussed more on concept change, and that teachers who have a holistic view of their discipline are more likely to be student-centred, and thus able to achieve concept change and therefore deeper learning. Of course, with such a brief overview of so much work, I am sure that I have misrepresented some of their findings, and will need to read more of their papers.
The next talk promised to be a review of the top 50 most influential educators of 2012, framed in terms of media interpretation: which issues did the media focus on, and how did that contrast with how the same work was presented in academic venues? Unfortunately, this one somewhat passed me by – the presenter read through a previously published top 50 of influential people in the Australian higher education sector, with various annotations on achievements and commentary. The audience was then asked to vote for those that they found influential. However, it really did not cover some of the interesting questions that could have been posed here (and were, in the abstract). Let's not even mention the idea of getting through a detailed discussion of 50 people, and their influence or lack thereof, in 25 minutes.
After this I attended a talk by Beatrice Tucker and Julie-Ann Pegden, Curtin University, on the relationships between student evaluations of learning and teaching and indicators of student success. Their focus is on unit-level analysis, trying to understand the new performance indicators (UES) and the factors that improve student outcomes. The talk started with a discussion of our assumptions regarding student evaluations, such as who is filling out these surveys, whether assessment results interfere with or bias student responses, and so on. They then followed this up with a brief literature review identifying no or low correlations between student outcomes and evaluation responses. They did identify one interesting aspect from the literature – a study (by Mustaffa, I believe) that found low-GPA students identified the teacher as the most significant factor in their success, while high-GPA students identified the course content.
As a brief overview, in their study they found that response rates increased with student grade, and that there were several patterns across the different units in how overall satisfaction correlated with student outcomes. Within their unit study, they found three patterns: one showing consistent satisfaction across the pass to high distinction range, one showing steady satisfaction across the full outcome range, and one demonstrating the typical whole-of-institution pattern of increasing satisfaction across the whole range. They also analysed qualitative data regarding student perceptions, linked to a classification of the course type: required course, gatekeeper course, high pass rate, discipline-centred, generic skills, assessment profile. However, they really need to look at more units to see whether these are indeed distinct patterns and to understand the different factors involved.
They did do some interesting visualisation using SPSS text analysis of the student evaluations – a fairly simple text-based analysis identifying common themes, and relationships between themes mentioned by the same respondent. I am working on a project at the moment applying ontology learning and information retrieval mechanisms to help visualise evaluations data (and other forms of qualitative data) as a potential alternative, or complementary, mechanism for analysing and understanding qualitative data. This appears to be a fairly primitive version of what I want to do, so I'll have to look further at how it works.
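For my own reference, the kind of analysis they described – tagging responses with themes and then counting which themes are mentioned by the same respondent – can be sketched in a few lines of Python. The theme lexicon below is entirely hypothetical (I don't know the categories SPSS derived in their study); this is just a minimal sketch of keyword-based theme co-occurrence, not their actual method.

```python
from collections import Counter
from itertools import combinations

# Hypothetical theme lexicon (theme -> trigger keywords); an assumption,
# not the categories used in the Curtin study.
THEMES = {
    "teacher": {"lecturer", "teacher", "tutor"},
    "content": {"content", "material", "topics"},
    "assessment": {"exam", "assignment", "assessment"},
}

def tag_themes(response):
    """Return the set of themes whose keywords appear in a response."""
    words = set(response.lower().split())
    return {theme for theme, keywords in THEMES.items() if words & keywords}

def cooccurrence(responses):
    """Count theme pairs mentioned within the same response."""
    pairs = Counter()
    for response in responses:
        for a, b in combinations(sorted(tag_themes(response)), 2):
            pairs[(a, b)] += 1
    return pairs

responses = [
    "The lecturer explained the material clearly",
    "Too much assessment and the exam was hard",
    "Great tutor but the assignment load was heavy",
]
print(cooccurrence(responses))
```

A real system would obviously need stemming, stop-word handling, and learned rather than hand-built theme lexicons – which is roughly where the ontology learning work I mentioned would come in.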