
Challenges, causes of error, and a failure to replicate

The task of a teacher is not to work for the pupil nor to oblige him to work, but to show him how to work.

Wanda Landowska

Podcast

Stachowiak, B. (2019). Using challenges to motivate learners. Teaching in Higher Ed podcast.

In this episode of the Teaching in Higher Ed podcast, Michael Wesch, Professor of Cultural Anthropology at Kansas State University, talks about his experiences of teaching, and specifically about how he aims always to be authentic with students. Wesch doesn’t hide behind a lectern or his slides, suggesting that in order to know your students, they need to know you. And he doesn’t think that it’s only students who should be challenged; he also talks about the value of challenging ourselves and of sharing that experience with students.

I first came across Michael Wesch when I saw his videos Web 2.0 … The Machine is Us/ing Us and A Vision of Students Today. I still find these videos quite compelling, even though they’re more than 10 years old. You can see more of his work on his YouTube channel.

Article

Norman, G. R., Monteiro, S. D., Sherbino, J., Ilgen, J. S., Schmidt, H. G., & Mamede, S. (2017). The Causes of Errors in Clinical Reasoning: Cognitive Biases, Knowledge Deficits, and Dual Process Thinking. Academic Medicine, 92(1), 23–30.

However attractive the assumption is that diagnostic errors originate in cognitive biases, and the implication that relatively simple and quick strategies directed at identifying and eliminating biases can reduce errors, the evidence is consistent in demonstrating that such strategies have no or limited effectiveness.

We know that students and clinicians are prone to errors in clinical decision making, and that these errors often appear to arise from cognitive biases. In this narrative review of the literature, the authors suggest that teaching strategies aimed at making students aware of these biases, and at getting them to slow down, reflect, and think carefully, are unlikely to reduce errors in clinical reasoning.

We assume that it’s in Type 1 mode (quick, intuitive decision making) that flaws in our thinking creep in, and so we try to get students to shift intentionally into Type 2 mode, which is more reflective and analytical. Except it doesn’t work.

Rather, it’s in the reorganisation of knowledge (i.e. learning) that we find – admittedly limited – evidence for reducing errors in clinical reasoning. The takeaway, for me anyway, is that the long-term benefit of helping students and clinicians learn more effectively outweighs anything gained by simply asking them to “think carefully”.

Resource

Tyson, C. (2014). Failure to replicate. Inside Higher Ed blog.

…education journals routinely prize studies that yield novel and exciting results over studies that corroborate – or disconfirm – previous findings. Conducting replications is largely viewed in the social science research community as lacking prestige, originality, or excitement.

The replication crisis has been playing out in the field of social psychology for a few years now, and it’s interesting to see similar concerns raised about educational research. As someone who has spent most of the past decade doing educational research, I have no doubt that few of my findings would be replicated if the studies were conducted elsewhere.

That’s not the same thing as saying that the results I got were wrong (although maybe they were…someone should try to replicate them), only that the unique context of what I did, and the students I was working with, and their relationships with me, and the technology of the time, and the political situation in the country…etc., all influenced the outcome of the study. In addition to the many interacting variables that affect student learning, there are also unique challenges to conducting educational research.

I don’t know what the answer is – or even if there is one – other than to say that we need to do better when it comes to designing, conducting, and evaluating educational interventions that aim to improve student learning.

See also the No Significant Difference database, which collects studies that compare the outcomes of educational interventions and conclude that there was no significant difference between them.
