Recently, John Hattie, together with B. Wisniewski and K. Zierer, put out what amounts to a correction paper on his meta-analysis of feedback. Previously, Hattie had placed the effect size of feedback at .70. However, as the 2020 paper admits, Hattie's original analysis accidentally counted some outlier data twice: it included both the original outlier studies and meta-analyses that already contained those outliers within his secondary meta-analysis. For whatever reason, there does appear to be a significant amount of outlier data on this specific topic in education. While most studies place feedback at an ES of .40-.60, several studies report negative effect sizes or extreme effect sizes above 2.0. Of course, this is precisely why meta-analysis is such an imperative tool within the context of education and why we cannot rely on individual studies alone.
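To illustrate how much a couple of outliers can distort an averaged effect size, here is a minimal Python sketch. The individual effect sizes and the simple 0-to-1 trimming rule are my own hypothetical choices for illustration, not values or methods from the paper:

```python
# Hypothetical effect sizes from individual feedback studies; the extreme
# values (-0.30 and 2.10) stand in for the outliers described above.
effect_sizes = [0.42, 0.48, 0.55, 0.51, 0.44, -0.30, 2.10]

def mean_es(values):
    """Unweighted mean effect size across studies."""
    return sum(values) / len(values)

with_outliers = mean_es(effect_sizes)

# A crude outlier rule, purely for illustration: keep values between 0 and 1.
trimmed = [es for es in effect_sizes if 0 <= es <= 1]
without_outliers = mean_es(trimmed)

print(round(with_outliers, 2))     # prints 0.6
print(round(without_outliers, 2))  # prints 0.48
```

Just two extreme studies out of seven shift the average noticeably, which is why careful outlier handling (and not double-counting studies) matters so much in secondary meta-analysis.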

Overall, this new meta-analysis found that feedback had a generalized effect size (Cohen's d) of .55; with outliers removed, the effect size drops to .48, giving feedback a moderate effect size overall. The authors also noted that feedback studies with control groups had an ES of .42, while feedback studies that used pre- and post-test data had an ES of .63. I have previously discussed this on my podcast, but I do not think pre- and post-test studies are valid for education, as they are so affected by time. Ultimately, we should want education studies to always use control groups. However, as most meta-analyses in education do not currently control for this, I unfortunately think we cannot exclude this data when making comparisons between education factors and their effect sizes.
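For readers unfamiliar with Cohen's d, it is simply the difference between two group means divided by their pooled standard deviation. Here is a minimal sketch of the control-group version, using hypothetical test scores rather than any data from the meta-analysis:

```python
import math

def cohens_d(treatment, control):
    """Cohen's d between two independent groups, using the pooled SD."""
    def mean(xs):
        return sum(xs) / len(xs)

    def var(xs):  # sample variance (n - 1 denominator)
        m = mean(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    n1, n2 = len(treatment), len(control)
    pooled_sd = math.sqrt(
        ((n1 - 1) * var(treatment) + (n2 - 1) * var(control)) / (n1 + n2 - 2)
    )
    return (mean(treatment) - mean(control)) / pooled_sd

# Hypothetical post-test scores for a feedback group and a no-feedback control.
feedback_scores = [72, 75, 78, 80, 74, 77]
control_scores = [68, 70, 73, 75, 69, 72]
print(round(cohens_d(feedback_scores, control_scores), 2))  # prints 1.74
```

Pre/post designs instead compare the same group against itself over time, which is why growth from ordinary instruction and maturation gets folded into their effect sizes.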

In general, it appears from this new analysis that feedback should be considered a moderate-yield teaching factor and not a high-yield one. That being said, their meta-analysis also showed that different types of feedback were more and less effective; moreover, feedback was more and less effective for different types of learning. Please see the graph below for these results.

[Figure: Feedback IMG001.png — effect sizes by type of feedback and type of learning]

As we can see from this graph, while feedback itself is not a high-yield strategy, highly detailed feedback (or descriptive feedback) and student-to-student feedback are. I think this suggests two things: first, that the more detailed feedback is, the better; and second, that feedback can be an important component of a successful peer tutoring program. Interestingly, the authors also looked at oral vs. written feedback and found no evidence that one was superior to the other. I think this is good news for teachers: feedback can be one of the most time-consuming parts of the job, and oral feedback is easier and faster to provide than written feedback.

Admittedly, what set me researching this topic today was the concept of immediate feedback. Sadly, to the best of my knowledge, there are no meta-analyses on this topic. I wanted to cover it because I recently saw a very popular education influencer promoting the vast superiority of immediate feedback. The influencer not only claimed that this type of feedback was superior, but was also offering a paid course on the topic for interested teachers.

While I did not find a meta-analysis on this topic, I did come across an interesting study by Metcalfe et al. Their study involved two experiments comparing delayed feedback to immediate feedback. In their first experiment, they looked at 27 grade 6 students and measured the effect size of delayed and immediate feedback over 4 vocabulary sessions. Each session was 1-4 days apart, and in each session the students took a test on the prescribed vocabulary. In the immediate feedback group, students got feedback immediately after giving a wrong answer and had to enter the correct answer before moving on. In the delayed feedback group, students received their feedback at the beginning of the next session.

In this experiment, the delayed feedback group actually outperformed the immediate feedback group, with an ES of .29 relative to a no-feedback control group, whereas the immediate feedback group had an ES of .17. The authors then replicated their experiment with harder vocabulary and a sample of 20 university students. In this second experiment, they found no difference between the two groups. In both experiments, all students tried each feedback type for consistency's sake.

While I do not think this study is evidence enough to dismiss the claim of the aforementioned influencer, I think its existence, combined with the lack of meta-analyses on the topic, is enough to conclude that there is little evidence that immediate feedback is a crucial aspect of teaching. That being said, I do think there is merit to the idea, and I hope there is more research on the topic in the future.

Ultimately, I think feedback is a core component of teaching, regardless of its effect size. If we as teachers do not provide feedback to students, I do not see how they can be expected to learn from their mistakes. That being said, I do not think the type of feedback teachers provide is something they need to labor or stress over. It appears from this data that more descriptive feedback is better. However, I think tools like class-wide feedback and conferencing can be a great way to deliver more feedback in a more time-efficient manner. As a final side note, I want to compliment Hattie for having the integrity to issue a correction on this topic.

Written by Nathaniel Hansford

Last Edited: 2/28/2020


Wisniewski B, Zierer K, Hattie J. The Power of Feedback Revisited: A Meta-Analysis of Educational Feedback Research. Front Psychol. 2020;10:3087. Published 2020 Jan 22. doi:10.3389/fpsyg.2019.03087

Metcalfe, J., et al. (2009). Delayed Versus Immediate Feedback in Children's and Adults' Vocabulary Learning. Memory & Cognition.

Copyright © 2018 Pedagogy Non Grata  - All Rights Reserved.