MATH INSTRUCTIONAL STRATEGIES RANKED BY META-ANALYSIS EFFECT SIZE

[Image: Math List.png – the ranked list of interventions and their effect sizes]

An Explanation and a Disclaimer: 

As teachers, we are exposed to so many different teaching interventions each year that it would be impossible to implement all of them effectively in the classroom. More to the point, these methodologies naturally vary in their efficacy as teaching tools and therefore cannot produce equal results for student learning.


There are, of course, only so many teaching hours in a day, and that time is subdivided to accommodate multiple subjects, so wading through the array of recommended teaching interventions to determine which will best promote student success can be an overwhelming task. To that end, I wanted this article to help demystify the process of choosing appropriate teaching methods and to highlight those with the highest probability of improving student learning outcomes. I was inspired by John Hattie’s work, which ranks teaching factors according to their effect sizes (ES) in meta-analysis. Hattie’s work, however, provides a generalized view of teaching interventions; while that is broadly useful for evaluating strategies that apply to all subjects, it can overlook strategies that are subject-specific and that target individual learning goals. I therefore decided to replicate Hattie’s approach and apply it to individual subjects, starting with math.

I compiled the results of five separate meta-analyses, which together examined 101 different quantitative studies on the efficacy of different math interventions. My goal was to include every meta-analysis that surveyed math intervention studies with effect sizes and control groups. While I do not think specific effect size numbers are overly important, I do believe they provide a less subjective measure for determining which teaching interventions are the most likely to succeed. Small differences in effect size, like the 0.02-point difference between scaffolded examples and authentic assessment, may not be materially instructive, but large differences, like the 0.69-point difference between scaffolded examples and game-based learning, are substantive enough to indicate which strategies are most likely to drive classroom learning. By viewing the existing literature and available statistics in this light, I believe teachers can be empowered to make more informed decisions about how best to use their available time and resources.


Generally speaking, an intervention with an effect size (ES) greater than 0.70 can be called a high yield instructional strategy, one with an ES of 0.40 - 0.69 a moderate yield strategy, and one with an ES below 0.40 a low yield strategy. The underlying statistical data can, at times, be less clear-cut, particularly when variables are inconsistent or when studies employ different effect size calculations, but using meta-analyses to synthesize the data helps to mediate such discrepancies and provides a means of comparing and cross-examining results, holding individual studies to a higher degree of accountability. And although this article does analyze studies that use different effect size calculations, which has been a criticism of John Hattie’s work, the existing data on this subject is limited and there are too few studies to allow for more restrictive inclusion criteria. Acknowledging, then, that the specific numbers in this article may be off by slight degrees relative to one another, I posit that the individual numbers themselves matter less than the proportional differences between interventions; the resulting hierarchy is not so dependent on numerical exactness that a few hundredths of a point would upend it.
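Put as a simple rule, the bands look like the following (a minimal formalization of the thresholds above; treating 0.70 itself as the high yield floor is my assumption, since the prose leaves that exact boundary ambiguous):

\[
\text{yield}(ES) =
\begin{cases}
\text{high} & \text{if } ES \ge 0.70 \\
\text{moderate} & \text{if } 0.40 \le ES < 0.70 \\
\text{low} & \text{if } ES < 0.40
\end{cases}
\]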


The largest meta-analysis included here used a Hedges’ g effect size calculation, as did most of the others, meaning the bulk of the results rest on the same calculation method. One study used a PND effect size; however, the values this formula produces are comparable to Hedges’ g. Lastly, the meta-analysis by Rittle-Johnson used a standardized mean difference effect size calculation, but it examined only one intervention and produced entirely typical results. The average effect size across all factors in this analysis was 0.63. Ultimately, I think the precise effect sizes are almost meaningless, as each number is an average drawn from a very diverse set of results. What these numbers really allow us to do is approximate the comparative value of different math interventions in a more educated way.
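For readers who want the formulas, these are the standard textbook definitions rather than anything specific to the meta-analyses above. Hedges’ g compares a treatment group and a control group using a pooled standard deviation, with a small-sample correction factor J; PND (percentage of non-overlapping data), used in single-case research, is the share of treatment-phase data points that exceed the highest baseline point:

\[
g = J \cdot \frac{\bar{x}_T - \bar{x}_C}{s_p},
\qquad
s_p = \sqrt{\frac{(n_T - 1)s_T^2 + (n_C - 1)s_C^2}{n_T + n_C - 2}},
\qquad
J \approx 1 - \frac{3}{4(n_T + n_C) - 9}
\]

\[
\text{PND} = \frac{\#\{\text{treatment points above the highest baseline point}\}}{\#\{\text{treatment points}\}} \times 100\%
\]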

Personally, I am a big fan of this type of research, because it allows teachers to make quick, evidence-based inferences about which interventions would be the most time-efficient to implement in their classrooms. It may be argued that data is changeable, that these effect sizes could shift as more research is conducted, or that the numbers as currently reported could be influenced by errors in execution. But this type of comparative analysis, at the very least, provides the foundations of a scientific metric for evaluating mathematical interventions and a jumping-off point for further research into their efficacy.


Ultimately, while I believe in this research as a tool for exposing teachers to new or more evidence-based teaching methods, I also acknowledge that this list is only a starting point: teachers will need to experiment with these interventions (and any strategies they find in their own research) to determine what best fits and supports the unique learning environment of their classrooms. Just because an intervention on this list has a low ES does not mean it cannot be a powerful instructional tool with the right variables, or that no students will benefit from its use. So much of successful classroom teaching comes down to the different, and often highly individual, needs of students. What works for one class, or what worked for the classes in these studies, may not work for your students. An element of trial and error is therefore inescapable in classroom teaching, but that does not mean teachers should have to rely on guesswork or single-handedly navigate the labyrinth of purported best practices. It is my hope, then, that this article will be viewed not as a ranking system in the strictest sense, but as a springboard for implementing mathematical strategies in an evidence-based way.


Instructional Methods:


  1. Use of Heuristics: This intervention refers to teaching students multiple procedural methods for solving the same problem, and discussing with students the merits and drawbacks of the different procedures. The esteemed Dr. Jon Star has been a particularly strong advocate of this method. He recommends that teachers show students two to three different methods for most types of math questions, but advises against showing too many, as that can become confusing and time-consuming. This method is ultimately quite time-efficient and easy to implement.

  2. Explicit Instruction: This intervention refers to directly explaining to students both the conceptual basis of mathematics and the steps of different procedural methods. In recent years, many teachers have begun to advocate against this idea, arguing that students should be given time to play with and discover math concepts on their own. In my personal experience, however, it is very challenging for students to learn math without explicit instruction. While Inquiry-Based Learning can be a great tool for building students' problem-solving skills, I personally believe that math should be taught primarily through direct instruction.

  3. Student Verbalizations of their Mathematical Reasoning: This intervention refers quite simply to students discussing their mathematical reasoning within a group context. The fact that this method has such a high ES was a bit of a surprise to me, but it seems to be further evidence that students need math discussions to improve their math skills. Indeed, it hardly appears coincidental to me now that the first three interventions on this list are Use of Heuristics, Explicit Instruction, and Student Verbalizations of their Mathematical Reasoning; all three center on explaining and discussing mathematical thinking.

  4. Speed Based Intervention: This intervention is essentially speed-focused skill and drill. There are many ways it can be used in a classroom, from games like Around the World, to flash cards, to Mad Math Minutes. I was surprised to see this intervention perform so highly in meta-analysis. While this type of approach used to be very popular, its popularity has waned in recent years with the rise of Inquiry-Based Learning.

  5. Peer Tutoring Multi-Age: The meta-analysis showed that peer tutoring was only a high yield strategy when the tutors were older than the students being tutored. Despite this, in my personal experience, using in-class peer tutors can be an extremely high yield strategy. However, I have personally coupled peer tutoring with other interventions that, in my opinion, made it more effective, including classroom economy and an Action Research framework.

  6. Concrete Representational Abstract: This intervention refers to an iterative approach to geometry instruction, meaning it combines procedural and conceptual teaching. Within the CRA model there are three stages. The first is the concrete stage, in which students use manipulatives and concrete models to explore the conceptual knowledge behind a particular area of math instruction. The second is the representational stage, in which students create two-dimensional drawings and diagrams of what they learned in the first stage. The third is the abstract stage, during which students are taught the necessary procedural knowledge. While this intervention meets our criterion for a high yield strategy, it is complicated and specific, both for teachers to learn and to explain to other teachers. Of all the high yield strategies on this list, it is therefore the one I am least likely to recommend to another teacher.

  7. Sequence and/or Range of Scaffolded Examples: This intervention refers to providing students with multiple examples of how to solve a problem, at increasing levels of difficulty. Essentially, the teacher scaffolds the explanations of how to solve a math problem. For example, a teacher could show how to add large numbers without regrouping, then with regrouping, then with decimals; alternatively, a teacher could provide examples of addition problems across different place value sizes. Personally, I have always scaffolded my explanations over a period of a week to a month, because I constantly worry about the attention span of my students. However, with an ES of 0.82, it is hard to argue with these results.

  8. Authentic Assessment: Authentic assessments are meant to mimic tasks students might encounter in real life and to hold intrinsic value for them. For example, having students create a resume in language class would be an authentic assessment. This might be the effect size that surprised me the most, as I cannot say I have ever been a big proponent of authentic assessment, but again, these results are hard to argue against.

  9. Contingent Reinforcement: Contingent reinforcement is the implementation of a specific reward for a specific task, for example, giving a student a piece of candy every time they complete one sheet of math. In the studies examined in this meta-analysis, the reinforcement was given specifically when students beat a previous test score. As I see it, this research had several limitations. For starters, the cited studies had extremely small sample sizes; one looked at only 4 students, which is nowhere near enough to be statistically meaningful in my mind. Secondly, reward systems have generally been shown to be far more effective for younger students than for older students, and these studies only looked at younger elementary students, which could make the results less valid for older students. On a personal note, I prefer reward systems that create natural intrinsic rewards, such as classroom economy, over more extrinsically based systems, as some studies have shown that extrinsic rewards lower students’ intrinsic motivation over the long term.

  10. Cover Copy Compare: With this intervention, students fold a sheet of paper into three. On the first fold, students copy down math facts from the board. Then they cover the first fold and try to rewrite the math facts from memory on the second fold. On the last fold, they compare their answers to the original math facts. The strategy is, at heart, a simple memorization activity.

  11. Early Numeracy: This meta-analysis looked only at the efficacy of having students start learning mathematics during the early primary years. It is an important factor to consider, because some advocates within the DAP (Developmentally Appropriate Practice) movement still argue against the early direct instruction of literacy and numeracy.

  12. Providing Feedback: This ES was consistent with John Hattie’s findings. While it appears fairly moderate, I would argue that feedback is only as effective as its specificity to the task at hand. Good feedback is not just an evaluation, but an explanation of how a student can improve. In my own experience, tools like self-assessment, peer assessment, conferencing, and having students copy and comment on their own feedback can make feedback far more effective.

  13. Using Visual Representations To Solve Problems: This intervention is essentially the use of diagrams and manipulatives to solve problems. It is often advocated as one of the most effective ways to increase math learning yields. However, I have never personally found it to be all that impressive an intervention, and I am not surprised that its ES is a little on the lower side.

  14. Interspersal: This intervention involves mixing easy and challenging math problems. While the learning yield is quite moderate, the opportunity cost is very low, so I would argue it is well worth teachers trying.

  15. Self Explanation: In this intervention, students are told to talk through their procedural steps as they try to solve a math problem. Again, while the ES of this intervention was quite low, so was the opportunity cost, and I would therefore recommend implementing it.

  16. Formative Assessment, Coupled with Optional Targeted Instruction: Formative assessment is when students are given assessments partway through a unit to check their understanding. In this intervention, students are offered extra instruction to help improve their marks after completing the formative assessment. Personally, I suspect that making part of this intervention optional for students likely weakened the ES. I use formative assessment all the time, coupled with an Action Research framework and self-assessment, and I have found it to be a very high yield strategy.

  17. Formative Assessment: (See intervention 16 for more information.) It is interesting that formative assessment on its own had a lower ES than when coupled with Optional Targeted Instruction. I would argue this is further proof that formative assessment is only as useful as the quality of the feedback that comes with it. Personally, I am a big proponent of formative assessment because of how it can be used with Action Research, something the literature shows has a very high yield.

  18. Formative Assessment, Coupled with Student Goal Setting: (See intervention 16 for more information.) It was very interesting to me that this intervention had such a low ES, especially considering that Providing Feedback With Goal Setting also had an extremely low yield. To me, this suggests that having students set their own learning goals is not an effective strategy for increasing learning.

  19. Peer Tutoring (Same Age): (See intervention 5, Peer Tutoring Multi-Age.)

  20. Game-Based Learning: Game-based learning is often touted as the be-all and end-all of education, much like manipulatives. However, it consistently shows low effect sizes across the research, for multiple subjects. Personally, I use games all the time, sell smartboard games, and am a big proponent of learning games. But similar to feedback, I think game-based learning is only as effective as the games being used. Game-based learning is often coupled with Inquiry-Based Learning, and I think this lowers its efficacy. I use games to drill students’ math facts, vocabulary, and phonetic sounds, which, I believe, couples the games with other higher yield strategies while also increasing student motivation. That being said, it is abundantly clear from the literature that game-based learning is not the be-all and end-all of teaching.

  21. Providing Feedback With Goal Setting: In this intervention, teachers provide feedback to students and then ask the students to create learning goals based on that feedback. The results were significantly lower than for giving students feedback alone. Indeed, the results of this meta-analysis were so low that the authors did not include the mean effect size, reporting only the highest and lowest ES. The highest ES was 0.07, one of the lowest effect sizes I have ever seen in a math instruction study. I think it is therefore very unlikely that this intervention would be worth the opportunity cost of implementing.



Additional Information: There is a large amount of additional data that I collected for this research, including p-values, sample sizes, effect size formulas, and effect size ranges. If you are interested in examining this data for your own benefit, please email me at evidenced.based.education@gmail.com

References: 

Gersten, R., et al. (2009). A Meta-analysis of Mathematics Instructional Interventions for Students with Learning Disabilities: Technical Report. Instructional Research Group. Retrieved from <https://www.researchgate.net/publication/266864172_A_Meta-analysis_of_Mathematics_Instructional_Interventions_for_Students_with_Learning_Disabilities>


Methe, S., Kilgus, S., Neiman, C., & Riley-Tillman, T. (2012). Meta-Analysis of Interventions for Basic Mathematics Computation in Single-case Research. Journal of Behavioral Education, 21(3), 230–253. Retrieved July 25, 2020, from www.jstor.org/stable/43551228


Nelson, G., & McMaster, K. L. (2019). The Effects of Early Numeracy Interventions for Students in Preschool and Early Elementary: A Meta-Analysis. Journal of Educational Psychology, 111(6), 1001–1022. doi:10.1037/edu0000334. Retrieved July 25, 2020.


Rittle-Johnson, B., Loehr, A., & Durkin, K. (2017). Promoting self-explanation to improve mathematics learning: A meta-analysis and instructional design principles. ZDM, 49(4), 599–611. doi:10.1007/s11858-017-0834-z


Tokac, U., Novak, E., & Thompson, C. G. (2019). Effects of game-based learning on students’ mathematics achievement: A meta-analysis. Journal of Computer Assisted Learning, 35(3), 407–420.

Garfourth, K. (2020). Concrete – Representational – Abstract: An Instructional Strategy for Math. LD@school. Retrieved from <https://www.ldatschool.ca/concrete-representational-abstract/>