In this article, I will try to review the efficacy of Amplify CKLA through meta-analysis. To be fair, this is a challenging task, as there is very minimal research into the program. That being said, the program is at the very least research-based, meaning that while the experimental evidence for the program may be lacking, there is sufficient evidence behind most of its principles.
According to the Amplify website, the program explicitly teaches the mechanics of reading while simultaneously teaching background knowledge. It explicitly teaches the 44 English language sounds, 150 morphemes, spelling, grammar, and writing skills, using “fun/dynamic decodable texts and content rich read alouds.” Explicit instruction, along with morphological, phonological, spelling, and writing instruction, are all well-evidenced ideas within the meta-analysis literature.
According to Hattie’s latest meta-analysis, phonics instruction has an effect size of .60 and explicit instruction has an effect size (ES) of .57. According to the 2010 Graham et al. meta-analysis, spelling instruction has an ES of .79, and according to my 2021 (not peer-reviewed) meta-analysis, morphology instruction has an ES of .51.
Conversely, the Elleman et al. 2009 meta-analysis found a small mean effect size of .21 on standardized tests for the teaching of vocabulary, and I found an ES of .09 for decodable text in my non-peer-reviewed 2021 meta-analysis. You can find these effect sizes graphed below; however, the results for Amplify specifically may not be congruent with these effect sizes.
I included any study with a control group that either reported a calculated effect size or provided the raw data needed to calculate one. I examined each study listed on the CKLA research website and the Amplify research website. I also searched the academic databases Education Source and Sage Portal for the terms “CKLA”, “Core Knowledge Language Arts”, and “Amplify CKLA”. Every study found had already calculated its own effect size, and no additional studies were found in either database. Mean effect sizes were then calculated and displayed in as many ways as possible, to give the reader the most options for understanding. Effect sizes were not weighted by sample size. This study is not peer-reviewed.
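As a sketch of the method described above, the calculation would be a standardized mean difference (Cohen's d) for any study supplying raw group statistics, followed by a simple unweighted mean across studies. The numbers below are illustrative only, not data from the studies reviewed here.

```python
import math

def cohens_d(mean_t, mean_c, sd_t, sd_c, n_t, n_c):
    """Cohen's d: difference in group means divided by the pooled standard deviation."""
    pooled_sd = math.sqrt(
        ((n_t - 1) * sd_t**2 + (n_c - 1) * sd_c**2) / (n_t + n_c - 2)
    )
    return (mean_t - mean_c) / pooled_sd

# Hypothetical inputs, for illustration only.
effect_sizes = [
    cohens_d(105, 100, 14, 15, 60, 58),  # study reporting raw group statistics
    0.38,                                 # study that reported its own ES
]

# Unweighted mean: each study counts equally, regardless of sample size.
mean_es = sum(effect_sizes) / len(effect_sizes)
print(round(mean_es, 2))
```

Because the mean is unweighted, a small pilot study moves the average exactly as much as a study with thousands of students, which is one reason the resulting mean ES should be read cautiously.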
Wedman conducted this study in 2004, comparing the average performance of five CKLA schools to the US average. The study was unpublished and reported moderate differences but smaller standard deviations. No effect size was calculated, nor could one be calculated from the statistics provided.
Cabell et al. published this study in 2009, comparing students who received background knowledge instruction with students who did not. The study looked only at background knowledge and vocabulary outcomes for K-5 students. It is listed on the Amplify website as proof of concept.
Cabell et al. published this “meta-analysis” in 2019. It is listed on the CKLA website under research. The study looks at the effect of providing background knowledge instruction. I put the words meta-analysis in quotation marks because it is not actually a meta-analysis but rather a literature review, as the authors do not conduct any statistical analysis. The study claims to have found 32 applicable RCTs. However, it does not list which studies in its bibliography were part of the 32, nor does it include its inclusion criteria. That being said, it does list the effect sizes found in three individual studies: Cabell 2002, Cabell 2019, and Elleman 2009. Strangely, the Elleman paper is itself a meta-analysis and not an RCT. Moreover, the Elleman paper is not about CKLA.
Cabell conducted an unpublished study, on an unspecified date, examining the effect of CKLA on kindergarten students. Although unpublished, the study found results within the normal range.
Core Knowledge conducted their own pilot study on grade 5 students in 2019. The study had a sample size of 5,602 grade 5 students and lasted one year. It found an ES of .38, which is in the low-moderate range.
Abele et al. conducted an unpublished study in 2000, comparing the results of five CKLA schools to five non-CKLA schools. The results of this study were extremely negative: in 8 of the 10 outcomes examined, the CKLA schools had worse results than the control schools. Effect sizes could not be calculated for this paper.
Discussion of Studies Included:
There are some issues of concern with these papers. There is a severe sponsorship bias, with most of the studies being connected to Core Knowledge. Several of the papers excluded non-significant effect sizes, which biases the results upward. Non-standardized tests were used, which likely inflated effect sizes. We also have two papers that were supposed to be completed but were never published, which raises the question of whether the “file drawer problem” is inflating these results. That being said, results were typically moderate, not high.
There is a very strong theoretical basis for CKLA. I especially like that it starts with a base in synthetic phonics and transitions to the explicit instruction of morphology. This analysis shows a moderate benefit to the program, but there needs to be significantly more research before it can be called evidence-based and not just research-based.
The most unique factor of CKLA, in my opinion, is its focus on background knowledge instruction; unsurprisingly, its highest-yield outcome was for background knowledge (.55). That being said, the overall results were on the lower end.
Final Grade: B. There are few direct studies, but most of the program’s principles are well evidenced within the meta-analysis literature. The meta-analysis found an ES of .40-.49.
Qualitative Grade: 9/10
The program includes the following evidence-based principles: direct instruction, individualized instruction, scaffolding, phonics, fluency instruction, morphology instruction, sight word instruction, and comprehension instruction.
Written by Nathaniel Hansford,
Last Edited 2022-07-24
Hwang, H., Cabell, S. Q., & Joyner, R. E. (2022). Effects of integrated literacy and content-area instruction on vocabulary and comprehension in the elementary years: A meta-analysis. Scientific Studies of Reading, 26(3), 223–249. https://doi.org/10.1080/10888438.2021.1954005
Amplify. (2019). Amplify CKLA AZ grade 5 efficacy research report. Retrieved from <https://amplify.com/wp-content/uploads/2019/12/CKLA_AZ-grade-5-efficacy-research-report.pdf>.
CKLA. (2019). CK Early Literacy Pilot. Retrieved from <https://amplify.com/wp-content/uploads/2019/12/CKLA-Early-Literacy-Pilot.pdf>.
Cabell, S.Q., White, T.G., Kim, J., Hwang, H., & Gale, C. (2019, December). Impact of the Core Knowledge Language Arts read-aloud program on kindergarteners’ vocabulary, listening comprehension, and general knowledge. Paper presented at the annual meeting of the Literacy Research Association, Tampa, FL.
Cabell, S. Q., & Hwang, H. (2020). Building content knowledge to boost comprehension in the primary grades. Reading Research Quarterly, 55, S99–S107. https://doi.org/10.1002/rrq.338
Elleman, A. M., Lindo, E. J., Morphy, P., & Compton, D. L. (2009). The impact of vocabulary instruction on passage-level comprehension of school-age children: A meta-analysis. Journal of Research on Educational Effectiveness, 2(1), 1–44. https://doi.org/10.1080/19345740802539200
Cabell, S. (Date not specified). Impact of a content-rich English language arts program on kindergarten students' language and knowledge. Florida State University. Retrieved from <https://www.triplesr.org/impact-content-rich-english-language-arts-program-kindergarten-students-language-and-knowledge>.
Wedman, J. (2004). Core Knowledge Curriculum and School Performance. University of Missouri. Retrieved from <https://www.coreknowledge.org/wp-content/uploads/2016/12/CK_National_Study_2004.pdf>.
Abele, M. (2000). Core Knowledge Curriculum: Five-Year Analysis of Implementation and Effects in Five Maryland Schools. Johns Hopkins University. Retrieved from <https://www.coreknowledge.org/wp-content/uploads/2016/12/FiveYearEffects_Maryland_2000.pdf>.