The Problem with Three‑Cueing and Predictable Texts
​
There are few instructional debates in early reading as heated as the one over three‑cueing and predictable texts. Three‑cueing refers to prompting beginners to identify unknown words by using Meaning (pictures/context), Structure (syntax), and Visual information (first letter/word shape), often abbreviated MSV. Predictable texts (often called levelled readers with patterned sentences) use repetitive frames—e.g., “The duck swam. The duck ate. The duck flew.”—so children can anticipate what comes next.
​
For years I’ve argued that the empirical case against these practices is often overstated in Science‑of‑Reading discourse. That’s not to say the critique is wrong; it’s to say the research base is thinner than many assume. We have few, if any, rigorous experiments isolating three‑cueing prompts themselves. On text type, a handful of meta‑analyses compare levelled/predictable versus decodable texts, but they largely examine the same small set of studies with notable limitations (LINK). Across those syntheses, the central finding is fairly consistent: text type, by itself, makes little difference in short‑term instructional outcomes. Some more recent work even suggests that a mix of decodable and non‑decodable texts can be associated with the highest achievement (Birch et al., 2022).
At the same time, broader program‑level evidence paints a different picture. Programs that explicitly rely on three‑cueing and predictable texts tend to show lower average effects than programs that teach explicit decoding with decodable texts and avoid three‑cueing (e.g., NRP, 2000; Hansford et al., 2024a). There is also emerging evidence of negative long‑term outcomes associated with programs using predictable texts and three‑cueing (e.g., May et al., 2023; Hansford et al., 2024b). It’s tempting to attribute those long‑term effects specifically to three‑cueing—on the theory that students become reliant on pictures and context, strategies that fail as texts grow more complex. But to be clear: we still have few direct tests that cleanly isolate those individual variables.
​
A quick anecdote
A couple of weeks ago, my 3‑year‑old daughter found a predictable text in a Little Free Library. She loved it. After a single read‑through, she could “read” the entire 20‑page book on her own—or at least it looked that way. With one glance at the picture she would produce a sentence that conveyed the gist, but the words were often off: “bear” became “polar bear,” “hat” became “red hat,” “horse” became “pony.” She skipped function words like “and.” Sometimes she barely looked at the print.
​
That moment crystallized something for me: predictable texts can create the appearance of reading before decoding is in place, and they can reward picture‑based guessing. Developmentally, pretend reading at age three is normal and delightful; my point is not to judge her. The concern is that, in instructional contexts, these materials and prompts can reinforce habits that compete with learning to attend to the words on the page.
​
Why prompting MSV isn’t necessary
Whole Language and some Balanced Literacy approaches explicitly prompt children to use pictures, first letters, or sentence meaning to identify words. But children will naturally use meaning and pictures without instruction. What they won’t reliably do without instruction is attend to the letter–sound structure of the word in print. In my experience, you almost have to train beginners not to guess and instead to look at the word and decode it.
With my daughter, I now use a simple prompt sequence:
​
Three cues that actually help
- Look at the word.
- Segment the word. Map graphemes to phonemes for her (e.g., t-r-ai-n).
- Use continuous blending. Say the sounds with as little gap as possible and slide them together (e.g., /ssssaaaaat/). For short, regular words, this almost always works after a try or two.
Used successively, these cues nudge children away from picture‑based guessing and toward genuine decoding.
​
What the evidence does and does not show
- Text type alone: Current syntheses with small, overlapping study pools suggest minimal short‑term differences between predictable and decodable texts; one more recent analysis found the best outcomes with a mix of text types (Birch et al., 2022). These results warrant caution in making categorical claims about text type.
- Program packages: When you zoom out to entire programs, packages emphasizing explicit decoding and decodables generally outperform those built around three‑cueing and predictable texts (NRP, 2000; Hansford et al., 2024a). Correlation isn’t causation, but the pattern is hard to ignore.
- Long‑term trajectories: Some longitudinal evidence tentatively links three‑cueing/predictable‑text programs with worse outcomes over time (May et al., 2023; Hansford et al., 2024b). We still need studies that isolate which components drive those effects.
Bottom line: The case against three‑cueing is strongest at the program level and in its theoretical conflict with learning to decode; the case based purely on text type is weaker and more nuanced than social media debates suggest.
​
Practical takeaways for classrooms
- Teach and practice letter–sound mapping and blending routines explicitly from day one.
- If you use predictable texts, use them sparingly, and ensure that word‑identification practice still relies on decoding.
- Consider a blend of text types across the week: decodables for decoding practice and rich trade books for vocabulary and knowledge. Keep the prompts consistent with decoding.
- Replace MSV prompts with the three decoding cues above.
Written by Nathaniel Hansford
Last updated: September 27, 2025
Want help teaching your students to read? Check out my online learning platform: https://www.sageonlineacademy.ca/landing
Email me for a free trial code: evidenced.based.teaching@gmail.com
References:
Birch, R., Sharp, H., Miller, D., Ritchie, D., & Ledger, S. (2022). A systematic literature review of decodable and levelled reading books for reading instruction in primary school contexts: An evaluation of quality research evidence. University of Newcastle.
Hansford, N. (2024). Do kids need decodable texts to learn how to read? Teaching by Science. https://www.pedagogynongrata.com/_files/ugd/237d54_1466900359e5494b8e698be0e866a81f.pdf
Hansford, N., Dueker, S., Garforth, K., Grande, J., King, J., & McGlynn, S. (2024a). Structured literacy compared to balanced literacy: A meta-analysis [Preprint]. https://www.researchgate.net/publication/387497935_Structured_Literacy_Compared_to_Balanced_Literacy_A_meta-analysis
Hansford, N., Dueker, S., Garforth, K., Grande, J., King, J., & McGlynn, S. (2024b). Reading Recovery: A longitudinal meta-analysis. Discover Education, 3(1). https://www.researchgate.net/publication/386355120_Reading_recovery_a_longitudinal_meta-analysis
May, H., Blakeney, A., Shrestha, P., Mazal, M., & Kennedy, N. (2023). Long-Term Impacts of Reading Recovery through 3rd and 4th Grade: A Regression Discontinuity Study. Journal of Research on Educational Effectiveness, 17(3), 433–458. https://doi.org/10.1080/19345747.2023.2209092
National Reading Panel. (2000). Teaching children to read: An evidence-based assessment of the scientific research literature on reading and its implications for reading instruction (NIH Publication No. 00-4769). National Institute of Child Health and Human Development.