The Siren Song of “Evidence-Based” Instruction

May 23, 2024

By Alfie Kohn

I’m geeky enough to get a little excited each time a psychology or education journal lands in my mailbox.1 Indeed, I’ve spent a fair portion of my life sorting through, critically analyzing, and writing about social science research. Even my books that are intended for general readers contain, sometimes to the dismay of my publishers, lengthy bibliographies plumped with primary sources so that anyone who’s curious or skeptical can track down the studies I’ve cited.

Why, then, have I developed a severe allergy to the phrases “evidence-based” and “the science of…” when they’re used to justify certain educational practices? It took me a while to sort out my concerns and realize that these terms raise five distinct questions.

1. What kind of evidence? A healthy respect for data protects us from relying on unrepresentative anecdotes, or falling for conspiracy theories, or believing what we wish were true regardless of whether there’s good reason to conclude that it is. But some people take an extreme, reductionist view of what qualifies as data, dismissing whatever can’t be reduced to numbers, or ignoring inner experience and focusing only on observable behaviors, or attempting to explain all of human life in terms of neurobiology. All of these have troubling implications for education, leaving us with a shallow understanding of the field. People who talk about the “science” of reading or learning, for example, rarely attend to student motivation or the fact that “all learning is a social process shaped by and infused with a system of cultural meaning.”2

2. Evidence of what? When someone says that science conclusively proves that this instructional strategy is more effective than that one, what exactly is meant by “effective”? As I’ve discussed elsewhere, that question is so obvious, so foundational to any claim, that it’s astonishing to realize how rarely it’s asked. Often it turns out that “effective,” along with other terms of approbation (“higher achievement,” “positive outcomes,” “better results”), signifies nothing more than scoring well on a standardized test. Or having successfully memorized a list of facts. Or producing correct answers in a math class (without grasping the underlying principles). Or being able to recognize and pronounce words correctly (without necessarily understanding their meaning).

3. Evidence of an effect on whom? Even large, well-constructed studies typically are able to show only that some ways of implementing a particular practice (not all possible versions of it) have some probability (greater than chance but far short of certainty) of producing some degree of benefit for some subset of students in some educational contexts (in certain academic subjects, or at certain age levels, or in certain cultures). Even one of these qualifiers, let alone all of them, signifies that evidence of an “on-balance” effect for a given intervention doesn’t allow us to claim that it’s a sure bet for all kids.

Yet it’s common to make just such an inference, which is why so many literacy experts are skeptical of, if not alarmed by, what’s being presented as “evidence-based” in their field. “Effective teaching is not just about using whatever science says ‘usually’ works best,” Richard Allington reminds us. “It is all about finding out what works best for the individual child and the group of children in front of you.”3 Ironically, as Thomas Newkirk adds, medical research is “trending toward more individualized diagnoses and treatments…[since] patients may differ greatly in the response to certain drugs or how their immune systems work….But the so-called ‘science of reading’ is moving in the opposite direction – toward a monolithic and standard approach.”4 Science complicates more often than it simplifies, which is your first clue that the use of “evidence-based” or “the science of…” to demand that teachers always do this or never do that — or even that they should be legally compelled to do this (or forbidden from doing that) — represents the very antithesis of good science.

4. Evidence of an effect at what cost? It’s not just that restricting evidence to what can be seen or measured limits our understanding of teaching and learning. It’s that doing so ends up supporting the kind of instruction that can alienate students and sap their interest in learning. Thus does schooling become not only less pleasant but considerably less effective. This exemplifies a broader phenomenon that Yong Zhao describes as a tendency to overlook unanticipated, harmful consequences. Even if a certain way of teaching did produce the desired effects, he argues, an inattention to its damaging side effects means that what’s sold to us as “evidence based” can sometimes do more harm than good.5

5. Does “evidence-based” refer to evidence at all? That citing research in support of a claim can raise as many questions as it answers should give us pause. Even more disturbing is the fact that the term evidence-based sometimes functions not as a meaningful modifier but just as a slogan, an all-purpose honorific like “all-natural” on a food label. Rather than denoting the existence of actual evidence, its purpose may be to brand those who disagree with one’s priorities as “unscientific” and pressure them to fall in line.6

This would be troubling enough if evidence and science were employed to justify all sorts of educational approaches, as seems to be the case with a label like “best practice.” But these words are almost always used to defend traditionalist practices: direct instruction, along with control-based interventions derived from Skinnerian behaviorism such as Applied Behavior Analysis (ABA) and Positive Behavioral Interventions and Supports (PBIS). A kind of ideological fervor tends to fuel each of these approaches, whereas actual empirical support for them could be described as somewhere between dubious and negligible.7

A quarter-century ago, defenders of high-stakes standardized exams resorted to the same strategy on behalf of the punitive, test-driven No Child Left Behind Act. The word science (or scientific) appeared more than a hundred times in the text of that law, while the Bush administration declared: “We will change education to make it an evidence-based field.”8 In reality, no controlled study then or since has, to the best of my knowledge, ever demonstrated any benefit to high-stakes testing — other than the tautological claim that it raises scores on those same tests. The damage done to the quality of teaching and learning by NCLB has been incalculable.9

A few years earlier, as Bill Jacob, a math professor at the University of California, Santa Barbara, reported, “the use of problem solving as a means of developing conceptual understanding [in math] was abandoned and replaced by direct instruction of skills” in California, and this move was similarly rationalized by “the use of the code phrase research-based instruction” even though the available research actually tended to point in the opposite direction (and still does). Indeed, Jacob added, the phrase research-based was just “a way of promoting instruction aligned with ideology.”10 Much the same was true for reading instruction back then, and today such efforts have been turbocharged, with systematic phonics instruction for all children being sold, misleadingly, as the “science of reading.”11 Explicit academic instruction in preschools, too, is presented as evidence-based even though, once again, actual evidence not only fails to support this approach but warns of its possible harms.12

At best, then, there are important questions to ask about evidence that’s cited in favor of a given proposal, particularly when it’s intended to justify a one-size-fits-all teaching strategy. At worst, the term evidence-based is used not to invite questions but to discourage them, much as a religious person might seek to end all discussion by declaring that something is “God’s will.” Too often, the invocation of “science” to defend traditionalist education reflects an agenda based more on faith than on evidence.


NOTES

1. That’s right, I still subscribe to the print editions. You have a problem with that?

2. National Academies of Sciences, Engineering, and Medicine, How People Learn II: Learners, Contexts, and Cultures (Washington, DC: The National Academies Press, 2018), p. 27. Also see Jean Lave and Etienne Wenger, Situated Learning (Cambridge University Press, 1991). Regarding the exclusion of motivation, see Seth A. Parsons and Joy Dangora Erickson, “Where Is Motivation in the Science of Reading?”, Phi Delta Kappan, February 2024, pp. 32-36.

3. Richard L. Allington, “Ideology Is Still Trumping Evidence,” Phi Delta Kappan, February 2005, p. 462.

4. Thomas Newkirk, The Broken Logic of “Sold a Story” (Literacy Research Commons, 2024), p. 9.

5. Yong Zhao, What Works May Hurt: Side Effects in Education (Teachers College Press, 2018). For a shorter version (with the same title), see this article in the Journal of Educational Change.

6. British educator Andrew Davis made a similar point in an essay about efforts to defend direct instruction. See “Evidence-Based Approaches to Education,” Management in Education 32 (2018): 135-38.

7. On direct instruction, see the research in the first half of my 2024 essay “Cognitive Load Theory: An Unpersuasive Attempt to Justify Direct Instruction.” On ABA, see Micheal Sandbank et al., “Project AIM: Autism Intervention Meta-Analysis for Studies of Young Children,” Psychological Bulletin 146 (2020): 1-29, whose findings I described in “Autism and Behaviorism,” as well as this independent evaluation of ABA and this study of a version of Positive Behaviour Support used on autistic children. ABA and PBS/PBIS principally rely on rewards to elicit compliance, and I’ve offered a lengthy critical appraisal not only of that strategy but of behaviorism more generally: Punished by Rewards (Houghton Mifflin, 1993/2018).

8. U.S. Department of Education, Strategic Plan – 2002-2007, March 2002, p. 51.

9. For example, see the essays in Deborah Meier et al., Many Children Left Behind (Beacon Press, 2004); and a description of two studies of NCLB’s effect on NAEP scores in Gerald W. Bracey, “The Condition of Public Education,” Phi Delta Kappan, October 2006, pp. 151-53.

10. Bill Jacob, “Implementing Standards: The California Mathematics Textbook Debacle,” Phi Delta Kappan, November 2001, pp. 265, 266.

11. As of this writing, the most comprehensive treatment of the topic is a book by eminent reading experts Robert J. Tierney and P. David Pearson: Fact-Checking the Science of Reading (Literacy Research Commons, 2024). Also see David Reinking et al., “Legislating Phonics: Settled Science or Political Polemics?”, Teachers College Record 125 (2023): 104-31; Peter Johnston and Donna Scanlon, “An Examination of Dyslexia Research and Instruction with Policy Implications,” Literacy Research: Theory, Method, and Practice 70 (2021): 107-28; Jeffrey S. Bowers, “Reconsidering the Evidence That Systematic Phonics Is More Effective Than Alternative Methods of Reading Instruction,” Educational Psychology Review 32 (2020): 681-705; Dominic Wyse and Alice Bradbury, “Reading Wars or Reading Reconciliation?”, Review of Education 10 (2022): e3314; Catherine Compton-Lilly et al., “Stories Grounded in Decades of Research: What We Truly Know About the Teaching of Reading,” The Reading Teacher 77 (2023): 392-400; and a series of blog posts by literacy specialist Maren Aukerman in 2022 on how the media has covered the “science of reading,” subtitled, respectively, “Is Reporting Biased?”, “Does the Media Draw on High-Quality Reading Research?”, and “How Do Current Reporting Patterns Cause Damage?”

12. See Peter Gray, “Beware of ‘Evidence-Based’ Preschool Curricula,” Psychology Today, December 9, 2021; and, for a review of earlier research on the subject, this lengthy excerpt from my book The Schools Our Children Deserve (Houghton Mifflin, 1999).
