Early Childhood Education: The Case Against Direct Instruction of Academic Skills

From Appendix A: “The Hard Evidence” in The Schools Our Children Deserve
(Boston: Houghton Mifflin, 1999)


By Alfie Kohn

“The earlier [that schools try] to inculcate so-called ‘academic’ skills, the deeper the damage and the more permanent the ‘achievement’ gap.”

— Deborah Meier

Some of the most ambitious and expensive educational evaluations conducted in this country have looked at programs growing out of Head Start – that is, programs begun in the 1960s to help disadvantaged young children.  One of those efforts, known as Follow Through, was originally intended to provide support for children after they left preschool.  Threatened by the Nixon Administration with a loss of funding, Follow Through was hastily reinvented as an experiment involving more than a dozen different models of instruction at more than a hundred sites around the country.  Among the results of that comparison was the finding that some programs emphasizing basic skills – in particular, a model known as Direct Instruction, in which teachers read from a prepared script in the classroom, drilling young children on basic skills in a highly controlled, even militaristic fashion, and offering reinforcement when children produce the correct responses – appeared to produce the best results.  Proponents of this kind of teaching have trumpeted this finding ever since as a vindication of their model.

Of course, even if these results could be taken at face value, we don’t have any basis for assuming that the model would work for anyone other than disadvantaged children of primary school age.  But it turns out that the results can’t be taken at face value because the whole study was, to put it bluntly, a mess.  It’s worth elaborating on that assertion at least briefly because of the role these findings have played in giving the appearance of empirical support for a drill-and-skill approach to teaching – and also because it will help us to understand why other studies have supported exactly the opposite conclusion.

To begin with, the primary research analysts wrote that the “clearest finding” of Follow Through was not the superiority of any one style of teaching but the fact that “each model’s performance varies widely from site to site.”[1] In fact, the variation in results from one location to the next of a given model of instruction was greater than the variation between one model and the next.  That means the site that kids happened to attend was a better predictor of how well they learned than was the style of teaching (skills-based, child-centered, or whatever).
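To see why that matters, consider a minimal sketch – using invented numbers, not the actual Follow Through data – of how the spread of results from site to site within each model can dwarf the spread between the models themselves:

```python
# Hypothetical illustration only -- these figures are invented, not Follow Through data.
# Mean test scores for three instructional models, each implemented at four sites.
scores = {
    "Direct Instruction": [62, 41, 55, 38],
    "Child-Centered":     [58, 44, 50, 40],
    "Open Education":     [60, 39, 52, 43],
}

def spread(values):
    """Range (max - min), used here as a crude index of variation."""
    return max(values) - min(values)

# Variation from one site to the next *within* each model
for model, site_means in scores.items():
    print(f"{model}: site-to-site spread = {spread(site_means)}")

# Variation *between* models, comparing each model's overall average
model_means = [sum(v) / len(v) for v in scores.values()]
print(f"Between-model spread of averages = {spread(model_means):.1f}")

# With numbers like these, the site a child happened to attend predicts the
# outcome far better than the instructional model does.
```

Nothing in these figures comes from the study itself; the point is simply that when a model’s implementations vary this much from place to place, its overall average tells us very little about the model.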

Second, the primary measure of success used in the study was a standardized multiple-choice test of basic skills called the Metropolitan Achievement Test.  While children were also given other cognitive and psychological assessments, these measures were so poorly chosen as to be virtually worthless.[2]  Some of the nontraditional educators involved in the study weren’t informed that their programs were going to end up being judged on this basis.[3]  The Direct Instruction teachers methodically prepared their students to succeed on a skills test and, to some extent at least, it worked.  As the study’s authors put it, Follow Through proved that

models that emphasize the kinds of skills tested by certain subtests of the Metropolitan Achievement Test have tended – very irregularly – to produce groups that score better on those subtests than do groups served by models that emphasize those skills to a lesser degree.  This is hardly an astonishing finding; to have discovered the contrary would have been much more surprising.[4]

Finally, outside evaluators of the project – as well as an official review by the U.S. General Accounting Office – determined that there were still other problems in its design and analysis that undermined the basic findings.[5]  Their overall conclusion, published in the Harvard Educational Review, was that, “because of misclassification of the models, inadequate measurement of results, and flawed statistical analysis,” the study simply “does not demonstrate that models emphasizing basic skills are superior to other models.”[6]  Furthermore, even if Direct Instruction really was better than other models at the time of the study, to cite that result today as proof of its superiority is to assume that educators have learned nothing in the intervening three decades about effective ways of teaching young children.  The value of newer approaches – including Whole Language, as we’ll see – means that comparative data from the 1960s now have sharply limited relevance.
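One of the analytic flaws the outside evaluators described (see note 5) – treating individual children rather than sites as the unit of analysis – can also be illustrated with a hedged sketch. The sample sizes and the intraclass correlation below are assumptions chosen purely for illustration, not estimates from the Follow Through data; the code simply applies the standard “design effect” adjustment for clustered samples:

```python
# Hypothetical illustration of the unit-of-analysis problem -- all numbers are invented.
import math

n_sites = 10              # sites evaluated for one model
children_per_site = 100   # children tested at each site
sd_children = 15          # standard deviation of scores among children
icc = 0.20                # assumed intraclass correlation: how strongly children
                          # within the same site resemble one another

n_children = n_sites * children_per_site

# Naive standard error: pretend every child is an independent observation.
se_naive = sd_children / math.sqrt(n_children)

# Design effect for clustered data: 1 + (cluster size - 1) * ICC
design_effect = 1 + (children_per_site - 1) * icc
se_adjusted = se_naive * math.sqrt(design_effect)

print(f"Naive SE (child as unit of analysis): {se_naive:.2f}")
print(f"Adjusted SE (respecting the sites):   {se_adjusted:.2f}")

# Under these assumptions the naive error bar is roughly 4-5 times too small,
# so modest differences between models look far more reliable than a handful
# of sites can actually support.
```

Again, none of these values describes the actual study; the sketch only shows how the choice of analytic unit can manufacture apparent statistical precision.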

That suspicion is strengthened by recent anecdotal evidence about Direct Instruction and similar programs in which children are trained, not unlike pets, to master a prescribed set of low-level skills.  Reporters for the New York Times and Education Week visited Direct Instruction (DI) classrooms — in North Carolina and Texas, respectively — and coincidentally published their accounts in the same month, June 1998.  The Education Week reporter found that sixth-grade students, successfully trained to do well on the main standardized test used in Texas, couldn’t explain what was going on in the book they were reading or even what the title meant.  Apparently, she concluded, “mastering reading skills does not guarantee comprehension.”  The Times reporter had been told by the for-profit company running a DI-style school that all of their kindergartners had been trained to read.  “All you have to remember” as a teacher, he was told, “is that you can’t go off the script.”  But when the reporter showed the children “something basic they’d never seen,” they couldn’t make heads or tails of it.  A regimented drill-and-skill approach had trained them to “read” only what had been on the teachers’ script.[7]

Even apart from journalistic investigation, it’s common knowledge among many inner-city educators that children often make little if any meaningful progress with skills-based instruction.[8]  But failure in this situation is typically attributed to the teachers, or to the limited abilities of the children, or to virtually anything except the model itself.  In contrast, whenever problems persist in nontraditional classrooms, this is immediately cited as proof of the need to go “back to basics.”

Not only does this represent an indefensible double standard, but a lot more research dating back to the same era as the Follow Through project supports a very different conclusion.  Several independent studies of early-childhood education have compared tightly controlled, skills-oriented classrooms (such as DI) to an assortment of “developmentally appropriate” (DA) approaches, including those referred to as child-centered or constructivist, and those making use of the Montessori or High/Scope models.[9]

For example, an Illinois study of poor children from the mid-1960s found immediate achievement gains in reading and arithmetic for the DI group, a result that might have given the traditionalists something to boast about if it weren’t for the fact that the investigators continued tracking the students after they left preschool.  With each year that went by, the advantage of two years of regimented reading-skills instruction melted away; before long, the DI program proved no more effective than “an intensive 1-hour reading readiness support program” provided to another group.  One difference did show up much later, though:  almost three quarters of the DA kids ended up graduating from high school, as compared to less than half of the DI kids.  (The latter rate was equivalent to that of students who hadn’t attended preschool at all.)[10]

Research that follows people over a considerable period of time is expensive to conduct and therefore relatively rare, but its findings are far more powerful than those from short-term studies.  Frankly, given how much happens to us over the years, it would be remarkable to find that any single variable from our early childhoods had a long-term effect.  That’s why the results from another such study are nothing short of amazing.  Back in the 1960s, a group of mostly African-American poor children from Michigan were randomly assigned to DI, free-play, or High/Scope constructivist preschools.  They were followed from that point, when they were three or four years old, all the way into adulthood.  As in the Illinois sample, the academic performance of the DI children was initially higher but soon became (and remained) indistinguishable from that of the others.  By the time they were 15 years old, other differences began showing up.  The DI children had engaged in twice as many “delinquent acts,” were less than half as likely to read books, and generally showed more social and psychological signs of trouble than did those who had attended either a free-play or a constructivist preschool.[11]

When the researchers checked in again eight years later, things had gotten even worse for the young adults who had attended a preschool with a heavy dose of skills instruction and positive reinforcement.  They didn’t differ from their peers in the other programs with respect to their literacy skills, total amount of schooling, income, or employment status.  But they were far more likely to have been arrested for a felony at some point and also to have been identified as “emotionally impaired or disturbed.”  (Six percent of the High/Scope and free-play preschool group had been so identified at some point, as compared to a whopping 47 percent of the DI group.)  The researchers also looked to see who was now married and living with his or her spouse.  The results:  18 percent of the free-play preschool group, 31 percent of the High/Scope group, and not a single person from the DI group.[12]

It might be tempting to say that these disturbing findings have to be weighed against the academic benefits of a back-to-basics preschool model – except that both studies showed that any such benefits are washed away very quickly.  Moreover, a third experiment, with kindergartners in Louisiana, failed to find even a short-term boost in test scores.  There were no significant differences between the two groups at the end of the year, or at the end of first or second grade.  What did distinguish the different models in this study was that the children who had been taught with the skills-based approach were “more hostile and aggressive, anxious and fearful, and hyperactive and distractible” than children who had attended more developmentally appropriate kindergarten classrooms – and they remained so a full year later.  (Other research has confirmed the presence of much higher levels of “stress-related behaviors” as a result of direct instruction techniques.)[13]  Furthermore, when the researchers broke the results down by race, economic background, and gender, they found that low-income black males were “most likely to be hurt by . . . teach-to-the-test instruction.”  This was true, first, because they experienced an unusual amount of stress, and second, because, for this group, there was a difference in academic achievement:   those in the skills-oriented classrooms didn’t do as well even on skills-oriented tests.[14]

Three other studies conducted in the 1980s and ‘90s seem to clinch the case:

* When DI was compared to a constructivist model not unlike High/Scope in six Alaskan kindergarten classrooms (whose students were mostly white and from economically diverse backgrounds), the latter students did as well or better on standardized tests of reading and math.[15]

* When a didactic, basic-skills focus was compared to a child-centered focus in 32 preschool and kindergarten classes in California, children in the former group did better on reading tests (consistent with the short-term advantage found in some of the other studies), neither better nor worse on math tests, and terribly on a range of nonacademic measures.  The skills kids had lower expectations of themselves, worried more about school, were more dependent on adults, and preferred easier tasks.[16]

* A study of more than 250 children in Washington, D. C. that began in 1987 compared those from “child-initiated,” “middle-of-the-road,” and “academically directed” preschool and kindergarten classrooms.  Those from the child-initiated preschools “actually mastered more basic skills by initiating their own learning experiences” and continued to do well as the years went by.  The middle-of-the-roaders fell behind their peers.  As for those from the academically directed group, their “social development declined along with mastery of first-grade reading and math objectives. . . . By fourth and fifth grades, children from academic pre-K programs were developmentally behind their peers and displayed notably higher levels of maladaptive behavior” – particularly in the case of boys.[17]

In keeping with my earlier cautions about deriving a single conclusion from a range of very different studies, I should emphasize that the research with young children includes many different variables that might affect the results:  social and economic class, age (what’s true of preschoolers may not be true of second graders), the specific nature of the child-centered alternative(s), and a focus on short-term versus long-term effects as well as on academic versus nonacademic issues.  Still, with the single exception of the Follow Through study (where a skills-oriented model produced gains on a skills-oriented test, and even then, only at some sites), the results are striking for their consistent message that a tightly structured, traditionally academic model for young children provides virtually no lasting benefits and proves to be potentially harmful in many respects.

Addendum:

Newer references dealing with the use of traditional instruction for young children

* superiority of preschool classrooms in which children can choose their own activities (as compared with more academic and/or whole-group instruction):

— Rebecca Marcon, “Moving Up the Grades,” Early Childhood Research & Practice, Spring 2002

— J.E. Montie et al., “Preschool Experience in 10 Countries: Cognitive and Language Performance at Age 7,” Early Childhood Research Quarterly 21 (Fall 2006): 313-331

*  disadvantages of direct instruction in preschool:

— Jasmine R. Ernst and Arthur J. Reynolds, “Preschool Instructional Approaches and Age 35 Health and Well-Being,” Preventive Medicine Reports 23 (2021)

*  how explicit instruction impedes exploration and learning:

— Elizabeth Bonawitz et al., “The Double-Edged Sword of Pedagogy,” Cognition 120 (2011): 322-30

*  academic superiority of constructivist kindergarten classrooms:

— Judy Pfannenstiel and Sharon Ford Schattgen, “Evaluating the Effects of Pedagogy Informed by Constructivism,” paper presented at the annual meeting of AERA, 1997

*  academic benefits for 2nd-3rd grade students whose teachers have a constructivist orientation:

— Fritz C. Staub and Elsbeth Stern, “The Nature of Teachers’ Pedagogical Content Beliefs Matters for Students’ Achievement Gains,” Journal of Educational Psychology 94 (2002): 344-55

*  academic & psychological benefits of nontraditional (differentiated, supportive) teaching in 1st grade:

— Kathryn E. Perry et al., “Teaching Practices and the Promotion of Achievement and Adjustment in First Grade,” Journal of School Psychology 45 (2007): 269-92

NOTES

[For full citations, please see the Reference section of The Schools Our Children Deserve.]

1. Stebbins et al., 1977, p. 166.

2. There is strong reason to doubt whether tests billed as measuring complex “cognitive, conceptual skills” really did so. Even the primary analysts conceded that “the measures on the cognitive and affective domains are much less appropriate” than is the main skills test (Stebbins et al., 35).  A group of experts on experimental design commissioned to review the study went even further, stating that the project  “amounts essentially to a comparative study of the effects of Follow Through models on the mechanics of reading, writing, and arithmetic” (House et al., 1978, p. 145).  (This raises the interesting question of whether it is even possible to measure the conceptual understanding or cognitive sophistication of young children with a standardized test.)

3. House et al., p. 158.

4.  Anderson et al., 1978, p. 164.

5.  The outside evaluators concluded that the original data analysts had defined an “effect” in such a way as to confound “the effectiveness of a program with its number of pupils” so that “larger programs could appear to be more effective” (House et al., p. 146).  They also argued that the level of analysis – individual children rather than schools or sites – had the effect of biasing the results in favor of the Direct Instruction model (pp. 151-2).  Meanwhile, the General Accounting Office’s official review of the Follow Through research found that problems “in both the initial design and implementation of the experiment will limit OE’s [the Office of Education’s] ability to reach statistically reliable overall conclusions on the success or lack of success of the approaches for teaching young disadvantaged children.  The problems cannot practicably be overcome, and, when combined with the OE contractor’s reservations about design and measurement problems, raise questions about the experiment’s dependability to judge the approaches” (Office of Education, 1975, p. 25).

6.  House et al., pp. 130, 156.

7.  Manzo, 1998c, p. 37; Winerip, 1998, pp. 88-89.

8.  Linda Darling-Hammond (1997, p. 50) gives the example of the failure of the “heavily prescriptive, rigidly enforced competency-based curriculum (CBC) [which] was introduced [into Washington, D. C. schools] in the 1980s and has continued in effect throughout the years the district’s performance has plummeted.”

9.  The High/Scope curriculum, based on Piaget’s ideas, sees “the child as a self-initiating active learner” and places “a primary emphasis on problem solving and independent thinking. . . . Teachers do not simply stand out of the way and permit free play, but rather guide children’s choices toward developmentally appropriate experiences” (Schweinhart and Hohmann, 1992, pp. 16-18).  “Developmentally appropriate” practice emphasizes meeting the “individual needs” of the “whole child,” providing “activities that are relevant and meaningful,” with plenty of opportunity for “active exploration and concrete, hands-on experiences” so as to tap “children’s natural curiosity and desire to make sense of their world.”   Developmentally inappropriate classrooms, by contrast, segment the preschool or kindergarten curriculum into the traditional content areas, rely heavily on rewards and punitive consequences, give children little choice about what they’re doing, ignore individual differences, and use standardized tests (Hart et al., 1997, pp. 4-5).

10.  Karnes et al., 1983.

11.  Schweinhart et al., 1986.  The difference in book reading didn’t reach conventional levels of significance (p = .09).  Advocates of Direct Instruction conducted a longitudinal study of their own, comparing some of the original DI Follow-Through students in four communities to those from matched comparison schools when they were in high school.  They reported finding either better standardized test results or higher graduation rates (but not necessarily both) for DI students (Gersten and Keating, 1987).  However, unlike the Ypsilanti study and the others described here, there was no attempt to compare results for DI and distinctly different types of programs.  It’s unclear what model of instruction, if any, characterized the primary school experience of the comparison students.  If they received a less systematic version of the same kind of basic-skills training that the DI students got – which is entirely possible in light of how pervasive this kind of teaching was and is in the United States – then these results hardly lend support to the basic philosophy common to both conditions.

12. Schweinhart and Weikart, 1997.

13. Hart et al., p. 7.

14. Charlesworth et al., 1993, pp. 18-22.

15. Rawl and O’Tuel, 1982.

16. Stipek et al., 1995.  A little over half of these children were Latino or African-American, and 42 percent were from low-income households.

17. Marcon, 1994, pp. 11-12.

Copyright © 1999, 2009 by Alfie Kohn. This article may be downloaded, reproduced, and distributed without permission as long as each copy includes this notice along with citation information (i.e., name of the periodical in which it originally appeared, date of publication, and author’s name). Permission must be obtained in order to reprint this article in a published work or in order to offer it for sale in any form. Please write to the address indicated on the Contact Us page.