The Study of Homework and Other Examples
Research, please forgive us. Our relationship with you is clearly dysfunctional. We proclaim to the world how much we care about you, yet we fail to treat you with the respect you deserve. We value you conditionally, listening only when you tell us what we want to hear. We sneak behind your back even while basking in the glow of your reputation. If you don’t leave us, it must be because you’re blind – maybe even double-blind – to our faults.
How do we abuse you, Research? Let us classify the ways.
A TAXONOMY OF ABUSES
Excessive reliance: Respect for research (and for science more generally) ought to include a recognition of its limits. While there are certainly people who refuse to concede that water is wet until this fact has been established by controlled studies, significant at p < .01, the reality is that many assumptions and choices we make every day really don’t require supporting data. Furthermore, even when scientific findings are relevant, there’s a difference between consulting them and depending on them as our sole guide. Conclusions can be informed by research without being wholly determined by it.
Similarly, it should be possible to question how science, with its emphasis on quantifiable variables, came to be the foundation for the academic study of learning. After all, educational insights could be derived from other fields of study such as anthropology, literature, history, philosophy -- and, in some cases, from personal experience. The assumption that all true knowledge is scientific (sometimes known as “scientism”) may be just as dangerous as an aversion to, or ignorance of, the scientific method. In short, skepticism, which is supposed to be the cornerstone of science, sometimes needs to be applied to science.
It’s particularly important to distinguish between relying on scientific techniques to investigate the natural world and relying on them in education, which is ultimately about understanding, responding to, and helping particular individuals. Even studies with reasonable criteria for evaluating the success of an intervention should be applied with caution because on-average findings, however reliable and valid, may not apply to every student. “Our current ‘scientific’ method focuses almost exclusively on identifying what works best generally,” education researcher Richard Allington has pointed out in these pages. But “children differ. Therein lies what worries me about ‘evidence-based’ policy making in education. Good teaching, effective teaching, is not just about using whatever science says ‘usually’ works best. It is all about finding out what works best for the individual child and the group of children in front of you.” One of the few universal truths about human learning is that there are very few universal truths about human learning. To the extent that science is unavoidably “nomothetic” – that is, concerned with the discovery of generalizable laws – its relevance to education is necessarily limited.
Myopic reliance: How we make use of data also matters. It’s important to distinguish well-conducted from poorly conducted research, and to understand the outcome variables in a given investigation. For example, if someone were to announce that studies have shown traditional classroom discipline techniques are “effective,” our immediate response should be to ask, “Effective at what? Promoting meaningful learning? Concern for others? Or merely eliciting short-term obedience?” Empirical findings can come from rigorously conducted scientific studies but still be of limited value; everything depends on the objectives that informed the research.
Insufficient reliance: Research can be overused and it can be badly used, but let’s be honest: The most common reality is that it’s hardly used at all by the people who formulate and carry out education policy. That’s particularly worrisome in those cases where the need for supporting data is most acute, such as with policies that carry potentially serious disadvantages. When that’s not the case, it might be fine to say, “I can’t prove this idea will be helpful, but I believe there’s good reason to think it will be – and there don’t seem to be any compelling arguments to the contrary.” On such a basis, a principal might decide, for example, to schedule activities in which older and younger students spend time together in order to foster a sense of community in the school. Or a teacher might decide to allow extra time for class meetings so children can have more experience making decisions and solving problems together. Even if there aren’t any studies to justify such changes, there’s no reason not to give them a try.
But in other cases, the potential downside is considerable and we ought to insist on data before proceeding. This point is tacitly conceded by the number of people who hasten to assure us that research does support a particular idea they like. Yet there is no denying that many policies with no such support often continue, and are even expanded. Consider the practice of forcing students to repeat a grade. The evidence clearly shows that holding children back a year because they’re experiencing academic difficulties is about the worst possible course of action with respect to their academic success, their psychological well-being, and the likelihood that they’ll eventually graduate. However, for reasons of ideological commitment or political expedience, policy makers and pundits invoke the specter of “social promotion” and demand that children be retained in grade despite the proven disadvantages of that strategy. In fact, this practice has grown in popularity “during the very time period that research has revealed its negative effects on those retained.”
Pseudo-reliance: Research makes a difference only if we know it exists, understand it correctly, and take it seriously. On many topics, even the first of these three conditions isn’t met. The most obvious answer to the question, “If the data say x, why are so many people doing y?” is that those data are published in journals with an average circulation in the high two figures. (I exaggerate only slightly.) But sometimes this explanation doesn’t apply. Sometimes people who make policy do have access to research; they just have no interest in learning what it shows. Or they may know what it shows but fail to heed it – perhaps because they don’t understand what they’re reading or because they’re reluctant to trust the results. Or, worse yet, they may deliberately create the impression that the data support a given policy even when that isn’t true.
Hence the tendency on the part of many writers, both scholarly and popular, to declare vaguely that “studies show” a practice is effective, the point being to give the appearance that their personal preferences enjoy scientific support. Rarely are they called upon to defend such pronouncements and name the studies. “In education,” as Douglas Reeves has observed, “the mantras of ‘studies show’ and ‘research proves’ are the staple of too many vacuous keynote speakers for whom a footnote is a distant memory of a high school term paper.”
Rather than misrepresenting what “the data” say, some authors and researchers misrepresent what specific studies have found. In such cases, it’s difficult to blame simple sloppiness or misunderstanding. Examples of what I’m calling pseudo-reliance on research are easy enough to discover. Noam Chomsky once commented that much of academic scholarship consists of routine clerical work. Thus, when a published assertion is followed by a parenthetical note to “see” certain studies, it doesn’t require any special talent to accept the invitation: You just head over to the library, dig out those studies, and see what they say.
That’s exactly what I did after coming across the following sentences in a book by E. D. Hirsch, Jr.: “It has been shown convincingly that tests and grades strongly contribute to effective teaching” – and again, on the following page: “Research has clearly shown that students learn more when grades are given.” An accompanying footnote contained five citations. Given the existence of a considerable body of evidence showing that grades have precisely the opposite effect, I was curious to see what research Hirsch had found – particularly since he had elsewhere made a point of boasting that his views have “strong scientific foundations” in the sort of “consensus mainstream science” that is “published in the most rigorous scientific journals.” (He has also distinguished himself from “the educational community,” which “invokes research very selectively.”)
It turned out that the references Hirsch cited didn’t support his claim at all. As I reported several years ago, all five sources in his footnote dealt exclusively with the use of pass-fail grading options, and all were restricted to college students even though the focus of his book, and the context of his claim about grades, was elementary and secondary education. Four of the five sources were more than 25 years old. Two not only hadn’t been published in rigorous scientific journals; they hadn’t been published at all. Of the three published references, one was just an opinion piece and another consisted of a survey of the views of the instructors at one college. That left only one published source with any real data. It found that undergraduates who took all their courses on a pass-fail basis would have gotten lower grades than those who didn’t. But the researchers who conducted that study went on to conclude that “pass-fail grading might prove more beneficial if instituted earlier in the student’s career, before grade motivation becomes an obstacle.” In other words, the only published study that Hirsch cited to bolster his sweeping statement about how the value of grades is “clearly shown” by research actually raised questions about the use of grades during the very school years addressed by his book.
Selective reliance: Closely related to the practice of misrepresenting research findings is the tendency to invoke or ignore research selectively, depending on whether it supports ideas one happens to like. This is done in two ways: by making use of research only on certain topics and by citing only certain studies for a given topic.
In the first of these practices, much is made of the importance of data -- except on those occasions when they prove inconvenient; at that point, research is treated as though it were irrelevant. Consider the inconsistent insistence on “scientifically validated” or “research-based” education policies. The officials who issue such demands in connection with, say, reading instruction may simultaneously pursue other agendas, such as draconian requirements for do-or-die standardized testing, for which there is no supporting evidence at all. (Indeed, the only data ever cited in defense of high-stakes testing consist of higher scores on the same tests that are used to enforce this agenda. Apart from the inherent methodological problems this raises, the fact is that scores can be made to rise even though meaningful learning, as assessed in other ways, does not improve at all.) However, this lack of empirical substantiation didn’t prevent the authors of the No Child Left Behind Act from using the words “scientific” and “scientifically” 116 times in the law’s text.
But is the available research being cited and summarized fairly even in those instances where scientific results are said to matter? Or are terms tendentiously defined and questions carefully framed so that only certain studies are included and, ultimately, only certain forms of teaching can meet the criteria? There is ample reason to doubt, for example, whether an approach consisting mostly of direct, systematic instruction in phonics skills accurately reflects the best available scientific findings – unless the term “scientific” is recast so as to exclude all data except those that support this position. In math, too, “the use of the code phrase research-based instruction” may permit “a narrow vision of research . . . [as] a way of promoting instruction aligned with ideology.”
Another way that policy makers use research selectively is by commissioning a study but then refusing to release it if the results fail to support a predetermined conclusion. It’s always possible for officials to claim that they decided not to release a report for other reasons, of course. But a strong case can be made that ideological considerations played a role in at least two instances. The first was Perspectives on Education in America, better known as the Sandia Report, commissioned by the federal government in 1990. After an exhaustive study, the authors concluded, “To our surprise, on nearly every measure, we found steady or slightly improving trends.” But evidently this was not the message that a Republican administration wanted to hear, given that its privatization agenda was predicated on the idea that American public schools are in terrible shape. The government refused to release the report, and only later was it published in an academic journal.
The second example I have in mind is a meta-analysis conducted by the National Literacy Panel that concluded bilingual education was superior to an English-only approach. When the study was completed in 2005, the Bush Administration’s Department of Education, which had provided the funding, declined to publish it. The official explanation was that it didn’t stand up well in peer review, even though “department officials had selected members of the panel and participated in all its meetings.”
I don’t mean to imply that it’s always easy for an observer to determine, let alone for many people to agree, when the spirit of science has been abused. Even fair-minded scholars often disagree vigorously about how best to pose a question or construct an experiment, about which results are meaningful and which studies are of sufficient quality and relevance to warrant inclusion in a review. Moreover, let’s not forget that value-free science -- in general and as it pertains to education -- is an illusion, a vestige of a discredited tradition known as positivism. Values are always present in a scientific enterprise just as they are in other human enterprises. That in itself may offer reason to be skeptical about talk of “evidence-based” educational policies. If you don’t know whom to worry about, you may as well start with those who take seriously the idea of absolute objectivity.
Public officials, however, are not the only people who use research selectively. Disturbing as it is to acknowledge, sometimes researchers themselves are guilty of this. The practice of giving an incomplete or inaccurate account of one’s own data offers a stunning example. Over the years, I’ve noticed that social scientists may be so committed to a given agenda that they ignore, or at least minimize the importance of, what their investigations have turned up if it wasn’t the outcome they seem to have been hoping for. Their conclusions and prescriptions, in other words, are sometimes strikingly at odds with their results.
This phenomenon was already common enough back in 1962 that the psychologist Harry Harlow, best known for his terry cloth monkey experiments, offered a satirical set of instructions for researchers who were preparing to publish their findings. “Whereas there are firm rules and morals concerning the collection and reporting of data which should be placed in the Results [section],” he reminded them, “these rules no longer are in force when one comes to the Discussion. Anything goes!”
One example that has received some attention on these pages deals, again, with reading – and, specifically, with discrepancies between the conclusions favoring direct instruction of phonics, which were contained in the widely circulated executive summary of the 1999 National Reading Panel report, and the actual results of the studies described in the report itself.
But I first encountered this “anything goes” approach in the 1980s, while sifting through research about the effects of television viewing on children. Jerome and Dorothy Singer, a husband-and-wife team who are critical of TV, turned up some unexpected (and, it would seem, unwelcome) evidence that watching television doesn’t always have a negative effect and may even be associated with desirable outcomes. Children who watched a lot, for example, were more enthusiastic in school than their peers who watched less. In another of their studies, preschoolers who logged more hours in front of the set tended to display more of an artistic orientation and speak longer sentences than other children. The Singers always mentioned such findings very quickly and then swept the results out of sight. By contrast, any results that supported an anti-TV view were enthusiastically repeated in the discussion section of the paper and then again in their subsequent publications.
Take the question of whether television has an adverse impact on children’s imagination -- a claim for which the Singers’ work is frequently cited. In a 1984 study, they described several tests they had performed, of which two showed a very weak negative relationship between viewing and imagination. Another test showed that children who watched a lot of TV were more imaginative than their peers. Yet the Singers concluded their article by emphasizing the negative result and, in a later paper, declared unequivocally that “heavy television viewing preempts active play practice and the healthy use of the imagination.” Anyone who had skipped the results section of their papers and read only the conclusions would have gotten a mighty skewed view of their actual findings.
HOMEWORK: A CASE STUDY
The kinds of research abuse that I’ve been describing have led me to proceed cautiously whenever I investigate a new topic. A case in point is my latest project, which deals with the effects of homework. I should probably admit to approaching the whole subject with a measure of skepticism even before I began combing through the studies. It strikes me as curious on the face of it that children are given additional assignments to be completed at home after they’ve spent most of the day in school – and even more curious that almost everyone takes this fact for granted. Even those who witness the unpleasant effects of homework on children and families rarely question it.
Such a posture of basic acceptance would be understandable if most teachers decided from time to time that a certain lesson ought to continue after school was over, and only then assigned students to read, write, figure out, or do something at home. Parents and students might have concerns about the specifics of certain assignments, but at least they would know that the teachers were exercising their judgment, deciding on a case-by-case basis whether circumstances really justified intruding on family time--and considering whether meaningful learning was likely to result.
But that scenario bears no relation to what happens in most American schools. Homework isn’t limited to those occasions when it seems appropriate and important. Most teachers and administrators aren’t saying, “It may be useful to do this particular project at home.” Rather, the point of departure seems to be, “We’ve decided ahead of time that children will have to do something every night (or several times a week). Later on we’ll figure out what to make them do.” This commitment to the idea of homework in the abstract is accepted by the overwhelming majority of schools--public and private, elementary and secondary. But it is defensible only if homework, per se--that is, the very fact of having to do it, irrespective of its content--is beneficial.
And is it? Intuition can’t provide the answer. If a plausible argument can be advanced that homework might have positive effects, an equally plausible argument can be made that it won’t--to say nothing of the stress, lost time for other activities, family conflict, and other potential disadvantages. Homework should not be assigned (and certainly not as the default condition) unless there are good data to demonstrate its value for most students.
But those data don’t exist. That’s the unambiguous conclusion of my investigation, which I describe in a book called The Homework Myth. To begin with, I discovered that the numerous research reviews on the subject that have been published over the last half-century are most notable for the widely differing conclusions reached by their authors. One decided that homework has “powerful effects on learning.” Another found that “there is no evidence that homework produces better academic achievement.” Still others don’t think enough good studies exist to permit a definitive conclusion. The fact that there isn’t anything close to unanimity among experts demonstrates just how superficial and misleading is the declaration we so often hear that “studies prove homework is beneficial.”
At a first pass, then, the available research might be summarized as inconclusive. But if we look more closely, even that description turns out to be too generous. Here are five reasons, offered in abridged form.
* “There is no evidence that any amount of homework improves the academic performance of elementary students.” That sentence, written by Harris Cooper, the nation’s most prominent homework researcher, emerged from an exhaustive meta-analysis he conducted in the 1980s, and the conclusion was then confirmed by another review that he and his colleagues published in 2006. To be more precise, virtually no good research has evaluated the impact of homework in the primary grades, whereas research has been done with students in the upper elementary grades and it generally fails to find any benefit.
* At best, most homework studies show only an association, not a causal relationship. In Cooper’s major research review, the correlation between time spent on homework, on the one hand, and achievement, on the other, was “nearly nonexistent” for grades 3-5, extremely low for grades 6-9, and moderate for grades 10-12. But while a significant correlation is clearly a prerequisite for declaring that homework provides academic benefits, it isn’t sufficient to justify that conclusion. Statistical principles don’t get much more basic than “correlation doesn’t prove causation.” Nevertheless, most research purporting to show a positive effect of homework (at least in high school) seems to be based on the assumption that when students who get, or do, more homework also score better on standardized tests, it follows that the higher scores were due to their having had more homework. In fact, there are almost always other explanations for why successful students might be in classrooms where more homework is assigned – let alone why these students might take more time with their homework than their peers do. Being born into a more affluent and highly educated family, for example, might be associated with higher achievement and with doing more homework (or attending the kind of school where more homework is assigned).
One of the most frequently cited studies in the field was published in the early 1980s by a researcher named Timothy Keith, who looked at survey results from tens of thousands of high school students and concluded that homework had a positive relationship to achievement, at least at that age. But a funny thing happened ten years later when he and a colleague looked at homework alongside other possible influences on learning such as quality of instruction, motivation, and which classes the students took. When all these variables were entered into the equation simultaneously, the result was “puzzling and surprising”: Homework no longer had any meaningful effect on achievement at all. This is only one of several studies that offer reason to doubt whether homework is beneficial even in high school.
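Keith’s “puzzling and surprising” result is exactly what statisticians would expect when a confound is at work. A minimal simulation can make the logic concrete (the numbers and the single “affluence” confound are illustrative assumptions of mine, not Keith’s data or model): a third variable that drives both homework time and test scores will manufacture a homework–achievement correlation that disappears once the confound is entered into the regression.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical confound: family affluence influences BOTH how much
# homework students do and how well they score. In this simulation,
# homework itself has zero causal effect on scores by construction.
affluence = rng.normal(size=n)
homework = affluence + rng.normal(size=n)
scores = affluence + rng.normal(size=n)

# The raw correlation makes homework look academically beneficial...
r_raw = np.corrcoef(homework, scores)[0, 1]

# ...but regressing scores on homework while controlling for the
# confound (analogous to Keith entering instruction quality,
# motivation, and course-taking) recovers a near-zero homework effect.
X = np.column_stack([np.ones(n), homework, affluence])
beta, *_ = np.linalg.lstsq(X, scores, rcond=None)

print(f"raw homework-scores correlation: {r_raw:.2f}")
print(f"homework coefficient, confound controlled: {beta[1]:.3f}")
```

The raw correlation comes out around .5 even though homework does nothing here, while the controlled coefficient hovers near zero -- a sketch, under my stated assumptions, of why a significant correlation is a prerequisite for a causal claim but never a substitute for one.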
* Homework studies confuse grades and test scores with learning. When researchers talk about the possibility that homework is academically useful, what they mean is that it may have a positive effect on one of three things: scores on tests designed by teachers, grades given by teachers, or scores on standardized exams. About the best thing that can be said for these numbers is that they’re easy to collect and report. Each is seriously flawed in its own way.
In studies that involve in-class tests, some students are given homework – which usually consists of reviewing a batch of facts about some topic – and then they, along with their peers who didn’t get the homework, take a quiz on that very material. The outcome measure, in other words, is precisely aligned to the homework that some students did and others didn’t do -- or that they did in varying amounts. It would be charitable to describe positive results from such studies as being of limited value.
In the second kind of study, course grades are used to determine whether homework made a difference. Apart from their general lack of validity and reliability, grades are particularly inappropriate for judging the effectiveness of homework because the same teacher who handed out the assignments then evaluates the students who completed them. The final grade a teacher chooses for a student will often be based at least partly on whether, and to what extent, that student did the homework. Thus, to say that more homework is associated with better school performance as measured by grades is to provide no useful information about whether homework is intrinsically valuable. Yet grades are the basis for a good number of the studies that are cited to defend that very conclusion. Not surprisingly, homework seems to have more of a positive effect when grades are used as the outcome measure.
Here’s one example. Cooper and his colleagues conducted a study in 1998 with both younger and older students (from grades 2-12), using both grades and standardized test scores to measure achievement. They also looked at how much homework was assigned by the teacher as well as how much time students spent on it. Thus, there were eight separate results to be reported. Here’s how they came out:

Younger students:
  Effect on grades of amount of homework assigned -- no significant relationship
  Effect on test scores of amount of homework assigned -- no significant relationship
  Effect on grades of amount of homework done -- negative relationship
  Effect on test scores of amount of homework done -- no significant relationship

Older students:
  Effect on grades of amount of homework assigned -- no significant relationship
  Effect on test scores of amount of homework assigned -- no significant relationship
  Effect on grades of amount of homework done -- positive relationship
  Effect on test scores of amount of homework done -- no significant relationship
Of these eight comparisons, then, the only positive correlation – and it wasn’t a large one (r = .17) – was between how much homework older students actually did and their achievement as measured by grades. If that measure is viewed as dubious, if not downright silly, then one of the more recent studies conducted by the country’s best-known homework researcher fails to support the idea of assigning homework at any age.
The last, and most common, way to measure achievement is with standardized tests. Purely because they’re standardized, these are widely regarded as objective instruments for assessing children’s academic performance. But as I’ve argued elsewhere, such tests are a poor measure of intellectual proficiency – and, indeed, are more likely to be correlated with a shallow style of learning. If our children’s ability to understand ideas from the inside out is what matters to us, and if we don’t have any evidence that assigning homework helps them to acquire this proficiency, then all the research in the world showing that test scores rise when you make kids do more schoolwork at home doesn’t mean very much. That’s particularly true if the homework was designed specifically to improve the limited band of skills that appear on these tests. It’s probably not a coincidence that, even within the existing test-based research, homework appears to work better when the assignments involve rote learning and repetition rather than real thinking. After all, “works better” just means “produces higher scores on exams that measure low-level capabilities.”
I’m not aware of any studies that have even addressed the question of whether homework enhances the depth of students’ understanding of ideas or their passion for exploring them. The fact that more meaningful outcomes are hard to quantify does not make test scores or grades any more valid, reliable, or useful as measures. To use them anyway calls to mind the story of the man who looked for his lost keys near a streetlight one night, not because that was where he dropped them but just because the light was better there. The available research allows us to conclude nothing about whether homework improves children’s learning.
* The results of national and international exams raise further doubts about homework’s role. Students who take the National Assessment of Educational Progress also answer a series of questions about themselves, sometimes including how much time they spend on homework. The most striking result, particularly for elementary students, is the absence of an association between that statistic and their score on the exam. Even students who reported having been assigned no homework at all didn’t fare badly. On the 2000 math test, for example, fourth graders who did no homework got roughly the same score as those who did 30 minutes a night. Remarkably, the scores then declined for those who did 45 minutes, then declined again for those who did an hour or more! In eighth grade, the scores were higher for those who did between 15 and 45 minutes a night than for those who did no homework, but the results were worse for those who did an hour’s worth, and worse still for those who did more than an hour. In twelfth grade, the scores were about the same regardless of whether students did only 15 minutes or more than an hour. Results on the reading test likewise provided no compelling evidence that homework helped.
International comparisons allow us to look for correlations between homework and test scores within each country and also for correlations across countries. In the 1990s, the Trends in International Mathematics and Science Study (TIMSS) became the most popular way of assessing what was going on around the world, although of course its conclusions can’t necessarily be generalized to other subjects. While the results varied somewhat, it usually turned out that doing some homework had a stronger relationship with test scores than doing none at all, but doing a little homework was also better than doing a lot. However, even that relationship didn’t show up in a separate series of studies involving elementary school students in China, Japan, and two U.S. cities: “There was no consistent linear or curvilinear relation between the amount of time spent on homework and the child’s level of academic achievement.” These researchers even checked to see if homework in first grade was related to achievement in fifth grade, the theory being that homework might provide gradual, long-term benefits to younger children. Again they came up empty-handed.
As for correlations across cultures, two researchers combined TIMSS data from 1994 and 1999 in order to be able to compare practices in 50 countries. When they published their findings last year, they could scarcely conceal their surprise:
Not only did we fail to find any positive relationships, [but] the overall correlations between national average student achievement and national averages in the frequency, total amount, and percentage of teachers who used homework in grading are all negative! If these data can be extrapolated to other subjects – a research topic that warrants immediate study, in our opinion – then countries that try to improve their standing in the world rankings of student achievement by raising the amount of homework might actually be undermining their own success. . . . More homework may actually undermine national achievement.
* No evidence supports the idea that homework provides nonacademic benefits. If it can’t be shown that filling children’s backpacks and evenings with school assignments is likely to help them learn better, many people try to defend homework on other grounds instead. Rather than beginning with the question “What does it make sense to do with kids?” it seems as if the point of departure is to ask “What reasons can we come up with to justify homework, which we’re determined to assign in any case?” One such set of justifications involves the promotion of characteristics like responsibility, time management skills, perseverance, self-discipline, and independence. But as with claims about academic effects, we should ask to see what empirical support exists for what is really only a hypothesis before requiring children to sacrifice their free time or other activities. And the answer is that not a shred of evidence exists to support these claims. The idea that homework teaches good work habits or develops positive character traits could be described as an urban myth except for the fact that it’s taken seriously in suburban and rural areas, too.
In short, neither academic nor nonacademic justifications for homework are supported by the available evidence. These facts are extremely inconvenient for policy makers, researchers, and others who like homework for other reasons. But the typical response isn’t to rethink this preference in light of the data; more commonly, attempts are made to ignore, or somehow neutralize, those data. Indeed, most of the explosive growth in homework over the last decade or two has taken place with younger children even though this is the age group for which studies most clearly fail to show any positive effect. It would be difficult to imagine more compelling evidence of the irrelevance of evidence.
Moreover, the practices on the part of researchers that I had noticed with other issues show up again here. In fact, the topic of homework provides a reasonably good case study in the misleading and selective use of research.
It’s bad enough when op-ed columnists and politicians claim that “research proves” homework raises student achievement, teaches self-discipline, and so on. It’s more disturbing when researchers and the authors of serious publications about education make claims about specific studies on the subject that turn out to be false.
Consider the popular book Classroom Instruction That Works: Research-Based Strategies for Increasing Student Achievement by Robert Marzano, Debra Pickering, and Jane Pollock. The subtitle immediately caught my attention, as did the fact that a full chapter was devoted to arguing for the importance of homework. The authors acknowledged that a prominent research review (namely, Cooper’s) provided scant support for the practice of giving homework to elementary school students. But they then declared that “a number of studies” published in recent years have shown that “homework does produce beneficial results for students in grades as low as 2nd grade.” This statement was followed by five citations, all of which I managed to track down. Here’s what I found.
Study 1 was limited to middle- and high-school students; no younger children were even included in the investigation. Study 2 looked at students of different ages but found no positive effect for the younger children – only a negative effect on their attitudes. (This is the same Cooper et al. study whose results I charted above.) Study 3, conducted in the 1970s, listed a number of practices employed by teachers whose students scored well on standardized tests. Among them was a tendency to assign more homework than their colleagues did, but the researchers made no attempt to determine what contribution, if any, was made by the homework; in fact, they cautioned that other, unnamed factors might have been more significant than any of those on the list. Study 4 measured how much time a group of students spent on the homework they were assigned but didn’t try to determine whether it was beneficial to assign more (or, for that matter, any at all). Even so, the researchers’ main conclusion was that “high amounts of homework time did not guarantee high performance.” Finally, the subjects of Study 5 consisted of exactly six children with learning disabilities in a classroom featuring rigidly scripted lessons. The researcher sought to find out whether sending them home with more worksheets would yield better results on a five-minute test of rote memory. Even under these contrived conditions, the results were mostly negative.
I was frankly stunned by the extent of misrepresentation here. It wasn’t just that one or two of the cited studies offered weak support for the proposition. Rather, none of them offered any support. The claim advanced vigorously by Marzano and his colleagues, that homework provides academic benefits for younger children, actually had no empirical backing at all. But readers who took them at their word, perhaps impressed by a list of five sources, would never know that. (Nor is this the only example of problematic citations in their book.)
I then went looking for evidence regarding nonacademic effects and quickly found a scholarly article by Janine Bempechat, an enthusiastic defender of the “motivational” advantages of homework. In it, she wrote: “Overall, the research suggests that assigning homework in the early school years is beneficial more for the valuable motivational skills it serves to foster in the long term, than for short-term school grades.” This way of putting things seems to suggest that the absence of academic benefits is tantamount to the presence of nonacademic benefits: If homework doesn’t help students to learn better, then it must help them to develop good work habits. (The possibility that it does neither is apparently beyond the realm of consideration.) Bempechat did, however, offer four citations in support of her claim. Again I dug up the articles. It turned out that none of her sources contained any empirical demonstration of such benefits – or even references to other studies that contained any.
One of the four citations Bempechat included was to an article by Joyce Epstein and Frances Van Voorhis which, apart from providing no data on the issue in question, made an interesting claim of its own: “Good teachers assign more homework (Corno 1996).” Would you be surprised to learn that the article by Corno that they reference actually says no such thing? In fact, it includes this statement: “The best teachers vary their use of homework according to students’ interests and capabilities. . . . The sheer amount of homework teachers assign has little to no relation to any objective indicator of educational accomplishment.”
Meanwhile, another well-known pair of scholars, Brian Gill and Steven Schlossman, whose specialty is tracking the history of homework attitudes and practices over the decades, assert in one of their monographs that “homework…can inculcate habits of self-discipline and independent study.” Their sole citation is to an article published in 1960. I returned to the stacks and discovered that its author had reviewed studies dealing with homework’s effects on achievement test scores. Only in his conclusion, after he had finished summarizing the results of all those studies, did he remark that many people hold the “opinion” that homework can have a positive effect on study habits and self-discipline. He then cited several essays in which that unsubstantiated opinion had been voiced.
RESULTS VS. CONCLUSIONS
If what I’ve called pseudoreliance on research shows up in the homework literature, so too does the version of selective reliance in which a researcher’s conclusion is at variance with his or her own results. Suzanne Ziegler, who wrote the article on homework in the Encyclopedia of Educational Research, went so far as to say, “A careful reading of the review articles tends to create a mistrust of homework researchers. It appears that the conclusions they have reached are sometimes nearly independent of the data they collected.” She penned those words in the mid-1980s, right around the time that an influential review of fifteen studies was published by Rosanne Paschal, Thomas Weinstein, and Herbert Walberg. In an article in Educational Leadership that described their review, these authors declared that “there seems little doubt that homework has substantial effects on students’ learning” and that “research clearly indicates that greater amounts and higher standards of homework” would be beneficial.
But another researcher, Bill Barber, looked carefully at their original monograph and discovered that only four of the fifteen studies on which they had based this conclusion actually compared homework with no homework. (The rest had examined different homework methods or looked at other issues entirely, such as tutoring or enrichment activities.) Of the four relevant studies, two found no benefit at all to homework. The third found benefits at two of three grade levels, but all of the students who were assigned homework also received parental help. The last study found that students who were given math puzzles (unrelated to what was being taught in class) did as well as those who got traditional math homework.
With some trepidation I then decided to look more closely at the work of Harris Cooper. Because his reviews of the research are the most ambitious and the most recent, and because he is regarded as the country’s leading expert on the subject (and consequently is quoted in virtually every newspaper and magazine article about homework), I wanted to be certain that what he says squares with what his research reviews, and his own studies, have found. After all, Cooper laments that “the role of research in forming the homework attitudes and practices of teachers, parents, and policymakers has been minimal” and he particularly criticizes those who “cite isolated studies either to support or refute its value.”
The detailed summary of that literature that he provides, as we’ve already seen, includes the crucial acknowledgment that “there is no evidence that any amount of homework improves the academic performance of elementary students.” Oddly, though, when it comes time to offer advice, Cooper is adamant that younger children should be required to do homework. In fact, he urges school districts to “adopt a policy that requires some homework to be assigned at all grade levels” and to include in that policy “a succinct statement indicating that homework is a cost-effective technique that should have a positive effect on student achievement.”
Perhaps homework “should” have such an effect, but Cooper knows there’s no evidence that it does. What he and a group of colleagues say in light of that fact is most revealing: “It seems safe to conclude that the benefits of homework for young children should not be evaluated based solely upon homework’s immediate effects on grades or achievement test scores.” This response suggests a determination to find some justification for defending the practice of giving homework to all students. If research on academic effects fails to deliver the goods, then we’ll just have to look elsewhere. In fact, the implication seems to be that the failure to raise achievement levels doesn’t even matter because other criteria are actually more important after all.
And what are those other criteria? “Homework for young children should help them develop good study habits, foster positive attitudes toward school, and communicate to students the idea that learning takes place at home as well as at school.” Let’s put aside the last of these three putative benefits, which is almost comically circular – making kids do academic assignments at home will teach them that they’re going to have to learn (academic content) at home – and consider the other two: positive attitudes and good study habits. I haven’t found, and Cooper hasn’t reported, any evidence that homework leads to an improvement in students’ attitudes. (At the elementary level, in fact, he discovered that exactly the opposite was true.) That leaves only one possible reason to assign homework, but unfortunately Cooper admitted four paragraphs earlier in that same article that “no studies looked at nonacademic outcomes like study habits.”
In his 2001 book, Cooper wrestles with the question again: “If homework has no noticeable effect on achievement in elementary school grades, why assign any at all? [Timothy] Keith’s comments on grade level and parent involvement hint at what I think is the primary rationale. In earlier grades, students can be given homework assignments meant not to produce dramatic gains in achievement” – he should have said any gains in achievement – “but, rather, to promote good attitudes and study habits.” He adds, “Of course, there is as yet no research evidence to support or refute whether the recommended types of homework for elementary school children actually have the intended effects.”
That all-important qualification is missing in an article Cooper published that same year. In its conclusion, he and a colleague wrote, “We have also reviewed the research and popular literature that suggests homework can have beneficial effects on young children well beyond immediate achievement and the development of study skills. It can help children recognize that learning can occur at home as well as at school. Homework can foster independent learning and responsible character traits.” The implication here is that research to back up this claim not only exists but was discussed in that very article. In fact, it doesn’t and it wasn’t.
In most of Cooper’s statements on the issue, including comments offered to reporters, the message that comes through clearly is that the pre-eminent researcher in the field believes – presumably on the basis of his research – that young children should be doing homework. What does not come across is the message that no data have ever been found to justify this recommendation. You have to dig down pretty deep in his most scholarly book on the topic to discover how Cooper justifies a prescription that’s conspicuously inconsistent with the research he has analyzed. In his original review – but not in any of his subsequent writings – he admits that a list of “suggested [homework policy] guidelines would be quite short if they were based only on conclusions that can be drawn firmly from past research.” Since the data he has reviewed don’t permit the homework-for-all recommendation that he evidently is intent on offering, he therefore has chosen to set the bar much lower: “My recommendations are grounded in research in that none of them contradicts the conclusions of my review.” That’s a sentence worth reading twice. No studies show any benefit to assigning homework in elementary school, but because few show any harm, Cooper is free to say it should be done, and then to assert that this opinion is “grounded in research.” Of course, many studies have looked for a benefit and failed to find it; very few studies have bothered to investigate homework’s negative effects.
Cooper is also credited with the “ten-minute rule,” which many schools have adopted. It says that homework should “last about as long as ten minutes multiplied by the student’s grade level.” The practical effect of this recommendation is often to limit the length of assignments since many teachers assign far more than that amount – and Cooper himself is, ironically, sometimes cast in the role of a moderating influence. But, again, there doesn’t seem to be any research backing for this catchy formula, particularly as applied in elementary school. Cooper found that “more homework assigned by teachers each night was associated with less positive attitudes on the part of students,” but that doesn’t support the practice of giving “shorter but more frequent assignments” in the younger grades, as he suggests it does. Neither this nor any other findings seem to justify the practice of giving any homework at all to children in elementary school.
A careful reading of Cooper’s own studies – as opposed to his research reviews – reveals further examples of his determination to massage the numbers until they yield something – anything – on which to construct a defense of homework for younger children. (The fact that even these strenuous exertions ultimately fail to produce much of substance only underscores just how weak the case really is.) When you compare the results section to the conclusion section of these publications, the image that comes to mind is of a magician frantically waving a wand over an empty black hat and then describing the outlines of a rabbit that he swears sort of appeared.
By the way, I’m not the only reader to conclude that Cooper’s conclusions are way out ahead of the data: Ziegler’s entry in the Encyclopedia of Educational Research takes him to task for “his somewhat overstated conclusion” – an unusually pointed criticism in a publication of this kind – that “the more homework high school students do, the better their achievement.” After all, “Cooper has no data whatsoever to describe what actually happens beyond 10 hours [of homework] per week.”
Decades ago, an article in an education journal concluded with the following observation: “Fair assessment of the values of homework has been hampered by a tendency for authors of experimental research to frame their conclusions in terms that favor preconceived notions…” Ironically, this complaint reflected the writer’s belief that researchers ended up with a view of homework that was more negative than their data warranted. Whether or not that was really true of studies published in the 1930s, precisely the opposite now seems to be the case.
Discrepancies between a given researcher’s results and his prescriptions, or between the findings attributed to other sources and what those sources actually said, cast into sharp relief how the appearance of empirical support for the effectiveness of homework may be just that – appearance. But homework is just one of many possible examples of a more troubling phenomenon; the larger point is that we need to be skeptical readers in general. A citation that doesn’t really prove what it’s said to prove, or a conclusion that doesn’t match the data that preceded it, doesn’t just insult Research; ultimately, it insults all of us.
1. Frank Smith and Nel Noddings have also made this point.
2. Richard L. Allington, “Ideology Is Still Trumping Evidence,” Phi Delta Kappan, February 2005, pp. 462-63.
3. Gary Natriello, “Failing Grades for Retention,” School Administrator, August 1998, p. 15. Similarly, “Why is it that the stronger the research support for bilingual education” is, the “less support [we get] from policymakers?” asks James Crawford, the former executive director of the National Association for Bilingual Education (quoted in Mary Ann Zehr, “Advocates Note Need to Polish ‘Bilingual’ Pitch,” Education Week, February 1, 2006, p. 12).
4. Douglas B. Reeves, “Galileo’s Dilemma,” Education Week, May 8, 2002, p. 33.
5. E. D. Hirsch, Jr., The Schools We Need: And Why We Don’t Have Them (New York: Doubleday, 1996), pp. 181, 182.
6. E. D. Hirsch, Jr., “Response to Prof. Feinberg,” Educational Researcher, March 1998, pp. 38, 39; The Schools We Need, p. 127.
7. Alfie Kohn, The Schools Our Children Deserve: Moving Beyond Traditional Classrooms and “Tougher Standards” (Boston: Houghton Mifflin, 1999), pp. 209-10. This was far from the only example in Hirsch’s book in which the research failed to substantiate the claim for which it was cited, by the way. See also Kohn, pp. 294-97n4.
8. See, for example, Richard L. Allington, “How to Improve High-Stakes Test Scores Without Really Improving,” Issues in Education, vol. 6 (2000), pp. 115-24; Alfie Kohn, The Case Against Standardized Testing (Portsmouth, NH: Heinemann, 2000). The fact that no independent corroboration exists to show that testing, preceded by a steady diet of test preparation, has any real positive effect means that our children are serving as involuntary subjects in a huge high-stakes experiment.
9. Reeves, op. cit., p. 44.
10. For more on this topic, see any of numerous writings by Richard Allington, Gerald Coles, Elaine Garan, and Stephen Krashen, many of them published in the Kappan. Also see Kohn, 1999, op. cit., pp. 159-71, 217-26.
11. Bill Jacob, “Implementing Standards: The California Mathematics Textbook Debacle,” Phi Delta Kappan, November 2001, p. 266.
12. For details, see Daniel Tanner, “A Nation ‘Truly’ at Risk,” Phi Delta Kappan, December 1993, pp. 288-97.
13. Zehr, op. cit. Several years earlier, the Department released a statement that boasted: “We will change education to make it an evidence-based field.” (See www.ed.gov/about/reports/strat/plan2002-07/plan.pdf, p. 51.) What actually seems to be taking place is a campaign to change evidence to make it correspond to a certain ideology. These examples dealing with education policies may be symptomatic of a much wider and deeper phenomenon; see, for example, Chris Mooney’s 2005 book, The Republican War on Science (New York: Basic, 2005); and a report by the Union of Concerned Scientists titled Scientific Integrity in Policy Making: Investigation of the Bush Administration’s Abuse of Science, available at www.ucsusa.org/scientific_integrity/interference/reports-scientific-integrity-in-policy-making.html.
14. See David J. Ferrero, “Does ‘Research Based’ Mean ‘Value Neutral’?”, Phi Delta Kappan, February 2005, pp. 425-32; and Alfie Kohn, “Professors Who Profess,” Kappa Delta Pi Record, Spring 2003, pp. 108-13.
15. Harry F. Harlow, “Fundamental Principles for Preparing Psychology Journal Articles,” Journal of Comparative and Physiological Psychology, vol. 55 (1962), p. 895. Thanks to Jerry Bracey for calling this article to my attention.
16. For example, see Elaine M. Garan, “Beyond the Smoke and Mirrors: A Critique of the National Reading Panel Report on Phonics,” Phi Delta Kappan, March 2001, pp. 500-6.
17. The Singers played the phantom citation game, too. For example, they repeatedly asserted that children who are heavy TV viewers, or who watch any fast-paced program, cannot absorb information effectively. By way of proof, their later papers cited their early papers, but the early papers contained the same assertions in place of data. In one monograph, they claimed that certain types of programming may make children hyperactive, citing as proof three works by other researchers. When I tracked down these sources, two didn’t even mention hyperactivity and the third raised the claim only long enough to dismiss it as unsubstantiated. (Citations to the publications by the Singers are available on request. My own essay about television and children, which led me into this thicket, was eventually published as “Television and Children: ReViewing the Evidence,” in What to Look for in a Classroom . . . And Other Essays [San Francisco: Jossey-Bass, 1998].)
18. Alfie Kohn, The Homework Myth: Why Our Kids Get Too Much of a Bad Thing (Cambridge, MA: Da Capo Books, 2006). The first section of the book summarizes the evidence, the second section tries to explain why homework is so widely assigned and accepted despite what that evidence shows, and the third section draws from the practices of educators who have challenged the conventional wisdom in order to propose a different way of thinking about the subject.
19. The initial meta-analysis was published as Harris Cooper, Homework (White Plains, NY: Longman, 1989), then released in an abridged and slightly updated version entitled The Battle Over Homework, 2nd ed. (Thousand Oaks, CA: Corwin, 2001). The quotation appeared on p. 109 of the 1989 edition. The recent reanalysis: Harris Cooper, Jorgianne Civey Robinson, and Erika A. Patall, “Does Homework Improve Academic Achievement?: A Synthesis of Research, 1987-2003,” Review of Educational Research, vol. 76 (2006), pp. 1-62.
20. Cooper 1989, p. 100. The correlations were .02, .07, and .25, respectively. In the 2006 meta-analysis, Cooper and his colleagues grouped the results into grades K-6 and 7-12. The latter correlation was either .20 or .25, depending on the statistical technique being used; the former correlation was “not significantly different from zero” (Cooper et al. 2006, p. 43).
21. Valerie A. Cool and Timothy Z. Keith, “Testing a Model of School Learning: Direct and Indirect Effects on Academic Achievement,” Contemporary Educational Psychology, vol. 16 (1991), pp. 28-44.
22. Cooper 1989, p. 72. That difference shrank in the latest batch of studies (Cooper et al. 2006), but still trended in the same direction.
23. Harris Cooper, James J. Lindsay, Barbara Nye, and Scott Greathouse, “Relationships Among Attitudes About Homework, Amount of Homework Assigned and Completed, and Student Achievement,” Journal of Educational Psychology, vol. 90 (1998), pp. 70-83.
24. See Kohn 1999 and 2000, op. cit.
25. Cooper 1989, p. 99.
26. See the table called “Average Mathematics Scores by Students’ Report on Time Spent Daily on Mathematics Homework at Grades 4, 8, and 12: 2000,” available from the National Center for Education Statistics at http://nces.ed.gov/nationsreportcard/mathematics/results/homework.asp. As far as I can tell, no data on how 2004 NAEP math scores varied by homework completion have been published for nine- and thirteen-year-olds. Seventeen-year-olds were not asked to quantify the number of hours devoted to homework in 2004, but were asked whether they did homework “often,” “sometimes,” or “never” – and here more homework was correlated with higher scores (U.S. Department of Education, National Center for Education Statistics, NAEP 2004 Trends in Academic Progress, 2005, p. 63. Available at http://nces.ed.gov/nationsreportcard/pdf/main2005/2005464_3.pdf.)
27. In 2000, fourth graders who reported doing more than an hour of homework a night got exactly the same score as those whose teachers assigned no homework at all. Those in the middle, who said they did 30-60 minutes a night, got slightly higher scores. (See http://nces.ed.gov/nationsreportcard/reading/results/homework.asp). In 2004, those who weren’t assigned any homework did about as well as those who got either less than one hour or one to two hours; students who were assigned more than two hours a night did worse than any of the other three groups. For older students, more homework was correlated with higher reading scores (U.S. Department of Education 2005, op. cit., p. 50).
28. Ina V.S. Mullis, Michael O. Martin, Albert E. Beaton, Eugenio J. Gonzalez, Dana L. Kelly, and Teresa A. Smith, Mathematics and Science Achievement in the Final Years of Secondary School: IEA's Third International Mathematics and Science Report (Boston: International Association for the Evaluation of Educational Achievement, Lynch School of Education, Boston College, 1998), p. 114. Available at: http://isc.bc.edu/timss1995i/MathScienceC.html.
29. Chuansheng Chen and Harold W. Stevenson, “Homework: A Cross-cultural Examination,” Child Development, vol. 60 (1989), pp. 556-57.
30. David P. Baker and Gerald K. Letendre, National Differences, Global Similarities: World Culture and the Future of Schooling (Stanford, CA: Stanford University Press, 2005), pp. 127-28, 130. Emphasis in original.
31. Cooper’s reviews confirm this conclusion (see below), as does my own literature search. And the entry on homework in the authoritative Encyclopedia of Educational Research includes the following summary statement: “Of all the research questions asked about homework, the paramount one has always focused on the relationship between homework and academic achievement.” Whether homework has any effect on “objectives other than test marks and course grades – such as developing discipline and independence, extending understanding, or strengthening a positive attitude to learning – cannot be stated” (Suzanne Ziegler, “Homework,” in Marvin C. Alkin, ed., Encyclopedia of Educational Research, 6th ed., vol. 2 [New York: Macmillan, 1992], p. 603).
32. Robert J. Marzano, Debra J. Pickering, and Jane E. Pollock, Classroom Instruction That Works: Research-Based Strategies for Increasing Student Achievement (Alexandria, VA: Association for Supervision and Curriculum Development, 2001).
33. Harris Cooper, Jeffrey C. Valentine, Barbara Nye, and James J. Lindsay, “Relationships Between Five After-School Activities and Academic Achievement,” Journal of Educational Psychology, vol. 91 (1999), pp. 369-78. The primary purpose of the study was to assess the impact of involvement in extracurricular activities. But a correlation was found between time spent on homework by older students and the grades given to them by teachers.
34. Cooper et al. 1998, op. cit.
35. Thomas L. Good, Douglas A. Grouws, and Howard Ebmeier, Active Mathematics Teaching (New York: Longman, 1983). The first part of this book described a naturalistic study in which nine teachers whose students had high standardized math scores were compared to nine teachers whose students had lower scores. The former group gave more homework, but among many other differences they also covered material more quickly and did more whole-class teaching. Many experts view these practices as problematic, which may indicate just how poor a measure of learning standardized test scores really are; they often make bad instruction appear to be successful. In any case, not only was there no evidence of homework’s effects relative to the other variables being studied, but the authors cautioned that “correlational findings do not lead to direct statements about behaviors teachers should utilize in classrooms.” In fact, far from endorsing the use of homework, or any of the other practices on display, they continued, “We were well aware of the possibility that many factors other than the behaviors we had observed in high-achievement classrooms might be responsible for the higher achievement of students” (p. 29).
The second part of the book described an experimental study in which upper-elementary teachers were asked to do a number of things differently, including altering the content, context, and amount of the homework they gave. They were asked to limit it to fifteen minutes a night and also change when and how it was assigned, how it was scored, what explanations would precede it, and so on. Not only was homework only one of many simultaneous interventions, but the point was to change the homework experience, not to compare homework with its absence, so it would be impossible to infer any benefit from giving it.
36. Todd C. Gorges and Stephen N. Elliott, “Homework: Parent and Student Involvement and Their Effects on Academic Performance,” Canadian Journal of School Psychology, vol. 11 (1995), pp. 18-31; quotation appears on p. 28. The study involved third and fifth graders in two suburban schools. More time spent on homework turned out not to be beneficial in three respects: There were no meaningful effects for the fifth graders, who were assigned more homework; the students who spent more time doing homework were the lower-achieving students; and the main impact of homework was on teachers’ perceptions of children’s competence, not on “actual subject-specific performance.”
37. Michael S. Rosenberg, “The Effects of Daily Homework Assignments on the Acquisition of Basic Skills by Students with Learning Disabilities,” Journal of Learning Disabilities, vol. 22 (1989), pp. 314-23. The problem, this researcher decided, was that the children didn’t always do the homework, or do it correctly, or do it alone. (He also observed that practice homework was of no value for children who hadn’t already learned the material during class.) In other words, his experiment too accurately matched the real world, where homework apparently provides little benefit. To remedy this, he set up a second experiment in which four children and their parents were pressed to follow his instructions to the letter. This time, drilling kids on spelling skills at home did improve quiz scores for three of the four students.
38. In another chapter, for example, Marzano and his colleagues (pp. 137-38) write: “Although the discovery approach has captured the fancy of many educators, there is not much research to indicate its superiority to other methods. Indeed, some researchers have made strong assertions about the lack of effectiveness of discovery learning, particularly as it relates to skills. For example, researchers McDaniel and Schlager (1990) note: ‘In our view, discovery learning does not produce better skill’ (p. 153).”
That would indeed be a “strong assertion” – albeit from only one pair of researchers – if the sentence in question ended there, as Marzano and his colleagues imply that it did. But here’s what McDaniel and Schlager actually wrote: “In our view, discovery learning does not produce better skill at applying the discovered strategy during transfer.” On the other hand, they add a few sentences later, “The benefit of discovering a strategy seems to be that it encourages the development, practice, and/or refinement of procedures that aid the learner in generating searches for new strategies” (see Mark A. McDaniel and Mark S. Schlager, “Discovery Learning and Transfer of Problem-Solving Skills,” Cognition and Instruction, vol. 7 [1990], p. 153).
Marzano and his colleagues’ decision to quote only the first part of the sentence in question (without acknowledging that fact) leads readers to conclude inaccurately that these researchers share their own dim view of discovery learning. As for the more general assertion that “there is not much research to indicate its superiority to other methods,” everything depends on how one defines the “discovery approach.” If this term is understood to mean learning that is inquiry-based, open-ended, process-oriented, or otherwise designed so that students play an active role in constructing meaning, then Marzano et al.’s statement is demonstrably false, and their omission of the numerous studies demonstrating the benefits of this approach is as misleading as their cropping of the comment by the only researchers they do cite.
39. Janine Bempechat, “The Motivational Benefits of Homework: A Social-Cognitive Perspective,” Theory Into Practice, vol. 43 (2004), p. 193. These four citations are offered on the preceding page of her article (p. 192), as follows: “Those who have studied the effects of homework on academic achievement have discussed its non-academic benefits (Warton, 2001), its intermediary effects on motivation (Cooper et al., 1998), and its impact on the development of proximal student outcomes (Hoover-Dempsey et al., 2001) and general personal development (Epstein & Van Voorhis, 2001).” To be sure, all of these sources may have discussed these benefits, but none found that such benefits actually occur. The article by Hoover-Dempsey et al., for example, actually looked at the effects of parental involvement in homework, not at whether homework, per se, is beneficial.
Later in her essay (p. 194), Bempechat makes another assertion: “As previous research has shown, homework is a critical means of communicating standards and expectations (Natriello & McDill, 1986).” But what those authors actually discussed was whether setting high standards and expectations led students to spend more time on their homework. Nothing in their study permits the conclusion that homework itself is useful -- let alone “critical” -- for communicating those standards. It’s disturbing to imagine future writers citing Bempechat’s own article in support of the assertion that homework helps students to develop responsibility, study skills, self-discipline, and so on. (Incidentally, I’ve written to her twice to ask her about these discrepancies, and have yet to receive a reply.)
40. Joyce L. Epstein and Frances L. Van Voorhis, “More Than Minutes: Teachers’ Roles in Designing Homework,” Educational Psychologist, vol. 36 (2001), p. 181.
41. Lyn Corno, “Homework Is a Complicated Thing,” Educational Researcher, November 1996, p. 28.
42. Brian P. Gill and Steven L. Schlossman, “A Nation at Rest: The American Way of Homework,” Educational Evaluation and Policy Analysis, vol. 25 (2003), p. 333.
43. Suzanne Ziegler, “Homework,” ERIC document 274 418, June 1986, p. 8.
44. The review itself: Rosanne A. Paschal, Thomas Weinstein, and Herbert J. Walberg, “The Effects of Homework on Learning: A Quantitative Synthesis,” Journal of Educational Research, vol. 78 (1984), pp. 97-104. The description of it: Herbert J. Walberg, Rosanne A. Paschal, and Thomas Weinstein, “Homework’s Powerful Effects on Learning,” Educational Leadership, April 1985, p. 79.
45. Bill Barber, “Homework Does Not Belong on the Agenda for Educational Reform,” Educational Leadership, May 1986, p. 56. In that article, he also remarked that “if research tells us anything” about homework, it’s that “even when achievement gains have been found, they have been minimal, especially in comparison to the amount of work expended by teachers and students” (p. 55).
46. Cooper 2001, op. cit., p. xi; Harris Cooper and Jeffrey C. Valentine, “Using Research to Answer Practical Questions About Homework,” Educational Psychologist, vol. 36 (2001), p. 144.
47. Cooper 2001, op. cit., p. 64.
48. Laura Muhlenbruck, Harris Cooper, Barbara Nye, and James J. Lindsay, “Homework and Achievement: Explaining the Different Strengths of Relation at the Elementary and Secondary School Levels,” Social Psychology of Education, vol. 3 (2000), p. 315.
49. Harris Cooper, “Synthesis of Research on Homework,” Educational Leadership, November 1989, p. 90.
50. Ibid., p. 89.
51. Cooper 2001, op. cit., p. 58.
52. Cooper and Valentine, op. cit., p. 151. The phrase “we also reviewed the research” apparently refers to an extended passage earlier in the essay that summarizes one of Cooper’s previous articles – namely, Muhlenbruck et al. But that article contains no data to support these claims.
53. For example, “’Homework teaches children study and time-management skills,’ [Cooper] said. . . . ‘All kids should be doing homework’ ” (Jo Napolitano, “School’s Lesson Plan: No More Homework,” Chicago Tribune, May 7, 2005). And a columnist for the American School Board Journal writes, “As you might expect, [Cooper] finds plenty of positive effects associated with homework, including improving students’ study skills. . . developing their self-direction and responsibility” (Susan Black, “The Truth About Homework,” American School Board Journal, October 1996, p. 49). One can certainly understand how this writer formed the impression that Cooper actually “finds” these effects.
54. Cooper, Homework, p. 175.
55. Cooper says that he intends to draw not only from the data but from the “tacit knowledge” (the quotation marks are his) that he acquired from reading publications on the subject that don’t include any data, and also from “discussing homework issues with friends and colleagues” (Ibid.). That seems reasonable, but only if one makes it clear which of the resulting opinions aren’t substantiated by actual research.
56. Cooper 2001, p. 65. He also says that “general ranges for the frequency and duration of assignments” should be “influenced by community factors” (pp. 64-65). He doesn’t explain what this means, but elsewhere he is quoted as suggesting that more homework might be given in a high-pressure suburban district – presumably just because parents are demanding it, not because it is in any way justified (see Michael Winerip, “Homework Bound,” New York Times Education Life, January 3, 1999, p. 40).
57. Cooper 2001, p. 28, summarizing Cooper et al. 1998.
58. Example 1: The data reported by Cooper et al. 1998, which I displayed above, offered a pretty compelling case that homework didn’t do much for achievement regardless of how the results were carved up. But in the “Practical Implications” section of their conclusion (p. 82), the authors gave a very different impression. “First, by examining complex models and distinguishing between homework assigned and homework completed, we were able to show that, as early as the second and fourth grades, the frequency of completed homework assignments predicts grades.” In fact, what they found was a “nonsignificant trend” toward a correlation between how much of the assigned homework the students said they did and what grades their teachers gave them – a finding that arguably would have no practical significance even if it had been statistically significant. The authors continue: “Further, to the extent that homework helps young students develop effective study habits” – and of course they provide no evidence that this happens to any extent – “our results suggest that homework in early grades can have a long-term developmental effect that reveals itself as an even stronger relationship between completion rates and grades when the student moves into secondary school. Thus, we suggest that the present study supports the assignment of homework in early grades, not necessarily for its immediate effects on achievement but rather for its potential long-term impact.” This remarkable claim is based solely on the fact that the same correlation (between how much of the assigned homework kids claimed to do and what grades they ultimately received) was significant for older students. Given that teachers’ grades generally reflect students’ compliance with respect to a lot of things, it’s amazing that there wasn’t a strong correlation at all age levels.
But there isn’t a shred of evidence that the practice of assigning homework – which, remember, is what the authors are attempting to defend – has a beneficial “long-term impact” just because older kids get better grades for doing what they’re told.
Example 2: In Muhlenbruck et al., Cooper and his associates announce in their conclusion section that “homework appears to be assigned for different reasons in elementary school than in secondary school” (p. 315). This is evidently the outcome they were hoping to find in order to support the position that a lack of achievement effects for younger children shouldn’t bother us because homework at that age is really just about teaching study skills and responsibility. But what the researchers actually investigated in this study was what teachers believe is beneficial to students of different ages, which, needless to say, doesn’t prove that such benefits exist. Even those perceived differences, while statistically significant, were less than overwhelming. When asked whether they thought homework improved time-management skills, and when their responses (“very much,” “some,” or “not at all”) were converted to a numeric scale, the average response of 28 elementary teachers worked out to 2.86, whereas the average response of 52 high school teachers was 2.6. (The high school teachers were also slightly less enthusiastic in endorsing the idea that homework helped students to learn [2.6 vs. 2.78], which pretty much undercuts the whole premise that elementary school homework is uniquely intended for nonacademic purposes.) Other conclusions in this study, concerning possible explanations for the fact that homework is of no academic benefit to elementary school students, are similarly constructed on the basis of dubious and marginal results; see p. 314 and compare what’s said there to what had been reported earlier.
59. Ziegler 1992, op. cit., p. 604.
60. Avram Goldstein, “Does Homework Help? A Review of Research,” Elementary School Journal, vol. 60 (1960), p. 222.
Copyright © 2006 by Alfie Kohn. This article may be downloaded, reproduced, and distributed without permission as long as each copy includes this notice along with citation information (i.e., name of the periodical in which it originally appeared, date of publication, and author's name). Permission must be obtained in order to reprint this article in a published work or in order to offer it for sale in any form. Please write to the address indicated on the Contact Us page.