August 25, 2010
Turning Children into Data:
A Skeptic’s Guide to Assessment Programs
Not everything that counts can be counted, and not everything
that can be counted counts.
Programs with generic-sounding names that offer techniques for measuring (and raising) student achievement have been sprouting like fungi in a rainforest: “Learning Focused Schools,” “Curriculum Based Measurements,” “Professional Learning Communities,” and many others whose names include “data,” “progress,” or “RTI.” Perhaps you’ve seen their ads in education periodicals. Perhaps you’ve pondered the fact that they can afford these ads, presumably because of how much money they’ve already collected from struggling school districts.
When I’m asked about one of these programs, I have to confess that I just can’t keep up with every new stall that opens in this bazaar -- and the same is true of the neighboring marketplace that’s packed with discipline and classroom management programs. (Hint: here, extreme skepticism is warranted whenever the name includes the word “behavior.”) Still, it is possible to sketch some criteria for judging any given program -- preferably before someone requests a purchase order.
So let’s imagine that your community is buzzing about something called ABA: “Achievement-Based Assessment” -- or, perhaps, “Assessment-Based Achievement” -- whose website boasts of “monitoring and improving each student’s learning with proven data-focused strategies.”
Worth a try? Well, we certainly can’t decide on the basis of how ABA markets itself. Just about any descriptor that might seem appealing, even progressive, has been co-opted by now: Every outfit claims to help teachers “collaborate” in order to focus on the “learning” (rather than just the teaching) as they look at “authentic” outcomes and “differentiate” the instruction with a “developmental” approach that emphasizes “critical (or higher-order) thinking” skills -- in order to prepare your students for -- raise your hand if you saw this coming -- the “21st century.”
Obviously we’re going to have to look a little deeper and ask a few pointed questions.
1. What is its basic conception of assessment? To get a sense of how well things are going and where help is needed, we ought to focus on the actual learning that students do over a period of time -- ideally, deep learning that consists of more than practicing skills and memorizing facts. If you agree, then you’d be very skeptical about a program that relies on discrete, contrived, test-like assessments. You’d object to any procedure that seems mechanical, in which standardized protocols like rubrics supplant teachers’ professional judgments based on personal interaction with their students. And the only thing worse than “benchmark” tests (tests in between the tests) would be computerized monitoring tools, which reading expert Richard Allington has succinctly characterized as “idiotic.”
2. What is its goal? Ask not only what the program is but why it exists. Lots of talk about “student achievement” -- as opposed to, say, “students’ achievements” -- suggests that the program’s raison d’être is not to help kids understand ideas and become thoughtful questioners, but merely to raise their scores on standardized tests. (Elsewhere, I’ve reviewed evidence showing not only that these tests are completely inadequate for assessing important intellectual proficiencies but also that high scores are actually correlated with a superficial approach to learning.) Obviously, anyone who harbors doubts about the validity or value of standardized tests wouldn’t want to have anything to do with a program that’s designed mostly with them in mind.
3. Does it reduce everything to numbers? If all the earnest talk about “data” (in the context of educating children) doesn’t make you at least a little bit uneasy, it’s time to recharge your crap detector. Most assessment systems are based on an outdated behaviorist model that assumes nearly everything can -- and should -- be quantified. But the more educators allow themselves to be turned into accountants, the more trivial their teaching becomes and the more their assessments miss.
That’s why I was heartened recently to receive a note describing how some teachers on a Midwestern high school’s improvement team took a long, hard look at the Professional Learning Communities model and said no thanks. They were put off by its designers’ frank admiration of for-profit corporations as well as its “misguided premise that every subject area can be broken down into core concepts which then have to be quantified.” The teachers understood that learning doesn’t have to be measured in order to be assessed. And they feared that “true learning and engagement” -- along with a commitment to be “responsive to students’ needs [and] lives” -- might be lost.
These teachers ultimately decided to reject the technocratic PLC approach in favor of an alternative they designed themselves. It focused on teachers’ personal “connection[s] with our subject area” as the basis for helping students to think “like mathematicians or historians or writers or scientists, instead of drilling them in the vocabulary of those subject areas or breaking down the skills.” In a word, the teachers put kids before data.
Of course, this powerful exercise in professional development never would have happened if the administration had simply imposed PLCs (or a similar program) on the teachers, treating them like technicians who merely carry out orders. Which brings us to …
4. Is it about “doing to” or “working with”? Steer clear of any program whose curriculum or assessments are so structured, so prescriptive and prefabricated, that teachers lack any real autonomy. By now we ought to know that systems intended to be “teacher-proof” are not only disrespectful but chimerical: They are the perpetual-motion machines of education. One sure sign of disrespect is the use of incentives or sanctions to make teachers get with the program, including compensation that hinges on compliance or on some measure of student achievement. All that does is corrupt the measure (unless it’s a test score, in which case it’s already misleading), undermine collaboration among teachers, and make teaching less joyful and therefore less effective by meaningful criteria.
Likewise, you’d want to make sure that students’ autonomy is respected since kids should have a lot to say about their assessment. If they feel controlled, then even a cleverly designed program is unlikely to have a constructive effect. Again, any use of carrots and sticks should set off alarms. As Jerome Bruner once said, we want to create an environment where students can “experience success and failure not as reward and punishment but as information.” That pretty much rules out grades or similar ratings.
5. Is its priority to support kids’ interest? In attempting to track and boost achievement, do we damage what’s most critical to long-term quality of learning: students’ desire to learn? It’s disturbing if a program is so preoccupied with data and narrowly defined skills that it doesn’t even bother to talk about this issue. More important, look at the real-world effects: Once a school adopts the program, are kids more excited about what they’re doing -- or has learning been made to feel like drudgery?
6. Does it avoid excessive assessment? Distilling a large body of research, psychologists Martin Maehr and the late Carol Midgley reminded us that “an overemphasis on assessment can actually undermine the pursuit of excellence.” That’s true even with reasonably good assessments, let alone with those that are standardized. The more that students are led to focus on how well they’re doing, the less engaged they tend to become with what they’re doing. Instead of stuff they want to figure out, the curriculum just becomes stuff at which they’re required to get better. A school that’s all about achievement and performance is a school that’s not really about discovery and understanding.
While some education conferences are genuinely inspiring, others serve mostly to demonstrate how even intelligent educators can be remarkably credulous, nodding agreeably at descriptions of programs that ought to elicit fury or laughter, avidly copying down hollow phrases from a consultant’s PowerPoint presentation, awed by anything that’s borrowed from the business world or involves digital technology.
Many companies and consultants thrive on this credulity, and also on teachers’ isolation, fatalism, and fear (of demands by clueless officials to raise test scores at any cost). With a good dose of critical thinking and courage, a willingness to say “This is bad for kids and we won’t have any part of it,” we could drive these outfits out of business -- and begin to take back our schools.
Copyright © 2010 by Alfie Kohn. This article may be downloaded, reproduced, and distributed without permission as long as each copy includes this notice along with citation information (i.e., name of the periodical in which it originally appeared, date of publication, and author's name). Permission must be obtained in order to reprint this article in a published work or in order to offer it for sale in any form. We can be reached through the Contact Us page.