When “Big Data” Goes to School

March 7, 2018

By Alfie Kohn

Here’s a rule of thumb for you: An individual’s enthusiasm about the employment of “data” in education is directly proportional to his or her distance from actual students. Policy makers and economists commonly refer to children in the aggregate, apparently viewing them mostly as a source of numbers to be crunched. They do this even more than consultants and superintendents, who do it more than principals, who do it more than teachers. The best teachers, in fact, tend to recoil from earnest talk about the benefits of “data-driven instruction,” the use of “data coaches,” “data walls,” and the like.

Making matters worse, the data in question are typically just standardized test scores — though, as I’ve explained elsewhere, that’s not the only reason to be disturbed by this datamongering. And it doesn’t help when the process of quantifying kids (and learning) is festooned with adjectives such as “personalized” or “customized.”

But here’s today’s question: If collecting and sorting through data about students makes us uneasy, how should we feel about the growing role of Big Data?

Let’s start by noting that this term doesn’t seem to have a single precise meaning. Some people assume it refers just to gathering more numerical information. Some say it refers primarily to the statistical modeling techniques used to make predictions based on whatever data have been collected. And at least one writer believes the term is now used mostly by critics — to refer to a worrisomely worshipful attitude toward data.

To be fair, vacuuming up huge quantities of numerical descriptors can sometimes allow us to see patterns and make predictions. Like an aerial view, it offers a unique perspective that has its uses. Christian Rudder makes a case for big data in his 2014 book Dataclysm, and his persuasiveness may partly rest on the fact that he’s funny, unpretentious, politically progressive, and likes to talk about sex. “It’s not numbers that will deny us our humanity; it’s the calculated decision to stop being human,” he argues at one point.

But this, I fear, is just a version of the old canard that technology, per se, is neutral, that everything depends on how it’s used. By now we should have realized that methods leave an imprint on goals, and that technology in particular exerts effects of its own. (Read Neil Postman’s Amusing Ourselves to Death and Nicholas Carr’s The Shallows if you’re not yet convinced.) The reckless reduction of human beings to numbers is offensive regardless of what’s done with those numbers. An aerial view by definition fails to capture the individuality of the people on the ground, and there’s a price to pay if we spend our days looking at humanity — or even literature[1] — that way.

Part of the problem is that we end up ignoring or minimizing the significance of whatever doesn’t lend itself to data analytics. It’s rather like the old joke about the guy searching for his lost keys at night near a street light even though that’s not where he’d dropped them. (“But the light is so much better here!”) No wonder education research – increasingly undertaken by economists – often relies on huge data sets consisting of standardized test results. Those scores may be lousy representations of learning – and, indeed, egregiously misleading. But, by gum, they sure are readily available.

“What’s left out?”, then, is one critical question to ask. Another is: “Who benefits from it?” Noam Scheiber, a reporter who covers workplace issues, recently observed that big data is “massively increasing the power asymmetry between exploiters and exploitees.” (For more on this, check out Cathy O’Neil’s book Weapons of Math Destruction.[2]) And these questions need to be asked about big data in education as much as anywhere else. In the context of K-12 schooling, as I’ve already noted, this usually involves standardized test scores – not just a summative, and often high-stakes, exam, but a relentless regimen of testing (repackaged as “formative assessment”) that’s meant to drive the teaching throughout the year. Lately this same reductive sensibility has been leaching into higher education, much to the dismay of many who teach there, under the banner of “learning outcomes assessment.”

But “data” in college may also refer to grades.[3] For an interesting case study, consider a mostly uncritical account that appeared in the New York Times in early 2017. It seems various companies have convinced universities to pay for computer programs that use predictive analytics to monitor students’ progress, the idea being to figure out when a low grade in a particular course may be associated with a risk of dropping out at some point. “Our big data doesn’t need to know exactly why a student gets a bad grade,” one administrator explained. “We’re looking at a pattern.”

What the data analysts are peddling is the capacity to crunch more numbers, to look not just at GPAs but at (all students’) individual course grades. Notice that no one is proposing to spot problems by sitting down with students and asking them how things are going — at least not until the computer flags those who are having trouble. The risk diagnosis is based on what the software says about their grades rather than on what the students themselves might say.
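To see how little is actually involved, here is a minimal sketch, in Python, of the kind of model such products plausibly fit. Everything in it is an assumption made for illustration: the fabricated grade records, the logistic-regression model, the 0.5 risk cutoff. It reproduces no vendor’s actual system; it only shows that “looking at a pattern” can amount to fitting a curve to grades and flagging whoever lands on the wrong side of a threshold.

```python
# Illustrative sketch only: a toy version of the "predictive analytics"
# described above. All data, names, and thresholds are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Fabricated historical records: grades (0.0-4.0) in three intro courses,
# plus whether each student eventually dropped out.
n = 1000
grades = rng.uniform(0.0, 4.0, size=(n, 3))
# Invented ground truth: lower grades mean higher dropout probability.
dropped_out = rng.random(n) < 1.0 / (1.0 + np.exp(grades.sum(axis=1) - 5.0))

# "Our big data doesn't need to know exactly why": the model simply fits
# a pattern linking grade vectors to the dropout label.
model = LogisticRegression().fit(grades, dropped_out)

# Flag current students whose predicted risk crosses an arbitrary cutoff.
current = rng.uniform(0.0, 4.0, size=(5, 3))
risk = model.predict_proba(current)[:, 1]  # column 1 = P(dropped out)
for i, r in enumerate(risk):
    print(f"student {i}: predicted dropout risk {r:.2f}"
          + ("  <- flagged" if r > 0.5 else ""))
```

A production system would doubtless be fancier, but the structure of the inference would be the same: grades in, risk score out, student never consulted.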

Moreover, we’re asked to accept that if students don’t get a good grade in this course, they probably won’t in that one either — and that this reflects a shortcoming with the students rather than with the quality of the courses (what’s being taught and how). The excitement over big data — more numbers than ever before! — is a seductive distraction from posing troubling questions about what those numbers represent. Or what they necessarily exclude.

By analogy, think of those widespread claims that “studies show” it’s beneficial to make students take advanced math courses in high school. Such assertions are cited reverently despite two problems. First, they offer a textbook example of what’s called a selection effect: It’s not so much that taking calculus helps students as that the kinds of students who take calculus would tend to do well later anyway. Second, “beneficial” often turns out to mean “correlated with success in subsequent math courses,” which raises the question of why the vast majority of students need to take any of them.[4] (Research also convincingly proves that it’s advantageous to take Latin 1…in the sense that doing so will greatly improve one’s grades in Latin 2.) My point is that the same is true of pronouncements about the value of crunching stats to home in on the intro courses where it’s supposedly vital to get a good grade.
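That selection effect is easy to demonstrate with a toy simulation; every number below is invented. Give each student a latent “ability,” let that ability drive both who enrolls in calculus and who later succeeds, and the naive comparison will make calculus look beneficial even though, by construction, the course contributes nothing:

```python
# Toy simulation of a selection effect; every quantity here is invented.
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# A latent trait drives everything else in this toy world.
ability = rng.normal(0.0, 1.0, n)

# Stronger students are more likely to enroll in calculus...
takes_calculus = rng.random(n) < 1.0 / (1.0 + np.exp(-2.0 * ability))

# ...while later "success" depends only on ability, never on the course.
success = ability + rng.normal(0.0, 1.0, n)

# The naive comparison makes calculus look beneficial anyway:
print("took calculus:   ", success[takes_calculus].mean())
print("skipped calculus:", success[~takes_calculus].mean())

# Restricting to students of similar ability shrinks the gap toward zero.
band = np.abs(ability) < 0.1
print("matched, took:   ", success[band & takes_calculus].mean())
print("matched, skipped:", success[band & ~takes_calculus].mean())
```

Comparing only students of similar ability collapses most of the apparent benefit, which is precisely what the naive comparison conceals.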

Anyone who has observed the enthusiasm for training students to show more “grit” or develop a “growth mindset” should know what it means to focus on fixing the kid so he or she can better adapt to the system rather than asking inconvenient questions about the system itself. Big data basically gives us more information, based on grades, about which kids need fixing (and how and when), making it even less likely that anyone would think to challenge the destructive effects of — and explore alternatives to — the practice of grading students.[5]

Predictive analytics allows administrators to believe they’re keeping a watchful eye on their charges when in fact they’re learning nothing about each student’s experience of college, his or her needs, fears, hopes, beliefs, and state of mind. Creating a “personalized” data set underscores just how impersonal the interaction with students is, and it may even compound that problem. At the same time that this approach reduces human beings to a pile of academic performance data, it also discourages critical thought about how the system, including teaching and evaluation, affects those human beings.

Neither of these objections is addressed by collecting data about other aspects of students’ lives, too. The same New York Times article describes an experiment with “tracking freshmen…as they swipe their identification cards to go to the library or gym, pay for a meal in the cafeteria or buy a sweatshirt in the bookstore” in an attempt to measure “social interaction.” Those bits of data don’t allow us to claim we know a given student. Nor do they prompt us to examine underlying structural issues with their education. What the expansion of big data does do is raise additional concerns about Big Brother since more of students’ activities are being monitored. (It also suggests the troubling possibility that some schools may flag at-risk students not to help them but to get rid of them so as to improve the institution’s on-time graduation rate.)

When educators reduce students to data, they miss an awful lot. When they rely on big data, they may be making things even worse.

NOTES

1. Yes, number crunchers have set themselves the task of drawing conclusions about literature based on computer tabulations of the appearance of specific words in a vast collection of books. If your reaction is that something important has been missed, the same reaction probably would be appropriate when big data is brought to bear on education or psychology.

2. Also see Frank Pasquale’s Black Box Society and this bibliography of other critiques. For a short review of methodological concerns — a reminder that the data often tell us much less than we assume — see this essay.

3. This should be a useful reminder that the problem isn’t just with a particular metric but with the overreliance on quantification itself. Rather than asking “How do we measure…?”, educators and policy makers ought to ask “How do we assess…?” in order to avoid locking themselves into the subset of assessment that demands a reduction to numbers.

4. On the first point, see the late Grant Wiggins, “A Diploma Worth Having,” Educational Leadership, March 2011, pp. 31-2. On the second point, see Andrew Hacker, The Math Myth: And Other STEM Delusions (New Press, 2016). Also see Nicholson Baker, “Wrong Answer: The Case Against Algebra II,” Harper’s, September 2013, pp. 31-8.

5. I’ve heard that, under the influence of the management guru W. Edwards Deming, when an assembly line worker at Toyota screwed up, managers would shake his hand and thank him for helping to expose a design flaw in the system. These managers realized that the system is primarily responsible for the success or failure of individuals in a workplace – which indicates that rewarding or punishing people (for example, with incentive plans and other pay-for-performance schemes) is not only manipulative and destructive of intrinsic motivation but also simply an exercise in missing the point.

