Morning News Story About Improvements in Dallas Schools Is Data-Driven Bullshit

The latest numbers seem to make Mike Miles look like a big success in his three-year tenure here. But the Morning News has some statistocological things to say about that.

Dear New Dallas Morning News Editor Mike Wilson:

For some reason that I guess I’m not smart enough to figure out — and I blame Eric Celeste, not you — Celeste’s July 7 item on Frontburner about you and the Morning News employee buyouts, “DMN Offers Buyouts to 167 Employees,” keeps popping up on my Facebook like the ghost of Christmas past. It spooks me a little.

But it serves also to keep you in mind, a not unpleasant thing. We shook hands at my wife’s recent retirement from the News, and you seemed very pleasant. I wish I could say the same for myself, but … ask anybody.

Therefore today I am offering this little sort-of-public-excuse-me expression of personal concern, not because I’m trying to make you disappear from my Facebook but because of something I saw in your paper recently.

This week on your education blog, a headline asked, “Is this new evidence of ‘a Dallas Miracle’ at Dallas DISD?” The implicit answer suggested by the tone of the story was, “Nah, not really.” That was striking, because the overwhelming evidence offered by the facts in the story was that the answer should have been yes.

The facts were that student achievement in Dallas public schools took a major leap upward three years ago, plateaued two years ago and advanced sufficiently last year to put the Dallas Independent School District in the top 25 percent of the state’s 200 biggest districts, for the first time ahead of archrival Houston, whose students had outpaced Dallas students for as long as anyone could remember.

Hey, Wilson, you’ve been around long enough to know what the real news story is here. It’s all about recently departed Superintendent Mike Miles. Under his regime there was a lot of smoke, thunder and pain, all of which he predicted, but Miles told us the pain would be worth the gain. Based on the numbers in your blog item, it was.

Education Research Group, or ERG, offers us the only clean and organized measurement of student progress we’ve got, taking statewide test scores and putting them into an algorithm to balance that data against poverty and other powerful social influences on achievement. It’s the only way we can see how well the district does at taking the students who walk in the door the first day and then teaching them for a year. How well did DISD do under Miles compared with other districts dealing with the same demographics?

Splendidly, according to ERG; not so swell, according to you guys. And your people did have their reasons, which were very statistical, sorta kinda. The story says, for example, that the ERG chart of achievement progress “doesn’t really show anything direct about student achievement. It shows how Dallas did compared with the other 200 largest districts. If a lot of the others got worse and Dallas stayed about the same, Dallas would jump in rank.”

OK. So now I guess you’re going to answer the riddle you have set. You’re going to tell me that the other 199 large districts all sank by a certain amount, enough to lend Dallas a false appearance of advancement which the consultant did not reveal.

But, no. Radio silence on that. You just sort of threw a kind of statistical something or other up against the wall and wagged your finger knowingly at me.

A couple paragraphs later, your piece makes a serious stab at statisticalizationology: You tell me, “... the method used to remove the effects of poverty greatly magnifies the importance of the relatively small remaining differences in the data. So a little ‘noise’ can create jumps or drops that aren’t actually tied to student achievement.”

Oh, my goodness. So the consultant claimed to be showing us student achievement, but this noise got in there, and it maybe wasn’t achievement at all? I would think that alone would make a pretty good story. The headline might be, “Consultant Fakes Up Achievement to Help Out Miles. So-Called Gains Nuthin’ but Noise.”

So now I guess you’re going to show me how that noise works and tell me just what the so-called gains would have been without all the racket. And this will be easy for you to do, because you guys are very statisticological. But, no, to my disappointment: radio silence again. Instead of sticking around, you lit it, tossed it out the window and then put the pedal to the metal.

You have been around town long enough by now to understand the backstory here as it involves your newspaper. Throughout Miles’ tenure your school district reporters were biting on every two-bit, ginned-up, fake anti-Miles scandal the teachers unions and the patronage machinery could offer them, I think because they were really hungry.

More recently Eric Celeste, who curates the Learning Curve blog at D Magazine, has landed some very direct hits on your paper for its strange tendency, when evaluating student achievement, to write off and ignore Spanish-language test scores. None of the possible explanations for that sound good.

I read the Columbia Journalism Review piece by Richard Parker a month ago about your decision to come here from Nate Silver’s FiveThirtyEight.com and take over the News. Notable in your remarks in that story was your commitment to “data stories,” which I think are stories with data in them.

“Data stories are hard to publish,” you said, “because you have to get the data, which often isn’t easy, and then you have to analyze and make sense of it.”

Yes.

I don’t see how the getting-the-data part is any more difficult than any other kind of reporting. In fact it seems a lot easier than getting public officials to admit to drug use, stuff like that. But I guess I see what you mean. It’s heavy lifting for most journalists. The big one, however, is the making sense.

And here, as the official local keeper of ancient lore, I must toss in an additional factor. Not only do you have to make sense of the data, you have to promise not to use the data or fuzzy elliptical generalizations about the data to camouflage bad reporting.

They may or may not have told you: You’re not the first guy at the News to think of data and computers and all that stuff. Ten years ago your paper hired some computerized statiscolarian types to work with reporters on a story about jury selection. After much reporting, your regular reporters found that the ethnic makeup of jurors selected to sit on local trials matched the ethnic makeup of the jury pool, which is how things should have been, which meant somebody spent a whole lot of time and payroll on a story that didn’t make.

But the computeritarian people were able to ride in to the rescue with something they called a “logistic regression analysis model.” It showed that, even though the numbers came out the way they should have, District Attorney Bill Hill, who was an old white guy, was still a racist.

Now, if somebody ever told me they had a logistic regression analysis model on me, I would just stick out my wrists and say, “Slap the cuffs on me, Cap’n, I’m goin’ peaceable.” But not Bill Hill. He was pissed. He took the story to a bunch of actual statisticians, as I believe they are called — credentialed academics who work with this stuff in juried journals all the time — and he said, “WTF?” or words to that effect.

The academics agreed to look at it, but they said first the News needed to show them the algorithm the paper used and the assumptions that went into it. This is a request that would require a two-minute phone call in the scientific world, where no one would dream of publishing conclusions based on an algorithm without showing the guts of it to anybody who asked.

But the News refused to release the information to the academics. I think they may eventually have given it to somebody, but only many months after the news value of the story had gone to ash and Hill was screwed politically. But their refusal to release it when it meant something could only point to one conclusion: There was something deeply smelly about the way the paper had used statistics and pseudo-science to juice up a lame story. It was a reprehensible chapter in contemporary journalism.

By the way, Mike, I notice that you told CJR you could foresee working a lot with the Texas Tribune. I hope you will go carefully there. Ask your people about the Tribune’s coverage of the Wallace Hall University of Texas admissions scandal story. They’ve got a weakness for sticking a fat thumb on the scale when one of their major contributors is the focus of a story.

All in all, I’d say just this: statistics galore, go for it. But we’ll be watching for bullshit. Back when I was a student at the Institute for Remedial Journalism and Corrections, they taught us an algorithm for that one, too. I think it was: Bullshit plus statistics equals bullshit.

