By Jim Schutze
While our governor has been out running for president on the strength of a dramatic rise in test scores for Texas schoolchildren, it would have been helpful to know that students who took the last statewide math test received a passing score of 70 percent for answering 50 percent of the questions correctly.
This represents a major and recent change. In the fall of 1998, a student had to answer 70 percent of the questions on the Texas Assessment of Academic Skills correctly to get a score of 70 percent.
The fact that any adjusting of scores is going on at all--especially of the magnitude occurring in just the last year--is troubling to some test-watchers because the Texas tests originally were billed as straight "criterion-referenced" tests. A criterion-referenced exam is like the quiz a geometry teacher writes: That is, the teacher draws up a list of 10 things students should know and then asks 10 questions to test them on it.
That's supposed to be very different from a "norm-referenced" test, where the point is not to add up what students know but to see where students stand in relation to other children their age. What seems to have changed, with very little public fanfare, is that the no-nonsense criterion-referenced TAAS is being heavily normed.
When TAAS was initiated, the results had a more commonsense ring. It was a true criterion-referenced test, and scores originally were reported as so-called "raw" or real scores: The percentage of answers correct was the score. In the flinty rhetoric of educational "accountability" in Texas, the TAAS test was a "what-you-see-is-what-you-get" deal. But in 1994, in response to fretting from school administrators and teacher groups that felt some TAAS tests might be harder year-to-year than others, the Texas Education Agency began applying the Texas Learning Index (TLI), an arcane formula designed to smooth out scores in case one year's new test was slightly easier or harder than the previous year's.
Even at that, the annual TLI adjustments were minor--barely fractional to non-existent--until the fall of 1999. Suddenly the adjustments became major, raising the question in some observers' minds of just what the manipulations are really designed to do. Are they supposed to make one year's test exactly as hard as the last year's would have been for the same student? Or are they designed to crank out a certain number of passing scores no matter what the children know?
Some respected authorities in the field of testing say the so-called "Texas miracle"--a general improvement in scores since testing began--is absolutely not a hoax, and the adjustments that the state has made since 1994 are legitimate, even necessary calibrations to make the test a more accurate measurement of student achievement. But other authorities with rank and prestige in the field say they think the adjustments are suspect, especially given the timing.
Unfortunately, there has not been time for a healthy debate on the issue of TAAS score adjustments because it was only in the last few weeks that the precise scope of the adjustments became known. On October 20, state Rep. Domingo Garcia, a longtime critic of and watchdog over the Texas Education Agency, received a set of numbers from TEA showing the adjustments. Garcia, a Dallas Democrat who has led the charge against TEA on several issues related to minority students, received the data in response to a formal demand under the Texas Public Information Act.
Debbie Graves Ratcliffe, spokeswoman for TEA, says Garcia did not have to file a formal request and could simply have asked for it as a legislator. "We would have given it to him," she says.
But Garcia says his staff gave up on normal channels after a year of efforts to get specific answers to specific questions, especially whether annual adjustments were being made in the percentage of correct answers required to win a passing grade of "70 percent" on the TAAS. The information is coming out now, he says, because it took him this long to get it out of TEA.
The existence of a formula for adjusting TAAS scores has been no secret among education experts in Texas, but it is certainly little-known to the public.
Garcia does not claim to see a smoking gun in the information he has received. What he thinks he does see is smoke.
"It looks funny that a 50 percent is a 70 percent," he says. "It looks funny that the time when they started doing these major changes is a year ago when the governor started running for president.
"But my questions are not really about that. My questions are whether the kids are really learning, as we are told, and whether they are really being prepared to get into college and to do well in college. Or are we artificially inflating scores for political purposes at the expense of our children?"
Garcia says the urgency of these questions is sharpened by the recent Rand Corp. report that found the achievement gains attributed to TAAS scores in Texas not only are not supported by other tests but are in some key instances contradicted. Rand researchers found some national tests show much slower achievement growth in Texas, and even some Texas tests show a decline in overall college preparedness of Texas public high-school graduates.
Defenders of TAAS and partisans of Gov. George W. Bush have claimed the Rand study was flawed because TAAS measures only knowledge of the Texas state public school curriculum, while the other tests considered by Rand look at what children are taught nationwide.
In any event, the TLI is not George W. Bush's invention. Mandated by law, it is carried out according to a practice called "equating," which is widely used in the testing industry and generally accepted as legitimate. An oversimplified explanation would be that questions for each year's new test are field-tested to see whether they are harder or easier than the questions on the 1994 test. The purpose is to adjust the toughness or easiness of each year's test back to an original standard, so that kids don't look dumber one year just because the questions on their TAAS test happen to be harder than the ones that kids were asked on the same test last year.
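TEA has not published the details of the TLI formula, but "linear equating" is one standard, generic version of the technique the industry uses: a raw score is standardized against the statistics of this year's test form, then re-expressed on the reference year's scale. The sketch below is a hypothetical illustration of that generic method, not TEA's actual formula, and all the numbers in it are invented for the example:

```python
def linear_equate(raw, form_mean, form_sd, ref_mean, ref_sd):
    """Re-express a raw score from this year's test form on the
    reference year's scale (a generic linear-equating sketch;
    the actual TLI computation is not public in this detail)."""
    # Standardize against this year's form statistics...
    z = (raw - form_mean) / form_sd
    # ...then map onto the reference (e.g., 1994) distribution.
    return ref_mean + ref_sd * z

# Hypothetical numbers: if this year's average raw score is 50
# and the 1994 average was 70 (same spread), a raw 50 is reported
# as a 70 on the reference scale.
print(linear_equate(50, 50, 12, 70, 12))
```

Under assumptions like these, a harder test form pulls the whole raw-score distribution down, and the equating step pushes reported scores back up by the same amount -- which is how a 50 percent raw score can come out as a passing 70.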
From the time these adjustments were first ordered by the Legislature in 1994 until 1999, the equating studies found that no adjustment at all was needed on the vast majority of tests in most years. In a few instances, only very minor adjustments were made, most of them slightly upward, so that it took a little bit better than a 70 percent raw score to get a TLI score of 70 percent.
The anomaly in the data sent to Garcia has to do with what happened in the fall of 1999, when raw scores on all of the state's TAAS tests suddenly plummeted. The equating studies were finding that scores needed to be cranked back up 10 to 20 percentage points in order to guarantee a uniform level of difficulty back to 1994, considered "Year One" of the current accountability cycle.
TEA spokeswoman Ratcliffe says TAAS tests in 1999 became "more rigorous" because of a legislatively mandated change in the basis of the test. Questions on earlier TAAS tests had been keyed to something called "Essential Elements" (EE)--a list of things teachers were required by law to present to their students in each course. By 1999, the state had switched over to a new list of things to be taught, called the "Texas Essential Knowledge and Skills (TEKS)." The new list is supposed to be more rigorous, Ratcliffe says, and therefore the test based on the new list is tougher.
But it is at this seam that the political element intrudes on the landscape of TAAS. When Texas, in its tough-minded desire to force schools to teach more, adopted a more rigorous required curriculum and then keyed statewide tests to it, why shouldn't Texas have expected fewer kids to pass those tests? At least in the first few years, wasn't it reasonable to expect kids to get lower scores?
In fact, raw scores did fall dramatically after the switch to TEKS. But in a letter to the state's public school administrators a year ago, Texas Education Commissioner Jim Nelson promised them that the TLI equating studies would erase the effects of the tougher new curricula and tests.
"Since a child who could have passed last year's test will also pass this year's," Nelson assured the troops, "there will be no change from the perspective of a school district for purposes of accountability."
That was cheery news, no doubt, for the state's public-school administrators, who can win $25,000 bonuses in some districts for good pass rates on the TAAS and can suffer serious negative effects from low pass rates.
Not everyone is cheered by the TLI. Walter Haney, a Boston College education professor who spent two years analyzing the TAAS, is skeptical about the use of equating to bridge two tests after the underlying basis of the tests was changed from EE to TEKS.
"There's something extremely fishy going on with regard to the equating on the TAAS," Haney says. "Theoretically, you can't equate two tests unless the content specifications are the same."
But every quibble over the equating process seems to raise an overarching question: In a transitional period when the curriculum and the tests have been made harder, presumably because Texas is demanding more of its students, why should the raw scores be heavily manipulated to produce the appearance of a flat level of difficulty?
TEA test expert Keith Cruse says, "That's strictly a policy question. You would have to talk to the State Board of Education about that."
Texas Education Commissioner Nelson says that there will come a time fairly soon when the TEA will suspend its efforts to equate every test back to the 1994 benchmark of difficulty and will start all over from a new "Year One."
"Come 2003, we're going to finish this cycle of our accountability system and move, because of the requirements of Senate Bill 103, to a whole new system. We will have a new Year One."
Major movement in the system of statewide testing in Texas is directed by law and does not happen at the discretion of Nelson or the TEA. It certainly can't be fiddled at the last moment to help a political campaign, even a presidential one. But the governor and the Legislature did have an option, in designing this system, to suspend the equating process and begin a whole new cycle last year when the basis of the test was changed.
Critics of TAAS think they know why the changeover wasn't made last year. John Fullinwider, Dallas "teacher-of-the-year" in 1997, a winner of the Dallas "Excellence in Teaching" award in 1996 and a longtime community activist, says if the state had stopped trying to equate back to 1994 last year, "George Bush would be out there on the campaign trail defending lower TAAS scores instead of bragging about the Texas miracle.
"Whether the motive is sinister or not, the political effect is awfully convenient. In spite of all the talk of making the test more rigorous, they've actually padded these scores. There's a cushion, so that during George Bush's second term the scores can continue to rise."
But for Fullinwider, who teaches kids in trouble in the Dallas system's alternative high school, the real bite of the issue is far removed from presidential politics.
"The stakes on TAAS are so high," he says. "At the campus level now, you have people being indicted for cheating on TAAS scores. Then to find out that at the very pinnacle of TEA in Austin, they're doing something with the numbers to show steady progress. It's fraudulent. The very idea of basing a kid's graduation on this test given all we know about it now is just indefensible."