He Who Casts the First Aspersion
The Oklahoma State Department of Education wants you to know that they too have access to “researchers.” Of course, theirs is just one person, and she comes without the air quotes commonly heard at campaign shindigs. Oh, and she’s not from around here.
From the Tulsa World:
A controversial study by researchers at two Oklahoma universities that deems the state’s A-F school grading system as flawed is “misleading,” according to an in-house analysis by staffers at the Oklahoma State Department of Education.
“It’s not to cast aspersions on the researchers at all,” said department spokesman Phil Bacharach. “But we think it’s important to put that research in context, especially if it’s going to be used as a sort of blanket criticism of the grades.”
Nothing against these fine researchers, but their work is misleading. Gee, how could that be seen as criticism?
The SDE has borrowed a “Harvard University strategic data fellow” for two years (at a cost to the state of $85,000 per year) to conduct analysis of … well, of hopefully more than this. While the OU/OSU researchers found flaws with the A-F Report Cards, the fellow, Clifford, found flaws with the design of their study. These include:
- Using only urban schools representing about three percent of the total enrolled population in Oklahoma
- Using a population that is not representative of the state population as a whole
- Inserting controls for poverty and race into the calculations
So many things puzzle me about this article, but Clifford dismissing the sample size is just intellectually dishonest. The OU/OSU study included more than 15,000 data points. From a research perspective, that’s huge. With a sample that large, even modest effects will register as statistically significant; the sample size is the study’s strength, not its weakness. Plus, just last month, Superintendent Barresi said accountability would be calculated for schools on subgroups as small as 10 students, because that is a significant number (as opposed to previous reporting requirements of 30 and 52).
Think about that student count again. The researchers had student-level data for 15,000 students – three percent of the students in Oklahoma. The federal government doesn’t come anywhere close to testing three percent of students for NAEP, yet we have to listen to scores of politicians bloviate over those results. Tomorrow, when international PISA results come out, the sample will be an even smaller slice of the population. Yet those results will get the words flowing from think tank after think tank.
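As a back-of-the-envelope check on what 15,000 students buys you statistically (my illustrative arithmetic, not a figure from either study), here is the worst-case margin of error for an estimated proficiency rate at a few sample sizes:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Half-width of an approximate 95% confidence interval for a
    proportion; p=0.5 is the worst (widest) case."""
    return z * math.sqrt(p * (1 - p) / n)

for n in (100, 1_000, 15_000):
    print(f"n = {n:>6}: +/- {margin_of_error(n) * 100:.1f} percentage points")
```

At n = 15,000 the margin of error is under one percentage point, which is why dismissing the study over sample size rings hollow.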
Clifford conducted her own study and, for some reason, did not control for poverty and race. She also translated results into months and years of learning. The problem is that nothing in the design of Oklahoma’s assessments lends itself to that kind of output. The technical manuals for the tests don’t equate differences between scaled scores to months and years of learning.

Honestly, we can all conduct our own studies with our own methodologies. Each would have its merits and limitations. For example, a few weeks ago, I ran regressions using site-level data and then district-level data. The greatest limitation of my findings was that I was comparing units of very different sizes. Nonetheless, the conclusion that poverty matters more than any other variable was undeniable. As such, the decision to exclude poverty from any model studying student achievement or school accountability probably should come with a thorough and compelling theoretical framework.
As for the finding in the OU/OSU report that a single letter grade is not a clear or reliable measure of school performance, Assistant State Superintendent Maridyth McBee had this to say:
It definitely is accurate in telling everyone what percent of students at that school are proficient in reading, math, science and writing, and what percentage of students are growing from a lower achievement level to a higher achievement level.
If that is the goal, all the SDE has to do is publish test data for schools. What percent of students passed the Algebra I EOI? The A-F Report Card actually doesn’t tell us that. What percent of students passed the third grade reading test? That either.
Not to cast aspersions, but useful data like that wouldn’t give education reformers their talking points.