Independent Analysis of Oklahoma’s A-F Report Cards
Yesterday, a report critical of Oklahoma’s A-F Report Cards was released by CCOSA (the state administrators’ organization) and OSSBA (the state school board members’ association). The report was produced jointly by the Oklahoma Center for Education Policy (OU) and the Center for Educational Research and Evaluation (OSU). In other words, a lot of people smarter than I looked at the inputs and outputs of the A-F Report Cards and found significant flaws. This paragraph from the report’s executive summary speaks volumes:
Accountability systems are only useful if their measures are credible and clear. Despite good intentions, the features of the Oklahoma A-F grading system produce school letter grades that are neither clear, nor comparable; their lack of clarity makes unjustified decisions about schools. Further, A-F grades are not productive for school improvement because they do not explain the how or why of low performance. Building on what has already been done, Oklahoma can and should move toward a more trustworthy and fair assessment system for holding schools accountable and embracing continuous, incremental improvement.
The report then lists statistical problems with the calculations. Scores assigned “do not seem to correspond to any recognizable metric.” The use of proficiency levels “introduces grouping error.” There is “unclear conceptual meaning of the index” for student growth. Whole school performance grades are skewed by “overreliance on attendance and graduation rates.”
The authors also discuss practical consequences of the evaluation system:
By not making explicit the threats to the validity of report card grades, the OSDE misinforms the public about the credibility and utility of the A-F accountability system.
Performance information from the current A-F Report Card has limited improvement value; particularly, it is not useful for diagnosing causes of performance variation.
The summative aspects of the accountability system overshadow formative uses of assessment and performance.
High stakes testing, as a cornerstone of school assessment and accountability, corrupts instructional delivery by focusing effort on learning that is easily measured.
The first of these is the key problem with what the SDE has done in introducing the report cards. When the SDE says a school or district is failing, that determination is based on highly flawed information. Honestly, the grades lack credibility in identifying great schools as well. The last of these consequences is a problem somewhat independent of the A-F Report Cards; we’ve been narrowing the content of teaching for decades through over-testing. The increased stakes now only amplify the problem.
One word the report never uses is volatile, but the findings point to the fact that any school’s letter grade lacks stability. Change the weight of just one variable, even slightly, and a school’s letter grade could change. Part of this is the arbitrary and capricious manner in which the formula was constructed. Another part is what the report identifies as grouping error. All schools scoring a B in any category get 3 points. An 89 gets 3 points. So does an 80. If we accept the premise that these scales have meaning, then an 80 would be better grouped with a 79 than with an 89, right?
A lot of what’s in the report matches what I’ve been saying for months. Fortunately, the authors have the professional credibility that an anonymous blogger can’t enjoy. They also have the research credentials to make the criticisms more pointed. They say intellectually what I’ve been trying to say passionately. They take their time saying what I usually try to cover in 500-800 words.
It would be a disservice to the authors to cut and paste the entire 32-page document here, but the whole thing is quoteworthy. It’s their work, not mine, and I absolutely love it.
So far, the Tulsa World has responded favorably to the report. The Oklahoman must still be reading it.
Do yourself a favor. Read it cover to cover. Share it. Prolifically.