
A-F Proposed Revisions Part Four: Whole School Improvement

February 27, 2013

A quick review of where we are right now…

A couple of weeks ago, Superintendent Barresi misled parents into believing that the OU/OSU researchers had changed their minds and fallen in love with the A-F Report Cards.

Last Wednesday, a House committee voted 10-1 to abandon the adopted rules for A-F Report Cards.

Friday, the SDE issued new rules, along with a statement that they have heard all of our concerns.

Later Friday, I read the proposed rule changes and concluded that in some sections, nothing is changing. In others, things are getting worse.

Sunday, the Oklahoman ran an editorial saying that not everybody has a problem with the report cards. As evidence, they cited Northwestern economics professor David N. Figlio, who called our system “an exemplar of systems of its type and a model for other states and jurisdictions to follow.” Figlio is a favorite of Jeb Bush and the Foundation for Excellence in Education. He is also a researcher who has previously written critically of accountability systems:

There is good reason to believe that school accountability systems, both in Florida and nationally, evaluate schools very noisily….due to measurement problems, schools that are rewarded in systems such as the new federal program will tend to be punished in the next, suggesting that school rankings under these systems tend to be quite unstable….school grades under systems such as that introduced by the federal accountability plan are highly uncorrelated with any conception of school ‘value added’ that economists typically consider. Taken together, these papers indicate that school accountability measures such as the Florida system and the new federal system are largely unrelated to the school’s contribution to student performance and are likely measured with considerable noise. This inherent randomness, as well as the disconnection between school grades and what these grades seek to measure, provide the impetus for the current study.

He goes on to find that perceptions of school grades can impact housing prices and where people choose to relocate. So for all of us who have ever moved and looked for the “best schools” for our kids, we’ve probably been fooling ourselves with noise and inherent randomness. In truth, I can’t reconcile his quote from the Oklahoman with his statements above.

What will be interesting is to see how Superintendent Barresi responds to all of this criticism Thursday at the State Board of Education meeting. Discussion of the OU/OSU report is on the agenda pretty early. If I were a board member, I’d want an honest appraisal of the report and an explanation of Barresi’s comments to parents – something better than calling it a misunderstanding.

Now, back to the review of the proposed rule changes.

If the Bottom Quartile Growth section is frustrating to schools, the Whole School Improvement section is downright confusing. There are grade span differences. Some criteria are weighted more heavily than others. And the use of surveys for bonus points (now eliminated) made schools wonder if their time was just being wasted (it was).

Let’s start with the name of this section. Why is it Whole School Improvement? If you’ve seen the table of ten elementary schools I’ve used several times this week, you may have noticed that all had either a 95 or 96, which is simply a measure of attendance. That’s right, elementary schools got 33 percent of their A-F Report Card grade on one statistic that is largely attributable to factors outside of their control. This hardly captures the whole school climate and has nothing to do with improvement.

There are no rule changes listed for elementary schools, but the guidelines issued by the SDE in the fall indicate that for the 2012-13 school year, elementary schools will also have to include dropout rate as an indicator of Whole School Improvement. The dropout rate, however, will count for only four percent of this section. This means that a school with an A for attendance will likely have an A for whole school improvement, no matter what the dropout rate is. In other words, a new reporting measure has been added, but it will not change a school’s section grade or overall letter grade.
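The arithmetic behind that claim can be sketched in a few lines. Assuming the two elementary indicators are combined as a simple weighted average, with attendance implicitly carrying the remaining 96 percent (the function name and weights here are my own illustration, not the SDE’s published formula):

```python
def section_score(attendance_score, dropout_score,
                  attendance_weight=0.96, dropout_weight=0.04):
    """Weighted average of the two elementary indicators (weights assumed)."""
    return attendance_weight * attendance_score + dropout_weight * dropout_score

# A school with an A for attendance (95) and the worst possible
# dropout score (0) still earns roughly 91 for the whole section.
print(round(section_score(95, 0), 1))    # 91.2
print(round(section_score(95, 100), 1))  # 95.2
```

With a 96/4 split, the dropout indicator can swing the section by at most four points, which is why it cannot move an attendance-driven A off its grade.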

The revisions for middle schools leave intact the three criteria that were in place last year for Whole School Improvement: attendance, higher level coursework, and dropout rate. The proposed revisions clean up the coursework section so that schools receive more credit for students taking multiple advanced or honors courses.

The high school indicators for Whole School Improvement have undergone the greatest revisions, and they are not all bad. The two most notable changes are allowing schools to count more than one advanced course for students (as with middle school) and merging the AP performance criteria with the advanced coursework performance criteria. This includes the industry standard (Career Tech) component, which is a pretty good change. For some unknown reason, the ACT/SAT performance indicator will count the most recent test taken, instead of the highest one.

One fundamental problem with the rules – both as written before and as now revised – is that they do not assign weights to the different indicators in each section. While we knew last spring that high schools would have eight indicators, we were not aware that one of them would be worth 79 percent of Whole School Improvement and that the other seven would not collectively have the statistical power to override that impact. Apparently, even after state law passes and administrative rules are adopted, the SDE has the authority to do whatever it pleases. How’s that for accountability?

Meanwhile, we have until March 25th to comment on the proposed rules.
