
The SDE Doubles Down on Writing Scores

June 2, 2014

Last week and over the weekend, Oklahoma educators blew up social media with outrage over the score reports that schools have received for the fifth and eighth grade writing tests. In short, very few of us have any confidence that the CTB/McGraw-Hill temporary scorers applied the rubric correctly to the student responses. Some districts have written letters to the SDE asking for relief (and received no response). Others are in the process of doing just that.

In case you are unfamiliar with the problems we are seeing, I will summarize yesterday’s post on the subject.

  • The rubric does not seem to have been used correctly.
  • Most students received the same sub-score for all five writing skills.
  • Students who properly cited a prepared text received deductions.
  • The cost to re-score student responses is ridiculous.

Rob Miller also wrote extensively about the test results over the weekend.

Today, one educator decided to ask the SDE about the writing tests on their Facebook page.

[Image: Writing Tests question on the SDE’s Facebook page]

For those of you who can’t see images when my blog comes to your email, here’s the question:

SDE, will something be done about the issues being cited with the scoring of fifth and eighth grade writing tests? Since these scores will impact my son’s school grade (A-F), I am concerned. As an educator, I find it very unlikely that 81 percent of students received the same score across the board in the different subcategories. What does the SDE plan to do about this, especially given the excessive fee being asked for rescoring. Call me a cynic but who believes that the test co will admit errors?

And here was the SDE’s response:

Each test was assessed by two independent scorers — as well as a third when individual scores differed by more than one point on any trait — who employed a rubric made widely available to school districts and the public on the SDE website at sde.ok.gov. Initial reports from CTB/McGraw-Hill suggest that the test taker’s use of passage-based content and utilizing his or her own words were among the more prevalent issues in scoring of fifth- and eighth-grade writing tests.

If you need more information, you might want to check CTB’s Oklahoma testing manuals and contact information: http://www.ctb.com/netcaster/categoryIndex.html…

Another resource is the SDE page (scroll to the middle of the page) containing the rubrics and standards used for training and grading: http://ok.gov/sde/test-support-teachers-and-administrators.

Thanks. We already had that information. They’re giving us the equivalent of the KFC Double Down – which incidentally seems like a really bad idea. They have a losing hand and they’re trying to convince us otherwise.

This issue is quickly working its way into my Top 20 Countdown. Late last week, this seemed like it might be just another CTB SNAFU. Somehow, Barresi & Co. have conjured all the finesse and grace to which we have become accustomed and made it their own.

Real Oklahoma teachers have read the student responses, applied the rubrics (which they already know how to use), and found major disagreements with the results. Meanwhile, the SDE is defending the CTB process, which utilizes temporary scorers from across the country.

For the sake of argument, let’s look at what the aforementioned process would mean when applied to a student paper (using real scores from eighth-grade students).

                                  Reader #1          Reader #2          Average
Ideas & Development               2.0                3.0                2.5
Organization, Unity, & Coherence  2.0                3.0                2.5
Word Choice                       2.0                3.0                2.5
Sentences & Paragraphs            2.0                3.0                2.5
Grammar, Usage & Mechanics        2.0                3.0                2.5
Composite Score                   32                 45                 38
Score Range                       Limited Knowledge  Proficient         Proficient

The first reader found the response to be limited in quality. The second found it to be proficient. The student received a 2.5 for each of the five analytical traits and a score of Proficient. In this case, the student got the benefit of the doubt. Look at the following scores for three different students, however.

                                  Response #1        Response #2        Response #3
Ideas & Development               2.5                2.0                2.0
Organization, Unity, & Coherence  2.5                2.5                2.5
Word Choice                       2.0                2.0                2.5
Sentences & Paragraphs            2.0                2.0                2.5
Grammar, Usage & Mechanics        2.0                2.0                2.5
Composite Score                   35                 35                 36
Score Range                       Limited Knowledge  Limited Knowledge  Proficient

Riddle me this, Batman….

  • How is Response #3 only one Composite Score point better than Response #2?
  • How are #1 and #2 the same?
  • For Response #1, one reader thought the essay was proficient and the other thought it was limited. Why is it fair that the lower opinion prevailed?

These real student scores show the subjectivity and capriciousness of having disembodied readers score student responses from afar. They also show how convoluted the process of converting raw scores into composite scores is. The resulting information is not accurate, transparent, or clear.
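
For the programmatically inclined, here is a minimal sketch (in Python) of the two-reader process as the SDE described it in their Facebook response. The trait names come from the published rubric, and the third-reader rule is straight from their answer; everything else is my reading of it. Note what the sketch cannot include: the raw-to-composite conversion that turns the trait averages above into a 35 or a 36, because that step has never been made transparent.

    # A sketch of the two-reader scoring process described in the SDE's
    # Facebook response above. Trait names come from the published Oklahoma
    # writing rubric; the raw-to-composite conversion is not published,
    # so it is deliberately absent here.

    TRAITS = [
        "Ideas & Development",
        "Organization, Unity, & Coherence",
        "Word Choice",
        "Sentences & Paragraphs",
        "Grammar, Usage & Mechanics",
    ]

    def needs_third_reader(reader1, reader2):
        # Per the SDE, a third reader steps in only when the first two
        # differ by MORE than one point on any trait.
        return any(abs(reader1[t] - reader2[t]) > 1 for t in TRAITS)

    def trait_averages(reader1, reader2):
        # Each reported trait score is the average of the two readers'
        # whole-number rubric scores, which is how a paper lands on 2.5s.
        return {t: (reader1[t] + reader2[t]) / 2 for t in TRAITS}

    # The eighth-grade paper from the first table: straight 2s vs. straight 3s.
    reader1 = {t: 2.0 for t in TRAITS}
    reader2 = {t: 3.0 for t in TRAITS}

    print(needs_third_reader(reader1, reader2))  # False: one point apart, no arbiter
    print(trait_averages(reader1, reader2))      # every trait averages to 2.5

Run it and you get exactly the pattern in the score reports: two readers a full point apart on every trait, no third reader triggered, and a row of 2.5s that the unpublished conversion then sorts into Limited Knowledge or Proficient.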

The SDE’s terse response to the teacher’s question is yet another example of where their priorities lie. They trust the testing company – which has failed us repeatedly – more than they trust us. I know that painting the entire agency with such a broad brush oversimplifies this situation, but how are we possibly supposed to feel valued? This is another testing debacle. This is another piece of evidence that the administration – and the reformers’ obsession with all things testing – is an abject disaster.

Not that we needed it.
