Posts Tagged ‘Writing Tests’

6-14-15 #oklaed Chat: Teaching and Assessing Writing

June 13, 2015

I don’t want to spend much time talking about the fact that for the second straight year, Oklahoma’s fifth and eighth grade writing assessments won’t be used in calculating the A-F report cards. I was appreciative when Superintendent Hofmeister made the decision to throw the scores out, although there was a small amount of backlash from her usual critics.

How much did we spend on that test?

I don’t know. How much did you spend on the food that went bad in your fridge during the last 12 months? Just because you spent the money doesn’t mean you have to eat the rancid cheese. It will make you sick, for crying out loud!

To me, this sequence of events highlights the Legislature’s failure to act in any meaningful way to deal with education issues during the 2015 session. They’ve ordered the SDE to study the A-F Report Cards. Meanwhile, we’ll still receive them.

They also put off the elimination of any state tests until the new math and English/language arts standards are in place. I can see the logic there, to an extent. On the other hand, I don’t care what standards we have in place; the writing tests we currently give students have always been – and will always be – a complete waste of money. I also – as you might have gathered last year – have a complete lack of faith in the ability of the testing industry to assess student writing ability.

That’s enough about that. As I have mentioned before, I became a teacher because of my love of writing.


Even now, as an adjunct professor, my favorite part of teaching is reading what my students write. I have strong opinions about writing instruction by language arts teachers, but I also have strong opinions about other teachers’ expectations for student writing. Some of the best writing instruction I received in high school came from my tenth-grade U.S. History teacher, whom I seriously underappreciated at the time.

The ability to write effectively is a key to unlocking more doors as adults. Dare I say that it’s critical to college and career readiness? Maybe I should change it to what Tyler Bridges suggested yesterday: future ready.

With that in mind, Sunday night’s #oklaed chat, which I will be hosting, covers the instruction and assessment of writing. Below is a preview of the questions; the first one is huge and will likely require follow-up discussion.

Q1: How should writing instruction look at the various grade levels?

Q2: Should writing expectations vary from subject to subject in school?

Q3: How has writing instruction changed as a result of technology?

Q4: What mode of writing (descriptive, informative, narrative, persuasive/argumentative) is most critical for students to learn?

Q5: How could blogging or tweeting be used in the classroom?

Q6: What is the best way to provide grammar instruction to students in order to improve writing?

Q7: Should writing and reading be taught as a combined discipline or two separate subjects?

Q8: What would it take for a state writing assessment to mean something to students, teachers, and parents?

See you on Twitter Sunday night at 8:00! Remember to use the #oklaed hashtag with all of your responses.

A Sense of Urgency

April 20, 2015

When I was 16, I began my official days as a wage earner at Mazzio’s Pizza on the north side of Norman. At the very minimum wage of $3.35/hour, my goal was to make enough money in one shift to pay for the gas it took my 1974 LTD to drive there and back from the south side of Norman. Sure, gas was something like 79 cents a gallon, but this was one of the great American land yachts. By my admittedly sketchy math, it wasn’t worth my time to work fewer than four hours at a time.

Once I was at work, I’m not sure I was even worth what they were paying me. I remember my very first night there. I was washing dishes with one of those hoses that hangs down over an industrial-size sink, just casually rinsing a rack of plates that was ready to go into the big, bad commercial dishwasher. The assistant manager walked up behind me and asked, “Rick, do you know what the phrase ‘sense of urgency’ means?”

I said something along the lines of, “I think so.” She said, “Good, because if you want to have a second night here, you’ll show me.” Well, I did want a second night. I was 16. I had a gas-guzzling car. Most importantly, Mazzio’s had a promotion featuring cool, colorful sunglasses that the rest of my high school surely would mistake for Ray Bans™. I spent the rest of that night washing dishes, busing tables, and mopping floors like a madman. I had entered numbers and words onto a W-4 for the first time and I was not to be denied.

That was 1986, and to this day, the phrase “sense of urgency” makes me think of my first night of a seven-year run in food service. It’s also the phrase that has come to mind frequently during the past week as I have watched Joy Hofmeister work to right a wrong.

In case you missed it, last Monday night, social media was buzzing with the information that most students taking online state tests were receiving instant scores and performance levels upon submitting their last answer. While I’ve always wondered why getting scores back to the schools takes so long after testing, I wasn’t exactly looking for an instant answer either.

After attending a work event, Joy noted on Twitter that she wasn’t ok with this practice either.

What I found out several days later was that she called testing staff into the office that night and immediately tasked the testing company, Measured Progress, with fixing it.

That’s a sense of urgency.

What I also didn’t know at that time was that this new feature of online testing was a surprise left for all of us by the previous administration. In fact, it’s right in the 2013 Request for Proposals (RFP) for the testing contract.

Oklahoma’s online testing program stems from the need for students and educators to receive the results of testing quickly as required by law. The online system must provide to students immediate raw score results (and performance levels for pre-equated tests) and complete student results within two weeks for schools and districts. The supplier should provide a detailed description of the system that addresses each of the topics below. In addition, the SDE prefers an online management system that enrolls and tracks paper and online testers within the same program (p. 20).

How did we miss that at the time? I guess we were all too busy looking up the new testing vendor to notice that the state was asking for new features. Measured Progress actually had to write new code to make this feature possible. I don’t know whether it was a large or small undertaking, but they did it, meeting the terms of the contract. When Joy asked them to undo it as soon as possible, they did – in under a week.

I won’t get into the horror stories of students seeing the word Unsatisfactory on the screen and bursting into tears. I will say that fixing this problem is a good cap to a solid first 100 days by the new state superintendent. She ended double testing in junior high math. She eliminated the writing field test. She announced the mode of writing for February’s fifth and eighth grade tests. She’s lobbied the legislature for testing relief and money for teacher pay. She actually showed up at the education rally, and other than a slam poet from Mustang, she stole the show.

If the first 100 days of her administration have been marked by urgency, I hope the next 1000 will be marked by persistence. There are many more battles to fight. Many are much larger. All involve the same goal – doing right by the students of this state.

Reason #8 to Pick a New State Superintendent: The 2014 Writing Test Debacle

Yesterday’s post covered a comedy of errors that ended in a giant snafu – the 2013 A-F Release. After putting it on the blog, I settled in for a quiet night of living vicariously through my blogger friend, Rob Miller. Rob, it seemed, was fortunate enough to attend the debate in Tulsa between incumbent State Superintendent of Public Instruction Janet Costello Barresi and her primary challenger, Joy Hofmeister.

There were several telling moments from the debate, and I fully expect a recap from Rob on his blog later. In the meantime, here are a few of my favorite tweets (his and others) from the evening.

Here Barresi admits that a week before Governor Fallin signed HB 3399, which overturned the Common Core, she was aware of the decision. That probably would have been good information to have shared with those working in her curriculum department at the SDE. Until the final hour leading up to the governor’s announcement, they were still working with schools and lobbying hard to encourage a veto of the bill. Fortunately, the SDE has a solution handy.

Oh, my mistake. That’s not the SDE. That’s the Fake OKSDE Twitter account. He or she (what’s up with anonymity, people?) has been away from Twitter since the fall. Last night’s return was welcome and timely. It also came in handy when Barresi made a couple of serious mistakes.

Apparently, counting to two is hard.

This was the best one. She simply has no clue what she’s talking about. Fake OKSDE summed it up thusly:

Fake OKSDE Microfibers

After seeing that, I couldn’t help myself and added this one:


I look forward to Rob’s recap of the evening, but if you want to read more about how little our state superintendent understands about the testing required of our students, check out the Tulsa World.

To refresh your memory, here’s how the Top 10 in our countdown began:

#10 – Ignoring Researchers

#9 – The A-F Rollout

#8 – The 2014 Writing Test Debacle

It’s always hard to know exactly where to place a current event within a list of things that have happened over time. Is this so high because the problem is ongoing? Hard to say. When it started, I figured it was steering towards the honorable mention. Before we look at this year’s test problem, let’s go back to October 2013. That’s probably when we should have known this wouldn’t go well.

Writing Assessment Update

OK State Dept of Ed sent this bulletin at 10/07/2013 09:02 AM CDT

Dear Superintendent, Principal and District Test Coordinator,

It has been brought to our attention that some Grade 5 and Grade 8 Writing Assessments need to be scored by a third reader and will likely receive a new writing score.  The original two readers did not agree sufficiently to produce a valid score for the students’ writing.  You may or may not have students who will receive new scores.  If you do, the students whose papers are being re-scored are posted on the State Department of Education Single Sign On Site. Click below the chalk board in the Accountability A to F box. Next, click on the reports tab found on the blue bar near the top of the screen. The students are listed by school.

Please know that the impacted students will receive new writing assessment scores in the middle of October.

Yes, a week before the SDE released our A-F grades last fall, they were aware that CTB still needed to re-score some of our writing tests. With that in mind, why is it so difficult for the SDE to take seriously the irregularities that school districts are pointing out to them? Maybe I’m getting ahead of myself. Let’s fast-forward to the first of this month. When schools around Oklahoma started looking at their fifth and eighth grade writing scores they noticed some serious problems.

  • The rubric does not seem to have been used correctly.
  • Most students received the same sub-score for all five writing skills.
  • Students who properly cited a prepared text received deductions for plagiarism.

Several districts contacted the SDE about the concerns. One teacher even reached out on their Facebook page. This was the response:

Each test was assessed by two independent scorers – as well as a third when individual scores differed by more than one point on any trait – who employed a rubric made widely available to school districts and the public on the SDE website. Initial reports from CTB/McGraw-Hill suggest that the test taker’s use of passage-based content and utilizing his or her own words were among the more prevalent issues in scoring of fifth- and eighth-grade writing tests.

The SDE public relations firm then linked to technical manuals.

Teachers who had taken the time to review student responses using the rubric were finding many instances in which their own professional judgment could not reconcile with the assigned score. Unfortunately, besides the three problems listed above, there is a fourth.

  • The cost to re-score student responses is ridiculous.

As one district put it to the SDE:

The fee of $125 is exorbitant. Scorers paid by CTB receive a low hourly wage and have to keep a relatively high production rate during the time they are under temporary assignment with the testing company. While we understand that some processing costs exist, none of that would explain the $125 fee. By our most conservative estimates, this amounts to a 90% mark-up of CTB’s out-of-pocket expenses. In other words, the fee is in place as a deterrent to keep districts from asking for tests to be re-scored.

The way I see it, this was a missed opportunity by the agency and Barresi’s campaign to side with schools and hammer another nail into the testing company. Unfortunately, the SDE just doesn’t take schools seriously. Teachers can’t possibly be right. The testing company knows the kids better than the teachers, right?

That might be true if it weren’t for the fact that the people grading our writing tests are temporary hires working for about $11.05 per hour. They don’t even have to have any background in education (which obviously isn’t a deal breaker for the SDE).

Earlier this week, several of the complaining districts received nearly identical three-page responses from the SDE. Lisa Chandler, one of the agency’s newest hires, quoted heavily from the technical manuals for interpreting the rubric, as well as the training protocol for scorers. She spent all that time telling the schools nothing. In short, just shut your mouths and move on with your lives.

Fortunately for those impacted by this issue, this isn’t the end of it. School districts are concerned about having the tools to give students and parents feedback about writing. Teachers and principals are concerned about the impact the flawed scores might have on teacher evaluations and school report card grades.

This issue is recent, but worthy of the top ten, partly because of what it symbolizes. As we continue struggling with an obtuse leadership that refuses to take an inept testing company to task, we face the larger burden of the fact that neither entity treats teachers with respect. We burn through a lot of taxpayer money, and the only proof that the scores are accurate is that they tell us so. If we question it, we’re accused of protecting the status quo.

Eleven days left.

Glad you could make it back, @FakeOKSDE. You’re sorely needed.

Just About Anybody can Score the Writing Tests

Today, the job announcement for CTB test scorers has been circulating on Facebook and Twitter. Thankfully. I was afraid I’d come home from work and have nothing to write. After all, tomorrow morning’s post for the #16 spot in the countdown is nearly complete.

Title: Evaluator with Bachelor degree
Description: Welcome! You have taken the next step towards putting your bachelor’s degree to work! Apply for a great opportunity with Kelly Services and CTB/McGraw-Hill.

As an Evaluator, you will be joining a group of people dedicated to improving education for our nation’s children. We invite you to continue this on-line information and screening process to determine if this is the right job for you. Qualified applicants will be invited to schedule an in-person interview. We have positions available at CTB’s scoring locations in Sacramento, CA, Indianapolis, IN, and Lake Mary, FL.

Evaluators assign scores to student responses for various assessment tests including, but not limited to, reading and language arts, mathematics, science, and social studies for grades K-12. Training is provided by content specialists using established scoring guidelines (rubrics) for reading and assigning scores to open-ended response items. Student responses take the form of essays, graphs and diagrams, to give just a few examples.

Work is performed at one of CTB’s scoring centers (locations to follow this introduction). Student responses are scanned and are viewed on a standard personal computer. Basic computer skills and experience, and ability to use a mouse and keyboard are required.

Position Requirements: Applicants must have knowledge of standard English language writing conventions and/or other content specific knowledge such as science, mathematics, social studies, etc. Teachers and individuals with education backgrounds are encouraged to apply; however, teaching experience is not required.


Evaluators work on a project basis, with most projects lasting from several days to several weeks. Projects run from approximately March through June. Work hours are Monday through Friday from 8:30 AM to 4:30 PM or from 6:00 PM to 10:30 PM. Once assigned to a specific project, Evaluators must commit to completing that project. Employees may take time off between projects if desired, or ask to be reassigned to the next available project if available.


As an Evaluator, you will be joining a group of people dedicated to improving education for our nation’s children. Many Kelly employees return year after year to work at CTB and we’re told by our employees that they enjoy the spirit of camaraderie while at work. Kelly employees can also take advantage of promotional opportunities within the CTB workforce, or put their skills to work at other Kelly customer locations during CTB’s down cycles.

Educational Requirements: Qualified applicants must possess a minimum of a bachelor’s degree. Verification of your degree will be required.
Salary: $11.05 per hour
Location: Sacramento/Rancho Cordova, CA

If you have a bachelor’s degree, can move to Rancho Cordova, and are dedicated to improving education for our nation’s children (and can survive on $11.05/hour), you can score for CTB. While the individuals responding to this announcement might not necessarily be the ones who work with our writing tests, the job announcement still gives a glimpse into the process. In spite of this, the SDE still thinks that these scorers are more qualified to tell us how our children are performing than their teachers.

Given the complexity of the fifth grade and eighth grade writing prompts, I don’t feel that great about that.

The SDE Doubles Down on Writing Scores

June 2, 2014

Last week and over the weekend, Oklahoma educators blew up social media with outrage over the score reports that schools have received for the fifth and eighth grade writing tests. In short, very few of us have any confidence that the CTB/McGraw-Hill temporary scorers correctly applied the rubric to the student responses. Some districts have written letters to the SDE asking for relief (and received no response). Others are in the process of doing just that.

In case you are unfamiliar with the problems we are seeing, I will summarize from my post yesterday on this subject.

  • The rubric does not seem to have been used correctly.
  • Most students received the same sub-score for all five writing skills.
  • Students who properly cited a prepared text received deductions.
  • The cost to re-score student responses is ridiculous.

Rob Miller also wrote extensively about the test results over the weekend.

Today, one educator decided to ask the SDE about the writing tests on their Facebook page.

Writing Tests on Facebook

For those of you who can’t see images when my blog comes to your email, here’s the question:

SDE, will something be done about the issues being cited with the scoring of fifth and eighth grade writing tests? Since these scores will impact my son’s school grade (A-F), I am concerned. As an educator, I find it very unlikely that 81 percent of students received the same score across the board in the different subcategories. What does the SDE plan to do about this, especially given the excessive fee being asked for rescoring. Call me a cynic but who believes that the test co will admit errors?

And here was the SDE’s response:

Each test was assessed by two independent scorers — as well as a third when individual scores differed by more than one point on any trait — who employed a rubric made widely available to school districts and the public on the SDE website. Initial reports from CTB/McGraw-Hill suggest that the test taker’s use of passage-based content and utilizing his or her own words were among the more prevalent issues in scoring of fifth- and eighth-grade writing tests.

If you need more information, you might want to check CTB’s Oklahoma testing manuals and contact information:…

Another resource is the SDE page (scroll to the middle of the page) containing the rubrics and standards used for training and grading:

Thanks. We already had that information. They’re giving us the equivalent of the KFC Double Down – which incidentally seems like a really bad idea. They have a losing hand and they’re trying to convince us otherwise.

This issue is quickly working its way into my Top 20 Countdown. Late last week, this seemed like it was possibly just another CTB SNAFU. Somehow, Barresi & Co. have conjured all the finesse and grace to which we have become accustomed and made it theirs.

Real Oklahoma teachers have read the student responses, applied the rubrics (which they already know how to use), and found major disagreements with the results. Meanwhile, the SDE is defending the CTB process, which utilizes temporary scorers from across the country.

For the sake of argument, let’s look at what the aforementioned process would mean, when applied to a student paper (using real scores from 8th grade students).

                                    Reader #1           Reader #2    Average
Ideas & Development                 2.0                 3.0          2.5
Organization, Unity, & Coherence    2.0                 3.0          2.5
Word Choice                         2.0                 3.0          2.5
Sentences & Paragraphs              2.0                 3.0          2.5
Grammar, Usage & Mechanics          2.0                 3.0          2.5
Composite Score                     32                  45           38
Score Range                         Limited Knowledge   Proficient   Proficient

The first reader found the response to be limited in quality. The second found it to be proficient. The student received a 2.5 across the board for each of the five analytical traits, and a score of proficient. In this case, the student got the benefit of the doubt. Look at the following scores for three different students, however.

                                    Response #1         Response #2         Response #3
Ideas & Development                 2.5                 2.0                 2.0
Organization, Unity, & Coherence    2.5                 2.5                 2.5
Word Choice                         2.0                 2.0                 2.5
Sentences & Paragraphs              2.0                 2.0                 2.5
Grammar, Usage & Mechanics          2.0                 2.0                 2.5
Composite Score                     35                  35                  36
Score Range                         Limited Knowledge   Limited Knowledge   Proficient

Riddle me this, Batman….

  • How is Response #3 only one Composite Score point better than Response #2?
  • How are #1 and #2 the same?
  • Why is it fair that one reader for Response #1 thought the essay was proficient and the other thought it was limited, and the second reader’s opinion prevailed?
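As a reference point, the two-reader mechanics the SDE describes (two independent reads, trait scores averaged, a third reader only when any trait differs by more than one point) can be sketched like this. This is a minimal illustration of the stated rule, not CTB’s actual system, and the composite conversion is omitted because it isn’t public:

```python
# Minimal sketch of the two-reader scoring flow described by the SDE.
# Only the trait averaging and the third-read trigger are modeled;
# the raw-to-composite conversion is CTB's and is not public.

TRAITS = ["Ideas & Development", "Organization, Unity, & Coherence",
          "Word Choice", "Sentences & Paragraphs",
          "Grammar, Usage & Mechanics"]

def score_response(reader1, reader2):
    """Average two readers' 1.0-4.0 trait scores; flag any third-read trigger."""
    averages = {}
    third_read_needed = False
    for trait, s1, s2 in zip(TRAITS, reader1, reader2):
        if abs(s1 - s2) > 1.0:  # a third reader only when the gap exceeds one point
            third_read_needed = True
        averages[trait] = (s1 + s2) / 2
    return averages, third_read_needed

# The paper in the first table: the readers disagree by exactly one
# point on every trait, so the rule never brings in a third reader.
averages, third_read_needed = score_response([2.0] * 5, [3.0] * 5)
```

Notice what the rule implies: a Limited Knowledge read and a Proficient read of the same paper can stand without any review, because the third reader is triggered only by a gap of more than one point on a single trait.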

These real student scores show the capriciousness of having disembodied readers scoring student responses from afar. They also show how convoluted the process of converting raw scores into composite scores is. The resulting information is not accurate, transparent, or clear.

The SDE’s terse response to the teacher’s question is yet another example of where their priorities lie. They trust the testing company – which has failed us repeatedly – more than they trust us. I know that painting the entire agency with such a broad brush oversimplifies this situation, but how are we possibly supposed to feel valued? This is another testing debacle. This is another piece of evidence that the administration – and the reformers’ obsession with all things testing – is an abject disaster.

Not that we needed it.

Writing Tests Gone Awry

This story is too new to make the Top 20 Countdown that I started earlier today, but in time, it may rise to that level. I have received messages from administrators in two school districts sharing letters they have sent to Superintendent Janet Barresi and the SDE about the questionable scores received on the 2014 fifth and eighth grade writing tests. Rob Miller has already covered this issue thoroughly on his blog. I won’t rehash all of his talking points, but I will touch on some of them. Suffice it to say that at least three large districts are scratching their heads over the scoring process.

This is from District #1:

We have serious concerns with the state’s application of the writing rubric. It appears that readers looked at a paper and assigned it a number that they input in all the sub scores. When papers were re-scored by local teachers, administrators and literacy experts, the scores among the sub scores varied greatly. Students could produce a paper that had good mechanics, sentences, paragraphs, spelling, punctuation, etc. and lack important aspects about citation and coherence. Others presented good arguments and citations but did so with run on sentences and poor spelling. Needless to say, in our scoring, it was rare for a paper to receive the same sub score across the entire rubric. That being said, we see approximately 80% of 8th grade scores and 60% of 5th grade scores coming back with no variation across the five writing traits in the rubric. The problems we see with these scores make us question the use of the rubric at all.

CTB officials informed district test coordinators at their meeting on May 28, 2014 that the writing tests were scored to determine a percentage of “plagiarism.” This was the first mention of a reduction in scores due to a “certain percentage of plagiarism.” The actual percentage used was not shared with the attendees but was promised to be provided at a later date. We have grave concerns about this aspect of scoring because the students were asked to cite text evidence in their essays. The fifth grade test instructions stated, “Be sure to state your opinion and support it using information presented in both passages.” The eighth grade test instructions stated, “Be sure to state a claim and address an opposing claim using evidence presented in both passages.”

District #2 covered some of the same ground, and then added this:

With these fundamental concerns in mind, we will be requesting that a considerable percentage of our tests be re‐scored. We do not, however, feel that the district should be liable for these costs. The fee of $125 is exorbitant. Scorers paid by CTB receive a low hourly wage and have to keep a relatively high production rate during the time they are under temporary assignment with the testing company. While we understand that some processing costs exist, none of that would explain the $125 fee. By our most conservative estimates, this amounts to a 90% mark-up of CTB’s out-of-pocket expenses. In other words, the fee is in place as a deterrent to keep districts from asking for tests to be re-scored.

Our immediate plan is to continue reviewing our student responses and compiling a list of those that we wish to have re-scored. Our request to you is that we not be charged for the effort. The dedicated teachers of this district are reviewing these responses on their own time. At the very least, CTB could do the same.

The critical points here seem to be:

  • The rubric does not seem to have been used correctly.
  • Most students received the same sub-score for all five writing skills.
  • Students who properly cited a prepared text received deductions.
  • The cost to re-score student responses is ridiculous.

On the first point, Rob took a good look at the rubric.

There are five areas scored on the writing rubric. Both the fifth and eighth grade rubrics for the “transitional CCSS writing test” include the following scored standards. The scoring “weights” for each standard are also listed. I will come back to this in a minute because this is where things start to get fishy.

Ideas and Development—30%
Organization, Unity, and Coherence—25%
Word Choice—15%
Sentences and Paragraphs—15%
Grammar and Usage and Mechanics—15%

Both writing rubrics are on the OSDE website and can be viewed (5th) HERE and (8th) HERE.

Let’s get back to the scoring. Each of the five standards is graded on a scale of 1.0 to 4.0 in 0.5 increments. Again, using the 755 scores that I have at my disposal, let me show you how the scores for the 8th grade test break down at my school. The lowest score possible is a 15 and the highest score is a 60.

At first glance, it appears that the scores are derived by combining the point totals from each standard and multiplying by three. I have bolded those scores where this rule seems to apply. It is also evident that this is not always the case.

Total score:
5 = 15
5.5 = 24
6.5 = 25
7.5 = 29
8.5, 9.0, or 9.5 = 30
10.0 = 32
10.5 = 35
11.0 = 35 or 36 (36 is proficient score)
11.5, 12.0 = 36
12.5 = 38
13.0 = 37 (only one of these)
13.5 = 41 or 42
14.0 = 41 or 42
15 = 45
16 = 47
16.5 = 48
17.5 = 52
18.0 = 54
19.5 = 56
20 = 60

It is obvious from this chart that the weights discussed above WERE NOT USED, or were used haphazardly. Any score of 12.0 earned a 36 regardless of how the points were distributed. Yet in one case a score of 11.0 earned a passing score of 36 with individual standard scores of 3/2/2/2/2, while another 11.0 (2/3/2/2/2) earned a Limited Knowledge score of 35.

However, a 10 always earns a 32, a 15 always earns a 45, and so on for most of the scores. The only exceptions were for scores of 11.0 (35 or 36), 13.5 (41 or 42), 14.0 (also 41 or 42).

Also note the odd fact that a score of 7.5 earns a 29, while an 8.5, 9.0, or 9.5 earns only one more point (30). Suffice it to say, this doesn’t seem to make much sense.
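A quick back-of-the-envelope check makes the point concrete. If the published weights were applied, a 3/2/2/2/2 paper and a 2/3/2/2/2 paper could not land on the same composite, while a flat sum treats them identically. The two linear scalings below are my own illustrative assumptions, not CTB’s actual conversion:

```python
# Compare a flat (unweighted) composite with one that applies the
# published trait weights. Both are scaled to the 15-60 range used on
# the score reports; the exact conversion CTB used is not public.

WEIGHTS = [30, 25, 15, 15, 15]  # Ideas, Organization, Word Choice,
                                # Sentences & Paragraphs, Grammar (percent)

def unweighted_composite(traits):
    """Flat sum of the five 1.0-4.0 trait scores, times three (15-60)."""
    return sum(traits) * 3

def weighted_composite(traits):
    """Weighted average of the trait scores, scaled to the same 15-60 range."""
    return sum(w * t for w, t in zip(WEIGHTS, traits)) * 15 / 100

a = [3.0, 2.0, 2.0, 2.0, 2.0]  # the higher score on the 30% trait
b = [2.0, 3.0, 2.0, 2.0, 2.0]  # the higher score on the 25% trait
```

The flat sum gives both papers a 33; the weighted version gives 34.5 and 33.75. Identical composites for different distributions of the same total are exactly what you would expect if the weights were never applied.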

On the second point, it seems pretty absurd that most students would receive the exact same score for each writing trait. It could happen for some, but not for 81% of the responses, as it did at Jenks Middle School. Just from what I’ve seen on Facebook and Twitter this weekend, this is not an isolated problem.

Let me say this another way: they’re not just picking on Jenks this time!

My only explanation for this is that the scorers are rushed. They read a response, develop an overall impression, and then assign points – in many cases giving the essay a 2.0 all the way across (which seems to be the most common score).

For one trait in particular, Sentences and Paragraphs, here are the bullet points for a response receiving a score of two:

  • Limited variety of sentence structure, type, and length
  • Several fragments or run-ons
  • Little or no attempt at paragraphing

Teachers looking over the images of their student responses are adamant that these statements are not accurate descriptors of what they are seeing. An essay lacking in ideas and development might very well have appropriate use of Sentences and Paragraphs.

As for the first district’s concerns about plagiarism, apparently every single fifth and eighth grade language arts teacher in the state misunderstood the instructions. Believe it or not, the SDE and/or CTB were unclear about something. I’d be more willing to believe that the scorers (who are temporary laborers) had no clue what to do with cited information. Maybe their training prior to scoring is inadequate.

Rob is right. The teachers and administrators up in arms throughout the state are right too. What is wrong, however, is the expectation that school districts generate a purchase order and gamble on having the tests re-scored. At that price, though, why would anybody risk it?

In case you haven’t noticed, the ramifications of bad A-F Report Card grades can be huge. They can force a good school to jump through countless hoops for years – hoops that really don’t foster school improvement. With the crazy change to define Full Academic Year as beginning Oct. 1, the elimination of modified assessments for special education students, and the number of high-achieving students who received exemptions from state tests due to 2013 legislation, many grades and subjects are seeing lower test scores in 2014.

Students deserve accurate scores. So do schools. And they shouldn’t have to pay out their eyeballs for it.
