
A Brief Primer in “Scholarly Jargon”

January 24, 2013

One prescient commenter on the Oklahoman’s editorial slamming the CCOSA/OSSBA-commissioned study of A-F Report Cards attributed the paper’s snarky tone to a lack of understanding. The writer stated, “Really dok, your inability to grasp the jargon of a scholarly report doesn’t mean it’s skewed.” I think this is part of the problem. Grades scaled A-F appeal most to people who want a simplistic answer to the key question, “How is my school doing?” Research done by real live scholars breaks that mold.

To help the editorial writers, I’ve captured ten statements from the report containing scholarly jargon and provided a brief explanation of each. Why ten? Because it’s a nice round number that people recognize. And that’s what really matters.

Statements

  1. The scores assigned to represent proficiency levels (0, .2, 1.0, 1.2) do not seem to correspond to any recognizable metric.
  2. The metric does not justify the mathematical manipulations performed in the A-F scaling.
  3. The use of proficiency levels rather than test scores in these computations introduces grouping error.
  4. Information is lacking regarding classification consistency.
  5. Attendance and graduation rates are known to be correlated with socioeconomic status.
  6. By not making explicit threats to the validity of report card grades, the OSDE misinforms the public about the credibility and utility of the A-F accountability system.
  7. There is a nonlinear relationship between proficiency level and growth since growth is restricted at the top.
  8. The A-F accountability system is susceptible to forms of “test score pollution.”
  9. The A-F accountability system stands in contrast with more comprehensive understandings of measuring organizational effectiveness.
  10. [E]xternal inducements to task performance reliably undermine motivation.

Explanations

  1. The idea of a recognizable metric is that the system would use some kind of scale that makes sense or sounds familiar. The A-F grades are recognizable. The scale used to assign value to performance, as the report indicates, is completely arbitrary.
  2. The SDE created a formula in which an advanced score is worth six times the value of a limited knowledge score. They manipulated the results to create a performance index that does not tell anybody how many students actually passed each test. And the justification for the formula is completely unclear. (The first sketch after this list shows the problem with made-up numbers.)
  3. A scatterplot I uploaded a while back, showing the distribution of Report Card GPAs by poverty level, gives a good visual explanation of grouping error; the second sketch after this list shows the same idea in code.
  4. The classification inconsistency comes from the fact that the test results come in before the performance levels are set. The four categories do not cover equal ranges of scores. For example, on one test, the range of scores for Proficient may span just a few responses from high to low. On the test for the same subject in the next grade, it may be much larger. On some tests, the range of scores for Advanced is very narrow. On others, it is quite wide. The third sketch after this list shows how measurement error makes these placements unstable.
  5. Statistically, correlation is the relationship between two variables. One common mistake is to say that correlation shows causation. It doesn’t necessarily. Poverty is highly correlated with other variables such as low school achievement, poor health, and family dysfunction. In this report, the researchers discuss correlation to explain why attendance and graduation rates are, to some extent, factors that schools can’t control. I would add that if a school is doing well in terms of student achievement and growth, is attendance really all that important? Yes, students should be in school whenever possible. If they aren’t, and they still do well, though, what is the huge concern?
  6. I would have rewritten this sentence. The authors don’t mean the SDE should make “explicit threats.” They mean, “There are threats to validity, and the SDE should state them explicitly.” Months of scrutiny have exposed many flaws in the methodology behind the grades. The SDE’s denial of those concerns and continued promotion of the grades misinform the public.
  7. Do you remember calculating slope in Algebra? Slope is the way to represent a linear relationship. Slope is also a way to explain correlation … rise over run … y = mx + b … But sometimes the relationship curves. Students scoring poorly have a lot more room to grow than students scoring near the top of the scale. Taking the average of student growth and applying it to all students misrepresents this reality. Badly. (The fourth sketch after this list puts numbers on this.)
  8. A huge problem with accountability systems is that they force schools to filter all curriculum decisions through various methods of test preparation. Nothing else gets through. Tests should sample what students know. Instead, they end up capturing the entirety of what has been taught. This is test score pollution.
  9. In short, the A-F Report Cards keep us from seeing whether or not schools are truly effective. They capture a snapshot of all the wrong things.
  10. I absolutely loved this sentence from page 23. It contests the idea that before A-F…before API…before NCLB, schools were complacent. Nothing could be farther from the truth. I’ve said repeatedly on this blog that schools try to improve continuously because the people working there care about their kids. Nobody wants to be embarrassed (or validated) by grades that misinform the public. Ultimately, when the teacher is with the class, that all goes out the window.
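
Since a couple of these concepts are easier to see than to read about, here are four tiny Python sketches, one each for statements 2, 3, 4, and 7. First, the performance index. The weights (0, 0.2, 1.0, 1.2) are the ones the report quotes; the two school profiles and the per-100 scaling are my own inventions, meant only to show how the index hides the pass rate.

```python
# Weights quoted in the report; everything else below is hypothetical.
WEIGHTS = {"unsatisfactory": 0.0, "limited": 0.2,
           "proficient": 1.0, "advanced": 1.2}

# School A: 90% of students passed. School B: only 70% did.
school_a = {"unsatisfactory": 10, "limited": 0, "proficient": 90, "advanced": 0}
school_b = {"unsatisfactory": 0, "limited": 30, "proficient": 0, "advanced": 70}

def performance_index(counts):
    """Weighted points per 100 students, one reading of the scaling."""
    total = sum(counts.values())
    return 100 * sum(WEIGHTS[lvl] * n for lvl, n in counts.items()) / total

def percent_passing(counts):
    """Share of students scoring Proficient or Advanced."""
    total = sum(counts.values())
    return 100 * (counts["proficient"] + counts["advanced"]) / total

for name, school in (("A", school_a), ("B", school_b)):
    print(name, performance_index(school), percent_passing(school))
# A 90.0 90.0
# B 90.0 70.0  <- identical indexes, very different pass rates
```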
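
Second, grouping error. The cut scores below are made up (the real ones vary by test and grade); the point is what happens when the underlying score gets thrown away and only the category kept.

```python
# Hypothetical cut scores, for illustration only.
CUTS = [(700, "advanced"), (660, "proficient"), (620, "limited")]
WEIGHTS = {"unsatisfactory": 0.0, "limited": 0.2,
           "proficient": 1.0, "advanced": 1.2}

def level(score):
    """Collapse a raw score into one of the four performance levels."""
    for cut, name in CUTS:
        if score >= cut:
            return name
    return "unsatisfactory"

for score in (659, 660, 661, 699):
    print(score, level(score), WEIGHTS[level(score)])
# 659 earns 0.2 points and 660 earns 1.0: one raw point moves a student's
# weight by 0.8, while 660 and 699, thirty-nine points apart, count the
# same. That discarded information is the grouping error.
```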
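
Third, classification consistency. The score, the standard error of measurement, and the cut score here are all hypothetical; the mechanism is not.

```python
# All three numbers are invented: a scale score of 664, a standard error
# of measurement of 12 points, and a Proficient cut score of 660.
score, sem, cut = 664, 12, 660

low, high = score - 2 * sem, score + 2 * sem  # a rough 95% score band
print(low, high, low < cut <= high)  # 640 688 True
# The band straddles the cut, so on a retest this student could land on
# either side of Proficient. When many students sit this close to a cut,
# category placements wobble: that is the classification consistency
# question the report says has gone unanswered.
```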
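
Fourth, the ceiling effect behind statement 7. The 990-point scale and the uniform 60-point gain are invented; the restriction at the top is the real phenomenon.

```python
TOP = 990  # hypothetical maximum scale score

def observed_growth(start, true_gain=60):
    """Growth the test can actually register, capped by the ceiling."""
    return min(true_gain, TOP - start)

for start in (500, 700, 900, 950, 980):
    print(start, observed_growth(start))
# 500 -> 60 ... 950 -> 40, 980 -> 10. Even if every student improves by
# the same amount, measured growth shrinks near the top of the scale, so
# the growth-versus-starting-score relationship bends. Averaging across
# students treats that curve like a straight line.
```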

The truth is that most of us aren’t professional researchers or scholars. Most Oklahomans aren’t even educators. That’s perfectly fine. We all speak a variety of languages that make sense in our own circles. I’m an educator and a researcher. I get what the report says. But even if I didn’t, I wouldn’t take the intellectually dishonest path of rejecting the findings. The researchers at OU and OSU are spot on with this report. I sense that most people get that, even if certain newspapers do not.

  1. SCM
    January 24, 2013 at 5:56 pm

    Information is lacking regarding classification consistency. What this really means is that your formula resulted in schools being placed in the correct group most of the time…that the groupings are reliable. One of the big problems here is that Oklahoma’s state tests have reliability problems — that’s why they don’t print score bands on your child’s test report. The score bands, for MANY students, would cross at least 2 of the score categories. And I’m not picking on J. Barresi here; Oklahoma’s tests have had reliability problems since they were created, but no one seems to care about that.

    This is one of the reasons it would have been better to create an A-F formula that used each child’s score instead of each child’s category. The category placements are really very reliable.


    • January 24, 2013 at 9:35 pm

      Thanks for the clarification. You definitely explained it better than I did.


  2. SCM
    January 24, 2013 at 6:13 pm

    Ack. Aren’t. The category placements AREN’T really very reliable.


  3. January 24, 2013 at 9:38 pm

    My biggest fear after posting this one was that someone would question my definition of the word “brief.”

