Another Look at District Report Cards
Readers commenting on Friday’s blog made two good, critical observations and asked one really good question.
- Correlation does not equal causation.
- District GPA cannot be considered a continuous variable.
- How does the distribution of district grades correlate to school size?
First, Joe Love made the point that correlation does not equal causation. This is probably the most fundamental thing to understand when you hear any statistical claim. My all-time favorite example is that students who eat breakfast are more likely to be successful in school. The truth is that students who come from homes with adequate resources are likely to have the following factors in place: better nutrition, greater access to health care, parents (or siblings) who read to them, appropriate clothing and shelter, and high expectations. Better nutrition is likely important to school success, but it is typically a byproduct of the extent to which all household needs are met. Of all these variables, household income typically produces the strongest correlation with school success. So yes, I would argue that socioeconomic status has a causal relationship with student achievement (and therefore, with school report card grades).
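To see how a confounder works, here is a minimal sketch in Python. All the numbers are synthetic and purely illustrative (household income in thousands, an arbitrary score formula); nothing here is real district data. The point is that "eats breakfast" never appears in the score formula, yet it still correlates with the score because income drives both:

```python
import random

random.seed(0)

# Synthetic households: income (the confounder) influences both
# breakfast habits and achievement scores.
n = 500
income = [random.gauss(50, 15) for _ in range(n)]          # household income ($k)
breakfast = [1 if x + random.gauss(0, 10) > 50 else 0      # eats breakfast?
             for x in income]
score = [0.8 * x + random.gauss(0, 8) for x in income]     # achievement score

def pearson(xs, ys):
    """Plain Pearson correlation coefficient."""
    m = len(xs)
    mx, my = sum(xs) / m, sum(ys) / m
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Breakfast correlates with scores even though it never enters the
# score formula -- but income correlates more strongly still.
print(round(pearson(breakfast, score), 2))
print(round(pearson(income, score), 2))
```

Run it and breakfast shows a clearly positive correlation with the score, while income shows a stronger one, which is exactly the pattern described above.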
Next, Kathy points out that on the scatterplot, all the grade point averages line up in rows, and that they represent more of a rank ordering of districts than a continuous variable. She is correct. Honestly, it's one of the biggest criticisms I hear from principals: they want to know why an 89 is treated the same as an 80. If schools (and districts) received points based on where they fall within a letter grade range, the scores would spread out more. Some B's would become A's, and some C's would become B's. We couldn't allow that to happen, though.
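The banding effect is easy to sketch. Assuming a standard ten-point grade scale (the cutoffs below are my assumption, not published formula details), every score in a band collapses to the same grade points, which is why the GPAs stack up in discrete rows:

```python
def letter_points(score):
    """Map a 0-100 school score to report-card grade points.
    Assumes a conventional ten-point scale; the real cutoffs may differ."""
    if score >= 90:
        return 4.0
    if score >= 80:
        return 3.0
    if score >= 70:
        return 2.0
    if score >= 60:
        return 1.0
    return 0.0

# An 89 and an 80 earn identical points, while a 90 jumps a full point.
print(letter_points(89), letter_points(80), letter_points(90))  # 3.0 3.0 4.0
```

Averaging these banded values across a district's schools produces the row-aligned "GPA" Kathy noticed, not a truly continuous measure.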
The question was an interesting one to me, in part because I know a lot of small-school teachers, principals, and superintendents. I did the same thing as before, matching districts by GPA and school size, then running a Pearson correlation (but not the $1.7 billion kind). Recall from Friday that the correlation between free and reduced lunch participation rate and GPA was -0.44, which would be considered moderate. This time the correlation was 0.03, which is completely negligible. This tells me that large districts are no more or less likely than small ones to receive a particular letter grade. It also tells me that if I ran a multiple regression model using both poverty and district size, the contribution of size would be negligible.
On a slightly related note, I thought the Tulsa World did a good job this morning discussing the real intent behind assigning letter grades to schools: humiliation. Meanwhile, the Oklahoman points out that the two districts receiving F’s face no sanctions. Both are worth a read.