More Accountability Analysis
I have great readers; let me start with that. They are astute, and they have initiative.
I think the people who frequent this blog understand that I love a number of things. Kids are first on that list. That’s why I became a teacher. That’s why I’ve dedicated my career to education. I also love the public school system, which is why I’m always interested in making it better. I reject poorly-researched ideas that are more political than productive. Still, I have always believed that this profession is too important for those of us in it ever to be content that we are doing enough. The children are too important. We can always do better.
This leads me to another thing I love: the use of data to contribute to a narrative. I always believe that good numbers tell us something. I also always believe that numbers are never the entire story.
This takes me back to several things I’ve learned from my readers over the past few days. As you’ll recall, on November 6th, when the State Department of Education released the A-F Report Cards for schools (and initially for districts), I provided statistics on how the scores broke down by district. Harold Brooks left a comment on that post with even more detail.
I looked at everything labelled HS, MS or JHS, or ES, and binned everything else together into another category. Most of the “other” schools are single schools in small towns that I assume are elementary, but I’m not going to go through all of that. “Other” also includes schools whose names don’t follow the standard naming convention used in most of the grade file.
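For readers who want to replicate that kind of binning, here is a minimal sketch of how it might be done in Python with pandas. The file and column names ("a-f_report_cards.csv", "School Name", "Grade") are placeholders of mine, not the actual layout of the state grade file, and the name-suffix rules are only an approximation of what Brooks describes.

```python
import pandas as pd

# Hypothetical file and column names; the actual SDE grade file
# almost certainly uses a different layout.
df = pd.read_csv("a-f_report_cards.csv")  # assumed columns: "School Name", "Grade"

def bin_school(name: str) -> str:
    """Bin a school into HS, MS/JHS, ES, or Other based on its name."""
    upper = name.upper()
    if upper.endswith(" HS"):
        return "HS"
    if upper.endswith(" MS") or upper.endswith(" JHS"):
        return "MS/JHS"
    if upper.endswith(" ES"):
        return "ES"
    return "Other"  # anything that doesn't follow the usual naming convention

df["Type"] = df["School Name"].apply(bin_school)

# Raw counts of each letter grade within each school type
counts = pd.crosstab(df["Grade"], df["Type"])
print(counts)
```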
The results. First, the raw counts:

| Grade | HS | MS/JHS | ES | Other |
|-------|-----|--------|-----|-------|
| A | 21 | 18 | 108 | 21 |
| B | 173 | 81 | 218 | 45 |
| C | 91 | 105 | 250 | 44 |
| D | 21 | 50 | 168 | 34 |
| F | 11 | 28 | 99 | 34 |
Second, the percentages within each kind of school (e.g., the numbers under HS are the percentages of high schools receiving that grade):

| Grade | HS | MS/JHS | ES | Other |
|-------|------|--------|------|-------|
| A | 41.6 | 6.4 | 12.8 | 11.8 |
| B | 34.1 | 28.7 | 25.9 | 25.3 |
| C | 17.9 | 37.2 | 29.7 | 24.7 |
| D | 4.1 | 17.7 | 19.9 | 19.1 |
| F | 2.2 | 9.9 | 11.7 | 19.1 |
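Those percentages are just each raw count divided by its column total. Continuing the hypothetical sketch above (reusing the `counts` table), the conversion is a one-liner:

```python
# Divide each column by its total so the grades within each school type sum to ~100%
pct = counts.div(counts.sum(axis=0), axis=1) * 100
print(pct.round(1))
```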
HS is the easiest type of school in which to get an A or B (roughly 3 out of 7 high schools earned an A, and 3 out of 4 earned a B or better). There’s not a lot of difference among the rest of the categories, but MS/JHS is the hardest in which to get an A.
A good school is a good school. I don’t believe that Oklahoma’s high schools are that much better than Oklahoma’s middle and elementary schools. Last year’s grades showed the same tendency. However, under the previous accountability system (yes, there was one), the opposite was true: elementary schools consistently scored much higher.
Another reader pointed me to this spreadsheet showing all school districts in Oklahoma, their student counts, and the percentages of students eligible for free and reduced-price lunch. The table also has bilingual student counts, which is information I previously didn’t have. Last week, I ran correlations between school grades (and district grades) and poverty. Yet another reader suggested that I run those correlations again, this time using only districts with more than 1,000 students.
| Comparison | Correlation |
|------------|-------------|
| All District Grades to Poverty | -.52 |
| Large District Grades to Poverty | -.80 |
| Large District Grades to Bilingual | -.32 |
| Large District Grades to Poverty + Bilingual | -.76 |
| Small District Grades to Poverty | -.51 |
| Small District Grades to Bilingual | -.10 |
| Small District Grades to Poverty + Bilingual | -.45 |
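For anyone curious how correlations like these might be computed, here is a minimal sketch in pandas. The file and column names are placeholders, the letter grades are assumed to have been converted to numbers (e.g., A = 4 through F = 0), and the "Poverty + Bilingual" rows above may well have been calculated differently (as a combined or multiple correlation) than the simple pairwise correlations shown here.

```python
import pandas as pd

# Hypothetical file/column names for the reader-supplied district spreadsheet.
# "grade" is assumed to already be numeric (e.g., A = 4 ... F = 0).
districts = pd.read_csv("district_data.csv")

large = districts[districts["enrollment"] > 1000]
small = districts[districts["enrollment"] <= 1000]

for label, subset in [("All", districts), ("Large", large), ("Small", small)]:
    r_pov = subset["grade"].corr(subset["pct_free_reduced"])  # grades vs. poverty
    r_bil = subset["grade"].corr(subset["pct_bilingual"])     # grades vs. bilingual
    print(f"{label}: poverty r = {r_pov:.2f}, bilingual r = {r_bil:.2f}")
```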
Both factors – poverty and bilingual enrollment – seem to impact large districts to a greater extent. Statistically speaking, there are a couple of caveats here. One is that the bilingual counts include a lot of districts reporting none, and zeros skew results (as they do with student grades). Another is that there were only 131 large districts (still a reasonably large sample) compared with 386 small ones.
My takeaway from this is that while the report cards tell the story of schools’ accomplishments only to a limited extent, and while my earlier analysis built on that, there is always more to learn if you’re willing to unpack the data and find out what is happening. Among our largest districts, we see more variance in socio-economic levels. We also know that urban poverty and rural poverty are not identical.
I can’t state enough how much I appreciate the work that went into compiling this data.
Finally (for this post), I want to point to a graphic that I saw posted several times on Facebook and Twitter yesterday. The article, “How Poverty Impacts Students’ Test Scores, In 4 Graphs,” shows that nationally, students in poverty struggled more than those not in poverty on the National Assessment of Educational Progress (NAEP) exams in 4th and 8th grade reading and math.
Look for yourself. There’s always more information out there, if you’re just willing to “research” it.