July 17, 2003

Do charter schools work?

A new study from the Manhattan Institute claims to show that charter school students perform better on tests than comparable groups of students at nearby public schools. Authors Jay P. Greene, Greg Forster, and Marcus Winters focused on measuring the "effect that all the untargeted charter schools...had on test scores when compared to the performance of their closest regular public schools":

These results showed a positive effect from charter schools and were statistically significant, but the size of the effect was modest. Untargeted charter schools made math test score improvements that were 0.08 standard deviations greater than those of neighboring public schools during a one year period. For a student starting at the 50th percentile, this would amount to a gain of 3 percentile points, to the 53rd percentile. Reading test score results showed 0.04 standard deviations greater improvement in untargeted charter schools than in their closest regular public schools over the course of a year, a benefit that would raise a 50th-percentile student 2 percentile points to the 52nd percentile.

Because these results are statistically significant, we can be very confident that the charter schools in our study did have a positive effect on test scores...
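For what it's worth, the percentile figures in the quoted passage follow directly from the reported effect sizes if you assume test scores are roughly normally distributed (an assumption the percentile conversion implicitly makes). Here's a minimal sketch of that arithmetic; the use of Python and scipy is my choice, not anything from the report:

```python
# Convert a standard-deviation effect size into percentile movement,
# assuming approximately normal test scores.
from scipy.stats import norm

for subject, effect_sd in [("math", 0.08), ("reading", 0.04)]:
    # A student at the 50th percentile sits at z = 0; shift by the effect
    # size and convert back to a percentile with the normal CDF.
    new_percentile = norm.cdf(0.0 + effect_sd) * 100
    print(f"{subject}: a {effect_sd} SD gain moves a 50th-percentile student "
          f"to roughly the {new_percentile:.0f}th percentile")
```

Running this reproduces the 53rd- and 52nd-percentile figures, so the arithmetic itself checks out; the question is what the effect sizes actually measure.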

Bas Braams is contesting their conclusions on his new blog, Scientifically Correct. He claims that the authors confused overall school improvement with individual student improvement, and that this mistake invalidates their conclusions:

Now I remind the reader of the concept of value-added assessment... Value-added assessment employs, ideally, performance data on individual pupils over multiple years, and looks at improvements over time. It is a way to factor out the effects of different student backgrounds, because these are, one assumes, reflected in their initial test performance. If one doesn't have data on individual pupils then one can use data on grades within a school. In that case the incremental performance that one cares for is that between a certain grade in one year and the next higher grade the next year, on the assumption that this involves approximately the same student population.

Greene et al. could certainly have used such grade-to-grade value added assessment in their work. However, they did something different. They look at the overall performance of each school in one year and compare it to the overall school performance the next year...And so, the authors completely confuse a measure of the improvement of schools with a measure of the improvement of student performance. Charter schools could be performing wonderfully or they could be performing dismally relative to public schools in improving student performance, and it would not be seen on the whole school year to year test score improvements that are the basis of this report. It would be seen, of course, in traditional value-added assessment at the pupil or grade level.

In other words, Bas is claiming that the authors used the wrong indicator to measure actual student gains over time, and that student gains are the real measure here of how charter schools are doing. In the comments section on Bas's page, though, some disagreement has emerged over whether the authors of the report were using school-level test data, which wouldn't capture true test-score gains, or grade-level test data, which might.
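To make Bas's distinction concrete, here's a toy sketch with entirely invented numbers (two grades, two years; nothing here reflects the actual data in the report). The tracked cohort gains substantially, yet the whole-school average is flat because the incoming class happens to be weaker:

```python
# Toy illustration of school-level change vs. grade-to-grade value added.
# Keys are (year, grade); values are made-up average scale scores.
scores = {
    (2001, 4): 40.0, (2001, 5): 50.0,   # year one
    (2002, 4): 35.0, (2002, 5): 55.0,   # year two: last year's 4th graders are now 5th graders
}

# Whole-school year-to-year change (what Bas says the report measured):
school_2001 = (scores[(2001, 4)] + scores[(2001, 5)]) / 2
school_2002 = (scores[(2002, 4)] + scores[(2002, 5)]) / 2
print("School-level change:", school_2002 - school_2001)   # 0.0

# Grade-to-grade value added: the same cohort followed from grade 4 to grade 5
# (in practice this gain would be compared across schools, not used raw).
cohort_gain = scores[(2002, 5)] - scores[(2001, 4)]
print("Cohort (value-added) gain:", cohort_gain)            # 15.0
```

In this made-up school the students are learning plenty, but the whole-school year-to-year comparison shows nothing, which is exactly the gap Bas is pointing at.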

Commenter Richard Phelps notes that it's problematic from the start to assume that students at untargeted charter schools are, in fact, equivalent to their public school counterparts, since by definition, if they are enrolled in charter schools rather than the local public schools, there is some self-selection process in place. Parents who enroll their kids in charter schools may be more aware of, or more concerned about, their kids' educational progress, which might in and of itself foster educational gains.

As for me, well, my alarm bells went off at the phrase:

Because these results are statistically significant, we can be very confident that the charter schools in our study did have a positive effect on test scores.

Ahem. When two groups are observed to have significantly different means on the dependent variable of interest, we can be reasonably confident that the two groups are in fact different. We cannot conclude with any confidence that membership in one group causes, or has an effect on, the dependent variable of interest. Had children been randomly assigned to charter schools or public schools, then we could talk about cause and effect. The "untargeted" definition that the authors follow does not produce random assignment.

Richard's point is particularly pertinent here: the self-selection involved in charter school enrollment means the "experimental" and "control" groups may not have been equivalent to begin with, so we wouldn't expect their mean scores to be equal even if charter schools had no effect at all.
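To put the same point in simulation form, here's a hedged sketch (entirely invented, not a model of the actual study) in which charter enrollment has zero effect on the outcome, yet the group means still differ "significantly" because higher-achieving students are more likely to enroll:

```python
# Self-selection producing a "significant" difference with no causal effect.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)
n = 5000
prior = rng.normal(0, 1, n)                        # unobserved prior achievement
# Probability of enrolling in the hypothetical charter school rises with
# prior achievement -- that's the self-selection.
enrolls_charter = rng.random(n) < 1 / (1 + np.exp(-prior))
# Outcome depends on prior achievement plus noise; NO charter effect at all.
outcome = prior + rng.normal(0, 1, n)

gap = outcome[enrolls_charter].mean() - outcome[~enrolls_charter].mean()
t, p = ttest_ind(outcome[enrolls_charter], outcome[~enrolls_charter])
print(f"mean gap = {gap:.2f}, p = {p:.2g}")
```

The tiny p-value in that toy world reflects who enrolls, not what the schools do, which is why statistical significance alone can't carry the causal claim quoted above.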

Posted by kswygert at July 17, 2003 10:53 AM