
Times responds to criticism of teacher analysis



The Times has released the following statement in response to criticism of its Grading the Teachers project:

The Times has received several inquiries from readers about a study done at the University of Colorado’s National Education Policy Center regarding our series “Grading the Teachers.” In public statements, policy center officials have argued that the study invalidates a Times analysis of the effectiveness of some 6,000 elementary school teachers in the Los Angeles Unified School District. The policy center’s research does no such thing. Its study, released last week, shows only that an analysis using somewhat different data and assumptions than ours produced somewhat different results.


In its press releases, the policy center has claimed that this discredits our work -- that the center’s analysis is right and therefore ours is wrong. This assertion does not stand up to scrutiny.

Value-added analysis involves taking hundreds of thousands of student test scores and analyzing them to determine how much of each student’s academic growth can be attributed to his or her teacher, controlling for factors that are beyond a teacher’s control, such as poverty, the educational background of a student’s parents and class size. It is a powerful tool for accountability, providing a way to compare one teacher’s performance in the classroom with another’s. Experts in the field differ widely about how to do the analysis -- which variables, or mathematical terms, to crank into the equations and how much weight to give them. Over more than a year of work, The Times consulted many experts, then chose one experienced researcher, Richard Buddin of the RAND Corp., to conduct the analysis.
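To make the method concrete, here is a minimal sketch of the general idea -- not the model Buddin built for The Times, whose specification we have not reproduced here. It regresses each student’s current test score on his or her prior-year score and a few controls, with one dummy variable per teacher; each teacher’s coefficient is then read as that teacher’s estimated value added. The choice of controls, the variable names and the synthetic data are all illustrative assumptions.

```python
# A minimal sketch of value-added estimation on synthetic data.
# NOT the RAND model used for "Grading the Teachers"; the controls
# and numbers below are illustrative assumptions only.
import numpy as np

rng = np.random.default_rng(0)

n_teachers, students_per_teacher = 5, 40
n = n_teachers * students_per_teacher

teacher = np.repeat(np.arange(n_teachers), students_per_teacher)
prior_score = rng.normal(50, 10, n)           # last year's test score
poverty = rng.binomial(1, 0.4, n)             # e.g., a free-lunch indicator
class_size = rng.integers(20, 35, n).astype(float)

true_effect = rng.normal(0, 3, n_teachers)    # hidden per-teacher effects
score = (0.8 * prior_score - 2.0 * poverty - 0.1 * class_size
         + true_effect[teacher] + rng.normal(0, 4, n))

# Design matrix: the controls plus one dummy column per teacher
# (no intercept, so each teacher coefficient is that teacher's effect).
dummies = np.eye(n_teachers)[teacher]
X = np.column_stack([prior_score, poverty, class_size, dummies])

# Ordinary least squares; the last n_teachers coefficients are the
# estimated value-added scores.
beta, *_ = np.linalg.lstsq(X, score, rcond=None)
value_added = beta[3:]

for t, (est, true) in enumerate(zip(value_added, true_effect)):
    print(f"teacher {t}: estimated {est:+.2f}  (true {true:+.2f})")
```

The disagreements the statement describes map directly onto this sketch: researchers can reasonably differ over which control columns belong in the design matrix, how many years of prior scores to include and how to weight them, and different choices yield somewhat different teacher estimates from the same underlying records.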

After receiving Buddin’s analysis, The Times shared it with leading researchers, including skeptics of value-added methods, and incorporated their input. Our reporters then tested the results in the real world, visiting classrooms across Los Angeles to observe and interview teachers.

The policy center’s researchers did an analysis using a different formula and, not surprisingly, came up with different results. On that basis, the policy center’s publication director, Alex Molnar, has made the sweeping and false claim that the data used by The Times is “simply not capable of producing the teacher ratings” that we have published. That assertion comes despite the fact that the center’s analysis actually correlates with The Times’ work in the vast majority of cases.

Mr. Molnar’s claim boils down to this: Until a perfect value-added system is developed that everyone agrees upon, nothing should be published. We reject that idea.

We have said repeatedly in our stories and in our database that value-added analysis is not a perfect system. As we have written, even its strongest advocates say value-added should not be the sole measure of a teacher’s performance. But even as an imperfect system, it is far more rigorous, objective and useful to parents and others than current evaluations, in which an administrator typically spends a few minutes observing a class and then fills out a short form on which well over 90% of teachers are rated “satisfactory.” The value-added method works particularly well to identify the two groups one would most want to single out -- the most effective teachers and the least effective. This is information that we feel strongly the public should have access to.


Although the policy center’s statements imply that it analyzed exactly the same data that Buddin analyzed for The Times, its study in fact used 93,000 fewer student records -- some 15% of the total. Why those records were excluded is unknown -- the policy center’s researchers have not disclosed how they went about their work -- nor can it be determined what effect the exclusion had on their results.

The policy center’s researchers also based some of their conclusions on a different pool of teachers than the ones for whom we published scores. Their study analyzed scores for some 11,000 teachers, whereas we published only about 6,000 scores. We did not publish scores for any instructor who had taught fewer than 60 students -- a decision we made to enhance the reliability of the analysis. The lead researcher for the policy center’s study, Derek Briggs, told us in an e-mail last week that the 60-student minimum we used “serves to mitigate” some of his concerns about our analysis. But the policy center has not acknowledged that fact publicly.

Finally, a major source of funds for the policy center is the Great Lakes Center for Education Research and Practice, a foundation set up by the National Education Association and six major Midwestern teacher union affiliates. The NEA was one of the teacher union groups that backed an unsuccessful call for a boycott of The Times when “Grading the Teachers” was first published.

Although the policy center presents itself as a source of neutral scientific research, the language of its public statements has been anything but dispassionate, including a call for The Times not only to remove “Grading the Teachers” from our website but to “apologize” for our work.

For years, school districts around the country, as well as academic experts, have conducted value-added analyses of teacher performance, which they have kept secret. With “Grading the Teachers,” we put this information before the public, with ample explanation of the method’s limitations. That, we submit, is exactly what a newspaper should do. Mr. Molnar would like to put this information back behind locked doors. We disagree.
