Readers' Representative Journal

A conversation on newsroom ethics and standards


Times responds to criticism of teacher analysis

The Times has released the following statement in response to criticism of its Grading the Teachers project:

The Times has received several inquiries from readers about a study done at the University of Colorado’s National Education Policy Center regarding our series “Grading the Teachers.” In public statements, policy center officials have argued that the study invalidates a Times analysis of the effectiveness of some 6,000 elementary school teachers in the Los Angeles Unified School District. The policy center’s research does no such thing. Its study, released last week, shows only that its analysis, using somewhat different data and assumptions than we used, produced results somewhat different from our own.

In its press releases, the policy center’s officials have claimed that this discredits our work -- that the center’s analysis is right and therefore ours is wrong. This assertion does not stand up to scrutiny.

Value-added analysis involves taking hundreds of thousands of student test scores and analyzing them to determine how much of each student’s academic growth can be attributed to his or her teacher, controlling for factors that are beyond a teacher’s control such as poverty, the educational background of a student’s parents and class size. It is a powerful tool for accountability, providing a way to compare one teacher’s performance in the classroom with another’s. Experts in the field differ widely about how to do the analysis -- which variables, or mathematical terms, to crank into the equations and how much weight to give them. Over more than a year of work, The Times consulted many experts, then chose one experienced researcher, Richard Buddin of the RAND Corp., to conduct the analysis.
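For readers curious about what such an analysis looks like mechanically, here is a minimal toy sketch. It is not The Times’ or RAND’s actual model -- all numbers, controls and effect sizes are invented for illustration. It simulates students, regresses each student’s score growth on controls outside the teacher’s influence (prior score, a poverty indicator) plus one dummy variable per teacher, and reads the dummy coefficients off as the “value-added” estimates.

```python
import numpy as np

# Hypothetical illustration of a value-added regression (not the actual
# model used by The Times or RAND). Simulate five teachers with known
# effects, then try to recover those effects from noisy student data.

rng = np.random.default_rng(0)
n_teachers, students_per = 5, 200
true_effects = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])  # invented teacher effects

rows = []
for t, eff in enumerate(true_effects):
    prior = rng.normal(50, 10, students_per)       # prior-year score (control)
    poverty = rng.integers(0, 2, students_per)     # poverty indicator (control)
    # Score growth: controls + teacher effect + noise
    growth = 0.5 * prior - 3.0 * poverty + eff + rng.normal(0, 2, students_per)
    for p, pov, g in zip(prior, poverty, growth):
        rows.append((t, p, pov, g))

data = np.array(rows)
teacher = data[:, 0].astype(int)
X = np.column_stack([
    data[:, 1],                   # prior score
    data[:, 2],                   # poverty
    np.eye(n_teachers)[teacher],  # one dummy per teacher (no global intercept)
])
y = data[:, 3]

coef, *_ = np.linalg.lstsq(X, y, rcond=None)
est = coef[2:]            # estimated teacher effects
est -= est.mean()         # effects are identified only up to a constant
print("estimated teacher effects:", np.round(est, 2))
```

Real value-added models differ precisely in the choices this sketch makes arbitrarily: which controls enter the regression, how prior scores are modeled, and how noisy estimates for small classes are shrunk -- which is why different research teams can reach somewhat different ratings from similar data.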

After receiving Buddin’s analysis, The Times shared it with leading researchers, including skeptics of value-added methods, and incorporated their input. Our reporters then tested the results in the real world, visiting classrooms across Los Angeles to observe and interview teachers.

The policy center’s researchers did an analysis using a different formula and not surprisingly came up with different results. On that basis, the policy center’s publication director, Alex Molnar, has made the sweeping and false claim that the data used by The Times is “simply not capable of producing the teacher ratings” that we have published. That assertion comes despite the fact that the center’s analysis actually correlates with The Times’ work in the vast majority of cases.

Mr. Molnar’s claim boils down to this: Until a perfect value-added system is developed that everyone agrees upon, nothing should be published. We reject that idea.

We have said repeatedly in our stories and in our database that value-added analysis is not a perfect system. As we have written, even its strongest advocates say value-added should not be the sole measure of a teacher’s performance. But even as an imperfect system, it is far more rigorous, objective and useful to parents and others than current evaluations, in which an administrator typically spends a few minutes observing a class and then fills out a short form on which well over 90% of teachers are rated “satisfactory.” The value-added method works particularly well to identify the two groups one would most want to single out -- the most effective teachers and the least. This is information that we feel strongly the public should have access to.

Although the policy center’s statements imply that it analyzed exactly the same data that Buddin analyzed for The Times, its study in fact used 93,000 fewer student records, some 15% of the total. Why those records were excluded from their study is unknown -- the policy center’s researchers have not disclosed how they went about their work -- nor can it be determined what effect the exclusions had on their results.

The policy center’s researchers also based some of their conclusions on a different pool of teachers than the ones for whom we published scores. Their study analyzed scores for some 11,000 teachers, whereas we published only about 6,000 scores. We did not publish scores for any instructor who had taught fewer than 60 students -- a decision we made to enhance the reliability of the analysis. The lead researcher for the policy center’s study, Derek Briggs, told us in an e-mail last week that the 60-student minimum we used “serves to mitigate” some of his concerns about our analysis. But the policy center has not acknowledged that fact publicly.

Finally, a major source of funds for the policy center is the Great Lakes Center for Education Research and Practice, a foundation set up by the National Education Association and six major Midwestern teacher union affiliates. The NEA was one of the teacher union groups that backed an unsuccessful call for a boycott of The Times when “Grading the Teachers” was first published.

Although the policy center presents itself as a source of neutral scientific research, the language of its public statements has been anything but dispassionate, including a call for The Times not only to remove “Grading the Teachers” from our website, but to “apologize” for our work.

For years, school districts around the country, as well as academic experts, have conducted value-added analyses of teacher performance which they have kept secret. With “Grading the Teachers,” we put this information before the public, with ample explanation of the method’s limitations. That, we submit, is exactly what a newspaper should do. Mr. Molnar would like to put this information back behind locked doors. We disagree.



Comments (6)

Even if all parties involved agreed that the original gauge was valid, the Times was still dead wrong to engage in the project. Determining or approving the design of a gauge to measure teachers, or any other profession outside their own, is beyond the scope of journalists. It's outside their skillset, beyond their role -- just all-around wrong.

The role of the press is to serve as messenger. With the teacher-rating series, the Times set itself up as judge and jury -- and, some would say, executioner. That's out of bounds. The Times needs to press reset; rethink its ethics, professional standards and journalistic role; and recant and apologize. The Times brings shame on the entire already-battered news industry with this mistake.

-- Caroline Grannan, former San Jose Mercury News copy editor

As an education researcher with a high level of interest in value-added research, I agree with the National Education Policy Center. The LA Times should take down the so-called value-added model (VAM) database & apologize to LAUSD teachers & to the public. Many concerned members of the education research community have been saying this since before the LA Times published the alleged "effectiveness scores," but our warnings & our advice were ignored. This is not just a debate between technicians over how to analyze a set of data. This cannot be done with reliability & validity. VAM is itself based on fatally flawed assumptions about the relationship between student achievement & teacher effectiveness. The LA Times should never have undertaken this project. News organizations should not be pretending to conduct education research, since they are not bound by the same ethical, professional & technical standards as legitimate researchers. Now it is time for the Times to have the journalistic & social integrity to admit their mistakes & set about the business of repairing the damage done to education in general & to LAUSD teachers.

Shame on "journalists" Jason Felch and Jason Song and the LA Times for all of this. Felch and Song actually interviewed me in preparation for this nonsense (as I am a professor/researcher and research what value-added models can and CANNOT do) and marginalized every single research-based recommendation and word of caution I had to offer. Now I see why they could not include any of what research continues to offer on the topic as this would have derailed and deflated their entire series.

They should have known better and in many ways they did -- they just chose not to report about it. This is what should be at the heart of this follow-up on ethics and standards.

In addition, if I am one of the leading value-added researchers or skeptics to whom you are referring in the following statement -- "After receiving Buddin’s analysis, The Times shared it with leading researchers, including skeptics of value added methods, and incorporated their input" -- they did not include one piece of the advice I provided during approximately one hour of communicating with them over the phone about value-added. In addition, they never shared the Buddin results in advance of the release of any articles.

You also note they provided "ample explanation of the method’s limitations." Take a walk through the research literature on this topic and you too will see everything they in fact marginalized and conveniently left out. This too was discussed during this phone conversation. Take a look at our phone records and be the judge regarding everything that has not been disclosed to your readers.

And now to twist, again, what research suggests and what in particular was evidenced in this Colorado study, again to support their/your a priori conclusions, is beyond unprofessional and unethical. And to oversimplify Dr. Molnar’s so-called “position” that “Until a perfect value-added system is developed that everyone agrees upon, nothing should be published” is ludicrous. Everyone in the research community knows a perfect value-added system will never exist, as a perfect standardized test on which these models are built will also never exist. Such simplistic thinking is doing nobody any favors, although it is definitely bolstering nonsense like this.

All you have done is perpetuate, across the country this time, the reasons why the media often cannot be trusted. I stand witness to this here.

Educational research on a major issue like value added is conducted by researchers trained in data analysis. Their work is published in peer-reviewed journals. The Times hired a single economist, not particularly distinguished or neutral, to perform its analysis. The newspaper has no competence to judge the quality of Buddin's work nor to judge the quality of the Colorado study, for that matter. (The Colorado study is newsworthy. Did the LA Times report the story?) Moreover, there are distinguished scholars of educational evaluation in Los Angeles, such as Eva Baker, director of the Center for Study of Evaluation at UCLA. She is on record -- in print -- saying that value added is a highly flawed evaluation method. Why didn't the Times hire someone of her stature, even if only to critique its study? Or why -- in true scientific manner -- did it not hire multiple scholars to do the evaluation? Answer: it has a political axe to grind.

I worked as a teacher for a short period of time.
I understand some teachers are not effective. I agree that these teachers need to be removed.
What I witnessed first hand that made me run for the hills--- was the parents.
Parents fail to teach their children to push themselves, to be respectful and to be engaged, to such a degree that it amounts to abuse and neglect. School was a social outlet for children and a babysitting facility for parents who want to do nothing.
I was subbing in a junior high science class in Los Angeles and the students did not know what an atheist was. I was beyond floored. Nor did they know what Mandelbrot was, what mutations were, or what continent Libya was on. One student was absent that day and told me the next day that he knew what an atheist was. He happened to be my most curious student, mature, with solid parents. He also did not fit in very well and often would be picked on. The cruelty, neglect and entitlement made me sick. I have zero respect for current K-12 school institutions.
My point is that learning has many factors, not just teachers, and you are leaving out important factors that do indeed affect outcomes -- for instance, eating square meals, not having to feel chronically vigilant because of a violent neighborhood, and having interested, stable, functioning parents.

I really got a laugh when I checked the NEPC website and found out that under their menu item "think tank review" there is an item "Bunkum Awards". All their Bunkum Awards went to studies that didn't support the teachers unions' point of view, mainly studies done by "conservative" think tanks. Now that I see where they're getting their funding I guess I shouldn't be surprised. As my grandfather used to say, "I may have been born yesterday but I wasn't born late last night". There's no way that I'm buying that their critique of the LA Times study is objective and unbiased. One really obvious error they made in their own study is in their so-called Rothstein "falsification analysis". The obvious reason for the results they got in this analysis is that the hardest-to-teach kids are being assigned to the worst teachers, not that there is some other mysterious variable affecting the Times value-added results. The maddening thing about this is that the schools, and frequently the parents, all know who the worst teachers are, and generally the parents of the hardest-to-teach kids don't have the resources to advocate on behalf of their kids. NEPC and their teacher union paymasters should be ashamed of themselves.
