
L.A. Times stands by its teacher ratings



An article in last Monday’s Times has come under fire from critics who say it misrepresents the results of a review of the “value-added analysis” of L.A. Unified teachers that The Times published in print and online last August.

The review, conducted by the National Education Policy Center at the University of Colorado at Boulder, looked at the LAUSD data that The Times used in its “Grading the Teachers” series. The Feb. 7 article by Jason Felch said the review “confirms the broad conclusions of a Times analysis of teacher effectiveness in the Los Angeles Unified School District while raising concerns about the precision of the ratings.”


The policy center issued a news release taking issue with the article, saying its researchers believed The Times’ teacher-effectiveness ratings were based on “unreliable and invalid research.” Therefore, the release continued, the study “confirms very few of The Times’ conclusions.”

Several readers e-mailed The Times, questioning the reporting.

The article “distorted the study’s findings for self-serving purposes,” said one reader.

“It smacks of either shock journalism or a deliberate attempt to mislead the public on behalf of big business and privatizers,” said another.

Readers raised two basic questions about the Colorado study and The Times’ handling of it: Did the article accurately reflect the findings of the study? Does the study invalidate the “Grading the Teachers” series?

The article said this about the study: The authors largely confirmed The Times’ findings for the teachers classified as most and least effective. But the authors also said that slightly more than half of all English teachers they examined could not be reliably distinguished from average. The general approach used by The Times and the Colorado researchers, known as “value added,” yields estimates, not precise measures.
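For readers unfamiliar with the technique, a value-added model is at bottom a regression: it predicts each student’s test score from prior scores and treats the remaining differences among classrooms as a “teacher effect.” The following is a minimal sketch in Python, with assumed column names and a deliberately simple one-prior-year specification; it is not the model Buddin or the Colorado researchers actually ran.

```python
# Minimal value-added sketch, for illustration only. The column names
# ('score', 'prior_score', 'teacher') are assumptions, not the actual
# LAUSD data layout or either study's specification.
import pandas as pd
import statsmodels.formula.api as smf

def value_added_estimates(df: pd.DataFrame) -> pd.DataFrame:
    """Regress current scores on prior scores plus teacher indicators."""
    fit = smf.ols("score ~ prior_score + C(teacher)", data=df).fit()
    # Each C(teacher) coefficient is that teacher's estimated effect;
    # its standard error is the "estimates, not precise measures" caveat.
    return pd.DataFrame({
        "effect": fit.params.filter(like="C(teacher)"),
        "std_err": fit.bse.filter(like="C(teacher)"),
    })
```

A teacher whose estimated effect is small relative to its standard error cannot be reliably distinguished from average, which is the study’s point about slightly more than half of the English teachers.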

The article listed the percentages of teachers who the study said had been incorrectly labeled as more effective or less effective. Altogether, 22% of English teachers and 14% of math teachers ended up on different sides of the “effective” line in the center’s study as compared with The Times’ analysis.

National Education Policy Center publications director Alex Molnar said the article should have included an additional finding:


NEPC researchers demonstrated that the inclusion of three additional sets of variables in the model [The Times] used -- a longer history of a student’s test performance, peer influence, and school -- leads to dramatic changes in teacher ratings. For reading test outcomes in particular, as many as half of all teachers would be rated differently.
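Molnar’s complaint is about model specification: add controls such as a longer score history or peer influence, and the rankings can reshuffle. One way to quantify that, sketched below with hypothetical column names, is to count how many teachers cross the average line when the richer specification is used. (The school variable is omitted from the sketch because, with each teacher in a single school, it would be collinear with the teacher indicators.)

```python
# Sketch: how much do ratings change when the model adds variables?
# 'prior1', 'prior2' (a longer score history) and 'peer_mean' (peer
# influence) are hypothetical stand-ins for the NEPC's added controls.
import pandas as pd
import statsmodels.formula.api as smf

def teacher_effects(df: pd.DataFrame, formula: str) -> pd.Series:
    return smf.ols(formula, data=df).fit().params.filter(like="C(teacher)")

def share_rated_differently(df: pd.DataFrame) -> float:
    base = teacher_effects(df, "score ~ prior1 + C(teacher)")
    rich = teacher_effects(df, "score ~ prior1 + prior2 + peer_mean + C(teacher)")
    # "Rated differently" here means the two models put a teacher on
    # opposite sides of the median estimated effect.
    return ((base > base.median()) != (rich > rich.median())).mean()
```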

On what basis did the article say that the Colorado study “confirms the broad conclusions” of The Times’ earlier work?

“The huge public-policy question that folks have been arguing about since we first published our ratings is whether there is such a thing as a ‘teacher effect’ that can be measured statistically -- whether teachers have a significant impact on what their students learn or whether student achievement is all about demographics, differences among schools, family background and other factors outside of teachers’ control,” said Assistant Managing Editor David Lauter, who oversees the California reporters and editors.

“The Colorado study comes down on our side of that debate. Their study said the teacher impact that they found was actually ‘slightly larger’ than the effect found by Dick Buddin, the economist who did the underlying work for The Times.

“For parents and others concerned about this issue, that’s the most significant finding: the quality of teachers matters. So although they disagree with us about how to measure the teacher effect, it was entirely accurate to say that their study confirmed some parts of our work and criticized others.”

Molnar also questioned the assignment of Felch, one of the reporters on “Grading the Teachers,” to cover the center’s study that “directly criticized the research” used in the earlier reporting.


Lauter said that he’d considered the appearance of conflict, but that “it seemed to me that any Times reporter who wrote about the study could be accused of a conflict. So, it seemed to me we were best off having the person who understood the subject best write about it.” However, Lauter acknowledged, “maybe that was the wrong call.”

Does the Colorado study invalidate The Times’ original series?

The Colorado researchers took data from the Los Angeles school district on student test scores and ran it through a different value-added model than the one used by Buddin.

When the Colorado researchers compared the results of their model with the results of Buddin’s model, they found that the two sets of ratings “correlate” with each other at 0.92 for math scores and 0.76 for English. But they also found that as many as half the English teachers might have ended up in different categories under their model than under the model The Times used.
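The comparison itself is straightforward arithmetic: correlate the two sets of teacher effects, then count how many teachers switch sides of the average line. A sketch, with hypothetical series names standing in for the two models’ output:

```python
# Sketch of the two-model comparison. 'buddin_fx' and 'nepc_fx' are
# hypothetical pandas Series of teacher effects indexed by teacher;
# they stand in for the two studies' actual outputs.
import pandas as pd

def compare_ratings(buddin_fx: pd.Series, nepc_fx: pd.Series) -> dict:
    both = pd.concat({"a": buddin_fx, "b": nepc_fx}, axis=1).dropna()
    reclassified = (
        (both["a"] > both["a"].median()) != (both["b"] > both["b"].median())
    )
    return {
        "correlation": both["a"].corr(both["b"]),
        "share_reclassified": reclassified.mean(),
    }
```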

It’s important to note that the Colorado study was not based on entirely the same data Buddin used. Roughly 93,000 student test scores that Buddin used were excluded from the Colorado study -- about 15% of the data. The Colorado researchers did not contact The Times to compare data sets before preparing their report, and they have not explained the reasons for the dropped data. They also have not disclosed basic information about some of their statistical techniques that would allow outside researchers to assess their work.

In at least one case, the Colorado study contained a finding that the researchers concede overstated the differences between their model and the one used by The Times. The researchers based their work on a pool of roughly 11,000 teachers. But The Times published scores for only 6,000 teachers because the project excluded scores from any teacher who had not taught at least 60 students. When informed of that discrepancy over the weekend, Derek Briggs, the lead author of the Colorado study, said in an e-mail to The Times that the use of the 60-student limit “serves to mitigate” some of the shortcomings his study had alleged. Briggs and his colleagues, however, have not made that information public.
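The 60-student minimum is a simple data rule. A sketch of how such a publication threshold might be applied, assuming one row per student with a 'teacher' column (not The Times’ actual pipeline):

```python
# Sketch of a publication threshold like The Times' 60-student rule.
# Assumes one row per student with a 'teacher' column; the names are
# illustrative only.
import pandas as pd

def publishable_teachers(df: pd.DataFrame, min_students: int = 60) -> pd.Index:
    counts = df.groupby("teacher").size()
    return counts[counts >= min_students].index
```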

It also is worth noting that the policy center is partly funded by the Great Lakes Center for Education Research and Practice, which is run by the top officials of several Midwestern teachers unions and supported by the National Education Assn., the largest teachers union in the country and a vociferous critic of value-added analysis.


The fact that the two models differ does not tell you which is “right” and which is “wrong,” Lauter said. As Professor Thomas Kane of Harvard noted in a Washington Post article about the Colorado study, “we still don’t know yet which [model] was the right one.”

In the more than five months since the publication of “Grading the Teachers,” The Times has received suggestions and critiques from many experts on value-added analysis. In the next few weeks, The Times plans to publish version two of the database, updated with a new year of data and incorporating a number of changes based on that feedback.

[Updated 3 p.m.: The Times has also released a statement addressing criticism of its analysis.]

-- Deirdre Edgar
