LA Times and Value Added: The Wrong Tool for the Right Job
September 2, 2010
One of the big news stories this week was the decision by the Los Angeles Times to publish its findings from a value-added analysis of the teachers in the Los Angeles Unified School District, naming the third- through fifth-grade teachers included in the study and ranking them from most to least effective. The motivation for publicly naming teachers and their ranks was that doing so would be sensationalistic, and likely to sell newspapers. No, sorry, that wasn’t it. I must have just imagined that. Their stated rationale was that teachers are public employees, and that the public has the right to know.
Maybe so. But the public also has the right to know that measuring value-added is not like measuring height or weight, where the standards – feet and pounds – are clearly defined, and for which we have accurate, precise rulers and scales. For value-added metrics, researchers have no consistent standards and no precise measurement tools, so it’s critically important to take findings such as the LA Times rankings with a grain of salt. And while the LA Times does note these caveats, the limitations aren’t what make the headlines, and they’re not what people remember.
There are many different statistical models that fall under the umbrella term “value-added,” each with a slightly different set of assumptions that leads to a different set of teacher rankings. Furthermore, even if everyone agreed on a single model and approach to value-added, there is enough measurement error in each annual set of teacher rankings that a single teacher’s rank may bounce up and down considerably from one year to the next, even if his or her actual performance were identical every year. How can that possibly lead to effective evaluation of teacher performance?
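To give a rough sense of how measurement error alone can shuffle rankings, here is a minimal simulation sketch in Python. This is not the LA Times’ actual model; the number of teachers and the size of the error term are assumptions chosen purely for illustration. Every simulated teacher has a fixed “true” effectiveness, yet the noisy yearly estimates move teachers many percentile points from one year to the next:

```python
import numpy as np

rng = np.random.default_rng(0)

n_teachers = 500        # hypothetical district size (assumption)
true_effect = rng.normal(0, 1, n_teachers)   # each teacher's fixed "true" effectiveness
noise_sd = 1.0          # assumed measurement error, comparable in size to real differences

def yearly_ranks(true_effect, noise_sd):
    """Percentile rank of each teacher based on one year's noisy value-added estimate."""
    observed = true_effect + rng.normal(0, noise_sd, true_effect.size)
    return observed.argsort().argsort() / (true_effect.size - 1) * 100

rank_year1 = yearly_ranks(true_effect, noise_sd)
rank_year2 = yearly_ranks(true_effect, noise_sd)

# Average year-to-year movement in percentile rank, even though every
# teacher's true effectiveness never changed between the two years.
print("mean rank change:", np.abs(rank_year1 - rank_year2).mean().round(1), "percentile points")
```

Under these made-up assumptions the typical teacher moves on the order of 20 to 30 percentile points between years purely because of noise; the exact figure depends entirely on how large the error term is relative to real differences among teachers.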
One of the main problems that I have with value-added models in general is that they purport to measure growth. They don’t. Growth occurs when some characteristic (say height) changes over time, and one measures that change from the first to the second measurement. For example, if my daughter was 4’ 7” in third grade and then 4’ 10” in fourth grade, then I would know that she grew 3 inches over that time. But most state tests cannot measure growth because the measurement scales used at each grade are different. Instead of using inches to measure height in both grade 3 and grade 4, they might use “googlepleens” in grade 3 and “ectoplars” in grade 4. (I’m being facetious here, but the point is that you can’t measure growth when the measurement scales are different on each occasion.)
Instead, what value-added does is examine whether a student’s relative standing (percentile rank) has changed over time. If that standing has increased, then value-added models consider this change in rank to be growth. The problem here is that rankings tell us nothing about growth or actual change. If I joined a new running club, and my top running speed (say a 10-minute mile) placed me at the 25th percentile, I would know that I ran slower than 75% of my cohort. But if I stayed with the group for a year, practicing regularly, and at the end of that year I was now at the 30th percentile, did my running speed improve? There’s no way to tell. If the club’s membership has changed and a bunch of slowpokes have joined, then my 30th-percentile rank might now correspond to a 12-minute mile, 20% more time per mile than before. The point is that relative rankings tell us only how we compare to others, and nothing about whether, or how much, growth took place.
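To make the running-club example concrete, here is a tiny sketch. The club members’ mile times are entirely made up, but they are chosen so that the arithmetic matches the scenario above: the runner’s percentile rank rises from the 25th to the 30th even though his or her own mile time got 20% worse.

```python
# Year 1: my 10-minute mile puts me at the 25th percentile of the club
# (I run faster than 25% of the other members).
year1_club = [7, 8, 8.5, 9, 9.5, 10, 11, 12]   # minutes per mile, hypothetical members
my_year1 = 10

# Year 2: several slower runners join; my own time has actually worsened to 12 minutes,
# yet my percentile rank goes UP because the comparison group changed.
year2_club = [7, 7.5, 8, 8.5, 9, 10, 11, 13, 14, 15]
my_year2 = 12

def percentile(my_time, club_times):
    """Share of the club I run faster than (a lower time means faster)."""
    slower = sum(t > my_time for t in club_times)
    return 100 * slower / len(club_times)

print(percentile(my_year1, year1_club))  # 25.0 -- year 1 rank
print(percentile(my_year2, year2_club))  # 30.0 -- higher rank, despite slower running
```

The rank moves in the “right” direction while the underlying performance moves in the wrong one, which is exactly why a change in percentile rank cannot be read as growth.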
I’m all for teachers being held accountable for their performance, and rewarding excellent teachers. But using state test performance to do so is wrong, since state tests aren’t designed to measure teacher performance any more than they are designed to measure growth. In my opinion, value-added is a great example of using the wrong tool for the right job. For a more detailed report on the challenges associated with using student data to measure teacher performance, read this report by the Economic Policy Institute.