When publishing exec Cathie Black replaces Joel Klein as the head of the nation's largest school system, she'll bring big-time business-world cred with questionable applicability to the world of education.
So she'll have something in common with the internal teacher ratings at the center of a very public tug-of-war between the city's Department of Education (DOE) and the United Federation of Teachers (UFT)—a dispute that will be either one of the last things Klein handles or one of the first tasks Black takes up.
On October 20, the DOE, citing Freedom of Information Act requests from prominent news organizations, announced that it would release the names and individual ratings of 12,000 public school teachers. This came hard on the heels of an August Los Angeles Times exposé that published local teachers' scores.
The next day, the United Federation of Teachers filed suit to prevent the release and publication of teacher names. On October 22, a state judge stayed any release of the information before November 24.
In the tempest, details have gotten lost. So far, the debate has turned on questions of teacher privacy and alleged flaws in how the scores are calculated.
But what's missing is any understanding of who will be affected—fewer than 1 in 6 teachers—should the scores be released, or what the scores in question really mean.
'D' is for 'details'
While such ratings have been used in states like Tennessee since 1992, New York City's Department of Education only began using internal value-added measures in 2006. This year, the teachers union and the city agreed that the ratings will count for a quarter of a teacher's overall professional evaluation – one of a range of measures, along with classroom observation and other assessments, used to gauge teacher performance. But the city and the union never agreed to make the scores public.
Value-added scores, a paradigm adopted from the profit-loss model in the business world, use student progress, as reflected in standardized test scores over time, to assess the impact of individual teachers. It's meant to show which teachers help students make stronger gains – and which don't. A teacher whose students post higher gains than comparable students in a comparable class earns a higher value-added score. Because scores are calculated by comparing test results from year to year, a student must have at least two years of test scores before those results can contribute to a teacher's value-added rating.
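To make the basic arithmetic concrete, here is a deliberately simplified sketch in Python. The function name and the score figures are invented for illustration; the DOE's actual model applies far more elaborate statistical controls than this toy comparison of average gains.

```python
# Toy illustration of the value-added idea: compare the average year-over-year
# gain of a teacher's students against that of comparable students elsewhere.
# All names and numbers here are invented, not the DOE's actual formula.

def value_added(teacher_scores, comparison_scores):
    """Average amount by which a teacher's students out-gain comparable students.

    Each argument is a list of (last_year, this_year) test-score pairs --
    which is why at least two years of scores are needed per student.
    """
    def avg_gain(pairs):
        return sum(after - before for before, after in pairs) / len(pairs)

    return avg_gain(teacher_scores) - avg_gain(comparison_scores)

# This teacher's students gain 10 points on average; the comparison group
# gains 6, so the teacher earns a positive value-added score of 4.
teacher = [(640, 652), (655, 663), (630, 640)]
comparable = [(641, 648), (652, 657), (629, 635)]
print(value_added(teacher, comparable))  # prints 4.0
```

A teacher whose students gained less than the comparison group would come out with a negative score under this toy scheme.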
It's unclear if value-added scores provide the kind of precise measure they promise. What is clear is that, in New York City at least, the reports pertain to only 12,000 members of the city's nearly 80,000-strong teaching force.
By design, the value-added model, developed by a team led by Columbia economics professor Jonah Rockoff, only applies to the small subset of teachers in the “testing grades”—grades 4 through 8. New York students begin taking standardized tests in the 3rd grade, but since two years of testing are needed to calculate value-added scores, 4th grade is the first grade in which students have produced enough data for their teachers to be scored.
In the middle schools, where most teachers specialize by subject, only English and math teachers are eligible for value-added ratings, because those subjects are tested by the state.
All other teachers—those in kindergarten, first, second and third grades; those who teach middle school science, social studies, art, or foreign languages; and every high school teacher in New York City's more than 400 high schools—are “ignored by value-added assessment,” according to Sean Corcoran, assistant professor of education economics at NYU, whose