Friday, March 9, 2012


Stanford Center for Opportunity Policy in Education (SCOPE)
Test scores largely reflect whom a teacher teaches, not how well they teach.


Recent writings on value-added methods from SCOPE and others

Under pressure to meet Race to the Top requirements, more and more states are adopting, or poised to adopt, "value-added" models (VAMs) of teacher assessment required by the federal competition. In this rush for compliance, important findings on the effectiveness of VAMs are being overlooked.
Education researchers weighing in on this subject recently include Linda Darling-Hammond, Audrey Amrein-Beardsley, Edward Haertel, and Jesse Rothstein. Their review of the research shows VAMs to be sorely lacking as a tool for evaluating teachers. Below are two articles, along with links to other recent writing on VAMs, explaining why test score gains are poor indicators of teacher effectiveness. The articles also describe the effects of VAMs where they have been used in the United States and illustrate evaluation approaches that not only assess teachers' effectiveness accurately, but also improve practice, remove poor teachers, and increase student achievement.

"Evaluating Teacher Evaluation"
Education Week via the Phi Delta Kappan | February 29, 2012
Article by Linda Darling-Hammond, Audrey Amrein-Beardsley, Edward Haertel, and Jesse Rothstein
This piece was originally presented in a briefing that was sponsored by the American Education Research Association (AERA) and the National Academy of Education (NAE).
Excerpt: Using VAMs for individual teacher evaluation is based on the belief that measured achievement gains for a specific teacher’s students reflect that teacher’s “effectiveness.” This attribution, however, assumes that student learning is measured well by a given test, is influenced by the teacher alone, and is independent from the growth of classmates and other aspects of the classroom context. None of these assumptions is well supported by current evidence.

"Value-Added Evaluation Hurts Teaching"
Education Week | March 5, 2012
Commentary by Linda Darling-Hammond
Excerpt: Most troubling is that [New York] city released [teacher value-added] scores while warning that huge margins of error surround the ratings: more than 30 percentile points in math and more than 50 percentile points in English language arts. Soon these scores will be used in a newly negotiated evaluation system that, as it is designed, will identify most teachers in New York state as less than effective.
Audrey Amrein-Beardsley is associate professor of education at Arizona State University. Linda Darling-Hammond is the Charles E. Ducommun Professor of Education at Stanford, Co-Director of SCOPE, former president of AERA, and a member of NAE. Edward Haertel is Jacks Family Professor of Education at Stanford University, Vice-President for Programs at NAE, and chair of the National Research Council's Board on Testing and Assessment. Jesse Rothstein is Associate Professor of Public Policy and Economics at UC Berkeley; in 2009-10 he served as Senior Economist at the U.S. Council of Economic Advisers and then as Chief Economist at the U.S. Department of Labor.

New York Times - Mike Winerip: "Hard-Working Teachers, Sabotaged When Student Test Scores Slip." March 4, 2012.

Teach for Us - Gary Rubenstein: "Analyzing Released NYC Value-Added Data." This three-part series by Rubenstein offers a research-rich analysis of VAMs in New York. February 2012.

Economic Policy Institute - "Problems with the Use of Student Test Scores to Evaluate Teachers": Richard Rothstein, Helen Ladd, Diane Ravitch, Eva Baker, Paul Barton, Linda Darling-Hammond, Edward Haertel, Robert Linn, Richard Shavelson, and Lorrie Shepard discuss research on test-based incentives and student achievement. August 27, 2010.
