Test scores largely
reflect whom a teacher teaches, not how well they teach.
LEADING RESEARCHERS GIVE
VALUE-ADDED TEACHER ASSESSMENT AN "F"
Recent writings on value-added methods from
SCOPE and others
Under pressure to meet
Race to the Top requirements, more and more states are
adopting, or poised to adopt, "value-added" models (VAMs) of teacher
assessment required by the federal competition. In this rush for compliance,
important findings on the effectiveness of VAMs are being overlooked.
Education researchers weighing in on this subject recently include Linda Darling-Hammond, Audrey Amrein-Beardsley, Edward Haertel, and Jesse Rothstein. Their review of the research shows VAMs to be sorely lacking as a tool for evaluating teachers. Below are two articles, as well as links to other recent writing on VAMs, explaining why test score gains are poor indicators of teacher effectiveness. The articles also share examples of the effects of VAMs where they have been used in the United States and illustrate effective ways to evaluate teachers that not only accurately assess teachers' effectiveness but also improve practice, remove poor teachers, and increase student achievement.
"Evaluating Teacher Evaluation"
Education Week via the Phi Delta Kappan | February 29, 2012
Article by Linda Darling-Hammond, Audrey Amrein-Beardsley, Edward Haertel, and Jesse Rothstein
This piece was originally presented in a briefing that was sponsored by the American Education Research Association (AERA) and the National Academy of Education (NAE).
Excerpt:
Using VAMs for individual teacher evaluation is based on the belief that
measured achievement gains for a specific teacher’s students reflect that
teacher’s “effectiveness.” This attribution, however, assumes that student
learning is measured well by a given test, is influenced by the teacher alone,
and is independent from the growth of classmates and other aspects of the
classroom context. None of these assumptions is well supported by current
evidence.
"Value-Added Evaluation Hurts Teaching"
Education Week | March 5, 2012
Commentary by Linda Darling-Hammond
Excerpt:
Most troubling is that [New York] city released [teacher value-added] scores while warning that huge margins of
error surround the ratings: more than 30 percentile points in math and more
than 50 percentile points in English language arts. Soon these scores will be
used in a newly negotiated evaluation system that, as it is designed, will
identify most teachers in New York
state as less than effective.
Audrey Amrein-Beardsley is
associate professor of education at Arizona State University; Linda
Darling-Hammond is the Charles E. Ducommun Professor of Education at Stanford,
Co-Director of SCOPE, former president of AERA, and member of NAE; Edward Haertel is Jacks Family
Professor of Education at Stanford University, Vice-President for Programs at
NAE, and chair of the National Research Council's Board on Testing and
Assessment; and Jesse Rothstein is Associate Professor of Public Policy and
Economics at UC Berkeley, who in 2009-10 served as Senior Economist at the U.S.
Council of Economic Advisers and then as Chief Economist at the U.S. Department of
Labor.