SEATTLE, WA, AUGUST 12, 2015 - Panelists discussed various aspects of value-added models, commonly referred to as VAMs, and the discussant posed a new question about the use of evaluation models during a panel discussion on the hot-button topic today at the 2015 Joint Statistical Meetings (JSM 2015) in Seattle.
The panel discussion, titled "Value-Added Models: A Primer and Discussion," featured four experts in the areas of statistics, education research and VAMs. They are:
- Jennifer E. Broatch, assistant professor of statistics at Arizona State University
- Jennifer L. Green, assistant professor of statistics at Montana State University
- Dan D. Goldhaber, director of the National Center for Analysis of Longitudinal Data in Education Research and director of the University of Washington's Center for Education Data and Research
- Robert H. Meyer, research professor and director of the Value-Added Research Center at the University of Wisconsin-Madison
VAMs--which are sometimes used in such high-stakes decisions as determining compensation, evaluating and ranking teachers, hiring or dismissing teachers and principals, and awarding tenure--have been much discussed and debated. Many in the educational community argue the models are not effective at measuring a teacher's impact on student learning and can be used to evaluate teachers for students or courses they don't teach. On the other side, many elected and appointed government officials, such as U.S. Education Secretary Arne Duncan, defend VAMs as an effective tool for measuring the value a teacher adds to student-achievement growth.
Green, whose research focuses on the application, development and refinement of statistical methods to study the effects of professional development programs on teacher and student outcomes, is co-principal investigator of two education-related National Science Foundation-funded grants: "Data Connections" and "Collaborative Research-RealVAMS." Through these grants, she is working with public school districts and Math Science Partnerships to combine multiple databases of teacher and student outcomes and to lay the groundwork for connecting various measures of teaching quality and teacher and student attributes to student learning trajectories.
Within education, the term "value-added" often refers to changes in achievement above or below what is expected for a student. By definition, value-added models aim to estimate the effects of education factors such as professional development programs, teachers, schools and districts on student learning while controlling for prior student achievement, she explained.
"Value-added models are only as good as the data used in these models. It is important to choose student outcomes that reflect learning objectives and create data systems that provide valid and reliable information about these outcomes," said Green.
She added that VAMs generally provide better predictions of growth when student progress is followed over several years. Consequently, such a long-term approach to estimating educational effects requires an accurate baseline to establish each student's starting level of learning and how he or she is progressing over time.
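The idea Green describes can be illustrated with a minimal sketch: regress current scores on prior scores (the baseline) plus a per-teacher intercept, and report each teacher's intercept as a deviation from the average. All data and numbers below are invented for illustration, and real VAMs are considerably more complex (multiple years, student covariates, shrinkage estimators).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: 3 teachers, 50 students each.
# "True" per-teacher value added (unknown in practice).
true_effects = {"A": 4.0, "B": 0.0, "C": -4.0}
teachers = list(true_effects)

rows, y = [], []
for t_idx, t in enumerate(teachers):
    prior = rng.normal(70, 10, size=50)  # prior-year scores (the baseline)
    post = 0.8 * prior + 15 + true_effects[t] + rng.normal(0, 5, size=50)
    for p, q in zip(prior, post):
        # Design row: shared slope on prior score + one dummy per teacher.
        dummies = [1.0 if j == t_idx else 0.0 for j in range(len(teachers))]
        rows.append([p] + dummies)
        y.append(q)

X = np.array(rows)
beta, *_ = np.linalg.lstsq(X, np.array(y), rcond=None)

# Value added reported as each teacher intercept's deviation from the mean,
# i.e., growth above or below what prior achievement alone would predict.
intercepts = beta[1:]
vam = intercepts - intercepts.mean()
for t, v in zip(teachers, vam):
    print(f"teacher {t}: estimated value added {v:+.1f}")
```

With enough students per teacher, the estimated deviations recover the simulated effects; with noisy or single-year data, they would not, which is the point of Green's caution about data quality and multi-year follow-up.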
Broatch, whose primary research focus is on the application of statistical methods to educational data, is the principal investigator of the Collaborative Research-RealVAMS grant. She said the project identifies teachers and educational programs that not only contribute to higher-than-expected test scores, but also enhance student science, technology, engineering and math (STEM) career readiness and other indicators of educational achievement.
In her panel discussion remarks, Broatch commented on current education assessment models, the modeling assumptions of each and issues involving value-added modeling strategies and assumptions. There are several types of VAMs, leading to different approaches for estimating educational effects. Some models have better statistical properties than others, but there is still a lot unknown in this area of ongoing research, she said.
"With evaluation systems increasingly reliant upon value-added measures of teacher and program effectiveness, it is imperative to create the most accurate, reliable and valid estimates of contributions to student achievement," she said.
Goldhaber, who was the panel discussant, has been conducting research that uses VAMs for 20 years and, for the past decade, researching how the models can be used to evaluate the performance of individual educators. He has published a number of articles focused on the theoretical and empirical underpinnings (or the statistical properties) of VAMs.
During his remarks, Goldhaber said the public debate on VAMs largely has centered on their flaws. He suggested a better focus for both opponents and supporters would be the evaluation system as a whole and whether the use of value-added (or any other measure) improves the decisions made as a consequence of the evaluation.
"All forms of evaluation have flaws. So, if the standard for education evaluation is never making mistakes, we should abandon the very concept of evaluation. Thus, the right question is not whether value-added is flawed. Rather it is how do value-added measures compare to other means of evaluating teachers in terms of the information the measure provides us about performance. Does the information from value-added or other forms of evaluation increase our ability to upgrade the quality of the teacher workforce? That's the right question to be asking."
The American Statistical Association last year issued a statement that described VAMs as complex statistical models and urged the engagement of statisticians and the use of sound statistical practices when using data and statistical models in education evaluation initiatives. "It is our hope that a better understanding of the statistical perspective of VAMs will constructively inform their use in the evaluation of our nation's teachers and the ongoing discussion," noted then-ASA President-elect David R. Morganstein in a press release issued with the statement.
JSM 2015 is being held August 8-13 at the Washington State Convention Center in Seattle. More than 6,000 statisticians--representing academia, business and industry, as well as national, state and local governments--from numerous countries are attending North America's largest statistical science gathering.
About JSM 2015
JSM, which has been held annually since 1974, is being conducted jointly this year by the American Statistical Association.
About the American Statistical Association
The ASA is the world's largest community of statisticians and the second-oldest continuously operating professional society in the United States. Its members serve in industry, government and academia in more than 90 countries, advancing research and promoting sound statistical practice to inform public policy and improve human welfare. For additional information, please visit the ASA website.