COMPARABILITY OF LARGE-SCALE ASSESSMENTS
This National Academy of Education (NAEd) volume provides guidance to key stakeholders on how to accurately report and interpret comparability claims about large-scale educational assessments, and on how to strengthen comparability through careful attention to key aspects of assessment design, content, and procedures. The volume advises state-level assessment and accountability decision makers, leaders, and coordinators; consortia members; technical advisors; vendors; and the educational measurement community on how much, and what types of, variation in assessment content and procedures can be allowed while still maintaining comparability across jurisdictions and student populations. Its larger takeaways are also intended to guide policy makers who use assessment data to enact legislation and regulations, to help district- and school-level leaders determine resource allocations, and to provide greater context for those in the media who use test scores to make comparability determinations.
A recording of the virtual NCME session on this report, “How to Achieve (or Partially Achieve) Comparability of Scores from Large-Scale Assessments,” is also available.
COMPARABILITY OF LARGE-SCALE EDUCATIONAL ASSESSMENTS
ISSUES AND RECOMMENDATIONS
Editors
Amy I. Berman, National Academy of Education
Edward H. Haertel, Stanford University
James W. Pellegrino, University of Illinois at Chicago
CONTENTS
Front Matter
Executive Summary
1. Introduction – Framing the Issues
Amy Berman, National Academy of Education; Edward Haertel, Stanford University; and James Pellegrino, University of Illinois at Chicago
2. Comparability of Individual Students’ Scores on the “Same Test”
Charles DePascale and Brian Gong, National Center for the Improvement of Educational Assessment (Center for Assessment)
3. Comparability of Aggregated Group Scores on the “Same Test”
Leslie Keng and Scott Marion, Center for Assessment
4. Comparability Within a Single Assessment System
Mark Wilson, University of California, Berkeley, and Richard Wolfe, Ontario Institute for Studies in Education, University of Toronto
5. Comparability Across Different Assessment Systems
Marianne Perie, Measurement in Practice, LLC
6. Comparability When Assessing English Learner Students
Molly Faulkner-Bond, WestEd, and James Soland, University of Virginia/Northwest Evaluation Association (NWEA)
7. Comparability When Assessing Individuals with Disabilities
Stephen Sireci and Maura O’Riordan, University of Massachusetts, Amherst
8. Comparability in Multilingual and Multicultural Assessment Contexts
Kadriye Ercikan, Educational Testing Service/University of British Columbia, and Han-Hui Por, Educational Testing Service
9. Interpreting Test-Score Comparisons
Randy Bennett, Educational Testing Service
Biographical Sketches of Steering Committee Members and Authors
Steering Committee Members
- Edward Haertel (Co-chair)
Stanford Graduate School of Education
- Louis Gomez
University of California, Los Angeles
- Marshall S. (Mike) Smith
Harvard University
- Joan Herman
University of California, Los Angeles
- James Pellegrino (Co-chair)
University of Illinois at Chicago
- Diana Pullin
Boston College
- Guadalupe Valdes
Stanford University
- Larry Hedges
Northwestern University
The project and research were supported by funding from Smarter Balanced/University of California, Santa Cruz. The opinions expressed are those of the NAEd and the authors and do not represent the views of Smarter Balanced/University of California, Santa Cruz.