2013 NAEd Report Re-Published

Evaluation of Teacher Preparation Programs

Note: The information on this website is based on data collected during the writing process of the 2013 NAEd report.

Purposes of Teacher Preparation Program Evaluations

What are TPPs?

Teacher preparation programs (TPPs) are where prospective teachers gain a foundation of knowledge about pedagogy and subject matter, as well as early exposure to practical classroom experience. Although competence in teaching, as in all professions, is shaped significantly by on-the-job experiences and continuous learning, the programs that prepare teachers to work in K-12 classrooms can be early and important contributors to the quality of instruction. Evaluating the quality and effectiveness of TPPs is therefore a necessary ingredient in improving teaching and learning.

Why evaluate TPPs?

Accountability

Ensuring accountability, which involves monitoring program quality and providing reliable information to the general public and policy makers.

Consumer Protections

Providing information for consumers, which includes giving prospective teachers data that can help them make good choices from among the broad array of preparation programs, and giving future employers of TPP graduates information to help with hiring decisions.

Program Improvement

Enabling self-improvement by teacher preparation programs, which entails providing institutions with information to help them understand the strengths and weaknesses of their existing programs and use that information to make improvements.

Seven Principles

There are seven core principles that guide the report.

Although program evaluation is important, it is not sufficient in itself to bring about improvements in teacher preparation, teaching quality, and student learning.

Because we assume there is a basic linkage among teacher preparation, teaching quality, and student learning, the main goal of TPP evaluation is the continuous improvement of teaching quality and student learning (we use “learning” as shorthand for academic, behavioral, and social outcomes of education).

Because authority for education in the United States is, by design, diffused, the evaluation of TPPs will always include multiple systems operated by different groups with different purposes and interests.

Unless the decentralized system of governance over American education changes, we assume that there will always be different evaluation methods that rely on different data with results intended for different audiences. No single method or mechanism is likely to completely satisfy multiple, legitimate, and potentially incompatible demands for valid, reliable, and usable information.

Validity should be the principal criterion for assessing the quality of program evaluation measures and systems.

The word “validity” is shorthand for the extent to which evaluation data support specific inferences about individual or organizational performance. In this report we define validity broadly to include (1) the quality of evidence and theory that supports the interpretation of evaluation results; and (2) the consequences of the use of evaluations for individuals, organizations, or the general public.

We assume that any measure—or, for that matter, any TPP evaluation system that uses multiple measures—has limitations that should be weighed against potential benefits.

The fact that there are imperfections in evaluation systems is not a sufficient reason to stop evaluating, but rather an argument for investing in the development of improved methods. But trying to find the “perfect” set of measures is a fool’s errand; a more rational approach is to explore the relative benefits and costs of alternative approaches and to consider whether, on balance, there is evidence that the benefits outweigh the costs—not just the costs in dollars and cents but the costs defined more generally in terms of unintended negative consequences.

We assume that differential effects of TPP evaluation systems—for diverse populations of prospective teachers and the communities in which they may work—matter, and should be incorporated as a component of validity analysis and as a design criterion.

It is especially important to consider potential inequities that may arise from the interpretation of evaluation results and their application. Special attention should be paid to unintended impacts on the morale, capacity, and reputation of TPPs that cater to different pools of potential teacher candidates or that are intended to serve communities struggling to overcome socioeconomic disadvantage. We are not suggesting differential standards for the evaluation of program quality, but rather we are flagging the importance of studying how those standards may, for example, lead to the reduction in the supply of prospective future teachers and/or an interruption in the flow of potentially excellent and dedicated teachers into poor neighborhoods.

TPP evaluation systems should themselves be held accountable.

Private and commercial organizations and government agencies that produce, promulgate, or mandate evaluations must be clear about their intents and uses and, to the extent possible, provide evidence that intended and unintended consequences have been considered. Evaluators and users of evaluation data must be open to critique as the basis for continuous improvement and be willing and able to explore policies aimed at reinforcing appropriate uses of evaluation information.

TPP evaluation systems should be adaptable to changing educational standards, curricula, assessments, and modes of instruction.

As we prepare this report, expectations for teaching and standards of student learning (and other valued outcomes) are again changing. The implementation of the Common Core State Standards, for example, and associated new assessment technologies will necessarily shape the context for evaluating TPPs. The need for flexibility in adapting to these types of changes must be balanced against the legitimate desire for evaluations designed to provide reliable trend information. Achieving a workable balance requires an appreciation of tradeoffs and an acceptance of compromises in designing systems with diverse purposes.

Five Questions

There are five questions reviewed in the report.

How are federal, state, and local agencies and other organizations reacting to the public demand for evidence of the quality of teacher preparation?

How are institutions that prepare future teachers—universities, teacher colleges, private non-university organizations, and others—handling the challenges of providing better information about the quality of their programs?

What is known about the relative effectiveness of different approaches to evaluating TPPs?

How well do different existing or potential methods align with the multiple intended uses of evaluation results?

What are the most important principles and considerations to guide the design and implementation of new evaluation systems?
