Evaluation Essentials

Evaluation Essentials is an indispensable introduction to program evaluation. Program descriptions drawn from a variety of sectors, including public policy, public health, non-profit management, social work, arts management, education, international assistance, and labor, illustrate the book's step-by-step approach to the process and methods of program evaluation. Perfect for students as well as new evaluators, Evaluation Essentials offers a comprehensive foundation in the core concepts, theories, and methods of program evaluation.
Beth Osborne Daponte, Ph.D., is a senior research scholar at the Institution for Social and Policy Studies and lecturer in the School of Management at Yale University. Currently, she is also working with a large community foundation, helping it address its evaluation challenges at both the organizational and programmatic levels.
Table of Contents

Figures and Tables
Preface
Acknowledgments
The Author

One: Introduction
    Learning Objectives
    The Evaluation Framework
    Summary
    Key Terms
    Discussion Questions

Two: Describing the Program
    Learning Objectives
    Motivations for Describing the Program
    Common Mistakes Evaluators Make When Describing the Program
    Conducting the Initial Informal Interviews
    Pitfalls in Describing Programs
    The Program Is Alive, and So Is Its Description
    Program Theory
    The Program Logic Model
    Challenges of Programs with Multiple Sites
    Program Implementation Model
    Program Theory and Program Logic Model Examples
    Summary
    Key Terms
    Discussion Questions

Three: Laying the Evaluation Groundwork
    Learning Objectives
    Evaluation Approaches
    Framing Evaluation Questions
    Insincere Reasons for Evaluation
    Who Will Do the Evaluation?
    External Evaluators
    Internal Evaluators
    Confidentiality and Ownership of Evaluation Ethics
    Building a Knowledge Base from Evaluations
    High Stakes Testing
    The Evaluation Report
    Summary
    Key Terms
    Discussion Questions

Four: Causation
    Learning Objectives
    Necessary and Sufficient
    Types of Effects
    Lagged Effects
    Permanency of Effects
    Functional Form of Impact
    Summary
    Key Terms
    Discussion Questions

Five: The Prisms of Validity
    Learning Objectives
    Statistical Conclusion Validity
    Small Sample Sizes
    Measurement Error
    Unclear Questions
    Unreliable Treatment Implementation
    Fishing
    Internal Validity
    Threat of History
    Threat of Maturation
    Selection
    Mortality
    Testing
    Statistical Regression
    Instrumentation
    Diffusion of Treatments
    Compensatory Equalization of Treatments
    Compensatory Rivalry and Resentful Demoralization
    Construct Validity
    Mono-Operation Bias
    Mono-Method Bias
    External Validity
    Summary
    Key Terms
    Discussion Questions

Six: Attributing Outcomes to the Program: Quasi-Experimental Design
    Learning Objectives
    Quasi-Experimental Notation
    Frequently Used Designs That Do Not Show Causation
    One-Group Posttest-Only
    Posttest-Only with Nonequivalent Groups
    Participants’ Pretest-Posttest
    Designs That Generally Permit Causal Inferences
    Untreated Control Group Design with Pretest and Posttest
    Delayed Treatment Control Group
    Different Samples Design
    Nonequivalent Observations Drawn from One Group
    Nonequivalent Groups Using Switched Measures
    Cohort Designs
    Time Series Designs
    Archival Data
    Summary
    Key Terms
    Discussion Questions

Seven: Collecting Data
    Learning Objectives
    Informal Interviews
    Focus Groups
    Survey Design
    Sampling
    Ways to Collect Survey Data
    Anonymity and Confidentiality
    Summary
    Key Terms
    Discussion Questions

Eight: Conclusions
    Learning Objectives
    Using Evaluation Tools to Develop Grant Proposals
    Hiring an Evaluation Consultant
    Summary
    Key Terms
    Discussion Questions

Appendix A: American Community Survey
Glossary
References
Index