Part of the series SAGE Benchmarks in Social Research Methods
Contemporary Trends in Evaluation Research
Hardcover, English, 2015
SEK 13,059
Temporarily out of stock
Evaluation is an essential characteristic of the human condition, and perhaps the single most important and sophisticated cognitive process in the repertoire of human reasoning and logic. Evaluation serves society by providing affirmations of worth, value and improvement, among other things, and is a process which permeates all areas of human activity, scholarship and production. This work is split into four volumes:
- Volume One: Contains articles featuring contemporary issues and emerging trends in evaluation
- Volume Two: Contains articles highlighting recent theoretical, methodological, and empirical developments in quantitative evaluation designs
- Volume Three: Contains a summation of articles on recent developments in qualitative and mixed methods evaluation practice
- Volume Four: Contains a synthesis of articles on enduring issues of evaluation training and practice
Product information
- Publication date: 2015-10-07
- Dimensions: 156 x 234 x 101 mm
- Weight: 2,960 g
- Format: Hardcover
- Language: English
- Series: SAGE Benchmarks in Social Research Methods
- Number of pages: 1,600
- Edition: 1
- Publisher: SAGE Publications
- ISBN: 9781446266373
Table of contents
Volume One
Part One: Contemporary Trends
Section One: Research on Evaluation Theory, Method, and Practice
- Advancing Empirical Scholarship to Further Develop Evaluation Theory and Practice - Christina Christie
- Developing Standards for Empirical Examinations of Evaluation Theory - Robin Miller
- Research on Evaluation: A Needs Assessment - Michael Szanyi, Tarek Azzam and Matthew Galen
- Taking Stock of Empowerment Evaluation: An Empirical Review - Robin Miller and Rebecca Campbell
- Designing Evaluations: A Study Examining Preferred Evaluation Designs of Educational Evaluators - Tarek Azzam and Michael Szanyi
- A Systematic Review of Theory-Driven Evaluation Practice from 1990 to 2009 - Chris Coryn, Lindsay Noakes, Carl Westine and Daniela Schröter
- Evaluator Characteristics and Methodological Choice - Tarek Azzam
- Research on Evaluation Use: A Review of the Empirical Literature from 1986 to 2005 - Kelli Johnson, Lija Greenseid, Stacie Toal, Jean King, Frances Lawrenz and Boris Volkov
- Evaluation Use: Results from a Survey of U.S. American Evaluation Association Members - Dreolin Fleischer and Christina Christie
- Going through the Process: An Examination of the Operationalization of Process Use in Empirical Research on Evaluation - Courtney Amo and J. Bradley Cousins
- An Empirical Examination of Validity in Evaluation - Laura Peck, Yushim Kim and Joanna Lucio
Part Two: Emerging Issues
Section One: Visualizing Evaluation Data
- Data Visualization and Evaluation - Tarek Azzam, Stephanie Evergreen, Amy Germuth and Susan Kistler
- GIS in Evaluation: Utilizing the Power of Geographic Information Systems to Represent Evaluation Data - Tarek Azzam and David Robinson
Section Two: Communication
- Unlearning Some of Our Social Scientist Habits - E. Jane Davidson
- Reconceptualizing Evaluator Roles - Gary Skolits, Jennifer Morrow and Erin Burr

Volume Two
Part One: Methodological Developments
Section One: Perspectives on Validity
- Validity Frameworks for Outcome Evaluation - Huey Chen, Stewart Donaldson and Melvin Mark
- Reframing Validity in Research and Evaluation: A Multidimensional, Systematic Model of Valid Inference - George Julnes
- Recommendations for Practice: Justifying Claims of Generalizability - Larry Hedges
Section Two: Perspectives on Causality
- Campbell and Rubin: A Primer and Comparison of Their Approaches to Causal Inference in Field Settings - William Shadish
- Contemporary Thinking about Causation in Evaluation: A Dialogue with Tom Cook and Michael Scriven - Thomas Cook, Michael Scriven, Chris Coryn and Stephanie Evergreen
- Campbell’s and Rubin’s Perspectives on Causal Inference - Stephen West and Felix Thoemmes
- Evaluating Methods for Estimating Program Effects - Charles Reichardt
- Reflections Stimulated by the Comments of Shadish (2010) and West & Thoemmes (2010) - Donald Rubin
- An Economist’s Perspective on Shadish (2010) and West and Thoemmes (2010) - Guido Imbens
Part Two: Empirical Developments
Section One: Quasi-Experiments that Resemble Experiments
- Three Conditions under Which Experiments and Observational Studies Produce Comparable Causal Estimates: New Findings from Within-Study Comparisons - Thomas Cook, William Shadish and Vivian Wong
- Can Nonrandomized Experiments Yield Accurate Answers? A Randomized Experiment Comparing Random and Nonrandom Assignments - William Shadish, M.H. Clark and Peter Steiner
- Comment: The Design and Analysis of Gold Standard Randomized Experiments - Donald Rubin
- Rejoinder - William Shadish, M.H. Clark and Peter Steiner
- An Assessment of Propensity Score Matching as a Nonexperimental Impact Estimator: Evidence from Mexico’s PROGRESA Program - Juan Diaz and Sudhanshu Handa
- Examining the Internal Validity and Statistical Precision of the Comparative Interrupted Time Series Design by Comparison with a Randomized Experiment - Travis St. Clair, Thomas Cook and Kelly Hallberg
Section Two: Improving the Design of Cluster-Randomized Trials
- Emergent Principles for the Design, Implementation, and Analysis of Cluster-based Experiments in Social Science - Thomas Cook
- Using Covariates to Improve Precision for Studies That Randomize Schools to Evaluate Educational Interventions - Howard Bloom, Lashawn Richburg-Hayes and Alison Black
- Strategies for Improving Precision in Group-Randomized Experiments - Stephen Raudenbush, Andres Martinez and Jessaca Spybrook
- New Empirical Evidence for the Design of Group Randomized Trials in Education - Robin Jacob, Pei Zhu and Howard Bloom
- Intraclass Correlations and Covariate Outcome Correlations for Planning Two- and Three-Level Cluster-Randomized Experiments in Education - Larry Hedges and Eric Hedberg
- The Implications of “Contamination” for Experimental Design in Education - Christopher Rhoads
- Stratified Sampling Using Cluster Analysis: A Sample Selection Strategy for Improved Generalizations from Experiments - Elizabeth Tipton

Volume Three
Part One: Developments in Qualitative Methods
Section One: Advances in Qualitative Analysis Techniques
- A General Inductive Approach for Analyzing Qualitative Evaluation Data - David Thomas
- Qualitative Comparative Analysis (QCA) and Related Systematic Comparative Methods: Recent Advances and Remaining Challenges for Social Science Research - Benoît Rihoux
- A New Realistic Evaluation Analysis Method: Linked Coding of Context, Mechanism, and Outcome Relationships - Suzanne Jackson and Gillian Kolla
- A Proposed Model for the Analysis and Interpretation of Focus Groups in Evaluation Research - Oliver Massey
Part Two: Developments in Mixed Methods
Section One: Defining Mixed Methods
- Mixed Methods Research: A Research Paradigm Whose Time Has Come - R. Burke Johnson and Anthony Onwuegbuzie
- Toward a Methodology of Mixed Methods Social Inquiry - Jennifer Greene
- Toward a Definition of Mixed Methods Research - R. Burke Johnson, Anthony Onwuegbuzie and Lisa Turner
- Integrating Quantitative and Qualitative Research: How Is It Done? - Alan Bryman
- Is Mixed Methods Social Inquiry a Distinctive Methodology? - Jennifer Greene
- Putting the MIXED Back Into Quantitative and Qualitative Research in Educational Research and Beyond: Moving toward the Radical Middle - Anthony Onwuegbuzie
Section Two: Mixed Methods Typologies
- A General Typology of Research Designs Featuring Mixed Methods - Charles Teddlie and Abbas Tashakkori
- Conducting Mixed Analyses: A General Typology - Anthony Onwuegbuzie et al.
- A Typology of Mixed Methods Research Designs - Nancy Leech and Anthony Onwuegbuzie
Section Three: Mixed Methods in Practice
- Transformative Paradigm: Mixed Methods and Social Justice - Donna Mertens
- Grounded Theory in Practice: Is It Inherently a Mixed Method? - R. Burke Johnson, Marilyn McGowan and Lisa Turner
- Communities of Practice: A Research Paradigm for the Mixed Methods Approach - Martyn Denscombe
- A Theory-Driven Evaluation Perspective on Mixed Methods Research - Huey Chen
- Mixed Methods and Credibility of Evidence in Evaluation - Donna Mertens and Sharlene Hesse-Biber
- Guidelines for Conducting and Reporting Mixed Research in the Field of Counseling and Beyond - Nancy Leech and Anthony Onwuegbuzie
- The Validity Issue in Mixed Research - Anthony Onwuegbuzie and R. Burke Johnson
- Mixed Data Analysis: Advanced Integration Techniques - Anthony Onwuegbuzie et al.

Volume Four
Part One: Enduring Issues of Evaluation Practice
Section One: Metaevaluation
- Quality, Context, and Use: Issues in Achieving the Goals of Metaevaluation - Leslie Cooksy and Valerie Caracelli
- Concurrent Meta-Evaluation: A Critique - Carl Hanssen, Frances Lawrenz and Diane Dunet
- Metaevaluation in Practice: Selection and Application of Criteria - Leslie Cooksy and Valerie Caracelli
- Evaluating the Quality of Self-Evaluations: The (Mis)match between Internal and External Meta-Evaluation - Jan Vanhoof and Peter Van Petegem
- Meta-Evaluation Revisited - Michael Scriven
Section Two: Ethics
- Expanding the Conversation on Evaluation Ethics - Thomas Schwandt
- The Good, the Bad, and the Evaluator: 25 Years of AJE Ethics - Michael Morris
- Ethics and Development Evaluation: Introduction - Patrick Grasso
- Everyday Ethics: Reflections on Practice - Gretchen Rossman and Sharon Rallis
- Nonparticipant to Participant: A Methodological Perspective on Ethics - Scott Rosas
Section Three: Using Program Theory in Evaluation
- Constructing Theories of Change: Methods and Sources - Paul Mason and Marian Barnes
- Unpacking Black Boxes: Mechanisms and Theory Building in Evaluation - Brad Astbury and Frans Leeuw
- Using Programme Theory to Evaluate Complicated and Complex Aspects of Interventions - Patricia Rogers
Part Two: Enduring Issues of Evaluation Training
Section One: Evaluation Capacity Building/Development
- A Research Synthesis on the Evaluation Capacity Building Literature - Susan Labin, Jennifer Duffy, Duncan Meyers, Abraham Wandersman and Catherine Lesene
- A Multidisciplinary Model of Evaluation Capacity Building - Hallie Preskill and Shanelle Boyle
- Measuring Evaluation Capacity – Results and Implications of a Danish Study - Steffen Nielsen, Sebastian Lemire and Majbritt Skov
- A Self-Assessment Procedure for Use in Evaluation Training - Daniel Stufflebeam and Lori Wingate
Section Two: Evaluator Competence
- Establishing Essential Competencies for Program Evaluators - Laurie Stevahn, Jean King, Gail Ghere and Jane Minnema
- Evaluator Competencies: What’s Taught versus What’s Sought - Jennifer Dewey, Bianca Montrosse, Daniela Schröter, Carolyn Sullins and John Mattox II
- A Conversation on Cultural Competence in Evaluation - Joseph Trimble, Ed Trickett, Celia Fisher and Leslie Goodyear
- Development and Validation of the Cultural Competence of Program Evaluators (CCPE) Self-Report Scale - Krystall Dunaway, Jennifer Morrow and Bryan Porter
- Emphasizing Cultural Competence in Evaluation: A Process-Oriented Approach - Luba Botcheva, Johanna Shih and Lynne Huffman