Experimental Economics
- New
Theory and Practice
Paperback, English, 2025
439 kr
Forthcoming
A landmark practical guide from the twenty-first-century pioneer in economics.

Experimental economics—generating and interpreting data to understand human decisions, motivations, and outcomes—is today all but synonymous with economics as a discipline. The advantages of the experimental method for understanding causal effects make it the gold standard for an increasingly empirical field. But until now the discipline has lacked comprehensive, definitive guidance on how to optimally design and conduct economic experiments. For more than 30 years, John A. List has been at the forefront of using experiments to advance economic knowledge, expanding the domain of economic experiments from the lab to the real world. Experimental Economics is his A-to-Z compendium for students and researchers on the ground floor of designing and conducting experiments and of analyzing and interpreting the data they generate. List seeks not only to guide readers in developing and implementing their experimental projects—everything from design to administrative and ethical considerations—but also to help them avoid all the mistakes he has made in his own career. Experimental Economics codifies its author’s refined approach to the design, execution, and analysis of laboratory and field experiments. It is a milestone work poised to become the definitive reference for the next century of economics (and economists).
Product information
- Publication date: 2025-12-12
- Dimensions: 178 x 254 mm
- Weight: 454 g
- Format: Paperback
- Language: English
- Number of pages: 784
- Publisher: The University of Chicago Press
- ISBN: 9780226820675
John A. List is the Kenneth C. Griffin Distinguished Service Professor in Economics at the University of Chicago. He is a member of the American Academy of Arts and Sciences and a research associate of the NBER. His most recent book is the best seller The Voltage Effect: How to Make Good Ideas Great and Great Ideas Scale.
Table of contents
Preface
Part I. Experimental Methods in Economics
1. Introduction
  Key Ideas
  1.1 Causal Inference
    Experimental Problem 1: Quantifying Economic Fundamentals, Measuring Treatment Effects, and Identifying Key Mediators and Moderators in an Ethically Responsible Manner
    Experimental Problem 2: Predicting If the Causal Impacts of Treatments Implemented in One Environment Transfer to Other Environments, Whether Spatially, Temporally, or Scale Differentiated
  1.2 The Book’s Game Plan
  Notes
  References
2. A Primer on Economic Experiments
  Key Ideas
  2.1 Four Running Examples
  2.2 The Empirical Approach in Economics
  2.3 Experiments in Economics
    2.3.1 Laboratory Experiments
    2.3.2 Field Experiments
      2.3.2.1 Seven Criteria That Define Field Experiments
        Experimental Subjects: Population and Selection
        Experimental Environment
    2.3.3 What Parameters Do the Various Experimental Types Recover?
  2.4 What Experimental Type to Choose
    2.4.1 Control across the Experimental Spectrum for Identification Purposes
    2.4.2 Control across the Experimental Spectrum for Measurement Purposes
    2.4.3 The Ability to Replicate across the Experimental Spectrum
    2.4.4 Control across the Experimental Spectrum for Inferential Purposes
    2.4.5 Control across the Experimental Spectrum to Ensure External Validity
  2.5 Conclusions: Key Complementarities Exist across the Lab and Field
  Appendix 2.1 Introducing General Potential Outcomes Notation
  Notes
  References
3. Internal Validity: Identification in Economic Experiments
  Key Ideas
  3.1 Four Running Examples
  3.2 The Assignment Mechanism
  3.3 Potential Outcomes Framework
  3.4 From Individual Treatment Effects to Average Treatment Effects
  3.5 How Selection Leads to Bias
    3.5.1 Using Randomization to Solve the Selection Problem
  3.6 Introducing EPATE: The Case When τ_{P_i=1} ≠ τ
  3.7 Recovering and Interpreting Heterogeneity of Treatment Effects
  3.8 Violations of the Exclusion Restrictions
    3.8.1 SUTVA
    3.8.2 Observability
    3.8.3 Compliance
    3.8.4 Statistical Independence
  3.9 Conclusions
  Appendix 3.1 Recovering the Wedge between τ and τ̃ (Derivation of Equation 3.5)
  Appendix 3.2 The Brass Tacks of Estimating the Effects of Training Programs
  Notes
  References
4. Statistical Conclusion Validity: Measurement in Economic Experiments
  Key Ideas
  4.1 Two Running Examples
  4.2 Perspectives on Sampling Frameworks
    4.2.1 Subpopulations in the Superpopulation Framework
  4.3 Estimating Treatment Effects and Making Inference
    4.3.1 Motivating the Difference-in-Means Estimator for ATE Parameters
    4.3.2 Single Hypothesis Testing and Statistical Power
    4.3.3 Multiple Hypothesis Testing
      4.3.3.1 Family-Wise Error Rate
      4.3.3.2 Approaches to Controlling the FWER
        Bonferroni Correction
        Holm Stepdown Correction
        List et al. FWER Correction
    4.3.4 Introducing the Difference-in-Differences Estimator for ATE Parameters
    4.3.5 Introducing an Alternative to ATE Parameters: Fisher’s Randomization Inference
  4.4 Conclusions
  Appendix 4.1 Code for List et al. (2019) and List et al. (2023)
    Installation (2019)
    Command Procedure (2019)
    Installation (2023)
    Command Procedure (2023)
  Notes
  References
Part II. Designing Economic Experiments
5. Optimal Experimental Design
  Key Ideas
  5.1 Three Running Examples
  5.2 Basic Principles of Statistical Power
  5.3 The Case of a Binary Treatment with Continuous Outcomes
    5.3.1 Putting It All Together to Create an Optimal Design
  5.4 The Case of a Binary Treatment with Binary Outcomes
  5.5 Varying Treatment Levels with Continuous Outcomes
  5.6 Expanding the Tool Kit
    5.6.1 Heterogeneity in Participant Costs
    5.6.2 Clustered Experimental Designs
    5.6.3 Optimal Design with Multiple Hypothesis Adjustment
  5.7 Less Considered Design Choices to Enhance Statistical Power
    5.7.1 Including Covariates in the Estimation Model
    5.7.2 Designs to Maximize Compliance
    5.7.3 The Nature of the Sample
    5.7.4 Measurement Choices
    5.7.5 Factorial Designs
  5.8 Conclusions
  Appendix 5.1 An Example of the Power of Simulation Methods: The Case of Varying Treatment Levels with Binary Outcomes
  Appendix 5.2 Step-by-Step Flexible Regression Adjustment
  Appendix 5.3 Introducing Full and Fractional Factorial Designs
    Three Factors
  Appendix 5.4 A Walk-Through Example
  Notes
  References
6. Randomization Techniques
  Key Ideas
  6.1 Three Running Examples
  6.2 Classical Assignment Mechanisms
  6.3 Classical Randomization Approaches
    6.3.1 Bernoulli Trials
    6.3.2 Completely Randomized Experiments (CRE)
    6.3.3 Randomized Block (Stratified) Experiments
    6.3.4 Rerandomization Approaches
    6.3.5 Optimal Stratification with Matched-Pairs Designs
      6.3.5.1 Efficient Matching Minimizing Mean-Squared Error
  6.4 Design-Conscious Inference
    6.4.1 Statistical Inference in CREs
    6.4.2 Adjusting Inference under Alternative Randomization Schemes
  6.5 What to Do with Unanticipated Covariates
  6.6 Conclusions
  Appendix 6.1 A Review of Rerandomization Approaches
  Notes
  References
7. Heterogeneity and Causal Moderation
  Key Ideas
  7.1 Four Running Examples
  7.2 Estimating Heterogeneities in Simple Cases
    7.2.1 Using Causal Forests to Estimate Heterogeneities
      Eight-Step Causal Forest Procedure
  7.3 Basic Mechanics of Causal Moderation
    7.3.1 Causal Moderation in Economic Experiments
  7.4 Two Crucial Margins of Heterogeneity: Intensive and Extensive
    7.4.1 Bounding the Intensive and Extensive Margin Effects
    7.4.2 Using Baseline Outcome Data to Identify Intensive Margin Effects
    7.4.3 A Tobit Approach to Estimating Margins
  7.5 Conclusions
  Notes
  References
8. Mediation: Exploring Relevant Mechanisms
  Key Ideas
  8.1 Three Running Examples
  8.2 Mediation: The Basics of Causal Pathways
    8.2.1 Decomposing Total Effects in the Presence of Mediators
    8.2.2 Moving the Goalposts: Controlled and Principal-Strata Effects
  8.3 Applied Mediation Analysis for Economic Experiments
    8.3.1 A Parametric Workhorse and Its Pitfalls
    8.3.2 Basic Case: Binary Randomized Treatment
    8.3.3 Separate Randomization of Treatment and Mediator
    8.3.4 Paired Design
    8.3.5 Crossover Design
  8.4 Conclusions
  Appendix 8.1 Putting It All Together: Traditional Mediation Analysis and Alternative Approaches Using an In-Home Parent Visitation Program
  Notes
  References
9. Experiments with Longitudinal Elements
  Key Ideas
  9.1 Three Running Examples
  9.2 Potential Outcomes in Repeated Exposure Designs
    9.2.1 Treatment Effects in the Presence of Repeated Exposures
  9.3 Staggered Experimental Design
  9.4 Leveraging Pre- and Post-treatment Outcomes to Increase Power
    9.4.1 Including Covariates and Pre-treatment Outcomes in the Estimation Model
    9.4.2 Leveraging Pre-treatment Outcomes in a Panel Data Estimation Model
      9.4.2.1 Gains from Pre-treatment Outcome Measures
      9.4.2.2 Autocorrelations That Vary with Treatment
    9.4.3 Choosing the Optimal Number of Pre-treatment and Post-treatment Periods
    9.4.4 Threats to Internal Validity
  9.5 Experimental Designs with Outcomes Measured Long after Treatment
    9.5.1 Identification Assumptions When Outcomes Are Far Removed from Treatment
    9.5.2 Statistical Surrogates
      9.5.2.1 Internal Validity of Statistical Surrogates
      9.5.2.2 Putting the Comparability and Surrogacy Assumptions into Perspective
      9.5.2.3 Interpreting Surrogates
      9.5.2.4 Multiple Surrogates
  9.6 Conclusions
  Appendix 9.1 Optimal Staggered Designs
  Appendix 9.2 Clustered Design in Panel Data Settings
  Appendix 9.3 Cluster-Randomized Experiments in Settings That Generate Short Panel Data
  Notes
  References
10. Within-Subject Experimental Designs
  Key Ideas
  10.1 Three Running Examples
  10.2 Potential Outcomes in a Within-Subject Design
  10.3 Identification Assumptions in a Within-Subject Design
  10.4 Threats to the Internal Validity of Within-Subject Designs
    10.4.1 Threats to Balanced Panel
    10.4.2 Threats to Temporal Stability
      10.4.2.1 Crossover Designs and Latin Squares
    10.4.3 Threats to Causal Transience
      10.4.3.1 Washout Periods
  10.5 Key Advantages of Within-Subject Designs
    10.5.1 Heterogeneity and the Full Distribution of Treatment Effects
    10.5.2 Experimental Power
      10.5.2.1 Minimum Detectable Effects for Within-Subject Designs
  10.6 Conclusions
  Notes
  References
Part III. Violations of Exclusion Restrictions
11. SUTVA: Interference and Hidden Treatments
  Key Ideas
  11.1 Three Running Examples
  11.2 SUTVA Violation: Interference
    11.2.1 Treatment Effect Parameters
    11.2.2 Difference-in-Means
  11.3 Approaches to Dealing with Interference Violations
    11.3.1 Linear-in-Means Model
    11.3.2 Clustered Randomized Trials to Attenuate Spillovers
    11.3.3 Randomization Inference under Interference
  11.4 Embracing Spillovers: Randomized Saturation Designs
    11.4.1 Designs to Explore Spillovers
  11.5 Hidden Versions of Treatment
    11.5.1 Potential Outcomes with Hidden Versions of Treatment
    11.5.2 Implications of Hidden Versions of Treatment
  11.6 Conclusions
  Appendix 11.1 Optimal Saturation Designs
  Notes
  References
12. Observability: Nonrandom Attrition
  Key Ideas
  12.1 Two Running Examples
  12.2 Attrition in the Potential Outcomes Framework
    12.2.1 Internal Validity for Respondents
    12.2.2 Internal Validity for Study Participants
  12.3 Tests for Internal Validity
    12.3.1 Tests Using Baseline Outcome Data
    12.3.2 Selective Attrition Test
    12.3.3 Determinants of Attrition Test
    12.3.4 Attrition Rates That Vary by Treatment
  12.4 Analyzing Data with Attrition
    12.4.1 Available Case Analysis
    12.4.2 Horowitz and Manski Bounds
    12.4.3 Inverse Probability Weighting
    12.4.4 Selection Models
    12.4.5 Lee Bounds
  12.5 Missing Covariates
    12.5.1 Complete and Available Case Analysis
    12.5.2 Dummy Variable Adjustment
    12.5.3 Imputation
  12.6 Six Design Tips to Attenuate Attrition
  12.7 Conclusions
  Appendix 12.1 Putting It All Together with CHECC
  Notes
  References
13. Complete Compliance: One-Sided and Two-Sided Violations
  Key Ideas
  13.1 Two Running Examples
  13.2 A Framework for Imperfect Compliance
    13.2.1 As-Treated Analysis Reintroduces the Selection Problem
    13.2.2 Intention-to-Treat (ITT) Analysis
  13.3 Randomization as an Instrumental Variable and New Assumptions
  13.4 Calculating ATEs for Compliers
    13.4.1 Characterizing Compliers
    13.4.2 Widening the Goalposts: Bounding the ATE
  13.5 Six Design Tips to Attenuate Noncompliance
  13.6 Conclusions
  Appendix 13.1 Encouragement Designs
  Notes
  References
14. Statistical Independence and Compromised Randomization
  Key Ideas
  14.1 Three Running Examples
  14.2 Statistical Independence: The Basics
  14.3 Tests for Compromised Randomization
    14.3.1 Comparing Planned versus Actual Assignment
    14.3.2 Computing P-Values to Test for Compromised Randomization
    14.3.3 Informal Checks of Compromised Randomization
  14.4 Case 1: A Rerandomization Approach
  14.5 Case 2a: Inference with Compromised Randomization and Full Documentation
    14.5.1 Inference When the Randomization Procedure Is Correlated with Potential Outcomes
  14.6 Case 2b: Inference with Compromised Randomization and Only Partial Documentation
    14.6.1 An Example of Compromised Randomization Being Partly Understood at the Aggregate Level
    14.6.2 Breaking Down the Randomization Procedure
    14.6.3 A Basic Model
    14.6.4 Testing a Single Joint Null Hypothesis
  14.7 A Decision-Theoretic Framework with Incomplete Documentation
    14.7.1 Modeling the Randomization Protocol
    14.7.2 Partially Identifying Model Parameters
    14.7.3 Worst-Case Randomization Test
  14.8 Seven Design Tips to Prevent Compromised Randomization
    14.8.1 Three Tips When the Researcher Is Responsible for Randomization
    14.8.2 Four Tips When the Experimenter Relies on Partners for Randomization
  14.9 Conclusions
  Appendix 14.1 Using Fisher’s Sharp Inference with Compromised Randomization
  Appendix 14.2 Putting the Ideas of Section 14.6 in Motion
  Appendix 14.3 Extending Section 14.6 to Test Multiple Hypotheses
  Notes
  References
Part IV. Building Scientific Knowledge
15. Building Confidence in (and Knowledge from) Experimental Results
  Key Ideas
  15.1 Three Running Examples
  15.2 The Philosophy of Building Knowledge from Experimental Results
  15.3 A Framework for Building Confidence in Experimental Results
    15.3.1 Effects of α and β on the PSP
    15.3.2 Null Results Are Informative Too
  15.4 From the Researcher to the Research Community
    15.4.1 Replication Types
      15.4.1.1 Interpreting Replication Results
      15.4.1.2 Building Knowledge and Confidence with Replications
      15.4.1.3 Why Are Replications an Endangered Species in Economics?
  15.5 The Beauty of Selective Data Generation: From the Lab to the Field
  15.6 Conclusions
  Appendix 15.1 Gaining Insights into Equation 15.5 and Beyond
    Unbiased, Sympathetic, and Adversarial Replications
    Heterogeneity across Replicating Teams
    Should We Have Confidence in Our Updating from Experimental Results?
  Notes
  References
16. Generalizability and Scaling
  Key Ideas
  16.1 Two Running Examples
  16.2 External Validity Primers
    16.2.1 From Treatment Effects to the Parameter of Interest
    16.2.2 Three Types of Horizontal Generalizability
    16.2.3 Assumptions Yielding τ = τ*
  16.3 Digging Deeper into Assumptions 16.1–16.4
    16.3.1 Assumption 16.1: Selection into the Experiment
      16.3.1.1 A Model of Selection into Experiments
      16.3.1.2 How Experimental Design Affects Selection
    16.3.2 Assumption 16.2: Representativeness of the Population
    16.3.3 Assumptions 16.3 and 16.4: Investigation Neutrality and Parallelism
      16.3.3.1 Experimenter Scrutiny: Effects of A
      16.3.3.2 Experimental Environment: Effects of E
      16.3.3.3 Stakes: Effects of I_i
  16.4 Scaling
    16.4.1 A Behavioral Model of Scaling
    16.4.2 Constructive Steps Forward: The SANS Conditions
      Author Onus Probandi
    16.4.3 Three Waves of Scientific Research
  16.5 Conclusions
  Appendix 16.1 Mechanics of Scaling Up
  Notes
  References
Part V. The Ethical and Practical Sides of Economic Experiments
17. The Ethics of Economic Experiments
  Key Ideas
  17.1 Four Running Examples
  17.2 Ethics Primer
    17.2.1 A Simple Economic Model
    17.2.2 A Simple Philosophical Framework
  17.3 Three Theories of (Research) Ethics
    17.3.1 Consequentialism
    17.3.2 Deontological Ethics
    17.3.3 Rule Consequentialism
  17.4 Putting It All Together
    17.4.1 Truthful, Unbiased, and Transparent Reporting of Results and Conflicts of Interest
    17.4.2 Appropriate Data Governance and Management
    17.4.3 Conflicts between Individual Protections and Scientific Discovery
      17.4.3.1 Should You Even Do an Experiment?
      17.4.3.2 With Whom Should You Experiment?
      17.4.3.3 How Should You Experiment?
        17.4.3.3.1 Informed Consent: Respecting Autonomy
        17.4.3.3.2 Defining Benefits and Harm: From the Subject to Innocent Bystanders
        17.4.3.3.3 Outright Deception and Incomplete Disclosure
  17.5 Benchmarking Research Ethics: Gold to Plutonium-239
  17.6 Conclusions
  Appendix 17.1 Data Governance and Management Playbook
    Being Trustworthy for Knowledge Creation
    Being Trustworthy regarding Subjects
    Differential Privacy
    Being Trustworthy for Third Parties
    Accessibility and Accountability
    Security
  Notes
  References
18. Pre-treatment Administrative Responsibilities
  Key Ideas
  18.1 One Running Example
  18.2 Overarching Goals of Pre-treatment Tasks
  18.3 Institutional Review
    18.3.1 IRBs and Research Ethics
    18.3.2 IRB Application Materials
      18.3.2.1 IRB Requirements: Who, What, How, and to Whom?
    18.3.3 IRB Review Process and Determinations
      18.3.3.1 IRBs and Informed Consent
      18.3.3.2 IRBs and Outright Deception
      18.3.3.3 IRBs and Pilots
      18.3.3.4 IRBs and Multi-institutional Research
      18.3.3.5 Communication with IRBs
  18.4 Registries and Pre-analysis Plans
    18.4.1 Trial Registries
      18.4.1.1 Existing Registries
      18.4.1.2 The AEA Registry
      18.4.1.3 Registry Limitations
    18.4.2 Pre-analysis Plans
  18.5 Data Use Agreements and Outside Partners
    18.5.1 Components of a DUA
  18.6 Due Diligence Administrative Checklist
  18.7 Conclusions
  Appendix 18.1 A Plea to the IRB
    What Should IRBs Do?
      A. Gather Information Typically Contained in Pre-registrations and PAPs
      B. Focus on the Relevant
      C. Be Honest with Themselves
      D. Be Clear and Consistent
      E. Guide How Researchers Should Work with Third Parties
  Notes
  References
19. Optimal Use of Incentives in Economic Experiments
  Key Ideas
  19.1 Four Running Examples
  19.2 A Simple Economic Model
    19.2.1 Extending the Model to Explore Knowledge Creation: Internal Validity
      19.2.1.1 Within-Subject versus Between-Subject Design
      19.2.1.2 Statistical Surrogates
    19.2.2 Extending the Model to Explore Knowledge Creation: Improving Inference
      19.2.2.1 Nuts and Bolts of Design
      19.2.2.2 Pilot Experiments
      19.2.2.3 Mediators and Moderators
      19.2.2.4 EP2: From τ_{P_i=1} to τ and Beyond
      19.2.2.5 EP2: From One Environment to Another
      19.2.2.6 EP2: Fostering Scaling by Adding Option C Thinking to Designs
  19.3 Creating the Microeconomic Environment
    19.3.1 Using Induced Values for Control
    19.3.2 Potentially Losing Control
      19.3.2.1 An Inferential Challenge: Flat Payoffs
      19.3.2.2 An Inferential Challenge: Construct Validity
      19.3.2.3 Experimental Instructions across the Empirical Spectrum
  19.4 Conclusions
  Appendix 19.1 Inducing Risk Posture
  Appendix 19.2 Tips for Writing Experimental Instructions across the Empirical Spectrum
    10 Tips for Writing Laboratory Experimental Instructions
    From the Lab to the Field
    8 Tips for Artefactual Field Experiments (AFEs)
    6 Tips for Framed Field Experiments (FFEs)
    5 Tips for Natural Field Experiments (NFEs)
    Practical Implementation
    Conclusion
  Notes
  References
20. Epilogue: The (Written) Road to Scientific Knowledge Diffusion
  Key Ideas
  20.1 Give the People What They Want! But . . . What Do They Want?
  20.2 Creating a Logical Framework
    20.2.1 Applying BEC Holistically
      PREP
  20.3 Your Writing Style
    20.3.1 Getting Started: An Eight-Step “Inside-Out Approach” to Writing Scientific Studies
  20.4 Introducing Your Pen to the World
  20.5 Epilogue
  Appendix 20.1 PREP Checklist: Proper Reporting in an Experimental Paper
  Notes
  References
Part VI. “How To” Supplements
S1: How to Conduct Experiments in Markets: From the Lab to the Field
S2: How to Conduct Experiments with Organizational Partnerships
S3: How to Conduct Experiments with Children
S4: How to Conduct Experiments to Measure Preferences, Beliefs, and Constraints
S5: How to Conduct Experiments to Generate Unconventional Data
Glossary
Notation Crib Sheet
Further Readings
Index