Volume 37 - Wiley Series in Probability and Statistics
Advances in Longitudinal Survey Methodology
Hardcover, English, 2021
By Peter Lynn (Institute for Social and Economic Research, University of Essex, UK)
1 589 kr
Product information
- Publication date: 2021-04-08
- Dimensions: 156 x 234 x 33 mm
- Weight: 907 g
- Format: Hardcover
- Language: English
- Series: Wiley Series in Probability and Statistics
- Number of pages: 544
- Publisher: John Wiley & Sons Inc
- ISBN: 9781119376934
Peter Lynn is Professor of Survey Methodology and Director of the Institute for Social and Economic Research (ISER), University of Essex. ISER is one of the world's leading research centres for longitudinal survey methods, and Professor Lynn has headed the survey methods programme at ISER since he joined Essex in 2001. He has published more than 60 articles on survey methods in leading scientific journals, most of them on topics specific to longitudinal surveys, in addition to numerous book chapters, reports and other articles.
List of Contributors xvii
Preface xxiii
About the Companion Website xxvii
1 Refreshment Sampling for Longitudinal Surveys 1
Nicole Watson and Peter Lynn
1.1 Introduction 1
1.2 Principles 6
1.3 Sampling 7
1.3.1 Sampling Frame 7
1.3.2 Screening 8
1.3.3 Sample Design 10
1.3.4 Questionnaire Design 10
1.3.5 Frequency 11
1.4 Recruitment 13
1.5 Data Integration 14
1.6 Weighting 15
1.7 Impact on Analysis 18
1.8 Conclusions 20
References 22
2 Collecting Biomarker Data in Longitudinal Surveys 26
Meena Kumari and Michaela Benzeval
2.1 Introduction 26
2.2 What Are Biomarkers, and Why Are They of Value? 27
2.2.1 Detailed Measurements of Ill Health 28
2.2.2 Biological Pathways 29
2.2.3 Genetics in Longitudinal Studies 31
2.3 Approaches to Collecting Biomarker Data in Longitudinal Studies 32
2.3.1 Consistency and Relevance of Measures Over Time 33
2.3.2 Panel Conditioning and Feedback 35
2.3.3 Choices of When and Who to Ask for Sensitive or Invasive Measures 36
2.3.4 Cost 39
2.4 The Future 40
References 42
3 Innovations in Participant Engagement and Tracking in Longitudinal Surveys 47
Lisa Calderwood, Matt Brown, Emily Gilbert and Erica Wong
3.1 Introduction and Background 47
3.2 Literature Review 48
3.3 Current Practice 52
3.4 New Evidence on Internet and Social Media for Participant Engagement 55
3.4.1 Background 55
3.4.2 Findings 56
3.4.2.1 MCS 56
3.4.2.2 Next Steps 57
3.4.3 Summary and Conclusions 58
3.5 New Evidence on Internet and Social Media for Tracking 58
3.5.1 Background 58
3.5.2 Findings 60
3.5.3 Summary and Conclusions 61
3.6 New Evidence on Administrative Data for Tracking 62
3.6.1 Background 62
3.6.2 Findings 63
3.6.3 Summary and Conclusions 67
3.7 Conclusion 68
Acknowledgements 69
References 69
4 Effects on Panel Attrition and Fieldwork Outcomes from Selection for a Supplemental Study: Evidence from the Panel Study of Income Dynamics 74
Narayan Sastry, Paula Fomby and Katherine A. McGonagle
4.1 Introduction 74
4.2 Conceptual Framework 75
4.3 Previous Research 77
4.4 Data and Methods 78
4.5 Results 86
4.6 Conclusions 95
Acknowledgements 98
References 98
5 The Effects of Biological Data Collection in Longitudinal Surveys on Subsequent Wave Cooperation 100
Fiona Pashazadeh, Alexandru Cernat and Joseph W. Sakshaug
5.1 Introduction 100
5.2 Literature Review 101
5.3 Biological Data Collection and Subsequent Cooperation: Research Questions 106
5.4 Data 108
5.5 Modelling Steps 109
5.6 Results 110
5.7 Discussion and Conclusion 114
5.8 Implications for Survey Researchers 116
References 117
6 Understanding Data Linkage Consent in Longitudinal Surveys 122
Annette Jäckle, Kelsey Beninger, Jonathan Burton and Mick P. Couper
6.1 Introduction 122
6.2 Quantitative Research: Consistency of Consent and Effect of Mode of Data Collection 125
6.2.1 Data and Methods 125
6.2.2 Results 128
6.2.2.1 How Consistent Are Respondents about Giving Consent to Data Linkage between Topics? 128
6.2.2.2 How Consistent Are Respondents about Giving Consent to Data Linkage over Time? 130
6.2.2.3 Does Consistency over Time Vary between Domains? 131
6.2.2.4 What Is the Effect of Survey Mode on Consent? 132
6.3 Qualitative Research: How Do Respondents Decide Whether to Give Consent to Linkage? 136
6.3.1 Methods 136
6.3.2 Results 137
6.3.2.1 How Do Participants Interpret Consent Questions? 137
6.3.2.2 What Do Participants Think Are the Implications of Giving Consent to Linkage? 141
6.3.2.3 What Influences the Participant's Decision Whether or Not to Give Consent? 142
6.3.2.4 How Does the Survey Mode Influence the Decision to Consent? 144
6.3.2.5 Why Do Participants Change their Consent Decision over Time? 144
6.4 Discussion 145
Acknowledgements 147
References 148
7 Determinants of Consent to Administrative Records Linkage in Longitudinal Surveys: Evidence from Next Steps 151
Darina Peycheva, George Ploubidis and Lisa Calderwood
7.1 Introduction 151
7.2 Literature Review 153
7.3 Data and Methods 155
7.3.1 About the Study 155
7.3.2 Consents Sought and Consent Procedure 156
7.3.3 Analytic Sample 157
7.3.4 Methods 158
7.4 Results 160
7.4.1 Consent Rates 160
7.4.2 Regression Models 163
7.4.2.1 Concepts and Variables 163
7.4.2.2 Characteristics Related to All or Most Consent Domains 164
7.4.2.3 National Health Service (NHS) Records 164
7.4.2.4 Police National Computer (PNC) Criminal Records 167
7.4.2.5 Education Records 167
7.4.2.6 Economic Records 170
7.5 Discussion 173
7.5.1 Summary of Results 173
7.5.2 Methodological Considerations and Limitations 176
7.5.3 Practical Implications 177
References 177
8 Consent to Data Linkage: Experimental Evidence from an Online Panel 181
Ben Edwards and Nicholas Biddle
8.1 Introduction 181
8.2 Background 182
8.2.1 Experimental Studies of Data Linkage Consent in Longitudinal Surveys 183
8.3 Research Questions 186
8.4 Method 187
8.4.1 Data 187
8.4.2 Study 1: Attrition Following Data Linkage Consent 187
8.4.3 Study 2: Testing the Effect of Type and Length of Data Linkage Consent Questions 188
8.5 Results 190
8.5.1 Do Requests for Data Linkage Consent Affect Response Rates in Subsequent Waves? (RQ1) 190
8.5.2 Do Consent Rates Depend on Type of Data Linkage Requested? (RQ2a) 191
8.5.3 Do Consent Rates Depend on Survey Mode? (RQ2b) 193
8.5.4 Do Consent Rates Depend on the Length of the Request? (RQ2c) 193
8.5.5 Effects on Understanding of the Data Linkage Process (RQ3) 194
8.5.6 Effects on Perceptions of the Risk of Data Linkage (RQ4) 197
8.6 Discussion 198
References 200
9 Mixing Modes in Household Panel Surveys: Recent Developments and New Findings 204
Marieke Voorpostel, Oliver Lipps and Caroline Roberts
9.1 Introduction 204
9.2 The Challenges of Mixing Modes in Household Panel Surveys 205
9.3 Current Experiences with Mixing Modes in Longitudinal Household Panels 207
9.3.1 The German Socio-Economic Panel (SOEP) 207
9.3.2 The Household, Income, and Labour Dynamics in Australia (HILDA) Survey 208
9.3.3 The Panel Study of Income Dynamics (PSID) 209
9.3.4 The UK Household Longitudinal Study (UKHLS) 211
9.3.5 The Korean Labour and Income Panel Study (KLIPS) 212
9.3.6 The Swiss Household Panel (SHP) 213
9.4 The Mixed-Mode Pilot of the Swiss Household Panel Study 214
9.4.1 Design of the SHP Pilot 214
9.4.2 Results of the First Wave 217
9.4.2.1 Overall Response Rates in the Three Groups 217
9.4.2.2 Use of Different Modes in the Three Groups 217
9.4.2.3 Household Nonresponse in the Three Groups 219
9.4.2.4 Individual Nonresponse in the Three Groups 221
9.5 Conclusion 223
References 224
10 Estimating the Measurement Effects of Mixed Modes in Longitudinal Studies: Current Practice and Issues 227
Alexandru Cernat and Joseph W. Sakshaug
10.1 Introduction 227
10.2 Types of Mixed-Mode Designs 230
10.3 Mode Effects and Longitudinal Data 232
10.3.1 Estimating Change from Mixed-Mode Longitudinal Survey Data 233
10.3.2 General Concepts in the Investigation of Mode Effects 233
10.3.3 Mode Effects on Measurement in Longitudinal Data: Literature Review 235
10.4 Methods for Estimating Mode Effects on Measurement in Longitudinal Studies 237
10.5 Using Structural Equation Modelling to Investigate Mode Differences in Measurement 239
10.6 Conclusion 245
Acknowledgement 246
References 246
11 Measuring Cognition in a Multi-Mode Context 250
Mary Beth Ofstedal, Colleen A. McClain and Mick P. Couper
11.1 Introduction 250
11.2 Motivation and Previous Literature 251
11.2.1 Measurement of Cognition in Surveys 251
11.2.2 Mode Effects and Survey Response 252
11.2.3 Cognition in a Multi-Mode Context 252
11.2.4 Existing Mode Comparisons of Cognitive Ability 254
11.3 Data and Methods 256
11.3.1 Data Source 256
11.3.2 Analytic Sample 256
11.3.3 Administration of Cognitive Tests 257
11.3.4 Methods 258
11.3.4.1 Item Missing Data 259
11.3.4.2 Completion Time 259
11.3.4.3 Overall Differences in Scores 259
11.3.4.4 Correlations Between Measures 259
11.3.4.5 Trajectories over Time 260
11.3.4.6 Models Predicting Cognition as an Outcome 260
11.4 Results 261
11.4.1 Item-Missing Data 261
11.4.2 Completion Time 262
11.4.3 Differences in Mean Scores 262
11.4.4 Correlations Between Measures 263
11.4.5 Trajectories over Time 263
11.4.6 Substantive Models 265
11.5 Discussion 266
Acknowledgements 268
References 268
12 Panel Conditioning: Types, Causes, and Empirical Evidence of What We Know So Far 272
Bella Struminskaya and Michael Bosnjak
12.1 Introduction 272
12.2 Methods for Studying Panel Conditioning 273
12.3 Mechanisms of Panel Conditioning 276
12.3.1 Survey Response Process and the Effects of Repeated Interviewing 276
12.3.2 Reflection/Cognitive Stimulus 279
12.3.3 Empirical Evidence of Reflection/Cognitive Stimulus 280
12.3.3.1 Changes in Attitudes Due to Reflection 280
12.3.3.2 Changes in (Self-Reported) Behaviour Due to Reflection 282
12.3.3.3 Changes in Knowledge Due to Reflection 284
12.3.4 Social Desirability Reduction 285
12.3.5 Empirical Evidence of Social Desirability Effects 285
12.3.6 Satisficing 287
12.3.7 Empirical Evidence of Satisficing 288
12.3.7.1 Misreporting to Filter Questions as a Conditioning Effect Due to Satisficing 288
12.3.7.2 Misreporting to More Complex Filter (Looping) Questions 289
12.3.7.3 Within-Interview and Between-Waves Conditioning in Filter Questions 290
12.4 Conclusion and Implications for Survey Practice 292
References 295
13 Interviewer Effects in Panel Surveys 302
Simon Kühne and Martin Kroh
13.1 Introduction 302
13.2 Motivation and State of Research 303
13.2.1 Sources of Interviewer-Related Measurement Error 303
13.2.1.1 Interviewer Deviations 304
13.2.1.2 Social Desirability 305
13.2.1.3 Priming 307
13.2.2 Moderating Factors of Interviewer Effects 307
13.2.3 Interviewer Effects in Panel Surveys 308
13.2.4 Identifying Interviewer Effects 310
13.2.4.1 Interviewer Variance 310
13.2.4.2 Interviewer Bias 311
13.2.4.3 Using Panel Data to Identify Interviewer Effects 312
13.3 Data 313
13.3.1 The Socio-Economic Panel 313
13.3.2 Variables 314
13.4 The Size and Direction of Interviewer Effects in Panels 314
13.4.1 Methods 314
13.4.2 Results 318
13.4.3 Effects on Precision 320
13.4.4 Effects on Validity 321
13.5 Dynamics of Interviewer Effects in Panels 322
13.5.1 Methods 324
13.5.2 Results 324
13.5.2.1 Interviewer Variance 324
13.5.2.2 Interviewer Bias 325
13.6 Summary and Discussion 326
References 329
14 Improving Survey Measurement of Household Finances: A Review of New Data Sources and Technologies 337
Annette Jäckle, Mick P. Couper, Alessandra Gaia and Carli Lessof
14.1 Introduction 337
14.1.1 Why Is Good Financial Data Important for Longitudinal Surveys? 338
14.1.2 Why New Data Sources and Technologies for Longitudinal Surveys? 339
14.1.3 How Can New Technologies Change the Measurement Landscape? 340
14.2 The Total Survey Error Framework 341
14.3 Review of New Data Sources and Technologies 343
14.3.1 Financial Aggregators 346
14.3.2 Loyalty Card Data 346
14.3.3 Credit and Debit Card Data 347
14.3.4 Credit Rating Data 348
14.3.5 In-Home Scanning of Barcodes 349
14.3.6 Scanning of Receipts 350
14.3.7 Mobile Applications and Expenditure Diaries 350
14.4 New Data Sources and Technologies and TSE 352
14.4.1 Errors of Representation 352
14.4.1.1 Coverage Error 352
14.4.1.2 Non-Participation Error 353
14.4.2 Measurement Error 355
14.4.2.1 Specification Error 355
14.4.2.2 Missing or Duplicate Items/Episodes 356
14.4.2.3 Data Capture Error 357
14.4.2.4 Processing or Coding Error 357
14.4.2.5 Conditioning Error 357
14.5 Challenges and Opportunities 358
Acknowledgements 360
References 360
15 How to Pop the Question? Interviewer and Respondent Behaviours When Measuring Change with Proactive Dependent Interviewing 368
Annette Jäckle, Tarek Al Baghal, Stephanie Eckman and Emanuela Sala
15.1 Introduction 368
15.2 Background 370
15.3 Data 374
15.4 Behaviour Coding Interviewer and Respondent Interactions 376
15.5 Methods 379
15.6 Results 380
15.6.1 Does the DI Wording Affect How Interviewers and Respondents Behave? (RQ1) 381
15.6.2 Does the Wording of DI Questions Affect the Sequences of Interviewer and Respondent Interactions? (RQ2) 382
15.6.3 Which Interviewer Behaviours Lead to Respondents Giving Codeable Answers? (RQ3) 385
15.6.4 Are the Different Rates of Change Measured with Different DI Wordings Explained by Differences in I and R Behaviours? (RQ4) 386
15.7 Conclusion 388
Acknowledgements 390
References 390
16 Assessing Discontinuities and Rotation Group Bias in Rotating Panel Designs 399
Jan A. van den Brakel, Paul A. Smith, Duncan Elliott, Sabine Krieg, Timo Schmid and Nikos Tzavidis
16.1 Introduction 399
16.2 Methods for Quantifying Discontinuities 401
16.3 Time Series Models for Rotating Panel Designs 402
16.3.1 Rotating Panels and Rotation Group Bias 402
16.3.2 Structural Time Series Model for Rotating Panels 404
16.3.3 Fitting Structural Time Series Models 407
16.4 Time Series Models for Discontinuities in Rotating Panel Designs 408
16.4.1 Structural Time Series Model for Discontinuities 409
16.4.2 Parallel Run 410
16.4.3 Combining Information from a Parallel Run with the Intervention Model 411
16.4.4 Auxiliary Time Series 412
16.5 Examples 412
16.5.1 Redesigns in the Dutch LFS 412
16.5.2 Using a State Space Model to Assess Redesigns in the UK LFS 417
16.6 Discussion 419
References 421
17 Proper Multiple Imputation of Clustered or Panel Data 424
Martin Spiess, Kristian Kleinke and Jost Reinecke
17.1 Introduction 424
17.2 Missing Data Mechanism and Ignorability 425
17.3 Multiple Imputation (MI) 426
17.3.1 Theory and Basic Approaches 426
17.3.2 Single Versus Multiple Imputation 429
17.3.2.1 Unconditional Mean Imputation and Regression Imputation 430
17.3.2.2 Last Observation Carried Forward 430
17.3.2.3 Row-and-Column Imputation 432
17.4 Issues in the Longitudinal Context 434
17.4.1 Single-Level Imputation 435
17.4.2 Multilevel Multiple Imputation 437
17.4.3 Interactions and Non-Linear Associations 439
17.5 Discussion 441
References 443
18 Issues in Weighting for Longitudinal Surveys 447
Peter Lynn and Nicole Watson
18.1 Introduction: The Longitudinal Context 447
18.1.1 Dynamic Study Population 447
18.1.2 Wave Non-Response Patterns 448
18.1.3 Auxiliary Variables 449
18.1.4 Longitudinal Surveys as a Multi-Purpose Research Resource 450
18.1.5 Multiple Samples 450
18.2 Population Dynamics 451
18.2.1 Post-Stratification 451
18.2.2 Population Entrants 453
18.2.3 Uncertain Eligibility 454
18.3 Sample Participation Dynamics 458
18.3.1 Subsets of Instrument Combinations 459
18.3.2 Weights for Each Pair of Instruments 461
18.3.3 Analysis-Specific Weights 462
18.4 Combining Multiple Non-Response Models 463
18.5 Discussion 465
Acknowledgements 466
References 467
19 Small-Area Estimation of Cross-Classified Gross Flows Using Longitudinal Survey Data 469
Yves Thibaudeau, Eric Slud and Yang Cheng
19.1 Introduction 469
19.2 Role of Model-Assisted Estimation in Small Area Estimation 470
19.3 Data and Methods 471
19.3.1 Data 471
19.3.2 Estimate and Variance Comparisons 473
19.4 Estimating Gross Flows 474
19.5 Models 475
19.5.1 Generalised Logistic Fixed Effect Models 475
19.5.2 Fixed Effect Logistic Models for Estimating Gross Flows 476
19.5.3 Equivalence between Fixed-Effect Logistic Regression and Log-Linear Models 477
19.5.4 Weighted Estimation 478
19.5.5 Mixed-Effect Logit Models for Gross Flows 479
19.5.6 Application to the Estimation of Gross Flows 481
19.6 Results 481
19.6.1 Goodness of Fit Tests for Fixed Effect Models 481
19.6.2 Fixed-Effect Logit-Based Estimation of Gross Flows 483
19.6.3 Mixed Effect Models 483
19.6.4 Comparison of Models through BRR Variance Estimation 483
19.7 Discussion 486
Acknowledgements 488
References 488
20 Nonparametric Estimation for Longitudinal Data with Informative Missingness 491
Zahoor Ahmad and Li-Chun Zhang
20.1 Introduction 491
20.2 Two NEE Estimators of Change 494
20.3 On the Bias of NEE 497
20.4 Variance Estimation 499
20.4.1 NEE (Expression 20.3) 499
20.4.2 NEE (Expression 20.6) 500
20.5 Simulation Study 501
20.5.1 Data 502
20.5.2 Response Probability Models 502
20.5.3 Simulation Set-up 503
20.5.4 Results 504
20.6 Conclusions 507
References 511
Index 513
You might also be interested in
Improving Survey Methods
Uwe Engel (University of Bremen, Germany), Ben Jann (University of Bern, CH), Peter Lynn (University of Essex, UK), Annette Scherpenzeel (Utrecht University, NL), Patrick Sturgis (University of Southampton, UK)
1 149 kr
Improving Survey Methods
Uwe Engel (University of Bremen, Germany), Ben Jann (University of Bern, CH), Peter Lynn (University of Essex, UK), Annette Scherpenzeel (Utrecht University, NL), Patrick Sturgis (University of Southampton, UK)
2 529 kr