Part 757 of the Wiley Series in Probability and Statistics
Meta Analysis
A Guide to Calibrating and Combining Statistical Evidence
Paperback, English, 2008
By Elena Kulinskaya (Statistical Advisory Service), Stephan Morgenthaler (Swiss Federal Institute of Technology) and Robert G. Staudte (La Trobe University, Australia)
1 139 kr
Product information
- Publication date: 2008-02-29
- Dimensions: 154 x 229 x 17 mm
- Weight: 431 g
- Format: Paperback
- Language: English
- Series: Wiley Series in Probability and Statistics
- Number of pages: 288
- Publisher: John Wiley & Sons Inc
- ISBN: 9780470028643
 
Dr. E. Kulinskaya is Director of the Statistical Advisory Service, Imperial College, London.

Professor S. Morgenthaler holds the Chair of Applied Statistics at the Ecole Polytechnique Fédérale de Lausanne (EPFL), Switzerland. He was Assistant Professor at Yale University before moving to EPFL and has chaired various ISI committees.

Professor R. G. Staudte is in the Department of Statistical Science, La Trobe University, Melbourne. During his career at La Trobe he served as Head of the Department of Statistical Science for five years and as Head of the School of Mathematical and Statistical Sciences for two years. He was an Associate Editor of the Journal of Statistical Planning & Inference for four years, and is a member of the American Statistical Association, the Sigma Xi Scientific Research Society and the Statistical Society of Australia.
Table of contents

Preface

Part I: The Methods

1 What can the reader expect from this book?
  1.1 A calibration scale for evidence
    1.1.1 T-values and p-values
    1.1.2 How generally applicable is the calibration scale?
    1.1.3 Combining evidence
  1.2 The efficacy of glass ionomer versus resin sealants for prevention of caries
    1.2.1 The data
    1.2.2 Analysis for individual studies
    1.2.3 Combining the evidence: fixed effects model
    1.2.4 Combining the evidence: random effects model
  1.3 Measures of effect size for two populations
  1.4 Summary
2 Independent measurements with known precision
  2.1 Evidence for one-sided alternatives
  2.2 Evidence for two-sided alternatives
  2.3 Examples
    2.3.1 Filling containers
    2.3.2 Stability of blood samples
    2.3.3 Blood alcohol testing
3 Independent measurements with unknown precision
  3.1 Effects and standardized effects
  3.2 Paired comparisons
  3.3 Examples
    3.3.1 Daily energy intake compared to a fixed level
    3.3.2 Darwin's data on Zea mays
4 Comparing treatment to control
  4.1 Equal unknown precision
  4.2 Differing unknown precision
  4.3 Examples
    4.3.1 Drop in systolic blood pressure
    4.3.2 Effect of psychotherapy on hospital length of stay
5 Comparing K treatments
  5.1 Methodology
  5.2 Examples
    5.2.1 Characteristics of antibiotics
    5.2.2 Red cell folate levels
6 Evaluating risks
  6.1 Methodology
  6.2 Examples
    6.2.1 Ultrasound and left-handedness
    6.2.2 Treatment of recurrent urinary tract infections
7 Comparing risks
  7.1 Methodology
  7.2 Examples
    7.2.1 Treatment of recurrent urinary tract infections
    7.2.2 Diuretics in pregnancy and risk of pre-eclampsia
8 Evaluating Poisson rates
  8.1 Methodology
  8.2 Example
    8.2.1 Deaths by horse-kicks
9 Comparing Poisson rates
  9.1 Methodology
    9.1.1 Unconditional evidence
    9.1.2 Conditional evidence
  9.2 Example
    9.2.1 Vaccination for the prevention of tuberculosis
10 Goodness-of-fit testing
  10.1 Methodology
  10.2 Example
    10.2.1 Bellbirds arriving to feed nestlings
11 Evidence for heterogeneity of effects and transformed effects
  11.1 Methodology
    11.1.1 Fixed effects
    11.1.2 Random effects
  11.2 Examples
    11.2.1 Deaths by horse-kicks
    11.2.2 Drop in systolic blood pressure
    11.2.3 Effect of psychotherapy on hospital length of stay
    11.2.4 Diuretics in pregnancy and risk of pre-eclampsia
12 Combining evidence: fixed standardized effects model
  12.1 Methodology
  12.2 Examples
    12.2.1 Deaths by horse-kicks
    12.2.2 Drop in systolic blood pressure
13 Combining evidence: random standardized effects model
  13.1 Methodology
  13.2 Example
    13.2.1 Diuretics in pregnancy and risk of pre-eclampsia
14 Meta-regression
  14.1 Methodology
  14.2 Commonly encountered situations
    14.2.1 Standardized difference of means
    14.2.2 Difference in risk (two binomial proportions)
    14.2.3 Log relative risk (two Poisson rates)
  14.3 Examples
    14.3.1 Effect of open education on student creativity
    14.3.2 Vaccination for the prevention of tuberculosis
15 Accounting for publication bias
  15.1 The downside of publishing
  15.2 Examples
    15.2.1 Environmental tobacco smoke
    15.2.2 Depression prevention programs

Part II: The Theory

16 Calibrating evidence in a test
  16.1 Evidence for one-sided alternatives
    16.1.1 Desirable properties of one-sided evidence
    16.1.2 Connection of evidence to p-values
    16.1.3 Why the p-value is hard to understand
  16.2 Random p-value behavior
    16.2.1 Properties of the random p-value distribution
    16.2.2 Important consequences for interpreting p-values
  16.3 Publication bias
  16.4 Comparison with a Bayesian calibration
  16.5 Summary
17 The basics of variance stabilizing transformations
  17.1 Standardizing the sample mean
  17.2 Variance stabilizing transformations
    17.2.1 Background material
    17.2.2 The Key Inferential Function
  17.3 Poisson model example
    17.3.1 Example of counts data
    17.3.2 A simple vst for the Poisson model
    17.3.3 A better vst for the Poisson model
    17.3.4 Achieving a desired expected evidence
    17.3.5 Confidence intervals
    17.3.6 Simulation study of coverage probabilities
  17.4 Two-sided evidence from one-sided evidence
    17.4.1 A vst based on the chi-squared statistic
    17.4.2 A vst based on doubling the p-value
  17.5 Summary
18 One-sample binomial tests
  18.1 Variance stabilizing the risk estimator
  18.2 Confidence intervals for p
  18.3 Relative risk and odds ratio
    18.3.1 One-sample relative risk
    18.3.2 One-sample odds ratio
  18.4 Confidence intervals for small risks p
    18.4.1 Comparing intervals based on the log and arcsine transformations
    18.4.2 Confidence intervals for small p based on the Poisson approximation to the binomial
  18.5 Summary
19 Two-sample binomial tests
  19.1 Evidence for a positive effect
    19.1.1 Variance stabilizing the risk difference
    19.1.2 Simulation studies
    19.1.3 Choosing sample sizes to achieve desired expected evidence
    19.1.4 Implications for the relative risk and odds ratio
  19.2 Confidence intervals for effect sizes
  19.3 Estimating the risk difference
  19.4 Relative risk and odds ratio
    19.4.1 Two-sample relative risk
    19.4.2 Two-sample odds ratio
    19.4.3 New confidence intervals for the RR and OR
  19.5 Recurrent urinary tract infections
  19.6 Summary
20 Defining evidence in t-statistics
  20.1 Example
  20.2 Evidence in the Student t-statistic
  20.3 The Key Inferential Function for Student's model
  20.4 Corrected evidence
    20.4.1 Matching p-values
    20.4.2 Accurate confidence intervals
  20.5 A confidence interval for the standardized effect
    20.5.1 Simulation study of coverage probabilities
  20.6 Comparing evidence in t- and z-tests
    20.6.1 On substituting s for σ in large samples
  20.7 Summary
21 Two-sample comparisons
  21.1 Drop in systolic blood pressure
  21.2 Defining the standardized effect
  21.3 Evidence in the Welch statistic
    21.3.1 The Welch statistic
    21.3.2 Variance stabilizing the Welch t-statistic
    21.3.3 Choosing the sample size to obtain evidence
  21.4 Confidence intervals for δ
    21.4.1 Converting the evidence to confidence intervals
    21.4.2 Simulation studies
    21.4.3 Drop in systolic blood pressure (continued)
  21.5 Summary
22 Evidence in the chi-squared statistic
  22.1 The noncentral chi-squared distribution
  22.2 A vst for the noncentral chi-squared statistic
    22.2.1 Deriving the vst
    22.2.2 The Key Inferential Function
  22.3 Simulation studies
    22.3.1 Bias in the evidence function
    22.3.2 Upper confidence bounds; confidence intervals
  22.4 Choosing the sample size
    22.4.1 Sample sizes for obtaining an expected evidence
    22.4.2 Sample size required to obtain a desired power
  22.5 Evidence for λ > λ0
  22.6 Summary
23 Evidence in F-tests
  23.1 Variance stabilizing transformations for the noncentral F
  23.2 The evidence distribution
  23.3 The Key Inferential Function
    23.3.1 Refinements
  23.4 The random effects model
    23.4.1 Expected evidence in the balanced case
    23.4.2 Comparing evidence in REM and FEM
  23.5 Summary
24 Evidence in Cochran's Q for heterogeneity of effects
  24.1 Cochran's Q: the fixed effects model
    24.1.1 Background material
    24.1.2 Evidence for heterogeneity of fixed effects
    24.1.3 Evidence for heterogeneity of transformed effects
  24.2 Simulation studies
  24.3 Cochran's Q: the random effects model
  24.4 Summary
25 Combining evidence from K studies
  25.1 Background and preliminary steps
  25.2 Fixed standardized effects
    25.2.1 Fixed, and equal, standardized effects
    25.2.2 Fixed, but unequal, standardized effects
    25.2.3 Nuisance parameters
  25.3 Random transformed effects
    25.3.1 The random transformed effects model
    25.3.2 Evidence for a positive effect
    25.3.3 Confidence intervals for κ and δ: K small
    25.3.4 Confidence intervals for κ and δ: K large
    25.3.5 Simulation studies
  25.4 Example: drop in systolic blood pressure
    25.4.1 Inference for the fixed effects model
    25.4.2 Inference for the random effects model
  25.5 Summary
26 Correcting for publication bias
  26.1 Publication bias
  26.2 The truncated normal distribution
  26.3 Bias correction based on censoring
  26.4 Summary
27 Large-sample properties of variance stabilizing transformations
  27.1 Existence of the variance stabilizing transformation
  27.2 Tests and effect sizes
  27.3 Power and efficiency
  27.4 Summary

References
Index
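The variance stabilizing transformations (vst) that run through Part II are introduced in Chapter 17 with the Poisson model. As a quick, self-contained illustration of the idea — a sketch of the standard square-root vst for Poisson counts, not code from the book; the function names and the Knuth sampler are my own choices — the snippet below checks by simulation that while Var(X) grows with the rate λ, Var(2√X) stays close to 1:

```python
import math
import random
import statistics

def poisson_sample(lam, rng):
    """Draw one Poisson(lam) variate via Knuth's multiplication method (lam > 0)."""
    L = math.exp(-lam)
    k, p = 0, 1.0
    while p > L:
        k += 1
        p *= rng.random()
    return k - 1

def stabilized(x):
    """Square-root variance stabilizing transformation for a Poisson count:
    if X ~ Poisson(lam), then 2*sqrt(X) has variance close to 1 for large lam,
    whatever the value of lam."""
    return 2.0 * math.sqrt(x)

if __name__ == "__main__":
    rng = random.Random(0)
    for lam in (10, 25, 50):
        draws = [poisson_sample(lam, rng) for _ in range(20000)]
        raw_var = statistics.variance(draws)                         # grows like lam
        vst_var = statistics.variance(stabilized(d) for d in draws)  # stays near 1
        print(f"lam={lam:3d}  Var(X)={raw_var:6.2f}  Var(2*sqrt(X))={vst_var:.3f}")
```

Putting the statistic on this common unit-variance scale is what lets the book calibrate and combine "evidence" across studies with different rates; Chapter 17.3.3 refines this simple transformation further.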
 
"A book that offers an alternative, widely applicable, rigorously justified theory of meta-analysis." (Evidence Based Medicine, April 2009) "The book is well written and includes many examples. The book provides an interesting angle on statistical inference by introducing the concept of ‘evidence’. I enjoyed this concept very much." (Statistics in Medicine, May 2009)"I found the book well written, reasonably complete, and easy to read … .I recommend this book for both the new and experienced meta-analysts." (Journal of Biopharmaceutical Statistics, March 2009)