Bayesian Biostatistics
Hardcover, English, 2012
By Emmanuel Lesaffre (K.U. Leuven, Belgium) and Andrew B. Lawson (Medical University of South Carolina, USA)
1 499 kr
Product information
- Publication date: 2012-07-27
- Dimensions: 170 x 249 x 33 mm
- Weight: 1 021 g
- Format: Hardcover
- Language: English
- Series: Statistics in Practice
- Number of pages: 544
- Publisher: John Wiley & Sons Inc
- ISBN: 9780470018231
Emmanuel Lesaffre is Professor of Statistics at the Biostatistical Centre, Catholic University of Leuven, Leuven, Belgium. Dr Lesaffre has worked in various areas of biostatistics for 25 years. He has taught a variety of courses to students from many disciplines, from medicine and pharmacy to statistics and engineering, and has taught Bayesian statistics for the last five years. He has published over 200 papers in major statistical and medical journals, co-edited the book Disease Mapping and Risk Assessment for Public Health, and served as Associate Editor of Biometrics. He is currently Co-Editor of the journal Statistical Modelling: An International Journal, Special Editor of two volumes on statistics in dentistry in Statistical Methods in Medical Research, and a member of the editorial boards of numerous journals.

Andrew Lawson is Professor of Statistics in the Department of Epidemiology and Biostatistics, Medical University of South Carolina, USA. Dr Lawson has considerable and wide-ranging experience in the development of statistical methods for spatial and environmental epidemiology. He has solid experience teaching Bayesian statistics to biostatistics students and has written two books and numerous journal articles in the biostatistics area. Dr Lawson has also guest-edited two special issues of Statistics in Medicine focusing on disease mapping, and he is a member of the editorial boards of several journals, including Statistics in Medicine.
Table of contents

Preface
Notation, terminology and some guidance for reading the book

Part I Basic Concepts in Bayesian Methods

1 Modes of statistical inference
1.1 The frequentist approach: A critical reflection
1.1.1 The classical statistical approach
1.1.2 The P-value as a measure of evidence
1.1.3 The confidence interval as a measure of evidence
1.1.4 An historical note on the two frequentist paradigms*
1.2 Statistical inference based on the likelihood function
1.2.1 The likelihood function
1.2.2 The likelihood principles
1.3 The Bayesian approach: Some basic ideas
1.3.1 Introduction
1.3.2 Bayes theorem – discrete version for simple events
1.4 Outlook
Exercises

2 Bayes theorem: Computing the posterior distribution
2.1 Introduction
2.2 Bayes theorem – the binary version
2.3 Probability in a Bayesian context
2.4 Bayes theorem – the categorical version
2.5 Bayes theorem – the continuous version
2.6 The binomial case
2.7 The Gaussian case
2.8 The Poisson case
2.9 The prior and posterior distribution of h(θ)
2.10 Bayesian versus likelihood approach
2.11 Bayesian versus frequentist approach
2.12 The different modes of the Bayesian approach
2.13 An historical note on the Bayesian approach
2.14 Closing remarks
Exercises

3 Introduction to Bayesian inference
3.1 Introduction
3.2 Summarizing the posterior by probabilities
3.3 Posterior summary measures
3.3.1 Characterizing the location and variability of the posterior distribution
3.3.2 Posterior interval estimation
3.4 Predictive distributions
3.4.1 The frequentist approach to prediction
3.4.2 The Bayesian approach to prediction
3.4.3 Applications
3.5 Exchangeability
3.6 A normal approximation to the posterior
3.6.1 A Bayesian analysis based on a normal approximation to the likelihood
3.6.2 Asymptotic properties of the posterior distribution
3.7 Numerical techniques to determine the posterior
3.7.1 Numerical integration
3.7.2 Sampling from the posterior
3.7.3 Choice of posterior summary measures
3.8 Bayesian hypothesis testing
3.8.1 Inference based on credible intervals
3.8.2 The Bayes factor
3.8.3 Bayesian versus frequentist hypothesis testing
3.9 Closing remarks
Exercises

4 More than one parameter
4.1 Introduction
4.2 Joint versus marginal posterior inference
4.3 The normal distribution with μ and σ² unknown
4.3.1 No prior knowledge on μ and σ² is available
4.3.2 An historical study is available
4.3.3 Expert knowledge is available
4.4 Multivariate distributions
4.4.1 The multivariate normal and related distributions
4.4.2 The multinomial distribution
4.5 Frequentist properties of Bayesian inference
4.6 Sampling from the posterior distribution: The Method of Composition
4.7 Bayesian linear regression models
4.7.1 The frequentist approach to linear regression
4.7.2 A noninformative Bayesian linear regression model
4.7.3 Posterior summary measures for the linear regression model
4.7.4 Sampling from the posterior distribution
4.7.5 An informative Bayesian linear regression model
4.8 Bayesian generalized linear models
4.9 More complex regression models
4.10 Closing remarks
Exercises

5 Choosing the prior distribution
5.1 Introduction
5.2 The sequential use of Bayes theorem
5.3 Conjugate prior distributions
5.3.1 Univariate data distributions
5.3.2 Normal distribution – mean and variance unknown
5.3.3 Multivariate data distributions
5.3.4 Conditional conjugate and semiconjugate distributions
5.3.5 Hyperpriors
5.4 Noninformative prior distributions
5.4.1 Introduction
5.4.2 Expressing ignorance
5.4.3 General principles to choose noninformative priors
5.4.4 Improper prior distributions
5.4.5 Weak/vague priors
5.5 Informative prior distributions
5.5.1 Introduction
5.5.2 Data-based prior distributions
5.5.3 Elicitation of prior knowledge
5.5.4 Archetypal prior distributions
5.6 Prior distributions for regression models
5.6.1 Normal linear regression
5.6.2 Generalized linear models
5.6.3 Specification of priors in Bayesian software
5.7 Modeling priors
5.8 Other regression models
5.9 Closing remarks
Exercises

6 Markov chain Monte Carlo sampling
6.1 Introduction
6.2 The Gibbs sampler
6.2.1 The bivariate Gibbs sampler
6.2.2 The general Gibbs sampler
6.2.3 Remarks*
6.2.4 Review of Gibbs sampling approaches
6.2.5 The Slice sampler*
6.3 The Metropolis(–Hastings) algorithm
6.3.1 The Metropolis algorithm
6.3.2 The Metropolis–Hastings algorithm
6.3.3 Remarks*
6.3.4 Review of Metropolis(–Hastings) approaches
6.4 Justification of the MCMC approaches*
6.4.1 Properties of the MH algorithm
6.4.2 Properties of the Gibbs sampler
6.5 Choice of the sampler
6.6 The Reversible Jump MCMC algorithm*
6.7 Closing remarks
Exercises

7 Assessing and improving convergence of the Markov chain
7.1 Introduction
7.2 Assessing convergence of a Markov chain
7.2.1 Definition of convergence for a Markov chain
7.2.2 Checking convergence of the Markov chain
7.2.3 Graphical approaches to assess convergence
7.2.4 Formal diagnostic tests
7.2.5 Computing the Monte Carlo standard error
7.2.6 Practical experience with the formal diagnostic procedures
7.3 Accelerating convergence
7.3.1 Introduction
7.3.2 Acceleration techniques
7.4 Practical guidelines for assessing and accelerating convergence
7.5 Data augmentation
7.6 Closing remarks
Exercises

8 Software
8.1 WinBUGS and related software
8.1.1 A first analysis
8.1.2 Information on samplers
8.1.3 Assessing and accelerating convergence
8.1.4 Vector and matrix manipulations
8.1.5 Working in batch mode
8.1.6 Troubleshooting
8.1.7 Directed acyclic graphs
8.1.8 Add-on modules: GeoBUGS and PKBUGS
8.1.9 Related software
8.2 Bayesian analysis using SAS
8.2.1 Analysis using procedure GENMOD
8.2.2 Analysis using procedure MCMC
8.2.3 Other Bayesian programs
8.3 Additional Bayesian software and comparisons
8.3.1 Additional Bayesian software
8.3.2 Comparison of Bayesian software
8.4 Closing remarks
Exercises

Part II Bayesian Tools for Statistical Modeling

9 Hierarchical models
9.1 Introduction
9.2 The Poisson-gamma hierarchical model
9.2.1 Introduction
9.2.2 Model specification
9.2.3 Posterior distributions
9.2.4 Estimating the parameters
9.2.5 Posterior predictive distributions
9.3 Full versus empirical Bayesian approach
9.4 Gaussian hierarchical models
9.4.1 Introduction
9.4.2 The Gaussian hierarchical model
9.4.3 Estimating the parameters
9.4.4 Posterior predictive distributions
9.4.5 Comparison of FB and EB approach
9.5 Mixed models
9.5.1 Introduction
9.5.2 The linear mixed model
9.5.3 The generalized linear mixed model
9.5.4 Nonlinear mixed models
9.5.5 Some further extensions
9.5.6 Estimation of the random effects and posterior predictive distributions
9.5.7 Choice of the level-2 variance prior
9.6 Propriety of the posterior
9.7 Assessing and accelerating convergence
9.8 Comparison of Bayesian and frequentist hierarchical models
9.8.1 Estimating the level-2 variance
9.8.2 ML and REML estimates compared with Bayesian estimates
9.9 Closing remarks
Exercises

10 Model building and assessment
10.1 Introduction
10.2 Measures for model selection
10.2.1 The Bayes factor
10.2.2 Information theoretic measures for model selection
10.2.3 Model selection based on predictive loss functions
10.3 Model checking
10.3.1 Introduction
10.3.2 Model-checking procedures
10.3.3 Sensitivity analysis
10.3.4 Posterior predictive checks
10.3.5 Model expansion
10.4 Closing remarks
Exercises

11 Variable selection
11.1 Introduction
11.2 Classical variable selection
11.2.1 Variable selection techniques
11.2.2 Frequentist regularization
11.3 Bayesian variable selection: Concepts and questions
11.4 Introduction to Bayesian variable selection
11.4.1 Variable selection for K small
11.4.2 Variable selection for K large
11.5 Variable selection based on Zellner's g-prior
11.6 Variable selection based on Reversible Jump Markov chain Monte Carlo
11.7 Spike and slab priors
11.7.1 Stochastic Search Variable Selection
11.7.2 Gibbs Variable Selection
11.7.3 Dependent variable selection using SSVS
11.8 Bayesian regularization
11.8.1 Bayesian LASSO regression
11.8.2 Elastic Net and further extensions of the Bayesian LASSO
11.9 The many regressors case
11.10 Bayesian model selection
11.11 Bayesian model averaging
11.12 Closing remarks
Exercises

Part III Bayesian Methods in Practical Applications

12 Bioassay
12.1 Bioassay essentials
12.1.1 Cell assays
12.1.2 Animal assays
12.2 A generic in vitro example
12.3 Ames/Salmonella mutagenic assay
12.4 Mouse lymphoma assay (L5178Y TK+/−)
12.5 Closing remarks

13 Measurement error
13.1 Continuous measurement error
13.1.1 Measurement error in a variable
13.1.2 Two types of measurement error on the predictor in linear and nonlinear models
13.1.3 Accommodation of predictor measurement error
13.1.4 Nonadditive errors and other extensions
13.2 Discrete measurement error
13.2.1 Sources of misclassification
13.2.2 Misclassification in the binary predictor
13.2.3 Misclassification in a binary response
13.3 Closing remarks

14 Survival analysis
14.1 Basic terminology
14.1.1 Endpoint distributions
14.1.2 Censoring
14.1.3 Random effect specification
14.1.4 A general hazard model
14.1.5 Proportional hazards
14.1.6 The Cox model with random effects
14.2 The Bayesian model formulation
14.2.1 A Weibull survival model
14.2.2 A Bayesian AFT model
14.3 Examples
14.3.1 The gastric cancer study
14.3.2 Prostate cancer in Louisiana: A spatial AFT model
14.4 Closing remarks

15 Longitudinal analysis
15.1 Fixed time periods
15.1.1 Introduction
15.1.2 A classical growth-curve example
15.1.3 Alternate data models
15.2 Random event times
15.3 Dealing with missing data
15.3.1 Introduction
15.3.2 Response missingness
15.3.3 Missingness mechanisms
15.3.4 Bayesian considerations
15.3.5 Predictor missingness
15.4 Joint modeling of longitudinal and survival responses
15.4.1 Introduction
15.4.2 An example
15.5 Closing remarks

16 Spatial applications: Disease mapping and image analysis
16.1 Introduction
16.2 Disease mapping
16.2.1 Some general spatial epidemiological issues
16.2.2 Some spatial statistical issues
16.2.3 Count data models
16.2.4 A special application area: Disease mapping/risk estimation
16.2.5 A special application area: Disease clustering
16.2.6 A special application area: Ecological analysis
16.3 Image analysis
16.3.1 fMRI modeling
16.3.2 A note on software

17 Final chapter
17.1 What this book covered
17.2 Additional Bayesian developments
17.2.1 Medical decision making
17.2.2 Clinical trials
17.2.3 Bayesian networks
17.2.4 Bioinformatics
17.2.5 Missing data
17.2.6 Mixture models
17.2.7 Nonparametric Bayesian methods
17.3 Alternative reading

Appendix: Distributions
A.1 Introduction
A.2 Continuous univariate distributions
A.3 Discrete univariate distributions
A.4 Multivariate distributions

References
Index
"In conclusion, we consider the book by Lesaffre and Lawson a noteworthy contribution to the dissemination of Bayesian methods, and a good manual of reference for many common and some specialized applications in biomedical research. The great variety of examples and topics covered offers both advantages and disadvantages. Some parts might be too specialized for statistics students, but lecturers and applied statisticians will benefit a lot from the authors' wealth of experience." (Biometrical Journal, 15 July 2013)

"The book Bayesian Biostatistics by Lesaffre and Lawson is a welcoming addition to this important area of research in biostatistical applications. For example, in the area of clinical trials, Bayesian methods provide flexibility and benefits for incorporating historical data with current data and then using the resulting posterior to make probability statements for different outcomes." (Journal of Biopharmaceutical Statistics, 1 January 2013)
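The clinical-trials workflow the second reviewer describes can be illustrated with a Beta-binomial update, the simplest conjugate case covered in the book's early chapters. The following is a minimal sketch, not taken from the book: the function name and all trial counts are hypothetical, chosen only to show how a prior built from historical data is combined with current data into a posterior that supports direct probability statements.

```python
import random

def posterior_prob_greater(a0, b0, successes, failures,
                           threshold, n_draws=100_000, seed=1):
    """Beta-binomial conjugate update (illustrative, not the book's code).

    Prior Beta(a0, b0) + binomial data -> posterior
    Beta(a0 + successes, b0 + failures).
    Returns a Monte Carlo estimate of Pr(p > threshold | data).
    """
    random.seed(seed)
    a, b = a0 + successes, b0 + failures  # conjugate posterior parameters
    draws = (random.betavariate(a, b) for _ in range(n_draws))
    return sum(d > threshold for d in draws) / n_draws

# Hypothetical historical trial: 12 responders out of 20 -> Beta(12, 8) prior.
# Hypothetical current trial: 18 responders out of 25.
prob = posterior_prob_greater(12, 8, 18, 7, threshold=0.5)
print(prob)  # posterior probability that the response rate exceeds 50%
```

With these made-up counts the posterior is Beta(30, 15), so the probability that the response rate exceeds 50% is close to 1; this is the kind of direct probability statement the reviewer contrasts with frequentist output.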