Statistical Methodologies with Medical Applications
Hardcover, English, 2016
By Poduri S.R.S. Rao (Professor of Statistics, University of Rochester, Rochester, NY, USA)
1 309 kr
Special-order item. Ships within 7-10 business days
Free shipping for members on orders of at least 249 kr.
This book presents the methodology and applications of a range of important topics in statistics, and is designed for graduate students in Statistics and Biostatistics and for medical researchers. Illustrations and more than ninety exercises with solutions are presented; they are constructed from research findings reported in medical journals, summary reports of the Centers for Disease Control and Prevention (CDC) and the World Health Organization (WHO), and practical situations. The illustrations and exercises relate to topics such as immunization, obesity, hypertension, lipid levels, diet and exercise, the harmful effects of smoking and air pollution, and the benefits of a gluten-free diet. The book can be recommended for a one- or two-semester graduate-level course for students of Statistics, Biostatistics, Epidemiology and the Health Sciences. It will also be useful as a companion for medical researchers and research-oriented physicians.
Product information
- Publication date: 2016-12-09
- Dimensions: 155 x 229 x 20 mm
- Weight: 476 g
- Format: Hardcover
- Language: English
- Number of pages: 288
- Publisher: John Wiley & Sons Inc
- ISBN: 9781119258490
S.R.S. Rao Poduri, Professor of Statistics, University of Rochester. Since receiving his Ph.D. degree in Statistics in 1965 from Harvard University under the supervision of the eminent Professor William G. Cochran, Professor Poduri has been teaching courses in five or six major areas of statistics to graduate and undergraduate students at the University of Rochester.
Table of contents
- Topics for illustrations, examples and exercises
- Preface
- List of abbreviations
- 1 Statistical measures
  - 1.1 Introduction
  - 1.2 Mean, mode and median
  - 1.3 Variance and standard deviation
  - 1.4 Quartiles, deciles and percentiles
  - 1.5 Skewness and kurtosis
  - 1.6 Frequency distributions
  - 1.7 Covariance and correlation
  - 1.8 Joint frequency distribution
  - 1.9 Linear transformation of the observations
  - 1.10 Linear combinations of two sets of observations
  - Exercises
- 2 Probability, random variable, expected value and variance
  - 2.1 Introduction
  - 2.2 Events and probabilities
  - 2.3 Mutually exclusive events
  - 2.4 Independent and dependent events
  - 2.5 Addition of probabilities
  - 2.6 Bayes' theorem
  - 2.7 Random variables and probability distributions
  - 2.8 Expected value, variance and standard deviation
  - 2.9 Moments of a distribution
  - Exercises
- 3 Odds ratios, relative risk, sensitivity, specificity and the ROC curve
  - 3.1 Introduction
  - 3.2 Odds ratio
  - 3.3 Relative risk
  - 3.4 Sensitivity and specificity
  - 3.5 The receiver operating characteristic (ROC) curve
  - Exercises
- 4 Probability distributions, expectations, variances and correlation
  - 4.1 Introduction
  - 4.2 Probability distribution of a discrete random variable
  - 4.3 Discrete distributions
  - 4.3.1 Uniform distribution
  - 4.3.2 Binomial distribution
  - 4.3.3 Multinomial distribution
  - 4.3.4 Poisson distribution
  - 4.3.5 Hypergeometric distribution
  - 4.4 Continuous distributions
  - 4.4.1 Uniform distribution of a continuous variable
  - 4.4.2 Normal distribution
  - 4.4.3 Normal approximation to the binomial distribution
  - 4.4.4 Gamma distribution
  - 4.4.5 Exponential distribution
  - 4.4.6 Chisquare distribution
  - 4.4.7 Weibull distribution
  - 4.4.8 Student's t and F distributions
  - 4.5 Joint distribution of two discrete random variables
  - 4.5.1 Conditional distributions, means and variances
  - 4.5.2 Unconditional expectations and variances
  - 4.6 Bivariate normal distribution
  - Exercises
  - Appendix A4
  - A4.1 Expected values and standard deviations of the distributions
  - A4.2 Covariance and correlation of the numbers of successes X and failures (n – X) of the binomial random variable
- 5 Means, standard errors and confidence limits
  - 5.1 Introduction
  - 5.2 Expectation, variance and standard error (S.E.) of the sample mean
  - 5.3 Estimation of the variance and standard error
  - 5.4 Confidence limits for the mean
  - 5.5 Estimator and confidence limits for the difference of two means
  - 5.6 Approximate confidence limits for the difference of two means
  - 5.6.1 Large samples
  - 5.6.2 Welch-Aspin approximation (1949, 1956)
  - 5.6.3 Cochran's approximation (1964)
  - 5.7 Matched samples and paired comparisons
  - 5.8 Confidence limits for the variance
  - 5.9 Confidence limits for the ratio of two variances
  - 5.10 Least squares and maximum likelihood methods of estimation
  - Exercises
  - Appendix A5
  - A5.1 Tschebycheff's inequality
  - A5.2 Mean square error
- 6 Proportions, odds ratios and relative risks: Estimation and confidence limits
  - 6.1 Introduction
  - 6.2 A single proportion
  - 6.3 Confidence limits for the proportion
  - 6.4 Difference of two proportions or percentages
  - 6.5 Combining proportions from independent samples
  - 6.6 More than two classes or categories
  - 6.7 Odds ratio
  - 6.8 Relative risk
  - Exercises
  - Appendix A6
  - A6.1 Approximation to the variance of ln p̂1
- 7 Tests of hypotheses: Means and variances
  - 7.1 Introduction
  - 7.2 Principal steps for the tests of a hypothesis
  - 7.2.1 Null and alternate hypotheses
  - 7.2.2 Decision rule, test statistic and the Type I & II errors
  - 7.2.3 Significance level and critical region
  - 7.2.4 The p-value
  - 7.2.5 Power of the test and the sample size
  - 7.3 Right-sided alternative, test statistic and critical region
  - 7.3.1 The p-value
  - 7.3.2 Power of the test
  - 7.3.3 Sample size required for specified power
  - 7.3.4 Right-sided alternative and estimated variance
  - 7.3.5 Power of the test with estimated variance
  - 7.4 Left-sided alternative and the critical region
  - 7.4.1 The p-value
  - 7.4.2 Power of the test
  - 7.4.3 Sample size for specified power
  - 7.4.4 Left-sided alternative with estimated variance
  - 7.5 Two-sided alternative, critical region and the p-value
  - 7.5.1 Power of the test
  - 7.5.2 Sample size for specified power
  - 7.5.3 Two-sided alternative and estimated variance
  - 7.6 Difference between two means: Variances known
  - 7.6.1 Difference between two means: Variances estimated
  - 7.7 Matched samples and paired comparison
  - 7.8 Test for the variance
  - 7.9 Test for the equality of two variances
  - 7.10 Homogeneity of variances
  - Exercises
- 8 Tests of hypotheses: Proportions and percentages
  - 8.1 A single proportion
  - 8.2 Right-sided alternative
  - 8.2.1 Critical region
  - 8.2.2 The p-value
  - 8.2.3 Power of the test
  - 8.2.4 Sample size for specified power
  - 8.3 Left-sided alternative
  - 8.3.1 Critical region
  - 8.3.2 The p-value
  - 8.3.3 Power of the test
  - 8.3.4 Sample size for specified power
  - 8.4 Two-sided alternative
  - 8.4.1 Critical region
  - 8.4.2 The p-value
  - 8.4.3 Power of the test
  - 8.4.4 Sample size for specified power
  - 8.5 Difference of two proportions
  - 8.5.1 Right-sided alternative: Critical region and p-value
  - 8.5.2 Right-sided alternative: Power and sample size
  - 8.5.3 Left-sided alternative: Critical region and p-value
  - 8.5.4 Left-sided alternative: Power and sample size
  - 8.5.5 Two-sided alternative: Critical region and p-value
  - 8.5.6 Power and sample size
  - 8.6 Specified difference of two proportions
  - 8.7 Equality of two or more proportions
  - 8.8 A common proportion
  - Exercises
- 9 The Chisquare statistic
  - 9.1 Introduction
  - 9.2 The test statistic
  - 9.2.1 A single proportion
  - 9.2.2 Specified proportions
  - 9.3 Test of goodness of fit
  - 9.4 Test of independence: (r x c) classification
  - 9.5 Test of independence: (2 x 2) classification
  - 9.5.1 Fisher's exact test of independence
  - 9.5.2 Mantel-Haenszel test statistic
  - Exercises
  - Appendix A9
  - A9.1 Derivations of 9.4(a)
  - A9.2 Equality of the proportions
- 10 Regression and correlation
  - 10.1 Introduction
  - 10.2 The regression model: One independent variable
  - 10.2.1 Least squares estimation of the regression
  - 10.2.2 Properties of the estimators
  - 10.2.3 ANOVA (Analysis of Variance) for the significance of the regression
  - 10.2.4 Tests of hypotheses, confidence limits and prediction intervals
  - 10.3 Regression on two independent variables
  - 10.3.1 Properties of the estimators
  - 10.3.2 ANOVA for the significance of the regression
  - 10.3.3 Tests of hypotheses, confidence limits and prediction intervals
  - 10.4 Multiple regression: The least squares estimation
  - 10.4.1 ANOVA for the significance of the regression
  - 10.4.2 Tests of hypotheses, confidence limits and prediction intervals
  - 10.4.3 Multiple correlation, adjusted R² and partial correlation
  - 10.4.4 Effect of including two or more independent variables and the partial F-test
  - 10.4.5 Equality of two or more series of regressions
  - 10.5 Indicator variables
  - 10.5.1 Separate regressions
  - 10.5.2 Regressions with equal slopes
  - 10.5.3 Regressions with the same intercepts
  - 10.6 Regression through the origin
  - 10.7 Estimation of trends
  - 10.8 Logistic regression and the odds ratio
  - 10.8.1 A single continuous predictor
  - 10.8.2 Two continuous predictors
  - 10.8.3 A single dichotomous predictor
  - 10.9 Weighted Least Squares (WLS) estimator
  - 10.10 Correlation
  - 10.10.1 Test of the hypothesis that two random variables are uncorrelated
  - 10.10.2 Test of the hypothesis that the correlation coefficient takes a specified value
  - 10.10.3 Confidence limits for the correlation coefficient
  - 10.11 Further topics in regression
  - 10.11.1 Linearity of the regression model and the lack of fit test
  - 10.11.2 The assumption that V(εi | Xi) = σ², the same at each Xi
  - 10.11.3 Missing observations
  - 10.11.4 Transformation of the regression model
  - 10.11.5 Errors of measurements of (Xi, Yi)
  - Exercises
  - Appendix A10
  - A10.1 Square of the correlation of Yi and Ŷi
  - A10.2 Multiple regression
  - A10.3 Expression for SSR in (10.38)
- 11 Analysis of variance and covariance: Designs of experiments
  - 11.1 Introduction
  - 11.2 One-way classification: Balanced design
  - 11.3 One-way random effects model: Balanced design
  - 11.4 Inference for the variance components and the mean
  - 11.5 One-way classification: Unbalanced design and fixed effects
  - 11.6 Unbalanced one-way classification: Random effects
  - 11.7 Intraclass correlation
  - 11.8 Analysis of covariance: The balanced design
  - 11.8.1 The model and least squares estimation
  - 11.8.2 Tests of hypotheses for the slope coefficient and equality of the means
  - 11.8.3 Confidence limits for the adjusted means and their differences
  - 11.9 Analysis of covariance: Unbalanced design
  - 11.9.1 Confidence limits for the adjusted means and the differences of the treatment effects
  - 11.10 Randomized blocks
  - 11.10.1 Randomized blocks: Random and mixed effects models
  - 11.11 Repeated measures design
  - 11.12 Latin squares
  - 11.12.1 The model and analysis
  - 11.13 Cross-over design
  - 11.14 Two-way cross-classification
  - 11.14.1 Additive model: Balanced design
  - 11.14.2 Two-way cross-classification with interaction: Balanced design
  - 11.14.3 Two-way cross-classification: Unbalanced additive model
  - 11.14.4 Unbalanced cross-classification with interaction
  - 11.14.5 Multiplicative interaction and Tukey's test for nonadditivity
  - 11.15 Missing observations in the designs of experiments
  - Exercises
  - Appendix A11
  - A11.1 Variance of σα² in (11.25) from Rao (1997, p. 20)
  - A11.2 The total sum of squares (Txx, Tyy) and sum of products (Txy) expressed as the within and between components
- 12 Meta-analysis
  - 12.1 Introduction
  - 12.2 Illustrations of large-scale studies
  - 12.3 Fixed effects model for combining the estimates
  - 12.4 Random effects model for combining the estimates
  - 12.5 Alternative estimators for σα²
  - 12.6 Tests of hypotheses and confidence limits for the variance components
  - Exercises
  - Appendix A12
- 13 Survival analysis
  - 13.1 Introduction
  - 13.2 Survival and hazard functions
  - 13.3 Kaplan-Meier product-limit estimator
  - 13.4 Standard error of Ŝ(tm) and confidence limits for S(tm)
  - 13.5 Confidence limits for S(tm) with right-censored observations
  - 13.6 Log-rank test for the equality of two survival distributions
  - 13.7 Cox's proportional hazards model
  - Exercises
  - Appendix A13: Expected value and variance of Ŝ(tm) and confidence limits for S(tm)
- 14 Nonparametric statistics
  - 14.1 Introduction
  - 14.2 Spearman's rank correlation coefficient
  - 14.3 The sign test
  - 14.4 Wilcoxon (1945) matched-pairs signed-ranks test
  - 14.5 Wilcoxon's test for the equality of the distributions of two non-normal populations with unpaired sample observations
  - 14.5.1 Unequal sample sizes
  - 14.6 McNemar's (1955) matched pair test for two proportions
  - 14.7 Cochran's (1950) Q-test for the difference of three or more matched proportions
  - 14.8 Kruskal-Wallis one-way ANOVA test by ranks
  - Exercises
- 15 Further topics
  - 15.1 Introduction
  - 15.2 Bonferroni inequality and the joint confidence region
  - 15.3 Least significant difference (LSD) for a pair of treatment effects
  - 15.4 Tukey's studentized range test
  - 15.5 Scheffé's simultaneous confidence intervals
  - 15.6 Bootstrap confidence intervals
  - 15.7 Transformations for the ANOVA
  - Exercises
  - Appendix A15
  - A15.1 Variance stabilizing transformation
- Solutions to exercises
- Appendix tables
- References
- Index