Medical Statistics from Scratch
An Introduction for Health Professionals
Paperback, English, 2019
679 kr
Available to order. Ships within 5–8 working days.
Free shipping for members on orders of at least 249 kr.

Correctly understanding and using medical statistics is a key skill for all medical students and health professionals. In an informal and friendly style, Medical Statistics from Scratch provides a practical foundation for everyone whose first interest is probably not medical statistics. Keeping the level of mathematics to a minimum, it clearly illustrates statistical concepts and practice with numerous real-world examples and cases drawn from the current medical literature. Medical Statistics from Scratch is an ideal learning partner for all medical students and health professionals who need an accessible introduction, or a friendly refresher, to the fundamentals of medical statistics.
Product information
- Publication date: 2019-10-11
- Dimensions: 170 x 244 x 28 mm
- Weight: 930 g
- Format: Paperback
- Language: English
- Number of pages: 496
- Edition: 4
- Publisher: John Wiley & Sons Inc
- ISBN: 9781119523888
More from the same author
Understanding Clinical Papers
David Bowers (The Nuffield Institute of Health, University of Leeds and Leeds General Infirmary, UK), Allan House (School of Medicine, University of Leeds, UK), David Owens (School of Medicine, University of Leeds, UK), Bridgette Bewick (School of Medicine, University of Leeds, UK)
759 kr
About the author
DAVID BOWERS, Leeds Institute of Health Sciences, School of Medicine, University of Leeds, Leeds, UK.
Table of contents
- Preface to the 4th Edition; Preface to the 3rd Edition; Preface to the 2nd Edition; Preface to the 1st Edition; Introduction
- I Some Fundamental Stuff
  - 1 First things first – the nature of data: Variables and data; Where are we going …?; The good, the bad, and the ugly – types of variables; Categorical data; Nominal categorical data; Ordinal categorical data; Metric data; Discrete metric data; Continuous metric data; How can I tell what type of variable I am dealing with?; The baseline table
- II Descriptive Statistics
  - 2 Describing data with tables: Descriptive statistics – what can we do with raw data?; Frequency tables – nominal data; The frequency distribution; Relative frequency; Frequency tables – ordinal data; Frequency tables – metric data; Frequency tables with discrete metric data; Cumulative frequency; Frequency tables with continuous metric data – grouping the raw data; Open-ended groups; Cross-tabulation – contingency tables; Ranking data
  - 3 Every picture tells a story – describing data with charts: Picture it!; Charting nominal and ordinal data; The pie chart; The simple bar chart; The clustered bar chart; The stacked bar chart; Charting discrete metric data; Charting continuous metric data; The histogram; The box (and whisker) plot; Charting cumulative data; The cumulative frequency curve with discrete metric data; The cumulative frequency curve with continuous metric data; Charting time-based data – the time series chart; The scatterplot; The bubbleplot
  - 4 Describing data from its shape: The shape of things to come; Skewness and kurtosis as measures of shape; Kurtosis; Symmetric or mound-shaped distributions; Normalness – the Normal distribution; Bimodal distributions; Determining skew from a box plot
  - 5 Measures of location – Numbers R us: Preamble; Numbers, percentages, and proportions; Handling percentages – for those of us who might need a reminder; Summary measures of location; The mode; The median; The mean; Percentiles; Calculating a percentile value; What is the most appropriate measure of location?
  - 6 Measures of spread – Numbers R us – (again): Preamble; The range; The interquartile range (IQR); Estimating the median and interquartile range from the cumulative frequency curve; The boxplot (also known as the box and whisker plot); Standard deviation; Standard deviation and the Normal distribution; Testing for Normality; Using SPSS; Using Minitab; Transforming data
  - 7 Incidence, prevalence, and standardisation: Preamble; The incidence rate and the incidence rate ratio (IRR); The incidence rate ratio; Prevalence; A couple of difficulties with measuring incidence and prevalence; Some other useful rates; Crude mortality rate; Case fatality rate; Crude maternal mortality rate; Crude birth rate; Attack rate; Age-specific mortality rate; Standardisation – the age-standardised mortality rate; The direct method; The standard population and the comparative mortality ratio (CMR); The indirect method; The standardised mortality rate
- III The Confounding Problem
  - 8 Confounding – like the poor, (nearly) always with us: Preamble; What is confounding?; Confounding by indication; Residual confounding; Detecting confounding; Dealing with confounding – if confounding is such a problem, what can we do about it?; Using restriction; Using matching; Frequency matching; One-to-one matching; Using stratification; Using adjustment; Using randomisation
- IV Design and Data
  - 9 Research design – Part I: Observational study designs: Preamble; Hey ho! Hey ho! it's off to work we go; Types of study; Observational studies; Case reports; Case series studies; Cross-sectional studies; Descriptive cross-sectional studies; Confounding in descriptive cross-sectional studies; Analytic cross-sectional studies; Confounding in analytic cross-sectional studies; From here to eternity – cohort studies; Confounding in the cohort study design; Back to the future – case–control studies; Confounding in the case–control study design; Another example of a case–control study; Comparing cohort and case–control designs; Ecological studies; The ecological fallacy
  - 10 Research design – Part II: getting stuck in – experimental studies: Clinical trials; Randomisation and the randomised controlled trial (RCT); Block randomisation; Stratification; Blinding; The crossover RCT; Selection of participants for an RCT; Intention to treat analysis (ITT)
  - 11 Getting the participants for your study: ways of sampling: From populations to samples – statistical inference; Collecting the data – types of sample; The simple random sample and its offspring; The systematic random sample; The stratified random sample; The cluster sample; Consecutive and convenience samples; How many participants should we have? Sample size; Inclusion and exclusion criteria; Getting the data
- V Chance Would Be a Fine Thing
  - 12 The idea of probability: Preamble; Calculating probability – proportional frequency; Two useful rules for simple probability; Rule 1 – the multiplication rule for independent events; Rule 2 – the addition rule for mutually exclusive events; Conditional and Bayesian statistics; Probability distributions; Discrete versus continuous probability distributions; The binomial probability distribution; The Poisson probability distribution; The Normal probability distribution
  - 13 Risk and odds: Absolute risk and the absolute risk reduction (ARR); The risk ratio; The reduction in the risk ratio (or relative risk reduction (RRR)); A general formula for the risk ratio; Reference value; Number needed to treat (NNT); What happens if the initial risk is small?; Confounding with the risk ratio; Odds; Why you can't calculate risk in a case–control study; The link between probability and odds; The odds ratio; Confounding with the odds ratio; Approximating the risk ratio from the odds ratio
- VI The Informed Guess – An Introduction to Confidence Intervals
  - 14 Estimating the value of a single population parameter – the idea of confidence intervals: Confidence interval estimation for a population mean; The standard error of the mean; How we use the standard error of the mean to calculate a confidence interval for a population mean; Confidence interval for a population proportion; Estimating a confidence interval for the median of a single population
  - 15 Using confidence intervals to compare two population parameters: What's the difference?; Comparing two independent population means; An example using birthweights; Assessing the evidence using the confidence interval; Comparing two paired population means; Within-subject and between-subject variations; Comparing two independent population proportions; Comparing two independent population medians – the Mann–Whitney rank sums method; Comparing two matched population medians – the Wilcoxon signed-ranks method
  - 16 Confidence intervals for the ratio of two population parameters: Getting a confidence interval for the ratio of two independent population means; Confidence interval for a population risk ratio; Confidence intervals for a population odds ratio; Confidence intervals for hazard ratios
- VII Putting it to the Test
  - 17 Testing hypotheses about the difference between two population parameters: Answering the question; The hypothesis; The null hypothesis; The hypothesis testing process; The p-value and the decision rule; A brief summary of a few of the commonest tests; Using the p-value to compare the means of two independent populations; Interpreting computer hypothesis test results for the difference in two independent population means – the two-sample t test; Output from Minitab – two-sample t test of difference in mean birthweights of babies born to white mothers and to non-white mothers; Output from SPSS – two-sample t test of difference in mean birthweights of babies born to white mothers and to non-white mothers; Comparing the means of two paired populations – the matched-pairs t test; Using p-values to compare the medians of two independent populations: the Mann–Whitney rank-sums test; How the Mann–Whitney test works; Correction for multiple comparisons; The Bonferroni correction for multiple testing; Interpreting computer output for the Mann–Whitney test; With Minitab; With SPSS; Two matched medians – the Wilcoxon signed-ranks test; Confidence intervals versus hypothesis testing; What could possibly go wrong?; Types of error; The power of a test; Maximising power – calculating sample size; Rule of thumb 1 – comparing the means of two independent populations (metric data); Rule of thumb 2 – comparing the proportions of two independent populations (binary data)
  - 18 The Chi-squared (χ²) test – what, why, and how?: Of all the tests in all the world – you had to walk into my hypothesis testing procedure; Using chi-squared to test for relatedness or for the equality of proportions; Calculating the chi-squared statistic; Using the chi-squared statistic; Yates's correction (continuity correction); Fisher's exact test; The chi-squared test with Minitab; The chi-squared test with SPSS; The chi-squared test for trend; SPSS output for chi-squared trend test
  - 19 Testing hypotheses about the ratio of two population parameters: Preamble; The chi-squared test with the risk ratio; The chi-squared test with odds ratios; The chi-squared test with hazard ratios
- VIII Becoming Acquainted
  - 20 Measuring the association between two variables: Preamble – plotting data; Association; The scatterplot; The correlation coefficient; Pearson's correlation coefficient; Is the correlation coefficient statistically significant in the population?; Spearman's rank correlation coefficient
  - 21 Measuring agreement: To agree or not agree: that is the question; Cohen's kappa (κ); Some shortcomings of kappa; Weighted kappa; Measuring the agreement between two metric continuous variables – the Bland–Altman plot
- IX Getting into a Relationship
  - 22 Straight line models: linear regression: Health warning!; Relationship and association; A causal relationship – explaining variation; Refresher – finding the equation of a straight line from a graph; The linear regression model; First, is the relationship linear?; Estimating the regression parameters – the method of ordinary least squares (OLS); Basic assumptions of the ordinary least squares procedure; Back to the example – is the relationship statistically significant?; Using SPSS to regress birthweight on mother's weight; Using Minitab; Interpreting the regression coefficients; Goodness-of-fit, R²; Multiple linear regression; Adjusted goodness-of-fit: R̄²; Including nominal covariates in the regression model: design variables and coding; Building your model – which variables to include?; Automated variable selection methods; Manual variable selection methods; Adjustment and confounding; Diagnostics – checking the basic assumptions of the multiple linear regression model; Analysis of variance
  - 23 Curvy models: logistic regression: A second health warning!; The binary outcome variable; Finding an appropriate model when the outcome variable is binary; The logistic regression model; Estimating the parameter values; Interpreting the regression coefficients; Have we got a significant result? Statistical inference in the logistic regression model; The odds ratio; The multiple logistic regression model; Building the model; Goodness-of-fit
  - 24 Counting models: Poisson regression: Preamble; Poisson regression; The Poisson regression equation; Estimating β0 and β1 with the estimators b0 and b1; Interpreting the estimated coefficients of a Poisson regression, b0 and b1; Model building – variable selection; Goodness-of-fit; Zero-inflated Poisson regression; Negative binomial regression; Zero-inflated negative binomial regression
- X Four More Chapters
  - 25 Measuring survival: Preamble; Censored data; A simple example of survival in a single group; Calculating survival probabilities and the proportion surviving: the Kaplan–Meier table; The Kaplan–Meier curve; Determining median survival time; Comparing survival with two groups; The log-rank test; An example of the log-rank test in practice; The hazard ratio; The proportional hazards (Cox's) regression model – introduction; The proportional hazards (Cox's) regression model – the detail; Checking the assumptions of the proportional hazards model; An example of proportional hazards regression
  - 26 Systematic review and meta-analysis: Introduction; Systematic review; The forest plot; Publication and other biases; The funnel plot; Significance tests for bias – Begg's and Egger's tests; Combining the studies: meta-analysis; The problem of heterogeneity – the Q and I² tests
  - 27 Diagnostic testing: Preamble; The measures – sensitivity and specificity; The positive prediction and negative prediction values (PPV and NPV); The sensitivity–specificity trade-off; Using the ROC curve to find the optimal sensitivity versus specificity trade-off
  - 28 Missing data: The missing data problem; Types of missing data; Missing completely at random (MCAR); Missing at random (MAR); Missing not at random (MNAR); Consequences of missing data; Dealing with missing data; Do nothing – the wing and prayer approach; List-wise deletion; Pair-wise deletion; Imputation methods – simple imputation; Replacement by the mean; Last observation carried forward; Regression-based imputation; Multiple imputation; Full Information Maximum Likelihood (FIML) and other methods
- Appendix: Table of random numbers
- References
- Solutions to Exercises
- Index