Statistical Tools for the Comprehensive Practice of Industrial Hygiene and Environmental Health Sciences
Hardcover, English, 2017
1,799 kr
Special-order item. Ships within 11-20 business days.
Free shipping for members on orders of 249 kr or more.

This book:
- Reviews and reinforces concepts and techniques typical of a first statistics course, with additional techniques useful to the IH/EHS practitioner
- Includes both parametric and non-parametric techniques, described and illustrated in a worker health and environmental protection practice context
- Is illustrated through numerous examples presented in the context of IH/EHS field practice and research, using the statistical analysis tools available in Excel® wherever possible
- Emphasizes the application of statistical tools to IH/EHS-type data in order to answer IH/EHS-relevant questions
- Includes an instructor's manual that parallels the textbook, with PowerPoint slides to help prepare lectures and answers to the exercises in each chapter
Product information
- Publication date: 2017-03-03
- Dimensions: 183 x 257 x 25 mm
- Weight: 839 g
- Format: Hardcover
- Language: English
- Number of pages: 400
- Publisher: John Wiley & Sons Inc
- ISBN: 9781119143017
David L. Johnson has over 40 years of experience in environmental engineering and occupational safety and health practice, research, and teaching. Dr. Johnson was a practicing environmental engineer and industrial hygienist with the United States Army for 20 years, serving in a variety of positions in the United States, Europe, and the Middle East. He joined the faculty of the University of Oklahoma’s College of Public Health, Department of Occupational and Environmental Health in 1991.
Table of contents

Preface xv
Acknowledgments xvii
About the Author xix
About the Companion Website xxi

1 Some Basic Concepts 1
1.1 Introduction 1
1.2 Physical versus Statistical Sampling 2
1.3 Representative Measures 3
1.4 Strategies for Representative Sampling 3
1.5 Measurement Precision 4
1.6 Probability Concepts 6
1.6.1 The Relative Frequency Approach 7
1.6.2 The Classical Approach – Probability Based on Deductive Reasoning 7
1.6.3 Subjective Probability 7
1.6.4 Complement of a Probability 7
1.6.5 Mutually Exclusive Events 8
1.6.6 Independent Events 8
1.6.7 Events that Are Not Mutually Exclusive 9
1.6.8 Marginal and Conditional Probabilities 9
1.6.9 Testing for Independence 11
1.7 Permutations and Combinations 12
1.7.1 Permutations for Sampling without Replacement 12
1.7.2 Permutations for Sampling with Replacement 13
1.7.3 Combinations 13
1.8 Introduction to Frequency Distributions 14
1.8.1 The Binomial Distribution 14
1.8.2 The Normal Distribution 16
1.8.3 The Chi-Square Distribution 20
1.9 Confidence Intervals and Hypothesis Testing 22
1.10 Summary 23
1.11 Addendum: Glossary of Some Useful Excel Functions 23
1.12 Exercises 26
References 28

2 Descriptive Statistics and Methods of Presenting Data 29
2.1 Introduction 29
2.2 Quantitative Descriptors of Data and Data Distributions 29
2.3 Displaying Data with Frequency Tables 33
2.4 Displaying Data with Histograms and Frequency Polygons 34
2.5 Displaying Data Frequency Distributions with Cumulative Probability Plots 35
2.6 Displaying Data with NED and Q–Q Plots 38
2.7 Displaying Data with Box-and-Whisker Plots 41
2.8 Data Transformations to Achieve Normality 42
2.9 Identifying Outliers 43
2.10 What to Do with Censored Values? 45
2.11 Summary 45
2.12 Exercises 46
References 48

3 Analysis of Frequency Data 49
3.1 Introduction 49
3.2 Tests for Association and Goodness-of-Fit 50
3.2.1 r × c Contingency Tables and the Chi-Square Test 50
3.2.2 Fisher’s Exact Test 54
3.3 Binomial Proportions 55
3.4 Rare Events and the Poisson Distribution 57
3.4.1 Poisson Probabilities 57
3.4.2 Confidence Interval on a Poisson Count 60
3.4.3 Testing for Fit with the Poisson Distribution 61
3.4.4 Comparing Two Poisson Rates 62
3.4.5 Type I Error, Type II Error, and Power 64
3.4.6 Power and Sample Size in Comparing Two Poisson Rates 64
3.5 Summary 65
3.6 Exercises 66
References 69

4 Comparing Two Conditions 71
4.1 Introduction 71
4.2 Standard Error of the Mean 71
4.3 Confidence Interval on a Mean 72
4.4 The t-Distribution 73
4.5 Parametric One-Sample Test – Student’s t-Test 74
4.6 Two-Tailed versus One-Tailed Hypothesis Tests 76
4.7 Confidence Interval on a Variance 77
4.8 Other Applications of the Confidence Interval Concept in IH/EHS Work 79
4.8.1 OSHA Compliance Determinations 79
4.8.2 Laboratory Analyses – LOB, LOD, and LOQ 80
4.9 Precision, Power, and Sample Size for One Mean 81
4.9.1 Sample Size Required to Estimate a Mean with a Stated Precision 81
4.9.2 Sample Size Required to Detect a Specified Difference in Student’s t-Test 81
4.10 Iterative Solutions Using the Excel Goal Seek Utility 82
4.11 Parametric Two-Sample Tests 83
4.11.1 Confidence Interval for a Difference in Means: The Two-Sample t-Test 83
4.11.2 Two-Sample t-Test When Variances Are Equal 84
4.11.3 Verifying the Assumptions of the Two-Sample t-Test 85
4.11.3.1 Lilliefors Test for Normality 86
4.11.3.2 Shapiro–Wilk W-Test for Normality 87
4.11.3.3 Testing for Homogeneity of Variance 91
4.11.3.4 Transformations to Stabilize Variance 93
4.11.4 Two-Sample t-Test with Unequal Variances – Welch’s Test 93
4.11.5 Paired Sample t-Test 95
4.11.6 Precision, Power, and Sample Size for Comparing Two Means 96
4.12 Testing for Difference in Two Binomial Proportions 99
4.12.1 Testing a Binomial Proportion for Difference from a Known Value 100
4.12.2 Testing Two Binomial Proportions for Difference 100
4.13 Nonparametric Two-Sample Tests 102
4.13.1 Mann–Whitney U Test 102
4.13.2 Wilcoxon Matched Pairs Test 104
4.13.3 McNemar and Binomial Tests for Paired Nominal Data 105
4.14 Summary 107
4.15 Exercises 107
References 111

5 Characterizing the Upper Tail of the Exposure Distribution 113
5.1 Introduction 113
5.2 Upper Tolerance Limits 113
5.3 Exceedance Fractions 115
5.4 Distribution Free Tolerance Limits 117
5.5 Summary 119
5.6 Exercises 119
References 121

6 One-Way Analysis of Variance 123
6.1 Introduction 123
6.2 Parametric One-Way ANOVA 123
6.2.1 How the Parametric ANOVA Works – Sums of Squares and the F-Test 124
6.2.2 Post hoc Multiple Pairwise Comparisons in Parametric ANOVA 127
6.2.2.1 Tukey’s Test 127
6.2.2.2 Tukey–Kramer Test 128
6.2.2.3 Dunnett’s Test for Comparing Means to a Control Mean 130
6.2.2.4 Planned Contrasts Using the Scheffé S Test 132
6.2.3 Checking the ANOVA Model Assumptions – NED Plots and Variance Tests 134
6.2.3.1 Levene’s Test 134
6.2.3.2 Bartlett’s Test 135
6.3 Nonparametric Analysis of Variance 136
6.3.1 Kruskal–Wallis Nonparametric One-Way ANOVA 137
6.3.2 Post hoc Multiple Pairwise Comparisons in Nonparametric ANOVA 139
6.3.2.1 Nemenyi’s Test 139
6.3.2.2 Bonferroni–Dunn Test 140
6.4 ANOVA Disconnects 142
6.5 Summary 144
6.6 Exercises 145
References 149

7 Two-Way Analysis of Variance 151
7.1 Introduction 151
7.2 Parametric Two-Way ANOVA 151
7.2.1 Two-Way ANOVA without Interaction 154
7.2.2 Checking for Homogeneity of Variance 154
7.2.3 Multiple Pairwise Comparisons When There Is No Interaction Term 154
7.2.4 Two-Way ANOVA with Interaction 156
7.2.5 Multiple Pairwise Comparisons with Interaction 158
7.2.6 Two-Way ANOVA without Replication 160
7.2.7 Repeated-Measures ANOVA 160
7.2.8 Two-Way ANOVA with Unequal Sample Sizes 162
7.3 Nonparametric Two-Way ANOVA 162
7.3.1 Rank Tests 162
7.3.1.1 The Rank Test 162
7.3.1.2 The Rank Transform Test 166
7.3.1.3 Other Options – Aligned Rank Tests 166
7.3.2 Repeated-Measures Nonparametric ANOVA – Friedman’s Test 166
7.3.2.1 Friedman’s Test without Replication 167
7.3.2.2 Multiple Comparisons for Friedman’s Test without Replication 169
7.3.2.3 Friedman’s Test with Replication 170
7.3.2.4 Multiple Comparisons for Friedman’s Test with Replication 172
7.4 More Powerful Non-ANOVA Approaches: Linear Modeling 172
7.5 Summary 172
7.6 Exercises 172
References 178

8 Correlation Analysis 181
8.1 Introduction 181
8.2 Simple Parametric Correlation Analysis 181
8.2.1 Testing the Correlation Coefficient for Significance 184
8.2.1.1 t-Test for Significance 185
8.2.1.2 F-Test for Significance 186
8.2.2 Confidence Limits on the Correlation Coefficient 186
8.2.3 Power in Simple Correlation Analysis 187
8.2.4 Comparing Two Correlation Coefficients for Difference 188
8.2.5 Comparing More Than Two Correlation Coefficients for Difference 189
8.2.6 Multiple Pairwise Comparisons of Correlation Coefficients 190
8.3 Simple Nonparametric Correlation Analysis 190
8.3.1 Spearman Rank Correlation Coefficient 190
8.3.2 Testing Spearman’s Rank Correlation Coefficient for Statistical Significance 191
8.3.3 Correction to Spearman’s Rank Correlation Coefficient When There Are Tied Ranks 193
8.4 Multiple Correlation Analysis 195
8.4.1 Parametric Multiple Correlation 195
8.4.2 Nonparametric Multiple Correlation: Kendall’s Coefficient of Concordance 195
8.5 Determining Causation 198
8.6 Summary 198
8.7 Exercises 198
References 204

9 Regression Analysis 205
9.1 Introduction 205
9.2 Linear Regression 205
9.2.1 Simple Linear Regression 207
9.2.2 Nonconstant Variance – Transformations and Weighted Least Squares Regression 209
9.2.3 Multiple Linear Regression 213
9.2.3.1 Multiple Regression in Excel 215
9.2.3.2 Multiple Regression Using the Excel Solver Utility 218
9.2.3.3 Multiple Regression Using Advanced Software Packages 221
9.2.4 Using Regression for Factorial ANOVA with Unequal Sample Sizes 222
9.2.5 Multiple Correlation Analysis Using Multiple Regression 227
9.2.5.1 Assumptions of Parametric Multiple Correlation 233
9.2.5.2 Options When Collinearity Is a Problem 233
9.2.6 Polynomial Regression 234
9.2.7 Interpreting Linear Regression Results 234
9.2.8 Linear Regression versus ANOVA 235
9.3 Logistic Regression 235
9.3.1 Odds and Odds Ratios 236
9.3.2 The Logit Transformation 238
9.3.3 The Likelihood Function 240
9.3.4 Logistic Regression in Excel 240
9.3.5 Likelihood Ratio Test for Significance of MLE Coefficients 241
9.3.6 Odds Ratio Confidence Limits in Multivariate Models 243
9.4 Poisson Regression 243
9.4.1 Poisson Regression Model 243
9.4.2 Poisson Regression in Excel 244
9.5 Regression with Excel Add-ons 245
9.6 Summary 246
9.7 Exercises 246
References 252

10 Analysis of Covariance 253
10.1 Introduction 253
10.2 The Simple ANCOVA Model and Its Assumptions 253
10.2.1 Required Regressions 255
10.2.2 Checking the ANCOVA Assumptions 258
10.2.2.1 Linearity, Independence, and Normality 258
10.2.2.2 Similar Variances 258
10.2.2.3 Equal Regression Slopes 258
10.2.3 Testing and Estimating the Treatment Effects 259
10.3 The Two-Factor Covariance Model 261
10.4 Summary 261
10.5 Exercises 261
Reference 263

11 Experimental Design 265
11.1 Introduction 265
11.2 Randomization 266
11.3 Simple Randomized Experiments 266
11.4 Experimental Designs Blocking on Categorical Factors 267
11.5 Randomized Full Factorial Experimental Design 270
11.6 Randomized Full Factorial Design with Blocking 271
11.7 Split Plot Experimental Designs 272
11.8 Balanced Experimental Designs – Latin Square 273
11.9 Two-Level Factorial Experimental Designs with Quantitative Factors 274
11.9.1 Two-Level Factorial Designs for Exploratory Studies 274
11.9.2 The Standard Order 275
11.9.3 Calculating Main Effects 276
11.9.4 Calculating Interactions 278
11.9.5 Estimating Standard Errors 278
11.9.6 Estimating Effects with REGRESSION in Excel 279
11.9.7 Interpretation 280
11.9.8 Cube, Surface, and NED Plots as an Aid to Interpretation 280
11.9.9 Fractional Factorial Two-Level Experiments 282
11.10 Summary 282
11.11 Exercises 283
References 284

12 Uncertainty and Sensitivity Analysis 285
12.1 Introduction 285
12.2 Simulation Modeling 285
12.2.1 Propagation of Errors 286
12.2.2 Simple Bounding 287
12.2.2.1 Sums and Differences 287
12.2.2.2 Products and Ratios 287
12.2.2.3 Powers 289
12.2.3 Addition in Quadrature 289
12.2.3.1 Sums and Differences 289
12.2.3.2 Products and Ratios 290
12.2.3.3 Powers 292
12.2.4 LOD and LOQ Revisited – Dust Sample Gravimetric Analysis 292
12.3 Uncertainty Analysis 295
12.4 Sensitivity Analysis 296
12.4.1 One-at-a-Time (OAT) Analysis 296
12.4.2 Variance-Based Analysis 297
12.5 Further Reading on Uncertainty and Sensitivity Analysis 297
12.6 Monte Carlo Simulation 297
12.7 Monte Carlo Simulation in Excel 298
12.7.1 Generating Random Numbers in Excel 298
12.7.2 The Populated Spreadsheet Approach 299
12.7.3 Monte Carlo Simulation Using VBA Macros 299
12.8 Summary 303
12.9 Exercises 303
References 307

13 Bayes’ Theorem and Bayesian Decision Analysis 309
13.1 Introduction 309
13.2 Bayes’ Theorem 310
13.3 Sensitivity, Specificity, and Positive and Negative Predictive Value in Screening Tests 310
13.4 Bayesian Decision Analysis in Exposure Control Banding 312
13.4.1 Introduction to BDA 312
13.4.2 The Prior Distribution and the Parameter Space 314
13.4.3 The Posterior Distribution and Likelihood Function 314
13.4.4 Relative Influences of the Prior and the Data 315
13.4.5 Frequentist versus Bayesian Perspectives 316
13.5 Exercises 316
References 318

A z-Tables of the Standard Normal Distribution 321
B Critical Values of the Chi-Square Distribution 327
C Critical Values for the t-Distribution 329
D Critical Values for Lilliefors Test 331
Reference 332
E Shapiro–Wilk W Test α Coefficients and Critical Values 333
Reference 336
F Critical Values of the F Distribution for α = 0.05 337
G Critical U Values for the Mann–Whitney U Test 341
Reference 342
H Critical Wilcoxon Matched Pairs Test t Values 343
Reference 344
I K Values for Upper Tolerance Limits 345
Reference 346
J Exceedance Fraction 95% Lower Confidence Limit versus Z 347
Reference 347
K q Values for Tukey’s, Tukey–Kramer, and Nemenyi’s MSD Tests 349
L q′ Values for Dunnett’s Test 351
Reference 353
M Q Values for the Bonferroni–Dunn MSD Test 355
N Critical Spearman Rank Correlation Test Values 357
O Critical Values of Kendall’s W 359
Reference 361

Index 363