Fundamentals of Statistical Reasoning in Education
Paperback, English, 2014
By Theodore Coladarci (University of Maine) and Casey D. Cobb (University of Connecticut)
1 359 kr
Product information
- Publication date: 2014-02-11
- Dimensions: 203 x 252 x 25 mm
- Weight: 726 g
- Format: Paperback
- Language: English
- Number of pages: 448
- Edition: 4
- Publisher: John Wiley & Sons Inc
- ISBN: 9781118425213
Theodore Coladarci is Professor of Educational Psychology at the University of Maine. He has published extensively, including Elementary Descriptive Statistics, which he co-authored with A.P. Coladarci.

Casey D. Cobb is the Raymond Neag Professor of Educational Policy at the Neag School of Education at the University of Connecticut. His current research interests include policies on school choice, accountability, and school reform, where he examines the implications for equity and educational opportunity. He is also co-author of Leading Dynamic Schools (Corwin Press), and has published in such journals as Educational Evaluation and Policy Analysis, Educational Policy, Education and Urban Society, Educational Leadership, and Review of Research in Education.
Table of contents

- Chapter 1 Introduction
  - 1.1 Why Statistics?
  - 1.2 Descriptive Statistics
  - 1.3 Inferential Statistics
  - 1.4 The Role of Statistics in Educational Research
  - 1.5 Variables and Their Measurement
  - 1.6 Some Tips on Studying Statistics
- PART 1 DESCRIPTIVE STATISTICS
- Chapter 2 Frequency Distributions
  - 2.1 Why Organize Data?
  - 2.2 Frequency Distributions for Quantitative Variables
  - 2.3 Grouped Scores
  - 2.4 Some Guidelines for Forming Class Intervals
  - 2.5 Constructing a Grouped-Data Frequency Distribution
  - 2.6 The Relative Frequency Distribution
  - 2.7 Exact Limits
  - 2.8 The Cumulative Percentage Frequency Distribution
  - 2.9 Percentile Ranks
  - 2.10 Frequency Distributions for Qualitative Variables
  - 2.11 Summary
- Chapter 3 Graphic Representation
  - 3.1 Why Graph Data?
  - 3.2 Graphing Qualitative Data: The Bar Chart
  - 3.3 Graphing Quantitative Data: The Histogram
  - 3.4 Relative Frequency and Proportional Area
  - 3.5 Characteristics of Frequency Distributions
  - 3.6 The Box Plot
  - 3.7 Summary
- Chapter 4 Central Tendency
  - 4.1 The Concept of Central Tendency
  - 4.2 The Mode
  - 4.3 The Median
  - 4.4 The Arithmetic Mean
  - 4.5 Central Tendency and Distribution Symmetry
  - 4.6 Which Measure of Central Tendency to Use?
  - 4.7 Summary
- Chapter 5 Variability
  - 5.1 Central Tendency Is Not Enough: The Importance of Variability
  - 5.2 The Range
  - 5.3 Variability and Deviations From the Mean
  - 5.4 The Variance
  - 5.5 The Standard Deviation
  - 5.6 The Predominance of the Variance and Standard Deviation
  - 5.7 The Standard Deviation and the Normal Distribution
  - 5.8 Comparing Means of Two Distributions: The Relevance of Variability
  - 5.9 In the Denominator: n Versus n − 1
  - 5.10 Summary
- Chapter 6 Normal Distributions and Standard Scores
  - 6.1 A Little History: Sir Francis Galton and the Normal Curve
  - 6.2 Properties of the Normal Curve
  - 6.3 More on the Standard Deviation and the Normal Distribution
  - 6.4 z Scores
  - 6.5 The Normal Curve Table
  - 6.6 Finding Area When the Score Is Known
  - 6.7 Reversing the Process: Finding Scores When the Area Is Known
  - 6.8 Comparing Scores From Different Distributions
  - 6.9 Interpreting Effect Size
  - 6.10 Percentile Ranks and the Normal Distribution
  - 6.11 Other Standard Scores
  - 6.12 Standard Scores Do Not “Normalize” a Distribution
  - 6.13 The Normal Curve and Probability
  - 6.14 Summary
- Chapter 7 Correlation
  - 7.1 The Concept of Association
  - 7.2 Bivariate Distributions and Scatterplots
  - 7.3 The Covariance
  - 7.4 The Pearson r
  - 7.5 Computation of r: The Calculating Formula
  - 7.6 Correlation and Causation
  - 7.7 Factors Influencing Pearson r
  - 7.8 Judging the Strength of Association: r²
  - 7.9 Other Correlation Coefficients
  - 7.10 Summary
- Chapter 8 Regression and Prediction
  - 8.1 Correlation Versus Prediction
  - 8.2 Determining the Line of Best Fit
  - 8.3 The Regression Equation in Terms of Raw Scores
  - 8.4 Interpreting the Raw-Score Slope
  - 8.5 The Regression Equation in Terms of z Scores
  - 8.6 Some Insights Regarding Correlation and Prediction
  - 8.7 Regression and Sums of Squares
  - 8.8 Residuals and Unexplained Variation
  - 8.9 Measuring the Margin of Prediction Error: The Standard Error of Estimate
  - 8.10 Correlation and Causality (Revisited)
  - 8.11 Summary
- PART 2 INFERENTIAL STATISTICS
- Chapter 9 Probability and Probability Distributions
  - 9.1 Statistical Inference: Accounting for Chance in Sample Results
  - 9.2 Probability: The Study of Chance
  - 9.3 Definition of Probability
  - 9.4 Probability Distributions
  - 9.5 The OR/Addition Rule
  - 9.6 The AND/Multiplication Rule
  - 9.7 The Normal Curve as a Probability Distribution
  - 9.8 “So What?”—Probability Distributions as the Basis for Statistical Inference
  - 9.9 Summary
- Chapter 10 Sampling Distributions
  - 10.1 From Coins to Means
  - 10.2 Samples and Populations
  - 10.3 Statistics and Parameters
  - 10.4 Random Sampling Model
  - 10.5 Random Sampling in Practice
  - 10.6 Sampling Distributions of Means
  - 10.7 Characteristics of a Sampling Distribution of Means
  - 10.8 Using a Sampling Distribution of Means to Determine Probabilities
  - 10.9 The Importance of Sample Size (n)
  - 10.10 Generality of the Concept of a Sampling Distribution
  - 10.11 Summary
- Chapter 11 Testing Statistical Hypotheses About μ When σ Is Known: The One-Sample z Test
  - 11.1 Testing a Hypothesis About μ: Does “Homeschooling” Make a Difference?
  - 11.2 Dr. Meyer’s Problem in a Nutshell
  - 11.3 The Statistical Hypotheses: H0 and H1
  - 11.4 The Test Statistic z
  - 11.5 The Probability of the Test Statistic: The p Value
  - 11.6 The Decision Criterion: Level of Significance (α)
  - 11.7 The Level of Significance and Decision Error
  - 11.8 The Nature and Role of H0 and H1
  - 11.9 Rejection Versus Retention of H0
  - 11.10 Statistical Significance Versus Importance
  - 11.11 Directional and Nondirectional Alternative Hypotheses
  - 11.12 The Substantive Versus the Statistical
  - 11.13 Summary
- Chapter 12 Estimation
  - 12.1 Hypothesis Testing Versus Estimation
  - 12.2 Point Estimation Versus Interval Estimation
  - 12.3 Constructing an Interval Estimate of μ
  - 12.4 Interval Width and Level of Confidence
  - 12.5 Interval Width and Sample Size
  - 12.6 Interval Estimation and Hypothesis Testing
  - 12.7 Advantages of Interval Estimation
  - 12.8 Summary
- Chapter 13 Testing Statistical Hypotheses About μ When σ Is Not Known: The One-Sample t Test
  - 13.1 Reality: σ Often Is Unknown
  - 13.2 Estimating the Standard Error of the Mean
  - 13.3 The Test Statistic t
  - 13.4 Degrees of Freedom
  - 13.5 The Sampling Distribution of Student’s t
  - 13.6 An Application of Student’s t
  - 13.7 Assumption of Population Normality
  - 13.8 Levels of Significance Versus p Values
  - 13.9 Constructing a Confidence Interval for μ When σ Is Not Known
  - 13.10 Summary
- Chapter 14 Comparing the Means of Two Populations: Independent Samples
  - 14.1 From One Mu (μ) to Two
  - 14.2 Statistical Hypotheses
  - 14.3 The Sampling Distribution of Differences Between Means
  - 14.4 Estimating σx̄1−x̄2
  - 14.5 The t Test for Two Independent Samples
  - 14.6 Testing Hypotheses About Two Independent Means: An Example
  - 14.7 Interval Estimation of μ1 − μ2
  - 14.8 Appraising the Magnitude of a Difference: Measures of Effect Size for x̄1 − x̄2
  - 14.9 How Were Groups Formed? The Role of Randomization
  - 14.10 Statistical Inferences and Nonstatistical Generalizations
  - 14.11 Summary
- Chapter 15 Comparing the Means of Dependent Samples
  - 15.1 The Meaning of “Dependent”
  - 15.2 Standard Error of the Difference Between Dependent Means
  - 15.3 Degrees of Freedom
  - 15.4 The t Test for Two Dependent Samples
  - 15.5 Testing Hypotheses About Two Dependent Means: An Example
  - 15.6 Interval Estimation of μD
  - 15.7 Summary
- Chapter 16 Comparing the Means of Three or More Independent Samples: One-Way Analysis of Variance
  - 16.1 Comparing More Than Two Groups: Why Not Multiple t Tests?
  - 16.2 The Statistical Hypotheses in One-Way ANOVA
  - 16.3 The Logic of One-Way ANOVA: An Overview
  - 16.4 Alison’s Reply to Gregory
  - 16.5 Partitioning the Sums of Squares
  - 16.6 Within-Groups and Between-Groups Variance Estimates
  - 16.7 The F Test
  - 16.8 Tukey’s “HSD” Test
  - 16.9 Interval Estimation of μi − μj
  - 16.10 One-Way ANOVA: Summarizing the Steps
  - 16.11 Estimating the Strength of the Treatment Effect: Effect Size (ω²)
  - 16.12 ANOVA Assumptions (and Other Considerations)
  - 16.13 Summary
- Chapter 17 Inferences About the Pearson Correlation Coefficient
  - 17.1 From μ to ρ
  - 17.2 The Sampling Distribution of r When ρ = 0
  - 17.3 Testing the Statistical Hypothesis That ρ = 0
  - 17.4 An Example
  - 17.5 In Brief: Student’s t Distribution and the Regression Slope (b)
  - 17.6 Table E
  - 17.7 The Role of n in the Statistical Significance of r
  - 17.8 Statistical Significance Versus Importance (Again)
  - 17.9 Testing Hypotheses Other Than ρ = 0
  - 17.10 Interval Estimation of ρ
  - 17.11 Summary
- Chapter 18 Making Inferences From Frequency Data
  - 18.1 Frequency Data Versus Score Data
  - 18.2 A Problem Involving Frequencies: The One-Variable Case
  - 18.3 χ²: A Measure of Discrepancy Between Expected and Observed Frequencies
  - 18.4 The Sampling Distribution of χ²
  - 18.5 Completion of the Voter Survey Problem: The χ² Goodness-of-Fit Test
  - 18.6 The χ² Test of a Single Proportion
  - 18.7 Interval Estimate of a Single Proportion
  - 18.8 When There Are Two Variables: The χ² Test of Independence
  - 18.9 Finding Expected Frequencies in the Two-Variable Case
  - 18.10 Calculating the Two-Variable χ²
  - 18.11 The χ² Test of Independence: Summarizing the Steps
  - 18.12 The 2 × 2 Contingency Table
  - 18.13 Testing a Difference Between Two Proportions
  - 18.14 The Independence of Observations
  - 18.15 χ² and Quantitative Variables
  - 18.16 Other Considerations
  - 18.17 Summary
- Chapter 19 Statistical “Power” (and How to Increase It)
  - 19.1 The Power of a Statistical Test
  - 19.2 Power and Type II Error
  - 19.3 Effect Size (Revisited)
  - 19.4 Factors Affecting Power: The Effect Size
  - 19.5 Factors Affecting Power: Sample Size
  - 19.6 Additional Factors Affecting Power
  - 19.7 Significance Versus Importance
  - 19.8 Selecting an Appropriate Sample Size
  - 19.9 Summary
- Epilogue: A Note on (Almost) Assumption-Free Tests
- References
- Appendix A Review of Basic Mathematics
  - A.1 Introduction
  - A.2 Symbols and Their Meaning
  - A.3 Arithmetic Operations Involving Positive and Negative Numbers
  - A.4 Squares and Square Roots
  - A.5 Fractions
  - A.6 Operations Involving Parentheses
  - A.7 Approximate Numbers, Computational Accuracy, and Rounding
- Appendix B Answers to Selected End-of-Chapter Problems
- Appendix C Statistical Tables
- Glossary
- Index
- Useful Formulas
"This book, like the first three editions, is written largely with students of education in mind. Accordingly, the authors have drawn primarily on examples and issues found in school settings, such as those having to do with instruction, learning, motivation, and assessment. The emphasis on educational applications notwithstanding, the authors are confident that readers will find this book of general relevance to other disciplines in the behavioral sciences as well." (Zentralblatt MATH 2016)