Statistical Reasoning in the Behavioral Sciences
Paperback, English, 2021
By Bruce M. King (University of New Orleans), Patrick J. Rosopa (Clemson University), and Edward W. Minium (San Jose State University)
1 259 kr
Special-order item. Ships within 7–10 business days.
Free shipping for members on orders of at least 249 kr.

Cited by more than 300 scholars, Statistical Reasoning in the Behavioral Sciences continues to provide streamlined resources and easy-to-understand information on statistics in the behavioral sciences and related fields, including psychology, education, human resources management, and sociology. Students and professionals in the behavioral sciences will develop an understanding of statistical logic and procedures, the properties of statistical devices, and the importance of the assumptions underlying statistical tools. This revised and updated edition continues to follow the recommendations of the APA Task Force on Statistical Inference and greatly expands the information on testing hypotheses about single means. The Seventh Edition moves from a focus on the use of computers in statistics to a more precise look at statistical software. The "Point of Controversy" feature embedded throughout the text provides current discussions of exciting and hotly debated topics in the field. Readers will appreciate how the comprehensive graphs, tables, cartoons, and photographs lend vibrancy to all of the material covered in the text.
Product information
- Publication date: 2021-03-02
- Dimensions: 203 x 252 x 20 mm
- Weight: 862 g
- Format: Paperback
- Language: English
- Number of pages: 496
- Edition: 7
- Publisher: John Wiley & Sons Inc
- ISBN: 9781119379737
Table of contents
PREFACE
ABOUT THE BOOK AND AUTHORS

1 INTRODUCTION
  1.1 Descriptive Statistics
  1.2 Inferential Statistics
  1.3 Our Concern: Applied Statistics
  1.4 Variables and Constants
  1.5 Scales of Measurement
  1.6 Scales of Measurement and Problems of Statistical Treatment
  1.7 Do Statistics Lie?
  Point of Controversy: Are Statistical Procedures Necessary?
  1.8 Some Tips on Studying Statistics
  1.9 Statistics and Computers
  1.10 Summary

2 FREQUENCY DISTRIBUTIONS, PERCENTILES, AND PERCENTILE RANKS
  2.1 Organizing Qualitative Data
  2.2 Grouped Scores
  2.3 How to Construct a Grouped Frequency Distribution
  2.4 Apparent versus Real Limits
  2.5 The Relative Frequency Distribution
  2.6 The Cumulative Frequency Distribution
  2.7 Percentiles and Percentile Ranks
  2.8 Computing Percentiles from Grouped Data
  2.9 Computation of Percentile Rank
  2.10 Summary

3 GRAPHIC REPRESENTATION OF FREQUENCY DISTRIBUTIONS
  3.1 Basic Procedures
  3.2 The Histogram
  3.3 The Frequency Polygon
  3.4 Choosing between a Histogram and a Polygon
  3.5 The Bar Diagram and the Pie Chart
  3.6 The Cumulative Percentage Curve
  3.7 Factors Affecting the Shape of Graphs
  3.8 Shape of Frequency Distributions
  3.9 Summary

4 CENTRAL TENDENCY
  4.1 The Mode
  4.2 The Median
  4.3 The Mean
  4.4 Properties of the Mode
  4.5 Properties of the Mean
  Point of Controversy: Is It Permissible to Calculate the Mean for Tests in the Behavioral Sciences?
  4.6 Properties of the Median
  4.7 Measures of Central Tendency in Symmetrical and Asymmetrical Distributions
  4.8 The Effects of Score Transformations
  4.9 Summary

5 VARIABILITY AND STANDARD (z) SCORES
  5.1 The Range and Semi-Interquartile Range
  5.2 Deviation Scores
  5.3 Deviational Measures: The Variance
  5.4 Deviational Measures: The Standard Deviation
  5.5 Calculation of the Variance and Standard Deviation: Raw-Score Method
  5.6 Calculation of the Standard Deviation with SPSS
  Point of Controversy: Calculating the Sample Variance: Should We Divide by n or (n − 1)?
  5.7 Properties of the Range and Semi-Interquartile Range
  5.8 Properties of the Standard Deviation
  5.9 How Big Is a Standard Deviation?
  5.10 Score Transformations and Measures of Variability
  5.11 Standard Scores (z Scores)
  5.12 A Comparison of z Scores and Percentile Ranks
  5.13 Summary

6 STANDARD SCORES AND THE NORMAL CURVE
  6.1 Historical Aspects of the Normal Curve
  6.2 The Nature of the Normal Curve
  6.3 Standard Scores and the Normal Curve
  6.4 The Standard Normal Curve: Finding Areas When the Score Is Known
  6.5 The Standard Normal Curve: Finding Scores When the Area Is Known
  6.6 The Normal Curve as a Model for Real Variables
  6.7 The Normal Curve as a Model for Sampling Distributions
  Point of Controversy: How Normal Is the Normal Curve?
  6.8 Summary

7 CORRELATION
  7.1 Some History
  7.2 Graphing Bivariate Distributions: The Scatter Diagram
  7.3 Correlation: A Matter of Direction
  7.4 Correlation: A Matter of Degree
  7.5 Understanding the Meaning of Degree of Correlation
  7.6 Formulas for Pearson's Coefficient of Correlation
  7.7 Calculating r from Raw Scores
  7.8 Calculating r with SPSS
  7.9 Spearman's Rank-Order Correlation Coefficient
  7.10 Correlation Does Not Prove Causation
  7.11 The Effects of Score Transformations
  7.12 Cautions Concerning Correlation Coefficients
  7.13 Summary

8 PREDICTION
  8.1 The Problem of Prediction
  8.2 The Criterion of Best Fit
  Point of Controversy: Least-Squares Regression versus the Resistant Line
  8.3 The Regression Equation: Standard-Score Form
  8.4 The Regression Equation: Raw-Score Form
  8.5 Error of Prediction: The Standard Error of Estimate
  8.6 An Alternative (and Preferred) Formula for SYX
  8.7 Calculating the "Raw-Score" Regression Equation and Standard Error of Estimate with SPSS
  8.8 Error in Estimating Y from X
  8.9 Cautions Concerning Estimation of Predictive Error
  8.10 Prediction Does Not Prove Causation
  8.11 Summary

9 INTERPRETIVE ASPECTS OF CORRELATION AND REGRESSION
  9.1 Factors Influencing r: Degree of Variability in Each Variable
  9.2 Interpretation of r: The Regression Equation I
  9.3 Interpretation of r: The Regression Equation II
  9.4 Interpretation of r: Proportion of Variation in Y Not Associated with Variation in X
  9.5 Interpretation of r: Proportion of Variance in Y Associated with Variation in X
  9.6 Interpretation of r: Proportion of Correct Placements
  9.7 Summary

10 PROBABILITY
  10.1 Defining Probability
  10.2 A Mathematical Model of Probability
  10.3 Two Theorems in Probability
  10.4 An Example of a Probability Distribution: The Binomial
  10.5 Applying the Binomial
  10.6 Probability and Odds
  10.7 Are Amazing Coincidences Really That Amazing?
  10.8 Summary

11 RANDOM SAMPLING AND SAMPLING DISTRIBUTIONS
  11.1 Random Sampling
  11.2 Using a Table of Random Numbers
  11.3 The Random Sampling Distribution of the Mean: An Introduction
  11.4 Characteristics of the Random Sampling Distribution of the Mean
  11.5 Using the Sampling Distribution of X̄ to Determine the Probability for Different Ranges of Values of X̄
  11.6 Random Sampling without Replacement
  11.7 Summary

12 INTRODUCTION TO STATISTICAL INFERENCE: TESTING HYPOTHESES ABOUT A SINGLE MEAN (z)
  12.1 Testing a Hypothesis about a Single Mean
  12.2 The Null and Alternative Hypotheses
  12.3 When Do We Retain and When Do We Reject the Null Hypothesis?
  12.4 Review of the Procedure for Hypothesis Testing
  12.5 Dr. Brown's Problem: Conclusion
  12.6 The Statistical Decision
  12.7 Choice of HA: One-Tailed and Two-Tailed Tests
  12.8 Review of Assumptions in Testing Hypotheses about a Single Mean
  Point of Controversy: The Single-Subject Research Design
  12.9 Summary

13 TESTING HYPOTHESES ABOUT A SINGLE MEAN WHEN σ IS UNKNOWN (t)
  13.1 Estimating the Standard Error of the Mean When σ Is Unknown
  13.2 The t Distribution
  13.3 Characteristics of Student's Distribution of t
  13.4 Degrees of Freedom and Student's Distribution of t
  13.5 An Example: Has the Violent Content of Television Programs Increased?
  13.6 Calculating t from Raw Scores
  13.7 Calculating t with SPSS
  13.8 Levels of Significance versus p-Values
  13.9 Summary

14 INTERPRETING THE RESULTS OF HYPOTHESIS TESTING: EFFECT SIZE, TYPE I AND TYPE II ERRORS, AND POWER
  14.1 A Statistically Significant Difference versus a Practically Important Difference
  Point of Controversy: The Failure to Publish "Nonsignificant" Results
  14.2 Effect Size
  14.3 Errors in Hypothesis Testing
  14.4 The Power of a Test
  14.5 Factors Affecting Power: Difference between the True Population Mean and the Hypothesized Mean (Size of Effect)
  14.6 Factors Affecting Power: Sample Size
  14.7 Factors Affecting Power: Variability of the Measure
  14.8 Factors Affecting Power: Level of Significance (α)
  14.9 Factors Affecting Power: One-Tailed versus Two-Tailed Tests
  14.10 Calculating the Power of a Test
  Point of Controversy: Meta-Analysis
  14.11 Estimating Power and Sample Size for Tests of Hypotheses about Means
  14.12 Problems in Selecting a Random Sample and in Drawing Conclusions
  14.13 Summary

15 TESTING HYPOTHESES ABOUT THE DIFFERENCE BETWEEN TWO INDEPENDENT GROUPS
  15.1 The Null and Alternative Hypotheses
  15.2 The Random Sampling Distribution of the Difference between Two Sample Means
  15.3 Properties of the Sampling Distribution of the Difference between Means
  15.4 Determining a Formula for t
  15.5 Testing the Hypothesis of No Difference between Two Independent Means: The Dyslexic Children Experiment
  15.6 Use of a One-Tailed Test
  15.7 Calculation of t with SPSS
  15.8 Sample Size in Inference about Two Means
  15.9 Effect Size
  15.10 Estimating Power and Sample Size for Tests of Hypotheses about the Difference between Two Independent Means
  15.11 Assumptions Associated with Inference about the Difference between Two Independent Means
  15.12 The Random-Sampling Model versus the Random-Assignment Model
  15.13 Random Sampling and Random Assignment as Experimental Controls
  15.14 Summary

16 TESTING FOR A DIFFERENCE BETWEEN TWO DEPENDENT (CORRELATED) GROUPS
  16.1 Determining a Formula for t
  16.2 Degrees of Freedom for Tests of No Difference between Dependent Means
  16.3 An Alternative Approach to the Problem of Two Dependent Means
  16.4 Testing a Hypothesis about Two Dependent Means: Does Text Messaging Impair Driving?
  16.5 Calculating t with SPSS
  16.6 Effect Size
  16.7 Power
  16.8 Assumptions When Testing a Hypothesis about the Difference between Two Dependent Means
  16.9 Problems with Using the Dependent-Samples Design
  16.10 Summary

17 INFERENCE ABOUT CORRELATION COEFFICIENTS
  17.1 The Random Sampling Distribution of r
  17.2 Testing the Hypothesis That ρ = 0
  17.3 Fisher's z′ Transformation
  17.4 Strength of Relationship
  17.5 A Note about Assumptions
  17.6 Inference When Using Spearman's rS
  17.7 Summary

18 AN ALTERNATIVE TO HYPOTHESIS TESTING: CONFIDENCE INTERVALS
  18.1 Examples of Estimation
  18.2 Confidence Intervals for μX
  18.3 The Relation between Confidence Intervals and Hypothesis Testing
  18.4 The Advantages of Confidence Intervals
  18.5 Random Sampling and Generalizing Results
  18.6 Evaluating a Confidence Interval
  Point of Controversy: Objectivity and Subjectivity in Inferential Statistics: Bayesian Statistics
  18.7 Confidence Intervals for μX − μY
  18.8 Sample Size Required for Confidence Intervals of μX and μX − μY
  18.9 Confidence Intervals for ρ
  18.10 Where Are We in Statistical Reform?
  18.11 Summary

19 TESTING FOR DIFFERENCES AMONG THREE OR MORE GROUPS: ONE-WAY ANALYSIS OF VARIANCE (AND SOME ALTERNATIVES)
  19.1 The Null Hypothesis
  19.2 The Basis of One-Way Analysis of Variance: Variation within and between Groups
  19.3 Partition of the Sums of Squares
  19.4 Degrees of Freedom
  19.5 Variance Estimates and the F Ratio
  19.6 The Summary Table
  19.7 Example: Does Playing Violent Video Games Desensitize People to Real-Life Aggression?
  19.8 Comparison of t and F
  19.9 Raw-Score Formulas for Analysis of Variance
  19.10 Calculation of ANOVA for Independent Measures with SPSS
  19.11 Assumptions Associated with ANOVA
  19.12 Effect Size
  19.13 ANOVA and Power
  19.14 Post Hoc Comparisons
  19.15 Some Concerns about Post Hoc Comparisons
  19.16 An Alternative to the F Test: Planned Comparisons
  19.17 How to Construct Planned Comparisons
  19.18 Analysis of Variance for Repeated Measures
  19.19 Calculation of ANOVA for Repeated Measures with SPSS
  19.20 Summary

20 FACTORIAL ANALYSIS OF VARIANCE: THE TWO-FACTOR DESIGN
  20.1 Main Effects
  20.2 Interaction
  20.3 The Importance of Interaction
  20.4 Partition of the Sums of Squares for Two-Way ANOVA
  20.5 Degrees of Freedom
  20.6 Variance Estimates and F Tests
  20.7 Studying the Outcome of Two-Factor Analysis of Variance
  20.8 Effect Size
  20.9 Calculation of Two-Factor ANOVA with SPSS
  20.10 Planned Comparisons
  20.11 Assumptions of the Two-Factor Design and the Problem of Unequal Numbers of Scores
  20.12 Mixed Two-Factor Within-Subjects Design
  20.13 Calculation of the Mixed Two-Factor Within-Subjects Design with SPSS
  20.14 Summary

21 CHI-SQUARE AND INFERENCE ABOUT FREQUENCIES
  21.1 The Chi-Square Test for Goodness of Fit
  21.2 Chi-Square (χ²) as a Measure of the Difference between Observed and Expected Frequencies
  21.3 The Logic of the Chi-Square Test
  21.4 Interpretation of the Outcome of a Chi-Square Test
  21.5 Different Hypothesized Proportions in the Test for Goodness of Fit
  21.6 Effect Size for Goodness-of-Fit Problems
  21.7 Assumptions in the Use of the Theoretical Distribution of Chi-Square
  21.8 Chi-Square as a Test for Independence between Two Variables
  21.9 Finding Expected Frequencies in a Contingency Table
  21.10 Calculation of χ² and Determination of Significance in a Contingency Table
  21.11 Measures of Effect Size (Strength of Association) for Tests of Independence
  Point of Controversy: Yates' Correction for Continuity
  21.12 Power and the Chi-Square Test of Independence
  21.13 Summary

22 SOME (ALMOST) ASSUMPTION-FREE TESTS
  22.1 The Null Hypothesis in Assumption-Freer Tests
  22.2 Randomization Tests
  22.3 Rank-Order Tests
  22.4 The Bootstrap Method of Statistical Inference
  22.5 An Assumption-Freer Alternative to the t Test of a Difference between Two Independent Groups: The Mann–Whitney U Test
  Point of Controversy: A Comparison of the t Test and the Mann–Whitney U Test with Real-World Distributions
  22.6 An Assumption-Freer Alternative to the t Test of a Difference between Two Dependent Groups: The Sign Test
  22.7 Another Assumption-Freer Alternative to the t Test of a Difference between Two Dependent Groups: The Wilcoxon Signed-Ranks Test
  22.8 An Assumption-Freer Alternative to the One-Way ANOVA for Independent Groups: The Kruskal–Wallis Test
  22.9 An Assumption-Freer Alternative to ANOVA for Repeated Measures: Friedman's Rank Test for Correlated Samples
  22.10 Summary

EPILOGUE

APPENDIX A REVIEW OF BASIC MATHEMATICS
APPENDIX B LIST OF SYMBOLS
APPENDIX C ANSWERS TO PROBLEMS
APPENDIX D STATISTICAL TABLES
  Table A: Areas under the Normal Curve Corresponding to Given Values of z
  Table B: The Binomial Distribution
  Table C: Random Numbers
  Table D: Student's t Distribution
  Table E: The F Distribution
  Table F: The Studentized Range Statistic
  Table G: Values of the Correlation Coefficient Required for Different Levels of Significance When H0: ρ = 0
  Table H: Values of Fisher's z′ for Values of r
  Table I: The χ² Distribution
  Table J: Critical One-Tail Values of ΣRX for the Mann–Whitney U Test
  Table K: Critical Values for the Smaller of R+ or R− for the Wilcoxon Signed-Ranks Test

REFERENCES
INDEX