Statistics Alive!
Paperback, English, 2020
By Wendy J. Steinberg and Matthew Price (University of Vermont, USA)
2 479 kr
Students are shown the underlying logic behind what they are learning, and well-crafted practice and self-check features help ensure that new knowledge sticks. Coverage of probability theory and mathematical proofs is complemented by expanded conceptual coverage. In the Third Edition, new coauthor Matthew Price adds simplified practice problems, increased coverage of conceptual statistics, integrated discussions of effect size with hypothesis testing, and new coverage of ethical practices for conducting research.
Give your students the SAGE Edge!
SAGE Edge offers a robust online environment featuring an impressive array of free tools and resources for review, study, and further exploration, keeping both instructors and students on the cutting edge of teaching and learning.
Product information
- Publication date: 2020-11-25
- Dimensions: 215 x 279 x 27 mm
- Weight: 1,320 g
- Format: Paperback
- Language: English
- Number of pages: 624
- Edition: 3
- Publisher: SAGE Publications
- ISBN: 9781544328263
Wendy J. Steinberg entered academia midcareer, having spent the first part of her career in high-stakes test development. She holds a PhD in educational psychology with dual concentrations, one in measurement and the other in development and cognition. Teaching is her passion. She views education as a sacred task that teachers and students alike should treat with reverence. She wants this textbook in the hands of every statistics student so that tears will be banished forever from the classroom. A portion of the sale of each textbook goes to charity.

Matthew Price holds a PhD in clinical psychology and has spent his career pursuing two goals. The first is helping victims of trauma and the second is teaching statistics. From his time in undergraduate statistics, he saw the challenge that this topic posed to many talented students. He has since spent many late nights making heads or tails out of how to teach the probability of heads and tails in an approachable and enjoyable manner. He is honored to assist in writing this textbook to continue to help all of those students who have yet to discover the awesomeness of stats.
- List of Figures
- List of Tables
- Preface
- Supplemental Material for Use With Statistics Alive!
- Acknowledgments
- About the Authors
- PART I. PRELIMINARY INFORMATION: “FIRST THINGS FIRST”
  - Module 1. Math Review, Vocabulary, and Symbols (Getting Started; Common Terms and Symbols in Statistics; Fundamental Rules and Procedures for Statistics; More Rules and Procedures)
  - Module 2. Measurement Scales (What Is Measurement?; Scales of Measurement; Continuous Versus Discrete Variables; Real Limits)
- PART II. TABLES AND GRAPHS: “ON DISPLAY”
  - Module 3. Frequency and Percentile Tables (Why Use Tables?; Frequency Tables; Relative Frequency or Percentage Tables; Grouped Frequency Tables; Percentile and Percentile Rank Tables; SPSS Connection)
  - Module 4. Graphs and Plots (Why Use Graphs?; Graphing Continuous Data; Symmetry, Skew, and Kurtosis; Graphing Discrete Data; SPSS Connection)
- PART III. CENTRAL TENDENCY: “BULL’S-EYE”
  - Module 5. Mode, Median, and Mean (What Is Central Tendency?; Mode; Median; Mean; Skew and Central Tendency; SPSS Connection)
- PART IV. DISPERSION: “FROM HERE TO ETERNITY”
  - Module 6. Range, Variance, and Standard Deviation (What Is Dispersion?; Range; Variance; Standard Deviation; Mean Absolute Deviation; Controversy: N Versus n - 1; SPSS Connection)
- PART V. THE NORMAL CURVE AND STANDARD SCORES: “WHAT’S THE SCORE?”
  - Module 7. Percent Area and the Normal Curve (What Is a Normal Curve?; History of the Normal Curve; Uses of the Normal Curve; Looking Ahead)
  - Module 8. z Scores (What Is a Standard Score?; Benefits of Standard Scores; Calculating z Scores; Comparing Scores Across Different Tests; SPSS Connection)
  - Module 9. Score Transformations and Their Effects (Why Transform Scores?; Effects on Central Tendency; Effects on Dispersion; A Graphic Look at Transformations; Summary of Transformation Effects; Some Common Transformed Scores; Looking Ahead)
- PART VI. PROBABILITY: “ODDS ARE”
  - Module 10. Probability Definitions and Theorems (Why Study Probability?; Probability as a Proportion; Equally Likely Model; Mutually Exclusive Outcomes; Addition Theorem; Independent Outcomes; Multiplication Theorem; A Brief Review; Probability and Inference)
  - Module 11. The Binomial Distribution (What Are Dichotomous Events?; Finding Probabilities by Listing and Counting; Finding Probabilities by the Binomial Formula; Finding Probabilities by the Binomial Table; Probability and Experimentation; Looking Ahead; Nonnormal Data)
- PART VII. INFERENTIAL THEORY: “OF TRUTH AND RELATIVITY”
  - Module 12. Sampling, Variables, and Hypotheses (From Description to Inference; Sampling; Variables; Hypotheses)
  - Module 13. Errors and Significance (Random Sampling Revisited; Sampling Error; Significant Difference; The Decision Table; Type I Error; Type II Error)
  - Module 14. The z Score as a Hypothesis Test (Inferential Logic and the z Score; Constructing a Hypothesis Test for a z Score; Looking Ahead)
- PART VIII. THE ONE-SAMPLE TEST: “ARE THEY FROM OUR PART OF TOWN?”
  - Module 15. Standard Error of the Mean (Central Limit Theorem; Sampling Distribution of the Mean; Calculating the Standard Error of the Mean; Sample Size and the Standard Error of the Mean; Looking Ahead)
  - Module 16. Normal Deviate Z Test (Prototype Logic and the Z Test; Calculating a Normal Deviate Z Test; Examples of Normal Deviate Z Tests; Decision Making With a Normal Deviate Z Test; Looking Ahead)
  - Module 17. One-Sample t Test (Z Test Versus t Test; Comparison of Z-Test and t-Test Formulas; Degrees of Freedom; Biased and Unbiased Estimates; When Do We Reject the Null Hypothesis?; One-Tailed Versus Two-Tailed Tests; The t Distribution Versus the Normal Distribution; The t Table Versus the Normal Curve Table; Calculating a One-Sample t Test; Interpreting a One-Sample t Test; Looking Ahead; SPSS Connection)
  - Module 18. Interpreting and Reporting One-Sample t: Error, Confidence, and Parameter Estimates (What It Means to Reject the Null; Refining Error; Decision Making With a One-Sample t Test; Dichotomous Decisions Versus Reports of Actual p; Parameter Estimation: Point and Interval; SPSS Connection)
- PART IX. THE TWO-SAMPLE TEST: “OURS IS BETTER THAN YOURS”
  - Module 19. Standard Error of the Difference Between the Means (One-Sample Versus Two-Sample Studies; Sampling Distribution of the Difference Between the Means; Calculating the Standard Error of the Difference Between the Means; Importance of the Size of the Standard Error of the Difference Between the Means; Looking Ahead)
  - Module 20. t Test With Independent Samples and Equal Sample Sizes (A Two-Sample Study; Inferential Logic and the Two-Sample t Test; Calculating a Two-Sample t Test; Interpreting a Two-Sample t Test; Looking Ahead; SPSS Connection)
  - Module 21. t Test With Unequal Sample Sizes (What Makes Sample Sizes Unequal?; Comparison of Special-Case and Generalized Formulas; Calculating a t Test With Unequal Sample Sizes; Interpreting a t Test With Unequal Sample Sizes; SPSS Connection)
  - Module 22. t Test With Related Samples (What Makes Samples Related?; Comparison of Special-Case and Related-Samples Formulas; Advantage and Disadvantage of Related Samples; Direct-Difference Formula; Calculating a t Test With Related Samples; Interpreting a t Test With Related Samples; SPSS Connection)
  - Module 23. Interpreting and Reporting Two-Sample t: Error, Confidence, and Parameter Estimates (What Is Confidence?; Refining Error and Confidence; Decision Making With a Two-Sample t Test; Dichotomous Decisions Versus Reports of Actual p; Parameter Estimation: Point and Interval; SPSS Connection)
- PART X. THE MULTISAMPLE TEST: “OURS IS BETTER THAN YOURS OR THEIRS”
  - Module 24. ANOVA Logic: Sums of Squares, Partitioning, and Mean Squares (When Do We Use ANOVA?; ANOVA Assumptions; Partitioning of Deviation Scores; From Deviation Scores to Variances; From Variances to Mean Squares; From Mean Squares to F; Looking Ahead)
  - Module 25. One-Way ANOVA: Independent Samples and Equal Sample Sizes (What Is a One-Way ANOVA?; Inferential Logic and ANOVA; Deviation Score Method; Raw Score Method; Remaining Steps for Both Methods: Mean Squares and F; Interpreting a One-Way ANOVA; The ANOVA Summary Table; SPSS Connection)
- PART XI. POST HOC TESTS: “SO WHO’S RESPONSIBLE?”
  - Module 26. Tukey HSD Test (Why Do We Need a Post Hoc Test?; Calculating the Tukey HSD; Interpreting the Tukey HSD; SPSS Connection)
  - Module 27. Scheffé Test (Why Do We Need a Post Hoc Test?; Calculating the Scheffé; Interpreting the Scheffé; SPSS Connection)
- PART XII. MORE THAN ONE INDEPENDENT VARIABLE: “DOUBLE DUTCH JUMP ROPE”
  - Module 28. Main Effects and Interaction Effects (What Is a Factorial ANOVA?; Factorial ANOVA Designs; Number and Type of Hypotheses; Main Effects; Interaction Effects; Looking Ahead)
  - Module 29. Factorial ANOVA (Review of Factorial ANOVA Designs; Data Setup and Preliminary Expectations; Sums of Squares Formulas; Calculating Factorial ANOVA Sums of Squares: Raw Score Method; Factorial Mean Squares and Fs; Interpreting a Factorial F Test; The Factorial ANOVA Summary Table; SPSS Connection)
- PART XIII. NONPARAMETRIC STATISTICS: “WITHOUT FORM OR VOID”
  - Module 30. One-Variable Chi-Square: Goodness of Fit (What Is a Nonparametric Test?; Chi-Square as a Goodness-of-Fit Test; Formula for Chi-Square; Inferential Logic and Chi-Square; Calculating a Chi-Square Goodness of Fit; Interpreting a Chi-Square Goodness of Fit; Looking Ahead; SPSS Connection)
  - Module 31. Two-Variable Chi-Square: Test of Independence (Chi-Square as a Test of Independence; Prerequisites for a Chi-Square Test of Independence; Formula for a Chi-Square; Finding Expected Frequencies; Calculating a Chi-Square Test of Independence; Interpreting a Chi-Square Test of Independence; SPSS Connection)
- PART XIV. EFFECT SIZE AND POWER: “HOW MUCH IS ENOUGH?”
  - Module 32. Measures of Effect Size (What Is Effect Size?; For Two-Sample t Tests; For ANOVA F Tests; For Chi-Square Tests)
  - Module 33. Power and the Factors Affecting It (What Is Power?; Factors Affecting Power; Putting It Together: Alpha, Power, Effect Size, and Sample Size; Looking Ahead)
- PART XV. CORRELATION: “WHITHER THOU GOEST, I WILL GO”
  - Module 34. Relationship Strength and Direction (Experimental Versus Correlational Studies; Plotting Correlation Data; Relationship Strength; Relationship Direction; Linear and Nonlinear Relationships; Outliers and Their Effects; Looking Ahead; SPSS Connection)
  - Module 35. Pearson r (What Is a Correlation Coefficient?; Calculation of a Pearson r; Formulas for Pearson r; z-Score Scatterplots and r; Calculating Pearson r: Deviation Score Method; Interpreting a Pearson r Coefficient; Looking Ahead; SPSS Connection)
  - Module 36. Correlation Pitfalls (Effect of Sample Size on Statistical Significance; Statistical Significance Versus Practical Importance; Effect of Restriction in Range; Effect of Sample Heterogeneity or Homogeneity; Effect of Unreliability in the Measurement Instrument; Correlation Versus Causation)
- PART XVI. LINEAR PREDICTION: “YOU’RE SO PREDICTABLE”
  - Module 37. Linear Prediction (Correlation Permits Prediction; Logic of a Prediction Line; Equation for the Best-Fitting Line; Using a Prediction Equation to Predict Scores on Y; Another Calculation Example; SPSS Connection)
  - Module 38. Standard Error of Prediction (What Is a Confidence Interval?; Correlation and Prediction Error; Distribution of Prediction Error; Calculating the Standard Error of Prediction; Using the Standard Error of Prediction to Calculate Confidence Intervals; Factors Influencing the Standard Error of Prediction; Another Calculation Example)
  - Module 39. Introduction to Multiple Regression (What Is Regression?; Prediction Error, Revisited; Why Multiple Regression?; The Multiple Regression Equation; Multiple Regression and Predicted Variance; Hypothesis Testing in Multiple Regression; An Example; The General Linear Model; SPSS Connection)
- PART XVII. REVIEW: “SAY IT AGAIN, SAM”
  - Module 40. Selecting the Appropriate Analysis (Review of Descriptive Methods; Review of Inferential Methods)
- Appendix A: Normal Curve Table
- Appendix B: Binomial Table
- Appendix C: t Table
- Appendix D: F Table (ANOVA)
- Appendix E: Studentized Range Statistic (for Tukey HSD)
- Appendix F: Chi-Square Table
- Appendix G: Correlation Table
- Appendix H: Odd Solutions to Textbook Exercises
- References
- Index