Understanding Advanced Statistical Methods

By Peter Westfall and Kevin S. S. Henning

Providing a much-needed bridge between elementary statistics courses and advanced research methods courses, Understanding Advanced Statistical Methods helps students grasp the fundamental assumptions and machinery behind sophisticated statistical topics, such as logistic regression, maximum likelihood, bootstrapping, nonparametrics, and Bayesian methods. The book teaches students how to properly model, think critically, and design their own studies to avoid common errors. It leads them to think differently not only about math and statistics but also about general research and the scientific method.

With a focus on statistical models as producers of data, the book enables students to more easily understand the machinery of advanced statistics. It also downplays the "population" interpretation of statistical models and presents Bayesian methods before frequentist ones. Requiring no prior calculus experience, the text employs a "just-in-time" approach that introduces mathematical topics, including calculus, where needed. Formulas throughout the text are used to explain why calculus and probability are essential in statistical modeling.

The authors also intuitively explain the theory and logic behind real data analysis, incorporating a range of application examples from the social, economic, biological, medical, physical, and engineering sciences. Enabling your students to answer the "why" behind statistical methods, this text teaches them how to successfully draw conclusions when the premises are flawed. It empowers them to use advanced statistical methods with confidence and develop their own statistical recipes. Ancillary materials are available on the book's website.

From "Nielsen BookData"

[Table of Contents]

  • Introduction: Probability, Statistics, and Science
    Reality, Nature, Science, and Models; Statistical Processes: Nature, Design and Measurement, and Data; Models; Deterministic Models; Variability; Parameters; Purely Probabilistic Statistical Models; Statistical Models with Both Deterministic and Probabilistic Components; Statistical Inference; Good and Bad Models; Uses of Probability Models
  • Random Variables and Their Probability Distributions
    Introduction; Types of Random Variables: Nominal, Ordinal, and Continuous; Discrete Probability Distribution Functions; Continuous Probability Distribution Functions; Some Calculus-Derivatives and Least Squares; More Calculus-Integrals and Cumulative Distribution Functions
  • Probability Calculation and Simulation
    Introduction; Analytic Calculations, Discrete and Continuous Cases; Simulation-Based Approximation; Generating Random Numbers
  • Identifying Distributions
    Introduction; Identifying Distributions from Theory Alone; Using Data: Estimating Distributions via the Histogram; Quantiles: Theoretical and Data-Based Estimates; Using Data: Comparing Distributions via the Quantile-Quantile Plot; Effect of Randomness on Histograms and q-q Plots
  • Conditional Distributions and Independence
    Introduction; Conditional Discrete Distributions; Estimating Conditional Discrete Distributions; Conditional Continuous Distributions; Estimating Conditional Continuous Distributions; Independence
  • Marginal Distributions, Joint Distributions, Independence, and Bayes' Theorem
    Introduction; Joint and Marginal Distributions; Estimating and Visualizing Joint Distributions; Conditional Distributions from Joint Distributions; Joint Distributions When Variables Are Independent; Bayes' Theorem
  • Sampling from Populations and Processes
    Introduction; Sampling from Populations; Critique of the Population Interpretation of Probability Models; The Process Model versus the Population Model; Independent and Identically Distributed Random Variables and Other Models; Checking the iid Assumption
  • Expected Value and the Law of Large Numbers
    Introduction; Discrete Case; Continuous Case; Law of Large Numbers; Law of Large Numbers for the Bernoulli Distribution; Keeping the Terminology Straight: Mean, Average, Sample Mean, Sample Average, and Expected Value; Bootstrap Distribution and the Plug-In Principle
  • Functions of Random Variables: Their Distributions and Expected Values
    Introduction; Distributions of Functions: The Discrete Case; Distributions of Functions: The Continuous Case; Expected Values of Functions and the Law of the Unconscious Statistician; Linearity and Additivity Properties; Nonlinear Functions and Jensen's Inequality; Variance; Standard Deviation, Mean Absolute Deviation, and Chebyshev's Inequality; Linearity Property of Variance; Skewness and Kurtosis
  • Distributions of Totals
    Introduction; Additivity Property of Variance; Covariance and Correlation; Central Limit Theorem
  • Estimation: Unbiasedness, Consistency, and Efficiency
    Introduction; Biased and Unbiased Estimators; Bias of the Plug-In Estimator of Variance; Removing the Bias of the Plug-In Estimator of Variance; The Joke Is on Us: The Standard Deviation Estimator Is Biased after All; Consistency of Estimators; Efficiency of Estimators
  • Likelihood Function and Maximum Likelihood Estimates
    Introduction; Likelihood Function; Maximum Likelihood Estimates; Wald Standard Error
  • Bayesian Statistics
    Introduction: Play a Game with Hans!; Prior Information and Posterior Knowledge; Case of the Unknown Survey; Bayesian Statistics: The Overview; Bayesian Analysis of the Bernoulli Parameter; Bayesian Analysis Using Simulation; What Good Is Bayes?
  • Frequentist Statistical Methods
    Introduction; Large-Sample Approximate Frequentist Confidence Interval for the Process Mean; What Does Approximate Really Mean for an Interval Range?; Comparing the Bayesian and Frequentist Paradigms
  • Are Your Results Explainable by Chance Alone?
    Introduction; What Does by Chance Alone Mean?; The p-Value; The Extremely Ugly "pv ≤ 0.05" Rule of Thumb
  • Chi-Squared, Student's t, and F-Distributions, with Applications
    Introduction; Linearity and Additivity Properties of the Normal Distribution; Effect of Using an Estimate of σ; Chi-Squared Distribution; Frequentist Confidence Interval for σ; Student's t-Distribution; Comparing Two Independent Samples Using a Confidence Interval; Comparing Two Independent Homoscedastic Normal Samples via Hypothesis Testing; F-Distribution and ANOVA Test; F-Distribution and Comparing Variances of Two Independent Groups
  • Likelihood Ratio Tests
    Introduction; Likelihood Ratio Method for Constructing Test Statistics; Evaluating the Statistical Significance of Likelihood Ratio Test Statistics; Likelihood Ratio Goodness-of-Fit Tests; Cross-Classification Frequency Tables and Tests of Independence; Comparing Non-Nested Models via the AIC Statistic
  • Sample Size and Power
    Introduction; Choosing a Sample Size for a Prespecified Accuracy Margin; Power; Noncentral Distributions; Choosing a Sample Size for Prespecified Power; Post Hoc Power: A Useless Statistic
  • Robustness and Nonparametric Methods
    Introduction; Nonparametric Tests Based on the Rank Transformation; Randomization Tests; Level and Power Robustness; Bootstrap Percentile-t Confidence Interval
  • Final Words
  • Index

Vocabulary, Formula Summaries, and Exercises appear at the end of each chapter.

From "Nielsen BookData"

Book Information

Title: Understanding Advanced Statistical Methods
Authors: Henning, Kevin S. S.; Westfall, Peter
Series: Chapman & Hall/CRC Texts in Statistical Science
Publisher: Taylor & Francis Inc
Publication date: 2013.04.19
Pages: 569
ISBN: 9781466512122
Language: English
Country of publication: United States