Statistics - Type I & II Errors
Type I and Type II errors signify the two erroneous outcomes of a statistical hypothesis test. A Type I error is the incorrect rejection of a true null hypothesis, whereas a Type II error is the incorrect failure to reject a false null hypothesis.
Null Hypothesis
A null hypothesis is a statement asserting that a treatment or factor has no effect; it is the claim that a statistical test attempts to nullify with evidence. Consider the following examples:
Example 1
Hypothesis - Water added to a toothpaste protects teeth against cavities.
Null Hypothesis - Water added to a toothpaste has no effect against cavities.
Example 2
Hypothesis - Fluoride added to a toothpaste protects teeth against cavities.
Null Hypothesis - Fluoride added to a toothpaste has no effect against cavities.
In each case, the null hypothesis is tested against experimental data to determine whether water or fluoride actually has any effect on cavities.
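As a rough illustration of how such a test is carried out in practice, the sketch below simulates a fluoride trial and tests the null hypothesis with a two-sample t-test. The cavity counts, group sizes, and Poisson rates are invented purely for the example.

```python
# A minimal sketch (illustrative data, not from a real trial): testing the
# fluoride null hypothesis with a two-sample t-test from SciPy.
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=0)

# Hypothetical cavity counts per subject over the trial period.
control  = rng.poisson(lam=3.0, size=50)   # toothpaste without fluoride
fluoride = rng.poisson(lam=2.2, size=50)   # toothpaste with fluoride

# H0: fluoride has no effect (equal means); H1: the means differ.
t_stat, p_value = stats.ttest_ind(fluoride, control)

alpha = 0.05
if p_value < alpha:
    print(f"p = {p_value:.4f} < {alpha}: reject the null hypothesis")
else:
    print(f"p = {p_value:.4f} >= {alpha}: fail to reject the null hypothesis")
```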
Type I Error
Consider Example 1. Here the null hypothesis is true, i.e. water added to a toothpaste has no effect against cavities. But if, based on experimental data, we detect an effect of the added water on cavities, then we are rejecting a true null hypothesis. This is a Type I error. It is also called a false positive (a situation which indicates that a given condition is present when it actually is not). The Type I error rate, also known as the significance level, is the probability of rejecting the null hypothesis given that it is true.
A Type I error is denoted by $ \alpha $ and is also called the alpha level. It is generally acceptable to set the Type I error significance level at 0.05, or 5%, which means that a 5% probability of incorrectly rejecting the null hypothesis is tolerated.
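The meaning of the 5% level can be checked by simulation. The following sketch (assuming normally distributed data and SciPy's one-sample t-test) repeatedly tests a null hypothesis that is true by construction; the fraction of rejections should come out close to $ \alpha = 0.05 $.

```python
# A minimal sketch: when H0 is true, the observed fraction of (incorrect)
# rejections should be close to the chosen alpha level of 0.05.
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=1)
alpha, n_trials = 0.05, 10_000

false_positives = 0
for _ in range(n_trials):
    # H0 is true by construction: the population mean really is 0.
    sample = rng.normal(loc=0.0, scale=1.0, size=30)
    _, p_value = stats.ttest_1samp(sample, popmean=0.0)
    if p_value < alpha:          # rejecting a true H0 is a Type I error
        false_positives += 1

print(f"Observed Type I error rate: {false_positives / n_trials:.3f}")
# Expected output: a value near 0.05
```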
Type II Error
Consider Example 2. Here the null hypothesis is false, i.e. fluoride added to a toothpaste does have an effect against cavities. But if, based on experimental data, we fail to detect an effect of the added fluoride on cavities, then we are accepting a false null hypothesis. This is a Type II error. It is also called a false negative (a situation which indicates that a given condition is not present when it actually is).
A Type II error is denoted by $ \beta $ and is also called the beta level. The probability of correctly rejecting a false null hypothesis, $ 1 - \beta $, is known as the power of the test.
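A simulation analogous to the one above estimates $ \beta $. In the sketch below the null hypothesis (mean $= 0$) is false by construction, with the true mean set to 0.5 purely for illustration, so every failure to reject is a Type II error.

```python
# A minimal sketch (same assumptions as the Type I simulation): the fraction
# of tests that fail to reject a false H0 estimates beta; 1 - beta is power.
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=2)
alpha, n_trials = 0.05, 10_000

false_negatives = 0
for _ in range(n_trials):
    # H0 (mean = 0) is false by construction: the true mean is 0.5.
    sample = rng.normal(loc=0.5, scale=1.0, size=30)
    _, p_value = stats.ttest_1samp(sample, popmean=0.0)
    if p_value >= alpha:         # failing to reject a false H0 is a Type II error
        false_negatives += 1

beta = false_negatives / n_trials
print(f"Estimated beta: {beta:.3f}, power: {1 - beta:.3f}")
```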
The goal of a statistical test is to determine whether a null hypothesis can be rejected. A statistical test can either reject or fail to reject a null hypothesis. The following table illustrates the relationship between the truth or falseness of the null hypothesis and the outcome of the test in terms of Type I and Type II errors.
| Judgment | Null hypothesis ($ H_0 $) is | Outcome | Inference |
|---|---|---|---|
| Reject | True | Type I Error (False Positive) | Incorrect |
| Reject | False | True Positive | Correct |
| Unable to Reject | True | True Negative | Correct |
| Unable to Reject | False | Type II Error (False Negative) | Incorrect |
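The logic of this table is easy to encode. The small helper below (purely illustrative) maps a test's judgment and the actual truth of $ H_0 $ to the corresponding outcome label.

```python
# Classify a hypothesis-test outcome from the judgment and the truth of H0,
# mirroring the four cells of the table above.
def classify_outcome(rejected_h0: bool, h0_is_true: bool) -> str:
    if rejected_h0 and h0_is_true:
        return "Type I error (false positive)"
    if rejected_h0 and not h0_is_true:
        return "true positive"
    if not rejected_h0 and h0_is_true:
        return "true negative"
    return "Type II error (false negative)"

print(classify_outcome(rejected_h0=True,  h0_is_true=True))   # Type I error
print(classify_outcome(rejected_h0=False, h0_is_true=False))  # Type II error
```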