Statistics - Probability Multiplicative Theorem
For Independent Events
The theorem states that the probability of the simultaneous occurrence of two independent events is given by the product of their individual probabilities:
${P(A \cap B) = P(A) \times P(B)}$
The theorem can be extended to three or more independent events as:
${P(A \cap B \cap C) = P(A) \times P(B) \times P(C)}$
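As a quick illustrative check (not part of the original tutorial), the Python sketch below simulates two independent events with arbitrarily chosen probabilities 0.3 and 0.5 and compares the estimated probability of their simultaneous occurrence with the product of the individual probabilities:

```python
import random

random.seed(42)
trials = 100_000
p_a, p_b = 0.3, 0.5  # arbitrary probabilities for two independent events

both = 0
for _ in range(trials):
    a = random.random() < p_a  # does event A occur on this trial?
    b = random.random() < p_b  # event B is drawn independently of A
    if a and b:
        both += 1

print("Estimated P(A and B):", both / trials)  # close to 0.15
print("Product P(A) * P(B): ", p_a * p_b)      # exactly 0.15
```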
Example
Problem Statement:
A college has to appoint a lecturer who must hold a B.Com., an MBA, and a Ph.D., the probabilities of which are ${\frac{1}{20}}$, ${\frac{1}{25}}$, and ${\frac{1}{40}}$ respectively. Find the probability that the college appoints such a person.
Solution:
Probability of a person being a B.Com., P(A) = ${\frac{1}{20}}$
Probability of a person being an MBA, P(B) = ${\frac{1}{25}}$
Probability of a person being a Ph.D., P(C) = ${\frac{1}{40}}$
Using the multiplicative theorem for independent events:
${P(A \cap B \cap C) = P(A) \times P(B) \times P(C) = \frac{1}{20} \times \frac{1}{25} \times \frac{1}{40} = \frac{1}{20000} = 0.00005}$
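As a quick arithmetic check (an illustration, not part of the original tutorial), Python's standard Fraction type reproduces the exact result:

```python
from fractions import Fraction

# P(A) * P(B) * P(C) for the three independent qualifications
p = Fraction(1, 20) * Fraction(1, 25) * Fraction(1, 40)
print(p, float(p))  # 1/20000 5e-05, i.e. 0.00005
```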
For Dependent Events (Conditional Probability)
As defined earlier, dependent events are those where the occurrence or non-occurrence of one event affects the outcome of the next event. For such events, the multiplicative theorem stated earlier is not applicable. The probability associated with such events is called conditional probability and is given by
P(A/B) = ${\frac{P(AB)}{P(B)}}$ or ${\frac{P(A \cap B)}{P(B)}}$
Read P(A/B) as the probability that event A occurs given that event B has already occurred.
Similarly, the conditional probability of B given A is
P(B/A) = ${\frac{P(AB)}{P(A)}}$ or ${\frac{P(A \cap B)}{P(A)}}$
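For a finite sample space with equally likely outcomes, the formula can be applied directly by counting outcomes. The sketch below is a minimal illustration (the helper name and the die example are my own, not from the tutorial):

```python
def conditional_probability(a, b, space):
    """P(A/B) = P(A ∩ B) / P(B), assuming equally likely outcomes in `space`."""
    a, b = set(a) & set(space), set(b) & set(space)
    p_ab = len(a & b) / len(space)  # P(A ∩ B)
    p_b = len(b) / len(space)       # P(B)
    return p_ab / p_b

# One roll of a fair die: A = "even number", B = "number greater than 3"
space = {1, 2, 3, 4, 5, 6}
print(conditional_probability({2, 4, 6}, {4, 5, 6}, space))  # 2/3 ≈ 0.667
```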
Example
Problem Statement:
A coin is tossed two times. The tosses resulted in one head and one tail. What is the probability that the first throw resulted in a tail?
Solution:
The sample space of a coin tossed two times is S = {HH, HT, TH, TT}.
Let event A be that the first throw resulted in a tail, so A = {TH, TT}.
Let event B be that one tail and one head occurred, so B = {HT, TH}.
Then A ∩ B = {TH}, giving P(A ∩ B) = ${\frac{1}{4}}$ and P(B) = ${\frac{2}{4}}$ = ${\frac{1}{2}}$.
P(A/B) = ${\frac{P(A \cap B)}{P(B)}}$ = ${\frac{1/4}{1/2}}$ = ${\frac{1}{2}}$
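To double-check the answer, the short sketch below (an illustration, not from the original tutorial) enumerates the same sample space and applies the conditional probability formula:

```python
space = ["HH", "HT", "TH", "TT"]  # two tosses of a fair coin, equally likely

a = [s for s in space if s[0] == "T"]           # A: first throw is a tail
b = [s for s in space if set(s) == {"H", "T"}]  # B: one head and one tail

p_ab = len(set(a) & set(b)) / len(space)  # P(A ∩ B) = 1/4
p_b = len(b) / len(space)                 # P(B) = 1/2
print(p_ab / p_b)                         # P(A/B) = 0.5
```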