Power Analysis

  • Null hypothesis: innocent until proven guilty.
  • Decreasing the type II error increases the type I error. Researchers consider a type I error more serious, and hence keep it at 0.05 (or 0.01) at most.
https://www.youtube.com/watch?v=OWn3Ko1WYTA
Null Vs Alternative Hypothesis

  • We can assume a null hypothesis under which the test statistic has a normal distribution with parameter μ = 0.
  • We can also specify an alternative hypothesis under which the test statistic has a normal distribution with parameter μ = 1.
  • If the test statistic falls above the critical value, we reject the null hypothesis.
  • The probability that the test statistic falls above the critical value is called the alpha level.
  • If the test statistic falls below the critical value, we don’t reject the null hypothesis.
  • The probability that the test statistic falls above the critical value, assuming the alternative hypothesis, is called statistical power.
  • In the usual diagram, the red area representing alpha is overlapped by the green area representing power.
The probability that the test statistic falls above the critical value assuming the null hypothesis is called the “alpha level”.
The probability that the test statistic falls above the critical value assuming the alternative hypothesis is called statistical “power”.
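As a concrete illustration of these definitions, here is a minimal Python sketch, assuming the test statistic is N(0, 1) under the null and N(1, 1) under the alternative, with a one-sided test (the unit standard deviation is an assumption for illustration):

```python
# Sketch: alpha and power for a one-sided test, assuming the test
# statistic is N(0, 1) under the null and N(1, 1) under the alternative
# (the unit SD is an illustrative assumption).
from scipy.stats import norm

alpha = 0.05
mu_null, mu_alt, sd = 0.0, 1.0, 1.0

# Critical value: the point above which we reject the null.
critical = norm.ppf(1 - alpha, loc=mu_null, scale=sd)

# Alpha: P(statistic > critical | null is true) -- 0.05 by construction.
alpha_check = 1 - norm.cdf(critical, loc=mu_null, scale=sd)

# Power: P(statistic > critical | alternative is true).
power = 1 - norm.cdf(critical, loc=mu_alt, scale=sd)

print(f"critical value = {critical:.3f}")    # ~1.645
print(f"alpha          = {alpha_check:.3f}") # 0.050
print(f"power          = {power:.3f}")       # ~0.259
```

With the alternative only one standard deviation away from the null, the power is only about 26%; a larger effect, or a smaller spread of the test statistic (e.g., from a larger sample), raises it.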
https://www.youtube.com/watch?v=iuBbJIeEUwA
https://www.youtube.com/watch?v=qUoOJ7QBLKY

SAMPLE SIZE

  • Most study designs establish and fix the sample size before the study commences. Some study designs allow the sample size to change as the trial progresses, as more information becomes available about factors that affect it, such as the effect size and the clinical event rate.
  • Factors affecting sample size:
    1. Power
    2. Significance level(α)
    3. Clinical event rate
    4. Effect size
    5. Subject Drop-out rate
    6. Subject allocation ratio
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3409926/
1. Power
  • Power is the probability of finding a difference when it truly exists, i.e., the probability of rejecting a false null hypothesis.
  • The most commonly used power threshold is 80%, which means a false null hypothesis is successfully rejected 80% of the time.
https://www.youtube.com/watch?v=TrbVti5Wxlg
2. Significance level
  • The alpha level is the probability of making a type I error.
  • The most commonly used significance levels are 5% (α = 0.05) and 1% (α = 0.01), meaning there is a 5% or 1% probability, respectively, of a ‘chance difference’ mistakenly being considered a ‘real significant difference’.
  • Significance level = 1- Confidence level (α= 1-CL)
  • A statistically significant difference doesn’t always mean a meaningful or important difference. It simply indicates that researchers can be confident that the difference does exist. (With a large sample size, even very small differences can be statistically significant.)
  • The alpha level is determined before the study begins, while the p-value is calculated from the sample data after the study has been completed. If p < α, the results are statistically significant (see the sketch below).
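A short sketch of this decision rule, using a two-sample t-test; the group sizes, means, and α below are all made-up values for illustration:

```python
# Sketch: fix alpha before seeing the data, then compare the p-value
# from a two-sample t-test against it (all data here are simulated).
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)
alpha = 0.05                                  # chosen before the study

group_a = rng.normal(0.0, 1.0, size=50)       # control
group_b = rng.normal(0.5, 1.0, size=50)       # treatment, true shift 0.5

t_stat, p_value = ttest_ind(group_a, group_b)
print(f"p = {p_value:.4f}; significant at alpha = {alpha}: {p_value < alpha}")
```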
3. Clinical Event rate
  • It is the frequency of the relevant clinical event in the study population.
  • The clinical event is the ‘problem’ that we are trying to measure, or that the study treatment aims to prevent, alleviate, or cure.
  • Example: if studying a treatment designed to prevent heart attacks, the relevant clinical event is a heart attack. If studying a treatment to alleviate severe diarrhea, the clinical event is an episode of diarrhea.
  • A low event rate (i.e., a rare event) necessitates a large sample size, because many patients in the study population will go through the study without having an event.
  • A higher event rate (i.e., a common event) allows much smaller sample sizes, as almost everyone in the study population will have the event and thus provide useful information.
  • Predicting the event rate can be very difficult and is usually based on prior studies.
4. Effect Size
  • Effect size is the magnitude of a phenomenon, or the strength of the relationship between two variables, on a numeric scale.
  • The smaller the effect size (the anticipated difference), the larger the sample size required.
  • Cohen’s d is calculated by dividing the difference between the two population means by their pooled standard deviation. Cohen suggested that
  • d=0.2 is considered a ‘small’ effect size,
  • d=0.5 represents a ‘medium’ effect size 
  • d=0.8 a ‘large’ effect size.
  • If two groups’ means don’t differ by 0.2 standard deviations or more, the difference is trivial, even if it is statistically significant (see the sketch below).
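A brief sketch of computing Cohen’s d and feeding it into a sample size calculation, using statsmodels’ TTestIndPower; all numbers are illustrative assumptions:

```python
# Sketch: Cohen's d from summary statistics (difference in means divided
# by the pooled SD), then the per-group sample size for 80% power at
# alpha = 0.05 using statsmodels. All numbers are illustrative.
import math
from statsmodels.stats.power import TTestIndPower

mean_a, mean_b = 10.0, 9.0      # assumed group means
sd_a, sd_b = 2.0, 2.0           # assumed group SDs
pooled_sd = math.sqrt((sd_a**2 + sd_b**2) / 2)

d = (mean_a - mean_b) / pooled_sd          # 0.5 -> 'medium' effect
n = TTestIndPower().solve_power(effect_size=d, power=0.8, alpha=0.05)
print(f"Cohen's d = {d:.2f}; n per group = {math.ceil(n)}")  # d=0.50, n=64
```

solve_power() fills in whichever one of effect size, sample size, alpha, or power is left unspecified, so the same call can also answer “what power do I get with this n per group?”.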
5. Compliance and drop-out rates

Sample size calculations should also take into account the number of subjects who may not comply with the study protocol or who may withdraw from the study.

6. Subject allocation ratio
  • The allocation ratio is the ratio of the number of patients assigned to each arm.
  • Most studies assign an equal number of subjects to each study arm (called a ‘one-to-one allocation’ when there are only two study arms).
  • A two-to-one allocation ratio means that twice as many patients are in one study arm compared to the other study arm.
  • Disproportionate allocations (i.e., any allocation ratio other than one-to-one) call for larger sample sizes.
Margin of Error (MOE, or E)
  • Standard deviation (SD)
    • SD is the amount of variability (dispersion) of individual data values around the mean.
    • SD measures the spread and variability of the data. If all values are the same, the SD is zero.
  • Standard error of the mean (SEM)
    • A sample mean deviates from the actual population mean; the typical size of this deviation is the SEM. It measures how far the sample mean is likely to be from the true population mean (SEM = SD/√n).
    • It measures the accuracy of means (used when comparing/testing differences between means).
    • SEM is always smaller than the SD.
  • Std. deviation → Std. error (SEM) → MOE
  • Is a standard deviation of 10 high, or is a standard deviation of 0.20 high?
  • When we know nothing else about the data except the SD and the mean, the CV helps us understand that even a lower SD doesn’t mean less variable data. For example,
    • If the mean is 100 and the SD is 10, then CV = 10/100 = 0.10, or 10%.
    • While if the SD is 0.20 and the mean is 0.50, then CV = 0.20/0.50 = 0.4, or 40%.
  • The coefficient of variation (CV) is also known as the relative standard deviation (RSD).
  • RSD or CV = standard deviation / mean.
  • CV ≥ 1 indicates relatively high variation, while CV < 1 can be considered low.
  • For a given confidence level (Z value) and σ, the margin of error (E) decreases as the sample size increases. If we want to keep the sampling error as small as possible, we should take a large sample.
  • We can also calculate the sample size needed to keep the margin of error at a particular value (see the sketch below).
  • We can have an SE of the median, of the 75th percentile, and of many other statistics. When our statistic is the mean, the SE becomes the SEM.
  • The margin of error (MOE) is a measure of how close the estimate is likely to be to the true value. It is usually some number of standard errors (MOE = Z × standard error), depending on the required confidence level.
  • MOE = Z × SEM (margin of error = Z critical value × standard error)
  • More precisely,
    • an MOE of 1 standard error gives a 68% confidence level,
    • an MOE of 1.96 (not 2) standard errors gives a 95% confidence level,
    • an MOE of 2.58 standard errors gives a 99% confidence level.
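A small sketch putting the SD → SEM → MOE chain into numbers, and then inverting it to find the sample size for a target margin of error; σ, n, and the target are assumed values:

```python
# Sketch: MOE = Z * SEM for a mean, then the n needed for a target MOE.
# sigma, n, and the target below are illustrative assumptions.
import math
from scipy.stats import norm

sigma, n = 15.0, 100               # assumed population SD, sample size
z = norm.ppf(1 - 0.05 / 2)         # 1.96 for 95% confidence

sem = sigma / math.sqrt(n)         # standard error of the mean
moe = z * sem
print(f"SEM = {sem:.2f}, MOE = {moe:.2f}")        # SEM = 1.50, MOE = 2.94

# Invert MOE = Z * sigma / sqrt(n) to solve for n at a desired MOE.
target = 2.0
n_needed = math.ceil((z * sigma / target) ** 2)
print(f"n for MOE <= {target}: {n_needed}")       # 217
```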
  • MARGIN OF ERROR for a PROPORTION (capital P)
  • Parameters are variables that summarize data for an entire population. 
  • Statistics are variables that summarize data from a sample.
https://www.youtube.com/watch?v=qVDVAZigXg0
https://stats.stackexchange.com/questions/15981/what-is-the-difference-between-margin-of-error-and-standard-error
  • The test statistic could be a mean, a proportion, a risk ratio, etc.; the sketch below shows the proportion case.
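For a proportion, the standard error is √(p̂(1 − p̂)/n), so the same MOE = Z × SE logic applies; a sketch with assumed values:

```python
# Sketch: margin of error for a sample proportion, and the n needed for
# a target MOE. p_hat, n, and the target are illustrative assumptions.
import math
from scipy.stats import norm

p_hat, n = 0.30, 400
z = norm.ppf(1 - 0.05 / 2)                        # 1.96 for 95% confidence

moe = z * math.sqrt(p_hat * (1 - p_hat) / n)
print(f"MOE = {moe:.3f}")                         # ~0.045, i.e. 30% +/- 4.5%

target = 0.03                                     # 3 percentage points
n_needed = math.ceil(z**2 * p_hat * (1 - p_hat) / target**2)
print(f"n for MOE <= {target}: {n_needed}")       # 897
```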

Sample size calculation

  1. Most of the Population - Sometimes the entire population of patients with certain conditions or characteristics is small enough to serve as the sample.
  2. Sample Size of a Comparable Study - It is important to find a truly comparable study, and not simply replicate any sample size calculation mistakes that the other study made.
  3. Sample Size Tables and Software - There are many published tables and software programs that allow you to look up the necessary sample size based on the present study’s characteristics. However, sometimes the tables or programs will not account for all of the characteristics important to the study.
  4. Sample Size Formulae - Standard formulae give the required sample size from the significance level, power, and effect size (see the sketch below).
https://www.youtube.com/watch?v=_QEddJG2MN0
https://sites.google.com/site/jilmacmath/statistics/margin-of-error
https://www.youtube.com/watch?v=u4TJdNAnTWg
  • As the significance level is usually set at 0.05 and the power at 0.8, researchers need to pay most attention to the effect size when calculating the required sample size.
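A sketch of the textbook normal-approximation formula for comparing two means, n per group = 2 × (z_(1−α/2) + z_(1−β))² / d², using the conventional α = 0.05 and power = 0.8; the ‘medium’ effect size d = 0.5 is an assumption. This is the formula behind software results like the statsmodels call shown earlier:

```python
# Sketch: n per group = 2 * (z_{1-alpha/2} + z_{1-beta})^2 / d^2,
# the standard two-sample-means formula. d = 0.5 is an assumed value.
import math
from scipy.stats import norm

alpha, power, d = 0.05, 0.80, 0.5
z_alpha = norm.ppf(1 - alpha / 2)   # 1.96
z_beta = norm.ppf(power)            # 0.84

n_per_group = math.ceil(2 * (z_alpha + z_beta) ** 2 / d ** 2)
print(f"n per group = {n_per_group}")  # 63 (an exact t-test version gives ~64)
```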
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3409926/
https://www.elsevier.com/books/principles-and-practice-of-clinical-trial-medicine/chin/978-0-12-373695-6

References