ANOVA, MANOVA, ANCOVA, MANCOVA

VARIABLE TYPES

  • Factor = categorical (qualitative) independent variable.
  • Factor = IV (input)
  • Response = DV (output)
  • Independent variables are also called “regressors,” “controlled variables,” “manipulated variables,” “explanatory variables,” “exposure variables,” and/or “input variables.”
  • Dependent variables are also called “response variables,” “regressands,” “measured variables,” “observed variables,” or “responding variables.”
https://www.youtube.com/watch?v=B0ABvLa_u88
https://www.graphpad.com/support/faq/what-is-the-difference-between-ordinal-interval-and-ratio-variables-why-should-i-care/
https://www.youtube.com/watch?v=cz4nPSA9rlc

FACTORIAL DESIGN

  • Factorial design is a study design used to examine how two or more categorical IVs/predictors (factors) predict or explain an outcome.
  • (N-way) ANOVA is a statistical test to find the significance of main effects and interactions.
  • Strength-can look at the effect of each factor separately and also in combination with other factors.
  • Weakness-gets complicated and hard to interpret with more than two factors.
  • Levels-subdivisions of each IV. For example: 3 polymer levels of High, Medium, and Low.
  • Conditions-all levels of each IV are combined with all levels of the other IVs to produce all possible conditions.
  • Factorial Notation-
  • The number of numbers refers to the total number of factors in the design (e.g. 2×2 = 2 factors, 2×2×2 = 3 factors).
  • The number values refer to the number of levels of each factor (e.g. 2×2 = 2 factors, each at 2 levels).
  • All levels of each independent variable are combined with all levels of the other independent variables to produce all possible conditions (e.g. 3×4 = 2 factors, one with 3 levels and one with 4 levels, for a total of 12 possible conditions; see the sketch after this list).
  • Main effect-the effect of one factor (IV) on the response (DV), ignoring any other IVs.
  • Repeated-Measures Factorial Design-each participant undergoes every condition in the experiment (reduces participant numbers and within-subject variance, but with a high possibility of carryover effects).
  • Randomly Assigned Factorial Design-each participant is randomly assigned to just one condition.
  • Mixed Factorial Design-repeated measures on one IV, while participants are randomly assigned on the other IV.
  • Non-Manipulated Variable-some pre-existing condition of the participants that the experimenter cannot change (e.g. gender, race, height).
  • Correlational Factorial Design-two or more predictors (factors) that are not manipulated in the study.
  • Quasi-Experimental Factorial Design two or more quasi-IVs, meaning that the IVs are manipulated but participants are not randomly assigned to IV conditions.
  • Experimental Factorial Design-two or more IVs that are manipulated and in which participants are randomly assigned to IV levels.
  • Hybrid Factorial Design-at least one experimental IV and at least one quasi-IV.
  • Cell-the combination of one level of one factor with one level of another factor.
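Factorial notation maps directly onto the set of conditions. As a minimal sketch (the polymer and drug levels below are invented for illustration), crossing a 3-level factor with a 4-level factor yields the 12 conditions of a 3×4 design:

```python
from itertools import product

# Hypothetical factors for a 3x4 design: 2 factors, at 3 and 4 levels
polymer_levels = ["High", "Medium", "Low"]
drug_levels = ["A", "B", "C", "D"]

# Cross all levels of each factor with all levels of the other factor
conditions = list(product(polymer_levels, drug_levels))

print(len(conditions))   # 12 possible conditions
for condition in conditions:
    print(condition)     # e.g. ('High', 'A'), ('High', 'B'), ...
```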

ANOVA Vs REGRESSION

  • ANOVA and regression (or multivariate regression) are really the same model. The difference is that regression uses numeric/continuous IVs, whereas ANOVA/MANOVA use categorical (factor) IVs.
  • ANOVA/MANOVA both come in “N-way” varieties.
  • One-way ANOVA-measures the effect of one independent variable (e.g. the effect of polymer type on the EE of NPs).
  • Two-way and three-way ANOVA, also known as factorial ANOVA, measure the effect of 2 factors (polymer, drug) and 3 factors (polymer, drug, surfactant) on the EE of NPs, respectively.
  • Univariate analysis is a descriptive analysis of one variable.
  • One-way ANOVA is a “bivariate” analysis.
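As a hedged sketch of the point that ANOVA and regression are the same model, the snippet below fits a one-way ANOVA as an ordinary regression on a categorical predictor using the statsmodels formula API; the polymer/EE data are invented:

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Invented data: encapsulation efficiency (EE) by polymer type
df = pd.DataFrame({
    "polymer": ["PLGA"] * 4 + ["PCL"] * 4 + ["chitosan"] * 4,
    "ee": [62, 65, 61, 64, 55, 53, 57, 54, 70, 72, 69, 71],
})

# Regression with a categorical IV is the one-way ANOVA model
model = smf.ols("ee ~ C(polymer)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))   # the usual ANOVA F-test
```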

Bivariate Analysis

  • Bivariate analysis involves the analysis of two variables (often denoted X and Y) to explore the relationship between them: whether an association exists and how strong it is, or whether there are differences between the two variables and how significant those differences are. There are three types of bivariate analysis.
    • (1) Numerical & Numerical Bivariate analysis
      • Scatter Plot, Linear Correlation(r).
      • Linear correlation quantifies the strength of a linear relationship between two numerical variables. When there is no correlation between two variables, there is no tendency for the values of one quantity to increase or decrease with the values of the second quantity.
    • (2) Categorical & Categorical Bivariate analysis
      • Stacked Column Chart, Chi-square Test.
      • The chi-square test can determine the association between categorical variables. It is based on the difference between expected frequencies (E) and observed frequencies (O) in one or more categories of the frequency table.
      • The chi-square distribution returns a probability for the computed chi-square and the degrees of freedom. A probability of zero shows complete dependency between the two categorical variables, and a probability of one means that the two categorical variables are completely independent.
    • (3) Numerical & Categorical Bivariate analysis
      • Line Chart with Error Bars, Z-test and t-test, ANOVA
      • The ANOVA test assesses whether the averages of a numerical dependent variable (2 or more DVs in the case of MANOVA) for more than two groups (categorical IV) are statistically different from each other.

ANOVA Vs MANOVA

  • The difference between ANOVA and a “Multivariate Analysis of Variance” (MANOVA) is the “M”, which stands for multivariate.
  • Unlike ANOVA, MANOVA compares two or more continuous response (or dependent) variables.
  • Like ANOVA, MANOVA has both a one-way flavor and an N-way flavor.
  • The number of factors (categorical independent variables) involved distinguishes a one-way MANOVA from a two-way MANOVA.
  • Measuring the effect of a single factor (polymer) on the particle size and EE of NPs is an example of a one-way MANOVA.
  • Measuring the effect of 2 factors (polymer, drug) on the particle size and EE of NPs is an example of a two-way MANOVA.
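A minimal one-way MANOVA sketch with statsmodels, following the example above (one factor, two DVs); all numbers are invented:

```python
import pandas as pd
from statsmodels.multivariate.manova import MANOVA

# Invented data: particle size and EE measured for three polymers
df = pd.DataFrame({
    "polymer": ["PLGA"] * 4 + ["PCL"] * 4 + ["chitosan"] * 4,
    "size": [180, 175, 185, 178, 220, 215, 225, 218, 150, 155, 148, 152],
    "ee": [62, 65, 61, 64, 55, 53, 57, 54, 70, 72, 69, 71],
})

# One-way MANOVA: one categorical IV, two continuous DVs
fit = MANOVA.from_formula("size + ee ~ polymer", data=df)
print(fit.mv_test())   # Wilks' lambda, Pillai's trace, etc.
```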

ANOVA Vs ANCOVA Vs MANCOVA

  • The difference between ANOVA and ANCOVA is the letter “C”, which stands for ‘covariance’.
  • Like ANOVA, “Analysis of Covariance” (ANCOVA) has a single continuous/numerical response variable (DV).
  • Unlike ANOVA, ANCOVA compares a response variable by both a factor (qualitative, categorical IV) and a continuous/numerical IV (e.g. comparing test scores by both ‘level of education’ and ‘number of hours spent studying’).
  • The term for the continuous/numerical independent variable used in ANCOVA is “covariate”.
  • Unlike ANCOVA, MANCOVA compares two or more continuous response (or dependent) variables.
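A hedged ANCOVA sketch following the test-score example above (one categorical factor plus one continuous covariate); the data are invented:

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Invented data: test score by education level and hours spent studying
df = pd.DataFrame({
    "education": ["HS"] * 3 + ["BS"] * 3 + ["MS"] * 3,
    "hours": [2, 5, 8, 3, 6, 9, 2, 5, 7],
    "score": [55, 64, 73, 62, 71, 82, 68, 77, 85],
})

# ANCOVA: categorical factor + continuous covariate in the same model
model = smf.ols("score ~ C(education) + hours", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```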

ANOVA

ANOVA uses the F-test to determine whether the variability between group means is larger than the variability of the observations within the groups.

F value = variance of the group means (mean square between) / mean of the within-group variances (mean square error)

If that ratio is sufficiently large, you can conclude that not all the means are equal.

  • Note that the F-critical value can be obtained from a computer before the experiment is run, as long as we know how many subjects will be studied and how many levels the explanatory variable has.
  • Then when the experiment is run, we can calculate the observed F-statistic and compare it to F-critical.
  • If the observed F-statistic is smaller than the critical value, we retain the null hypothesis because the p-value must be bigger than alpha; if the observed F-statistic is equal to or bigger than the F-critical value, we reject the null hypothesis because the p-value must be equal to or smaller than alpha.
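A short scipy sketch of exactly this comparison, on invented data for three groups:

```python
import numpy as np
from scipy import stats

# Invented data: three groups of four observations each
a = np.array([62, 65, 61, 64])
b = np.array([55, 53, 57, 54])
c = np.array([70, 72, 69, 71])

f_obs, p = stats.f_oneway(a, b, c)   # observed F-statistic and p-value

# F-critical for alpha = 0.05, with df1 = k - 1 and df2 = N - k
k, n = 3, 12
f_crit = stats.f.ppf(0.95, dfn=k - 1, dfd=n - k)

# Reject H0 if f_obs >= f_crit (equivalently, if p <= alpha)
print(f_obs, f_crit, p)
```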


F-statistics

http://www.socr.ucla.edu/Applets.dir/F_Table.html

  • An F statistic is the value you get when you run an ANOVA test or a regression analysis to find out whether the means of two or more populations are significantly different. It is similar to the t-statistic from a t-test.
  • The F-distribution is a non-negative distribution in the sense that F values, being ratios of squared quantities (variances), can never be negative.
  • While the t-test compares “means”, ANOVA compares the “variance” between the populations.
  • t-test = used when the population means of only two groups are to be compared; ANOVA = preferred when the means of three or more groups are to be compared.
  • A t-test will tell you if a single variable is statistically significant, and an F-test will tell you if a group of variables is jointly significant (an interaction effect along with main effects is possible for “ANOVA with replication”).

The F test results have both an F value and an F critical value.

  • The value you calculate from your data is called the F value (also called the F statistic).
  • The F critical value is the cutoff from the F-distribution against which the F value is compared.

In general, if the calculated F value is larger than the F critical value, we can reject the null hypothesis. If the null hypothesis is true, we expect F to have a value close to 1.0 most of the time.

However, we should also consider the p-value.

The p-value is the evidence against the null hypothesis. The smaller the p-value, the stronger the evidence that you should reject the null hypothesis.

Read your p-value first. If the p-value is small (less than your alpha level), you can reject the null hypothesis. Only then should you consider the F-value. If you don’t reject the null, ignore the F-value.

Choosing a Statistical Test (3 groups of Tests)

https://www.youtube.com/watch?v=ulk_JWckJ78

Chi-squared test

  • It is a special type of test that deals with frequencies of data instead of means like some other tests (it looks for independence of events rather than a simple numerical difference).
  • It is most useful for data that are non-parametric.
  • Its degrees of freedom are not simply the sample size minus 1.

df = (# rows - 1) x (# columns - 1). (For a 2 x 2 table, df = 1.)

Assumptions for a chi-square independence test-if these assumptions hold, the χ2 test statistic follows a χ2 distribution:

    1. Independent observations. This usually (but not always) holds.
    2. For a 2 by 2 table, all expected frequencies > 5.
      For a larger table, all expected frequencies > 1 and no more than 20% of all cells may have expected frequencies < 5.

Chi-squared test of independence

  • 2 categorical variables from a single population.
  • Data are collected randomly from a population to determine whether there is a significant association between two categorical variables.
  • For example, in a university, students might be classified by their gender (female or male) or by their primary major (mathematics, chemistry, history, etc.). We use a chi-square test for independence to determine whether gender is related to choice of major.

Chi-squared test of homogeneity

  • only 1 categorical variable from 2 (or more) populations.
  • Data are collected by sampling each sub-group separately, to determine whether the frequency counts differ significantly across the different populations.
  • For example, in a survey of subject preferences, we might ask students for their favorite subject. We ask the same question of two different populations, such as females and males. We then use a chi-square test for homogeneity to determine whether female subject preferences differed significantly from male subject preferences.
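A minimal sketch of a chi-square test on a contingency table with scipy; the gender-by-subject counts below are invented:

```python
import numpy as np
from scipy.stats import chi2_contingency

# Invented 2x3 table: gender (rows) x favorite subject (columns)
observed = np.array([
    [30, 20, 10],   # female
    [20, 25, 15],   # male
])

chi2, p, df, expected = chi2_contingency(observed)
print(chi2, p, df)   # df = (2 - 1) x (3 - 1) = 2
print(expected)      # expected frequencies under independence
```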
http://spss-tutorials.com/chi-square-independence-test/#assumptions
https://www.biochemia-medica.com/en/journal/23/2/10.11613/BM.2013.018/fullArticle
https://magoosh.com/statistics/chi-square-test/
https://www.youtube.com/watch?v=m9ObCrzGhv8
https://www.youtube.com/watch?v=ulk_JWckJ78
https://www.biochemia-medica.com/en/journal/20/1/10.11613/BM.2010.004
https://www.researchgate.net/figure/Common-statistical-tests-to-compare-categorical-data-for-difference_fig1_305213637/download

Post hoc Tests (Multiple comparison analysis in ANOVA)

https://www.graphpad.com/support/faqid/1091/

  • Tests conducted on subsets of data tested previously in another analysis are called post hoc tests.
  • A post hoc test is used in situations where you can decide which comparisons you want to make after looking at the data. You don’t need to plan ahead.
  • A class of post hoc tests that provide this type of detailed information for ANOVA results are called “multiple comparison analysis” tests. Multiple comparison tests apply whenever you make several comparisons at once.
  • The most commonly used multiple comparison analysis statistics include the following tests: Tukey, Newman-Keuls, Scheffé, Bonferroni, and Dunnett.
  • These statistical tools each have specific uses, advantages and disadvantages.
  • Some are best used for testing theory while others are useful in generating new theory. 

We can make several types of multiple comparisons. There are several ways to do this (STD-B):

    1. All possible comparisons, including averages of groups. So you might compare the average of groups A and B with the average of groups C, D, and E, or compare group A to the average of B-F (Scheffé’s test).
    2. All possible pairwise comparisons. Compare the mean of every group with the mean of every other group, such as mean of group A with mean of group B (Tukey or Newman-Keuls).
    3. All against a control. If group A is the control, you may only want to compare A with B, A with C, A with D; but not compare B with C or C with D (Dunnett’s test).
    4. Only a few comparisons based on your scientific goals. So you might want to compare A with B and B with C, and that’s it (Bonferroni’s test).
  • Planned comparison tests  require that you focus in on a few scientifically sensible comparisons. You can’t decide which comparisons to do after looking at the data. The choice must be based on the scientific questions you are asking, and be chosen when you design the experiment.
  • Orthogonal comparisons-when you make only a few comparisons, the comparisons are called “orthogonal” when each comparison is among different groups. Comparing groups A and B is orthogonal to comparing groups C and D, because there is no information in the data from groups A and B that is relevant when comparing groups C and D. In contrast, comparing A and B is not orthogonal to comparing B and C.

  • When comparisons are orthogonal, each comparison can use an ordinary t-test. You may still want to use the Bonferroni correction to adjust the significance level.
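A hedged sketch of all-pairwise comparisons (option 2 above) using Tukey’s test via statsmodels; the data are invented:

```python
import numpy as np
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Invented data: one response measured in three groups
y = np.array([62, 65, 61, 64, 55, 53, 57, 54, 70, 72, 69, 71])
groups = np.array(["A"] * 4 + ["B"] * 4 + ["C"] * 4)

# Every group mean compared with every other, with Tukey's correction
print(pairwise_tukeyhsd(endog=y, groups=groups, alpha=0.05))
```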

https://www.ncbi.nlm.nih.gov/pubmed/22420233

Covariance Vs Correlation Vs Regression

Covariance-indicates only the direction of the linear relationship between variables. Covariance values are not standardized and can range from negative infinity to positive infinity.

Correlation-determines the co-relationship or association of two variables (the extent to which they tend to change together). It describes both the strength and the direction of the relationship. Correlation coefficient values are standardized and range from -1 to +1.

Regression-describes the numeric relationship between an independent variable and a dependent variable. Regression indicates the impact of a unit change in the known variable (x) on the estimated variable (y).

Correlation is used to represent the linear relationship between two variables. Regression, by contrast, is used to fit the best line and estimate one variable on the basis of the other.
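A small numpy sketch contrasting the three quantities on invented data:

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])

cov = np.cov(x, y)[0, 1]                 # direction only; scale is unbounded
r = np.corrcoef(x, y)[0, 1]              # standardized: always between -1 and +1
slope, intercept = np.polyfit(x, y, 1)   # impact of a unit change in x on y

print(cov, r, slope, intercept)
```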

Regression, Multiple Regression

Simple linear regression plots one independent variable X against one dependent variable Y. Technically, in regression analysis,

  • the independent variable is called the predictor variable (‘x’)
  • the dependent variable is called the criterion variable (‘y’)

Multiple regression uses multiple independent (“x”) or predictor variables in the regression.

Regression analysis can result in linear or nonlinear graphs. A linear regression is one where the relationship between your independent and dependent variables can be described with a straight line. Non-linear regression produces a curved line.

  • Example-in one-variable linear regression, you would input one independent variable (e.g. “sales”) against one dependent variable (e.g. “profit”). But you might be interested in how different types of sales affect the regression. You could set X1 as one type of sales, X2 as another type, and so on.
  • Simple regression analysis uses a single x variable for each dependent “y” variable. For example: (x1, Y1).
  • Multiple regression uses multiple “x” variables for each dependent “y” variable: ((x1)1, (x2)1, (x3)1, Y1).
  • Simple regression: Y = b0 + b1 x.
  • Multiple regression: Y = b0 + b1 x1 + b2 x2 + … + bn xn.

The output includes a summary, similar to the one for simple linear regression; a sketch follows below.
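As a sketch, fitting a multiple regression with statsmodels prints such a summary (coefficients, R-squared, adjusted R-squared, p-values); the sales/profit numbers are invented:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Invented data: profit predicted from two types of sales
df = pd.DataFrame({
    "sales_online": [10, 20, 30, 40, 50, 60],
    "sales_retail": [5, 15, 10, 25, 20, 30],
    "profit": [8, 18, 20, 33, 34, 45],
})

# Multiple regression: Y = b0 + b1*x1 + b2*x2
model = smf.ols("profit ~ sales_online + sales_retail", data=df).fit()
print(model.summary())   # coefficients, R-squared, adjusted R-squared, p-values
```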

Pearson Vs Spearman correlation 

Minitab offers two different correlation analyses:

Pearson product moment correlation-evaluates the “linear/proportional relationship” between two continuous (interval/ratio) variables.

  • For example, to evaluate whether increases in temperature at your production facility are associated with decreasing thickness of your chocolate coating.

Spearman rank-order correlation-evaluates the “monotonic relationship” between two ordinal/ranked variables (the variables tend to change together, but not necessarily at a constant rate). The Spearman correlation coefficient is based on the ranked values for each variable rather than the raw data.

  • For example, to evaluate whether the order in which employees complete a test exercise is related to the number of months they have been employed.
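A short scipy sketch contrasting the two coefficients on an invented monotonic but non-linear relationship:

```python
import numpy as np
from scipy.stats import pearsonr, spearmanr

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
y = x ** 3   # monotonic, but not linear

print(pearsonr(x, y))    # high but below 1: the relationship is not linear
print(spearmanr(x, y))   # exactly 1: the relationship is perfectly monotonic
```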
https://support.minitab.com/en-us/minitab-express/1/help-and-how-to/modeling-statistics/regression/supporting-topics/basics/a-comparison-of-the-pearson-and-spearman-correlation-methods/#other-nonlinear-relationships

Correlation Coefficient (R), R-squared, Adjusted R-squared

There are several types of correlation coefficients: Pearson’s correlation coefficient ‘r’ (the most common), Cramér’s V, etc.

R, or Pearson’s r, is a measure of the strength and direction of the linear relationship between two variables.

The absolute value of the correlation coefficient (the formulas return a value between -1 and 1) gives the relationship strength. The larger the number, the stronger the relationship. For example, |-.75| = .75, which indicates a stronger relationship than .65.

  • +1 indicates a perfect positive relationship
  • -1 indicates a perfect negative relationship
  • A result of zero indicates no relationship at all.

Correlation (R) can be directly interpreted for simple linear regression, because there is only one x and one y variable. For multiple linear regression, R is computed, but R-squared is the better term. R-squared can be interpreted for both simple and multiple linear regressions.

  • Coefficient of determination, denoted R2 or r2, is the proportion of the variance in the dependent variable that is predictable from the independent variable(s).
  • Coefficient of Determination (R Squared) can never be negative – since it is a squared value (always between 0 and 1).

  • R2 shows how well terms (data points) fit a curve or line.
  • Adjusted R2 also indicates how well terms fit a curve or line, but adjusts for the number of terms in a model. If you add more and more useless variables to a model, adjusted r-squared will decrease. If you add more useful variables, adjusted r-squared will increase.
  • Adjusted R2 will always be less than or equal to R2.
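A tiny sketch of the adjustment, using the standard formula adjusted R2 = 1 - (1 - R2)(n - 1)/(n - p - 1), where n is the sample size and p the number of predictors; the numbers are invented:

```python
def adjusted_r2(r2: float, n: int, p: int) -> float:
    """Adjusted R-squared for n observations and p predictors."""
    return 1 - (1 - r2) * (n - 1) / (n - p - 1)

# Adding a useless predictor (same R-squared, p goes 2 -> 3) lowers it
print(adjusted_r2(0.80, n=30, p=2))   # ~0.785
print(adjusted_r2(0.80, n=30, p=3))   # ~0.777 (penalized downward)
```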
Pearson’s Correlation Coefficient
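The standard definition (in LaTeX notation):

$$ r = \frac{\sum_i (x_i - \bar{x})(y_i - \bar{y})}{\sqrt{\sum_i (x_i - \bar{x})^2}\,\sqrt{\sum_i (y_i - \bar{y})^2}} $$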

Standard Error of a Sample

  • The standard error (SE) is very similar to the standard deviation.
  • Both are measures of spread: the higher the number, the more spread out your data are.
  • While the standard error uses statistics (sample data), the standard deviation uses parameters (population data).
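A minimal numpy sketch of the standard error of the mean, SE = SD/√n, on invented data:

```python
import numpy as np

data = np.array([4.2, 5.1, 3.8, 4.9, 5.3, 4.4])

sd = data.std(ddof=1)          # sample standard deviation
se = sd / np.sqrt(len(data))   # standard error of the mean

print(sd, se)
```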

“One-Way” Vs “Two-Way”

  • One-way has one independent variable or factor (with 3 or more levels; the number of observations need not be the same in each group). For example: brand of cereal.
  • Two-way has two independent variables or factors (each can have multiple levels; the same number of observations in each group). For example: brand of cereal, calories.

“Groups” or “Levels”

A level of an independent variable means that the variable can be split up into separate parts.

For example, let’s say you were studying the effect of alcohol on performance in a driving simulator. Alcohol — the independent variable — could be composed of different parts: no alcohol, two drinks, four drinks. Each of those parts is called a level.

In the above example, levels for IV “brand of cereal” might be Lucky Charms, Raisin Bran, Cornflakes — a total of three levels. The levels for IV “Calories” might be: sweetened, unsweetened — a total of two levels.

Replication

    • A two-way ANOVA without replication can compare a group of individuals performing more than one task (like a paired t-test extended beyond two conditions). For example, you could compare students’ scores across a battery of tests.
    • Two-way ANOVA is usually done with replication (more than one observation for each combination of the nominal variables).
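A hedged sketch of a two-way ANOVA with replication using statsmodels (replication is what makes the interaction term estimable); the data are invented:

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Invented 2x2 design with 3 replicates per cell
df = pd.DataFrame({
    "polymer": ["PLGA"] * 6 + ["PCL"] * 6,
    "surfactant": (["yes"] * 3 + ["no"] * 3) * 2,
    "ee": [64, 66, 65, 58, 60, 59, 55, 54, 56, 50, 49, 51],
})

# Main effects plus the interaction C(polymer):C(surfactant)
model = smf.ols("ee ~ C(polymer) * C(surfactant)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```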

https://www.youtube.com/watch?v=2fytt7BZJMI

https://www.youtube.com/watch?v=Zb1wxUEbbJ4

REFERENCES