One sample t-test
A one sample t-test is used to test whether a sample mean differs significantly from a hypothesized value.
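A minimal sketch with SciPy; the scores and the hypothesized mean of 50 are made-up values for illustration.

```python
from scipy import stats

# Hypothetical sample scores; test whether the mean differs from 50
scores = [52, 48, 57, 61, 45, 50, 55, 49, 63, 47]
result = stats.ttest_1samp(scores, popmean=50)
print(result.statistic, result.pvalue)
```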
One sample median test
A one sample median test is used to test whether a sample median differs significantly from a hypothesized value.
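One common way to carry this out is a sign test; the sketch below uses statsmodels' sign_test on made-up data, with 50 as an assumed hypothesized median.

```python
from statsmodels.stats.descriptivestats import sign_test

# Hypothetical sample; test whether the median differs from 50
scores = [52, 48, 57, 61, 45, 50, 55, 49, 63, 47]
m_statistic, p_value = sign_test(scores, mu0=50)
print(m_statistic, p_value)
```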
Binomial test
A binomial test is used to test whether the proportion of successes on a two-level categorical dependent variable differs significantly from a hypothesized value.
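For example, to test whether an observed proportion differs from an assumed 0.5, SciPy's binomtest can be used (binom_test in older SciPy versions); the counts below are hypothetical.

```python
from scipy import stats

# Hypothetical data: 43 "successes" out of 95 trials, tested against p = 0.5
result = stats.binomtest(k=43, n=95, p=0.5)
print(result.pvalue)
```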
Chi-square goodness of fit
A chi-square goodness of fit test is used to test whether the observed proportions for a categorical variable differ from hypothesized proportions. For example, let's suppose that we believe that the general population consists of x% Hispanic, y% Asian, z% African American and a% White folks. We want to test whether the observed proportions from our sample differ significantly from these hypothesized proportions.
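A sketch with SciPy, using hypothetical observed counts and hypothesized proportions (the percentages here are placeholders, not real population figures).

```python
import numpy as np
from scipy import stats

observed = np.array([20, 25, 30, 125])             # hypothetical counts per group
hypothesized = np.array([0.10, 0.10, 0.10, 0.70])  # assumed population proportions
expected = hypothesized * observed.sum()
chi2, p = stats.chisquare(f_obs=observed, f_exp=expected)
print(chi2, p)
```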
Two independent samples t-test
An independent samples t-test is used when you want to compare the means of a normally distributed interval dependent variable for two independent groups.
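A minimal sketch with SciPy on made-up scores for two independent groups.

```python
from scipy import stats

# Hypothetical scores for two independent groups
group_a = [88, 92, 79, 85, 90, 83, 95, 80]
group_b = [78, 81, 74, 80, 86, 77, 83, 79]
result = stats.ttest_ind(group_a, group_b)  # pass equal_var=False for Welch's t-test
print(result.statistic, result.pvalue)
```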
Wilcoxon-Mann-Whitney test
The Wilcoxon-Mann-Whitney test is a non-parametric analog to the independent samples t-test and can be used when you do not assume that the dependent variable is a normally distributed interval variable (you only assume that the variable is at least ordinal).
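A sketch with SciPy's mannwhitneyu on made-up ordinal scores.

```python
from scipy import stats

# Hypothetical ordinal scores for two independent groups
group_a = [3, 4, 2, 5, 4, 3, 5, 4]
group_b = [2, 3, 1, 3, 2, 4, 2, 3]
result = stats.mannwhitneyu(group_a, group_b, alternative="two-sided")
print(result.statistic, result.pvalue)
```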
Chi-square test
A chi-square test is used when you want to see if there is a relationship between two categorical variables. Remember that the chi-square test assumes that the expected value for each cell is five or higher.
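A sketch with SciPy on a hypothetical contingency table; chi2_contingency also returns the expected counts so you can check the five-per-cell assumption.

```python
import numpy as np
from scipy import stats

# Hypothetical 2 x 3 contingency table (rows: gender, columns: program type)
table = np.array([[30, 45, 25],
                  [35, 30, 35]])
chi2, p, dof, expected = stats.chi2_contingency(table)
print(chi2, p, dof)
print(expected)  # check that all expected counts are at least five
```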
Fisher's exact test
Fisher's exact test is used when you want to conduct a chi-square test but one or more of your cells has an expected frequency of five or less. Remember that the chi-square test assumes that each cell has an expected frequency of five or more; Fisher's exact test has no such assumption and can be used regardless of how small the expected frequency is.
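A sketch with SciPy on a made-up 2 x 2 table with small counts.

```python
from scipy import stats

# Hypothetical 2 x 2 table with small cell counts
table = [[2, 7],
         [8, 3]]
odds_ratio, p = stats.fisher_exact(table)
print(odds_ratio, p)
```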
One-way ANOVA
A one-way analysis of variance (ANOVA) is used when you have a categorical independent variable (with two or more categories) and a normally distributed interval dependent variable and you wish to test for differences in the means of the dependent variable broken down by the levels of the independent variable.
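A minimal sketch with SciPy on made-up scores for three groups.

```python
from scipy import stats

# Hypothetical scores for three independent groups
low = [45, 50, 48, 52, 47]
mid = [55, 53, 58, 60, 54]
high = [62, 65, 59, 68, 64]
f_stat, p = stats.f_oneway(low, mid, high)
print(f_stat, p)
```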
Kruskal-Wallis test
The Kruskal-Wallis test is used when you have one independent variable with two or more levels and an ordinal dependent variable. In other words, it is the non-parametric version of the one-way ANOVA and a generalization of the Mann-Whitney test, since it accommodates more than two groups.
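A sketch with SciPy on made-up ordinal scores for three groups.

```python
from scipy import stats

# Hypothetical ordinal scores for three independent groups
low = [2, 3, 1, 2, 3]
mid = [3, 4, 3, 5, 4]
high = [4, 5, 5, 4, 5]
h_stat, p = stats.kruskal(low, mid, high)
print(h_stat, p)
```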
Paired t-test
A paired (samples) t-test is used when you have two related observations (i.e., two observations per subject) and you want to see if the means on these two normally distributed interval variables differ from one another.
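A minimal sketch with SciPy on made-up pre/post measurements for the same subjects.

```python
from scipy import stats

# Hypothetical pre/post measurements on the same subjects
pre = [72, 80, 65, 78, 85, 70, 75, 82]
post = [75, 83, 70, 77, 90, 74, 78, 85]
result = stats.ttest_rel(pre, post)
print(result.statistic, result.pvalue)
```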
Wilcoxon signed rank sum test
The Wilcoxon signed rank sum test is the non-parametric version of a paired samples t-test. You use the Wilcoxon signed rank sum test when you do not wish to assume that the difference between the two variables is interval and normally distributed (but you do assume the difference is ordinal).
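The same made-up paired data, analyzed with SciPy's wilcoxon instead of a paired t-test.

```python
from scipy import stats

# Hypothetical paired observations
pre = [72, 80, 65, 78, 85, 70, 75, 82]
post = [75, 83, 70, 77, 90, 74, 78, 85]
result = stats.wilcoxon(pre, post)
print(result.statistic, result.pvalue)
```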
One-way repeated measures ANOVA
You would perform a one-way repeated measures analysis of variance if you had one categorical independent variable and a normally distributed interval dependent variable that was repeated at least twice for each subject. This is the equivalent of the paired samples t-test, but allows for two or more levels of the categorical variable. This tests whether the mean of the dependent variable differs by the categorical variable.
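A sketch with statsmodels' AnovaRM, assuming a balanced design in which every subject is measured at every level; the data frame below is simulated purely for illustration.

```python
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(0)
subjects, times = 20, ["t1", "t2", "t3"]
# Simulated long-format data: one row per subject per time point
df = pd.DataFrame({
    "subject": np.repeat(np.arange(subjects), len(times)),
    "time": np.tile(times, subjects),
    "score": rng.normal(50, 10, subjects * len(times)),
})
print(AnovaRM(df, depvar="score", subject="subject", within=["time"]).fit())
```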
Factorial ANOVA
A factorial ANOVA has two or more categorical independent variables (with or without their interactions) and a single normally distributed interval dependent variable.
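A sketch with statsmodels' formula interface on simulated data; the gender and program factors and their values are made up.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(0)
n = 120
df = pd.DataFrame({
    "gender": rng.choice(["female", "male"], n),
    "program": rng.choice(["academic", "general", "vocational"], n),
    "score": rng.normal(52, 9, n),
})
# Main effects plus the interaction; use typ=2 for Type II sums of squares
model = smf.ols("score ~ C(gender) * C(program)", data=df).fit()
print(anova_lm(model, typ=2))
```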
Friedman test
You perform a Friedman test when you have one within-subjects independent variable with two or more levels and a dependent variable that is not interval and normally distributed (but at least ordinal). For example, we could use this test to determine whether there is a difference among students' reading, writing, and math scores. The null hypothesis is that the distributions of the ranks of each type of score (i.e., reading, writing, and math) are the same.
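Continuing that example, a sketch with SciPy's friedmanchisquare on made-up scores for the same students.

```python
from scipy import stats

# Hypothetical reading, writing, and math scores for the same students
read = [52, 48, 57, 61, 45, 50, 55, 49]
write = [54, 50, 55, 65, 47, 52, 58, 50]
math = [49, 47, 60, 63, 44, 51, 53, 48]
chi2, p = stats.friedmanchisquare(read, write, math)
print(chi2, p)
```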
Factorial logistic regression
A factorial logistic regression is used when you have two or more categorical independent variables and a dichotomous dependent variable.
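A sketch with statsmodels on simulated data; the factors (female, program) and the binary outcome are hypothetical.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({
    "female": rng.integers(0, 2, n),
    "program": rng.choice(["academic", "vocational"], n),
    "hi_write": rng.integers(0, 2, n),  # dichotomous outcome
})
# Two categorical predictors plus their interaction
model = smf.logit("hi_write ~ C(female) * C(program)", data=df).fit()
print(model.summary())
```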
Correlation
A correlation is useful when you want to see the relationship between two (or more) normally distributed interval variables. By squaring the correlation and then multiplying by 100, you can determine what percentage of the variability is shared.
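A sketch with SciPy's pearsonr on made-up scores, including the shared-variability calculation described above.

```python
from scipy import stats

# Hypothetical reading and writing scores
read = [52, 48, 57, 61, 45, 50, 55, 49, 63, 47]
write = [54, 50, 55, 65, 47, 52, 58, 50, 60, 46]
r, p = stats.pearsonr(read, write)
print(r, p, r ** 2 * 100)  # correlation, p-value, percent of shared variability
```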
Simple linear regression
Simple linear regression allows us to look at the linear relationship between one normally distributed interval predictor and one normally distributed interval outcome variable.
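A minimal sketch with SciPy's linregress on made-up predictor and outcome values.

```python
from scipy import stats

# Hypothetical predictor and outcome values
x = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0]
y = [2.1, 2.9, 3.8, 5.2, 5.9, 7.1, 7.8, 9.2]
result = stats.linregress(x, y)
print(result.slope, result.intercept, result.rvalue, result.pvalue)
```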
Non-parametric correlation
A Spearman correlation is used when one or both of the variables are not assumed to be normally distributed and interval (but are assumed to be at least ordinal). The values of the variables are converted to ranks and then correlated.
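A sketch with SciPy's spearmanr on made-up values.

```python
from scipy import stats

# Hypothetical scores that are not assumed to be normally distributed
x = [1, 2, 3, 4, 5, 6, 7, 8]
y = [2, 1, 4, 3, 6, 5, 8, 7]
rho, p = stats.spearmanr(x, y)
print(rho, p)
```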
Simple logistic regression
Simple logistic regression is used when you have a single predictor variable and a dichotomous outcome variable; logistic regression assumes that the outcome variable is binary (i.e., coded as 0 and 1).
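A sketch with statsmodels, using a simulated score predictor and a simulated binary outcome.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({"score": rng.normal(50, 10, n)})
# Hypothetical binary outcome whose probability rises with the score
df["passed"] = rng.binomial(1, 1 / (1 + np.exp(-(df["score"] - 50) / 10)))
model = smf.logit("passed ~ score", data=df).fit()
print(model.summary())
```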
Multiple regression
Multiple regression is very similar to simple regression, except that in multiple regression you have more than one predictor variable in the equation.
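A sketch with statsmodels on simulated data with two predictors; the coefficients used to generate y are arbitrary.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 100
df = pd.DataFrame({"x1": rng.normal(size=n), "x2": rng.normal(size=n)})
df["y"] = 1.5 * df["x1"] - 0.8 * df["x2"] + rng.normal(size=n)
model = smf.ols("y ~ x1 + x2", data=df).fit()
print(model.summary())
```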
Analysis of covariance
Analysis of covariance is like ANOVA, except that in addition to the categorical predictors you also have continuous predictors.
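A sketch with statsmodels: a categorical group factor plus a continuous covariate, on simulated data.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(0)
n = 90
df = pd.DataFrame({
    "group": rng.choice(["a", "b", "c"], n),  # categorical predictor
    "age": rng.normal(30, 5, n),              # continuous covariate
    "score": rng.normal(50, 10, n),
})
model = smf.ols("score ~ C(group) + age", data=df).fit()
print(anova_lm(model, typ=2))
```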
Multiple logistic regression
Multiple logistic regression is like simple logistic regression, except that there are two or more predictors. The predictors can be interval variables or dummy variables, but cannot be categorical variables. If you have categorical predictors, they should be coded into one or more dummy variables.
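A sketch with statsmodels using an interval predictor and a dummy-coded predictor, both simulated.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({
    "gre": rng.normal(500, 100, n),
    "female": rng.integers(0, 2, n),  # dummy-coded predictor
    "admit": rng.integers(0, 2, n),   # dichotomous outcome
})
model = smf.logit("admit ~ gre + female", data=df).fit()
print(model.summary())
```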
Discriminant analysis
Discriminant analysis is used when you have one or more normally distributed interval independent variables and a categorical dependent variable. It is a multivariate technique that considers the latent dimensions in the independent variables for predicting group membership in the categorical dependent variable.
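A sketch with scikit-learn's LinearDiscriminantAnalysis on simulated interval predictors and a made-up three-level group variable.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
X = rng.normal(size=(150, 3))              # interval independent variables
y = rng.choice(["a", "b", "c"], size=150)  # categorical dependent variable
lda = LinearDiscriminantAnalysis()
lda.fit(X, y)
# Predicted group membership for the first few cases,
# and the variance explained by each discriminant dimension
print(lda.predict(X[:5]), lda.explained_variance_ratio_)
```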
One-way MANOVA
MANOVA (multivariate analysis of variance) is like ANOVA, except that there are two or more dependent variables. In a one-way MANOVA, there is one categorical independent variable and two or more dependent variables.
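A sketch with statsmodels' MANOVA on simulated data: one categorical factor and two interval dependent variables.

```python
import numpy as np
import pandas as pd
from statsmodels.multivariate.manova import MANOVA

rng = np.random.default_rng(0)
n = 90
df = pd.DataFrame({
    "group": rng.choice(["a", "b", "c"], n),
    "read": rng.normal(50, 10, n),
    "write": rng.normal(52, 9, n),
})
manova = MANOVA.from_formula("read + write ~ C(group)", data=df)
print(manova.mv_test())  # Wilks' lambda, Pillai's trace, etc.
```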
Multivariate multiple regression
Multivariate multiple regression is used when you have two or more variables that are to be predicted from two or more predictor variables.
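One simple sketch fits both outcomes at once with scikit-learn's multi-output LinearRegression, which gives the same coefficient estimates as running a separate OLS regression for each outcome; the data are simulated.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))                   # two predictor variables
B = np.array([[1.0, 0.5], [-0.5, 2.0]])         # arbitrary true coefficients
Y = X @ B + rng.normal(size=(100, 2))           # two outcome variables
model = LinearRegression().fit(X, Y)
print(model.coef_, model.intercept_)            # one coefficient row and intercept per outcome
```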
Canonical correlation
Canonical correlation is a multivariate technique used to examine the relationship between two groups of variables. For each set of variables, it creates latent variables and looks at the relationships among the latent variables. It assumes that all variables in the model are interval and normally distributed.
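A sketch with scikit-learn's CCA on two simulated sets of variables; the correlation of the first pair of canonical variates is computed directly from the scores.

```python
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))  # first set of variables
Y = rng.normal(size=(100, 2))  # second set of variables
cca = CCA(n_components=2)
X_c, Y_c = cca.fit_transform(X, Y)
# Correlation between the first pair of canonical variates
print(np.corrcoef(X_c[:, 0], Y_c[:, 0])[0, 1])
```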
Factor analysis
Factor analysis is a form of exploratory multivariate analysis that is used to either reduce the number of variables in a model or to detect relationships among variables. All variables involved in the factor analysis need to be interval and are assumed to be normally distributed. The goal of the analysis is to try to identify factors which underlie the variables. There may be fewer factors than variables, but there may not be more factors than variables.
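A sketch with scikit-learn's FactorAnalysis, asking for two latent factors behind six simulated observed variables.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 6))        # six observed interval variables
fa = FactorAnalysis(n_components=2)  # try to explain them with two latent factors
fa.fit(X)
print(fa.components_)                # loadings of each variable on each factor
```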