In applied investigations, one is often interested in comparing some characteristic (such as the mean, the variance or a measure of association between two characters) of a group with a specified value, or in comparing two or more groups with regard to the characteristic. For instance, one may wish to compare two varieties of wheat with regard to the mean yield per hectare or to know if the genetic fraction of the total variation in a strain is more than a given value or to compare different lines of a crop in respect of variation between plants within lines. In making such comparisons one cannot rely on the mere numerical magnitudes of the index of comparison such as the mean, variance or measure of association. This is because each group is represented only by a sample of observations and if another sample were drawn the numerical value would change. This variation between samples from the same population can at best be reduced in a well-designed controlled experiment but can never be eliminated. One is forced to draw inference in the presence of the sampling fluctuations which affect the observed differences between groups, clouding the real differences. Statistical science provides an objective procedure for distinguishing whether the observed difference connotes any real difference among groups. Such a procedure is called a test of significance.
The test of significance is a method of making due allowance for the sampling fluctuations affecting the results of experiments or observations. The fact that the results of biological experiments are affected by a considerable amount of uncontrolled variation makes such tests necessary. These tests enable us to decide, on the basis of the sample results, whether
i) the deviation between the observed sample statistic and the hypothetical parameter value,
or
ii) the deviation between two sample statistics,
is significant or might be attributed to chance, that is, to the fluctuations of sampling. Both situations are sketched below.
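As a minimal illustration of these two situations, the following sketch uses Python with scipy.stats and invented yield figures: a one-sample t-test compares an observed sample mean with a hypothesized parameter value, while a two-sample t-test compares the statistics of two independent samples. The data and the hypothesized mean of 20 q/ha are assumptions made purely for illustration.

```python
import numpy as np
from scipy import stats

# Hypothetical plot yields (quintals per hectare) -- illustrative values only
variety_a = np.array([19.5, 21.2, 20.8, 18.9, 22.1, 20.4, 19.8, 21.0])
variety_b = np.array([22.3, 23.1, 21.9, 24.0, 22.8, 23.5, 21.7, 22.6])

# (i) Deviation of an observed sample statistic from a hypothetical parameter value:
#     does the mean yield of variety A differ from a hypothesized mean of 20 q/ha?
t1, p1 = stats.ttest_1samp(variety_a, popmean=20.0)

# (ii) Deviation between two sample statistics:
#      do the mean yields of varieties A and B differ from each other?
t2, p2 = stats.ttest_ind(variety_a, variety_b)

print(f"one-sample test:  t = {t1:.3f}, p = {p1:.4f}")
print(f"two-sample test:  t = {t2:.3f}, p = {p2:.4f}")
```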
For applying the tests of significance, we first set up a hypothesis, that is, a definite statement about the population parameters. In all such situations we set up an exact hypothesis, such as: the treatments or variates in question do not differ in respect of the mean value, the variability, or the association between the specified characters, as the case may be. We then follow an objective procedure of analysis of the data which leads to a conclusion of one of two kinds:
i) reject the hypothesis, or
ii) fail to reject the hypothesis.
For applying any test of significance, the following steps should be followed (a worked sketch in code follows the list):
i) Identify the variables to be analyzed and the groups to be compared
ii) State the null hypothesis
iii) Choose an appropriate alternative hypothesis
iv) Set alpha (level of significance)
v) Choose a test statistic
vi) Compute the test statistic
vii) Find out the p-value
viii) Interpret the p-value
ix) Compute the power of the test, if required.
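The sketch below walks through these steps for a comparison of two group means using a two-sample t-test in Python. The group data, the choice of alpha = 0.05, and the use of Student's t-test are assumptions made for illustration only.

```python
import numpy as np
from scipy import stats

# Step i: variables and groups -- yield (q/ha) of two hypothetical wheat varieties
group_1 = np.array([20.1, 19.4, 21.3, 20.7, 18.8, 21.0, 19.9, 20.5])
group_2 = np.array([21.9, 22.4, 20.8, 23.0, 22.1, 21.5, 22.7, 21.8])

# Steps ii-iii: H0: the two population means are equal;
#               H1: the two population means differ (two-sided alternative)
# Step iv: level of significance
alpha = 0.05

# Steps v-vii: the test statistic is Student's t for two independent samples;
#              scipy returns both the statistic and the two-sided p-value
t_stat, p_value = stats.ttest_ind(group_1, group_2)

# Step viii: interpret the p-value against alpha
if p_value < alpha:
    print(f"t = {t_stat:.3f}, p = {p_value:.4f}: difference is statistically significant")
else:
    print(f"t = {t_stat:.3f}, p = {p_value:.4f}: difference is not statistically significant")
```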
Computing and interpreting the p-value
When the data are subjected to significance testing, the resulting value is called a test statistic. This can be a Z-, t-, chi-square or F-statistic, among others, depending on the test used. The statistic is used to find the p-value from tables (statistical software can calculate the p-value automatically). If the p-value is less than the cut-off value (the level of significance, i.e., alpha), the difference between the groups is considered statistically significant. When p < 0.05, the probability of obtaining the observed difference between groups purely by chance (when there is in fact no difference) is less than 5%. If p > 0.05, the difference is considered statistically non-significant, and it is concluded either that there is no difference between the groups or that the difference was not detected.
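As an illustration of the step that the tables (or the software) perform, the sketch below converts a test statistic into a two-sided p-value using the survival function of the t-distribution. The statistic of 2.30 and the 14 degrees of freedom are placeholder values assumed for illustration.

```python
from scipy import stats

# Assumed (illustrative) values: a t-statistic of 2.30 with 14 degrees of freedom
t_stat, df = 2.30, 14
alpha = 0.05

# Two-sided p-value: probability, under the null hypothesis, of a statistic
# at least as extreme as the one observed (both tails of the t-distribution)
p_value = 2 * stats.t.sf(abs(t_stat), df)

print(f"p = {p_value:.4f}")
print("significant at alpha = 0.05" if p_value < alpha else "not significant at alpha = 0.05")
```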
A non-significant result can be due to one of two reasons:
1. There is really no difference between the groups.
2. The study is not powerful enough to detect the difference.
Hence, one should calculate the power of the test before concluding that there is no difference. If the power is inadequate (<80%), the conclusion is "the study did not detect the difference" rather than "there is no difference between the groups".
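A minimal sketch of such a power calculation, assuming a two-sample t-test and the statsmodels package, is given below. The effect size (expressed as Cohen's d) and the sample size per group are illustrative assumptions, not values taken from any particular study.

```python
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Assumed effect size (Cohen's d), sample size per group, and alpha -- illustrative only
effect_size, n_per_group, alpha = 0.5, 15, 0.05

# Achieved power of the two-sample t-test under these assumptions
power = analysis.power(effect_size=effect_size, nobs1=n_per_group, alpha=alpha, ratio=1.0)
print(f"achieved power = {power:.2f}")

# If the power falls below 0.80, find the sample size needed per group to reach 80% power
if power < 0.80:
    n_needed = analysis.solve_power(effect_size=effect_size, alpha=alpha, power=0.80, ratio=1.0)
    print(f"sample size per group needed for 80% power: {n_needed:.0f}")
```

With these assumed values the power comes out well below 80%, so the appropriate conclusion would be that the study did not detect the difference, not that no difference exists.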