**Understanding the Results**

The null hypothesis in probability and statistics is the starting assumption that nothing other than random chance is operating to create the observed effect that you see in a particular set of data. In essence, it assumes that the measured effects are the same across the independent conditions being tested: there are no differences or relationships between the independent variables and the dependent outcomes, and the groups are treated as equal until proven otherwise.

The null hypothesis is rejected if your data set is unlikely to have been produced by chance. The significance of the results is described by the confidence level defined for the test (via the acceptable error, or "alpha level"). For example, it is harder to reject the null hypothesis at 99% confidence (alpha 0.01) than at 95% confidence (alpha 0.05).

Even if the null hypothesis is rejected at a certain confidence level, no alternative hypothesis is proven thereby. The only conclusion you can draw is that some effect is going on. But you do not know its cause. If the experiment was designed properly, the only things that changed were the experimental conditions. So it is logical to attribute a causal effect to them.

What if the null hypothesis is not rejected? This simply means that you did not find any statistically significant differences. That is not the same as stating that there was no difference. Remember, accepting the null hypothesis merely means that the observed differences might have been due simply to random chance, not that they must have been.
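The logic of "unlikely to have been produced by chance" can be sketched with a permutation test. The sketch below uses only the Python standard library; the two groups of measurements are hypothetical, and the function name is ours:

```python
import random
import statistics

def permutation_test(group_a, group_b, n_permutations=10_000, seed=0):
    """Estimate the probability that a mean difference at least as large
    as the observed one arises purely by chance (the null hypothesis)."""
    rng = random.Random(seed)
    observed = abs(statistics.mean(group_a) - statistics.mean(group_b))
    pooled = list(group_a) + list(group_b)
    n_a = len(group_a)
    extreme = 0
    for _ in range(n_permutations):
        rng.shuffle(pooled)  # relabel the observations at random
        diff = abs(statistics.mean(pooled[:n_a]) -
                   statistics.mean(pooled[n_a:]))
        if diff >= observed:
            extreme += 1
    return extreme / n_permutations  # two-sided p-value

# Hypothetical measurements under two experimental conditions
a = [5.1, 4.9, 5.3, 5.0, 5.2, 5.4]
b = [4.6, 4.8, 4.5, 4.7, 4.9, 4.6]
p = permutation_test(a, b)
# Reject H0 at 95% confidence if p < 0.05; at 99% confidence only if p < 0.01
print(f"p = {p:.4f}, reject at alpha 0.05: {p < 0.05}")
```

A small p-value means that random relabelings of the data almost never reproduce a difference as large as the observed one, which is exactly the sense in which the null hypothesis becomes implausible.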


**Some concepts involved in testing of hypotheses**
In applied investigations or in experimental research, one may wish to estimate the yield of a new hybrid line of corn, but the ultimate purpose will involve some use of this estimate. One may wish, for example, to compare the yield of the new line with that of a standard line and perhaps recommend that the new line replace the standard line if it appears superior. This is the common situation in research. One may wish to determine whether a new method of sealing light bulbs will increase the life of the bulbs, whether a new germicide is more effective in treating a certain infection than a standard germicide, whether one method of preserving foods is better than another so far as vitamin retention is concerned, or which among six available varieties of a crop is best in terms of yield per hectare.

Using the light bulb example as an illustration, let us suppose that the average life of bulbs made under a standard manufacturing procedure is about 1400 hours. It is desired to test a new procedure for manufacturing the bulbs. Here, we are dealing with two populations of light bulbs: those made by the standard process and those made by the proposed process. From past investigations based on sample tests, it is known that the mean of the first population is 1400 hours. The question is whether the mean of the second population is greater than or less than 1400 hours. This we have to decide on the basis of observations taken from a sample of bulbs made by the second process.

In making comparisons of the above type, one cannot rely on the mere numerical magnitudes of the index of comparison, such as the mean or variance. This is because each group is represented only by a sample of observations, and if another sample were drawn, the numerical value would change. This variation between samples from the same population can at best be reduced in a well-designed experiment, but it can never be eliminated. One is forced to draw inferences in the presence of sampling fluctuations, which affect the observed differences between groups and cloud the real differences. Hence, we have to devise a statistical procedure that can test whether those differences are due to chance factors or really due to treatment.
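This sampling fluctuation is easy to see by simulation. The sketch below draws several samples of the same size from one fixed (hypothetical) population of bulb lifetimes; the sample means differ from one another even though nothing about the population has changed:

```python
import random
import statistics

rng = random.Random(42)
# Hypothetical population of bulb lifetimes (mean 1400 h, sd 100 h)
population = [rng.gauss(1400, 100) for _ in range(10_000)]

# Repeated samples of size 30 from the SAME population
sample_means = [
    statistics.mean(rng.sample(population, 30)) for _ in range(5)
]
print([round(m, 1) for m in sample_means])
spread = max(sample_means) - min(sample_means)
print(f"spread between sample means: {spread:.1f} hours")
```

The spread between the sample means is the sampling fluctuation that any test procedure must contend with.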

Tests of hypothesis are statistical procedures that enable us to decide whether observed differences are attributable to chance fluctuations of sampling or to real effects.

**Sample space:** The set of all possible outcomes of an experiment is called the sample space, denoted by S. For example, in an experiment of tossing two coins simultaneously, the sample space is S = {HH, HT, TH, TT}, where 'H' denotes a head and 'T' a tail. In testing of hypothesis, we are concerned with drawing inferences about the population based on a random sample. If there are 'N' units in a population and we draw a sample of size 'n', then the set of all possible samples of size 'n' is the sample space, and any sample x = (x₁, x₂, …, xₙ) is a point of this sample space.

**Parameter:** A function of the population values is known as a parameter. For example, the population mean (μ) and the population variance (σ²).

**Statistic:** A function of the sample values, say t(x₁, x₂, …, xₙ), is called a statistic. For example, the sample mean (x̄) and the sample variance (s²), where x̄ = (1/n) Σ xᵢ and s² = (1/(n−1)) Σ (xᵢ − x̄)².
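The distinction between a parameter (computed from all N population values) and a statistic (computed from a sample of n values) can be made concrete numerically. All values below are hypothetical:

```python
import random
import statistics

rng = random.Random(1)
# Hypothetical finite population of N = 1000 values
population = [rng.gauss(50, 10) for _ in range(1000)]

# Parameters: functions of all N population values
mu = statistics.mean(population)           # population mean
sigma2 = statistics.pvariance(population)  # population variance (divides by N)

# Statistics: the same kind of functions applied to a sample of size n
sample = rng.sample(population, 25)
xbar = statistics.mean(sample)             # sample mean
s2 = statistics.variance(sample)           # sample variance (divides by n-1)

print(f"mu = {mu:.2f},      xbar = {xbar:.2f}")
print(f"sigma^2 = {sigma2:.2f}, s^2 = {s2:.2f}")
```

The parameters μ and σ² are fixed for the population; x̄ and s² would change if a different sample were drawn, which is exactly why sampling fluctuation matters in testing.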

**Statistical Hypothesis:** A statistical hypothesis is an assertion or conjecture (a tentative conclusion) either about the form or about the parameters of a distribution. For example:

i) The normal distribution has mean 20.

ii) The distribution of the process is Poisson.

iii) The effective life of a bulb is 1400 hours.

iv) A given detergent cleans better than any washing soap.

In a statistical hypothesis, all the parameters of a distribution may be specified completely or partly.

*A statistical hypothesis in which all the parameters of a distribution are completely specified is called a simple hypothesis; otherwise, it is known as a composite hypothesis.* For example, in the case of a normal population, the hypotheses
i) μ = 20, σ² = 5 (simple hypothesis)
ii) μ = 20, σ² > 1 (composite hypothesis)
iii) μ = 20 (composite hypothesis)

**Null Hypothesis:** The statistical hypothesis under sample study is called the null hypothesis. It usually asserts that the observations are the result purely of chance. It is denoted by H₀.

**Alternative Hypothesis:** In respect of every null hypothesis, it is desirable to state what is called an alternative hypothesis, which is complementary to the null hypothesis; it represents the conclusion the statistician hopes to establish. It is usually taken to mean that the observations are the result of a real effect plus chance variation. It is denoted by H₁. For example, if one wishes to compare the yield per hectare of a new line with that of a standard line, then the null hypothesis is

H₀: yield per hectare of the new line (μ₁) = yield per hectare of the standard line (μ₂)

The alternative hypothesis corresponding to H₀ can be one of the following:
i) H₁: μ₁ > μ₂ (right-tailed alternative)
ii) H₁: μ₁ < μ₂ (left-tailed alternative)
iii) H₁: μ₁ ≠ μ₂ (two-tailed alternative)
(i) and (ii) are called one-tailed tests and (iii) is a two-tailed test. Whether one sets up a one-tailed or a two-tailed test depends on the conclusion to be drawn if H₀ is rejected. The location of the critical region can be decided only after H₁ has been stated. For example, in testing a new drug, one sets up the hypothesis that it is no better than similar drugs now on the market and tests this against the alternative hypothesis that the new drug is superior. Such an alternative hypothesis results in a one-tailed test (right-tailed alternative).
If we wish to compare a new teaching technique with the conventional classroom procedure, the alternative hypothesis should allow for the new approach to be either inferior or superior to the conventional procedure. Hence the test is two-tailed.
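The practical consequence of choosing one tail or two can be seen by computing the corresponding p-values for a z statistic. The sketch below uses only the standard library's error function for the normal CDF; the value z = 1.8 is purely illustrative:

```python
import math

def normal_cdf(z):
    """Standard normal CDF via the error function (stdlib only)."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def p_values(z):
    """Return (right-tailed, left-tailed, two-tailed) p-values for a z statistic."""
    right = 1.0 - normal_cdf(z)             # H1: mu1 > mu2
    left = normal_cdf(z)                    # H1: mu1 < mu2
    two = 2.0 * (1.0 - normal_cdf(abs(z)))  # H1: mu1 != mu2
    return right, left, two

right, left, two = p_values(1.8)
print(f"right = {right:.4f}, left = {left:.4f}, two = {two:.4f}")
# The same z statistic can be significant one-tailed but not two-tailed
# at alpha = 0.05, because the two-tailed test splits alpha between the tails.
```

This is why the choice between a one-tailed and a two-tailed alternative must be made before the data are examined, not after.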

**Critical Region:** The region of the sample space such that the null hypothesis is rejected whenever the sample point falls in it is called the critical region, denoted by C.

Suppose the test is based on a sample of size 2. Then the outcome set, or sample space, is the first quadrant of a two-dimensional space, and a test criterion enables us to separate this outcome set into two complementary subsets, C and C̄. If the sample point falls in the subset C, H₀ is rejected; otherwise, H₀ is accepted.
The terms acceptance and rejection are, however, not to be taken in their literal senses. Acceptance of H₀ does not mean that H₀ has been proved true; it means only that, so far as the given observations are concerned, there is no evidence to believe otherwise. Similarly, rejection of H₀ does not disprove the hypothesis; it merely means that H₀ does not look plausible in the light of the given observations.
It should now be clear that, in order to test the null hypothesis, we study a sample instead of the entire population. Hence, whatever decision rule we employ, there is a chance of committing errors in deciding to reject or accept the hypothesis. The four possible situations that can arise in any test procedure are given in the following table.

| Decision | H₀ is true | H₀ is false |
|---|---|---|
| Accept H₀ | Correct decision | Type II error |
| Reject H₀ | Type I error | Correct decision |

From the table, it is clear that the errors committed in making decisions are of two types.

Type I error: Reject H₀ when H₀ is true.

Type II error: Accept (fail to reject) H₀ when H₀ is false.

For example, consider a judge who has to decide whether a person has committed a crime. The statistical hypotheses in this case are:

H₀: The person is innocent; H₁: The person is guilty.

In this situation, the two types of errors which the judge may commit are:

Type I error: Innocent person is found guilty and punished.

Type II error: A guilty person is set free.

Since it is more serious to punish an innocent person than to set a criminal free, the Type I error is considered more serious than the Type II error.

Probabilities of the errors:

Probability of Type I error = P(Reject H₀ | H₀ is true) = α

Probability of Type II error = P(Accept H₀ | H₁ is true) = β
In quality control terminology, Type I error amounts to rejecting a lot when it is good and Type II error may be regarded as accepting a lot when it is bad.

P (Reject a lot when it is good) = α (producer’s risk)

P (Accept a lot when it is bad) = β (consumer's risk)

**Level of significance:** The probability of Type I error (α) is called the level of significance. It is also known as the size of the critical region.

A 5% level of significance is often taken as a rough line of demarcation, meaning that deviations due to sampling fluctuations alone will be interpreted as real ones in 5% of cases. Inferences about the population based on samples are therefore subject to some degree of uncertainty. This uncertainty cannot be removed completely, but it can be reduced by choosing a smaller level of significance, such as 1%, in which case the chance of interpreting a deviation due to sampling fluctuations as a real one is only one in 100.
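That α really is the long-run rate of Type I errors can be checked by simulation. The sketch below repeatedly tests a true null hypothesis with a two-tailed z-test; the population parameters other than the mean of 1400 hours from the bulb example are hypothetical:

```python
import math
import random

def z_test_rejects(sample, mu0, sigma, z_crit=1.96):
    """Two-tailed z-test with known sigma: reject H0 if |z| > z_crit
    (1.96 is the 5% two-tailed cutoff for the standard normal)."""
    n = len(sample)
    z = (sum(sample) / n - mu0) / (sigma / math.sqrt(n))
    return abs(z) > z_crit

rng = random.Random(7)
trials = 5_000
# H0 is TRUE in every trial: the data really come from a mean of 1400
rejections = sum(
    z_test_rejects([rng.gauss(1400, 100) for _ in range(30)], 1400, 100)
    for _ in range(trials)
)
rate = rejections / trials
print(f"empirical Type I error rate: {rate:.3f}")  # should be close to 0.05
```

Even though every rejection here is a mistake, the procedure makes such mistakes at a controlled, predictable rate, which is precisely what "level of significance" means.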

**Power function of a test:** The probability of rejecting H₀ when H₁ is true is called the power function of the test.

Power function = P(Reject H₀ | H₁ is true) = 1 − P(Accept H₀ | H₁ is true) = 1 − β.
The value of this function plays the same role in hypothesis testing as the mean square error plays in estimation. It is usually used as our standard in assessing the goodness of a test or in comparing two tests of the same size. The value of this function at a particular point is called the power of the test at that point.
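Power can also be estimated by simulation, which shows how it grows as the true mean moves away from the hypothesised value. In the sketch below, the hypothesised mean 1400 comes from the bulb example, while the alternative means, sigma, and sample size are hypothetical:

```python
import math
import random

def reject(sample, mu0, sigma, z_crit=1.96):
    """Two-tailed z-test with known sigma at alpha = 0.05."""
    n = len(sample)
    z = (sum(sample) / n - mu0) / (sigma / math.sqrt(n))
    return abs(z) > z_crit

def power(true_mu, mu0=1400, sigma=100, n=30, trials=2_000, seed=3):
    """Estimate P(reject H0 | true mean = true_mu) by simulation."""
    rng = random.Random(seed)
    hits = sum(
        reject([rng.gauss(true_mu, sigma) for _ in range(n)], mu0, sigma)
        for _ in range(trials)
    )
    return hits / trials

# Power rises from about alpha (when H0 is true) toward 1 as the
# true mean moves further from the hypothesised mu0 = 1400.
powers = [power(mu) for mu in (1400, 1420, 1450, 1480)]
print([round(p, 3) for p in powers])
```

At the null value itself the "power" is just the size α of the test; far from the null, the test almost always rejects, which is what a good test should do.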

In testing of hypothesis, the ideal procedure would be to minimize the probabilities of both types of error. Unfortunately, for a fixed sample size n, both probabilities cannot be controlled simultaneously: a test which minimizes one type of error maximizes the other. For example, a critical region which makes the probability of Type I error zero will be of the form "always accept H₀", and then the probability of Type II error will be one. It is, therefore, desirable to fix the probability of one of the errors and choose a critical region which minimizes the probability of the other. As the Type I error is considered more serious than the Type II error, we fix the probability of Type I error (α) and minimize the probability of Type II error (β), thereby maximizing the power of the test.

**Steps in solving a testing of hypothesis problem:**

i) Acquire explicit knowledge of the nature of the population distribution and of the parameter of interest, i.e., the parameter about which the hypothesis is set up.

ii) Set up the null hypothesis H₀ and the alternative hypothesis H₁ in terms of the range of parameter values each one embodies.

iii) Choose a suitable test statistic, say t = t(x₁, x₂, …, xₙ), which will best discriminate between H₀ and H₁.

iv) Partition the sample space (the set of possible values of the test statistic t) into two disjoint and complementary subsets C and C̄ = A (say), and frame the test as:

(a) Reject H₀ if the value of t ∈ C

(b) Accept H₀ if the value of t ∈ A
After framing the above test, obtain the experimental sample observations, compute the test statistic and take action accordingly.
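The four steps can be traced end-to-end on the light-bulb example. Only the standard mean of 1400 hours comes from the text; the known sigma, the sample size, and the simulated new-process lifetimes are all hypothetical:

```python
import math
import random

# Step i)  Population: bulb lifetimes, assumed normal with known
#          sigma = 100 hours (hypothetical); standard mean is 1400 hours.
mu0, sigma = 1400, 100

# Step ii) H0: mu = 1400   vs   H1: mu != 1400 (two-tailed alternative)

# Step iii) Test statistic: z = (xbar - mu0) / (sigma / sqrt(n))
rng = random.Random(11)
sample = [rng.gauss(1480, sigma) for _ in range(40)]  # hypothetical new-process bulbs
n = len(sample)
xbar = sum(sample) / n
z = (xbar - mu0) / (sigma / math.sqrt(n))

# Step iv) Critical region C = {|z| > 1.96} at alpha = 0.05;
#          its complement A = {|z| <= 1.96} is the acceptance region.
z_crit = 1.96
decision = "reject H0" if abs(z) > z_crit else "do not reject H0"
print(f"xbar = {xbar:.1f}, z = {z:.2f} -> {decision}")
```

The final action (recommending the new process or not) then follows from the decision, exactly as the steps above prescribe.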