Statistics – Factorial

Factorial is a function applied to the natural numbers (by convention, 0! = 1). The symbol for the factorial function is an exclamation mark after a number, like this: 2!

Formula

${n! = 1 \times 2 \times 3 \times \dots \times n}$

Where −

${n!}$ = factorial of n
${n}$ = the number whose factorial is being computed

Example

Problem Statement:

Calculate the factorial of 5, i.e. 5!.

Solution:

Multiply all the whole numbers up to the number considered.

${5! = 5 \times 4 \times 3 \times 2 \times 1 = 120}$
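A minimal sketch of the definition above in Python (the function name and structure are illustrative, not part of the original tutorial):

```python
# Factorial: multiply all whole numbers from 1 up to n; 0! is defined as 1.
def factorial(n: int) -> int:
    if n < 0:
        raise ValueError("factorial is not defined for negative numbers")
    result = 1
    for k in range(2, n + 1):
        result *= k
    return result

print(factorial(5))  # 120, matching the worked example
```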
Statistics – Notations

The following lists show the usage of various symbols used in Statistics.

Capitalization

Generally, lower-case letters represent sample attributes and capital letters represent population attributes.

$ P $ – population proportion.
$ p $ – sample proportion.
$ X $ – set of population elements.
$ x $ – set of sample elements.
$ N $ – population size.
$ n $ – sample size.

Greek vs Roman Letters

Roman letters represent sample attributes and Greek letters represent population attributes.

$ \mu $ – population mean.
$ \bar x $ – sample mean.
$ \sigma $ – standard deviation of a population.
$ s $ – standard deviation of a sample.

Population-specific Parameters

The following symbols represent population-specific attributes.

$ \mu $ – population mean.
$ \sigma $ – standard deviation of a population.
$ \sigma^2 $ – variance of a population.
$ P $ – proportion of population elements having a particular attribute.
$ Q $ – proportion of population elements not having that attribute.
$ \rho $ – population correlation coefficient, based on all of the elements of a population.
$ N $ – number of elements in a population.

Sample-specific Parameters

The following symbols represent sample-specific attributes.

$ \bar x $ – sample mean.
$ s $ – standard deviation of a sample.
$ s^2 $ – variance of a sample.
$ p $ – proportion of sample elements having a particular attribute.
$ q $ – proportion of sample elements not having that attribute.
$ r $ – sample correlation coefficient, based on all of the elements of a sample.
$ n $ – number of elements in a sample.

Linear Regression

$ \beta_0 $ – intercept constant in a population regression line.
$ \beta_1 $ – regression coefficient in a population regression line.
$ R^2 $ – coefficient of determination.
$ b_0 $ – intercept constant in a sample regression line.
$ b_1 $ – regression coefficient in a sample regression line.
$ s_{b_1} $ – standard error of the slope of a regression line.

Probability

$ P(A) $ – probability that event A will occur.
$ P(A|B) $ – conditional probability that event A occurs, given that event B has occurred.
$ P(A') $ – probability of the complement of event A.
$ P(A \cap B) $ – probability of the intersection of events A and B.
$ P(A \cup B) $ – probability of the union of events A and B.
$ E(X) $ – expected value of random variable X.
$ b(x; n, P) $ – binomial probability.
$ b^*(x; n, P) $ – negative binomial probability.
$ g(x; P) $ – geometric probability.
$ h(x; N, n, k) $ – hypergeometric probability.

Permutation/Combination

$ n! $ – factorial value of n.
$ ^{n}P_r $ – number of permutations of n things taken r at a time.
$ ^{n}C_r $ – number of combinations of n things taken r at a time.

Set

$ A \cap B $ – intersection of sets A and B.
$ A \cup B $ – union of sets A and B.
$ \{ A, B, C \} $ – set of elements consisting of A, B, and C.
$ \emptyset $ – null or empty set.

Hypothesis Testing

$ H_0 $ – null hypothesis.
$ H_1 $ – alternative hypothesis.
$ \alpha $ – significance level.
$ \beta $ – probability of committing a Type II error.

Random Variables

$ Z $ or $ z $ – standardized score, also known as a z-score.
$ z_{\alpha} $ – standardized score that has a cumulative probability equal to $ 1 - \alpha $.
$ t_{\alpha} $ – t statistic that has a cumulative probability equal to $ 1 - \alpha $.
$ f_{\alpha} $ – f statistic that has a cumulative probability equal to $ 1 - \alpha $.
$ f_{\alpha}(v_1, v_2) $ – f statistic that has a cumulative probability equal to $ 1 - \alpha $, with $ v_1 $ and $ v_2 $ degrees of freedom.
$ X^2 $ – chi-square statistic.

Summation Symbols

$ \sum $ – summation symbol, used to compute sums over a range of values.
$ \sum x $ or $ \sum x_i $ – sum of a set of n observations. Thus, $ \sum x = x_1 + x_2 + \dots + x_n $.
Statistics – Z Table

Standard Normal Probability Table

The following table shows the area under the curve to the left of a negative z-score:

z      .00    .01    .02    .03    .04    .05    .06    .07    .08    .09
-3.4   .0003  .0003  .0003  .0003  .0003  .0003  .0003  .0003  .0003  .0002
-3.3   .0005  .0005  .0005  .0004  .0004  .0004  .0004  .0004  .0004  .0003
-3.2   .0007  .0007  .0006  .0006  .0006  .0006  .0006  .0005  .0005  .0005
-3.1   .0010  .0009  .0009  .0009  .0008  .0008  .0008  .0008  .0007  .0007
-3.0   .0013  .0013  .0013  .0012  .0012  .0011  .0011  .0011  .0010  .0010
-2.9   .0019  .0018  .0018  .0017  .0016  .0016  .0015  .0015  .0014  .0014
-2.8   .0026  .0025  .0024  .0023  .0023  .0022  .0021  .0021  .0020  .0019
-2.7   .0035  .0034  .0033  .0032  .0031  .0030  .0029  .0028  .0027  .0026
-2.6   .0047  .0045  .0044  .0043  .0041  .0040  .0039  .0038  .0037  .0036
-2.5   .0062  .0060  .0059  .0057  .0055  .0054  .0052  .0051  .0049  .0048
-2.4   .0082  .0080  .0078  .0075  .0073  .0071  .0069  .0068  .0066  .0064
-2.3   .0107  .0104  .0102  .0099  .0096  .0094  .0091  .0089  .0087  .0084
-2.2   .0139  .0136  .0132  .0129  .0125  .0122  .0119  .0116  .0113  .0110
-2.1   .0179  .0174  .0170  .0166  .0162  .0158  .0154  .0150  .0146  .0143
-2.0   .0228  .0222  .0217  .0212  .0207  .0202  .0197  .0192  .0188  .0183
-1.9   .0287  .0281  .0274  .0268  .0262  .0256  .0250  .0244  .0239  .0233
-1.8   .0359  .0351  .0344  .0336  .0329  .0322  .0314  .0307  .0301  .0294
-1.7   .0446  .0436  .0427  .0418  .0409  .0401  .0392  .0384  .0375  .0367
-1.6   .0548  .0537  .0526  .0516  .0505  .0495  .0485  .0475  .0465  .0455
-1.5   .0668  .0655  .0643  .0630  .0618  .0606  .0594  .0582  .0571  .0559
-1.4   .0808  .0793  .0778  .0764  .0749  .0735  .0721  .0708  .0694  .0681
-1.3   .0968  .0951  .0934  .0918  .0901  .0885  .0869  .0853  .0838  .0823
-1.2   .1151  .1131  .1112  .1093  .1075  .1056  .1038  .1020  .1003  .0985
-1.1   .1357  .1335  .1314  .1292  .1271  .1251  .1230  .1210  .1190  .1170
-1.0   .1587  .1562  .1539  .1515  .1492  .1469  .1446  .1423  .1401  .1379
-0.9   .1841  .1814  .1788  .1762  .1736  .1711  .1685  .1660  .1635  .1611
-0.8   .2119  .2090  .2061  .2033  .2005  .1977  .1949  .1922  .1894  .1867
-0.7   .2420  .2389  .2358  .2327  .2296  .2266  .2236  .2206  .2177  .2148
-0.6   .2743  .2709  .2676  .2643  .2611  .2578  .2546  .2514  .2483  .2451
-0.5   .3085  .3050  .3015  .2981  .2946  .2912  .2877  .2843  .2810  .2776
-0.4   .3446  .3409  .3372  .3336  .3300  .3264  .3228  .3192  .3156  .3121
-0.3   .3821  .3783  .3745  .3707  .3669  .3632  .3594  .3557  .3520  .3483
-0.2   .4207  .4168  .4129  .4090  .4052  .4013  .3974  .3936  .3897  .3859
-0.1   .4602  .4562  .4522  .4483  .4443  .4404  .4364  .4325  .4286  .4247
-0.0   .5000  .4960  .4920  .4880  .4840  .4801  .4761  .4721  .4681  .4641

The following table shows the area under the curve to the left of a positive z-score:

z      .00    .01    .02    .03    .04    .05    .06    .07    .08    .09
0.0    .5000  .5040  .5080  .5120  .5160  .5199  .5239  .5279  .5319  .5359
0.1    .5398  .5438  .5478  .5517  .5557  .5596  .5636  .5675  .5714  .5753
0.2    .5793  .5832  .5871  .5910  .5948  .5987  .6026  .6064  .6103  .6141
0.3    .6179  .6217  .6255  .6293  .6331  .6368  .6406  .6443  .6480  .6517
0.4    .6554  .6591  .6628  .6664  .6700  .6736  .6772  .6808  .6844  .6879
0.5    .6915  .6950  .6985  .7019  .7054  .7088  .7123  .7157  .7190  .7224
0.6    .7257  .7291  .7324  .7357  .7389  .7422  .7454  .7486  .7517  .7549
0.7    .7580  .7611  .7642  .7673  .7704  .7734  .7764  .7794  .7823  .7852
0.8    .7881  .7910  .7939  .7967  .7995  .8023  .8051  .8078  .8106  .8133
0.9    .8159  .8186  .8212  .8238  .8264  .8289  .8315  .8340  .8365  .8389
1.0    .8413  .8438  .8461  .8485  .8508  .8531  .8554  .8577  .8599  .8621
1.1    .8643  .8665  .8686  .8708  .8729  .8749  .8770  .8790  .8810  .8830
1.2    .8849  .8869  .8888  .8907  .8925  .8944  .8962  .8980  .8997  .9015
1.3    .9032  .9049  .9066  .9082  .9099  .9115  .9131  .9147  .9162  .9177
1.4    .9192  .9207  .9222  .9236  .9251  .9265  .9279  .9292  .9306  .9319
1.5    .9332  .9345  .9357  .9370  .9382  .9394  .9406  .9418  .9429  .9441
1.6    .9452  .9463  .9474  .9484  .9495  .9505  .9515  .9525  .9535  .9545
1.7    .9554  .9564  .9573  .9582  .9591  .9599  .9608  .9616  .9625  .9633
1.8    .9641  .9649  .9656  .9664  .9671  .9678  .9686  .9693  .9699  .9706
1.9    .9713  .9719  .9726  .9732  .9738  .9744  .9750  .9756  .9761  .9767
2.0    .9772  .9778  .9783  .9788  .9793  .9798  .9803  .9808  .9812  .9817
2.1    .9821  .9826  .9830  .9834  .9838  .9842  .9846  .9850  .9854  .9857
2.2    .9861  .9864  .9868  .9871  .9875  .9878  .9881  .9884  .9887  .9890
2.3    .9893  .9896  .9898  .9901  .9904  .9906  .9909  .9911  .9913  .9916
2.4    .9918  .9920  .9922  .9925  .9927  .9929  .9931  .9932  .9934  .9936
2.5    .9938  .9940  .9941  .9943  .9945  .9946  .9948  .9949  .9951  .9952
2.6    .9953  .9955  .9956  .9957  .9959  .9960  .9961  .9962  .9963  .9964
2.7    .9965  .9966  .9967  .9968  .9969  .9970  .9971  .9972  .9973  .9974
2.8    .9974  .9975  .9976  .9977  .9977  .9978  .9979  .9979  .9980  .9981
2.9    .9981  .9982  .9982  .9983  .9984  .9984  .9985  .9985  .9986  .9986
3.0    .9987  .9987  .9987  .9988  .9988  .9989  .9989  .9989  .9990  .9990
3.1    .9990  .9991  .9991  .9991  .9992  .9992  .9992  .9992  .9993  .9993
3.2    .9993  .9993  .9994  .9994  .9994  .9994  .9994  .9995  .9995  .9995
3.3    .9995  .9995  .9995  .9996  .9996  .9996  .9996  .9996  .9996  .9997
3.4    .9997  .9997  .9997  .9997  .9997  .9997  .9997  .9997  .9997  .9998
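Any entry in the table can be reproduced programmatically; here is a quick sketch, assuming SciPy is installed:

```python
# The table entries are values of the standard normal CDF (area to the left of z).
from scipy.stats import norm

print(f"{norm.cdf(1.96):.4f}")   # 0.9750 -> row 1.9, column .06
print(f"{norm.cdf(-0.54):.4f}")  # 0.2946 -> row -0.5, column .04
```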
Statistics – Correlation Co-efficient

Correlation Co-efficient

A correlation coefficient is a statistical measure of the degree to which changes in the value of one variable predict changes in the value of another. In positively correlated variables, the values increase or decrease in tandem. In negatively correlated variables, the value of one increases as the value of the other decreases.

Correlation coefficients are expressed as values between +1 and -1. A coefficient of +1 indicates a perfect positive correlation: a change in the value of one variable predicts a change in the same direction in the second variable. A coefficient of -1 indicates a perfect negative correlation: a change in the value of one variable predicts a change in the opposite direction in the second variable. Lesser degrees of correlation are expressed as non-zero decimals. A coefficient of zero indicates there is no discernible relationship between fluctuations of the variables.

Formula

${r = \frac{N \sum xy - (\sum x)(\sum y)}{\sqrt{[N\sum x^2 - (\sum x)^2][N\sum y^2 - (\sum y)^2]}}}$

Where −

${N}$ = Number of pairs of scores
${\sum xy}$ = Sum of products of paired scores.
${\sum x}$ = Sum of x scores.
${\sum y}$ = Sum of y scores.
${\sum x^2}$ = Sum of squared x scores.
${\sum y^2}$ = Sum of squared y scores.

Example

Problem Statement:

Calculate the correlation co-efficient of the following:

X: 1, 3, 4, 4
Y: 2, 5, 5, 8

Solution:

${\sum xy = (1)(2) + (3)(5) + (4)(5) + (4)(8) = 69}$
${\sum x = 1 + 3 + 4 + 4 = 12}$
${\sum y = 2 + 5 + 5 + 8 = 20}$
${\sum x^2 = 1^2 + 3^2 + 4^2 + 4^2 = 42}$
${\sum y^2 = 2^2 + 5^2 + 5^2 + 8^2 = 118}$
${r = \frac{69 - \frac{(12)(20)}{4}}{\sqrt{(42 - \frac{(12)^2}{4})(118 - \frac{(20)^2}{4})}} = .866}$
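The formula translates directly into code; a minimal sketch with the data from the worked example:

```python
# Pearson correlation coefficient, computed from the raw sums in the formula.
from math import sqrt

x = [1, 3, 4, 4]
y = [2, 5, 5, 8]
n = len(x)

sum_xy = sum(a * b for a, b in zip(x, y))  # 69
sum_x, sum_y = sum(x), sum(y)              # 12, 20
sum_x2 = sum(a * a for a in x)             # 42
sum_y2 = sum(b * b for b in y)             # 118

r = (n * sum_xy - sum_x * sum_y) / sqrt(
    (n * sum_x2 - sum_x ** 2) * (n * sum_y2 - sum_y ** 2)
)
print(round(r, 3))  # 0.866, matching the worked example
```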
Statistics – Variance

A variance is defined as the average of the squared differences from the mean. It is given by the following formula:

Formula

${\sigma^2 = \frac{\sum (x_i - M)^2}{n}}$

Where −

${M}$ = Mean of the items.
${n}$ = Number of items considered.
${x_i}$ = Items.

Example

Problem Statement:

Find the variance of the following data: {600, 470, 170, 430, 300}

Solution:

Step 1: Determine the mean of the given items.

${M = \frac{600 + 470 + 170 + 430 + 300}{5} = \frac{1970}{5} = 394}$

Step 2: Determine the variance.

${\sigma^2 = \frac{\sum (x_i - M)^2}{n}}$
${= \frac{(600 - 394)^2 + (470 - 394)^2 + (170 - 394)^2 + (430 - 394)^2 + (300 - 394)^2}{5}}$
${= \frac{(206)^2 + (76)^2 + (-224)^2 + (36)^2 + (-94)^2}{5}}$
${= \frac{42,436 + 5,776 + 50,176 + 1,296 + 8,836}{5}}$
${= \frac{108,520}{5} = 21,704}$

As a result, the variance is ${21,704}$. (Note that this formula divides by n, i.e. it is the population variance.)
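A short sketch of the same computation in Python:

```python
# Population variance: average of squared differences from the mean (divide by n).
data = [600, 470, 170, 430, 300]
n = len(data)

mean = sum(data) / n                               # 394.0
variance = sum((x - mean) ** 2 for x in data) / n  # 21704.0
print(mean, variance)
```

The standard library's `statistics.pvariance(data)` gives the same result; `statistics.variance(data)` would divide by n - 1 instead (sample variance).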
Statistics – Transformations

Data transformation refers to the application of a function to each item in a data set: $ x_i $ is replaced by its transformed value $ y_i $, where $ y_i = f(x_i) $. Data transformations are generally carried out to make the appearance of graphs more interpretable. Four major functions are used for transformations (a short code sketch follows the list):

$ \log x $ – Logarithm transformation. For example, sound is measured in decibels, which are generally represented using a log transformation.
$ \frac{1}{x} $ – Reciprocal transformation. For example, the time to complete a race or task can be represented as speed: the greater the speed, the less the time taken.
$ \sqrt{x} $ – Square root transformation. For example, areas of circular grounds can be compared using their radii.
$ x^2 $ – Power transformation. For example, used to compare negative numbers.

Logarithm and square root transformations are used for positive numbers, whereas reciprocal and power transformations can be used for both negative and positive numbers. (The original page includes diagrams illustrating the use of a logarithm transformation to compare populations graphically, before and after transformation.)
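A minimal sketch of the four transformations, assuming NumPy is available (the sample values are illustrative):

```python
# Applying each of the four transformation functions element-wise.
import numpy as np

x = np.array([1.0, 10.0, 100.0, 1000.0])

log_x        = np.log(x)   # logarithm: compresses large positive values
reciprocal_x = 1.0 / x     # reciprocal: e.g. time -> speed
sqrt_x       = np.sqrt(x)  # square root: e.g. area -> radius scale
power_x      = x ** 2      # power: also usable for negative values

print(log_x, reciprocal_x, sqrt_x, power_x, sep="\n")
```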
Statistics – Type I & II Errors

Type I and Type II errors signify the erroneous outcomes of statistical hypothesis tests. A Type I error is the incorrect rejection of a valid null hypothesis, whereas a Type II error is the incorrect retention of an invalid null hypothesis.

Null Hypothesis

A null hypothesis is a statement which nullifies the contrary claim, to be tested against evidence. Consider the following examples:

Example 1

Hypothesis – Water added to a toothpaste protects teeth against cavities.
Null Hypothesis – Water added to a toothpaste has no effect against cavities.

Example 2

Hypothesis – Fluoride added to a toothpaste protects teeth against cavities.
Null Hypothesis – Fluoride added to a toothpaste has no effect against cavities.

Here the null hypothesis is to be tested against experimental data to nullify the effect of fluoride and water on teeth's cavities.

Type I Error

Consider Example 1. Here the null hypothesis is true, i.e. water added to a toothpaste has no effect against cavities. But if, using experimental data, we detect an effect of the added water on cavities, then we are rejecting a true null hypothesis. This is a Type I error. It is also called a False Positive condition (a situation which indicates that a given condition is present when it actually is not). The Type I error rate, or significance level, is the probability of rejecting the null hypothesis given that it is true. It is denoted by $ \alpha $ and is also called the alpha level. Generally, a significance level of 0.05 or 5% is considered acceptable, meaning that a 5% probability of incorrectly rejecting the null hypothesis is acceptable.

Type II Error

Consider Example 2. Here the null hypothesis is false, i.e. fluoride added to a toothpaste has an effect against cavities. But if, using experimental data, we do not detect an effect of the added fluoride on cavities, then we are accepting a false null hypothesis. This is a Type II error. It is also called a False Negative condition (a situation which indicates that a given condition is not present when it actually is). A Type II error is denoted by $ \beta $ and is also called the beta level.

The goal of a statistical test is to determine whether a null hypothesis can be rejected or not. A statistical test can either reject or fail to reject a null hypothesis. The following table illustrates the relationship between the truth or falseness of the null hypothesis and the outcome of the test in terms of Type I and Type II errors; a simulation sketch follows.

Judgment          | Null hypothesis ($ H_0 $) is | Error Type                      | Inference
Reject            | Valid                        | Type I Error (False Positive)   | Incorrect
Reject            | Invalid                      | True Positive                   | Correct
Unable to Reject  | Valid                        | True Negative                   | Correct
Unable to Reject  | Invalid                      | Type II Error (False Negative)  | Incorrect
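The relationship between $ \alpha $ and the Type I error rate can be made concrete with a simulation. This is a hypothetical sketch (not part of the original tutorial), assuming NumPy and SciPy are installed:

```python
# Simulate many experiments where H0 is TRUE; the fraction of (wrong)
# rejections should approach the chosen significance level alpha.
import numpy as np
from scipy.stats import ttest_1samp

rng = np.random.default_rng(0)
alpha = 0.05
trials = 10_000

false_positives = 0
for _ in range(trials):
    # H0 is true: the population mean really is 0.
    sample = rng.normal(loc=0.0, scale=1.0, size=30)
    _, p_value = ttest_1samp(sample, popmean=0.0)
    if p_value < alpha:          # rejecting a true null hypothesis
        false_positives += 1

print(false_positives / trials)  # close to 0.05, i.e. close to alpha
```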
Statistics – Standard Error (SE)

The standard deviation of a sampling distribution is called the standard error. In sampling, the three most important characteristics are accuracy, bias and precision. It can be said that:

The estimate derived from any one sample is accurate to the extent that it differs from the population parameter. Since population parameters can only be determined by a sample survey, they are generally unknown, and the actual difference between a sample estimate and the population parameter cannot be measured.

The estimator is unbiased if the mean of the estimates derived from all possible samples equals the population parameter.

Even if the estimator is unbiased, an individual sample is most likely going to yield an inaccurate estimate, and as stated earlier, inaccuracy cannot be measured. However, it is possible to measure the precision, i.e. the range within which the true value of the population parameter is expected to lie, using the concept of standard error.

Formula

$SE_{\bar{x}} = \frac{s}{\sqrt{n}}$

Where −

${s}$ = Standard Deviation
${n}$ = Number of observations

Example

Problem Statement:

Calculate the standard error for the following individual data:

Items: 14, 36, 45, 70, 105

Solution:

First compute the arithmetic mean $\bar{x}$:

$\bar{x} = \frac{14 + 36 + 45 + 70 + 105}{5} = \frac{270}{5} = 54$

Now compute the standard deviation ${s}$:

$s = \sqrt{\frac{1}{n-1}((x_1-\bar{x})^2 + (x_2-\bar{x})^2 + \dots + (x_n-\bar{x})^2)}$
$= \sqrt{\frac{1}{5-1}((14-54)^2 + (36-54)^2 + (45-54)^2 + (70-54)^2 + (105-54)^2)}$
$= \sqrt{\frac{1}{4}(1600 + 324 + 81 + 256 + 2601)} = 34.86$

Thus the standard error $SE_{\bar{x}}$:

$SE_{\bar{x}} = \frac{s}{\sqrt{n}} = \frac{34.86}{\sqrt{5}} = \frac{34.86}{2.236} = 15.59$

The standard error of the given numbers is 15.59.

When sampling without replacement from a finite population, the standard error is multiplied by a finite population correction factor. The smaller the proportion of the population that is sampled, the less the effect of this multiplier, because the finite multiplier will then be close to one and will affect the standard error negligibly. Hence, if the sample size is less than 5% of the population, the finite multiplier is ignored.
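A small sketch of the computation above in plain Python:

```python
# Standard error of the mean: sample standard deviation divided by sqrt(n).
from math import sqrt

items = [14, 36, 45, 70, 105]
n = len(items)

mean = sum(items) / n                                    # 54.0
s = sqrt(sum((x - mean) ** 2 for x in items) / (n - 1))  # ~34.86
se = s / sqrt(n)                                         # ~15.59
print(round(mean, 2), round(s, 2), round(se, 2))
```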
Statistics – TI-83 Exponential Regression

TI-83 exponential regression computes the equation ${y = a \times b^x}$ that best fits a set of data points in which y grows or decays exponentially with x.

Formula

${y = a \times b^x}$

Where −

${a, b}$ = coefficients of the exponential.

Example

Problem Statement:

Calculate the exponential regression equation (y) for the following data points.

Time (min), Ti: 0, 5, 10, 15
Temperature (°F), Te: 140, 129, 119, 112

Solution:

Consider a and b as the coefficients of the exponential regression. Taking logarithms of ${y = a \times b^x}$ gives ${\log y = \log a + x \log b}$, so a straight line is fitted to the points ${(Ti, \log Te)}$. Base-10 logarithms are used throughout.

Step 1

${b = 10^{\frac{n \sum Ti \log(Te) - (\sum Ti)(\sum \log(Te))}{n \sum Ti^2 - (\sum Ti)^2}}}$

Where ${n}$ = total number of items.

${\sum Ti \log(Te) = 0 \times \log(140) + 5 \times \log(129) + 10 \times \log(119) + 15 \times \log(112) = 62.0466}$
${\sum \log(Te) = \log(140) + \log(129) + \log(119) + \log(112) = 8.3814}$
${\sum Ti = 0 + 5 + 10 + 15 = 30}$
${\sum Ti^2 = 0^2 + 5^2 + 10^2 + 15^2 = 350}$
${\implies b = 10^{\frac{4 \times 62.0466 - 30 \times 8.3814}{4 \times 350 - 30 \times 30}} = 10^{-0.0065112} = 0.9851}$

Step 2

${a = 10^{\frac{\sum \log(Te) - (\sum Ti) \times \log(b)}{n}} = 10^{\frac{8.3814 - 30 \times (-0.0065112)}{4}} = 10^{2.1442} = 139.37}$

Step 3

Putting the values of a and b in the exponential regression equation (y), we get:

${y = a \times b^x = 139.37 \times 0.9851^x}$
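The same fit can be sketched in Python, assuming NumPy is available: fit a straight line to ${(x, \log_{10} y)}$ and convert back to ${y = a \times b^x}$.

```python
# Exponential regression via a linear fit in log space.
import numpy as np

ti = np.array([0.0, 5.0, 10.0, 15.0])        # time (min)
te = np.array([140.0, 129.0, 119.0, 112.0])  # temperature (F)

slope, intercept = np.polyfit(ti, np.log10(te), 1)
a = 10 ** intercept  # ~139.4
b = 10 ** slope      # ~0.9851
print(round(a, 2), round(b, 4))
```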
Statistics – Formulas

Following is the list of statistics formulas used in the Tutorialspoint statistics tutorials. Each formula is linked to a web page that describes how to use it.

A
Adjusted R-Squared – ${R_{adj}^2 = 1 - [\frac{(1-R^2)(n-1)}{n-k-1}]}$
Arithmetic Mean – $\bar{x} = \frac{\sum x}{N}$
Arithmetic Median – Median = value of the $(\frac{N+1}{2})^{th}$ item
Arithmetic Range – ${\text{Coefficient of Range} = \frac{L-S}{L+S}}$

B
Best Point Estimation – ${MLE = \frac{S}{T}}$
Binomial Distribution – ${P(X=x) = {^n}C_x \, p^x \, q^{n-x}}$

C
Chebyshev's Theorem – ${1-\frac{1}{k^2}}$
Circular Permutation – ${P_n = (n-1)!}$
Cohen's kappa coefficient – ${k = \frac{p_0 - p_e}{1-p_e} = 1 - \frac{1-p_0}{1-p_e}}$
Combination – ${C(n,r) = \frac{n!}{r!(n-r)!}}$
Combination with replacement – ${^nC_r = \frac{(n+r-1)!}{r!(n-1)!}}$
Continuous Uniform Distribution – $f(x) = \begin{cases} \frac{1}{b-a}, & \text{when } a \le x \le b \\ 0, & \text{when } x \lt a \text{ or } x \gt b \end{cases}$
Coefficient of Variation – ${CV = \frac{\sigma}{\bar X} \times 100}$
Correlation Co-efficient – ${r = \frac{N \sum xy - (\sum x)(\sum y)}{\sqrt{[N\sum x^2 - (\sum x)^2][N\sum y^2 - (\sum y)^2]}}}$
Cumulative Poisson Distribution – ${F(x,\lambda) = \sum_{k=0}^x \frac{e^{-\lambda} \lambda^k}{k!}}$

D
Deciles Statistics – ${D_i = l + \frac{h}{f}(\frac{iN}{10} - c); \ i = 1,2,3,\dots,9}$

F
Factorial – ${n! = 1 \times 2 \times 3 \times \dots \times n}$

G
Geometric Mean – $G.M. = \sqrt[n]{x_1 x_2 x_3 \dots x_n}$
Geometric Probability Distribution – ${P(X=x) = p \times q^{x-1}}$
Grand Mean – ${X_{GM} = \frac{\sum x}{N}}$

H
Harmonic Mean – $H.M. = \frac{W}{\sum (\frac{W}{X})}$
Hypergeometric Distribution – ${h(x;N,n,K) = \frac{[C(K,x)][C(N-K,n-x)]}{C(N,n)}}$

I
Interval Estimation – ${\mu = \bar x \pm Z_{\frac{\alpha}{2}}\frac{\sigma}{\sqrt n}}$

L
Logistic Regression – ${\pi(x) = \frac{e^{\alpha + \beta x}}{1 + e^{\alpha + \beta x}}}$

M
Mean Deviation – ${MD = \frac{1}{N} \sum{|X-A|} = \frac{\sum{|D|}}{N}}$
Mean Difference – ${\text{Mean Difference} = \frac{\sum x_1}{n} - \frac{\sum x_2}{n}}$
Multinomial Distribution – ${P_r = \frac{n!}{(n_1!)(n_2!)\dots(n_x!)} {P_1}^{n_1}{P_2}^{n_2}\dots{P_x}^{n_x}}$

N
Negative Binomial Distribution – ${f(x) = P(X=x) = \binom{x-1}{r-1}(1-p)^{x-r}p^r}$
Normal Distribution – ${y = \frac{1}{\sigma \sqrt{2 \pi}}e^{-\frac{(x - \mu)^2}{2 \sigma^2}}}$

O
One Proportion Z Test – ${z = \frac{\hat p - p_0}{\sqrt{\frac{p_0(1-p_0)}{n}}}}$

P
Permutation – ${^nP_r = \frac{n!}{(n-r)!}}$
Permutation with Replacement – ${^nP_r = n^r}$
Poisson Distribution – ${P(X=x) = e^{-m} \frac{m^x}{x!}}$
Probability – ${P(A) = \frac{\text{Number of favourable cases}}{\text{Total number of equally likely cases}} = \frac{m}{n}}$
Probability Additive Theorem – ${P(A \cup B) = P(A) + P(B)}$ (for mutually exclusive events)
Probability Multiplicative Theorem – ${P(A \cap B) = P(A) \times P(B)}$ (for independent events)
Probability Bayes Theorem – ${P(A_i|B) = \frac{P(A_i) \times P(B|A_i)}{\sum_{i=1}^k P(A_i) \times P(B|A_i)}}$
Probability Density Function – ${P(a \le X \le b) = \int_a^b f(x) \, dx}$

R
Reliability Coefficient – ${RC = (\frac{N}{N-1}) \times (\frac{\text{Total Variance} - \text{Sum of Variance}}{\text{Total Variance}})}$
Residual Sum of Squares – ${RSS = \sum_{i=1}^n(\epsilon_i)^2 = \sum_{i=1}^n(y_i - (\alpha + \beta x_i))^2}$

S
Shannon Wiener Diversity Index – ${H = -\sum[(p_i) \times \ln(p_i)]}$
Standard Deviation – $\sigma = \sqrt{\frac{\sum_{i=1}^n{(x_i-\bar x)^2}}{N-1}}$
Standard Error (SE) – $SE_{\bar{x}} = \frac{s}{\sqrt{n}}$
Sum of Squares – ${\text{Sum of Squares} = \sum(x_i - \bar x)^2}$

T
Trimmed Mean – $\mu = \frac{\sum {X_i}}{n}$