Data collection – Observation

Observation is a popular method of data collection in the behavioural sciences. The power of observation has been summed up by W.L. Prosser as follows: "there is still no man that would not accept dog tracks in the mud against the sworn testimony of a hundred eye-witnesses that no dog had passed by." Observation refers to the monitoring and recording of behavioural and non-behavioural activities and conditions in a systematic manner in order to obtain information about the phenomena of interest.

Behavioural observation includes non-verbal analysis, such as body movement and eye movement; linguistic analysis, which includes observing sounds such as "ohs" and "ahs"; extra-linguistic analysis, which observes the pitch, timbre, rate of speaking, etc.; and spatial analysis, which studies how people relate to each other. Non-behavioural observation includes the analysis of records (e.g. newspaper archives), physical condition analysis (such as checking the quality of grain in gunny bags) and process analysis, which involves observing a process as it unfolds.

Observation can be classified into various categories.

Types of Observation

Structured vs. Unstructured Observation – In structured observation the problem has been clearly defined, so the behaviour to be observed and the method by which it will be measured are specified beforehand in detail. This reduces the chance of the observer introducing bias into the research, e.g. compliance with plant safety rules can be observed in a structured manner. Unstructured observation is used in situations where the problem has not been clearly defined, so what is to be observed cannot be specified in advance. The researcher therefore monitors all relevant phenomena and is allowed a great deal of flexibility in what is noted and recorded, e.g. studying students' behaviour in a class would require monitoring their total behaviour in the classroom environment. Data collected through unstructured observation should be analysed carefully so that no bias is introduced.

Disguised vs. Undisguised Observation – This classification is based on whether the subjects know that they are being observed. In disguised observation, the subjects are unaware that they are being observed; their behaviour is recorded using hidden cameras, one-way mirrors or other devices. Since the subjects do not know they are being watched, they behave naturally. The drawback is that it may take long hours of observation before the subjects display the phenomenon of interest. Disguised observation may be direct, when the behaviour is observed by the researcher personally, or indirect, when the effect or result of the behaviour is what is observed. In undisguised observation, the subjects are aware that they are being observed, and there is a fear that they might display atypical behaviour. The entry of the observer may upset the subjects, but how long this disruption lasts cannot be stated conclusively; studies have shown that such disruptions are short-lived and that subjects soon resume normal behaviour.

Participant vs. Non-Participant Observation – If the observer participates in the situation while observing it, this is termed participant observation, e.g. a researcher studying the lifestyle of slum dwellers through participant observation will himself stay in the slums. His role as an observer may be concealed or revealed. By becoming a part of the setting he is able to observe in an insightful manner. A problem with this method is that the observer may become sympathetic to the subjects and then find it difficult to view the research objectively. In non-participant observation, the observer remains outside the setting and does not involve himself or participate in the situation.

Natural vs. Contrived Observation – In natural observation the behaviour is observed as it takes place in the actual setting, e.g. consumer preferences observed directly at Pizza Hut while consumers are ordering pizza. The advantage of this method is that true results are obtained, but it is an expensive and time-consuming method. In contrived observation, the phenomenon is observed in an artificial or simulated setting, e.g. the consumers, instead of being observed in a restaurant, are made to order in a setting that looks like a restaurant but is not an actual one. This type of observation has the advantage of being over quickly, and recording the behaviour is easy. However, since the consumers are conscious of their setting, they may not show their actual behaviour.

Classification on the Basis of Mode of Administration – This includes:

Personal Observation – A researcher personally monitors and records the behaviour as it occurs. The recording is done on an observation schedule. Personal observation not only records what has been specified but also identifies and records unexpected behaviours that defy pre-established response categories.

Mechanical Observation – Mechanical devices, instead of humans, record the behaviour. The devices record the behaviour as it occurs, and the data are sorted and analysed later. Apart from cameras, other devices include the galvanometer, which measures the emotional arousal induced by exposure to a specific stimulus; the audiometer and the people meter, which record which TV channel is being viewed, the latter also recording who is viewing it; and eye-tracking devices, which record eye movement.

Audit – This is the process of obtaining information by the physical examination of data. The audit, which is a count of physical objects, is generally done by the researcher himself. An audit can be a store audit or a pantry audit. Store audits are performed by distributors or manufacturers in order to analyse market share, purchase patterns, etc., e.g. the researcher may check the store records or analyse the inventory on hand to record the data. A pantry audit involves the researcher developing an inventory of the brands, quantities and package sizes of products in a consumer's home, generally in the course of a personal interview. Such an audit is used to supplement or test the truthfulness of information provided in a direct questionnaire.

Geometric Probability Distribution

The geometric distribution is a special case of the negative binomial distribution. It deals with the number of trials required for a single success; thus the geometric distribution is a negative binomial distribution in which the number of successes (r) is equal to 1.

Formula

${P(X = x) = p \times q^{x-1}}$

Where −

${p}$ = probability of success on a single trial.

${q}$ = probability of failure on a single trial, i.e. ${1 - p}$.

${x}$ = the trial on which the first success occurs (so there are ${x - 1}$ failures before it).

${P(X = x)}$ = probability that the first success occurs on the ${x}$-th trial.

Example

Problem Statement:

In an amusement fair, a competitor is entitled to a prize if he throws a ring onto a peg from a certain distance. It is observed that only 30% of the competitors are able to do this. If someone is given 5 chances, what is the probability of his winning the prize when he has already missed 4 chances?

Solution:

If someone has already missed four chances and must win on the fifth chance, this is a probability experiment of getting the first success on the 5th trial, so the problem follows a geometric distribution. The required probability is given by the geometric distribution formula:

${P(X = x) = p \times q^{x-1}}$

Where −

${p = 30\% = 0.3}$

${x = 5}$ = the trial on which the first success occurs.

Therefore, the required probability:

${P(X = 5) = 0.3 \times (1 - 0.3)^{5-1} = 0.3 \times (0.7)^4 \approx 0.072 \approx 7.2\%}$
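The same result can be checked programmatically. Below is a minimal sketch, assuming the scipy library is available; `geom.pmf(k, p)` evaluates the geometric probability ${p \times q^{k-1}}$.

```python
# Minimal check of the worked example: probability that the first successful
# ring toss occurs on the 5th attempt, assuming scipy is installed.
from scipy.stats import geom

p = 0.3          # probability of success on a single trial
x = 5            # trial on which the first success occurs

prob = geom.pmf(x, p)      # p * (1 - p)**(x - 1)
print(round(prob, 4))      # 0.072, i.e. about 7.2%
```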

Z table

Standard Normal Probability Table

The following table shows the area under the curve to the left of a negative z-score:

z      .00    .01    .02    .03    .04    .05    .06    .07    .08    .09
-3.4  .0003  .0003  .0003  .0003  .0003  .0003  .0003  .0003  .0003  .0002
-3.3  .0005  .0005  .0005  .0004  .0004  .0004  .0004  .0004  .0004  .0003
-3.2  .0007  .0007  .0006  .0006  .0006  .0006  .0006  .0005  .0005  .0005
-3.1  .0010  .0009  .0009  .0009  .0008  .0008  .0008  .0008  .0007  .0007
-3.0  .0013  .0013  .0013  .0012  .0012  .0011  .0011  .0011  .0010  .0010
-2.9  .0019  .0018  .0018  .0017  .0016  .0016  .0015  .0015  .0014  .0014
-2.8  .0026  .0025  .0024  .0023  .0023  .0022  .0021  .0021  .0020  .0019
-2.7  .0035  .0034  .0033  .0032  .0031  .0030  .0029  .0028  .0027  .0026
-2.6  .0047  .0045  .0044  .0043  .0041  .0040  .0039  .0038  .0037  .0036
-2.5  .0062  .0060  .0059  .0057  .0055  .0054  .0052  .0051  .0049  .0048
-2.4  .0082  .0080  .0078  .0075  .0073  .0071  .0069  .0068  .0066  .0064
-2.3  .0107  .0104  .0102  .0099  .0096  .0094  .0091  .0089  .0087  .0084
-2.2  .0139  .0136  .0132  .0129  .0125  .0122  .0119  .0116  .0113  .0110
-2.1  .0179  .0174  .0170  .0166  .0162  .0158  .0154  .0150  .0146  .0143
-2.0  .0228  .0222  .0217  .0212  .0207  .0202  .0197  .0192  .0188  .0183
-1.9  .0287  .0281  .0274  .0268  .0262  .0256  .0250  .0244  .0239  .0233
-1.8  .0359  .0351  .0344  .0336  .0329  .0322  .0314  .0307  .0301  .0294
-1.7  .0446  .0436  .0427  .0418  .0409  .0401  .0392  .0384  .0375  .0367
-1.6  .0548  .0537  .0526  .0516  .0505  .0495  .0485  .0475  .0465  .0455
-1.5  .0668  .0655  .0643  .0630  .0618  .0606  .0594  .0582  .0571  .0559
-1.4  .0808  .0793  .0778  .0764  .0749  .0735  .0721  .0708  .0694  .0681
-1.3  .0968  .0951  .0934  .0918  .0901  .0885  .0869  .0853  .0838  .0823
-1.2  .1151  .1131  .1112  .1093  .1075  .1056  .1038  .1020  .1003  .0985
-1.1  .1357  .1335  .1314  .1292  .1271  .1251  .1230  .1210  .1190  .1170
-1.0  .1587  .1562  .1539  .1515  .1492  .1469  .1446  .1423  .1401  .1379
-0.9  .1841  .1814  .1788  .1762  .1736  .1711  .1685  .1660  .1635  .1611
-0.8  .2119  .2090  .2061  .2033  .2005  .1977  .1949  .1922  .1894  .1867
-0.7  .2420  .2389  .2358  .2327  .2296  .2266  .2236  .2206  .2177  .2148
-0.6  .2743  .2709  .2676  .2643  .2611  .2578  .2546  .2514  .2483  .2451
-0.5  .3085  .3050  .3015  .2981  .2946  .2912  .2877  .2843  .2810  .2776
-0.4  .3446  .3409  .3372  .3336  .3300  .3264  .3228  .3192  .3156  .3121
-0.3  .3821  .3783  .3745  .3707  .3669  .3632  .3594  .3557  .3520  .3483
-0.2  .4207  .4168  .4129  .4090  .4052  .4013  .3974  .3936  .3897  .3859
-0.1  .4602  .4562  .4522  .4483  .4443  .4404  .4364  .4325  .4286  .4247
-0.0  .5000  .4960  .4920  .4880  .4840  .4801  .4761  .4721  .4681  .4641

The following table shows the area under the curve to the left of a positive z-score:

z      .00    .01    .02    .03    .04    .05    .06    .07    .08    .09
0.0   .5000  .5040  .5080  .5120  .5160  .5199  .5239  .5279  .5319  .5359
0.1   .5398  .5438  .5478  .5517  .5557  .5596  .5636  .5675  .5714  .5753
0.2   .5793  .5832  .5871  .5910  .5948  .5987  .6026  .6064  .6103  .6141
0.3   .6179  .6217  .6255  .6293  .6331  .6368  .6406  .6443  .6480  .6517
0.4   .6554  .6591  .6628  .6664  .6700  .6736  .6772  .6808  .6844  .6879
0.5   .6915  .6950  .6985  .7019  .7054  .7088  .7123  .7157  .7190  .7224
0.6   .7257  .7291  .7324  .7357  .7389  .7422  .7454  .7486  .7517  .7549
0.7   .7580  .7611  .7642  .7673  .7704  .7734  .7764  .7794  .7823  .7852
0.8   .7881  .7910  .7939  .7967  .7995  .8023  .8051  .8078  .8106  .8133
0.9   .8159  .8186  .8212  .8238  .8264  .8289  .8315  .8340  .8365  .8389
1.0   .8413  .8438  .8461  .8485  .8508  .8531  .8554  .8577  .8599  .8621
1.1   .8643  .8665  .8686  .8708  .8729  .8749  .8770  .8790  .8810  .8830
1.2   .8849  .8869  .8888  .8907  .8925  .8944  .8962  .8980  .8997  .9015
1.3   .9032  .9049  .9066  .9082  .9099  .9115  .9131  .9147  .9162  .9177
1.4   .9192  .9207  .9222  .9236  .9251  .9265  .9279  .9292  .9306  .9319
1.5   .9332  .9345  .9357  .9370  .9382  .9394  .9406  .9418  .9429  .9441
1.6   .9452  .9463  .9474  .9484  .9495  .9505  .9515  .9525  .9535  .9545
1.7   .9554  .9564  .9573  .9582  .9591  .9599  .9608  .9616  .9625  .9633
1.8   .9641  .9649  .9656  .9664  .9671  .9678  .9686  .9693  .9699  .9706
1.9   .9713  .9719  .9726  .9732  .9738  .9744  .9750  .9756  .9761  .9767
2.0   .9772  .9778  .9783  .9788  .9793  .9798  .9803  .9808  .9812  .9817
2.1   .9821  .9826  .9830  .9834  .9838  .9842  .9846  .9850  .9854  .9857
2.2   .9861  .9864  .9868  .9871  .9875  .9878  .9881  .9884  .9887  .9890
2.3   .9893  .9896  .9898  .9901  .9904  .9906  .9909  .9911  .9913  .9916
2.4   .9918  .9920  .9922  .9925  .9927  .9929  .9931  .9932  .9934  .9936
2.5   .9938  .9940  .9941  .9943  .9945  .9946  .9948  .9949  .9951  .9952
2.6   .9953  .9955  .9956  .9957  .9959  .9960  .9961  .9962  .9963  .9964
2.7   .9965  .9966  .9967  .9968  .9969  .9970  .9971  .9972  .9973  .9974
2.8   .9974  .9975  .9976  .9977  .9977  .9978  .9979  .9979  .9980  .9981
2.9   .9981  .9982  .9982  .9983  .9984  .9984  .9985  .9985  .9986  .9986
3.0   .9987  .9987  .9987  .9988  .9988  .9989  .9989  .9989  .9990  .9990
3.1   .9990  .9991  .9991  .9991  .9992  .9992  .9992  .9992  .9993  .9993
3.2   .9993  .9993  .9994  .9994  .9994  .9994  .9994  .9995  .9995  .9995
3.3   .9995  .9995  .9995  .9996  .9996  .9996  .9996  .9996  .9996  .9997
3.4   .9997  .9997  .9997  .9997  .9997  .9997  .9997  .9997  .9997  .9998
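The tabulated values can be reproduced programmatically. Below is a brief sketch, assuming scipy is available; `norm.cdf(z)` returns the area under the standard normal curve to the left of z.

```python
# Reproduce a couple of table entries with the standard normal CDF (scipy assumed).
from scipy.stats import norm

print(round(norm.cdf(-1.96), 4))  # about 0.025 -> table row -1.9, column .06 gives .0250
print(round(norm.cdf(1.64), 4))   # 0.9495    -> table row  1.6, column .04 gives .9495
```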

Correlation Co-efficient

Correlation Co-efficient

A correlation coefficient is a statistical measure of the degree to which changes in the value of one variable predict changes in the value of another. In positively correlated variables, the values increase or decrease in tandem. In negatively correlated variables, the value of one increases as the value of the other decreases.

Correlation coefficients are expressed as values between +1 and -1. A coefficient of +1 indicates a perfect positive correlation: a change in the value of one variable predicts a change in the same direction in the second variable. A coefficient of -1 indicates a perfect negative correlation: a change in the value of one variable predicts a change in the opposite direction in the second variable. Lesser degrees of correlation are expressed as non-zero decimals. A coefficient of zero indicates there is no discernible relationship between fluctuations of the variables.

Formula

${r = \frac{N \sum xy - (\sum x)(\sum y)}{\sqrt{[N \sum x^2 - (\sum x)^2][N \sum y^2 - (\sum y)^2]}}}$

Where −

${N}$ = Number of pairs of scores

${\sum xy}$ = Sum of products of paired scores.

${\sum x}$ = Sum of x scores.

${\sum y}$ = Sum of y scores.

${\sum x^2}$ = Sum of squared x scores.

${\sum y^2}$ = Sum of squared y scores.

Example

Problem Statement:

Calculate the correlation coefficient of the following data:

X  Y
1  2
3  5
4  5
4  8

Solution:

${\sum xy = (1)(2) + (3)(5) + (4)(5) + (4)(8) = 69}$

${\sum x = 1 + 3 + 4 + 4 = 12}$

${\sum y = 2 + 5 + 5 + 8 = 20}$

${\sum x^2 = 1^2 + 3^2 + 4^2 + 4^2 = 42}$

${\sum y^2 = 2^2 + 5^2 + 5^2 + 8^2 = 118}$

Dividing the numerator and denominator of the formula by N gives the equivalent form used below:

${r = \frac{\sum xy - \frac{(\sum x)(\sum y)}{N}}{\sqrt{\left(\sum x^2 - \frac{(\sum x)^2}{N}\right)\left(\sum y^2 - \frac{(\sum y)^2}{N}\right)}} = \frac{69 - \frac{(12)(20)}{4}}{\sqrt{\left(42 - \frac{(12)^2}{4}\right)\left(118 - \frac{(20)^2}{4}\right)}} \approx 0.866}$
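A quick way to check the worked example is shown below; the sketch assumes numpy is installed and uses `np.corrcoef`, whose off-diagonal entry is Pearson's r.

```python
# Verify the worked example: Pearson's r for the four (x, y) pairs above.
import numpy as np

x = [1, 3, 4, 4]
y = [2, 5, 5, 8]

r = np.corrcoef(x, y)[0, 1]   # off-diagonal entry of the correlation matrix
print(round(r, 3))            # 0.866
```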

Variance

Variance is defined as the average of the squared differences from the mean. It is given by the following formula:

Formula

${\sigma^2 = \frac{\sum (M - n_i)^2}{n}}$

Where −

${M}$ = Mean of the items.

${n}$ = the number of items considered.

${n_i}$ = the individual items.

Example

Problem Statement:

Find the variance of the following data: {600, 470, 170, 430, 300}

Solution:

Step 1: Determine the mean of the given items.

${M = \frac{600 + 470 + 170 + 430 + 300}{5} = \frac{1970}{5} = 394}$

Step 2: Determine the variance.

${\sigma^2 = \frac{\sum (M - n_i)^2}{n} = \frac{(600 - 394)^2 + (470 - 394)^2 + (170 - 394)^2 + (430 - 394)^2 + (300 - 394)^2}{5}}$

${= \frac{(206)^2 + (76)^2 + (-224)^2 + (36)^2 + (-94)^2}{5} = \frac{42,436 + 5,776 + 50,176 + 1,296 + 8,836}{5} = \frac{108,520}{5} = 21,704}$

As a result, the variance is ${21,704}$.
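The calculation above can be reproduced with a few lines of standard-library Python; the sketch below simply recomputes the mean and the average squared deviation.

```python
# Population variance of the example data: mean 394, variance 21704.
data = [600, 470, 170, 430, 300]

mean = sum(data) / len(data)                            # 394.0
variance = sum((x - mean) ** 2 for x in data) / len(data)
print(variance)                                         # 21704.0
```

The same value is returned by `statistics.pvariance(data)`, which divides by n rather than n - 1.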

Transformations

Data transformation refers to the application of a function to each item in a data set: each value $x_i$ is replaced by its transformed value $y_i = f(x_i)$. Data transformations are generally carried out to make the appearance of graphs more interpretable. There are four major functions used for transformations.

$\log x$ – logarithm transformation. For example, sound levels are measured in decibels, which are represented on a logarithmic scale.

$\frac{1}{x}$ – reciprocal transformation. For example, the time to complete a race or task can be represented as a speed: the greater the speed, the less the time taken.

$\sqrt{x}$ – square-root transformation. For example, the areas of circular grounds can be compared using their radii.

$x^2$ – power transformation. For example, it can be used to compare negative numbers.

Logarithm and square-root transformations are used only for positive numbers, whereas reciprocal and power transformations can be used for both negative and positive numbers.

The following diagrams illustrate the use of a logarithm transformation to compare population data graphically (panels: before transformation, after transformation).
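The sketch below, which assumes numpy is available, applies the four transformations listed above to a small positive data set; the data and variable names are illustrative only.

```python
# Apply the four common transformations to a heavily skewed set of positive values.
import numpy as np

x = np.array([1.0, 10.0, 100.0, 1000.0])

log_t        = np.log10(x)   # logarithm transformation: 0, 1, 2, 3
reciprocal_t = 1.0 / x       # reciprocal transformation
sqrt_t       = np.sqrt(x)    # square-root transformation
power_t      = x ** 2        # power transformation

print(log_t)                 # the wide range 1-1000 is compressed to 0-3, easier to plot
```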

Type I & II Error

Type I and Type II errors signify the erroneous outcomes of statistical hypothesis tests. A Type I error represents the incorrect rejection of a valid null hypothesis, whereas a Type II error represents the incorrect retention of an invalid null hypothesis.

Null Hypothesis

A null hypothesis is a statement that nullifies its contrary and is to be tested against evidence. Consider the following examples:

Example 1

Hypothesis – Water added to a toothpaste protects teeth against cavities.

Null Hypothesis – Water added to a toothpaste has no effect against cavities.

Example 2

Hypothesis – Fluoride added to a toothpaste protects teeth against cavities.

Null Hypothesis – Fluoride added to a toothpaste has no effect against cavities.

Here the null hypothesis is tested against experimental data to establish whether water or fluoride has any effect on cavities.

Type I Error

Consider Example 1. Here the null hypothesis is true, i.e. water added to a toothpaste has no effect against cavities. But if, using experimental data, we detect an effect of the added water on cavities, then we are rejecting a true null hypothesis. This is a Type I error. It is also called a false positive (a situation which indicates that a given condition is present when it actually is not). The Type I error rate, or significance level, is the probability of rejecting the null hypothesis given that it is true. It is denoted by $\alpha$ and is also called the alpha level. Generally a significance level of 0.05 or 5% is considered acceptable, meaning that a 5% probability of incorrectly rejecting the null hypothesis is tolerated.

Type II Error

Consider Example 2. Here the null hypothesis is false, i.e. fluoride added to a toothpaste does have an effect against cavities. But if, using experimental data, we do not detect an effect of the added fluoride on cavities, then we are accepting a false null hypothesis. This is a Type II error. It is also called a false negative (a situation which indicates that a given condition is not present when it actually is). A Type II error is denoted by $\beta$ and is also called the beta level.

The goal of a statistical test is to determine whether a null hypothesis can be rejected or not. A statistical test can either reject or fail to reject a null hypothesis. The following table illustrates the relationship between the truth or falseness of the null hypothesis and the outcome of the test in terms of Type I or Type II errors; a small simulation of the Type I error rate is sketched after the table.

Judgment           Null hypothesis ($H_0$) is   Error Type                       Inference
Reject             Valid                        Type I Error (False Positive)    Incorrect
Reject             Invalid                      True Positive                    Correct
Unable to Reject   Valid                        True Negative                    Correct
Unable to Reject   Invalid                      Type II Error (False Negative)   Incorrect
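As a rough illustration of the alpha level, the simulation sketch below (assuming numpy and scipy are available) repeatedly tests a true null hypothesis at $\alpha = 0.05$ and counts how often it is incorrectly rejected; the rejection rate comes out close to 5%.

```python
# Simulate the Type I error rate: the null hypothesis (population mean = 0) is
# true in every repetition, so each rejection is a false positive.
import numpy as np
from scipy.stats import ttest_1samp

rng = np.random.default_rng(0)
alpha = 0.05
trials = 10_000
rejections = 0

for _ in range(trials):
    sample = rng.normal(loc=0.0, scale=1.0, size=30)   # data generated under the null
    _, p_value = ttest_1samp(sample, popmean=0.0)
    if p_value < alpha:
        rejections += 1                                # a Type I error occurred

print(rejections / trials)   # close to 0.05
```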

Simple random sampling

A simple random sample is defined as one in which each element of the population has an equal and independent chance of being selected. For a population with N units, there are NCn possible samples of n units, and the probability of drawing any one of them is 1/NCn. For example, if we have a population of five elements (A, B, C, D, E), i.e. N = 5, and we want a sample of size n = 3, then there are 5C3 = 10 possible samples and the probability of drawing any one particular sample is 1/10.

Simple random sampling can be done in two different ways, i.e. 'with replacement' or 'without replacement'. When the units are selected into a sample successively after replacing the selected unit before the next draw, it is a simple random sample with replacement. If the units selected are not replaced before the next draw, and successive draws are made only from the remaining units of the population, then it is termed a simple random sample without replacement. Thus in the former method a unit once selected may be repeated, whereas in the latter a unit once selected is not repeated. Because of its greater statistical efficiency, simple random sampling without replacement is the preferred method.

A simple random sample can be drawn through either of two procedures, i.e. the lottery method or the random number tables method.

Lottery Method – Under this method units are selected on the basis of random draws. First, each member or element of the population is assigned a unique number. Next, these numbers are written on separate cards which are physically similar in shape, size, colour, etc. They are then placed in a basket and thoroughly mixed. In the last step the slips are drawn out randomly without looking at them; the number of slips drawn equals the required sample size. The lottery method suffers from a few drawbacks: writing out N slips is cumbersome, shuffling a large number of slips where the population is very large is difficult, and human bias may enter while choosing the slips. Hence the alternative, random number tables, can be used.

Random Number Tables Method – These consist of columns of numbers which have been randomly prepared. A number of random number tables are available, e.g. Fisher and Yates' tables, Tippett's random numbers, etc. Listed below is a sequence of two-digit random numbers from Fisher and Yates' table: 61, 44, 65, 22, 01, 67, 76, 23, 57, 58, 54, 11, 33, 86, 07, 26, 75, 76, 64, 22, 19, 35, 74, 49, 86, 58, 69, 52, 27, 34, 91, 25, 34, 67, 76, 73, 27, 16, 53, 18, 19, 69, 32, 52, 38, 72, 38, 64, 81, 79 and 38. The first step involves assigning a unique number to each member of the population, e.g. if the population comprises 20 people then all individuals are numbered from 01 to 20. If we are to collect a sample of 5 units, then, referring to the random number table, 5 two-digit numbers within this range are chosen. Using the above sequence, the units having the following five numbers will form the sample: 01, 11, 07, 19 and 16. If the sampling is without replacement and a particular random number repeats itself, it is not taken again and the next number that fits our criteria is chosen.

Thus a simple random sample can be drawn using either of the two procedures. However, in practice it has been found that simple random sampling involves a lot of time and effort and can be impractical.
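Drawing a simple random sample is straightforward to illustrate in code. The minimal standard-library sketch below draws a sample of size 3 from the population {A, B, C, D, E}, both without and with replacement.

```python
# Simple random sampling from a small population.
import random

population = ["A", "B", "C", "D", "E"]

without_replacement = random.sample(population, k=3)    # no unit can be repeated
with_replacement    = random.choices(population, k=3)   # a unit may be repeated

print(without_replacement)
print(with_replacement)
```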

Standard Error (SE)

The standard deviation of a sampling distribution is called the standard error. In sampling, the three most important characteristics are accuracy, bias and precision. It can be said that:

The estimate derived from any one sample is accurate to the extent that it differs little from the population parameter. Since in practice the population parameters are estimated only from sample surveys, they are generally unknown, and the actual difference between the sample estimate and the population parameter cannot be measured.

The estimator is unbiased if the mean of the estimates derived from all possible samples equals the population parameter.

Even if the estimator is unbiased, an individual sample is most likely going to yield an inaccurate estimate and, as stated earlier, the inaccuracy cannot be measured. However, it is possible to measure the precision, i.e. the range within which the true value of the population parameter is expected to lie, using the concept of standard error.

Formula

$SE_\bar{x} = \frac{s}{\sqrt{n}}$

Where −

${s}$ = Standard Deviation

${n}$ = Number of observations

Example

Problem Statement:

Calculate the standard error for the following individual data: 14, 36, 45, 70, 105

Solution:

First compute the arithmetic mean $\bar{x}$:

$\bar{x} = \frac{14 + 36 + 45 + 70 + 105}{5} = \frac{270}{5} = 54$

Now compute the standard deviation ${s}$:

$s = \sqrt{\frac{1}{n-1}\left((x_1-\bar{x})^2+(x_2-\bar{x})^2+\dots+(x_n-\bar{x})^2\right)} = \sqrt{\frac{1}{5-1}\left((14-54)^2+(36-54)^2+(45-54)^2+(70-54)^2+(105-54)^2\right)} = \sqrt{\frac{1}{4}(1600+324+81+256+2601)} \approx 34.86$

Thus the standard error $SE_\bar{x}$:

$SE_\bar{x} = \frac{s}{\sqrt{n}} = \frac{34.86}{\sqrt{5}} = \frac{34.86}{2.236} \approx 15.59$

The standard error of the given numbers is approximately 15.59.

When sampling without replacement from a finite population, the standard error is multiplied by the finite population correction $\sqrt{\frac{N-n}{N-1}}$. The smaller the proportion of the population that is sampled, the smaller the effect of this multiplier, because the finite multiplier will then be close to one and will affect the standard error negligibly. Hence, if the sample size is less than 5% of the population, the finite multiplier is ignored.
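The worked example can be reproduced with the standard library, as in the short sketch below: the sample standard deviation is divided by the square root of the number of observations.

```python
# Standard error of the mean for the example data.
import math
import statistics

items = [14, 36, 45, 70, 105]

s = statistics.stdev(items)          # sample standard deviation (n - 1 in the denominator)
se = s / math.sqrt(len(items))

print(round(s, 2))    # 34.86
print(round(se, 2))   # 15.59
```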

Ti 83 Exponential Regression

Ti 83 exponential regression is used to compute an equation of the form y = a × b^x that best fits the relationship between two paired variables.

Formula

${y = a \times b^x}$

Where −

${a, b}$ = coefficients of the exponential equation.

Example

Problem Statement:

Calculate the exponential regression equation (y) for the following data points.

Time (min), Ti:         0    5    10   15
Temperature (°F), Te:   140  129  119  112

Solution:

Taking common logarithms turns the model into a straight line, $\log(Te) = \log(a) + Ti \times \log(b)$, so a and b are obtained from the ordinary least-squares slope and intercept. Let a and b be the coefficients of the exponential regression.

Step 1

${b = 10^{\frac{n \times \sum Ti \log(Te) - \sum Ti \times \sum \log(Te)}{n \times \sum Ti^2 - (\sum Ti)^2}}}$

Where −

${n}$ = total number of items.

${\sum Ti \log(Te) = 0 \times \log(140) + 5 \times \log(129) + 10 \times \log(119) + 15 \times \log(112) = 62.0466}$

${\sum \log(Te) = \log(140) + \log(129) + \log(119) + \log(112) = 8.3814}$

${\sum Ti = 0 + 5 + 10 + 15 = 30}$

${\sum Ti^2 = 0^2 + 5^2 + 10^2 + 15^2 = 350}$

${\Rightarrow b = 10^{\frac{4 \times 62.0466 - 30 \times 8.3814}{4 \times 350 - 30 \times 30}} = 10^{-0.0065112} \approx 0.9851}$

Step 2

${a = 10^{\frac{\sum \log(Te) - \sum Ti \times \log(b)}{n}} = 10^{\frac{8.3814 - 30 \times \log(0.9851)}{4}} = 10^{2.1442} \approx 139.39}$

Step 3

Putting the values of a and b in the exponential regression equation (y), we get:

${y = a \times b^x = 139.39 \times 0.9851^x}$
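The same fit can be reproduced off the calculator. The sketch below assumes numpy is available and performs the ExpReg-style fit by running a straight-line fit on $\log_{10}(Te)$.

```python
# Fit y = a * b**x by linear regression on log10(y), as in the worked example.
import numpy as np

ti = np.array([0.0, 5.0, 10.0, 15.0])          # time (min)
te = np.array([140.0, 129.0, 119.0, 112.0])    # temperature (deg F)

# polyfit of degree 1 returns (slope, intercept) for log10(te) = intercept + slope * ti
slope, intercept = np.polyfit(ti, np.log10(te), 1)

a = 10 ** intercept
b = 10 ** slope
print(round(a, 2), round(b, 4))   # roughly 139.39 and 0.9851
```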