Hi Everyone
I have a question on margin of error. Let's take an example: at a 90% confidence level, the margin of error (MOE) is given below, where N denotes the population size and n the sample size.

N1 = 1000, n1 = 221, margin of error = 1.52%
N2 = 500, n2 = 200, margin of error = 4.5%
N3 = 71, n3 = 67, margin of error = 5.45%

The third scenario has the highest MOE, which would mean that the findings drawn from that sample are the least precise. Is that right? Looking at the sample size, it is no doubt the smallest of the three, but looking at the population size, I am already sampling 94% of the population. Wouldn't the variability within this sample be smaller compared to the first and second samples? It just doesn't sound right to tell my client that sample 3 has the largest sampling error and is therefore less precise.
The finite population correction is used only in contexts where the population under consideration is the entire universe you want to project or generalize to. Often the intent is to project and generalize to some kind of future or super-population. I would guess that the source of those numbers did not apply the fpc.
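As a rough illustration of how much that correction matters here, below is a minimal Python sketch (illustrative only; the variable names are mine) of the fpc factor sqrt((N - n)/(N - 1)) for the three scenarios in the question:

# Illustrative sketch: the finite population correction (fpc) factor
# multiplies the standard error by sqrt((N - n) / (N - 1)).
from math import sqrt

scenarios = [(1000, 221), (500, 200), (71, 67)]  # (population N, sample n) from the question

for N, n in scenarios:
    fpc = sqrt((N - n) / (N - 1))
    print(f"N={N:4d}, n={n:3d}, sampling fraction={n/N:.0%}, fpc factor={fpc:.3f}")

# With 94% of the third population sampled, the fpc factor is about 0.24, so
# an uncorrected MOE of 5.45% would shrink to roughly 1.3% once the fpc is applied.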
If the DV is dichotomous, the width of the margin of error depends on how close the estimate is to 50%. Are the DVs the same in the three scenarios? I.e., are you reporting the MOE as a percent of the estimate?

<soapbox on> Most people, including sampling textbook authors I have spoken to, cannot cognitively (i.e., without doing some calculation) compare "margins of error" at different confidence levels. Unless there is a special sub-discipline where it is conventional to use 90%, I recommend using 95%. For most purposes in designing a study, I find that client decision makers can deal with effect size, so I recommend leaving power and confidence level at the conventional 80% and 95% and designing to detect a given effect size. In addition, many people who have had just a smattering of exposure to statistical thinking have the idea that using 90% is a sign of fudging or even deliberate disinformation. <soapbox off>
Art Kendall
Social Research Consultants
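To put numbers on those two points, here is a small Python sketch (illustrative only; the sample size is borrowed from the second scenario) of the plain Wald half-width z*sqrt(p*(1-p)/n) at p = 0.50 versus p = 0.95, and at 90% versus 95% confidence:

# Illustrative sketch: how the Wald MOE z*sqrt(p*(1-p)/n) varies with the
# estimated proportion and with the confidence level.
from math import sqrt

n = 200                          # sample size, as in the second scenario
z = {0.90: 1.645, 0.95: 1.96}    # standard normal critical values

for conf, zval in z.items():
    for p in (0.50, 0.95):
        moe = zval * sqrt(p * (1 - p) / n)
        print(f"{conf:.0%} confidence, p={p:.2f}: MOE = {moe:.1%}")

# p = 0.50 gives the widest interval (the worst case), and moving from 90% to
# 95% confidence widens the MOE by a factor of about 1.96/1.645 = 1.19.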
In reply to this post by Michelle Tan
Thanks for the reply. The DV is dichotomous and is not the same in the three scenarios. I am using the MOE, added to the value of p, to get the confidence interval at the desired precision level.

Some background info: the aim of the project is to conduct an audit to check whether the claims provision provided by our insurer is justified. So p is the proportion of justified cases; in my three scenarios, p is very close to 1, as expected. There are a total of 3100 accident records, which I have categorised into 3 categories of interest based on different criteria set for the 3 categories (note that each accident may fall under more than one category, but it will only be audited once). A sample is then drawn from each category. The findings are as follows:

N1 = 1000, n1 = 221, margin of error = 1.52%
N2 = 500, n2 = 200, margin of error = 4.5%
N3 = 71, n3 = 67, margin of error = 5.45%

i) I am using the adjusted Wald method to construct a confidence interval for each of the 3 categories and using the CI to make inferences about the total population N in each category. Is that possible? If yes, is the sampling error similar to the margin of error in this case? Does it sound right to use the confidence interval obtained from the sample to project to the population? If no, what other analysis can I do?

ii) So does that mean that for dichotomous data, regardless of how close my sample size is to the population size, the width of the margin of error only depends on how close the estimate is to 50%?
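For concreteness, here is a minimal Python sketch of the adjusted Wald (Agresti-Coull) interval described above; the count x of justified cases is hypothetical, since the posts only say p is very close to 1:

# Illustrative sketch of an adjusted Wald (Agresti-Coull) interval for a
# proportion close to 1.  The count x is hypothetical.
from math import sqrt

def adjusted_wald(x, n, z=1.645):           # z = 1.645 for a 90% interval
    p_adj = (x + z**2 / 2) / (n + z**2)     # estimate shrunk toward 0.5
    moe = z * sqrt(p_adj * (1 - p_adj) / (n + z**2))
    return p_adj, moe, max(0.0, p_adj - moe), min(1.0, p_adj + moe)

p_adj, moe, lo, hi = adjusted_wald(x=215, n=221)   # hypothetical: 215 of 221 justified
print(f"p_adj = {p_adj:.3f}, MOE = {moe:.2%}, 90% CI = ({lo:.3f}, {hi:.3f})")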
How are you calculating it? Is your MOE in percentage points, like the percentage result, or is it a percent of the percentage result? When the confidence interval is close enough to symmetric, the MOE is about 1.96 * the sampling error.

Since 1000 + 500 + 71 is 1571 and not 3100, how are you using N and n?

There are three numbers to think about: a population size, a sample size, and a number of (un)justified claims. These are used to find a percentage result and a confidence interval in percentage points.

The term "margin of error" has a variety of meanings. In planning an analysis/audit, it is the largest acceptable half-width of the 95% confidence interval. When nothing else is known about a reasonable _subjective_ expectation of the estimate, 50% is used, since it is the worst case, i.e., it has the widest confidence interval (highest standard error): "I don't want to be more than x percentage points away from what we would find if we did the whole population." The sample size then becomes very much a function of the worst-case expected result.

In reporting results of a survey, a common way for the media to do it is to call the widest half-width of the 95% confidence interval for any percentage the margin of error. It might be stated in one of these ways: "All results in percentages were within a plus or minus x percentage point 95% MOE or smaller." "The largest 95% MOE for any resulting percentage was plus or minus 5 percentage points." "The fudge factor is +/- 5 percentage points." "The 95% MOE was not larger than +/- 5 percentage points."

Auditing is one context where fpc's (finite population corrections) are common, but certainly not universal.

As you move farther away from a result of 50%, the confidence interval becomes less symmetric. There is an old rule of thumb that for results from 20% to 80% a normal approximation is usable, from 5% to 20% and from 80% to 95% a binomial approximation is usable, and from 0% to 5% and from 95% to 100% the Poisson approximation is usable. If I recall correctly (and that is a big if), it comes from Cochran. This rule of thumb comes from the days when one had to use things called "tables" to look up coefficients, etc. These days we use functions in software to do those things. Search the archives for the extension command to find confidence intervals. You probably want to use the number unjustified as the DV, since some methods allow upper confidence limits over 100%. I do not recall whether that command does that or whether it includes the fpc.

In interpreting the results, keep in mind that "justified" is not an intrinsically dichotomous construct.
Art Kendall
Social Research Consultants
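That boundary behaviour is easy to check in software. This is a rough Python sketch, not the SPSS extension command mentioned above, comparing the normal-approximation interval with an exact (Clopper-Pearson) interval for a proportion near 1; the counts are hypothetical:

# Rough sketch (not the SPSS extension command): normal approximation vs.
# exact Clopper-Pearson limits for a proportion close to 1.
# Hypothetical counts: 65 justified out of 67 audited.
from math import sqrt
from scipy.stats import beta

x, n, z, alpha = 65, 67, 1.645, 0.10     # 90% two-sided interval
p = x / n

# Normal (Wald) approximation -- can exceed 100% near the boundary.
wald = (p - z * sqrt(p * (1 - p) / n), p + z * sqrt(p * (1 - p) / n))

# Exact Clopper-Pearson limits via the beta distribution -- always within [0, 1].
cp = (beta.ppf(alpha / 2, x, n - x + 1),
      beta.ppf(1 - alpha / 2, x + 1, n - x))

print(f"p = {p:.3f}")
print(f"Wald 90% CI:            ({wald[0]:.3f}, {wald[1]:.3f})")
print(f"Clopper-Pearson 90% CI: ({cp[0]:.3f}, {cp[1]:.3f})")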
In reply to this post by Michelle Tan
I agree with the previous comment. To interpret correctly the confidence interval you get when you apply the formula +/- 1.96*sqrt(p*(1-p)/n), the normal approximation to the binomial distribution, I recommend reading "P Values are not Error Probabilities", an excellent paper by Hubbard & Bayarri. Here is the link: http://www.uv.es/sestio/TechRep/tr14-03.pdf

There is also a thread in this forum about interpreting p-values that I find very useful, but I cannot find the link.

Kind regards
In reply to this post by Michelle Tan
Thanks a lot!
I calculate the MOE with the adjusted Wald method, based on this formula:

MOE = 1.645 * sqrt(p_adj * (1 - p_adj) / (n + 1.645^2)), where p_adj = (x + z_(α/2)^2 / 2) / (n + z_(α/2)^2).

My total population should be 1300 instead of 3100; apologies for the error. As some cases may fall under more than one category based on the criteria set, the total population is 1300 and not the sum 1000 + 500 + 71.

Since my population size is known, does it make sense to adjust the MOE using the finite population correction as below?

MOE after FPC = 1.645 * sqrt(p_adj * (1 - p_adj) / (n + 1.645^2)) * sqrt((N - n) / (N - 1))

Will there be an issue of over-adjustment if I use the above? If not, how else can I take population size into account when calculating the MOE? It just doesn't make sense to tell people that, regardless of the size of my total population, as long as my sample size is small my margin of error is high. For example, for N3 I am sampling 94% (67/71) of the total population, yet the margin of error is 5.45%, compared with N1, where the proportion of sampled cases is only 221/1000 but the margin of error is 1.52%.
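For what it is worth, the effect of that fpc term can be sketched numerically. Below is an illustrative Python snippet; only N and n come from the scenarios, while the justified counts x are hypothetical (chosen so that p is close to 1, as described earlier):

# Illustrative sketch: adjusted Wald MOE with and without the finite
# population correction sqrt((N - n)/(N - 1)).  The counts x are hypothetical.
from math import sqrt

def adj_wald_moe(x, n, z=1.645):
    p_adj = (x + z**2 / 2) / (n + z**2)
    return z * sqrt(p_adj * (1 - p_adj) / (n + z**2))

scenarios = [(1000, 221, 215), (500, 200, 190), (71, 67, 65)]   # (N, n, hypothetical x)

for N, n, x in scenarios:
    moe = adj_wald_moe(x, n)
    fpc = sqrt((N - n) / (N - 1))
    print(f"N={N:4d}, n={n:3d}: MOE = {moe:.2%}, MOE with fpc = {moe * fpc:.2%}")

# In the third scenario, where 94% of the population is audited, the fpc cuts
# the MOE by roughly a factor of four, matching the intuition that sampling
# nearly the whole population leaves little room for sampling error.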