Could anyone help me with how to calculate the p-values for each cell of the following table? Regards,
Does "Academic" mean graduate school, i.e., after
university?
How did you obtain this data? Do you have the raw data? What is the meaning of the counts and percentages in the table? What questions are you using the data to answer? Art Kendall Social Research ConsultantsOn 5/3/2013 2:37 AM, salman_stats [via SPSSX Discussion] wrote:
Art Kendall
Social Research Consultants |
In reply to this post by salman_stats
After puzzling over this a bit, and using Art's clue about percentages, I figure that these are counts of people who were satisfied with various aspects of care. You need to create the set of six 5x2 tables, Yes/No, so that from 237 (98.3) you derive 237, 4; 263 (97.0) becomes 263, 8; and so on. How you do this most conveniently depends on what form of data you have available. From that table, I divided N by the fraction, rounded off, and subtracted N from that total count.

(I notice that the total counts vary by one or two in a column, showing some Missing. Merely dropping the Missing might be something that someone could criticize: does the Missing represent an equivocating attitude, or Not Applicable? An equivocation might be considered, entirely properly, as "not Yes" in a table labeled Yes / Not Yes -- and you can get rid of the problem of Missing data by scoring it that way.)

The CROSSTABS procedure can provide cell information that includes the deviation from the overall mean. I think the z-score includes a p-value.

-- Rich Ulrich
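A minimal sketch of that back-calculation in SPSS syntax (the variable names n and pct are assumptions: n is the reported Yes count and pct the reported Yes percentage for one cell):

* Back out the implied column total from the count and percentage.
compute total = rnd(n / (pct / 100)).
* The difference is the implied No count.
compute no = total - n.
formats total no (f5.0).
execute.
* For example, 237 at 98.3 percent gives total = 241, hence no = 4.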
Dear Rich,
These calculations were given to me by a researcher, and he wants to calculate the p-values for each cell. The tables were produced with the Crosstabs option in SPSS, and the column frequencies and column percentages were calculated from those tables. Is it possible to calculate each cell's p-value with the chi-square contingency formula for a 6x6 table, (observed frequencies - expected frequencies)² / (expected frequencies)²? Regards, Salman
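For reference (a standard formula added here for clarity, not part of the original post), the Pearson chi-square statistic for an r x c table is

\[
\chi^2 \;=\; \sum_{i=1}^{r}\sum_{j=1}^{c} \frac{(O_{ij}-E_{ij})^2}{E_{ij}},
\qquad
E_{ij} \;=\; \frac{(\text{row } i \text{ total})\,(\text{column } j \text{ total})}{\text{grand total}},
\]

with (r - 1)(c - 1) degrees of freedom. The divisor is the expected frequency itself, not its square, and the resulting p-value refers to the table as a whole rather than to any individual cell.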
Please consult an intro stats book and ponder the assumptions of the Chi Square test statistic.
What do you think would be appropriate to use for 'expected frequencies'? What does this term 'expected frequencies' mean to you? Folly?
--
Please reply to the list and not to my personal email.
Those desiring my consulting or training services please feel free to email me. --- "Nolite dare sanctum canibus neque mittatis margaritas vestras ante porcos ne forte conculcent eas pedibus suis." Cum es damnatorum possederunt porcos iens ut salire off sanguinum cliff in abyssum?"
If we are not considering the assumptions of the chi-square test for the proportions and frequencies in the table, what should I do for further calculation? Could you please tell me the most suitable option or method for calculating the p-values for this table?
What do you mean by "calculate the P-values of this table"?
What do you wish to learn from these data? "What should I do for further calculation?" is hardly a question that can be readily answered, considering your question is ill-defined at best. Back up a few paces, carefully define your question, and perhaps describe the process by which this data table came about. Do you have access to the raw data or merely this summary table?
Salman,

This "table" may appear to have been constructed as a cross-tab, but it does not appear to me to represent a single 6x6 table. Rather, it is the conjunction of 6 separate 1-way tables, where the columns are instances of the nominal category Ed Level and the rows are actually separate service items. Otherwise, you need to explain what exactly is indicated by each row.

In any case, I know of no procedure which will give you a P-value for the interior cells, although perhaps you could get a separate Chi-squared value for each row, as indicated by the P-value column on the right.

... Mark Miller
It looks to me like six separate 5x2 (or 2x5) tables, as Rich Ulrich observed. If that is correct, then one could do something like this to get a chi-square for each 5x2 table:
new file.
dataset close all.

data list list / row col (2f1) kount1 (f5.0) pct (f5.1).
begin data
1 1 237 98.3
1 2 263 97.0
1 3 257 94.1
1 4 233 97.1
1 5 83 92.2
2 1 223 92.1
2 2 237 87.1
2 3 224 81.8
2 4 201 83.8
2 5 78 84.8
3 1 222 91.4
3 2 247 90.5
3 3 229 83.0
3 4 211 88.3
3 5 72 78.3
4 1 217 89.3
4 2 234 85.7
4 3 214 77.5
4 4 207 86.6
4 5 68 74.7
5 1 224 92.2
5 2 248 90.8
5 3 236 85.5
5 4 217 90.8
5 5 77 83.7
6 1 223 92.1
6 2 247 90.5
6 3 230 83.6
6 4 214 89.5
6 5 76 82.6
end data.

compute kount2 = rnd(kount1*100 / pct) - kount1.
formats kount2 (f5.0).
list.

VARSTOCASES
 /MAKE kount FROM kount1 kount2
 /INDEX=Y(2)
 /KEEP=row col
 /NULL=KEEP.

split file by row.
weight by kount.
crosstabs col by Y / cell = count row / stat = chisqr.
split file off.

However, the OP is asking for 36 p-values, apparently. So I'm not sure what they want. If they want p-values for 98.3% vs 1.7% (cell 1 in the original table), 97% vs 3% (cell 2, original table), etc., that seems rather pointless.
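If p-values for the interior cells are really wanted, one hedged extension of the CROSSTABS line above (my addition, not part of the original syntax) is to also request adjusted standardized residuals for each cell; CROSSTABS prints the residuals themselves but not p-values for them (see Rich Ulrich's follow-up below on converting the z-scores):

* Variant of the crosstabs command above: also print adjusted
* standardized residuals (approximate z-scores) for each cell.
crosstabs col by Y / cells = count row asresid / stat = chisq.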
--
Bruce Weaver
bweaver@lakeheadu.ca
http://sites.google.com/a/lakeheadu.ca/bweaver/
"When all else fails, RTFM."

PLEASE NOTE THE FOLLOWING:
1. My Hotmail account is not monitored regularly. To send me an e-mail, please use the address shown above.
2. The SPSSX Discussion forum on Nabble is no longer linked to the SPSSX-L listserv administered by UGA (https://listserv.uga.edu/).
"However, the OP is asking for 36 p-values, apparently. So I'm not sure what they want. If they want p-values for 98.3% vs 1.7% (cell 1 in the original table), 97% vs 3% (cell 2, original table), etc, that seems rather pointless."
Oops...the OP appears to want 30 p-values, not 36. Six rows x 5 columns in the original table = 30, not 36.
In reply to this post by Bruce Weaver
Here, Bruce shows one way to get the six tables that it is reasonable to get.

The "separate p-levels" is not a very good concept, but - if you insist on them - you get them from the standardized residuals for each cell. I mentioned the residuals in my original reply. Apparently, these z-scores do not come with p-levels provided in the table. That means you have to separately apply the 1.96 cutoff for a 5% test, or look them up. Since you will be doing multiple implicit tests, it is best to report those as "nominal" p-values, and talk about them as tersely as possible.

-- Rich Ulrich
In reply to this post by Bruce Weaver
One could obtain the likelihood ratio tests by employing GENLIN before VARSTOCASES, as shown below.

-Ryan

compute total=sum(kount1,kount2).
execute.

split file by row.

GENLIN kount1 OF total BY col (ORDER=ASCENDING)
 /MODEL col INTERCEPT=YES DISTRIBUTION=BINOMIAL LINK=LOGIT
 /MISSING CLASSMISSING=EXCLUDE
 /PRINT SUMMARY.

split file off.