Dear list: The following macro (thanks to David Marso) generates 100 random samples from a population of 10,000 cases and computes a chi-squared value for each sample. Is there a way now to get the actual chi-squared values placed into a new data file that simply contains the 100 chi-squared values? I have seen it done in another macro but am not sure how to implement it with the macro I am using. Advice much appreciated.

DEFINE myfreq (vars = !CHAREND('/'))
!DO !I=1 !TO 100
TEMPORARY.
SAMPLE 100 FROM 10000 /* use your actual values */.
NPAR TESTS /CHISQUARE=!vars /EXPECTED=.16 .20 .24 .13 .14 .13 /MISSING ANALYSIS.
!DOEND
!ENDDEFINE.
myfreq vars = id/.

Martin F. Sherman, Ph.D.
Professor of Psychology
Director of Masters Education in Psychology: Thesis Track
Loyola University Maryland
Department of Psychology
222 B Beatty Hall
4501 North Charles Street
Baltimore, MD 21210
410-617-2417
OMS?
--
Bruce Weaver
bweaver@lakeheadu.ca
http://sites.google.com/a/lakeheadu.ca/bweaver/
"When all else fails, RTFM."

PLEASE NOTE THE FOLLOWING:
1. My Hotmail account is not monitored regularly. To send me an e-mail, please use the address shown above.
2. The SPSSX Discussion forum on Nabble is no longer linked to the SPSSX-L listserv administered by UGA (https://listserv.uga.edu/).
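To spell that suggestion out: below is a minimal sketch of how OMS could wrap the myfreq macro above so that the 100 chi-square values land in a new data file. The output file name is arbitrary, and the assumption that the row labels arrive in a string variable named Var1 (with the chi-square values in a column named for the test variable, here id) should be checked against what OMS actually writes; the exact command and subtype labels can be confirmed from Utilities > OMS Identifiers.

* Sketch only: route every NPAR TESTS 'Test Statistics' table to a data file.
OMS
  /SELECT TABLES
  /IF COMMANDS=['NPar Tests'] SUBTYPES=['Test Statistics']
  /DESTINATION FORMAT=SAV OUTFILE='chisq_results.sav'.
myfreq vars = id/.
OMSEND.
* Each captured table contributes three cases (Chi-Square, df, Asymp. Sig.);
* keep only the chi-square rows.
GET FILE='chisq_results.sav'.
SELECT IF (Var1 = 'Chi-Square').
EXECUTE.

With the macro looping 100 times, the captured file should then hold 100 cases, one chi-square value per random sample.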
In reply to this post by msherman
Martin

No idea, so I've forwarded this to the list. I'm sure David M will come up with something to put the chi-square values in a column in the Data Editor.

John

From: Martin Sherman [mailto:[hidden email]]

John: I got the program to run but now I want to modify it so that I can use the chi-squared statistic:

NPAR TESTS /CHISQUARE=MEANAGE /EXPECTED=.1 .3 .2 .3 .4 .1 /MISSING ANALYSIS.

The question I have then is where do I place the NPAR TESTS. I think it should go somewhere below (see below). I am using ID as my variable of interest.

FREQ SAMPLE.
AGGREGATE OUTFILE * / BREAK SAMPLE
 / MEANAGE = MEAN(id)
 / PctVHapp=PIN(id,1,1).

From: John F Hall [[hidden email]]

Martin

It's on page SSRC Survey Unit Quality of Life surveys on my site (item 4: SPSS saved file for main GB survey 1975). Don't mind the upper case labels: it's all they had in SPSS in 1975.

John

John F Hall (Mr) [Retired academic survey researcher]
Email: [hidden email]
Website: www.surveyresearch.weebly.com
SPSS start page: www.surveyresearch.weebly.com/spss-without-tears.html

From: Martin Sherman [[hidden email]]

John: could you send me the 'ql4gb1975.sav' file? I first want to run his syntax and then adjust it for my situation. I have 10,000 data points that vary according to m and m colors (e.g., 1600 red, 1400 blue, etc.) and I want to get samples of size 100, do this for 100 trials, and use the single-variable chi-squared test with the expected proportions set.

mfs

From: SPSSX(r) Discussion [[hidden email]] On Behalf Of John F Hall

David Marso did something like this for me several months ago when I wanted to show how means and other statistics can vary for 100 sub-samples of size n from the main sample of size N. This was a prelude to explaining sampling variation of the mean in a class, and the exercise was supposed to yield a nice series of means from each student which would be approximately normally distributed. It never did, but they got the general idea. I may not have it on this computer, but you may be able to find something via Nabble, unless DM can provide a short bit of syntax again :)

Panic over: I found it. See below.

John F Hall (Mr) [Retired academic survey researcher]
Email: [hidden email]
Website: www.surveyresearch.weebly.com
SPSS start page: www.surveyresearch.weebly.com/spss-without-tears.html

SPSS syntax to provide 100 samples and calculate means of life satisfaction, % very happy, and mean/median age.

*Marso sample .
* data set 'ql4gb1975.sav' .
* variables: var544 (happy), var545 (lifesat), age .
compute happy = var544 .
compute lifesat = var545 .
compute caseid = serial .
execute .
*** OK here we go **.
**First of all oversample **.
LOOP SAMPLE=1 TO 100.
+  DO IF UNIFORM(1) < .12 .
+  XSAVE OUTFILE "Samples.sav" / KEEP caseid lifesat age happy sample.
+  END IF.
END LOOP.
*ONE OF THE ONLY Places you NEED an EXECUTE *.
EXECUTE.
GET FILE "samples.sav".
FREQ SAMPLE.
* Using same ideas as what I posted to Cindy Gregory * .
COMPUTE SCRAMBLE=UNIFORM(1).
SORT CASES BY SAMPLE SCRAMBLE.
IF $CASENUM=1 OR LAG(SAMPLE) NE SAMPLE GPCount=1.
IF MISSING(GPCount) GPCount=LAG(GPCount)+1.
*ONE OF THE ONLY Places you NEED an EXECUTE *.
EXECUTE.
SELECT IF GPCount LE 300.
FREQ SAMPLE.
AGGREGATE OUTFILE * / BREAK SAMPLE
 / MLIFESAT MEANAGE = MEAN(lifesat age)
 / PctVHapp=PIN(happy,3,3).
freq MLIFESAT MEANAGE PctVHapp /ST STDDEV SEMEAN MEAN /HIS NOR /FOR not .
disp lab .
freq age /for not /his nor /sta mea std sem .
In Data View it produced this:

101 7.98 48.47 37.1
101 8.04 48.84 39.0
101 7.70 45.05 33.6
101 7.76 46.51 30.7
101 8.05 47.57 38.2
101 7.95 49.92 49.0
101 7.70 47.91 39.3
101 7.83 46.82 39.6
101 7.86 48.86 38.3
101 8.17 50.07 44.8
101 7.75 48.56 39.1
101 8.08 46.87 44.4
101 7.86 47.87 35.2
101 7.82 47.66 38.8
101 8.05 46.20 39.2
101 8.04 45.98 39.8
101 8.15 46.43 36.4
101 7.85 48.16 34.3
101 7.73 47.69 29.5
101 7.69 47.13 32.8
101 7.88 45.64 33.0
101 8.04 43.84 44.3
101 8.02 47.11 33.9
101 7.82 47.28 38.8
101 7.75 49.66 39.5
101 8.01 51.24 47.0
101 8.00 47.78 41.1
101 7.75 47.09 35.5
101 7.95 45.63 44.5
101 8.06 48.41 36.8
101 7.95 47.90 39.1
101 7.92 45.60 40.0
101 7.70 48.33 37.8
101 7.81 47.92 35.2
101 7.48 45.92 35.9
101 7.88 47.26 35.9
101 7.80 48.01 41.5
101 7.69 46.35 36.5
101 7.54 47.33 31.4
101 7.90 47.65 37.1
101 7.99 47.00 44.2
101 7.73 47.45 35.0
101 8.22 48.72 55.3
101 7.47 47.36 39.8
101 7.90 45.46 39.5
101 7.87 47.36 35.6
101 7.89 49.56 32.7
101 7.80 43.92 31.9
101 7.73 45.24 28.0
101 7.78 49.96 40.4
101 8.01 46.58 43.4
101 7.80 47.31 29.4
101 7.91 47.27 37.5
101 7.94 46.89 41.1
101 7.70 48.46 41.1
101 7.62 46.79 36.3
101 8.01 46.79 39.0
101 7.62 48.49 35.1
101 8.05 47.62 45.5
101 8.08 48.05 47.7
101 7.70 49.87 36.7
101 7.86 47.18 32.3
101 8.24 47.21 43.2
101 7.76 46.51 36.0
101 7.97 48.15 42.6
101 8.23 46.23 48.2
101 7.87 49.18 40.9
101 7.91 46.48 43.0
101 7.74 45.99 45.0
101 8.07 46.96 36.2
101 7.83 43.11 37.3
101 7.78 46.80 35.9
101 7.97 47.44 37.6
101 7.81 47.98 31.1
101 8.12 48.19 46.6
101 7.89 47.26 36.0
101 7.77 47.86 35.6
101 7.46 47.72 32.4
101 8.11 46.54 44.0
101 7.64 47.02 36.4
101 8.04 46.85 42.2
101 7.84 47.73 38.6
101 7.78 45.02 40.0
101 8.05 49.07 43.2
101 7.87 49.58 36.8
101 7.86 45.89 41.1
101 8.06 42.93 40.7
101 7.89 49.43 43.5
101 8.01 47.14 44.4
101 7.95 47.41 37.4
101 7.81 48.30 37.9
101 8.12 45.36 40.2
101 7.71 46.57 41.7
101 7.97 48.35 40.8
101 7.95 46.47 37.5
101 7.66 47.06 35.2
101 8.01 45.85 36.1
101 7.71 47.45 35.7
101 7.95 48.44 43.2
101 8.21 46.29 40.4

I just ran this syntax on the *.sav file and got some nice histograms with almost normal distributions:

freq mlifesat meanage pctvhapp /for not /his nor.

Just what the doctor ordered.

From: SPSSX(r) Discussion [[hidden email]] On Behalf Of Martin Sherman

Dear list: Is there a way to embed repetitive tests (single-variable chi-squared) within a macro while at the same time obtaining a random sample from a data set? That is, I have a large data set with 10,000 cases and want to sample 100 cases at a time, perform this 100 times, and obtain a single-variable chi-squared value for each random sample.

Martin F. Sherman, Ph.D.
Professor of Psychology
Director of Masters Education in Psychology: Thesis Track
Loyola University Maryland
Department of Psychology
222 B Beatty Hall
4501 North Charles Street
Baltimore, MD 21210
410-617-2417
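On Martin's question above about where the NPAR TESTS belongs in the Marso approach: the trimmed samples carry a SAMPLE indicator, so one option (again only a sketch, borrowing the expected proportions from Martin's first post and assuming the test variable is id, the one he said he is using) is to slot the test in after the SELECT IF GPCount step and run it once per sample via SPLIT FILE. Note that AGGREGATE OUTFILE * replaces the working file, so the test has to come before it (or the aggregate can be written to a named file instead).

* Sketch only: assumes the trimmed samples are still the active file
* (i.e., right after SELECT IF GPCount ... / FREQ SAMPLE).
SORT CASES BY SAMPLE.
SPLIT FILE SEPARATE BY SAMPLE.
NPAR TESTS /CHISQUARE=id /EXPECTED=.16 .20 .24 .13 .14 .13 /MISSING ANALYSIS.
SPLIT FILE OFF.

Wrapping the NPAR TESTS line in the same OMS/OMSEND pair sketched earlier would route the hundred Test Statistics tables to a data file; it is worth checking how the SAMPLE identifier is carried in the captured file rather than relying on row order.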
Bruce already answered the question: OMS!

John, please in the future reply under the original thread.

David
Please reply to the list and not to my personal email.
Those desiring my consulting or training services please feel free to email me.
---
"Nolite dare sanctum canibus neque mittatis margaritas vestras ante porcos ne forte conculcent eas pedibus suis." Cum es damnatorum possederunt porcos iens ut salire off sanguinum cliff in abyssum?"