Hello List- Apologies if this topic has been addressed before, but I
did not find any results when I searched the archives. I am using SPSS 19 and exporting data into XLS (per client
request). I have many numeric MSRP fields with a significant amount of missing
data. The fields are currently formatted as DOLLAR 8.2, but using NUMERIC 8.2
has the same results. When exported, the missing values appear as #NULL! in
XLS. Is there a tweak that I can put in my save syntax (below) to change #NULL!
to something else? When I open the spreadsheet, it is difficult to see the
actual values in the middle of all those #NULL!’s!

I found this in the syntax help, but when I tried it out, it didn’t appear to work for XLS files. If you select RECODE on the /MISSING subcommand, how do you specify what to recode it to?

[/MISSING={IGNORE}]
          {RECODE}

SAVE TRANSLATE OUTFILE='MyFileName.xls'
  /TYPE=XLS
  /VERSION=8
  /MAP
  /REPLACE
  /FIELDNAMES
  /CELLS=VALUES.

I am trying to stay away from recoding the missing values
to zero (or any other numeric) prior to exporting, because I don’t know
if my client will be taking averages of the data once it is in XLS. Also, this
export will be performed exclusively through syntax, so drop-down solutions won’t
work in this situation. I realize that I can open the XLS file later and replace
the nulls with nothing, but my goal in all of my repetitive tasks is always “zero
manual intervention required”. In fact, if I could make SPSS email out
the file automatically, I’d script that in too!

Any tips would be greatly appreciated.
Thank you,
Heidi Green
Technically, system-missing values are
"unknown" and #NULL! was deemed the closest Excel value to unknown.
I don't know of any workaround in SPSS Statistics at this time, other
than to recode sysmis into a value, at which point it's no longer sysmis.
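A minimal sketch of that recode (hypothetical variable names; the sentinel should lie outside the valid data range):

* -999 is assumed to lie outside the valid MSRP range.
RECODE msrp1 TO msrp5 (SYSMIS=-999).
EXECUTE.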
It's easy enough to search and replace those #NULL! values in Excel.
In reply to this post by Rick Oliver-3
Thank you for the quick replies! Looks like I’ll be continuing my manual replacements for now. Have a great day!
You could consider this approach:
If you know for certain what the range of the scores on the variables is, and -99999 does not belong to it, you could use this syntax prior to exporting the data (unchecked syntax):

DO REPEAT a = var_1 TO var_n.
  IF MISSING(a) a = -99999.
END REPEAT.

Then subsequently save the data. HTH

Maurice Vergeer
Department of Communication, Radboud University (www.ru.nl)
In reply to this post by Heidi Green
How about exporting to a .CSV file? I just tried a small example (using v18 for Windoze), and cells with SYSMIS are blank. Double-clicking the .CSV file opens it in Excel, and it can then be saved as an Excel file if desired. That's still a manual step, but possibly less onerous than doing the Replace-All for #NULL!.
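A minimal sketch of the CSV route, reusing the options from Heidi's SAVE TRANSLATE (unchecked):

SAVE TRANSLATE OUTFILE='MyFileName.csv'
  /TYPE=CSV
  /REPLACE
  /FIELDNAMES
  /CELLS=VALUES.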
HTH.
--
Bruce Weaver
bweaver@lakeheadu.ca
http://sites.google.com/a/lakeheadu.ca/bweaver/
"When all else fails, RTFM."
In reply to this post by Heidi Green
As Rick Oliver said, SPSS is bound to create those #NULLs in the Excel export, but there is nothing stopping you from creating a simple Basic script, run with the SCRIPT command, to open the generated file and replace those values with blanks or whatever else you want. And, as a bonus, you can have the script email the file to your client!
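The syntax-side hook is then a single command (hypothetical script name; the .sbs file itself would hold the Excel search-and-replace and the e-mailing logic):

SCRIPT 'C:\MyScripts\ReplaceNulls.sbs'.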
Jon Peck
Senior Software Engineer, IBM
Hi,

I am thinking of comparing Logistic Regression and Linear Regression's accuracy. For Linear Regression I am looking at the Adjusted R-Square, and for Logistic Regression at the Classification Table's Overall Percentage, to decide which model to use.

My query is: for Logistic Regression, should I use the Nagelkerke R Square instead of the Overall Percentage to make a fair comparison between Linear Regression and Logistic Regression?

Thanks for any help.
Dorraj Oet
In reply to this post by Jon K Peck
Dear all,

I have 3 variables as follows:
Var01 – yes=1, no=2, don’t know=97
Var02 – yes=1, no=2, don’t know=97
Var03 – yes=1, no=2, don’t know=97

and I wish to construct a new variable var04 indicating whether:
- all responses to var01 through var03 are ‘yes’
- at least two responses to var01 through var03 are ‘yes’
- at least one response to var01 through var03 is ‘yes’
- no ‘yes’ at all

It will be much appreciated if you kindly give me some guidance.

regards,
Khaing Soe
I think you mean “only” rather than “at least”. From your data editor: File > New > Syntax. Then write:

COUNT var04 = var01 TO var03 (1).
FREQ var04.

. . . and press the green triangle. If you’re new to SPSS, you should look at the tutorials on my site.

John Hall
In reply to this post by Khaing Soe-2
Look at the COUNT command.
In reply to this post by Jarrod Teo-2
When I did this years ago, I did it by comparing power. I generated a data set with a continuous outcome, Y, and a bunch of predictors assuming a linear relation with Y. Then I dichotomized Y and ran linear regression with Y and logistic with the dichotomized Y. Of course this is biased in favor of regression, since I assumed a linear relation with Y; however, it would be possible to simulate other relations of the Xs with Y to see the difference. In any case, I simulated 1,000 data sets and compared the significance of each test.

Dr. Paul R. Swank, Professor and Director of Research
Children's Learning Institute
University of Texas Health Science Center-Houston
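A sketch of generating one such data set (1,000 cases, a single predictor, an assumed linear relation; unchecked):

INPUT PROGRAM.
LOOP #i = 1 TO 1000.
  COMPUTE x = RV.NORMAL(0,1).
  COMPUTE y = 0.5*x + RV.NORMAL(0,1).
  END CASE.
END LOOP.
END FILE.
END INPUT PROGRAM.
* Dichotomize the outcome for the logistic run.
COMPUTE ybin = (y > 0).
EXECUTE.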
In reply to this post by Jarrod Teo-2
> I am thinking of comparing Logistic Regression and Linear Regression's
> accuracy. For Linear Regression I am looking at the Adjusted R-Square,
> and for Logistic Regression at the Classification Table's Overall
> Percentage, to decide which model to use.

So, you will correct for chance (adjusted R-square) in one case and not in the other? That seems like introducing an unneeded bias. If you are using a dichotomous outcome, this is technically the same as discriminant function for two groups. There is ample literature on comparing d.f. and logistic regression, though much of it is superficial.

> My query is: for Logistic Regression, should I use the Nagelkerke R Square
> instead of the Overall Percentage to make a fair comparison between
> Linear Regression and Logistic Regression?

I have never seen any source that says that any version of R^2 for logistic provides a "fair comparison" to any OLS R^2. Don't bother using it, I think, unless it is for curiosity. I think that you use at least two criteria, in order to balance for biases in either direction.

1) The correlation of the predicted score or probability with the outcome dichotomy is straightforward. The presence of cases that are strongly over-predicted will tend to weaken the r for the logistic examples.

2) The proportion-correctly-classified is less straightforward. Especially when the group sizes are unequal, there *is* a choice about where to draw the dividing line for classifying. The odds ratio of the resulting 2x2 table is, quite arguably, the proper criterion for optimizing the result; but if the groups are divided 90-10 (say), the "overall percentage correct" is likely to be highest when all cases are assigned to that group... even though, practically and statistically, that is usually a worthless result. (Adjusting the "prior probabilities" in discriminant function tends to make a vast over-correction, by the way.) But if you are using equal Ns, then the correct assignments is an acceptable alternative.

If you are starting with a continuous outcome for regression, and throwing away information to dichotomize for logistic, then correlation and percentage will give two ways to quantify the loss of information.

-- Rich Ulrich
Some time ago I expressed my view in this forum that the Classification Table in Logistic Regression is a misleading instrument. The reason is that probabilities are about groups of entities (“populations”) and not individuals. Individuals may be said to “have” a certain probability (p) of something as a shortcut to mean that they belong to a group or population where that “something” occurs to a proportion “p” of its members. Even when p is high, there is always some proportion 1-p to which the event does not occur, their high probability notwithstanding. Insofar as all available information has gone into determining p, the reasons why Dick suffered the event whilst Jane did not, both having the same p, are strictly indeterminate. Not fifty/fifty, I mean strictly indeterminate. If p>0.50 you may bet that both Dick and Jane would suffer the event, but you actually do not have a clue about the individual outcome. Such a rule of decision (bet that the event occurs whenever p>0.50) only ensures minimizing the number of errors in a large number of such bets, but says nothing about the individual case. Here, however, I am talking about prediction, not about rules of decision, which are slightly different. That is the theoretical reason.

The practical reason is that p is seldom around 0.50. For low or high p, say 0.10 or 0.90, the rule about p>0.50 may be wildly misleading. For instance, if the average p=0.10, and a log reg model is used to refine the probabilities, it is quite possible that not one case gets p>0.50, i.e. all combinations of predictor values yield p<0.50, and nonetheless 10% of cases suffer the event, even if everyone had only a low probability. So, in that case, predicting the occurrence of the event for anybody with p>0.50 would result in not getting even one event right. All will be missed by the rule.

Seeing this, some clever souls have imagined a modified rule for the classification table: change the threshold value away from 0.50. For instance, you may think that everybody with p above the average is more likely to suffer the event, e.g. Dick who has p=0.11. So Dick will be predicted to suffer the event even if he has a 0.89 probability of NOT suffering the event. In fact, there might be a lot of people with predicted p>0.10, but only a fraction would actually suffer the event. The same, contrario sensu, goes for high probabilities such as 0.90. Using a ROC curve to find the most sensitive threshold is often recommended, and that is slightly better, but suffers the same theoretical problems (and many of the empirical ones as well).

For my part, I think the test for probabilities is that observed proportions should be highly correlated with predicted probabilities. If a group of people has a predicted probability of 0.35, one should expect the observed proportion of events among those people to be near 0.35. This test could be defined in several ways. The one I like is the basis of the Hosmer-Lemeshow test, but not the final chi-square-like number. What you use are the risk deciles, i.e. people divided into ten groups of equal size with increasing probability of the event. Ideally, the observed frequency of the event in each decile should match the predicted one (i.e. the mean of the predicted probabilities of people in each decile). Of course, one may use quintiles, or smaller percentiles (e.g. 20 groups with 5% of cases each). The linear correlation coefficient between the ten (or twenty) mean predicted and mean observed probabilities is your test.

Or you can use the Hosmer-Lemeshow indicator, which is similar to chi square, except that this time you should be happy whenever your chi-square is LOW (meaning small differences between observed and expected proportions). A chart of observed and predicted mean probabilities is also helpful, to locate the level of probability where discrepancies are larger.

Besides risk percentiles, one may also form groups using combinations of predictor values: for instance, if age, employment and gender are your predictors, one group would include all working females aged 15-24, and so on for every other combination of work, age and gender. In case one or more predictors are continuous, you may use binning to form a relatively low number of groups (as with age in this example). The same rule applies in this case: you look for agreement between expected and observed probabilities within each group. The first approach (risk deciles) ensures relative homogeneity of expected probabilities within each group, but preserves some within-group variation. In the second approach, if you use all the predictors to define groups, then the expected probability will be the same for all members of each group. One problem is that you may end up with too many groups, some of them scarcely populated. Thus I prefer the percentile approach.

Hope this helps re-thinking this problem.

Hector
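A minimal sketch of the risk-decile check (hypothetical variable names; outcome assumed coded 0/1; unchecked):

LOGISTIC REGRESSION VARIABLES=outcome WITH x1 x2 x3
  /SAVE=PRED(phat).
* Split cases into ten risk deciles of the predicted probability.
RANK VARIABLES=phat (A) /NTILES(10) INTO decile.
* Mean predicted probability vs. observed proportion per decile.
AGGREGATE OUTFILE=* /BREAK=decile
  /mean_pred=MEAN(phat)
  /obs_prop=MEAN(outcome).
CORRELATIONS /VARIABLES=mean_pred obs_prop.

A high correlation, with obs_prop tracking mean_pred across the deciles, is the pattern one hopes to see.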
Hi all,

I am just learning to use linear mixed-effects modeling. While I have many questions about this procedure and have been slowly working through tutorials, I have a simple question regarding reference categories. One of the fixed effects in some sample data I created is coded 0=control, 1=treatment. The Estimates of Fixed Effects table automatically uses the highest numeric category as the refcat. While I can recode this, is there a way to tell MIXED which category to use as the refcat? The syntax help doesn't appear to show a way to do this except in the EMMEANS subcommand. Have I missed this? I'm using version 14.0.

Thanks,
Carol
Not that I'm aware. Maybe someone else knows of a way to specify a reference category.

Ryan
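A sketch of the recode workaround Carol mentions (hypothetical names; unchecked): flip the codes so the desired reference category carries the highest value.

* After the flip, control (now coded 1) becomes the reference category.
RECODE group (0=1) (1=0) INTO group_r.
VALUE LABELS group_r 0 'treatment' 1 'control'.
EXECUTE.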
In reply to this post by John F Hall
Dear Bruce and all,

Thanks, and sorry for the confusion caused by my first message. What I want to construct is a new variable var04 with 4 response categories, not just a count. For example:
- 1 for all 'yes' to var01 through var03
- 2 for two 'yes' to var01 through var03
- 3 for one 'yes' to var01 through var03
- 4 for no 'yes' at all

Please give some guidance as needed.

regards,
Khaing Soe
In reply to this post by Jarrod Teo-2
What's the purpose of such a comparison? Put another way, what exactly are you trying to demonstrate? Also, what type of data do you plan on using to perform this comparison?

Ryan
In reply to this post by Khaing Soe-2
Let's see. You have four categories, and their labels are effectively:
zero 'yes'
one 'yes'
two 'yes'
three 'yes'

That sure looks like a COUNT to me. If you want to reverse the scoring, so that '0' becomes 4, you can subtract the raw count from 4.

-- Rich Ulrich
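A sketch of that (unchecked), yielding 1 = all 'yes' through 4 = no 'yes':

* Count the 'yes' (=1) responses across the three items, then reverse-score.
COUNT nyes = var01 TO var03 (1).
COMPUTE var04 = 4 - nyes.
EXECUTE.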
In reply to this post by Khaing Soe-2
Just do what I said and you’ll get what you want. You can add labels inside the data editor or use syntax with:

VAR LABELS … .
VAL LAB … .

You can also run a check by:

CROSSTABS var01 TO var03 BY var04.