Comparing ordinal variable in k Independent samples


Comparing ordinal variable in k Independent samples

ANDRES ALBERTO BURGA LEON

Hello to everybody:

I need to check for differences among four independent samples that each wrote a short essay (rated on a 3-point scale: good, medium, poor). I presume I could use the Kruskal-Wallis H test or a similar non-parametric test, but in the new PASW 18 non-parametric tests the "Test Field" option only accepts variables measured at the scale level. Can't I use ordinal variables? What test could I use?


I also need to check for differences among four independent groups, but with a 7-point ordinal variable.

Thanks


Mg. Andrés Burga León
Coordinador de Análisis e Informática
Unidad de Medición de la Calidad Educativa
Ministerio de Educación del Perú
Calle El Comercio s/n (espalda del Museo de la Nación)
Lima 41
Perú
Teléfono 615-5840


Re: Comparing ordinal variable in k Independent samples

Bruce Weaver
Administrator
Ordinal logistic regression (via the PLUM procedure) might be suitable.  How large is your sample?  And how many subjects fall into each of the 3 categories?  If your data don't meet all the assumptions for ordinal logistic regression, you could try multinomial logistic regression (NOMREG) instead.  It assumes only nominal scale measurement for the outcome, not ordered categories.
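A minimal syntax sketch of both suggestions, assuming hypothetical variable names 'rating' (1 = poor, 2 = medium, 3 = good) and 'group' (coded 1 to 4); adjust to your own data:

* Ordinal (proportional odds) logistic regression, with the test of parallel lines.
PLUM rating BY group
  /LINK = LOGIT
  /PRINT = FIT PARAMETER SUMMARY TPARALLEL.

* Fallback that treats the outcome as nominal.
NOMREG rating (BASE=LAST ORDER=ASCENDING) BY group
  /PRINT = PARAMETER SUMMARY LRT.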


--
Bruce Weaver
bweaver@lakeheadu.ca
http://sites.google.com/a/lakeheadu.ca/bweaver/

"When all else fails, RTFM."

PLEASE NOTE THE FOLLOWING: 
1. My Hotmail account is not monitored regularly. To send me an e-mail, please use the address shown above.
2. The SPSSX Discussion forum on Nabble is no longer linked to the SPSSX-L listserv administered by UGA (https://listserv.uga.edu/).

easy non-SPSS question (analysis)

J P-6
Hello,
 
I am having a low IQ day today. I need to compare the proportions of males vs. females on a series of survey questions. The substantive question is: is there a gender effect (i.e., are the proportions significantly different)? The rub is that I do not have the raw data, only cross-tabulation tables.

My first thought is a simple t-test of proportions, or would a chi-square goodness-of-fit test be more appropriate (where the observed is % male and the expected is % female)?
 
Thanks in advance
John


Re: easy non-SPSS question (analysis)

Bruce Weaver
Administrator
Use the WEIGHT command.  Here's an example.  Variable COUNT holds the cell counts.  Adjust the number of categories for the other variable as needed.

* Enter the table counts as data: one row per cell.
DATA LIST LIST /sex (f2.0) cat (f2.0) count (f5.0).
BEGIN DATA.
1 1 9
1 2 17
1 3 22
2 1 21
2 2 13
2 3 8
END DATA.

val lab
 sex 1 'Male'
     2 'Female' /
 cat 1 'Category 1'
     2 'Category 2'
     3 'Category 3'.

* Weight the cases by the cell counts, then run the chi-square test of independence.
weight by count.
crosstabs sex by cat / stat = chisq.
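
If you also want the gender split within each response category, a small variant of the last line (same weighted file) adds column percentages:

crosstabs sex by cat / cells = count column / stat = chisq.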




Re: easy non-SPSS question (analysis)

Jon K Peck
In reply to this post by J P-6

The PROPOR extension command from SPSS Developer Central (www.spss.com/devcentral) displays binomial and Poisson confidence intervals for proportions.  It can be used when you only have the counts, with the data either specified directly in the command or supplied in the usual way as variables.

Usage example:
PROPOR NUM=55 DENOM=100.

NUM and DENOM can also be lists of values.

PROPOR /HELP.
displays the full syntax information.

HTH,
Jon Peck
SPSS, an IBM Company
[hidden email]
312-651-3435






Re: easy non-SPSS question (analysis)

Bruce Weaver
Administrator
These confidence intervals may be interesting and useful, but they are not equivalent to a test of whether the proportions for males and females differ.  That is, the confidence intervals can overlap even when the difference between the point estimates is statistically significant.  E.g.,

   http://www.cmaj.ca/cgi/content/full/166/1/65
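
A worked illustration of that point, using made-up counts (40 of 100 males vs 55 of 100 females endorsing an item), can be run directly:

* Made-up example: 40 of 100 males vs 55 of 100 females endorse an item.
DATA LIST FREE / x1 n1 x2 n2.
BEGIN DATA
40 100 55 100
END DATA.
COMPUTE p1 = x1/n1.
COMPUTE p2 = x2/n2.
* Separate 95% confidence intervals for each proportion (they overlap here).
COMPUTE lo1 = p1 - 1.96*SQRT(p1*(1 - p1)/n1).
COMPUTE hi1 = p1 + 1.96*SQRT(p1*(1 - p1)/n1).
COMPUTE lo2 = p2 - 1.96*SQRT(p2*(1 - p2)/n2).
COMPUTE hi2 = p2 + 1.96*SQRT(p2*(1 - p2)/n2).
* Two-proportion z test on the difference (significant here: z is about 2.1).
COMPUTE ppool = (x1 + x2)/(n1 + n2).
COMPUTE z = (p2 - p1)/SQRT(ppool*(1 - ppool)*(1/n1 + 1/n2)).
COMPUTE sig = 2*(1 - CDF.NORMAL(ABS(z), 0, 1)).
EXECUTE.
LIST VARIABLES = lo1 hi1 lo2 hi2 z sig.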

Cheers,
Bruce



Re: Comparing ordinal variable in k Independent samples

Kooij, A.J. van der
In reply to this post by Bruce Weaver
Or, if you have the CATEGORIES module, you could use CATREG (Analyze menu, Regression, Optimal Scaling).
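
A rough sketch of the corresponding syntax, assuming the same hypothetical 'rating' and 'group' variables (written from memory; check the CATREG entry in the Command Syntax Reference):

* Treat the outcome as ordinal and the grouping variable as nominal.
CATREG VARIABLES = rating group
  /ANALYSIS = rating (LEVEL=ORDI) WITH group (LEVEL=NOMI).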
 
Kind regards,
Anita van der Kooij,
Data Theory Group,
Leiden University.




SPSS is becoming too rigid (Was: Comparing ordinal variable in k Independent samples)

Steve Simon, P.Mean Consulting
In reply to this post by ANDRES ALBERTO BURGA LEON
There was a question that I just stumbled across a month late, but I
wanted to bring it up again, not because the earlier answers were bad,
but rather because it illustrates a philosophical change in how SPSS
handles data analysis. I don't like this change and I wanted to see what
others think about it.

ANDRES ALBERTO BURGA LEON wrote:

> I need to check for differences among four independent samples that each
> wrote a short essay (rated on a 3-point scale: good, medium, poor).
> I presume I could use the Kruskal-Wallis H test or a similar non-parametric
> test, but in the new PASW 18 non-parametric tests the "Test Field" option
> only accepts variables measured at the scale level. Can't I use ordinal
> variables? What test could I use?

I just got SPSS 18 loaded last week, and I looked at the non-parametric
test procedure. They've integrated all the nonparametric tests into a
single menu choice, which may or may not be a good thing, but
interestingly, this appears to be the first (but probably not the last)
statistical procedure that uses the nominal/ordinal/scale properties of
the variable. The graphical methods, of course, have been using this
nominal/ordinal/scale property for quite a while (since version 15, I
believe).

This is something like the philosophy implemented in JMP, but it is, so
far, only implemented partially in SPSS. I'm not sure I like this new
approach and I thought it would be worth discussing this on this list.

In theory, the measurement property of a particular variable should
allow you to use or eliminate certain tests or graphs, but there are two
problems with this. First, a lot of times, you want to run a test that
doesn't quite meet the measurement properties of the variable in
question just as a sensitivity check. Second, SPSS does a lousy job of
assigning measurement properties to a file that is imported from another
source. I dislike the idea of having to check and fix the measurement
properties of every variable in every imported data set.

The advantage is that incorporating measurement properties into all the
statistical procedures might prevent an inexperienced data analyst from
making a bogus choice and might end up steering them towards a more
appropriate choice. It also is a potential time saver in that you will
be presented with a smaller number of valid choices for your graphs and
analyses.

The workaround is to change the measurement properties temporarily, but
I find this tedious and annoying.

Another workaround is to use the legacy dialogs, which did not have
these restrictions. I'm not thrilled about this either. I don't want to
teach people to use something that is obsolete.
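
For the Kruskal-Wallis case that started this thread, the legacy syntax still runs regardless of the declared measurement level; a minimal sketch with hypothetical names ('rating' for the 3-point score, 'group' coded 1 to 4):

NPAR TESTS /K-W = rating BY group(1 4).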

I also see this as a common question that I will get in consulting (why
doesn't SPSS let me run the X procedure on my data set). It might end up
padding my consulting income, but I still don't like it.

What do other people think about this use of measurement properties as a
gatekeeper that prevents certain graphs/analyses from being run?

Steve Simon, Standard Disclaimer
"Data entry and data management issues with examples
in IBM SPSS," Tuesday, August 24, 11am-noon CDT.
Free webinar. Details at www.pmean.com/webinars


A Question about skewness

stace swayne
Dear List,

Is there a rule of thumb about how much skewness is acceptable vs. unacceptable?
I have read conflicting opinions: some say that skewness should be less than 2,
and others have said that skewness should be less than 1.

All suggestions are welcomed,

Stace


Re: SPSS is becoming too rigid (Was: Comparing ordinal variable in k Independent samples)

Christopher Stride
In reply to this post by Steve Simon, P.Mean Consulting
I strongly agree with you, Steve. The argument for this system, i.e.
that it will stop users running tests on variables of the wrong type,
is weak in itself: if users don't understand why their test is
appropriate, they are unlikely to be able to interpret the results
properly anyway...

I can also think of instances where using a method not normally
associated with a certain type of variable can be useful, e.g. a means
table on a dichotomous variable will give you the proportion scoring 1.
And if these restrictions continue to be implemented, will we be
stopped from adding dummy variables to regression?!

This is a recent change that definitely should be reversed (along with
the loss of the ability to right-click on menu headings to get a brief
help box).

cheers
Chris




Re: A Question about skewness

Bruce Weaver
Administrator
In reply to this post by stace swayne
Please provide some context.  Too much skewness for what?  Which measure of skewness are you talking about?


Re: SPSS is becoming too rigid (Was: Comparing ordinal variable in k Independent samples)

Baker, Harley
In reply to this post by Christopher Stride
I also agree strongly. There are times when a variable that is nominal in one situation may actually be interval/ratio in another context (e.g., Lord's treatise "On the Statistical Treatment of Football Numbers" from the early 1950s). Stevens' view of measurement is not universally accepted, with a number of psychometricians and statisticians arguing against the NOIR categorization. (I can post some of these references if anyone is interested.)

At bottom, my sense is that the software should not limit the researcher in any sort of mechanically enforced fashion. Those of us who know what we are doing should not have to go through the extra hoops that such a system imposes.

My two cents' worth . . .

Harley

Dr. Harley Baker
Professor and Chair, Psychology Program
Sage Hall 2061
California State University Channel Islands
One University Drive
Camarillo, CA 93012

805.437.8997 (p)
805.437.8951 (f)

[hidden email]


Re: SPSS is becoming too rigid (Was: Comparing ordinal variable in k Independent samples)

Jon K Peck

Bear in mind that you can easily set the measurement level you want, either temporarily or permanently, in syntax or in most dialog boxes (right-click in the source list), although not in the new nonparametric dialog, unfortunately.  In fact, some Statistics dialog boxes have had measurement-level sensitivity as far back as version 11.5.  In Custom Tables, you can even use the same variable with multiple measurement levels in the same table.  Our R extension dialogs automatically map categorical measurement levels to R factors.

And the Define Variable Properties dialog has a set of heuristics to help users set an appropriate level - not perfect, of course, but often a good start.

So this is meant as a convenience and guide, not as a prescription.
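
For example, the declaration is a one-line command (hypothetical variable names):

* Set levels in the data dictionary.
VARIABLE LEVEL rating (ORDINAL) group (NOMINAL).
* Or switch a variable to scale just before a procedure that requires it, then switch it back.
VARIABLE LEVEL rating (SCALE).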

Regards
Jon Peck
SPSS, an IBM Company
[hidden email]
312-651-3435



From: "Baker, Harley" <[hidden email]>
To: [hidden email]
Date: 07/19/2010 01:32 PM
Subject: Re: [SPSSX-L] SPSS is becoming too rigid (Was: Comparing ordinal              variable              in k              Independant samples)
Sent by: "SPSSX(r) Discussion" <[hidden email]>





I also agree strongly. There are times that variables that might be nominal in one situation may actually be interval/ratio in another context (e.g., Lord's treatise "On the Statistical Treatment of Football Numbers" from the early 1950s). The Steven's view of measurement is not universally accepted, with a number of psychometricians and  statisticians arguing against the NOIR categorization. (I can post some of these references if anyone is interested.)

At bottom, my sense is that the software should not limit the researcher in any sort of mechanically enforced fashion. Those of us who know what we are doing should not have to go through the extra hoops that such a system imposes.

My two cents' worth . . .

Harley

Dr. Harley Baker
Professor and Chair, Psychology Program
Sage Hall 2061
California State University Channel Islands
One University Drive
Camarillo, CA 93012

805.437.8997 (p)
805.437.8951 (f)

[hidden email]

________________________________________
From: SPSSX(r) Discussion [[hidden email]] on behalf of Dr C B Stride [[hidden email]]
Sent: Monday, July 19, 2010 12:04 PM
To: [hidden email]
Subject: Re: SPSS is becoming too rigid (Was: Comparing ordinal variable              in k              Independant samples)

I strongly agree with you Steve; and the argument for this system, i.e.
that it will stop users running tests using variables of the wrong type
is weak in itself in that if users don't understand why their test is
appropriate they are unlikely to be able to interpret the results
properly anyway...

I can also think of instances where using a method not normally
associated with certain types of vars can be useful e.g. mean tables on
a dichotomous variable will give you a proportion scoring 1. And if
these restrictions contuinue to be implemented, will we be stopped from
adding dummy variables to regression!?

This is a recent change that definitely should be reversed (along with
the loss of the ability to right-click on menu headings to get a brief
help box)

cheers
Chris



On 19/07/2010 18:51, Steve Simon, P.Mean Consulting wrote:
> There was a question that I just stumbled across a month late, but I
> wanted to bring it up again, not because the earlier answers were bad,
> but rather because it illustrates a philosophical change in how SPSS
> handles data analysis. I don't like this change and I wanted to see what
> others think about it.
>
> ANDRES ALBERTO BURGA LEON wrote:
>
>> I need to check for differences among four independent samples that made
>> a short essay (rated using a 3 point rating scale= good, medium, poor).
>> I presume I could use Kruskal-Wallis H or a similar non-parametric test,
>> but in the new PASW 18 non-parametric test “Test Field” option only
>> accepts variable measured in scale level. Can’t I use ordinal variables?
>> What test could I do?
>
> I just got SPSS 18 loaded last week, and I looked at the non-parametric
> test procedure. They've integrated all the nonparametric tests into a
> single menu choice, which may or may not be a good thing, but
> interestingly, this appears to be the first (but probably not the last)
> statistical procedure that uses the nominal/ordinal/scale properties of
> the variable. The graphical methods, of course, have been using this
> nominal/ordinal/scale property for quite a while (since version 15, I
> believe).
>
> This is something like the philosophy implemented in JMP, but it is, so
> far, only implemented partially in SPSS. I'm not sure I like this new
> approach and I thought it would be worth discussing this on this list.
>
> In theory, the measurement property of a particular variable should
> allow you to use or eliminate certain tests or graphs, but there are two
> problems with this. First, a lot of times, you want to run a test that
> doesn't quite meet the measurement properties of the variable in
> question just as a sensitivity check. Second, SPSS does a lousy job of
> assigning measurement properties to a file that is imported from another
> source. I dislike the idea of having to check and fix the measurement
> properties of every variable in every imported data set.
>
> The advantage is that incorporating measurement properties into all the
> statistical procedures might prevent an inexperienced data analyst from
> making a bogus choice and might end up steering them towards a more
> appropriate choice. It also is a potential time saver in that you will
> be presented with a smaller number of valid choices for your graphs and
> analyses.
>
> The workaround is to change the measurement properties temporarily, but
> I find this tedious and annoying.
>
> Another workaround is to use the legacy dialogs, which did not have
> these restrictions. I'm not thrilled about this either. I don't want to
> teach people to use something that is obsolete.
>
> I also see this as a common question that I will get in consulting (why
> doesn't SPSS let me run the X procedure on my data set). It might end up
> padding my consulting income, but I still don't like it.
>
> What do other people think about this use of measurement properties as a
> gatekeeper that prevents certain graphs/analyses from being run?
>
> Steve Simon, Standard Disclaimer
> "Data entry and data management issues with examples
> in IBM SPSS," Tuesday, August 24, 11am-noon CDT.
> Free webinar. Details at
www.pmean.com/webinars
>
> =====================
> To manage your subscription to SPSSX-L, send a message to
> [hidden email] (not to SPSSX-L), with no body text except the
> command. To leave the list, send the command
> SIGNOFF SPSSX-L
> For a list of commands to manage subscriptions, send the command
> INFO REFCARD

=====================
To manage your subscription to SPSSX-L, send a message to
[hidden email] (not to SPSSX-L), with no body text except the
command. To leave the list, send the command
SIGNOFF SPSSX-L
For a list of commands to manage subscriptions, send the command
INFO REFCARD

=====================
To manage your subscription to SPSSX-L, send a message to
[hidden email] (not to SPSSX-L), with no body text except the
command. To leave the list, send the command
SIGNOFF SPSSX-L
For a list of commands to manage subscriptions, send the command
INFO REFCARD



Re: SPSS is becoming too rigid (Was: Comparing ordinal variable in k Independent samples)

Rick Oliver-3
In reply to this post by Baker, Harley

There is nothing preventing you from changing the measurement level for any variable at any time (with the notable exception of treating a string variable as scale/continuous).  So the software is not really imposing any "limit" on what you can do in that sense. For some procedures, measurement level affects the computation of the results; so there has to be some mechanism for identifying measurement level.


From: "Baker, Harley" <[hidden email]>
To: [hidden email]
Date: 07/19/2010 02:31 PM
Subject: Re: SPSS is becoming too rigid (Was: Comparing ordinal variable              in k              Independant samples)
Sent by: "SPSSX(r) Discussion" <[hidden email]>





I also agree strongly. There are times that variables that might be nominal in one situation may actually be interval/ratio in another context (e.g., Lord's treatise "On the Statistical Treatment of Football Numbers" from the early 1950s). The Steven's view of measurement is not universally accepted, with a number of psychometricians and  statisticians arguing against the NOIR categorization. (I can post some of these references if anyone is interested.)

At bottom, my sense is that the software should not limit the researcher in any sort of mechanically enforced fashion. Those of us who know what we are doing should not have to go through the extra hoops that such a system imposes.

My two cents' worth . . .

Harley

Dr. Harley Baker
Professor and Chair, Psychology Program
Sage Hall 2061
California State University Channel Islands
One University Drive
Camarillo, CA 93012

805.437.8997 (p)
805.437.8951 (f)

[hidden email]

________________________________________
From: SPSSX(r) Discussion [[hidden email]] on behalf of Dr C B Stride [[hidden email]]
Sent: Monday, July 19, 2010 12:04 PM
To: [hidden email]
Subject: Re: SPSS is becoming too rigid (Was: Comparing ordinal variable              in k              Independant samples)

I strongly agree with you Steve; and the argument for this system, i.e.
that it will stop users running tests using variables of the wrong type
is weak in itself in that if users don't understand why their test is
appropriate they are unlikely to be able to interpret the results
properly anyway...

I can also think of instances where using a method not normally
associated with certain types of vars can be useful e.g. mean tables on
a dichotomous variable will give you a proportion scoring 1. And if
these restrictions contuinue to be implemented, will we be stopped from
adding dummy variables to regression!?

This is a recent change that definitely should be reversed (along with
the loss of the ability to right-click on menu headings to get a brief
help box)

cheers
Chris



On 19/07/2010 18:51, Steve Simon, P.Mean Consulting wrote:
> There was a question that I just stumbled across a month late, but I
> wanted to bring it up again, not because the earlier answers were bad,
> but rather because it illustrates a philosophical change in how SPSS
> handles data analysis. I don't like this change and I wanted to see what
> others think about it.
>
> ANDRES ALBERTO BURGA LEON wrote:
>
>> I need to check for differences among four independent samples that made
>> a short essay (rated using a 3 point rating scale= good, medium, poor).
>> I presume I could use Kruskal-Wallis H or a similar non-parametric test,
>> but in the new PASW 18 non-parametric test “Test Field” option only
>> accepts variable measured in scale level. Can’t I use ordinal variables?
>> What test could I do?
>
> I just got SPSS 18 loaded last week, and I looked at the non-parametric
> test procedure. They've integrated all the nonparametric tests into a
> single menu choice, which may or may not be a good thing, but
> interestingly, this appears to be the first (but probably not the last)
> statistical procedure that uses the nominal/ordinal/scale properties of
> the variable. The graphical methods, of course, have been using this
> nominal/ordinal/scale property for quite a while (since version 15, I
> believe).
>
> This is something like the philosophy implemented in JMP, but it is, so
> far, only implemented partially in SPSS. I'm not sure I like this new
> approach and I thought it would be worth discussing this on this list.
>
> In theory, the measurement property of a particular variable should
> allow you to use or eliminate certain tests or graphs, but there are two
> problems with this. First, a lot of times, you want to run a test that
> doesn't quite meet the measurement properties of the variable in
> question just as a sensitivity check. Second, SPSS does a lousy job of
> assigning measurement properties to a file that is imported from another
> source. I dislike the idea of having to check and fix the measurement
> properties of every variable in every imported data set.
>
> The advantage is that incorporating measurement properties into all the
> statistical procedures might prevent an inexperienced data analyst from
> making a bogus choice and might end up steering them towards a more
> appropriate choice. It also is a potential time saver in that you will
> be presented with a smaller number of valid choices for your graphs and
> analyses.
>
> The workaround is to change the measurement properties temporarily, but
> I find this tedious and annoying.
>
> Another workaround is to use the legacy dialogs, which did not have
> these restrictions. I'm not thrilled about this either. I don't want to
> teach people to use something that is obsolete.
>
> I also see this as a common question that I will get in consulting (why
> doesn't SPSS let me run the X procedure on my data set). It might end up
> padding my consulting income, but I still don't like it.
>
> What do other people think about this use of measurement properties as a
> gatekeeper that prevents certain graphs/analyses from being run?
>
> Steve Simon, Standard Disclaimer
> "Data entry and data management issues with examples
> in IBM SPSS," Tuesday, August 24, 11am-noon CDT.
> Free webinar. Details at
www.pmean.com/webinars
>
> =====================
> To manage your subscription to SPSSX-L, send a message to
> [hidden email] (not to SPSSX-L), with no body text except the
> command. To leave the list, send the command
> SIGNOFF SPSSX-L
> For a list of commands to manage subscriptions, send the command
> INFO REFCARD

=====================
To manage your subscription to SPSSX-L, send a message to
[hidden email] (not to SPSSX-L), with no body text except the
command. To leave the list, send the command
SIGNOFF SPSSX-L
For a list of commands to manage subscriptions, send the command
INFO REFCARD

=====================
To manage your subscription to SPSSX-L, send a message to
[hidden email] (not to SPSSX-L), with no body text except the
command. To leave the list, send the command
SIGNOFF SPSSX-L
For a list of commands to manage subscriptions, send the command
INFO REFCARD



Re: A Question about skewness

Marta Garcia-Granero
In reply to this post by stace swayne
Hi Stace:

If you are talking about the skewness coefficient provided by SPSS with
DESCRIPTIVES or EXAMINE, the ratio of the coefficient to its standard
error is a Z test that can be considered significant if its absolute
value is greater than 1.96.

But since statistical significance and practical relevance are not the
same thing, keep in mind that for very large samples the result can be
significant even if the coefficient is very small. Somewhere I read
that a skewness coefficient over 1 (in absolute value) is substantively
important (I'm not talking about "significant").
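
Both procedures report the coefficient together with its standard error; a minimal sketch, assuming a hypothetical variable 'score':

* Skewness and its standard error; divide one by the other and compare against 1.96.
EXAMINE VARIABLES = score
  /PLOT = NONE
  /STATISTICS = DESCRIPTIVES.
DESCRIPTIVES VARIABLES = score
  /STATISTICS = SKEWNESS.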

HTH,
Marta GG
(we are the champions...)


--
For miscellaneous SPSS related statistical stuff, visit:
http://gjyp.nl/marta/


Re: SPSS is becoming too rigid (Was: Comparing ordinal variable in k Independent samples)

Ruben Geert van den Berg
In reply to this post by Rick Oliver-3
Dear all,
 
"For some procedures, measurement level affects the computation of the results; so there has to be some mechanism for identifying measurement level."
>>>Agree. But just as with, say, CATREG, I think this mechanism (if really needed) should reside in the syntax or dialogue boxes. If my CATREG results suggest I can treat some variable as scale, within the scope of this single procedure, then I want to modify this in just this single procedure, not in the data. Otherwise, in the next procedure, I may have to modify it back.
 
Of course nothing prevents us from changing measurement levels of all numeric variables. For me the main issue would be that doing so increases the amount of work I need in order to get stuff done. This will become especially annoying when one wants to run procedures on one variable that would require different measurement levels. So I perfectly agree with previous opinions: if a data analyst doesn't know what he's doing, enforcing measurement levels upon certain procedures will probably not prevent him from producing 'less than optimal' results. Or as one colleague wisely phrased: "nothing can ever beat the stupidity of clients". So let's not sacrifice our educated users in a futile attempt to protect some less educated users from themselves.
 
Another argument is that measurement levels are often disputable. More precisely, many variables that are strictly ordinal (like 5-point Likert scales) tend to be treated as scale variables in the social sciences. So if I decide to run chi-square tests on those and someone else prefers t tests, I'll have to change all the measurement levels with extra, unnecessary lines of syntax.
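
For what it's worth, the legacy procedures ignore the declared level entirely, so both analyses run on the same item without touching the dictionary (hypothetical names 'item1' and 'sex'):

* A 5-point Likert item analysed two ways, whatever its declared measurement level.
CROSSTABS item1 BY sex /STATISTICS = CHISQ.
T-TEST GROUPS = sex(1 2) /VARIABLES = item1.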
 
As a compromise, perhaps, can't users be enabled to switch measurement level sensitivity on or off, like
 
SET MEASUREMENTLEVELSENSITIVITY=OFF.
 
Best,

Ruben van den Berg
Consultant Models & Methods
TNS NIPO
Email: [hidden email]
Mobiel: +31 6 24641435
Telefoon: +31 20 522 5738
Internet: www.tns-nipo.com



 


Re: SPSS is becoming too rigid (Was: Comparing ordinal variable in k Independent samples)

Garry Gelade

Dear Listers,

 

I agree with the other contributors.  Enforcing measurement level restrictions makes things more unwieldy, and the benefits are debatable.

 

Inexperienced users should be encouraged to consult tutorials, examples and textbooks to develop their understanding.  Being forced down certain routes by SPSS, without understanding why, may even add to a beginner's confusion.

 

As others have pointed out, measurement levels are something of a moveable feast, and  I don’t want to end up with a situation where repeating a procedure on a number of different variables requires that they are all assigned the same measurement level.

 

 

Garry Gelade

Business Analytic Ltd

 
