Re: SPSSX-L Digest - 12 Jul 2006 to 13 Jul 2006 (#2006-192) List command output


Re: SPSSX-L Digest - 12 Jul 2006 to 13 Jul 2006 (#2006-192) List command output

Leslie Horst
As a long-time SPSS user, I came up with a low-tech but effective solution, probably developed in a version long ago that didn't have "summarize."

String junk (a1).
Compute junk = '/'.
List var = v1 junk v2 junk v3 junk v4 junk.

Then, when all of this goes into one column in Excel, you can easily parse it using
Data > Text to Columns > Delimited, specifying the slash (or some other delimiter that you are sure does not occur in your data).


-----Original Message-----
From: SPSSX(r) Discussion [mailto:[hidden email]] On Behalf Of Automatic digest processor
Sent: Friday, July 14, 2006 12:02 AM
To: Recipients of SPSSX-L digests
Subject: SPSSX-L Digest - 12 Jul 2006 to 13 Jul 2006 (#2006-192)

There are 26 messages totalling 1173 lines in this issue.

Topics of the day:

  1. export to xls
  2. List Command output (3)
  3. Likert scale (3)
  4. SPSS and Terminal Services
  5. Using Execute (2)
  6. reliability
  7. A Distinctly Non-Normal Distribution (3)
  8. execute
  9. SPSS-GLM query
 10. percentile scores
 11. About Mann-Whitney Test...
 12. Logistic Regression queries
 13. Working with dates - now pulled out ALL MY HAIR
 14. Survival plot
 15. Use of the weighted Kappa statistic
 16. White paper for percentile rank formula (2)
 17. Outreach Program Director Opening
 18. Follow-up MANCOVA interaction?

----------------------------------------------------------------------

Date:    Thu, 13 Jul 2006 09:30:01 +0300
From:    vlad simion <[hidden email]>
Subject: export to xls

Hi again,

Yesterday I posted a message regarding an export to Excel. No one had any
suggestions about how to export a string variable to an xls file via the ODBC
driver. My problem is that it saves only the first letter of the string,
not the whole string.

Here is part of the export syntax:

save translate
 /connect= 'dsn=excel files;dbq=c:\documents and
settings\ay08418\desktop\data\test.xls'
 /table=!quote(!n)
 /type=odbc
 /replace.
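
For what it's worth, the native Excel export is one workaround that skips the ODBC driver (and its column-type guessing) entirely -- a sketch reusing the same path, untested here:

save translate outfile='c:\documents and settings\ay08418\desktop\data\test.xls'
 /type=xls /version=8
 /fieldnames
 /replace.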

Thanks again!

Vlad.



--
Vlad Simion
Data Analyst
Tel:      +40 720130611

------------------------------

Date:    Thu, 13 Jul 2006 09:28:01 +0200
From:    Mark Webb <[hidden email]>
Subject: List Command output

Hi all
I'm using the List command to list selected respondents and a few variables,
e.g. List Variables = Name Branch Dept Var1 Var2 Var3.

When I copy & paste into Excel, all the variables listed go into one column.
How can I get SPSS output that will export into columns - like, for example, the Frequencies command?
The Print command behaves much the same.

Regards

Mark Webb

------------------------------

Date:    Thu, 13 Jul 2006 11:44:55 +0300
From:    Dominic Fernandes <[hidden email]>
Subject: Likert scale

Hi All,

I have a question. How do we analyze a Likert-scale (7-response) course evaluation questionnaire given to 25 teachers who attended an in-service course? Should we treat the responses as scale variables and use parametric tests, or should we treat them as ordinal variables and use nonparametric tests?

Thank you in advance for your assistance.

Dominic.

------------------------------

Date:    Thu, 13 Jul 2006 11:55:27 +0100
From:    Clare Gill <[hidden email]>
Subject: SPSS and Terminal Services

Hello

We are considering a terminal services environment in UCD to
deliver some of our applications to the UCD community.  We would
still be installing the Windows version of SPSS, so we would like to
know whether there have been, or currently are, any issues with
delivering SPSS in this way.

As we have a large user community in UCD, we would also like to
know what the recommended or expected hardware requirements
would be to run SPSS in this environment, as some of the analyses
that can be done in SPSS can already use a lot of system resources.
We would take 250-500 users as a guideline for system
requirements.

--
Clare Gill
Computing Services, University College Dublin,
UCD Computer Centre, Belfield, Dublin 4.
Tel: +353-1-716 2007 Fax: +353-1-2837077
http://www.ucd.ie/computing

------------------------------

Date:    Thu, 13 Jul 2006 06:36:26 -0500
From:    "Peck, Jon" <[hidden email]>
Subject: Re: Using Execute

The reason that the default behavior is to generate EXECUTE commands when working from the transformation dialogs is that without that, the Data Editor window does not update immediately, and new users found this behavior confusing.  We wanted to make it easier to get started with SPSS.

Since you can turn this setting off, and since many users do not use syntax anyway, this is pretty harmless.  The help for this option explains the efficiency issue, although many users never find their way to this spot.
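
For illustration, a minimal sketch of the efficiency point, with hypothetical variable names -- the pending transformations run in the single data pass made by FREQUENCIES, so no EXECUTE is needed:

* Transformations are queued until a procedure reads the data.
COMPUTE total = v1 + v2.
RECODE total (LOWEST THRU 10 = 1) (ELSE = 2) INTO band.
* FREQUENCIES forces the data pass; both pending transformations run here.
FREQUENCIES VARIABLES=band.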

Regards,
Jon Peck

-----Original Message-----
From: SPSSX(r) Discussion [mailto:[hidden email]] On Behalf Of Lisa Stickney
Sent: Wednesday, July 12, 2006 9:50 PM
To: [hidden email]
Subject: Re: [SPSSX-L] Using Execute

Hi Richard,

>
> Good ever-lovin' grief! Thank you, Lisa! I didn't even know that SPSS
> inserted EXECUTEs following pasted syntax commands. (See below.)
>
> If SPSS puts EXECUTEs after every (pasted) transformation command, of
> COURSE users will think they're necessary.
>
> Other readers: Did this happen to you, too? It's precisely the right
> way to form precisely the wrong habit.
>

I don't think it does it with every transformation command, but it
definitely does it with COMPUTE, RECODE & COUNT.  Plus it does it with some
of the data commands -- FILTER, MATCH FILES, & ADD FILES that I know of.


> Beadle, ViAnn wrote at 12:53 PM 7/12/2006,
>
>>Although it's not really obvious, you can turn off the EXECUTE command
>>[that's pasted after pasted transformation commands] in Edit>Options
>>under Transformation and Merge Operations.
>
> To be precise (since I had to look for it myself), it's
>
> Edit>Options>Data; then, under Transformation and Merge Operations,
> + To get the EXECUTE pasted, select "Calculate values immediately";
> + To omit it, select "Calculate values before used".
> (Which is the default?)
>

Thanks to both ViAnn & Richard for pointing this out.  I have happily
changed my copy of SPSS.

As for the default, I believe it's "Calculate values immediately."  The IT
people have installed versions 11.5, 12, 13 & now 14 on my laptop, and
they've all been set this way.  So, unless it's just my installation or
they're changing this upon installation (I doubt it), it's probably the
default.

One other comment I have about this option is that I think it would have
little meaning to a newbie who's trying to learn SPSS syntax.  Unless you're
very familiar with SPSS, it's not clear how this relates to EXECUTE or why
it might be important.


    Best,
        Lisa

Lisa T. Stickney
Ph.D. Candidate
The Fox School of Business
     and Management
Temple University
[hidden email]

------------------------------

Date:    Thu, 13 Jul 2006 08:07:11 -0400
From:    "Dates, Brian" <[hidden email]>
Subject: Re: Likert scale

Dominic,

With seven (7) response categories, you can probably treat the data as
interval in nature.  I'm a purist and my principal area of work is in
measurement development and standardization, so I generally follow the IRT
approach and then do the analysis on the IRT scaled items.  Likert actually
developed a scaling method for his approach to items which would be worth
your time for future endeavors.  As part of your analysis, I would recommend
at least performing a reliability analysis to check for internal consistency
and the relationship of the items to total score, so you can identify any
"clinker" items that might exist.  Good luck.

Brian

Brian G. Dates, Director of Quality Assurance
Southwest Counseling and Development Services
1700 Waterman
Detroit, Michigan  48209
Telephone: 313.841.7442
FAX:  313.841.4470
email: [hidden email]


> -----Original Message-----
> From: Dominic Fernandes [SMTP:[hidden email]]
> Sent: Thursday, July 13, 2006 4:45 AM
> To:   [hidden email]
> Subject:      Likert scale

------------------------------

Date:    Thu, 13 Jul 2006 07:59:57 -0500
From:    "Beadle, ViAnn" <[hidden email]>
Subject: Re: List Command output

Use the SUMMARIZE command, which will give you a case listing in a table from which you can copy the cells:

summarize name branch dept var1 var2 var3 /cells none/format list nocasenum.

-----Original Message-----
From: SPSSX(r) Discussion [mailto:[hidden email]] On Behalf Of Mark Webb
Sent: Thursday, July 13, 2006 2:28 AM
To: [hidden email]
Subject: List Command output


------------------------------

Date:    Thu, 13 Jul 2006 07:24:29 -0700
From:    razina khayat <[hidden email]>
Subject: reliability

Hi all,
   I'm trying to do a test-retest reliability analysis on an attachment loss measurement. Attachment loss is measured in millimeters, ranging from 0 to about 14. My question is how to do this within ±1 mm accuracy. Also, for validity assessment (using a gold standard), how could we measure the same variable within 1 mm accuracy?
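
One simple sketch for the ±1 mm question, assuming hypothetical variables test1 and test2 for the two measurement occasions -- flag each pair that agrees within 1 mm, then tabulate the flag to get the percentage agreement:

COMPUTE agree1mm = (ABS(test1 - test2) <= 1).
FREQUENCIES VARIABLES=agree1mm.
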
  thanks,
  razina




------------------------------

Date:    Thu, 13 Jul 2006 09:12:28 -0600
From:    Stevan Nielsen <[hidden email]>
Subject: A Distinctly Non-Normal Distribution

Dear Colleagues,

I have stumbled upon an interesting phenomenon: I have discovered that
consumption of a valuable resource conforms to a very regular, reverse
J-shaped distribution.  The modal case in our large sample (N = 16,000)
consumes one unit, the next most common case consumes two units, the
next most common three units, the next most common four units -- and
this is the median case, and so on.  The average is at about 9.7 units,
which falls between the 72nd and 73rd percentile in the distribution --
clearly NOT an indicator of central tendency.

I used SPSS Curve Estimation to examine five functional relationships
between units consumed and proportion of consumers in the sample,
testing proportion of consumers in the sample as linear, logarithmic,
inverse, quadratic, or cubic functions of number of units consumed.  I
found that the reciprocal model, estimating proportion of cases as the
inverse of units consumed, was clearly the best solution, yielding a
remarkable, and very reliable R2 = .966.  All five models were reliable,
but the next best was the logarithmic solution, with R2 = .539; worst
was the linear model, with R2 = .102.
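
For reference, the Curve Estimation step above can be written in syntax roughly as follows (a sketch with hypothetical variable names):

CURVEFIT VARIABLES=proportion WITH units
 /MODEL=LINEAR LOGARITHMIC INVERSE QUADRATIC CUBIC.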

This seems like a remarkably regular, quite predictable relationship.
I've spent my career so enamored with normal distributions that I'm not
sure what to make of this one.  I have several questions for
your consideration:

Do any of you have experience with such functions?  (I believe it would
be correct to call this a decay function.)

Where are such functions most likely to occur in nature, commerce,
epidemiology, genetics, healthcare, and so on?

What complications arise when attempting to form statistical inferences
where such population distributions are present?  (We have other
measurements for subjects in this distribution, measurements which are
quite nicely normal in their distributions.)

Your curious colleague,

lars nielsen

Stevan Lars Nielsen, Ph.D.
Brigham Young University

801-422-3035; fax 801-422-0175

------------------------------

Date:    Thu, 13 Jul 2006 08:31:18 -0700
From:    "P. Scott Dixon" <[hidden email]>
Subject: Re: execute

I agree with ViAnn.  With all respect to the SPSS wunderkinds, I find
executes useful for catching flaws in my code at the point they occur, and the
cost in processor time is negligible with small to mid-size samples.  They are a
strong tool in my, admittedly, less-than-perfect code.

Scott

------------------------------

Date:    Wed, 12 Jul 2006 07:04:19 -0500
From:    "Beadle, ViAnn" <[hidden email]>
Subject: Re: reverse concat

OK, I'll take your bait.

I don't actually use EXECUTE in syntax, but when I'm working out gnarly
transformation problems I frequently Run Pending Transformations (which
generates your dreaded EXECUTE) from the Data Editor and see the effects
of my transformations immediately. My strategy is to work through the
problem one "executable" unit at a time and check out the unit by running
the pending transformations. Unless you're dealing with wide files (more
than 200 or so variables) or long ones (10,000 or more cases), the time
to actually do the transformation is much less than the think time to
step mentally through the process.

What does that extra processing really cost you? It's not like the bad
old days, when every job submittal at $1.00 a pop ate up my account at
the University of Chicago Comp Center.


------------------------------

Date:    Thu, 13 Jul 2006 16:55:13 +0100
From:    Shweta Manchanda <[hidden email]>
Subject: SPSS-GLM query

I would like to know what the estimated marginal means in the General
Linear Model represent. I would be grateful for any help.

Many thanks.

------------------------------

Date:    Thu, 13 Jul 2006 09:57:03 +0300
From:    [hidden email]
Subject: Re: percentile scores

Small corrections: to obtain percentile scores using formulas described in the psychometric literature (see, for instance, Crocker, L., & Algina, J. (1991). Introduction to classical and modern test theory. Orlando, FL: Harcourt Brace Jovanovich), I would use slightly different options (equivalent syntax is sketched after the list):

1)    Under the Transform menu, choose Rank Cases...
2)    Choose the variable containing the raw test scores
3)    Click on the Ties... box and choose MEAN as the Rank Assigned to Ties
4)    Click on the Rank Cases: Types... box and check Fractional rank as %
5)    Check Rankit under Proportion Estimation Formula
6)    Uncheck Rank (Unless you want cumulative frequencies)
7)    Click on OK
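
Roughly equivalent syntax (a sketch with a hypothetical variable name; note that /FRACTION only affects proportion and normal-score estimates):

RANK VARIABLES=rawscore
 /TIES=MEAN
 /FRACTION=RANKIT
 /PERCENT INTO pctrank
 /PRINT=NO.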


Dr. Alexander Vinogradov, Associate Professor
Sociology and Psychology Faculty
National Taras Shevchenko University
Ukraine

>> Statisticsdoc <[hidden email]> wrote: Humphrey,

>> There are two ways to compute percentile scores in SPSS:

>> You can use menus to compute percentile ranks as follows:

>> 1.)    Under the Transform menu, choose Rank Cases...
>> 2.)    Choose the variable containing the raw test scores
>> 3.)    Click on the Ties... box and choose High as the Rank Assigned to Ties
>> 4.)    Click on the Rank Cases: Types... box and check Fractional rank as %
>> 5.)    Uncheck Rank (Unless you want cumulative frequencies)
>> 6.)    Click on OK
>> 7.)    SPSS will label the variable that contains the percentile ranks "PCT001".




------------------------------

Date:    Thu, 13 Jul 2006 02:59:55 -0500
From:    John McConnell <[hidden email]>
Subject: Re: List Command output

 Hi Mark

 If you use the Case Summaries option (from memory this is in the Analyze>Reports menu; in syntax, SUMMARIZE) you get a list inside a pivot table, which is the key to Excel compatibility.

 From there you should be able to cut/paste or even export the table to Excel to keep more of the formatting.

 hth

 john

 John McConnell
 Applied Insights


---------- Original Message ----------------------------------
From: Mark Webb <[hidden email]>
Reply-To: Mark Webb <[hidden email]>
Date:          Thu, 13 Jul 2006 09:28:01 +0200


------------------------------

Date:    Thu, 13 Jul 2006 06:55:35 -0500
From:    "Oliver, Richard" <[hidden email]>
Subject: Re: Using Execute

The default behavior is for the GUI to generate an Execute statement
after transformation commands generated from dialogs (e.g. the Compute
dialog). This is the case for any command generated from the dialogs
that requires some subsequent command to read the data (which is why it
also happens with Match Files and Add Files). AFAIK, Filter doesn't
require a subsequent command to read the data, but the same dialog that
generates Filter syntax can also generate Select If syntax.

There are reasons this was deemed to be the best default for the GUI,
but it may not be ideal for those attempting to learn syntax by pasting
dialog selections (although all those Executes will tell you which
commands require some other command to read the data).

________________________________

From: SPSSX(r) Discussion on behalf of Lisa Stickney
Sent: Wed 7/12/2006 9:49 PM
To: [hidden email]
Subject: Re: Using Execute




------------------------------

Date:    Thu, 13 Jul 2006 18:13:33 +0200
From:    Karl Koch <[hidden email]>
Subject: About Mann-Whitney Test...

Hi all,

The Mann-Whitney test assumes independent samples. In Andy Field's book [2005] it also says that it assumes the data points are measured from different people - meaning that one person cannot participate in both groups. I am not sure whether Andy Field here assumes that a data point always maps directly to a person (because of his background in psychology). I find this a bit confusing and would like to ask you.

My data points are actually measures (on a 6-point scale) which come from people, but several from each person. Each person participates in both of the two groups, between which I want to test whether there is a statistically significant difference. The order in which people participate in the two groups is counterbalanced and should therefore be independent.

What do you think? Is this a case for Mann-Whitney then?
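
For reference, if each person really does contribute a score to both conditions, the paired-samples analogue in syntax would be roughly the following sketch (hypothetical variable names, one variable per condition):

NPAR TESTS
 /WILCOXON=score_condA WITH score_condB (PAIRED).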

Best Regards,
Karl
--



------------------------------

Date:    Thu, 13 Jul 2006 11:31:32 -0500
From:    Anthony Babinec <[hidden email]>
Subject: Re: A Distinctly Non-Normal Distribution

Here are a couple of general comments.

While the normal distribution might be a useful
assumed distribution for errors in regression, there
is no reason to think that it is necessarily useful for
summarizing all phenomena out there in the world.

As you have described your data, they are counts.
In other words, values are 1, 2, 3 etc., and not
real values in some interval.

Are you looking at consumption in some fixed unit of time -
say week, month, year? Given some assumptions, there
are distributions such as the poisson that might
be appropriate. It also could be the case that
what you are studying represents a mixture of types,
say usage types (low, medium, high), though that may or
may not be the case here.
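
As a quick check of the Poisson idea, one could compare the observed proportions against the Poisson probabilities evaluated at the sample mean -- a sketch with hypothetical names, using the 9.7 reported above:

COMPUTE p_pois = PDF.POISSON(units, 9.7).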

Pete Fader (Wharton) and Bruce Hardie (London Business School)
have a nice course on probability models in marketing that is
regularly given at AMA events.

-----Original Message-----
From: SPSSX(r) Discussion [mailto:[hidden email]] On Behalf Of
Stevan Nielsen
Sent: Thursday, July 13, 2006 10:12 AM
To: [hidden email]
Subject: A Distinctly Non-Normal Distribution


------------------------------

Date:    Thu, 13 Jul 2006 11:25:50 -0400
From:    K <[hidden email]>
Subject: Logistic Regression queries

Hi all,

I'm currently learning how to use Logistic Regression. I have a couple of
queries that I can't seem to find the answer to:

1) Most of my odds ratios are around 1, with some higher at around
15, and one at 386.  This seems extremely high. Is there an explanation for
such a high odds ratio?
2) I've read that high standard errors signal multicollinearity. Just
how high is high, though? (See the sketch after this list.)
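
On question 2, a common informal check is to run the linear-regression collinearity diagnostics on the same predictors, since collinearity is a property of the predictors rather than of the link function -- a sketch with hypothetical variable names:

REGRESSION
 /STATISTICS=COEFF R ANOVA COLLIN TOL
 /DEPENDENT=outcome
 /METHOD=ENTER pred1 pred2 pred3.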

------------------------------

Date:    Thu, 13 Jul 2006 14:04:11 -0300
From:    Hector Maletta <[hidden email]>
Subject: Re: A Distinctly Non-Normal Distribution

The phenomenon you are describing seems to follow a Poisson distribution.
There are also other asymmetrical distributions that apply to multiple
phenomena in nature and society. The Poisson distribution was first tried by
Bortkiewicz in the early 20th century to predict the number of Prussian
soldiers killed annually by a horse-kick (or was it the number of times an
average soldier would be kicked by a horse during his service time? I do not
remember precisely), and applies to any event that could happen multiple
times with decreasing probability of repetition.
Another famous asymmetrical distribution, first rising to a maximum at some
low value and then decreasing gradually towards higher values, is the Pareto
equation for the distribution of income. Pareto found few people with
implausibly low incomes, then a maximum frequency around the most common
wage level, and then a very long tail with decreasing frequencies as income
increases, all the way up to the Bill Gates level.
There is no reason to expect these phenomena to follow a symmetrical, let
alone a normal Gaussian, distribution.
Hector


-----Original Message-----
From: SPSSX(r) Discussion [mailto:[hidden email]] On Behalf Of
Stevan Nielsen
Sent: Thursday, July 13, 2006 12:12 PM
To: [hidden email]
Subject: A Distinctly Non-Normal Distribution


------------------------------

Date:    Thu, 13 Jul 2006 13:13:42 -0400
From:    "[Ela Bonbevan]" <[hidden email]>
Subject: Working with dates - now pulled out ALL MY HAIR

Many thanks to all who helped with my date problems.  I was able to use
some of the syntax to fix some of my dates and I am most grateful for the
help.

I must confess that I don't find the date import from Excel to SPSS very
straightforward.  The data in my Excel database come from two sources, and
the date formats are all over the place.

One set comes as 19950816, and this is the easiest.  The other comes like
this, 1/20/1999, and displays like this, 20-Jan-99.  Today when I did the
Excel import it brought up the following four cells, where 38058 was
supposed to be 13 March 04 and 38205 was supposed to be 7 August 04.  I
have made numerous attempts to fix this by standardizing the formats in
Excel before I bring the sheet in, but formatting cells appears to have no
effect.

08-JAN-04 38058            07.07.04  38205

Any help??
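
For the serial numbers, a commonly cited conversion sketch (hypothetical variable name exceldate; verify the result against a date you know, since Excel's phantom 29-Feb-1900 leap day can shift dates by a day):

COMPUTE sdate = DATE.DMY(30,12,1899) + exceldate * 86400.
FORMATS sdate (DATE11).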

Diane


------------------------------

Date:    Thu, 13 Jul 2006 14:00:19 -0400
From:    Trinh Luong <[hidden email]>
Subject: Survival plot

Dear All,
  I'd like to change the y-axis labels on a Kaplan-Meier survival plot from proportions to percentages but I'm not sure how.  Could someone help me with this problem?
  Many thanks,
  Trinh Luong, MSc.
  Erasmus MC Rotterdam

------------------------------

Date:    Thu, 13 Jul 2006 15:06:52 -0300
From:    "Della Mora, Marcelo" <[hidden email]>
Subject: Likert scale

Hi Dominic

Try a reliability analysis, which tests internal consistency
and the relationship of each item to the total score: item-total correlation, covariance...

Hope this helps


Marcelo Della Mora



-----Original Message-----
From: SPSSX(r) Discussion [mailto:[hidden email]] On Behalf Of
Dominic Fernandes
Sent: Thursday, July 13, 2006 5:45 AM
To: [hidden email]
Subject: Likert scale



------------------------------

Date:    Thu, 13 Jul 2006 19:34:01 +0100
From:    Margaret MacDougall <[hidden email]>
Subject: Use of the weighted Kappa statistic

Dear all

  Apologies for cross-posting

  I plan to compare students' self-assessment scores (within the range A-E, or something similar) with the scores these students are allocated by their examiners (the 2nd examiners, say).  I would like to consider using a weighted Kappa statistic. However, several questions have emerged about how best to proceed, and I would be most grateful for assistance with each of them.

  Could I please have some advice on how to decide between linear and quadratic weighting as the best choice in this case?

  Further, whilst the raters fall conveniently into two perfectly distinguishable groups ('student' and '2nd examiner'), these raters change from student to student, although occasionally one 2nd examiner may rate more than one student. As I understand it, in its original form the weighted Kappa statistic was designed under the assumption not only that there are two distinct classes of raters but also that these raters do not change from subject to subject. I am therefore concerned that a standard weighted Kappa statistic may not be the correct one for me.

  A related question is what formula to use for the standard error of the weighted Kappa.

  To summarize, I have raised three main questions: the first relates to the type of weighting to assume, the second to the appropriateness of using a standard Kappa statistic for my problem, and the third to what formula to use for the standard error of the recommended Kappa statistic.

  I look forward to receiving some much needed help.

  Thank you so much

  Best wishes

  Margaret




------------------------------

Date:    Thu, 13 Jul 2006 12:54:20 -0700
From:    Frank Berry <[hidden email]>
Subject: White paper for percentile rank formula

Hi,

Is there a white paper that explains the formula for each of the following percentiles in SPSS? When there are missing values in a numeric variable ranging from 0 to 100 (all integers), would the same percentile formula apply to missing values in that variable? For example, an equal percentile can be obtained for the values 49, 50 (missing) and 51; or, in other cases, the same value of 49 could have two or more percentiles.

TIA.
Frank

  /PERCENTILES= 1 2 3 4 5 6 7 8 9 10
 11 12 13 14 15 16 17 18 19 20
 21 22 23 24 25 26 27 28 29 30
 31 32 33 34 35 36 37 38 39 40
 41 42 43 44 45 46 47 48 49 50
 51 52 53 54 55 56 57 58 59 60
 61 62 63 64 65 66 67 68 69 70
 71 72 73 74 75 76 77 78 79 80
 81 82 83 84 85 86 87 88 89 90
 91 92 93 94 95 96 97 98 99





------------------------------

Date:    Thu, 13 Jul 2006 17:44:45 -0500
From:    "Reutter, Alex" <[hidden email]>
Subject: Re: White paper for percentile rank formula

See the algorithms document for the procedure(s) you're interested in.

http://support.spss.com/tech/Products/SPSS/Documentation/Statistics/algorithms/index.html

Use Guest/Guest as the login/password.

Alex


> -----Original Message-----
> From: SPSSX(r) Discussion [mailto:[hidden email]] On Behalf Of
> Frank Berry
> Sent: Thursday, July 13, 2006 2:54 PM
> To: [hidden email]
> Subject: White paper for percentile rank formula

------------------------------

Date:    Thu, 13 Jul 2006 16:43:18 -0400
From:    AMANDA THOMAS <[hidden email]>
Subject: Outreach Program Director Opening

Hello,
I have attached an Outreach Program Director job description.  This
position is located within the Bureau of Research Training and Services at
Kent State University's College of Education, Health and Human Services.
The department works primarily in evaluating grants and assisting
clients in understanding and using their data. Please pass this on to
anyone you know who may be interested and/or qualified.  Applications can
be submitted online at https://jobs.kent.edu.

Amanda S. Thomas
Bureau of Research Training and Services
Kent State University
507 White Hall
Kent, Ohio  44242
Phone:  (330) 672-0788

ADMINISTRATIVE/PROFESSIONAL JOB DESCRIPTION
Developed for Equal Opportunity


CLASS TITLE: Outreach Program Director

KSU Class Code: AD47   EEO Code: 1C   FLSA: Exempt   Pay Grade: 08


BASIC FUNCTION:

   To plan and direct all operational, administrative, and financial
   activities of a designated educational outreach program.

CHARACTERISTIC DUTIES AND RESPONSIBILITIES:

   Develop and market designated educational outreach program.

   Establish objectives; develop strategies for achieving objectives.

   Oversee University-designated program budget.

   Develop and/or revise program policies and procedures.

   Serve as liaison to various constituent groups relative to program
   activities.

   Develop linkages with external organizations, professional
   associations, and community groups to support program development.

   Oversee the promotion and/or communication on all aspects of the program
   by strategically planning for the development and distribution of
   various materials.

   Identify and secure program funding opportunities.

   Evaluate and implement changes to programs.

   Serve on various University committees.

   Perform related duties as assigned.

   Employees assigned in a sales unit and sales management role may be
   responsible for:
      ·     Identifying a list of prospects and managing the communication
            processes with the prospects among the unit staff members.
      ·     Meeting with client prospects (i.e. human resource managers,
            operations managers, decision-makers, executives, professionals
            and employees) to present information about Kent State’s
            services and products.
      ·     Pricing solutions proposed, writing and reviewing proposals,
            contracts and training plans reflecting the organizations’
            needs and Kent State’s solutions.
      ·     Maintaining electronic records utilizing client relationship
            management software.  Reporting on the client activity using
            the software.
      ·     Strategically planning for contracted sales activity, analyzing
            and reporting on prospect and client activity.

REPORTS TO:

   Designated Supervisor

LEADERSHIP AND SUPERVISION:

   Leadership of a small department, unit, or major function and/or direct
   supervision over administrative/professional employees.

MINIMUM QUALIFICATIONS:

   Must Pass Criminal Background Check

   Education and Experience:

      Bachelor’s degree in a relevant field; Master’s degree preferred;
      four to five years relevant experience.

   Other Knowledge, Skills, and Abilities:

      Knowledge of personal computer applications; budgeting; strategic
      planning.

      Knowledge of business, employee development and/or organization
      development approaches and concepts.

      Skill in written and interpersonal communication.

      Demonstration of ability to maintain client relationships for
      retaining and/or cross-selling clients.

      Demonstration of ability to strategically plan for contract sales
      services and for prospecting and client calls for a small unit.

      Ability to provide leadership and direction.

      Evaluation experience.  Experience writing grants and composing
      budget proposals for grant evaluations.

      Advanced understanding of SPSS software.

PHYSICAL REQUIREMENTS:

      Light work: Exerting up to 20 pounds of force occasionally, and/or
      up to 10 pounds of force frequently, and/or a negligible amount of
      force constantly to move objects.  Typically requires sitting,
      walking, standing, bending, keying, talking, hearing, seeing and
      repetitive motions.

      Incumbent may be required to travel from building to building
      frequently and campus to campus occasionally.  For those in a sales
      management role, the incumbent will be required to travel to client
      sites frequently.

The intent of this description is to illustrate the types of duties and
responsibilities that will be required of positions given this title and
should not be interpreted to describe all the specific duties and
responsibilities that may be required in any particular position. Directly
related experience/education beyond the minimum stated may be substituted
where appropriate at the discretion of the Appointing Authority. Kent State
University reserves the right to revise or change job duties, job hours,
and responsibilities.

   FILE: AD47
   SOURCE:  Job Description
   ANALYST: ds
   DEPARTMENTAL AUTHORIZATION:       A.Lane 8/7/05

------------------------------

Date:    Thu, 13 Jul 2006 19:05:13 -0700
From:    jamie canataro <[hidden email]>
Subject: Follow-up MANCOVA interaction?

I have a question regarding a MANCOVA interaction and was wondering if I could get some assistance.

  I am performing a 2 x 3 MANCOVA and have obtained a significant interaction.  I would like to follow up the significant interaction to determine which of the specific means are statistically different.  Would this require me to write a syntax program? And if so, how can I acquire/create/borrow the syntax?
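
One common approach is GLM's estimated marginal means with simple-effects comparisons, which for an interaction table is available only through syntax -- a sketch for a 2 x 3 design with hypothetical names (factors a and b, dependents dv1 and dv2, covariate cov1):

GLM dv1 dv2 BY a b WITH cov1
 /EMMEANS=TABLES(a*b) COMPARE(a) ADJ(BONFERRONI)
 /EMMEANS=TABLES(a*b) COMPARE(b) ADJ(BONFERRONI)
 /DESIGN=a b a*b.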

  Thank you in advance.

  Nicole Canning



------------------------------

End of SPSSX-L Digest - 12 Jul 2006 to 13 Jul 2006 (#2006-192)
**************************************************************

Re: SPSSX-L Digest - 12 Jul 2006 to 13 Jul 2006 (#2006-192) List command output

Hector Maletta
As another long-term user, I did not create junk variables for syntax punctuation as Leslie did; instead I copy the list of variables into Excel, add columns as needed, write the necessary symbols in the first row (such as "/", or anything else such as ")newvar="), and then copy the whole thing back into syntax.
However, one of my recent hassles is that copying Excel cells into syntax copies the grid too, and not simply the cell contents as text. To patch things up in a hurry, I copy first from Excel to Word (as a Word table), then convert the table into text within Word, and finally copy that into syntax.
It is very clumsy and haphazard, but it works every time and saves me from thinking up new macros (or adapting old ones) to do it in a more elegant way. After all, it is not every day that I need this kind of thing, and the situations are always different.

Hector

-----Original Message-----
From: SPSSX(r) Discussion [mailto:[hidden email]] On Behalf Of Leslie Horst
Sent: Friday, July 14, 2006 10:12 AM
To: [hidden email]
Subject: Re: SPSSX-L Digest - 12 Jul 2006 to 13 Jul 2006 (#2006-192) List command output

As a long-time SPSS user, I came up with a low-tech but effective solution, probably developed in a version long ago that didn't have "summarize."

String junk (a1).
Compute junk = '/'.
List var = v1 junk v2 junk v3 junk v4 junk.

Then, when all of this goes into one column in Excel, you can easily parse it using
Data > Text to Columns > Delimited, specifying the slash (or some other delimiter that you are sure does not occur in your data).


-----Original Message-----
From: SPSSX(r) Discussion [mailto:[hidden email]] On Behalf Of Automatic digest processor
Sent: Friday, July 14, 2006 12:02 AM
To: Recipients of SPSSX-L digests
Subject: SPSSX-L Digest - 12 Jul 2006 to 13 Jul 2006 (#2006-192)

There are 26 messages totalling 1173 lines in this issue.

Topics of the day:

  1. export to xls
  2. List Command output (3)
  3. Likert scale (3)
  4. SPSS and Terminal Services
  5. Using Execute (2)
  6. reliability
  7. A Distinctly Non-Normal Distribution (3)
  8. execute
  9. SPSS-GLM query
 10. percentile scores
 11. About Mann-Whitney Test...
 12. Logistic Regression queries
 13. Working with dates - now pulled out ALL MY HAIR
 14. Survival plot
 15. Use of the weighted Kappa statistic
 16. White paper for percentile rank formula (2)
 17. Outreach Program Director Opening
 18. Follow-up MANCOVA interaction?

----------------------------------------------------------------------

Date:    Thu, 13 Jul 2006 09:30:01 +0300
From:    vlad simion <[hidden email]>
Subject: export to xls

Hi again,

I posted yesterday a message regarding an export to excel. No one has any
sugesstions about how to export a string variable to an xls file via ODBC
driver. My problem is that it only saves the first letter of the string and
not the whole string.

Here is part of the export syntax:

save translate
 /connect= 'dsn=excel files;dbq=c:\documents and
settings\ay08418\desktop\data\test.xls'
 /table=!quote(!n)
 /type=odbc
 /replace.

Thanks again!

Vlad.



--
Vlad Simion
Data Analyst
Tel:      +40 720130611

------------------------------

Date:    Thu, 13 Jul 2006 09:28:01 +0200
From:    Mark Webb <[hidden email]>
Subject: List Command output

Hi all
I'm using the List Command to list selected respondents and a few =
variables.
e.g. List Variables =3D Name Branch Dept Var1 Var2 Var3.

When I copy & past into Excel all variables listed go into one column.
How can I get a SPSS output that will export into columns - like for =
example the Frequencies command ?
The print command is much the same.

Regards

Mark Webb

------------------------------

Date:    Thu, 13 Jul 2006 11:44:55 +0300
From:    Dominic Fernandes <[hidden email]>
Subject: Likert scale

Hi All,

I have a question. How do we analyze a Likert scale (consisting of 7 =
responses) course evaluation questionnaire given to 25 teachers who =
attended an in-service course? Shall we treat the responses as scale =
variables and use parametric tests or shall we treat the responses as =
ordinal variables and use the nonparametric test.

Thank you in advance for your assistance.

Dominic.

------------------------------

Date:    Thu, 13 Jul 2006 11:55:27 +0100
From:    Clare Gill <[hidden email]>
Subject: SPSS and Terminal Services

Hello

We are considering a terminal services environment in UCD to
deliver some of our applications to the UCD community.  We would
still be installing the windows version of SPSS so we would like to
know if there has been or if there currently is any issues with
delivering SPSS in this way.

As we have a large user community in UCD, we would also like to
know what the recommended or expected hardware requirements
would be to run SPSS in this environment as some of the analysis
that can be done in SPSS can already use a lot of system resourses.
We would consider 250/500 users as a guideline for system
requirements.

--
Clare Gill
Computing Services, University College Dublin,
UCD Computer Centre, Belfield, Dublin 4.
Tel: +353-1-716 2007 Fax: +353-1-2837077
http://www.ucd.ie/computing

------------------------------

Date:    Thu, 13 Jul 2006 06:36:26 -0500
From:    "Peck, Jon" <[hidden email]>
Subject: Re: Using Execute

The reason that the default behavior is to generate EXECUTE commands when working from the transformation dialogs is that without that, the Data Editor window does not update immediately, and new users found this behavior confusing.  We wanted to make it easier to get started with SPSS.

Since you can turn this setting off, and since many users do not use syntax anyway, this is pretty harmless.  The help for this option explains the efficiency issue, although many users never find their way to this spot.

Regards,
Jon Peck

-----Original Message-----
From: SPSSX(r) Discussion [mailto:[hidden email]] On Behalf Of Lisa Stickney
Sent: Wednesday, July 12, 2006 9:50 PM
To: [hidden email]
Subject: Re: [SPSSX-L] Using Execute

Hi Richard,

>
> Good ever-lovin' grief! Thank you, Lisa! I didn't even know that SPSS
> inserted EXECUTEs following pasted syntax commands. (See below.)
>
> If SPSS puts EXECUTEs after every (pasted) transformation command, of
> COURSE users will think they're necessary.
>
> Other readers: Did this happen to you, too? It's precisely the right
> way to form precisely the wrong habit.
>

I don't think it does it with every transformation command, but it
definitely does it with COMPUTE, RECODE & COUNT.  Plus it does it with some
of the data commands -- FILTER, MATCH FILES, & ADD FILES that I know of.


> Beadle, ViAnn wrote at 12:53 PM 7/12/2006,
>
>>Although its not really obvious, you can turn off the EXECUTE command
>>[that's pasted after pasted transformation commands] in Edit>Options
>>under Transformation and Merge Operations.
>
> To be precise (since I had to look for it myself), it's
>
> Edit>Options>Data; then, under Transformation and Merge Operations,
> + To get the EXECUTE pasted, select "Calculate values immediately";
> + To omit it, select "Calculate values before used".
> (Which is the default?)
>

Thanks to both ViAnn & Richard for pointing this out.  I have happily
changed my copy of SPSS.

As for the default, I believe it's "Calculate values immediately."  The IT
people have installed versions 11.5, 12, 13 & now 14 on my laptop, and
they've all been set this way.  So, unless it's just my installation or
they're changing this upon installation (I doubt it), it's probably the
default.

One other comment I have about this is option is that I think it would have
little meaning to a newbie who's trying to learn SPSS sytax.  Unless you're
very familiar with SPSS, it's not clear how this relates to EXECUTE or why
it might be important.


    Best,
        Lisa

Lisa T. Stickney
Ph.D. Candidate
The Fox School of Business
     and Management
Temple University
[hidden email]

------------------------------

Date:    Thu, 13 Jul 2006 08:07:11 -0400
From:    "Dates, Brian" <[hidden email]>
Subject: Re: Likert scale

Dominic,

With seven (7) response categories, you can probably treat the data as
interval in nature.  I'm a purist and my principal area of work is in
measurement development and standardization, so I generally follow the IRT
approach and then do the analysis on the IRT scaled items.  Likert actually
developed a scaling method for his approach to items which would be worth
your time for future endeavors.  As part of your analysis, I would recommend
at least performing a reliability analysis to check for internal consistency
and the relationship of the items to total score, so you can identify any
"clinker" items that might exist.  Good luck.

Brian

Brian G. Dates, Director of Quality Assurance
Southwest Counseling and Development Services
1700 Waterman
Detroit, Michigan  48209
Telephone: 313.841.7442
FAX:  313.841.4470
email: [hidden email]


> -----Original Message-----
> From: Dominic Fernandes [SMTP:[hidden email]]
> Sent: Thursday, July 13, 2006 4:45 AM
> To:   [hidden email]
> Subject:      Likert scale
>
> Hi All,
>
> I have a question. How do we analyze a Likert scale (consisting of 7
> responses) course evaluation questionnaire given to 25 teachers who
> attended an in-service course? Shall we treat the responses as scale
> variables and use parametric tests or shall we treat the responses as
> ordinal variables and use the nonparametric test.
>
> Thank you in advance for your assistance.
>
> Dominic.
>
>
Confidentiality Notice for Email Transmissions: The information in this
message is confidential and may be legally privileged. It is intended solely
for the addressee.  Access to this message by anyone else is unauthorised.
If you are not the intended recipient, any disclosure, copying, or
distribution of the message, or any action or omission taken by you in
reliance on it, is prohibited and may be unlawful.  Please immediately
contact the sender if you have received this message in error. Thank you.

------------------------------

Date:    Thu, 13 Jul 2006 07:59:57 -0500
From:    "Beadle, ViAnn" <[hidden email]>
Subject: Re: List Command output

Use the summarize command which will give you a case listing in a table from which you can copy the cells:

summarize name branch dept var1 var2 var3 /cells none/format list nocasenum.

-----Original Message-----
From: SPSSX(r) Discussion [mailto:[hidden email]] On Behalf Of Mark Webb
Sent: Thursday, July 13, 2006 2:28 AM
To: [hidden email]
Subject: List Command output

Hi all
I'm using the List Command to list selected respondents and a few variables.
e.g. List Variables = Name Branch Dept Var1 Var2 Var3.

When I copy & past into Excel all variables listed go into one column.
How can I get a SPSS output that will export into columns - like for example the Frequencies command ?
The print command is much the same.

Regards

Mark Webb

------------------------------

Date:    Thu, 13 Jul 2006 07:24:29 -0700
From:    razina khayat <[hidden email]>
Subject: reliability

Hi all,
   I'm trying to do a test-retest reliability analysis on attachment loss measurement. Attachment loss is measured in millimeters ranging from 0 to about 14. my question is how to do this within ± 1 mm accuracy. Also for validity assessment (using gold standard), how could we measure the same variable within 1 mm accuracy.
  thanks,
  razina



---------------------------------
Talk is cheap. Use Yahoo! Messenger to make PC-to-Phone calls.  Great rates starting at 1¢/min.

------------------------------

Date:    Thu, 13 Jul 2006 09:12:28 -0600
From:    Stevan Nielsen <[hidden email]>
Subject: A Distinctly Non-Normal Distribution

Dear Colleagues,

I have stumbled upon an interesting phenomenon: I have discovered that
consumption of a valuable resource conforms to a very regular, reverse
J-shaped distribution.  The modal case in our large sample (N = 16,000)
consumes one unit, the next most common case consumes two units, the
next most common three units, the next most common four units -- and
this is the median case, and so on.  The average is at about 9.7 units,
which falls between the 72nd and 73rd percentile in the distribution --
clearly NOT an indicator of central tendency.

I used SPSS Curve Estimation to examine five functional relationships
between units consumed and proportion of consumers in the sample,
testing proportion of consumers in the sample as linear, logarithmic,
inverse, quadratic, or cubic functions of number of units consumed.  I
found that the reciprocal model, estimating proportion of cases as the
inverse of units consumed, was clearly the best solution, yielding a
remarkable, and very reliable R2 = .966.  All five models were reliable,
but the next best was the logarithmic solution, with R2 = .539; worst
was the linear model, with R2 = .102.

These seems like a remarkably regular, quite predictable relationship.
I've spent my career so enamored with normal distributions that I'm not
sure what to make of this distribution.  I have several questions for
your consideration:

Do any of you have experience with such functions?  (I believe it would
be correct to call this a decay functions.)

Where are such functions most likely to occur in nature, commerce,
epidemiology, genetics, healthcare, and so on?

What complications arise when attempting to form statistical inferences
where such population distributions are present?  (We have other
measurements for subjects in this distributions, measurements which are
quite nicely normal in their distributions.)

Your curious colleague,

lars nielsen

Stevan Lars Nielsen, Ph.D.
Brigham Young University

801-422-3035; fax 801-422-0175

------------------------------

Date:    Thu, 13 Jul 2006 08:31:18 -0700
From:    "P. Scott Dixon" <[hidden email]>
Subject: Re: execute

I agree with ViAnn.  With all respect to the SPSS wunderkinds, I think the use
of executes is useful to find flaws in my code at the point they occur, and the
cost in processor time is negligible with small to mid size samples.  It is a
strong tool in my, admittedly, less-than-perfect code.

Scott

 -----------------------------

Date:    Wed, 12 Jul 2006 07:04:19 -0500
From:    "Beadle, ViAnn" <[hidden email]>
Subject: Re: reverse concat

OK, I'll take your bait.

I don't actually use EXECUTE in syntax, but when I'm working out gnarly
transformation problems I frequently Run Pending Transformations (which
generates your dreaded EXECUTE) from the data editor and see the effects
of my transformations immediately. My strategy is to work through the
problem an "executable" unit at a time and check out the unit by running
the pending transformations. Unless you're dealing with wide (more than
200 or so variables) or long (more than 10,000 or so cases) data, the time
to actually do the transformation is much less than the think time to
step mentally through the process.

What does that extra processing really cost you? It's not like the bad
old days when every job submittal at $1.00 a pop ate up my account at
the University of Chicago Comp Center.

________________________________

------------------------------

Date:    Thu, 13 Jul 2006 16:55:13 +0100
From:    Shweta Manchanda <[hidden email]>
Subject: SPSS-GLM query

I would like to know what the estimated marginal means in the General
Linear Model mean. I would be grateful for any help.

Many thanks.

------------------------------

Date:    Thu, 13 Jul 2006 09:57:03 +0300
From:    [hidden email]
Subject: Re: percentile scores

Small corrections: to obtain percentile scores using formulas described in the psychometric literature (see, for instance, Crocker, L., & Algina, J. (1991). Introduction to classical and modern test theory. Orlando, FL: Harcourt Brace Jovanovich), I would use slightly different options:

1)    Under the Transform menu, choose Rank Cases...
2)    Choose the variable containing the raw test scores
3)    Click on the Ties... box and choose MEAN as the Rank Assigned to Ties
4)    Click on the Rank Cases: Types... box and check Fractional rank as %
5)    Check Rankit under Proportion Estimation Formula
6)    Uncheck Rank (unless you want cumulative frequencies)
7)    Click on OK
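
In syntax, these menu choices correspond roughly to the following sketch
(the raw-score variable name rawscore is assumed):

RANK VARIABLES=rawscore
  /TIES=MEAN
  /FRACTION=RANKIT
  /PERCENT
  /PROPORTION.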


Dr. Alexander Vinogradov, Associate Professor
Sociology and Psychology Faculty
National Taras Shevchenko University
Ukraine

>> Statisticsdoc <[hidden email]> wrote: Humphrey,

>> There are two ways to compute percentile scores in SPSS:

>> You can use menus to compute percentile ranks as follows:

>> 1.)    Under the Transform menu, choose Rank Cases...
>> 2.)    Choose the variable containing the raw test scores
>> 3.)    Click on the Ties... box and choose High as the Rank Assigned to Ties
>> 4.)    Click on the Rank Cases: Types... box and Check Fractional rank as %
>> 5.)    Uncheck Rank (Unless you want cumulative frequencies)
>> 6.)    Click on OK
>> 7.)    SPSS will label the variable that contains the percentile ranks "PCT001".




------------------------------

Date:    Thu, 13 Jul 2006 02:59:55 -0500
From:    John McConnell <[hidden email]>
Subject: Re: List Command output

 Hi Mark

 If you use the Case Summaries option (from memory this is in the Analyze>Reports menu; in syntax, SUMMARIZE) you get a list inside a pivot table, which is the key to Excel compatibility.
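
 For example, a sketch using the variables from the original posting:

 SUMMARIZE
   /TABLES=Name Branch Dept Var1 Var2 Var3
   /FORMAT=LIST NOCASENUM
   /CELLS=NONE.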

 From there you should be able to cut/paste or even export the table to Excel to keep more of the formatting.

 hth

 john

 John McConnell
 Applied Insights



------------------------------

Date:    Thu, 13 Jul 2006 06:55:35 -0500
From:    "Oliver, Richard" <[hidden email]>
Subject: Re: Using Execute

The default behavior is for the GUI to generate an Execute statement
after transformation commands generated from dialogs (e.g. the Compute
dialog). This is the case for any command generated from the dialogs
that requires some subsequent command to read the data (which is why it
also happens with Match Files and Add Files). AFAIK, Filter doesn't
require a subsequent command to read the data, but the same dialog that
generates Filter syntax can also generate Select If syntax.

There are reasons this was deemed to be the best default for the GUI,
but it may not be ideal for those attempting to learn syntax by pasting
dialog selections (although all those Executes will tell you which
commands require some other command to read the data).
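
For instance, pasting from the Compute dialog produces something along
these lines (a sketch):

COMPUTE total = var1 + var2.
EXECUTE.

With the option turned off, only the COMPUTE line is pasted, and the
transformation runs when a later command reads the data.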

________________________________

From: SPSSX(r) Discussion on behalf of Lisa Stickney
Sent: Wed 7/12/2006 9:49 PM
To: [hidden email]
Subject: Re: Using Execute



Hi Richard,

>
> Good ever-lovin' grief! Thank you, Lisa! I didn't even know that SPSS
> inserted EXECUTEs following pasted syntax commands. (See below.)
>
> If SPSS puts EXECUTEs after every (pasted) transformation command, of
> COURSE users will think they're necessary.
>
> Other readers: Did this happen to you, too? It's precisely the right
> way to form precisely the wrong habit.
>

I don't think it does it with every transformation command, but it
definitely does it with COMPUTE, RECODE & COUNT.  Plus it does it with some
of the data commands -- FILTER, MATCH FILES, & ADD FILES that I know of.


> Beadle, ViAnn wrote at 12:53 PM 7/12/2006,
>
>>Although it's not really obvious, you can turn off the EXECUTE command
>>[that's pasted after pasted transformation commands] in Edit>Options
>>under Transformation and Merge Operations.
>
> To be precise (since I had to look for it myself), it's
>
> Edit>Options>Data; then, under Transformation and Merge Operations,
> + To get the EXECUTE pasted, select "Calculate values immediately";
> + To omit it, select "Calculate values before used".
> (Which is the default?)
>

Thanks to both ViAnn & Richard for pointing this out.  I have happily
changed my copy of SPSS.

As for the default, I believe it's "Calculate values immediately."  The IT
people have installed versions 11.5, 12, 13 & now 14 on my laptop, and
they've all been set this way.  So, unless it's just my installation or
they're changing this upon installation (I doubt it), it's probably the
default.

One other comment I have about this option is that I think it would have
little meaning to a newbie who's trying to learn SPSS syntax.  Unless you're
very familiar with SPSS, it's not clear how this relates to EXECUTE or why
it might be important.


    Best,
        Lisa

Lisa T. Stickney
Ph.D. Candidate
The Fox School of Business
     and Management
Temple University
[hidden email]

------------------------------

Date:    Thu, 13 Jul 2006 18:13:33 +0200
From:    Karl Koch <[hidden email]>
Subject: About Mann-Whitney Test...

Hi all,

The Mann-Whitney test assumes independent samples. Andy Field's book [2005] also says that it assumes that datapoints are measured from different people - meaning that one person cannot participate in both groups. I am not sure whether Andy Field here assumes that a datapoint always maps directly to a person (because of his background in psychology). I find this a bit confusing and would like to ask you.

My datapoints are actually measures on a 6-point scale, which come from people, but several from each person. Each person participates in each of the two groups, and I want to test whether the two groups differ statistically significantly. The order in which people participate in these two groups is counterbalanced and should therefore be independent.

What do you think? Is this a case for Mann-Whitney then?

Best Regards,
Karl

------------------------------

Date:    Thu, 13 Jul 2006 11:31:32 -0500
From:    Anthony Babinec <[hidden email]>
Subject: Re: A Distinctly Non-Normal Distribution

Here are a couple of general comments.

While the normal distribution might be a useful
assumed distribution for errors in regression, there
is no reason to think that it is necessarily useful for
summarizing all phenomena out there in the world.

As you have described your data, they are counts.
In other words, values are 1, 2, 3 etc., and not
real values in some interval.

Are you looking at consumption in some fixed unit of time -
say week, month, year? Given some assumptions, there
are distributions such as the Poisson that might
be appropriate. It also could be the case that
what you are studying represents a mixture of types,
say usage types (low, medium, high), though that may or
may not be the case here.

Pete Fader (Wharton) and Bruce Hardie (London Business School)
have a nice course on probability models in marketing that is
regularly given at AMA events.


------------------------------

Date:    Thu, 13 Jul 2006 11:25:50 -0400
From:    K <[hidden email]>
Subject: Logistic Regression queries

Hi all,

I’m currently learning how to use Logistic Regression. I have a couple of
queries that I can’t seem to find the answer to:

1) Most of my odds ratios are around 1, with some higher at around
15, and one at 386.  This seems extremely high. Is there an explanation for
such a high odds ratio?
2) I’ve read that high standard errors signal multicollinearity. Just
how high is high, though?

------------------------------

Date:    Thu, 13 Jul 2006 14:04:11 -0300
From:    Hector Maletta <[hidden email]>
Subject: Re: A Distinctly Non-Normal Distribution

The phenomenon you are describing seems to follow a Poisson distribution.
There are also other asymmetrical distributions that apply to multiple
phenomena in nature and society. The Poisson distribution was first tried by
Bortkiewicz in the early 20th century to predict the number of Prussian
soldiers killed annually by a horse-kick (or was it the number of times an
average soldier would be kicked by a horse during his service time? I do not
remember precisely), and applies to any event that could happen multiple
times with decreasing probability of repetition.
Another famous asymmetrical distribution, first rising to a maximum at some
low value and then decreasing gradually towards higher values, is the Pareto
equation for the distribution of income. Pareto found few people with
implausibly low incomes, then a maximum frequency around the most common
wage level, and then a very long tail with decreasing frequencies as income
increases all the way up to the Bill Gates level.
There is no reason to expect these phenomena to follow a symmetrical, let
alone a normal Gaussian, distribution.
Hector



------------------------------

Date:    Thu, 13 Jul 2006 13:13:42 -0400
From:    "[Ela Bonbevan]" <[hidden email]>
Subject: Working with dates - now pulled out ALL MY HAIR

Many thanks to all who helped with my date problems.  I was able to use
some of the syntax to fix some of my dates and I am most grateful for the
help.

I must confess that I don't find the date import from Excel to SPSS very
straightforward.  The data in my Excel database comes from two sources, and
the date formats are all over the place.

One set comes as 19950816, and this is the easiest.  The other comes like
this, 1/20/1999, and displays like this: 20-Jan-99.  Today when I did the
Excel import it brought up the following 4 cells, where 38058 was
supposed to be 13 March 04 and 38205 was supposed to be 7 August 04.  I
have made numerous attempts to fix this by standardizing the formats in
Excel before I bring the sheet in, but formatting the cells appears to have
no effect.

08-JAN-04 38058            07.07.04  38205

Any help??

Diane

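
For the two patterns described above, two conversion sketches in syntax
(the names rawnum for the 19950816-style values and xlserial for the Excel
date serials are assumed; Excel serials count days from 30 Dec 1899, while
SPSS stores dates in seconds):

* yyyymmdd number, e.g. 19950816.
COMPUTE dt1 = DATE.DMY(MOD(rawnum,100), TRUNC(MOD(rawnum,10000)/100),
              TRUNC(rawnum/10000)).
* Excel date serial, e.g. 38058.
COMPUTE dt2 = DATE.MDY(12,30,1899) + xlserial * 86400.
FORMATS dt1 dt2 (DATE11).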

------------------------------

Date:    Thu, 13 Jul 2006 14:00:19 -0400
From:    Trinh Luong <[hidden email]>
Subject: Survival plot

Dear All,
  I'd like to change the y-axis labels on a Kaplan-Meier survival plot from proportions to percentages but I'm not sure how.  Could someone help me with this problem?
  Many thanks,
  Trinh Luong, MSc.
  Erasmus MC Rotterdam

------------------------------

Date:    Thu, 13 Jul 2006 15:06:52 -0300
From:    "Della Mora, Marcelo" <[hidden email]>
Subject: Likert scale

Hi Dominic

Try a reliability analysis, which tests internal consistency
and the relationship of each item to the total score: item-total correlations, covariances...
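
For example, a sketch (the item variable names are assumed):

RELIABILITY VARIABLES=item1 TO item7
  /SCALE('EVAL') ALL
  /MODEL=ALPHA
  /SUMMARY=TOTAL.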

hope this helps you


Marcelo Della Mora




------------------------------

Date:    Thu, 13 Jul 2006 19:34:01 +0100
From:    Margaret MacDougall <[hidden email]>
Subject: Use of the weighted Kappa statistic

Dear all

  Apologies for cross-posting


  I plan to compare students' self-assessment scores (within the range A-E, or something similar) to the scores these students are allocated by their examiners (the 2nd examiners, say).  I would like to consider using a weighted Kappa statistic. However, several questions have emerged about how best to proceed, and I would be most grateful for assistance with each of them.

  Could I please have some advice on how to decide between linear and quadratic weighting as the best choice in this case?
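
  For reference, the two standard weighting schemes for k ordered
  categories assign agreement weight w(i,j) = 1 - |i-j|/(k-1) (linear) or
  w(i,j) = 1 - (i-j)^2/(k-1)^2 (quadratic) to the cell for ratings i and j.
  Quadratic weights penalize large disagreements relatively more heavily,
  and quadratically weighted Kappa is closely related to an intraclass
  correlation, which may help in choosing between them.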

  Further, whilst the raters fall conveniently into two perfectly distinguishable groups ('student' and '2nd examiner'), these raters change from student to student, although occasionally one 2nd examiner may rate more than one student. As I understand it, in its original form the weighted Kappa statistic was designed not only under the assumption of there being two distinct classes of raters but also that these raters would not change from subject to subject. I am therefore concerned that a standard weighted Kappa statistic may not be the correct one for me.

  A related question is what formula to use for the standard error of the weighted Kappa.

  To summarize, I have raised three main questions: the first relates to the type of weighting to assume, the second to the appropriateness of using a standard Kappa statistic for my problem, and the third to what formula to use for the standard error of the recommended Kappa statistic.

  I look forward to receiving some much needed help.

  Thank you so much

  Best wishes

  Margaret




------------------------------

Date:    Thu, 13 Jul 2006 12:54:20 -0700
From:    Frank Berry <[hidden email]>
Subject: White paper for percentile rank formula

Hi,

Is there a white paper that explains the formula for each of the following percentiles in SPSS? When there are missing values in a numeric variable ranging from 0 to 100 (all integers), does the same percentile formula apply to that variable? For example, an equal percentile can be obtained for the values 49, 50 (missing) and 51. Or, in other cases, the same value of 49 could have two or more percentiles.

TIA.
Frank

  /PERCENTILES= 1 2 3 4 5 6 7 8 9 10
 11 12 13 14 15 16 17 18 19 20
 21 22 23 24 25 26 27 28 29 30
 31 32 33 34 35 36 37 38 39 40
 41 42 43 44 45 46 47 48 49 50
 51 52 53 54 55 56 57 58 59 60
 61 62 63 64 65 66 67 68 69 70
 71 72 73 74 75 76 77 78 79 80
 81 82 83 84 85 86 87 88 89 90
 91 92 93 94 95 96 97 98 99
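
For context, that subcommand sits inside a complete command along these
lines (a sketch; the variable name score and the shortened percentile list
are assumed):

FREQUENCIES VARIABLES=score
  /FORMAT=NOTABLE
  /PERCENTILES=1 5 10 25 50 75 90 95 99.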





------------------------------

Date:    Thu, 13 Jul 2006 17:44:45 -0500
From:    "Reutter, Alex" <[hidden email]>
Subject: Re: White paper for percentile rank formula

See the algorithms document for the procedure(s) you're interested in.

http://support.spss.com/tech/Products/SPSS/Documentation/Statistics/algorithms/index.html

Use Guest/Guest as the login/password.

Alex



------------------------------

Date:    Thu, 13 Jul 2006 16:43:18 -0400
From:    AMANDA THOMAS <[hidden email]>
Subject: Outreach Program Director Opening

Hello,
I have attached an Outreach Program Director job description.  This
position is located within the Bureau of Research Training and Services at
Kent State University's College of Education, Health and Human Services.
The department works primarily in evaluating grants and assisting
clients in understanding and using their data. Please pass this on to
anyone you know who may be interested and/or qualified.  Applications can
be submitted online at https://jobs.kent.edu.

Amanda S. Thomas
Bureau of Research Training and Services
Kent State University
507 White Hall
Kent, Ohio  44242
Phone:  (330) 672-0788
ADMINISTRATIVE/PROFESSIONAL JOB DESCRIPTION
Developed for Equal Opportunity


CLASS TITLE:  Outreach Program Director

KSU Class Code: AD47   EEO Code: 1C   FLSA: Exempt   Pay Grade: 08


BASIC FUNCTION:

   To plan and direct all operational, administrative, and financial
   activities of a designated educational outreach program.

CHARACTERISTIC DUTIES AND RESPONSIBILITIES:

   Develop and market designated educational outreach program.

   Establish objectives; develop strategies for achieving objectives.

   Oversee University-designated program budget.

   Develop and/or revise program policies and procedures.

   Serve as liaison to various constituent groups relative to program
   activities.

   Develop linkages with external organizations, professional
   associations, and community groups to support program development.

   Oversee the promotion and/or communication on all aspects of the
   program by strategically planning for the development and
   distribution of various materials.

   Identify and secure program funding opportunities.

   Evaluate and implement changes to programs.

   Serve on various University committees.

   Perform related duties as assigned.

   Employees assigned in a sales unit and sales management role may be
   responsible for:
      ·  Identifying a list of prospects and managing the communication
         processes with the prospects among the unit staff members.
      ·  Meeting with client prospects (i.e. human resource managers,
         operations managers, decision-makers, executives, professionals
         and employees) to present information about Kent State’s
         services and products.
      ·  Pricing solutions proposed, writing and reviewing proposals,
         contracts and training plans reflecting the organizations’
         needs and Kent State’s solutions.
      ·  Maintaining electronic records utilizing client relationship
         management software.  Reporting on client activity using the
         software.
      ·  Strategically planning for contracted sales activity, analyzing
         and reporting on prospect and client activity.

REPORTS TO:

   Designated Supervisor

LEADERSHIP AND SUPERVISION:

   Leadership of a small department, unit, or major function and/or
   direct supervision over administrative/professional employees.

MINIMUM QUALIFICATIONS:

   Must pass criminal background check.

   Education and Experience:

      Bachelor’s degree in a relevant field; Master’s degree preferred;
      four to five years relevant experience.

   Other Knowledge, Skills, and Abilities:

      Knowledge of personal computer applications; budgeting; strategic
      planning.

      Knowledge of business, employee development and/or organization
      development approaches and concepts.

      Skill in written and interpersonal communication.

      Demonstrated ability to maintain client relationships for
      retaining and/or cross-selling clients.

      Demonstrated ability to strategically plan for contract sales
      services and for prospecting and client calls for a small unit.

      Ability to provide leadership and direction.

      Evaluation experience, including writing grants and composing
      budget proposals for grant evaluations.

      Advanced understanding of SPSS software.

PHYSICAL REQUIREMENTS:

      Light work: Exerting up to 20 pounds of force occasionally, and/or
      up to 10 pounds of force frequently, and/or a negligible amount of
      force constantly to move objects.  Typically requires sitting,
      walking, standing, bending, keying, talking, hearing, seeing and
      repetitive motions.

      Incumbent may be required to travel from building to building
      frequently and campus to campus occasionally.  For those in a
      sales management role, incumbent will be required to travel to
      client sites frequently.

The intent of this description is to illustrate the types of duties and
responsibilities that will be required of positions given this title and
should not be interpreted to describe all the specific duties and
responsibilities that may be required in any particular position. Directly
related experience/education beyond the minimum stated may be substituted
where appropriate at the discretion of the Appointing Authority. Kent State
University reserves the right to revise or change job duties, job hours,
and responsibilities.

   FILE: AD47
   SOURCE:  Job Description
   ANALYST: ds
   DEPARTMENTAL AUTHORIZATION:       A.Lane 8/7/05

------------------------------

Date:    Thu, 13 Jul 2006 19:05:13 -0700
From:    jamie canataro <[hidden email]>
Subject: Follow-up MANCOVA interaction?

I have a question regarding a MANCOVA interaction and was hoping for some assistance.

  I am performing a 2 X 3 MANCOVA and have obtained a significant interaction.  I would like to follow up the significant interaction to determine which of the specific means are statistically different.  Would this require me to create a syntax program? And if so, how can I acquire/create/borrow the syntax?

  Thank you in advance.

  Nicole Canning



------------------------------

End of SPSSX-L Digest - 12 Jul 2006 to 13 Jul 2006 (#2006-192)
**************************************************************
Reply | Threaded
Open this post in threaded view
|

Re: SPSSX-L Digest - 12 Jul 2006 to 13 Jul 2006 (#2006-192) List command output

Meyer, Gregory J
I use the same kind of procedure as Hector, though I typically don't encounter the problem of tables being copied into the syntax window. But when I need to get rid of table formatting from Excel, I copy into Notepad rather than Word, as this saves the step of converting from table to text.

Greg

| -----Original Message-----
| From: SPSSX(r) Discussion [mailto:[hidden email]]
| On Behalf Of Hector Maletta
| Sent: Friday, July 14, 2006 9:33 AM
| To: [hidden email]
| Subject: Re: SPSSX-L Digest - 12 Jul 2006 to 13 Jul 2006
| (#2006-192) List command output
|
| As another long-term user, I did not create junk variables
| for syntax punctuation as Leslie did, but I copy the list of
| variables into Excel, add columns as needed, write the
| necessary symbols in the first row, such as "/" or anything
| else such as ")newvar=", then copy the whole to syntax.
| However, one of my recent hassles is that copying Excel cells
| into syntax copies the grid too, and not simply the cell
| contents as text. To patch up in a hurry, I copy first from
| Excel to Word (as a Word table), then convert the table into
| text within Word, and finally copy to syntax.
| It is very clumsy and haphazard, but it works every time and
| saves me from thinking of new macros (or adapting old ones)
| to do it in a more elegant way. After all, it is not every day
| that I need this kind of thing, and the situations are always
| different.
|
| Hector
|
| -----Mensaje original-----
| De: SPSSX(r) Discussion [mailto:[hidden email]] En
| nombre de Leslie Horst
| Enviado el: Friday, July 14, 2006 10:12 AM
| Para: [hidden email]
| Asunto: Re: SPSSX-L Digest - 12 Jul 2006 to 13 Jul 2006
| (#2006-192) List command output
|
| As a long-time SPSS user, I did a low-tech but effective
| solution, probably developed in a version long ago that
| didn't have "summarize."
|
| String junk (a1).
| Compute junk = '/'.
| List var = v1 junk v2 junk v3 junk v4 junk.
|
| Then when all of this goes into one column in Excel you can
| easily parse it using
| data/text to columns/delimited and then specifying the slash
| mark (or some other delimiter that you are sure does not
| exist in your data).
|
|
| -----Original Message-----
| From: SPSSX(r) Discussion [mailto:[hidden email]]
| On Behalf Of Automatic digest processor
| Sent: Friday, July 14, 2006 12:02 AM
| To: Recipients of SPSSX-L digests
| Subject: SPSSX-L Digest - 12 Jul 2006 to 13 Jul 2006 (#2006-192)
|
| There are 26 messages totalling 1173 lines in this issue.
|
| Topics of the day:
|
|   1. export to xls
|   2. List Command output (3)
|   3. Likert scale (3)
|   4. SPSS and Terminal Services
|   5. Using Execute (2)
|   6. reliability
|   7. A Distinctly Non-Normal Distribution (3)
|   8. execute
|   9. SPSS-GLM query
|  10. percentile scores
|  11. About Mann-Whitney Test...
|  12. Logistic Regression queries
|  13. Working with dates - now pulled out ALL MY HAIR
|  14. Survival plot
|  15. Use of the weighted Kappa statistic
|  16. White paper for percentile rank formula (2)
|  17. Outreach Program Director Opening
|  18. Follow-up MANCOVA interaction?
|
| ----------------------------------------------------------------------
|
| Date:    Thu, 13 Jul 2006 09:30:01 +0300
| From:    vlad simion <[hidden email]>
| Subject: export to xls
|
| Hi again,
|
| I posted yesterday a message regarding an export to excel. No
| one has any
| sugesstions about how to export a string variable to an xls
| file via ODBC
| driver. My problem is that it only saves the first letter of
| the string and
| not the whole string.
|
| Here is part of the export syntax:
|
| save translate
|  /connect= 'dsn=excel files;dbq=c:\documents and
| settings\ay08418\desktop\data\test.xls'
|  /table=!quote(!n)
|  /type=odbc
|  /replace.
|
| Thanks again!
|
| Vlad.
|
|
|
| --
| Vlad Simion
| Data Analyst
| Tel:      +40 720130611
|
| ------------------------------
|
| Date:    Thu, 13 Jul 2006 09:28:01 +0200
| From:    Mark Webb <[hidden email]>
| Subject: List Command output
|
| Hi all
| I'm using the List Command to list selected respondents and a few =
| variables.
| e.g. List Variables =3D Name Branch Dept Var1 Var2 Var3.
|
| When I copy & past into Excel all variables listed go into one column.
| How can I get a SPSS output that will export into columns - like for =
| example the Frequencies command ?
| The print command is much the same.
|
| Regards
|
| Mark Webb
|
| ------------------------------
|
| Date:    Thu, 13 Jul 2006 11:44:55 +0300
| From:    Dominic Fernandes <[hidden email]>
| Subject: Likert scale
|
| Hi All,
|
| I have a question. How do we analyze a Likert scale (consisting of 7 =
| responses) course evaluation questionnaire given to 25 teachers who =
| attended an in-service course? Shall we treat the responses as scale =
| variables and use parametric tests or shall we treat the
| responses as =
| ordinal variables and use the nonparametric test.
|
| Thank you in advance for your assistance.
|
| Dominic.
|
| ------------------------------
|
| Date:    Thu, 13 Jul 2006 11:55:27 +0100
| From:    Clare Gill <[hidden email]>
| Subject: SPSS and Terminal Services
|
| Hello
|
| We are considering a terminal services environment in UCD to
| deliver some of our applications to the UCD community.  We would
| still be installing the windows version of SPSS so we would like to
| know if there has been or if there currently is any issues with
| delivering SPSS in this way.
|
| As we have a large user community in UCD, we would also like to
| know what the recommended or expected hardware requirements
| would be to run SPSS in this environment as some of the analysis
| that can be done in SPSS can already use a lot of system resourses.
| We would consider 250/500 users as a guideline for system
| requirements.
|
| --
| Clare Gill
| Computing Services, University College Dublin,
| UCD Computer Centre, Belfield, Dublin 4.
| Tel: +353-1-716 2007 Fax: +353-1-2837077
| http://www.ucd.ie/computing
|
| ------------------------------
|
| Date:    Thu, 13 Jul 2006 06:36:26 -0500
| From:    "Peck, Jon" <[hidden email]>
| Subject: Re: Using Execute
|
| The reason that the default behavior is to generate EXECUTE
| commands when working from the transformation dialogs is that
| without that, the Data Editor window does not update
| immediately, and new users found this behavior confusing.  We
| wanted to make it easier to get started with SPSS.
|
| Since you can turn this setting off, and since many users do
| not use syntax anyway, this is pretty harmless.  The help for
| this option explains the efficiency issue, although many
| users never find their way to this spot.
|
| Regards,
| Jon Peck
|
| -----Original Message-----
| From: SPSSX(r) Discussion [mailto:[hidden email]]
| On Behalf Of Lisa Stickney
| Sent: Wednesday, July 12, 2006 9:50 PM
| To: [hidden email]
| Subject: Re: [SPSSX-L] Using Execute
|
| Hi Richard,
|
| >
| > Good ever-lovin' grief! Thank you, Lisa! I didn't even know
| that SPSS
| > inserted EXECUTEs following pasted syntax commands. (See below.)
| >
| > If SPSS puts EXECUTEs after every (pasted) transformation
| command, of
| > COURSE users will think they're necessary.
| >
| > Other readers: Did this happen to you, too? It's precisely the right
| > way to form precisely the wrong habit.
| >
|
| I don't think it does it with every transformation command, but it
| definitely does it with COMPUTE, RECODE & COUNT.  Plus it
| does it with some
| of the data commands -- FILTER, MATCH FILES, & ADD FILES that
| I know of.
|
|
| > Beadle, ViAnn wrote at 12:53 PM 7/12/2006,
| >
| >>Although its not really obvious, you can turn off the
| EXECUTE command
| >>[that's pasted after pasted transformation commands] in Edit>Options
| >>under Transformation and Merge Operations.
| >
| > To be precise (since I had to look for it myself), it's
| >
| > Edit>Options>Data; then, under Transformation and Merge Operations,
| > + To get the EXECUTE pasted, select "Calculate values immediately";
| > + To omit it, select "Calculate values before used".
| > (Which is the default?)
| >
|
| Thanks to both ViAnn & Richard for pointing this out.  I have happily
| changed my copy of SPSS.
|
| As for the default, I believe it's "Calculate values
| immediately."  The IT
| people have installed versions 11.5, 12, 13 & now 14 on my laptop, and
| they've all been set this way.  So, unless it's just my
| installation or
| they're changing this upon installation (I doubt it), it's
| probably the
| default.
|
| One other comment I have about this is option is that I think
| it would have
| little meaning to a newbie who's trying to learn SPSS sytax.
| Unless you're
| very familiar with SPSS, it's not clear how this relates to
| EXECUTE or why
| it might be important.
|
|
|     Best,
|         Lisa
|
| Lisa T. Stickney
| Ph.D. Candidate
| The Fox School of Business
|      and Management
| Temple University
| [hidden email]
|
| ------------------------------
|
| Date:    Thu, 13 Jul 2006 08:07:11 -0400
| From:    "Dates, Brian" <[hidden email]>
| Subject: Re: Likert scale
|
| Dominic,
|
| With seven (7) response categories, you can probably treat the data as
| interval in nature.  I'm a purist and my principal area of work is in
| measurement development and standardization, so I generally
| follow the IRT
| approach and then do the analysis on the IRT scaled items.
| Likert actually
| developed a scaling method for his approach to items which
| would be worth
| your time for future endeavors.  As part of your analysis, I
| would recommend
| at least performing a reliability analysis to check for
| internal consistency
| and the relationship of the items to total score, so you can
| identify any
| "clinker" items that might exist.  Good luck.
|
| Brian
|
| Brian G. Dates, Director of Quality Assurance
| Southwest Counseling and Development Services
| 1700 Waterman
| Detroit, Michigan  48209
| Telephone: 313.841.7442
| FAX:  313.841.4470
| email: [hidden email]
|
|
| > -----Original Message-----
| > From: Dominic Fernandes [SMTP:[hidden email]]
| > Sent: Thursday, July 13, 2006 4:45 AM
| > To:   [hidden email]
| > Subject:      Likert scale
| >
| > Hi All,
| >
| > I have a question. How do we analyze a Likert scale (consisting of 7
| > responses) course evaluation questionnaire given to 25 teachers who
| > attended an in-service course? Shall we treat the responses as scale
| > variables and use parametric tests or shall we treat the
| responses as
| > ordinal variables and use the nonparametric test.
| >
| > Thank you in advance for your assistance.
| >
| > Dominic.
| >
| >
| Confidentiality Notice for Email Transmissions: The
| information in this
| message is confidential and may be legally privileged. It is
| intended solely
| for the addressee.  Access to this message by anyone else is
| unauthorised.
| If you are not the intended recipient, any disclosure, copying, or
| distribution of the message, or any action or omission taken by you in
| reliance on it, is prohibited and may be unlawful.  Please immediately
| contact the sender if you have received this message in
| error. Thank you.
|
| ------------------------------
|
| Date:    Thu, 13 Jul 2006 07:59:57 -0500
| From:    "Beadle, ViAnn" <[hidden email]>
| Subject: Re: List Command output
|
| Use the summarize command which will give you a case listing
| in a table from which you can copy the cells:
|
| summarize name branch dept var1 var2 var3 /cells none/format
| list nocasenum.
|
| -----Original Message-----
| From: SPSSX(r) Discussion [mailto:[hidden email]]
| On Behalf Of Mark Webb
| Sent: Thursday, July 13, 2006 2:28 AM
| To: [hidden email]
| Subject: List Command output
|
| Hi all
| I'm using the List Command to list selected respondents and a
| few variables.
| e.g. List Variables = Name Branch Dept Var1 Var2 Var3.
|
| When I copy & past into Excel all variables listed go into one column.
| How can I get a SPSS output that will export into columns -
| like for example the Frequencies command ?
| The print command is much the same.
|
| Regards
|
| Mark Webb
|
| ------------------------------
|
| Date:    Thu, 13 Jul 2006 07:24:29 -0700
| From:    razina khayat <[hidden email]>
| Subject: reliability
|
| Hi all,
|    I'm trying to do a test-retest reliability analysis on
| attachment loss measurement. Attachment loss is measured in
| millimeters ranging from 0 to about 14. my question is how to
| do this within ± 1 mm accuracy. Also for validity assessment
| (using gold standard), how could we measure the same variable
| within 1 mm accuracy.
|   thanks,
|   razina
|
|
|
| ---------------------------------
| Talk is cheap. Use Yahoo! Messenger to make PC-to-Phone
| calls.  Great rates starting at 1¢/min.
|
| ------------------------------
|
| Date:    Thu, 13 Jul 2006 09:12:28 -0600
| From:    Stevan Nielsen <[hidden email]>
| Subject: A Distinctly Non-Normal Distribution
|
| Dear Colleagues,
|
| I have stumbled upon an interesting phenomenon: I have discovered that
| consumption of a valuable resource conforms to a very regular, reverse
| J-shaped distribution.  The modal case in our large sample (N
| = 16,000)
| consumes one unit, the next most common case consumes two units, the
| next most common three units, the next most common four units -- and
| this is the median case, and so on.  The average is at about
| 9.7 units,
| which falls between the 72nd and 73rd percentile in the
| distribution --
| clearly NOT an indicator of central tendency.
|
| I used SPSS Curve Estimation to examine five functional relationships
| between units consumed and proportion of consumers in the sample,
| testing proportion of consumers in the sample as linear, logarithmic,
| inverse, quadratic, or cubic functions of number of units consumed.  I
| found that the reciprocal model, estimating proportion of cases as the
| inverse of units consumed, was clearly the best solution, yielding a
| remarkable, and very reliable R2 = .966.  All five models
| were reliable,
| but the next best was the logarithmic solution, with R2 = .539; worst
| was the linear model, with R2 = .102.
|
| These seems like a remarkably regular, quite predictable relationship.
| I've spent my career so enamored with normal distributions
| that I'm not
| sure what to make of this distribution.  I have several questions for
| your consideration:
|
| Do any of you have experience with such functions?  (I
| believe it would
| be correct to call this a decay functions.)
|
| Where are such functions most likely to occur in nature, commerce,
| epidemiology, genetics, healthcare, and so on?
|
| What complications arise when attempting to form statistical
| inferences
| where such population distributions are present?  (We have other
| measurements for subjects in this distributions, measurements
| which are
| quite nicely normal in their distributions.)
|
| Your curious colleague,
|
| lars nielsen
|
| Stevan Lars Nielsen, Ph.D.
| Brigham Young University
|
| 801-422-3035; fax 801-422-0175
|
| ------------------------------
|
| Date:    Thu, 13 Jul 2006 08:31:18 -0700
| From:    "P. Scott Dixon" <[hidden email]>
| Subject: Re: execute
|
| I agree with ViAnn.  With all respect to the SPSS
| wunderkinds, I think the use
| of executes is useful to find flaws in my code at the point
| they occur, and the
| cost in processor time is negligible with small to mid size
| samples.  It is a
| strong tool in my, admittedly, less-than-perfect code.
|
| Scott
|
|  -----------------------------
|
| Date:    Wed, 12 Jul 2006 07:04:19 -0500
| From:    "Beadle, ViAnn" <[hidden email]>
| Subject: Re: reverse concat
|
| OK, I'll take your bait.=20
| =20
| I don't actually use EXECUTE in syntax but when I'm work out gnarly =
| transformation problems I frequently Run Pending
| Transformations (which =
| generates your dreaded EXECUTE) from the data editor and see
| the effects =
| of my transformations immediately. My strategy is to work
| through the =
| problem a "executable" unit at a time and check out the unit
| by running =
| the pending transformations. Unless your dealing with wide
| (more than =
| 200 or so variables) or long (more than 10,000) or more cases
| the time =
| to actually do the transformation is much less than the think
| time to =
| step mentally through the process.=20
| =20
| What does that extra processing really cost you? It's not
| like the bad =
| old days when every job submittal at $1.00 a pop ate up my
| account at =
| the University of Chicago Comp Center.
|
| ________________________________
|
| ------------------------------
|
| Date:    Thu, 13 Jul 2006 16:55:13 +0100
| From:    Shweta Manchanda <[hidden email]>
| Subject: SPSS-GLM query
|
| I would like to know what the estimated marginal means in the General
| Linear Model mean. I would be grateful to get any help.
|
| Many thanks.
|
| ------------------------------
|
| Date:    Thu, 13 Jul 2006 09:57:03 +0300
| From:    [hidden email]
| Subject: Re: percentile scores
|
| Small corrections: to obtain percentile scores using formulas
| described in psychometric literatuire (see, for instance,
| Crocker, L., & Algina, J. (1991). Introduction to classical
| and modern test theory. Orlando, FL: Harcourt Brace
| Jovanovich.), I would use slightly different options:
|
| 1)    Under the Transform menu, choose Rank Cases&#8230;
| 2)    Choose the variable containing the raw test scores
| 3)    Click on the Ties&#8230; box and choose MEAN as the
| Rank Assigned to Ties
| 4)    Click on the Rank Cases: Types&#8230; box and Check
| Fractional rank as %
| 5)    Check Rankit under Proportion Estimation Formula
| 6)    Uncheck Rank (Unless you want cumulative frequencies)
| 7)    Click on OK
|
|
| Dr. Alexander Vinogradov, Associate Professor
| Sociology and Psychology Faculty
| National Taras Shevchenko University
| Ukraine
|
| >> Statisticsdoc <[hidden email]> wrote: Humphrey,
|
| >> There are two ways to compute percentile scores in SPSS:
|
| >> You can use menus to compute percentile ranks as follows:
|
| >> 1.)    Under the Transform menu, choose Rank Cases&#8230;
| >> 2.)    Choose the variable containing the raw test scores
| >> 3.)    Click on the Ties&#8230; box and choose High as the
| Rank Assigned to Ties
| >> 4.)    Click on the Rank Cases: Types&#8230; box and Check
| Fractional rank as %
| >> 5.)    Uncheck Rank (Unless you want cumulative frequencies)
| >> 6.)    Click on OK
| >> 7.)    SPSS will label the variable that contains the
| percentile ranks "PCT001".
|
|
|
|
| ------------------------------
|
| Date:    Thu, 13 Jul 2006 02:59:55 -0500
| From:    John McConnell <[hidden email]>
| Subject: Re: List Command output
|
|  Hi Mark
|
|  If you use the Case Summaries option (from memory this is in
| thenAnalyze>Reports menu and in syntax SUMMARIZE) you get a
| list inside a pivot table which is the key to Excel compatibility.
|
|  From there you should be able to cut/paste or even export
| the table to Excel to keep more of the formatting.
|
|  hth
|
|  john
|
|  John McConnell
|  Applied Insights
|
|
| ---------- Original Message ----------------------------------
| From: Mark Webb <[hidden email]>
| Reply-To: Mark Webb <[hidden email]>
| Date:          Thu, 13 Jul 2006 09:28:01 +0200
|
| >Hi all
| >I'm using the List Command to list selected respondents and
| a few variables.
| >e.g. List Variables = Name Branch Dept Var1 Var2 Var3.
| >
| >When I copy & past into Excel all variables listed go into
| one column.
| >How can I get a SPSS output that will export into columns -
| like for example the Frequencies command ?
| >The print command is much the same.
| >
| >Regards
| >
| >Mark Webb
| >
|
| ------------------------------
|
| Date:    Thu, 13 Jul 2006 06:55:35 -0500
| From:    "Oliver, Richard" <[hidden email]>
| Subject: Re: Using Execute
|
| The default behavior is for the GUI to generate an Execute statement =
| after transformation commands generated from dialogs (e.g.
| the Compute =
| dialog). This is the case for any command generated from the dialogs =
| that requires some subsequent command to read the data (which
| is why it =
| also happens with Match Files and Add Files). AFAIK, Filter doesn't =
| require a subsequent command to read the data, but the same
| dialog that =
| generates Filter syntax can also generate Select If syntax.=20
| =20
| There are reasons this was deemed to be the best default for
| the GUI, =
| but it may not be ideal for those attempting to learn syntax
| by pasting =
| dialog selections (although all those Executes will tell you which =
| commands require some other command to read the data).=20
|
| ________________________________
|
| From: SPSSX(r) Discussion on behalf of Lisa Stickney
| Sent: Wed 7/12/2006 9:49 PM
| To: [hidden email]
| Subject: Re: Using Execute
|
|
|
| Hi Richard,
|
| >
| > Good ever-lovin' grief! Thank you, Lisa! I didn't even know
| that SPSS
| > inserted EXECUTEs following pasted syntax commands. (See below.)
| >
| > If SPSS puts EXECUTEs after every (pasted) transformation
| command, of
| > COURSE users will think they're necessary.
| >
| > Other readers: Did this happen to you, too? It's precisely the right
| > way to form precisely the wrong habit.
| >
|
| I don't think it does it with every transformation command, but it
| definitely does it with COMPUTE, RECODE & COUNT.  Plus it
| does it with =
| some
| of the data commands -- FILTER, MATCH FILES, & ADD FILES that
| I know of.
|
|
| > Beadle, ViAnn wrote at 12:53 PM 7/12/2006,
| >
| >>Although its not really obvious, you can turn off the
| EXECUTE command
| >>[that's pasted after pasted transformation commands] in Edit>Options
| >>under Transformation and Merge Operations.
| >
| > To be precise (since I had to look for it myself), it's
| >
| > Edit>Options>Data; then, under Transformation and Merge Operations,
| > + To get the EXECUTE pasted, select "Calculate values immediately";
| > + To omit it, select "Calculate values before used".
| > (Which is the default?)
| >
|
| Thanks to both ViAnn & Richard for pointing this out.  I have happily
| changed my copy of SPSS.
|
| As for the default, I believe it's "Calculate values
| immediately."  The =
| IT
| people have installed versions 11.5, 12, 13 & now 14 on my laptop, and
| they've all been set this way.  So, unless it's just my
| installation or
| they're changing this upon installation (I doubt it), it's
| probably the
| default.
|
| One other comment I have about this is option is that I think
| it would =
| have
| little meaning to a newbie who's trying to learn SPSS sytax.  Unless =
| you're
| very familiar with SPSS, it's not clear how this relates to
| EXECUTE or =
| why
| it might be important.
|
|
|     Best,
|         Lisa
|
| Lisa T. Stickney
| Ph.D. Candidate
| The Fox School of Business
|      and Management
| Temple University
| [hidden email]
|
| ------------------------------
|
| Date:    Thu, 13 Jul 2006 18:13:33 +0200
| From:    Karl Koch <[hidden email]>
| Subject: About Mann-Whitney Test...
|
| Hi all,
|
| The Mann-Whitney assumes independent samples. In Andy Fields
| book [2005] it also says that it assumes that datapoints are
| measured from different people - meaning that one person
| cannot partipate in both groups. I am not sure, if Andy Field
| here assumes that a datapoint always maps directory to a
| person (because of his background in Psychology). I find this
| a bit confusing and wouild like to ask you.
|
| My datapoint are actually measures (on a 6-point scale)
| (which come from people but several from each person). Each
| person partipates in each of the two groups - from which I
| want to test stat. significance if they are different or not.
| The order with which people partipate in this two groups is
| counterballanced and should therefore be independent.
|
| What do you think? Is this a case for Mann-Whitney then?
|
| Best Regards,
| Karl
| --
|
|
| Der GMX SmartSurfer hilft bis zu 70% Ihrer Onlinekosten zu sparen!
| Ideal für Modem und ISDN: http://www.gmx.net/de/go/smartsurfer
|
| ------------------------------
|
| Date:    Thu, 13 Jul 2006 11:31:32 -0500
| From:    Anthony Babinec <[hidden email]>
| Subject: Re: A Distinctly Non-Normal Distribution
|
| Here are a couple general comments.
|
| While the normal distribution might be a useful
| assumed distribution for errors in regression, there
| is no reason to think that it is necessarily useful for
| summarizing all phenomena out there in the world.
|
| As you have described your data, they are counts.
| In other words, values are 1, 2, 3 etc., and not
| real values in some interval.
|
| Are you looking at consumption in some fixed unit of time -
| say week, month, year? Given some assumptions, there
| are distributions such as the poisson that might
| be appropriate. It also could be the case that
| what you are studying represents a mixture of types,
| say usage types (low, medium, high), though that may or
| may not be the case here.
|
| Pete Fader(Wharton) and Bruce Hardie(London Business School)
| have a nice course on probability models in marketing that is
| regularly given at AMA events.
|
| -----Original Message-----
| From: SPSSX(r) Discussion [mailto:[hidden email]]
| On Behalf Of
| Stevan Nielsen
| Sent: Thursday, July 13, 2006 10:12 AM
| To: [hidden email]
| Subject: A Distinctly Non-Normal Distribution
|
| Dear Colleagues,
|
| I have stumbled upon an interesting phenomenon: I have discovered that
| consumption of a valuable resource conforms to a very regular, reverse
| J-shaped distribution.  The modal case in our large sample (N
| = 16,000)
| consumes one unit, the next most common case consumes two units, the
| next most common three units, the next most common four units -- and
| this is the median case, and so on.  The average is at about
| 9.7 units,
| which falls between the 72nd and 73rd percentile in the
| distribution --
| clearly NOT an indicator of central tendency.
|
| I used SPSS Curve Estimation to examine five functional relationships
| between units consumed and proportion of consumers in the sample,
| testing proportion of consumers in the sample as linear, logarithmic,
| inverse, quadratic, or cubic functions of number of units consumed.  I
| found that the reciprocal model, estimating proportion of cases as the
| inverse of units consumed, was clearly the best solution, yielding a
| remarkable, and very reliable R2 = .966.  All five models
| were reliable,
| but the next best was the logarithmic solution, with R2 = .539; worst
| was the linear model, with R2 = .102.
|
| These seems like a remarkably regular, quite predictable relationship.
| I've spent my career so enamored with normal distributions
| that I'm not
| sure what to make of this distribution.  I have several questions for
| your consideration:
|
| Do any of you have experience with such functions?  (I
| believe it would
| be correct to call this a decay functions.)
|
| Where are such functions most likely to occur in nature, commerce,
| epidemiology, genetics, healthcare, and so on?
|
| What complications arise when attempting to form statistical
| inferences
| where such population distributions are present?  (We have other
| measurements for subjects in this distributions, measurements
| which are
| quite nicely normal in their distributions.)
|
| Your curious colleague,
|
| lars nielsen
|
| Stevan Lars Nielsen, Ph.D.
| Brigham Young University
|
| 801-422-3035; fax 801-422-0175
|
| ------------------------------
|
| Date:    Thu, 13 Jul 2006 11:25:50 -0400
| From:    K <[hidden email]>
| Subject: Logistic Regression queries
|
| Hi all,
|
| I’m currently learning how to use Logistic Regression. I have
| a couple of
| queries that I can’t seem to find the answer to;
|
| 1) Most  of my odds ratios are around 1, with some slightly higher at
| 15, and one at 386.  This seems extremely high. Is there an
| explanation for
| such a high odds ratio?
| 2) I’ve read that high standard errors signal multcollinearity. Just
| how high is high though?
|
| ------------------------------
|
| Date:    Thu, 13 Jul 2006 14:04:11 -0300
| From:    Hector Maletta <[hidden email]>
| Subject: Re: A Distinctly Non-Normal Distribution
|
| The phenomenon you are describing seems to follow a Poisson
| distribution.  There are also other asymmetrical distributions that
| apply to multiple phenomena in nature and society.  The Poisson
| distribution was first tried by Bortkiewicz in the early 20th century
| to predict the number of Prussian soldiers killed annually by a
| horse-kick (or was it the number of times an average soldier would be
| kicked by a horse during his service time? I do not remember
| precisely), and it applies to any event that could happen multiple
| times with decreasing probability of repetition.
|
| Another famous asymmetrical distribution, first rising to a maximum at
| some low value and then decreasing gradually towards higher values, is
| the Pareto equation for the distribution of income.  Pareto found few
| people with implausibly low incomes, then a maximum frequency around
| the most common wage level, and then a very long tail with decreasing
| frequencies as income increases, all the way up to Bill Gates' level.
| There is no reason to expect these phenomena to follow a symmetrical,
| let alone a normal Gaussian, distribution.
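|
| One quick check is to compare the observed proportions against Poisson
| probabilities computed at the sample mean.  A minimal sketch, assuming
| a variable named units and the reported mean of 9.7:
|
| * Poisson probability of each observed count, given mean 9.7.
| COMPUTE p_pois = PDF.POISSON(units, 9.7).
| EXECUTE.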
| Hector
|
|
|
| ------------------------------
|
| Date:    Thu, 13 Jul 2006 13:13:42 -0400
| From:    "[Ela Bonbevan]" <[hidden email]>
| Subject: Working with dates - now pulled out ALL MY HAIR
|
| Many thanks to all who helped with my date problems.  I was able to use
| some of the syntax to fix some of my dates and I am most grateful for
| the help.
|
| I must confess that I don't find the date import from Excel to SPSS
| very straightforward.  The data in my Excel database comes from two
| sources and the date formats are all over the place.
|
| One set comes as 19950816 and this is the easiest.  The other comes
| like this, 1/20/1999, and displays like this, 20-Jan-99.  Today when I
| did the Excel import it brought up the following 4 cells, where 38058
| was supposed to be 13 March 04 and 38205 was supposed to be 7 August
| 04.  I have made numerous attempts to fix this by standardizing the
| formats in Excel before I bring the sheet in, but formatting cells
| appears to have no effect.
|
| 08-JAN-04 38058            07.07.04  38205
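|
| For what it's worth, those stray numbers are Excel date serials -- days
| elapsed since 30 December 1899 in the Windows date system -- so once
| they arrive in SPSS as plain numbers they can be converted in syntax.
| A minimal sketch, where xdate is a hypothetical numeric variable
| holding the serial:
|
| * Excel serials count days; SPSS dates count seconds.
| COMPUTE sdate = DATE.DMY(30,12,1899) + xdate * 86400.
| FORMATS sdate (DATE11).
| EXECUTE.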
|
| Any help??
|
| Diane
|
| ------------------------------
|
| Date:    Thu, 13 Jul 2006 14:00:19 -0400
| From:    Trinh Luong <[hidden email]>
| Subject: Survival plot
|
| Dear All,
|   I'd like to change the y-axis labels on a Kaplan-Meier
| survival plot from proportions to percentages but I'm not
| sure how.  Could someone help me with this problem?
|   Many thanks,
|   Trinh Luong, MSc.
|   Erasmus MC Rotterdam
|
| ------------------------------
|
| Date:    Thu, 13 Jul 2006 15:06:52 -0300
| From:    "Della Mora, Marcelo" <[hidden email]>
| Subject: Likert scale
|
| Hi Dominic
|
| Try reliability analysis, which tests internal consistency and the
| relationship of each item to the total score: item-total correlation,
| covariance...
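|
| A minimal RELIABILITY sketch -- both the item names and the item count
| are hypothetical (here ten items, q1 to q10):
|
| * Cronbach's alpha with item-total statistics.
| RELIABILITY
|   /VARIABLES=q1 TO q10
|   /MODEL=ALPHA
|   /SUMMARY=TOTAL.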
|
| hope this helps you
|
|
| Marcelo Della Mora
|
|
|
| -----Original Message-----
| From: SPSSX(r) Discussion [mailto:[hidden email]]On
| Behalf Of
| Dominic Fernandes
| Sent: Thursday, July 13, 2006 5:45 AM
| To: [hidden email]
| Subject: Likert scale
|
|
| Hi All,
|
| I have a question. How do we analyze a Likert scale
| (consisting of 7 responses) course evaluation questionnaire
| given to 25 teachers who attended an in-service course? Shall
| we treat the responses as scale variables and use parametric
| tests or shall we treat the responses as ordinal variables
| and use the nonparametric test.
|
| Thank you in advance for your assistance.
|
| Dominic.
|
| ------------------------------
|
| Date:    Thu, 13 Jul 2006 19:34:01 +0100
| From:    Margaret MacDougall <[hidden email]>
| Subject: Use of the weighted Kappa statistic
|
| Dear all
|
|   Apologies for cross-posting
|
|   I plan to compare students' self-assessment scores (within the range
| A-E, or something similar) with the scores these students are allocated
| by their examiners (the 2nd examiners, say).  I would like to consider
| using a weighted Kappa statistic. However, several questions have
| emerged about how best to proceed.  I would be most grateful for
| assistance with each of them.
|
|   Could I please have some advice on how to decide between linear and
| quadratic weighting as the best choice in this case?
|
|   Further, whilst the raters fall conveniently into two perfectly
| distinguishable groups ('student' and '2nd examiner'), these raters
| change from student to student, although occasionally one 2nd examiner
| may rate more than one student. As I understand it, in its original
| form the weighted Kappa statistic was designed not only under the
| assumption of there being two distinct classes of raters but also that
| these raters would not change from subject to subject. I am therefore
| concerned that a standard weighted Kappa statistic may not be the
| correct one for me.
|
|   A related question is what formula to use for the standard error of
| the weighted Kappa.
|
|   To summarize, I have raised three main questions: the first relates
| to the type of weighting to assume, the second to the appropriateness
| of using a standard weighted Kappa statistic for my problem, and the
| third to what formula to use for the standard error of the recommended
| Kappa statistic.
|
|   I look forward to receiving some much needed help.
|
|   Thank you so much
|
|   Best wishes
|
|   Margaret
|
|
|
|
| ------------------------------
|
| Date:    Thu, 13 Jul 2006 12:54:20 -0700
| From:    Frank Berry <[hidden email]>
| Subject: White paper for percentile rank formula
|
| Hi,
|
| Is there a white paper that explains the formula for each of the
| following percentiles in SPSS? When there are missing values in a
| numeric variable ranging from 0 to 100 (all integers), does the same
| formula for percentiles apply? For example, an equal percentile can be
| obtained for the values 49, 50 (missing), and 51. Or, in other cases,
| the same value of 49 could have two or more percentiles.
|
| TIA.
| Frank
|
|   /PERCENTILES= 1 2 3 4 5 6 7 8 9 10
|  11 12 13 14 15 16 17 18 19 20
|  21 22 23 24 25 26 27 28 29 30
|  31 32 33 34 35 36 37 38 39 40
|  41 42 43 44 45 46 47 48 49 50
|  51 52 53 54 55 56 57 58 59 60
|  61 62 63 64 65 66 67 68 69 70
|  71 72 73 74 75 76 77 78 79 80
|  81 82 83 84 85 86 87 88 89 90
|  91 92 93 94 95 96 97 98 99
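|
| For reference, that subcommand runs inside FREQUENCIES.  A minimal
| sketch, with score as a hypothetical variable name and a shortened
| percentile list:
|
| * Suppress the frequency table; request selected percentiles.
| FREQUENCIES VARIABLES=score
|   /FORMAT=NOTABLE
|   /PERCENTILES=5 10 25 50 75 90 95.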
|
|
|
|
|
| ------------------------------
|
| Date:    Thu, 13 Jul 2006 17:44:45 -0500
| From:    "Reutter, Alex" <[hidden email]>
| Subject: Re: White paper for percentile rank formula
|
| See the algorithms document for the procedure(s) you're interested in:
|
| http://support.spss.com/tech/Products/SPSS/Documentation/Statistics/algorithms/index.html
|
| Use Guest/Guest as login/password.
|
| Alex
|
|
|
| ------------------------------
|
| Date:    Thu, 13 Jul 2006 16:43:18 -0400
| From:    AMANDA THOMAS <[hidden email]>
| Subject: Outreach Program Director Opening
|
| Hello,
| I have attached an Outreach Program Director job description.  This
| position is located within the Bureau of Research Training and Services
| at Kent State University's College of Education, Health and Human
| Services.  The department works primarily in evaluating grants and
| assisting clients in understanding and using their data.  Please pass
| this on to anyone you know who may be interested and/or qualified.
| Applications can be submitted online at https://jobs.kent.edu.
|
| Amanda S. Thomas
| Bureau of Research Training and Services
| Kent State University
| 507 White Hall
| Kent, Ohio  44242
| Phone:  (330) 672-0788
|
|
|
| ADMINISTRATIVE/PROFESSIONAL
|
| JOB DESCRIPTION                          Developed for Equal Opportunity
|
|
| CLASS TITLE:      Outreach Program Director
|
| KSU Class Code: AD47   EEO Code: 1C   FLSA: Exempt   Pay Grade: 08
|
|
| BASIC FUNCTION:
|
|    To plan and direct all operational, administrative, and financial
|    activities of a designated educational outreach program.
|
| CHARACTERISTIC DUTIES AND RESPONSIBILITIES:
|
|    Develop and market designated educational outreach program.
|
|    Establish objectives; develop strategies for achieving objectives.
|
|    Oversee University-designated program budget.
|
|    Develop and/or revise program policies and procedures.
|
|    Serve as liaison to various constituent groups relative to program
|    activities.
|
|    Develop linkages with external organizations, professional
|    associations, and community groups to support program development.
|
|    Oversee the promotion and/or communication on all aspects of the
|    program by strategically planning for the development and
|    distribution of various materials.
|
|    Identify and secure program funding opportunities.
|
|    Evaluate and implement changes to programs.
|
|    Serve on various University committees.
|
|    Perform related duties as assigned.
|
|    Employees assigned in a sales unit and sales management role may
|    be responsible for:
|       ·     Identifying a list of prospects and managing the
|             communication processes with the prospects among the
|             unit staff members.
|       ·     Meeting with client prospects (i.e. human resource
|             managers, operations managers, decision-makers,
|             executives, professionals and employees) to present
|             information about Kent State’s services and products.
|       ·     Pricing solutions proposed, writing and reviewing
|             proposals, contracts and training plans reflecting the
|             organizations’ needs and Kent State’s solutions.
|       ·     Maintaining electronic records utilizing client
|             relationship management software.  Reporting on the
|             client activity using the software.
|       ·     Strategically planning for contracted sales activity,
|             analyzing and reporting on prospect and client activity.
|
| REPORTS TO:
|
|    Designated Supervisor
|
| LEADERSHIP AND SUPERVISION:
|
|    Leadership of a small department, unit, or major function and/or
|    direct supervision over administrative/professional employees.
|
| MINIMUM QUALIFICATIONS:
|
|    Must Pass Criminal Background Check
|
|    Education and Experience:
|
|       Bachelor’s degree in a relevant field; Master’s degree
|       preferred; four to five years relevant experience.
|
|    Other Knowledge, Skills, and Abilities:
|
|       Knowledge of personal computer applications; budgeting;
|       strategic planning.
|
|       Knowledge of business, employee development and/or organization
|       development approaches and concepts.
|
|       Skill in written and interpersonal communication.
|
|       Demonstrated ability to maintain client relationships for
|       retaining and/or cross-selling clients.
|
|       Demonstrated ability to strategically plan for contract sales
|       services and for prospecting and client calls for a small unit.
|
|       Ability to provide leadership and direction.
|
|       Evaluation experience.  Experience writing grants and composing
|       budget proposals for grant evaluations.
|
|       Advanced understanding of SPSS software.
|
| PHYSICAL REQUIREMENTS:
|
|       Light work:  Exerting up to 20 pounds of force occasionally,
|       and/or up to 10 pounds of force frequently, and/or negligible
|       amount of force constantly to move objects.  Typically requires
|       sitting, walking, standing, bending, keying, talking, hearing,
|       seeing and repetitive motions.
|
|       Incumbent may be required to travel from building to building
|       frequently and campus to campus occasionally.  For those in a
|       sales management role, incumbent will be required to travel to
|       client sites frequently.
|
| The intent of this description is to illustrate the types of duties and
| responsibilities that will be required of positions given this title
| and should not be interpreted to describe all the specific duties and
| responsibilities that may be required in any particular position.
| Directly related experience/education beyond the minimum stated may be
| substituted where appropriate at the discretion of the Appointing
| Authority.  Kent State University reserves the right to revise or
| change job duties, job hours, and responsibilities.
|
|    FILE: AD47
|    SOURCE:  Job Description
|    ANALYST: ds
|    DEPARTMENTAL AUTHORIZATION:       A.Lane 8/7/05
|
| ------------------------------
|
| Date:    Thu, 13 Jul 2006 19:05:13 -0700
| From:    jamie canataro <[hidden email]>
| Subject: Follow-up MANCOVA interaction?
|
| I have a question regarding a MANCOVA interaction and was wondering if
| I could get some assistance.
|
|   I am performing a 2 x 3 MANCOVA and have obtained a significant
| interaction.  I am looking to follow up the significant interaction to
| determine which of the specific means are statistically different.
| Would this require me to create a syntax program? And if so, how can I
| acquire/create/borrow the syntax?
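|
| One common approach is GLM with EMMEANS, requesting pairwise
| comparisons of the estimated marginal means within the interaction.  A
| minimal sketch, in which dv1 and dv2 (dependents), a (2 levels), b (3
| levels), and cov (the covariate) are all hypothetical names:
|
| * Compare each factor within the a*b cells, Bonferroni-adjusted.
| GLM dv1 dv2 BY a b WITH cov
|   /EMMEANS=TABLES(a*b) COMPARE(a) ADJ(BONFERRONI)
|   /EMMEANS=TABLES(a*b) COMPARE(b) ADJ(BONFERRONI).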
|
|   Thank you in advance.
|
|   Nicole Canning
|
|
|
| ------------------------------
|
| End of SPSSX-L Digest - 12 Jul 2006 to 13 Jul 2006 (#2006-192)
| **************************************************************
|