Upper limit on number of variables in SPSS 14?


Upper limit on number of variables in SPSS 14?

James Danowski
I would like to build a data file with approximately 100,000 variables for
500,000 cases in SPSS 14.

I am using a windows machine with an AMD 64 X-2 processor with 4 gig of
RAM (3.2 gig useable).

Before I get started on a lot of work to prepare the data, I am hoping to
find out whether a data set of this size is feasible.  I cannot find the
answer elsewhere.

Your help is appreciated.

Jim

Re: Upper limit on number of variables in SPSS 14?

Richard Ristow
At 11:00 AM 1/5/2007, James Danowski wrote:

>I would like to build a data file with approximately 100,000 variables for
>500,000 cases in SPSS 14. I am using a windows machine with an AMD 64
>X-2 processor with 4 gig of RAM (3.2 gig useable).
>
>Before I get started on a lot of work to prepare the data, I am hoping to
>find out whether a data set of this size is feasible.

This is a FAQ; I believe it was last posted Tue, 23 May 2006 (11:55:43
-0400). The text below is by Jon Peck of SPSS, Inc., and applies to all
recent versions of SPSS.

Additional points:

. For most operations, increasing the number of cases will increase
the running time about in proportion.

. Increasing the number of variables will generally increase the
running time about in proportion, even if you're not using them all,
because the running time is dominated by the time to read the file from
disk, i.e. by the total file size.

. After some point that is hard to estimate (though larger if the
machine has more RAM), increasing the number of variables will increase
the running time out of all proportion, because it becomes hard to hold
even one case in RAM without paging.

. I emphasize Jon's point that "modern database practice would be to
break up your variables into cohesive subsets", i.e. to restructure
with many fewer variables, and more cases instead. A typical example is
changing from one record per entity with data for many years to one
record per entity per year, as in the sketch below. But you know your
problem, and can judge what's best done in your instance.
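
As a minimal sketch of that restructuring (the variable names here are
hypothetical: a wide file holding an id plus score1990 through
score1995), the VARSTOCASES command turns one record per entity into
one record per entity per year:

* Hypothetical names; adapt to your own data.
varstocases
  /make score from score1990 score1991 score1992 score1993 score1994 score1995
  /index = year(6)
  /keep = id.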

At 10:25 AM 6/5/2003, Peck, Jon [of SPSS, Inc.] wrote:

>There are several points to make regarding very wide files and huge
>datasets.
>
>First, the theoretical SPSS limits are
>
>Number of variables: (2**31) - 1
>Number of cases: (2**31) - 1
>
>In calculating these limits, count one for each 8 bytes or part
>thereof of a string variable.  An A10 variable counts as two
>variables, for example.
>
>Approaching the theoretical limit on the number of variables, however,
>is a very bad idea in practice for several reasons.
>
>1. These are the theoretical limits in that you absolutely cannot go
>beyond them.  But there are other environmentally imposed limits that
>you will surely hit first.  For example, Windows applications are
>absolutely limited to 2GB of addressable memory, and 1GB is a more
>practical limit.  Each dictionary entry requires about 100 bytes of
>memory, because in addition to the variable name, other variable
>properties also have to be stored.  (On non-Windows platforms, SPSS
>Server could, of course, face different environmental
>limits.)  Numerical variable values take 8 bytes as they are held as
>double precision floating point values.
>
>2. The overhead of reading and writing extremely wide cases when you
>are doubtless not using more than a small fraction of them will limit
>performance.  And you don't want to be paging the variable
>dictionary.  If you have lots of RAM, you can probably reach between
>32,000 and 100,000 variables before memory paging degrades performance
>seriously.
>
>3. Dialog boxes cannot display very large variable lists.  You can use
>variable sets to restrict the lists to the variables you are really
>using, but lists with thousands of variables will always be awkward.
>
>4. Memory usage is not just about the dictionary.  The operating
>system will almost always be paging code and data between memory and
>disk.  (You can look at paging rates via the Windows Task
>Manager).  The more you page, the slower things get, but the variable
>dictionary is only one among many objects that the operating system is
>juggling.  However, there is another effect.  On NT and later, Windows
>automatically caches files (code or data) in memory so that it can
>retrieve them quickly.  This cache occupies memory that is otherwise
>surplus, so if any application needs it, portions of the cache are
>discarded to make room.  You can see this effect quite clearly if you
>start SPSS or any other large application; then shut it down and start
>it again.  It will load much more quickly the second time, because it
>is retrieving the code modules needed at startup from memory rather
>than disk.  The Windows cache, unfortunately, will not help data
>access very much unless most of the dataset stays in memory, because
>the cache will generally hold the most recently accessed data.  If you
>are reading cases sequentially, the one you just finished with is the
>LAST one you will want again.
>
>5. These points apply mainly to the number of variables.  The number
>of cases is not subject to the same problems, because the cases are
>not generally all mapped into memory by SPSS (although Windows may
>cache them).  However, there are some procedures that because of their
>computational requirements do have to hold the entire dataset in
>memory, so those would not scale well up to immense numbers of cases.
>
>The point of having an essentially unlimited number of variables is
>not that you really need to go to that limit.  Rather it is to avoid
>hitting a limit incrementally.  It's like infinity.  You never want to
>go there, but any value smaller is an arbitrary limit, which SPSS
>tries to avoid.  It is better not to have a hard stopping rule.
>
>Modern database practice would be to break up your variables into
>cohesive subsets and combine these with join (MATCH FILES in SPSS)
>operations when you need variables from more than one subset.  SPSS is
>not a relational database, but working this way will be much more
>efficient and practical with very large numbers of variables.
>
>
>Regards,
>Jon Peck
>SPSS R & D
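
Applying Jon's figures to the present question: 100,000 variables at
roughly 100 bytes of dictionary each is about 10 MB of dictionary
alone, and every case occupies 100,000 * 8 bytes = 800 KB, so even a
single case is bulky. As a minimal sketch of the MATCH FILES join Jon
describes (the file names and the key variable id are hypothetical),
you would hold the variables in subset files and combine them on
demand:

* Sketch only; file names and the key variable are placeholders.
* Both files must first be sorted by the key: sort cases by id.
match files
  /file = 'subset1.sav'
  /file = 'subset2.sav'
  /by id.
execute.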

Re: Upper limit on number of variables in SPSS 14?

Art Kendall
The main limitations would be due to your system.

There is example syntax below the sig block.
Save all your current work, then open a new instance of SPSS.
Cut and paste the syntax, then edit it so it produces a file of the
size you are thinking about (change 10 to 100000 in two lines, and
500 to 500000).
Save the file under a few different filenames. This should give you a
few 400 GB files (100,000 variables * 500,000 cases * 8 bytes = 400 GB).
Do you have sufficient storage for them?

The wall time to create the file and to save it should give you a
handle on your processing capacity.



However, a file of this size would be unusual in many disciplines.
Are there logical subsets of the variables or of the cases so that you
do not need to pass this much data on every run?


Hope this helps.

Art
[hidden email]
Social Research Consultants
University Park, MD  USA
(Inside the Washington, DC beltway.)

new file.
input program.
* Define 10 numeric variables x1 to x10; change this 10 to 100000.
vector x (10,f3).
* The outer loop generates the cases; change 500 to 500000.
loop #i = 1 to 500.
* The inner loop fills each variable with a rounded normal(50,10) draw;
* change this 10 to 100000 as well.
loop #p = 1 to 10.
compute x(#p) = rnd(rv.normal(50,10)).
end loop.
end case.
end loop.
end file.
end input program.
execute.
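
To save the generated file, as the instructions above suggest, add a
save command after the execute (the path is a placeholder; substitute
your own):

* Placeholder path; writing a 400 GB file will take a long time.
save outfile = 'c:\temp\test1.sav'.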


James Danowski wrote:

>I would like to build a data file with approximately 100,000 variables for
>500,000 cases in SPSS 14.
>
>I am using a windows machine with an AMD 64 X-2 processor with 4 gig of
>RAM (3.2 gig useable).
>
>Before I get started on a lot of work to prepare the data, I am hoping to
>find out whether a data set of this size is feasible.  I cannot find the
>answer elsewhere.
>
>Your help is appreciated.
>
>Jim