Finding and marking related cases

31 messages
Re: Finding and marking related cases

David Marso
Administrator
eg:

DATA LIST FREE / ID session x.
BEGIN DATA
1 1 1  1 1 1  1 1 2  1 2 1  1 2 2  1 2 3
2 1 1  2 1 1  2 1 1  2 1 2  2 2 1  2 2 2  2 3 4  2 3 5  2 3 5
END DATA.

AGGREGATE OUTFILE=* /BREAK=ID session x /Count=N.
AGGREGATE OUTFILE=* MODE=ADDVARIABLES /BREAK=ID session /N_X=N.
CASESTOVARS /ID=ID session /FIXED=N_X /DROP=Count.
LIST.

   ID  session     N_X      x.1      x.2      x.3

    1.00     1.00       2     1.00     2.00      .
    1.00     2.00       3     1.00     2.00     3.00
    2.00     1.00       2     1.00     2.00      .
    2.00     2.00       2     1.00     2.00      .
    2.00     3.00       2     4.00     5.00      .

Number of cases read:  5    Number of cases listed:  5
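
For readers outside SPSS, what the AGGREGATE + CASESTOVARS pair above computes can be sketched in Python (a sketch only; the data and names mirror David's example):

```python
from collections import defaultdict

# Sketch (not SPSS) of what the AGGREGATE + CASESTOVARS pair computes:
# per (ID, session) break group, the distinct x values and their count.
rows = [
    (1, 1, 1), (1, 1, 1), (1, 1, 2), (1, 2, 1), (1, 2, 2), (1, 2, 3),
    (2, 1, 1), (2, 1, 1), (2, 1, 1), (2, 1, 2), (2, 2, 1), (2, 2, 2),
    (2, 3, 4), (2, 3, 5), (2, 3, 5),
]

distinct = defaultdict(set)
for pid, session, x in rows:
    distinct[(pid, session)].add(x)

# N_X = number of distinct x values; the sorted values play the role of x.1, x.2, ...
wide = {key: (len(vals), sorted(vals)) for key, vals in distinct.items()}
print(wide[(1, 2)])  # (3, [1, 2, 3]) -- matches the N_X and x.1..x.3 in the LIST output
```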
David Marso wrote
Please for everyone's convenience, post an illustrative data-snap and the desired outcome!
I'm not going to bother to attempt to simulate something to test against! Your turn!
Off the cuff? CASESTOVARS?
---
nessie wrote
I have worked my way through the basic aggregate functions and I'm starting
to get the hang of it.
I have just one basic question - is it possible to count the number of
different values a variable contains across the cases you aggregate, and
present all the different values in the aggregated suffix variables?

E.g. I have a patient who has been to several different hospital
departments during one aggregated stay. For each case/episode the
department variable returns a value. Let's say the patient has 10 different
partial stays (cases) from 7 different departments which I would like to aggregate.
How can I count the number of different departments the patient has been to
and present them in the aggregate variables (which I would like to put at the
end of each case/part of stay)?

Best regards
Lars
<SNIP>
Please reply to the list and not to my personal email.
Those desiring my consulting or training services please feel free to email me.
---
"Nolite dare sanctum canibus neque mittatis margaritas vestras ante porcos ne forte conculcent eas pedibus suis."
Cum es damnatorum possederunt porcos iens ut salire off sanguinum cliff in abyssum?"

Re: Finding and marking related cases

nessie
In reply to this post by Maguin, Eugene
Sorry for my very late reply and for not providing example data when I posted my question. I feel bad!
Thanks to both of you for two different but excellent solutions. Especially thanks to you, David, for taking the time to make and provide an illustrative example.

I have a new and different problem:
I have a large number of hospital admissions. Some of them are special and drain a lot of resources, and I want to mark all the other admissions within the time frame of these admissions to check whether they are influenced in a negative way. I modified your data, David, to give an example. The special cases are marked with the value 1 for variable x.

I want to make a new variable that marks all the cases that overlap with the in/out times of these special cases. Here ID 3 and ID 8 are special cases, and I want to mark cases 2, 4 and 9.

Best regards
Lars

DATA LIST FREE / ID in out x.
BEGIN DATA
1 1 2 0 2 3 3 0 3 3 4 1 4 4 5 0 5 6 8 0 7 7 12 0 8 13 13 1 9 13 14 0
END DATA.
 
This gives these data:

ID    in     out    x
1.00  1.00   2.00   .00
2.00  3.00   3.00   .00
3.00  3.00   4.00  1.00
4.00  4.00   5.00   .00
5.00  6.00   8.00   .00
7.00  7.00  12.00   .00
8.00 13.00  13.00  1.00
9.00 13.00  14.00   .00
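
The marking Lars asks for can be sketched outside SPSS like this (Python, my own variable names; the data is the example above):

```python
# Sketch (not SPSS) of the overlap-marking problem: mark every ordinary
# admission whose [in, out] interval overlaps a "special" admission (x == 1).
cases = [
    # (ID, in, out, x)
    (1, 1, 2, 0), (2, 3, 3, 0), (3, 3, 4, 1), (4, 4, 5, 0),
    (5, 6, 8, 0), (7, 7, 12, 0), (8, 13, 13, 1), (9, 13, 14, 0),
]

specials = [(cin, cout) for _, cin, cout, x in cases if x == 1]

def overlaps_special(cin, cout):
    # Closed intervals [a1, a2] and [b1, b2] overlap iff a1 <= b2 and b1 <= a2.
    return any(cin <= s_out and s_in <= cout for s_in, s_out in specials)

marked = [cid for cid, cin, cout, x in cases
          if x == 0 and overlaps_special(cin, cout)]
print(marked)  # [2, 4, 9] -- the cases Lars wants marked
```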

 
 
 

2014-11-24 16:08 GMT+01:00 Maguin, Eugene [via SPSSX Discussion] <[hidden email]>:

Look at the aggregate function CIN. It returns a count of values between two endpoints.

Gene Maguin
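
As a quick illustration of Gene's point — a per-group count of how many values fall between two endpoints — the same idea in Python (the data and endpoints below are invented):

```python
from collections import defaultdict

# Illustration (not SPSS) of counting, per break group, how many values
# fall between two endpoints.
records = [("p1", 3), ("p1", 7), ("p1", 12), ("p2", 5), ("p2", 20)]
lo, hi = 4, 15  # endpoints of the range of interest

counts = defaultdict(int)
for patient, value in records:
    if lo <= value <= hi:
        counts[patient] += 1

print(dict(counts))  # {'p1': 2, 'p2': 1}
```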

 


 


2014-11-13 15:17 GMT+01:00 Lars E. Næss-Pleym <[hidden email]>:

 

Here is an example syntax. dia=diagnosis, c_level=care_level (IP=In-Patient, OP=Out-Patient), case=unique case_ID, id=person_ID, in=date_in, out=date_out

 

 DATA LIST LIST
/ dia (A2) c_level (A2) case id in out .
BEGIN DATA.
"A1" "IP" 1 11 13 17
"B1" "OP" 2 12 15 15
"A1" "OP" 3 11 14 14
"A2" "IP" 4 13 15 22
"B2" "IP" 5 11 17 22
"C1" "IP" 6 12 17 24
"B3" "IP" 7 11 27 29
"C4" "IP" 8 13 22 29
"D1" "IP" 9 12 24 26
"D2" "IP" 10 12 28 30
END DATA.

LIST.

SORT CASES BY id in out.
IF ($CASENUM EQ 1) episode=1.
DO REPEAT #=1 TO 10.
+  IF (id=LAG(id,#)) AND RANGE(in,lag(in,#),lag(out,#)) episode=lag(episode).
END REPEAT.
IF MISSING(episode) episode=LAG(episode) + 1.

LIST.

SORT CASES BY episode(A) in(A) out(A).
MATCH FILES
  /FILE=*
  /BY episode
  /FIRST=FirstCase
  /LAST=PrimaryLast.
DO IF (FirstCase).
COMPUTE  CaseNr=1-PrimaryLast.
ELSE.
COMPUTE  CaseNr=CaseNr+1.
END IF.
LEAVE  CaseNr.
FORMATS  CaseNr (f7).
MATCH FILES
  /FILE=*
  /DROP=PrimaryLast.
VALUE LABELS  FirstCase 0 'Duplicate Case' 1 'Primary Case' 2 'Unique case'.
VARIABLE LEVEL  FirstCase (ORDINAL) /CaseNr (SCALE).
EXECUTE.

IF (CaseNr=0) FirstCase=2.
EXECUTE.

IF (FirstCase=2) CaseNr=1.
EXECUTE.

From this I want to keep all the unique cases but also make a new aggregated case for each episode, containing "in" from the first case and "out" from the last, the "id" and "episode" variables, the last "dia" variable, and the first "c_level" variable.

 

I also want to know, e.g., how many unique diagnoses occur within the same episode, and whether "c_level" has been the same for all cases within one episode.
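
The per-episode summary described here (first "in", last "out", number of distinct diagnoses, whether c_level is constant) can be sketched in Python; the episode numbers and rows below are assumed to have been computed and sorted already:

```python
from itertools import groupby

# Sketch (not SPSS) of collapsing each episode to one summary row:
# first in, last out, number of distinct diagnoses, constant c_level or not.
# Rows: (episode, dia, c_level, in, out), already sorted by episode then in.
rows = [
    (1, "A1", "IP", 13, 17), (1, "A1", "OP", 14, 14), (1, "B2", "IP", 17, 22),
    (2, "B3", "IP", 27, 29),
]

summary = {}
for episode, grp in groupby(rows, key=lambda r: r[0]):
    grp = list(grp)
    summary[episode] = {
        "in": grp[0][3],                      # from the first case
        "out": grp[-1][4],                    # from the last case
        "n_dia": len({r[1] for r in grp}),    # distinct diagnoses
        "same_level": len({r[2] for r in grp}) == 1,
    }
print(summary)
```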

 

I will look up AGGREGATE, RENAME VARIABLES, CASESTOVARS and MATCH FILES, thanks for the tip.

 

Best regards

Lars
 

2014-11-13 14:18 GMT+01:00 David Marso [via SPSSX Discussion] <[hidden email]>:

 

Lars,
  Please post a more illustrative data set with a before/after of how you want the final result to appear.
Meanwhile look up AGGREGATE, RENAME VARIABLES, CASESTOVARS and MATCH FILES commands.
HTH, David
==

nessie wrote

Thanks David
That worked!

From the original question in this thread, I have made an episode variable
linking concurrent admissions.
I have then sorted this in chronological order and given it a case number
within the episode.

For "aggregated" episodes I now want to make a new case with aggregated
data for all the same variables but pick some of the variables from the
first and some from the last case. I can sort my episode variables
according to casenumber and then try to aggregate with all the variables as
break variables from either the first or the last case.
- Is this the best way to do it?
- What if I want to gather some data from one of the episodes in the middle?
Is there an easy way to do this?

Best regards
Lars N.


2014-11-13 10:54 GMT+01:00 David Marso [via SPSSX Discussion] <
[hidden email]>:


> See AGGREGATE in the FM.  There are FIRST and LAST functions.
> --
> AGGREGATE OUTFILE * MODE=ADDVARIABLES
> /BREAK.../f=FIRST(?)/l=LAST(?)..........
>
>  nessie wrote
> I'm picking up this old thread.
> You helped me a lot in finding related cases and marking them with
> chronological case number. Now I have a new problem!
>
> I want to make a new aggregated case containing some info from the first
> and some from the last related case.
> E.g. "In" from the first related and "Out" from the last related case. Can
> you help me do this?
>
> Best regards
> Lars N.
> 2014-05-23 12:39 GMT+02:00 David Marso [via SPSSX Discussion] <[hidden email]>:
>


> > Good catch Rich!
> > Here is a version using #scratch variables and a slightly different
> > approach.
> > DATA LIST LIST / case id in out.
> > BEGIN DATA.
> > 1, 11, 13, 17
> > 2, 12, 14, 15
> > 3, 11, 14, 14
> > 4, 13, 15, 22
> > 5, 11, 17, 22
> > 6, 12, 17, 24
> > 7, 11, 27, 29
> > END DATA.
> >
> > SORT CASES BY id in out.
> > DO IF ($CASENUM EQ 1 OR id NE LAG(id) ).
> > +  COMPUTE  episode=SUM(1,LAG(episode)).
> > +  COMPUTE  #hiout= out.
> > ELSE.
> > +  COMPUTE episode=sum(lag(episode),NOT(range(in,lag(in),#hiout ))).
> > END IF.
> > COMPUTE #hiout= max(out, #hiout ).
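
David's single-pass #hiout logic, paraphrased in Python (my sketch, not his code): a row joins the current episode as long as its "in" does not exceed the running maximum "out" seen so far for that id.

```python
# Paraphrase (not SPSS) of the running-max episode logic above.
rows = [
    # (case, id, in, out) -- already sorted by id, in, out
    (1, 11, 13, 17), (3, 11, 14, 14), (5, 11, 17, 22), (7, 11, 27, 29),
    (2, 12, 14, 15), (6, 12, 17, 24),
    (4, 13, 15, 22),
]

episodes = {}
episode = 0
prev_id, hiout = None, None
for case, pid, cin, cout in rows:
    if pid != prev_id or cin > hiout:
        episode += 1              # new person or a gap: start a new episode
        hiout = cout
    else:
        hiout = max(hiout, cout)  # extend the running maximum "out"
    episodes[case] = episode
    prev_id = pid
print(episodes)  # case -> episode, matching the LIST output shown below
```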
> >
> >  Rich Ulrich wrote
> > If I see the problem right, logically, you only need to look at one
> > previous line.
> >
> > If it is the same episode, then you want to extend the testable OUT date
> > whenever the new line has a higher one.  Since the file is sorted by
> > IN, the previous IN is always okay.
> >      This looks like it should work -
> >
> > SORT CASES BY id in out.
> >
> > DO IF ($CASENUM EQ 1).
> > +COMPUTE  episode=1.
> >
> > +COMPUTE  hiout= out.
> > END IF.
> >
> > DO IF (id=LAG(id)) AND RANGE(in,lag(in),lag(hiout) ) .
> > +COMPUTE episode=lag(episode).
> >
> > +COMPUTE hiout= max(out, lag(hiout) ).
> > END IF.
> >
> > DO IF MISSING(episode) .
> > +COMPUTE  episode=LAG(episode) + 1.
> >
> > +COMPUTE  hiout= out.
> > END IF.
> >
> > * That shows the logic explicitly.  Since temporary vars (#)  keep their
> > * values until changed, across cases, the code should work the same
> > * if #hiout replaced both hiout and  lag(hiout). Conceivably, that might
> > * run faster than using lag(hiout).
> >
> > --
> > Rich Ulrich
> >
> >
> >
> > Date: Thu, 22 May 2014 13:43:25 -0700
> > From: [hidden email]
> > Subject: Re: Finding and marking related cases
> > To: [hidden email]
>
> >
> > Thanks a lot.
> > Is there any way to find the number of combined cases I shall put in the
> > "lookback", or is there a syntax that can regulate this itself? I have a
> > couple of hundred thousand single cases in total and no idea how many
> > combined cases I will end up with. Is it possible to sort the constructed
> > combined case numbers based on in and not id first?
> > Best regards, Lars N.
> > On 19 May 2014 at 18:22, David Marso [via SPSSX Discussion] <[hidden email]> wrote:
> >
> >         Something like the following?
> >
> > Note you may need to change the number of cases in the "lookback" (in this case 4).
> >
> > --
> >
> > DATA LIST LIST / case id in out.
> > BEGIN DATA.
> > 1, 11, 13, 17
> > 2, 12, 14, 15
> > 3, 11, 14, 14
> > 4, 13, 15, 22
> > 5, 11, 17, 22
> > 6, 12, 17, 24
> > 7, 11, 27, 29
> > END DATA.
> >
> > SORT CASES BY id in out.
> > IF ($CASENUM EQ 1) episode=1.
> > DO REPEAT #=1 TO 4.
> > +  IF (id=LAG(id,#)) AND RANGE(in,lag(in,#),lag(out,#)) episode=lag(episode).
> > END REPEAT.
> > IF MISSING(episode) episode=LAG(episode) + 1.
> >
> > LIST.
> >
> >     case       id       in      out  episode
> >     1.00    11.00    13.00    17.00     1.00
> >     3.00    11.00    14.00    14.00     1.00
> >     5.00    11.00    17.00    22.00     1.00
> >     7.00    11.00    27.00    29.00     2.00
> >     2.00    12.00    14.00    15.00     3.00
> >     6.00    12.00    17.00    24.00     4.00
> >     4.00    13.00    15.00    22.00     5.00
> >
> > Number of cases read:  7    Number of cases listed:  7
===================== To manage your subscription to SPSSX-L, send a message to [hidden email] (not to SPSSX-L), with no body text except the command. To leave the list, send the command SIGNOFF SPSSX-L For a list of commands to manage subscriptions, send the command INFO REFCARD


Re: Finding and marking related cases

David Marso
Administrator
Look up SHIFT VALUES (alternatively use the CREATE command).
There is also a RANGE function in COMPUTE.
--
nessie wrote
Sorry for my very late reply and for not providing example data when I
posted my question. I fell bad!
Thank's to both of you for two different but excellent solutions.
Espacially thank's to you David for taking time to make and provide am
illustrative example.

*I have a new and different problem:*
I have a big number of hospital admissions. Some of them are special and
draines a lot of resources and I want to mark all the other admissions
within the time-frame of these admissions to check if they are influenced
in a negative way. I modified your data David, to give an example.
The special cases are marked with the value 1 for variable x.

I want to make a new variable that markes all the cases that overlaps with
the in/out time of these special cases. Here ID 3 and ID 8 are special
cases, and I want to mark case; 2, 4 and 9.

Best regards
Lars

DATA LIST FREE / ID in out x.
BEGIN DATA
1 1 2 0 2 3 3 0 3 3 4 1 4 4 5 0 5 6 8 0 7 7 12 0 8 13 13 1 9 13 14 0
END DATA.

This gives these data:

ID in out x

1,00 1,00 2,00 ,00

2,00 3,00 3,00 ,00

3,00 3,00 4,00 1,00

4,00 4,00 5,00 ,00

5,00 6,00 8,00 ,00

7,00 7,00 12,00 ,00

8,00 13,00 13,00 1,00

9,00 13,00 14,00 ,00





2014-11-24 16:08 GMT+01:00 Maguin, Eugene [via SPSSX Discussion] <
[hidden email]>:

>  Look at the aggregate function CIN. It returns a count of values between
> two endpoints.
>
> Gene Maguin
>
>
>
> *From:* SPSSX(r) Discussion [mailto:[hidden email]
> <http:///user/SendEmail.jtp?type=node&node=5728006&i=0>] *On Behalf Of *
> nessie
> *Sent:* Friday, November 21, 2014 7:30 AM
> *To:* [hidden email]
> <http:///user/SendEmail.jtp?type=node&node=5728006&i=1>
> *Subject:* Re: Finding and marking related cases
>
>
>
> I have worked my way through the basic aggregate functions and I'm
> starting to get the hang of it.
>
> I have just one basic question - is it possible to count the number of
> different values a variable contains across the cases you aggregate and
> present all the different values in the aggregated sufix variables?
>
>
>
> E.g. I have a patient who has been to several different hospital
> departments during one aggregated stay. For each case/episode the
> department variable returns a value. Lets say the has 10 different partial
> stays (cases) from 7 different departments wich I would like to aggregate.
> How can I count the number of different departments the patient has been to
> and present them in the aggregat variables (wich I would like to put at the
> end of each case/part of stay).
>
>
>
> Best regards
>
> Lars
>
>
>
> 2014-11-13 15:20 GMT+01:00 Lars E. Næss-Pleym <[hidden email]
> <http:///user/SendEmail.jtp?type=node&node=5727996&i=0>>:
>
>  Sorry about the missing spaces, it probably doesn't matter, but I try
> again to make it more easy to read.
>
>
>
> DATA LIST LIST
> / dia (A2) c_level (A2) case id in out .
> BEGIN DATA.
> "A1" "IP" 1 11 13 17
> "B1" "OP" 2 12 15 15
> "A1" "OP" 3 11 14 14
> "A2" "IP" 4 13 15 22
> "B2" "IP" 5 11 17 22
> "C1" "IP" 6 12 17 24
> "B3" "IP" 7 11 27 29
> "C4" "IP" 8 13 22 29
> "D1" "IP" 9 12 24 26
> "D2" "IP" 10 12 28 30
> END DATA.
>
> LIST.
>
>
>
> SORT CASES BY id in out.
> IF ($CASENUM EQ 1) episode=1.
> DO REPEAT #=1 TO 10.
> +  IF (id=LAG(id,#)) AND RANGE(in,lag(in,#),lag(out,#))
> episode=lag(episode).
> END REPEAT.
> IF MISSING(episode) episode=LAG(episode) + 1.
>
> LIST.
>
>
>
> SORT CASES BY episode(A) in(A) out(A).
> MATCH FILES
>   /FILE=*
>   /BY episode
>   /FIRST=FirstCase
>   /LAST=PrimaryLast.
> DO IF (FirstCase).
> COMPUTE  CaseNr=1-PrimaryLast.
> ELSE.
> COMPUTE  CaseNr=CaseNr+1.
> END IF.
> LEAVE  CaseNr.
> FORMATS  CaseNr (f7).
> MATCH FILES
>   /FILE=*
>   /DROP=PrimaryLast.
> VALUE LABELS  FirstCase 0 'Duplicate Case' 1 'Primary Case' 2'Unique case'.
> VARIABLE LEVEL  FirstCase (ORDINAL) /CaseNr (SCALE).
> EXECUTE.
>
>
>
> IF (CaseNr=0) FirstCase=2.
> EXECUTE.
>
> IF (FirstCase=2) CaseNr=1.
> EXECUTE.
>
> 2014-11-13 15:17 GMT+01:00 Lars E. Næss-Pleym <[hidden email]
> <http:///user/SendEmail.jtp?type=node&node=5727996&i=1>>:
>
>
>
>  Here is an example syntax. dia=diagnosis, c_level=care_level
> (IP=In-Patient, OP=Out-Patient), case=unique case_ID, id=person_ID,
> in=date_in, out=date_out
>
>
>
>  DATA LIST LIST
> / dia (A2) c_level (A2) case id in out .
> BEGIN DATA.
> "A1" "IP" 1 11 13 17
> "B1" "OP" 2 12 15 15
> "A1" "OP" 3 11 14 14
> "A2" "IP" 4 13 15 22
> "B2" "IP" 5 11 17 22
> "C1" "IP" 6 12 17 24
> "B3" "IP" 7 11 27 29
> "C4" "IP" 8 13 22 29
> "D1" "IP" 9 12 24 26
> "D2" "IP" 10 12 28 30
> END DATA.
>
> LIST.
>
> SORT CASES BY id in out.
> IF ($CASENUM EQ 1) episode=1.
> DO REPEAT #=1 TO 10.
> +  IF (id=LAG(id,#)) AND RANGE(in,lag(in,#),lag(out,#))
> episode=lag(episode).
> END REPEAT.
> IF MISSING(episode) episode=LAG(episode) + 1.
>
> LIST.
>
> SORT CASES BY episode(A) in(A) out(A).
> MATCH FILES
>   /FILE=*
>   /BY episode
>   /FIRST=FirstCase
>   /LAST=PrimaryLast.
> DO IF (FirstCase).
> COMPUTE  CaseNr=1-PrimaryLast.
> ELSE.
> COMPUTE  CaseNr=CaseNr+1.
> END IF.
> LEAVE  CaseNr.
> FORMATS  CaseNr (f7).
> MATCH FILES
>   /FILE=*
>   /DROP=PrimaryLast.
> VALUE LABELS  FirstCase 0 'Duplicate Case' 1 'Primary Case' 2'Unique case'.
> VARIABLE LEVEL  FirstCase (ORDINAL) /CaseNr (SCALE).
> EXECUTE.
>
> IF (CaseNr=0) FirstCase=2.
> EXECUTE.
>
> IF (FirstCase=2) CaseNr=1.
> EXECUTE.
>
> From this I want to keep all the uniqe cases but also make a new
> aggregatet case for all episodes containing "in" from the first case and
> "out" from the last, the "id", and "episode" variables, the
> last "Dia" variable, and the first "c_level" variable.
>
>
>
> I also want to know e.g. how many different unique diagnosis within the
> same episode and if the "c_level" has been the same for all cases
> within one episode.
>
>
>
> I will look up look up AGGREGATE, RENAME VARIABLES, CASESTOVARS and MATCH
> FILES, thanks for the tip.
>
>
>
> Best regards
>
> Lars
>
>
> 2014-11-13 14:18 GMT+01:00 David Marso [via SPSSX Discussion] <[hidden
> email] <http:///user/SendEmail.jtp?type=node&node=5727996&i=2>>:
>
>
>
> Lars,
>   Please post a more illustrative data set with a before/after of how you
> want the final result to appear.
> Meanwhile look up AGGREGATE, RENAME VARIABLES, CASESTOVARS and MATCH FILES
> commands.
> HTH, David
> ==
>
>  *nessie wrote*
>
> Thanks David
> That worked!
>
> From the original question in this tread, I have made an episode variable
> linking concurrent admissions.
> I have then sorted this in cronological order av given it a case number
> within the episode.
>
> For "aggregated" episodes I now want to make a new case with aggregated
> data for all the same variables but pick some of the variables from the
> first and some from the last case. I can sort my episode variables
> according to casenumber and then try to aggregate with all the variables
> as
> break variables from either the first or the last case.
> - Is this the best way to do it?
> - What if I want to gather som data from on of the episodes in the middle?
> Is there an easy way to do this?
>
> Best regards
> Lars N.
>
>
> 2014-11-13 10:54 GMT+01:00 David Marso [via SPSSX Discussion] <
> [hidden email] <http://user/SendEmail.jtp?type=node&node=5727918&i=0>>:
>
>
> > See AGGREGATE in the FM.  There are FIRST and LAST functions.
> > --
> > AGGREGATE OUTFILE * MODE=ADDVARIABLES
> > /BREAK.../f=FIRST(?)/l=LAST(?)..........
> >
> >  nessie wrote
> > I'm picking up this old tread.
> > You helped me a lot in finding related cases and marking them with
> > chronological case number.Now I have a new problem!
> >
> > I want to make a new aggregated case containing some info from the first
> > and some from the last related case.
> > E.g. "In" from the first related and "Out" from the last related case.
> Can
> > you help me do this?
> >
> > Best regards
> > Lars N.
> > 2014-05-23 12:39 GMT+02:00 David Marso [via SPSSX Discussion] <
>
> > [hidden email] <http://user/SendEmail.jtp?type=node&node=5727915&i=0>>:
>
> >
>
>
> > > Good catch Rich!
> > > Here is a version using #scratch variables and a slightly different
> > > approach.
> > > DATA LIST LIST / case id in out.
> > > BEGIN DATA.
> > > 1, 11, 13, 17
> > > 2, 12, 14, 15
> > > 3, 11, 14, 14
> > > 4, 13, 15, 22
> > > 5, 11, 17, 22
> > > 6, 12, 17, 24
> > > 7, 11, 27, 29
> > > END DATA.
> > >
> > > SORT CASES BY id in out.
> > > DO IF ($CASENUM EQ 1 OR id NE LAG(id) ).
> > > +  COMPUTE  episode=SUM(1,LAG(episode)).
> > > +  COMPUTE  #hiout= out.
> > > ELSE.
> > > +  COMPUTE episode=sum(lag(episode),NOT(range(in,lag(in),#hiout ))).
> > > END IF.
> > > COMPUTE #hiout= max(out, #hiout ).
> > >
> > >  Rich Ulrich wrote
> > > If I see the problem right, logically, you only need to look at one
> > > previous line.
> > >
> > > If it is the same episode, then you want to extend the testable OUT
> date
> > > whenever the new line has a higher one.  Since the file is sorted by
> > > IN, the previous IN is always okay.
> > >      This looks like it should work -
> > >
> > > SORT CASES BY id in out.
> > >
> > > DO IF ($CASENUM EQ 1)
> > > +COMPUTE  episode=1.
> > >
> > > +COMPUTE  hiout= out.
> > > END IF.
> > >
> > > DO IF (id=LAG(id)) AND RANGE(in,lag(in),lag(hiout) ) .
> > > +COMPUTE episode=lag(episode).
> > >
> > > +COMPUTE hiout= max(out, lag(hiout) ).
> > > END IF.
> > >
> > > DO IF MISSING(episode) .
> > > +COMPUTE  episode=LAG(episode) + 1.
> > >
> > > +COMPUTE  hiout= out.
> > > END IF.
> > >
> > > * That shows the logic explicitly.  Since temporary vars (#)  keep
> their
> > > * values until changed, across cases, the code should work the same
> > > * if #hiout replaced both hiout and  lag(hiout). Conceivably, that
> might
> > > * run faster than using lag(hiout).
> > >
> > > --
> > > Rich Ulrich
> > >
> > >
> > >
> > > Date: Thu, 22 May 2014 13:43:25 -0700
> > > From: [hidden email]
> > > <http://user/SendEmail.jtp?type=node&node=5726201&i=0>
> > > Subject: Re: Finding and marking related cases
> > > To: [hidden email] <
> > http://user/SendEmail.jtp?type=node&node=5726201&i=1>
> >
> > >
> > > Thank's a lot.
> > > Is there any way to find the number of combined  cases I shall put in
> > the
> > > "lookback", or is there a syntax that can regulate this itself? I have
> a
> > > couple of hundred thousand singel cases in total and no idea how many
> > > combined cases I will end up with.Is it possible to sort the
> constructed
> > > combined case numbers based on in and not id first?
> > > Best regardsLars N.
> > >  19. mai 2014 kl. 18:22 skrev David Marso [via SPSSX Discussion]
> > <[hidden
> > > email]>:
> > >
> > >         Something like the following?
> > >
> > > Note you may need to change the number of cases in the "lookback" (in
> > this
> > > case 4).
> > >
> > > --
> > >
> > > DATA LIST LIST / case id in out.
> > >
> > > BEGIN DATA.
> > >
> > > 1, 11, 13, 17
> > >
> > > 2, 12, 14, 15
> > >
> > > 3, 11, 14, 14
> > >
> > > 4, 13, 15, 22
> > >
> > > 5, 11, 17, 22
> > >
> > > 6, 12, 17, 24
> > >
> > > 7, 11, 27, 29
> > >
> > > END DATA.
> > >
> > >
> > > SORT CASES BY id in out.
> > >
> > > IF ($CASENUM EQ 1) episode=1.
> > >
> > > DO REPEAT #=1 TO 4.
> > >
> > > +  IF (id=LAG(id,#)) AND RANGE(in,lag(in,#),lag(out,#))
> > > episode=lag(episode).
> > >
> > > END REPEAT.
> > >
> > > IF MISSING(episode) episode=LAG(episode) + 1.
> > >
> > >
> > > LIST.
> > >
> > >
> > >
> > >
> > >     case       id       in      out  episode
> > >
> > >
> > >
> > >     1.00    11.00    13.00    17.00     1.00
> > >
> > >     3.00    11.00    14.00    14.00     1.00
> > >
> > >     5.00    11.00    17.00    22.00     1.00
> > >
> > >     7.00    11.00    27.00    29.00     2.00
> > >
> > >     2.00    12.00    14.00    15.00     3.00
> > >
> > >     6.00    12.00    17.00    24.00     4.00
> > >
> > >     4.00    13.00    15.00    22.00     5.00
> > >
> > >
> > >
> > >
> > >
> > > Number of cases read:  7    Number of cases listed:  7
Please reply to the list and not to my personal email.
Those desiring my consulting or training services please feel free to email me.
---
"Nolite dare sanctum canibus neque mittatis margaritas vestras ante porcos ne forte conculcent eas pedibus suis."
Cum es damnatorum possederunt porcos iens ut salire off sanguinum cliff in abyssum?"
Reply | Threaded
Open this post in threaded view
|

Re: Finding and marking related cases

nessie
I have tried to solve this myself, but I can't get it right.

I've used this example data:

DATA LIST FREE / ID in out x. 
BEGIN DATA 
1 1 2 0 2 3 3 0 3 3 4 1 4 4 5 0 5 6 8 0 6 7 9 0 7 7 12 0 8 13 13 1 9 13 14 0 10 14 16 0 
END DATA. 

id is case id
in is a fictive date/time equivalent
out is a fictive date/time equivalent
x is a marker (0=ordinary case, 1=special case)


To sum up, I want to mark all cases that overlap the in/out span of the x=1 cases.
I have tried to experiment with the lag and lead functions in SHIFT VALUES (thanks for the tip, David), but I haven't come up with a working solution.

Does anyone have any other tips or tricks to help me solve this problem?

Best regards
Lars
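
To make the target concrete, the marking Lars asks for can be sketched as a plain interval-overlap test. The snippet below is Python used purely for illustration (the names `cases`, `specials` and `flagged` are mine, not from the thread), and it checks every ordinary case against every special case rather than only neighboring rows:

```python
# Example data from the post: (ID, in, out, x); x=1 marks a special case.
cases = [
    (1, 1, 2, 0), (2, 3, 3, 0), (3, 3, 4, 1), (4, 4, 5, 0), (5, 6, 8, 0),
    (6, 7, 9, 0), (7, 7, 12, 0), (8, 13, 13, 1), (9, 13, 14, 0), (10, 14, 16, 0),
]

# Collect the in/out spans of the special (x=1) cases.
specials = [(i, o) for (_, i, o, x) in cases if x == 1]

# An ordinary case is flagged when its [in, out] interval overlaps
# the [in, out] interval of any special case (inclusive endpoints).
flagged = [cid for (cid, i, o, x) in cases
           if x == 0 and any(i <= s_out and s_in <= o for (s_in, s_out) in specials)]

print(flagged)  # [2, 4, 9]
```

On this example the overlapping ordinary cases are IDs 2, 4 and 9, matching the marking described earlier in the thread.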


On 5 Dec 2014, at 16:07, David Marso [via SPSSX Discussion] <[hidden email]> wrote:

Look up SHIFT VALUES (alternatively use the CREATE command). 
There is also a RANGE function in COMPUTE. 
-- 
nessie wrote
Sorry for my very late reply and for not providing example data when I 
posted my question. I feel bad! 
Thanks to both of you for two different but excellent solutions. 
Especially thanks to you, David, for taking the time to make and provide an 
illustrative example. 

*I have a new and different problem:* 
I have a large number of hospital admissions. Some of them are special and 
drain a lot of resources, and I want to mark all the other admissions 
within the time frame of these admissions to check whether they are influenced 
in a negative way. I modified your data, David, to give an example. 
The special cases are marked with the value 1 for variable x. 

I want to make a new variable that marks all the cases that overlap with 
the in/out time of these special cases. Here ID 3 and ID 8 are the special 
cases, and I want to mark cases 2, 4 and 9. 

Best regards 
Lars 

DATA LIST FREE / ID in out x. 
BEGIN DATA 
1 1 2 0 2 3 3 0 3 3 4 1 4 4 5 0 5 6 8 0 7 7 12 0 8 13 13 1 9 13 14 0 
END DATA. 

This gives these data: 

ID in out x 
1,00 1,00 2,00 ,00 
2,00 3,00 3,00 ,00 
3,00 3,00 4,00 1,00 
4,00 4,00 5,00 ,00 
5,00 6,00 8,00 ,00 
7,00 7,00 12,00 ,00 
8,00 13,00 13,00 1,00 
9,00 13,00 14,00 ,00 





2014-11-24 16:08 GMT+01:00 Maguin, Eugene [via SPSSX Discussion] <[hidden email]>: 

>  Look at the aggregate function CIN. It returns a count of values between 
> two endpoints. 
> 
> Gene Maguin 

Reply | Threaded
Open this post in threaded view
|

Re: Finding and marking related cases

David Marso
Administrator
It would be interesting to see what you tried.
Here is what I was trying to direct you toward:
If the next case is an x=1 case:
FLAG the current case if its 'out' falls between the 'in' and 'out' of that next case.
If the previous case is an x=1 case:
FLAG the current case if its 'in' falls between the 'in' and 'out' of that previous case.
This can be coded concisely as a 'vector product' of the range tests and the x flags.

DATA LIST FREE / ID in out x.
BEGIN DATA
1 1 2 0 2 3 3 0 3 3 4 1 4 4 5 0 5 6 8 0 6 7 9 0 7 7 12 0 8 13 13 1 9 13 14 0 10 14 16 0
END DATA.
CREATE inlead=LEAD(in,1)/outlead=LEAD(out,1)/xlead=LEAD(x,1).
COMPUTE flagged=SUM(RANGE(out,inlead,outlead) * xlead, RANGE(in,LAG(in),LAG(out))*LAG(x)).
LIST.
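
For comparison outside SPSS, the adjacent-case logic above (flag on the 'vector product' of the range tests and the x flags) can be mirrored in plain Python. This is only an illustrative sketch with hypothetical names (`cases`, `in_range`, `flagged`), not part of the original answer:

```python
# The thread's ten example cases: (ID, in, out, x); x=1 marks a special case.
cases = [
    (1, 1, 2, 0), (2, 3, 3, 0), (3, 3, 4, 1), (4, 4, 5, 0), (5, 6, 8, 0),
    (6, 7, 9, 0), (7, 7, 12, 0), (8, 13, 13, 1), (9, 13, 14, 0), (10, 14, 16, 0),
]

def in_range(v, lo, hi):
    # Like SPSS RANGE(v, lo, hi): inclusive on both endpoints.
    return lo <= v <= hi

flagged = []
for k, (cid, i, o, x) in enumerate(cases):
    nxt = cases[k + 1] if k + 1 < len(cases) else None
    prv = cases[k - 1] if k > 0 else None
    hit = False
    # Next case is special and current 'out' falls inside its span.
    if nxt is not None and nxt[3] == 1 and in_range(o, nxt[1], nxt[2]):
        hit = True
    # Previous case is special and current 'in' falls inside its span.
    if prv is not None and prv[3] == 1 and in_range(i, prv[1], prv[2]):
        hit = True
    if hit:
        flagged.append(cid)

print(flagged)  # [2, 4, 9]
```

Like the SPSS version, this only inspects the immediately adjacent rows, which is sufficient for the sorted example data.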

nessie wrote
I have tried to solve this myself, but I can't get it right.

I've used this example data:

DATA LIST FREE / ID in out x.
BEGIN DATA
1 1 2 0 2 3 3 0 3 3 4 1 4 4 5 0 5 6 8 0 6 7 9 0 7 7 12 0 8 13 13 1 9 13 14 0 10 14 16 0
END DATA.

id is case id
in is a fictive date/time equivalent
out is a fictive date/time equivalent
x is a marker (0=ordinary case, 1=special case)


To sum up, I want to mark all cases that overlap the in/out span of the x=1 cases.
I have tried to experiment with the lag and lead functions in SHIFT VALUES (thanks for the tip, David), but I haven't come up with a working solution.

Does anyone have any other tips or tricks to help me solve this problem?

Best regards
Lars


> 5. des. 2014 kl. 16.07 skrev David Marso [via SPSSX Discussion] <[hidden email]>:
>
> Look up SHIFT VALUES (alternatively use the CREATE command).
> There is also a RANGE function in COMPUTE.
> --
> nessie wrote
> Sorry for my very late reply and for not providing example data when I
> posted my question. I fell bad!
> Thank's to both of you for two different but excellent solutions.
> Espacially thank's to you David for taking time to make and provide am
> illustrative example.
>
> *I have a new and different problem:*
> I have a big number of hospital admissions. Some of them are special and
> draines a lot of resources and I want to mark all the other admissions
> within the time-frame of these admissions to check if they are influenced
> in a negative way. I modified your data David, to give an example.
> The special cases are marked with the value 1 for variable x.
>
> I want to make a new variable that markes all the cases that overlaps with
> the in/out time of these special cases. Here ID 3 and ID 8 are special
> cases, and I want to mark case; 2, 4 and 9.
>
> Best regards
> Lars
>
> DATA LIST FREE / ID in out x.
> BEGIN DATA
> 1 1 2 0 2 3 3 0 3 3 4 1 4 4 5 0 5 6 8 0 7 7 12 0 8 13 13 1 9 13 14 0
> END DATA.
>
> This gives these data:
>
> ID in out x
>
> 1,00 1,00 2,00 ,00
>
> 2,00 3,00 3,00 ,00
>
> 3,00 3,00 4,00 1,00
>
> 4,00 4,00 5,00 ,00
>
> 5,00 6,00 8,00 ,00
>
> 7,00 7,00 12,00 ,00
>
> 8,00 13,00 13,00 1,00
>
> 9,00 13,00 14,00 ,00
>
>
>
>
>
> 2014-11-24 16:08 GMT+01:00 Maguin, Eugene [via SPSSX Discussion] <
> [hidden email] <x-msg://1/user/SendEmail.jtp?type=node&node=5728120&i=0>>:
>
> >  Look at the aggregate function CIN. It returns a count of values between
> > two endpoints.
> >
> > Gene Maguin
> >
> >
> >
> > *From:* SPSSX(r) Discussion [mailto:[hidden email]
> > <http:///user/SendEmail.jtp?type=node&node=5728006&i=0> <http://user/SendEmail.jtp?type=node&node=5728006&i=0%3E>] *On Behalf Of *
> > nessie
> > *Sent:* Friday, November 21, 2014 7:30 AM
> > *To:* [hidden email]
> > <http:///user/SendEmail.jtp?type=node&node=5728006&i=1> <http://user/SendEmail.jtp?type=node&node=5728006&i=1%3E>
> > *Subject:* Re: Finding and marking related cases
> >
> >
> >
> > I have worked my way through the basic aggregate functions and I'm
> > starting to get the hang of it.
> >
> > I have just one basic question - is it possible to count the number of
> > different values a variable contains across the cases you aggregate, and
> > present all the different values in the aggregated suffix variables?
> >
> >
> >
> > E.g. I have a patient who has been to several different hospital
> > departments during one aggregated stay. For each case/episode the
> > department variable returns a value. Let's say the patient has 10 different
> > partial stays (cases) from 7 different departments which I would like to aggregate.
> > How can I count the number of different departments the patient has been to
> > and present them in the aggregate variables (which I would like to put at the
> > end of each case/part of stay)?
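The counting step Lars describes can be sketched outside SPSS in plain Python. This is a rough illustration only, not the SPSS solution; the patient/department values below are hypothetical.

```python
from collections import defaultdict

def distinct_per_group(rows):
    """rows: iterable of (group_key, value) pairs.
    Returns {group_key: sorted list of distinct values}."""
    seen = defaultdict(set)
    for key, value in rows:
        seen[key].add(value)
    return {key: sorted(values) for key, values in seen.items()}

# Hypothetical partial stays: (patient_id, department)
stays = [(11, "ER"), (11, "ICU"), (11, "ER"), (12, "ICU")]
by_patient = distinct_per_group(stays)
# by_patient -> {11: ["ER", "ICU"], 12: ["ICU"]}; patient 11 visited 2 distinct departments
```

Each patient's distinct-department count is then `len(by_patient[key])`, and the sorted list plays the role of the aggregated suffix variables.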
> >
> >
> >
> > Best regards
> >
> > Lars
> >
> >
> >
> > 2014-11-13 15:20 GMT+01:00 Lars E. Næss-Pleym <[hidden email]>:
> >
> > Sorry about the missing spaces; it probably doesn't matter, but I'll try
> > again to make it easier to read.
> >
> >
> >
> > DATA LIST LIST
> > / dia (A2) c_level (A2) case id in out .
> > BEGIN DATA.
> > "A1" "IP" 1 11 13 17
> > "B1" "OP" 2 12 15 15
> > "A1" "OP" 3 11 14 14
> > "A2" "IP" 4 13 15 22
> > "B2" "IP" 5 11 17 22
> > "C1" "IP" 6 12 17 24
> > "B3" "IP" 7 11 27 29
> > "C4" "IP" 8 13 22 29
> > "D1" "IP" 9 12 24 26
> > "D2" "IP" 10 12 28 30
> > END DATA.
> >
> > LIST.
> >
> >
> >
> > SORT CASES BY id in out.
> > IF ($CASENUM EQ 1) episode=1.
> > DO REPEAT #=1 TO 10.
> > +  IF (id=LAG(id,#)) AND RANGE(in,lag(in,#),lag(out,#))
> > episode=lag(episode).
> > END REPEAT.
> > IF MISSING(episode) episode=LAG(episode) + 1.
> >
> > LIST.
> >
> >
> >
> > SORT CASES BY episode(A) in(A) out(A).
> > MATCH FILES
> >   /FILE=*
> >   /BY episode
> >   /FIRST=FirstCase
> >   /LAST=PrimaryLast.
> > DO IF (FirstCase).
> > COMPUTE  CaseNr=1-PrimaryLast.
> > ELSE.
> > COMPUTE  CaseNr=CaseNr+1.
> > END IF.
> > LEAVE  CaseNr.
> > FORMATS  CaseNr (f7).
> > MATCH FILES
> >   /FILE=*
> >   /DROP=PrimaryLast.
> > VALUE LABELS  FirstCase 0 'Duplicate Case' 1 'Primary Case' 2 'Unique case'.
> > VARIABLE LEVEL  FirstCase (ORDINAL) /CaseNr (SCALE).
> > EXECUTE.
> >
> >
> >
> > IF (CaseNr=0) FirstCase=2.
> > EXECUTE.
> >
> > IF (FirstCase=2) CaseNr=1.
> > EXECUTE.
> >
> > 2014-11-13 15:17 GMT+01:00 Lars E. Næss-Pleym <[hidden email]>:
> >
> >
> >
> >  Here is an example syntax. dia=diagnosis, c_level=care_level
> > (IP=In-Patient, OP=Out-Patient), case=unique case_ID, id=person_ID,
> > in=date_in, out=date_out
> >
> >
> >
> >
> > From this I want to keep all the unique cases but also make a new
> > aggregated case for each episode, containing "in" from the first case and
> > "out" from the last, the "id" and "episode" variables, the
> > last "dia" variable, and the first "c_level" variable.
> >
> >
> >
> > I also want to know, e.g., how many different unique diagnoses occur within
> > the same episode, and whether the "c_level" has been the same for all cases
> > within one episode.
> >
> >
> >
> > I will look up AGGREGATE, RENAME VARIABLES, CASESTOVARS and MATCH
> > FILES, thanks for the tip.
> >
> >
> >
> > Best regards
> >
> > Lars
> >
> >
> > 2014-11-13 14:18 GMT+01:00 David Marso [via SPSSX Discussion] <[hidden email]>:
> >
> >
> >
> > Lars,
> >   Please post a more illustrative data set with a before/after of how you
> > want the final result to appear.
> > Meanwhile look up AGGREGATE, RENAME VARIABLES, CASESTOVARS and MATCH FILES
> > commands.
> > HTH, David
> > ==
> >
> >  *nessie wrote*
> >
> > Thanks David
> > That worked!
> >
> > From the original question in this thread, I have made an episode variable
> > linking concurrent admissions.
> > I have then sorted this in chronological order and given it a case number
> > within the episode.
> >
> > For "aggregated" episodes I now want to make a new case with aggregated
> > data for all the same variables, but pick some of the variables from the
> > first and some from the last case. I can sort my episode variables
> > according to case number and then try to aggregate with all the variables
> > as break variables from either the first or the last case.
> > - Is this the best way to do it?
> > - What if I want to gather some data from one of the episodes in the middle?
> > Is there an easy way to do this?
> >
> > Best regards
> > Lars N.
> >
> >
> > 2014-11-13 10:54 GMT+01:00 David Marso [via SPSSX Discussion] <[hidden email]>:
> >
> >
> > > See AGGREGATE in the FM.  There are FIRST and LAST functions.
> > > --
> > > AGGREGATE OUTFILE * MODE=ADDVARIABLES
> > > /BREAK.../f=FIRST(?)/l=LAST(?)..........
> > >
> > >  nessie wrote
> > > I'm picking up this old thread.
> > > You helped me a lot in finding related cases and marking them with
> > > chronological case numbers. Now I have a new problem!
> > >
> > > I want to make a new aggregated case containing some info from the first
> > > and some from the last related case.
> > > E.g. "In" from the first related and "Out" from the last related case.
> > > Can you help me do this?
> > >
> > > Best regards
> > > Lars N.
> > > 2014-05-23 12:39 GMT+02:00 David Marso [via SPSSX Discussion] <[hidden email]>:
> >
> > >
> >
> >
> > > > Good catch Rich!
> > > > Here is a version using #scratch variables and a slightly different
> > > > approach.
> > > > DATA LIST LIST / case id in out.
> > > > BEGIN DATA.
> > > > 1, 11, 13, 17
> > > > 2, 12, 14, 15
> > > > 3, 11, 14, 14
> > > > 4, 13, 15, 22
> > > > 5, 11, 17, 22
> > > > 6, 12, 17, 24
> > > > 7, 11, 27, 29
> > > > END DATA.
> > > >
> > > > SORT CASES BY id in out.
> > > > DO IF ($CASENUM EQ 1 OR id NE LAG(id) ).
> > > > +  COMPUTE  episode=SUM(1,LAG(episode)).
> > > > +  COMPUTE  #hiout= out.
> > > > ELSE.
> > > > +  COMPUTE episode=sum(lag(episode),NOT(range(in,lag(in),#hiout ))).
> > > > END IF.
> > > > COMPUTE #hiout= max(out, #hiout ).
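David's single-pass #hiout approach is a standard interval-merge. For readers outside SPSS, here is a rough Python sketch of the same idea (not the original syntax; it assumes rows are pre-sorted by id and in, as the SORT CASES step guarantees):

```python
def number_episodes(rows):
    """rows: list of (id_, in_, out) tuples, pre-sorted by id then in.
    Returns a parallel list of episode numbers: stays for the same id
    that begin before the running episode's highest 'out' share an episode."""
    episodes = []
    episode = 0
    prev_id = None
    hi_out = None
    for id_, in_, out in rows:
        if id_ != prev_id or in_ > hi_out:
            episode += 1               # new person, or a gap after the last episode
            hi_out = out
        else:
            hi_out = max(hi_out, out)  # extend the episode's running window
        episodes.append(episode)
        prev_id = id_
    return episodes

rows = [(11, 13, 17), (11, 14, 14), (11, 17, 22), (11, 27, 29),
        (12, 14, 15), (12, 17, 24), (13, 15, 22)]
# number_episodes(rows) -> [1, 1, 1, 2, 3, 4, 5]
```

Like the #hiout version, this needs no fixed "lookback" depth, because the running window already summarizes all earlier stays in the episode.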
> > > >
> > > >  Rich Ulrich wrote
> > > > If I see the problem right, logically, you only need to look at one
> > > > previous line.
> > > >
> > > > If it is the same episode, then you want to extend the testable OUT date
> > > > whenever the new line has a higher one.  Since the file is sorted by
> > > > IN, the previous IN is always okay.
> > > >      This looks like it should work -
> > > >
> > > > SORT CASES BY id in out.
> > > >
> > > > DO IF ($CASENUM EQ 1).
> > > > +COMPUTE  episode=1.
> > > >
> > > > +COMPUTE  hiout= out.
> > > > END IF.
> > > >
> > > > DO IF (id=LAG(id)) AND RANGE(in,lag(in),lag(hiout) ) .
> > > > +COMPUTE episode=lag(episode).
> > > >
> > > > +COMPUTE hiout= max(out, lag(hiout) ).
> > > > END IF.
> > > >
> > > > DO IF MISSING(episode) .
> > > > +COMPUTE  episode=LAG(episode) + 1.
> > > >
> > > > +COMPUTE  hiout= out.
> > > > END IF.
> > > >
> > > > * That shows the logic explicitly.  Since temporary vars (#) keep their
> > > > * values until changed, across cases, the code should work the same
> > > > * if #hiout replaced both hiout and lag(hiout). Conceivably, that might
> > > > * run faster than using lag(hiout).
> > > >
> > > > --
> > > > Rich Ulrich
> > > >
> > > >
> > > >
> > > > Date: Thu, 22 May 2014 13:43:25 -0700
> > > > From: [hidden email]
> > > > Subject: Re: Finding and marking related cases
> > > > To: [hidden email]
> > >
> > > >
> > > > Thanks a lot.
> > > > Is there any way to find the number of combined cases I shall put in the
> > > > "lookback", or is there a syntax that can regulate this itself? I have a
> > > > couple of hundred thousand single cases in total and no idea how many
> > > > combined cases I will end up with. Is it possible to sort the constructed
> > > > combined case numbers based on in and not id first?
> > > > Best regards
> > > > Lars N.
> > > > On 19 May 2014 at 18:22, David Marso [via SPSSX Discussion] <[hidden email]> wrote:
> > > >
> > > >         Something like the following?
> > > >
> > > > Note you may need to change the number of cases in the "lookback" (in
> > > > this case 4).
> > > >
> > > > --
> > > >
> > > > DATA LIST LIST / case id in out.
> > > > BEGIN DATA.
> > > > 1, 11, 13, 17
> > > > 2, 12, 14, 15
> > > > 3, 11, 14, 14
> > > > 4, 13, 15, 22
> > > > 5, 11, 17, 22
> > > > 6, 12, 17, 24
> > > > 7, 11, 27, 29
> > > > END DATA.
> > > >
> > > > SORT CASES BY id in out.
> > > > IF ($CASENUM EQ 1) episode=1.
> > > > DO REPEAT #=1 TO 4.
> > > > +  IF (id=LAG(id,#)) AND RANGE(in,lag(in,#),lag(out,#)) episode=lag(episode).
> > > > END REPEAT.
> > > > IF MISSING(episode) episode=LAG(episode) + 1.
> > > >
> > > > LIST.
> > > >
> > > >     case       id       in      out  episode
> > > >
> > > >     1.00    11.00    13.00    17.00     1.00
> > > >     3.00    11.00    14.00    14.00     1.00
> > > >     5.00    11.00    17.00    22.00     1.00
> > > >     7.00    11.00    27.00    29.00     2.00
> > > >     2.00    12.00    14.00    15.00     3.00
> > > >     6.00    12.00    17.00    24.00     4.00
> > > >     4.00    13.00    15.00    22.00     5.00
> > > >
> > > > Number of cases read:  7    Number of cases listed:  7
> > > >
> > > >
> > > >
> > > >
> > > >                                 Please reply to the list and not to my
> > > > personal email.
> > > >
> > > > Those desiring my consulting or training services please feel free to
> > > > email me.
> > > >
> > > > ---
> > > >
> > > > "Nolite dare sanctum canibus neque mittatis margaritas vestras ante porcos
> > > > ne forte conculcent eas pedibus suis."
> > > > Cum es damnatorum possederunt porcos iens ut salire off sanguinum cliff in
> > > > abyssum?"
> > > >
> > > >
> > > >
> > > >
> > > >
> > > >
> > > >
> > > > If you reply to this email, your message will be added to the discussion below:
> > > > http://spssx-discussion.1045642.n5.nabble.com/Finding-and-marking-related-cases-tp5726128p5726135.html
> > > >
> > > >
> > > >
> > > >
> > > >
> > > >
> > > >
> > > > View this message in context: Re: Finding and marking related cases
> > > >
> > > > Sent from the SPSSX Discussion mailing list archive at Nabble.com.
> > > >
> > > >
> > > >
> > >
> > >
> >
> >
> >
> >
> > =====================
> > To manage your subscription to SPSSX-L, send a message to [hidden email]
> > (not to SPSSX-L), with no body text except the command. To leave the list,
> > send the command SIGNOFF SPSSX-L. For a list of commands to manage
> > subscriptions, send the command INFO REFCARD.
> >
>
>
Please reply to the list and not to my personal email.
Those desiring my consulting or training services please feel free to email me.
---
"Nolite dare sanctum canibus neque mittatis margaritas vestras ante porcos ne forte conculcent eas pedibus suis."
Cum es damnatorum possederunt porcos iens ut salire off sanguinum cliff in abyssum?"
Reply | Threaded
Open this post in threaded view
|

Re: Finding and marking related cases

Bruce Weaver
Administrator
In reply to this post by nessie
I had to go look at one of your earlier posts to try to understand what you want.  The following is a (very) rough first draft, but I think it accomplishes what you are asking for.  Notice the use of the RANGE function.

NEW FILE.
DATASET CLOSE all.
DATA LIST FREE / ID in out x.
BEGIN DATA
1 1 2 0 2 3 3 0 3 3 4 1 4 4 5 0 5 6 8 0 6 7 9 0 7 7 12 0 8 13 13 1 9 13 14 0 10 14 16 0
END DATA.
DATASET NAME Original.
DATASET COPY Copy.
DATASET ACTIVATE Copy.
SELECT IF x EQ 1.
LIST.

VARSTOCASES
  /ID=id
  /MAKE InOut FROM in out
  /INDEX=Index1(2)
  /DROP=ID x
  /NULL=KEEP.

SORT CASES BY id Index1.
CASESTOVARS
  /INDEX=id Index1
  /GROUPBY=VARIABLE
  /SEPARATOR="".
COMPUTE @junk1 = 1.
EXECUTE.

DATASET ACTIVATE Original.
COMPUTE @junk1 = 1.
EXECUTE.

MATCH FILES
 FILE = * /  
 TABLE = "Copy" /
 BY @junk1.
EXECUTE.
DATASET NAME Final.
DATASET ACTIVATE Final.
DATASET CLOSE all.

COMPUTE @junk2 = 1.
COMPUTE FLAG = 0.
DO REPEAT I = InOut11 InOut21 / O = InOut12 InOut22.
- IF not Flag Flag = NOT x and (RANGE(in,I,O) OR RANGE(out,I,O)).
END REPEAT.
EXECUTE.
DELETE VARIABLES @junk1 to @junk2.
FORMATS Flag (F1).
LIST.

OUTPUT:

      ID       in      out        x FLAG
 
    1.00     1.00     2.00      .00   0
    2.00     3.00     3.00      .00   1
    3.00     3.00     4.00     1.00   0
    4.00     4.00     5.00      .00   1
    5.00     6.00     8.00      .00   0
    6.00     7.00     9.00      .00   0
    7.00     7.00    12.00      .00   0
    8.00    13.00    13.00     1.00   0
    9.00    13.00    14.00      .00   1
   10.00    14.00    16.00      .00   0
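As a cross-check of the FLAG column, the same overlap test can be sketched in plain Python (not SPSS; like the RANGE-based syntax above, it flags an ordinary case when either of its endpoints falls inside a special case's in/out window):

```python
def flag_overlaps(rows):
    """rows: list of (id_, in_, out, x) tuples.
    Returns the ids of ordinary (x == 0) cases whose 'in' or 'out'
    falls inside the [in, out] window of any special (x == 1) case."""
    specials = [(s_in, s_out) for _, s_in, s_out, x in rows if x == 1]
    return [id_ for id_, in_, out, x in rows
            if x == 0 and any(lo <= in_ <= hi or lo <= out <= hi
                              for lo, hi in specials)]

rows = [(1, 1, 2, 0), (2, 3, 3, 0), (3, 3, 4, 1), (4, 4, 5, 0),
        (5, 6, 8, 0), (6, 7, 9, 0), (7, 7, 12, 0), (8, 13, 13, 1),
        (9, 13, 14, 0), (10, 14, 16, 0)]
# flag_overlaps(rows) -> [2, 4, 9], matching the FLAG = 1 rows listed above
```

Note that an endpoint-only test like this (and like the two RANGE checks) would miss an ordinary stay that fully contains a special window without sharing an endpoint; none of the example rows hit that situation.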


nessie wrote
I have tried to solve this myself, but I can't get it right.

I've used this example data:

DATA LIST FREE / ID in out x.
BEGIN DATA
1 1 2 0 2 3 3 0 3 3 4 1 4 4 5 0 5 6 8 0 6 7 9 0 7 7 12 0 8 13 13 1 9 13 14 0 10 14 16 0
END DATA.

id is case id
in is a fictive date/time equivalent
out is a fictive date/time equivalent
x is a marker (0=ordinary case, 1=special case)


To sum up, I want to mark all cases which overlap the in/out-span of the x=1 cases.
I have tried to experiment with the lag and lead functions in SHIFT VALUES (thanks for the tip, David), but I haven't come up with a working solution.

Does anyone have any other tips or tricks to help me solve this problem?

Best regards
Lars


> On 5 Dec 2014 at 16:07, David Marso [via SPSSX Discussion] <[hidden email]> wrote:
>
> Look up SHIFT VALUES (alternatively use the CREATE command).
> There is also a RANGE function in COMPUTE.
> --
> nessie wrote
> Sorry for my very late reply and for not providing example data when I
> posted my question. I feel bad!
> Thanks to both of you for two different but excellent solutions.
> Especially thanks to you, David, for taking time to make and provide an
> illustrative example.
>
> *I have a new and different problem:*
> I have a big number of hospital admissions. Some of them are special and
> draines a lot of resources and I want to mark all the other admissions
> within the time-frame of these admissions to check if they are influenced
> in a negative way. I modified your data David, to give an example.
> The special cases are marked with the value 1 for variable x.
>
> I want to make a new variable that markes all the cases that overlaps with
> the in/out time of these special cases. Here ID 3 and ID 8 are special
> cases, and I want to mark case; 2, 4 and 9.
>
> Best regards
> Lars
>
> DATA LIST FREE / ID in out x.
> BEGIN DATA
> 1 1 2 0 2 3 3 0 3 3 4 1 4 4 5 0 5 6 8 0 7 7 12 0 8 13 13 1 9 13 14 0
> END DATA.
>
> This gives these data:
>
> ID in out x
>
> 1,00 1,00 2,00 ,00
>
> 2,00 3,00 3,00 ,00
>
> 3,00 3,00 4,00 1,00
>
> 4,00 4,00 5,00 ,00
>
> 5,00 6,00 8,00 ,00
>
> 7,00 7,00 12,00 ,00
>
> 8,00 13,00 13,00 1,00
>
> 9,00 13,00 14,00 ,00
>
>
>
>
>
> 2014-11-24 16:08 GMT+01:00 Maguin, Eugene [via SPSSX Discussion] <
> [hidden email] <x-msg://1/user/SendEmail.jtp?type=node&node=5728120&i=0>>:
>
> >  Look at the aggregate function CIN. It returns a count of values between
> > two endpoints.
> >
> > Gene Maguin
> >
> >
> >
> > *From:* SPSSX(r) Discussion [mailto:[hidden email]
> > <http:///user/SendEmail.jtp?type=node&node=5728006&i=0> <http://user/SendEmail.jtp?type=node&node=5728006&i=0%3E>] *On Behalf Of *
> > nessie
> > *Sent:* Friday, November 21, 2014 7:30 AM
> > *To:* [hidden email]
> > <http:///user/SendEmail.jtp?type=node&node=5728006&i=1> <http://user/SendEmail.jtp?type=node&node=5728006&i=1%3E>
> > *Subject:* Re: Finding and marking related cases
> >
> >
> >
> > I have worked my way through the basic aggregate functions and I'm
> > starting to get the hang of it.
> >
> > I have just one basic question - is it possible to count the number of
> > different values a variable contains across the cases you aggregate and
> > present all the different values in the aggregated sufix variables?
> >
> >
> >
> > E.g. I have a patient who has been to several different hospital
> > departments during one aggregated stay. For each case/episode the
> > department variable returns a value. Lets say the has 10 different partial
> > stays (cases) from 7 different departments wich I would like to aggregate.
> > How can I count the number of different departments the patient has been to
> > and present them in the aggregat variables (wich I would like to put at the
> > end of each case/part of stay).
> >
> >
> >
> > Best regards
> >
> > Lars
> >
> >
> >
> > 2014-11-13 15:20 GMT+01:00 Lars E. Næss-Pleym <[hidden email]
> > <http:///user/SendEmail.jtp?type=node&node=5727996&i=0> <http://user/SendEmail.jtp?type=node&node=5727996&i=0%3E>>:
> >
> >  Sorry about the missing spaces, it probably doesn't matter, but I try
> > again to make it more easy to read.
> >
> >
> >
> > DATA LIST LIST
> > / dia (A2) c_level (A2) case id in out .
> > BEGIN DATA.
> > "A1" "IP" 1 11 13 17
> > "B1" "OP" 2 12 15 15
> > "A1" "OP" 3 11 14 14
> > "A2" "IP" 4 13 15 22
> > "B2" "IP" 5 11 17 22
> > "C1" "IP" 6 12 17 24
> > "B3" "IP" 7 11 27 29
> > "C4" "IP" 8 13 22 29
> > "D1" "IP" 9 12 24 26
> > "D2" "IP" 10 12 28 30
> > END DATA.
> >
> > LIST.
> >
> >
> >
> > SORT CASES BY id in out.
> > IF ($CASENUM EQ 1) episode=1.
> > DO REPEAT #=1 TO 10.
> > +  IF (id=LAG(id,#)) AND RANGE(in,lag(in,#),lag(out,#))
> > episode=lag(episode).
> > END REPEAT.
> > IF MISSING(episode) episode=LAG(episode) + 1.
> >
> > LIST.
> >
> >
> >
> > SORT CASES BY episode(A) in(A) out(A).
> > MATCH FILES
> >   /FILE=*
> >   /BY episode
> >   /FIRST=FirstCase
> >   /LAST=PrimaryLast.
> > DO IF (FirstCase).
> > COMPUTE  CaseNr=1-PrimaryLast.
> > ELSE.
> > COMPUTE  CaseNr=CaseNr+1.
> > END IF.
> > LEAVE  CaseNr.
> > FORMATS  CaseNr (f7).
> > MATCH FILES
> >   /FILE=*
> >   /DROP=PrimaryLast.
> > VALUE LABELS  FirstCase 0 'Duplicate Case' 1 'Primary Case' 2'Unique case'.
> > VARIABLE LEVEL  FirstCase (ORDINAL) /CaseNr (SCALE).
> > EXECUTE.
> >
> >
> >
> > IF (CaseNr=0) FirstCase=2.
> > EXECUTE.
> >
> > IF (FirstCase=2) CaseNr=1.
> > EXECUTE.
> >
> > 2014-11-13 15:17 GMT+01:00 Lars E. Næss-Pleym <[hidden email]
> > <http:///user/SendEmail.jtp?type=node&node=5727996&i=1> <http://user/SendEmail.jtp?type=node&node=5727996&i=1%3E>>:
> >
> >
> >
> >  Here is an example syntax. dia=diagnosis, c_level=care_level
> > (IP=In-Patient, OP=Out-Patient), case=unique case_ID, id=person_ID,
> > in=date_in, out=date_out
> >
> >
> >
> >  DATA LIST LIST
> > / dia (A2) c_level (A2) case id in out .
> > BEGIN DATA.
> > "A1" "IP" 1 11 13 17
> > "B1" "OP" 2 12 15 15
> > "A1" "OP" 3 11 14 14
> > "A2" "IP" 4 13 15 22
> > "B2" "IP" 5 11 17 22
> > "C1" "IP" 6 12 17 24
> > "B3" "IP" 7 11 27 29
> > "C4" "IP" 8 13 22 29
> > "D1" "IP" 9 12 24 26
> > "D2" "IP" 10 12 28 30
> > END DATA.
> >
> > LIST.
> >
> > SORT CASES BY id in out.
> > IF ($CASENUM EQ 1) episode=1.
> > DO REPEAT #=1 TO 10.
> > +  IF (id=LAG(id,#)) AND RANGE(in,lag(in,#),lag(out,#))
> > episode=lag(episode).
> > END REPEAT.
> > IF MISSING(episode) episode=LAG(episode) + 1.
> >
> > LIST.
> >
> > SORT CASES BY episode(A) in(A) out(A).
> > MATCH FILES
> >   /FILE=*
> >   /BY episode
> >   /FIRST=FirstCase
> >   /LAST=PrimaryLast.
> > DO IF (FirstCase).
> > COMPUTE  CaseNr=1-PrimaryLast.
> > ELSE.
> > COMPUTE  CaseNr=CaseNr+1.
> > END IF.
> > LEAVE  CaseNr.
> > FORMATS  CaseNr (f7).
> > MATCH FILES
> >   /FILE=*
> >   /DROP=PrimaryLast.
> > VALUE LABELS  FirstCase 0 'Duplicate Case' 1 'Primary Case' 2 'Unique case'.
> > VARIABLE LEVEL  FirstCase (ORDINAL) /CaseNr (SCALE).
> > EXECUTE.
> >
> > IF (CaseNr=0) FirstCase=2.
> > EXECUTE.
> >
> > IF (FirstCase=2) CaseNr=1.
> > EXECUTE.
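As a hypothetical cross-check outside SPSS, the MATCH FILES /FIRST /LAST numbering above can be sketched in Python (illustrative names only, not part of the original syntax): the first case of a multi-case episode is the primary case numbered 1, subsequent cases count up, and a single-case episode is marked unique.

```python
def number_within_episode(episodes):
    """episodes: one episode key per case, pre-sorted by episode.
    Returns (status, case_nr) per case, mirroring FirstCase/CaseNr."""
    out = []
    n = len(episodes)
    for i, ep in enumerate(episodes):
        first = i == 0 or episodes[i - 1] != ep          # /FIRST=FirstCase
        last = i == n - 1 or episodes[i + 1] != ep       # /LAST=PrimaryLast
        if first and last:
            out.append(("unique", 1))                    # FirstCase=2, CaseNr=1
        elif first:
            out.append(("primary", 1))                   # CaseNr=1-PrimaryLast
        else:
            out.append(("duplicate", out[-1][1] + 1))    # CaseNr=CaseNr+1
    return out
```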
> >
> > From this I want to keep all the unique cases but also make a new
> > aggregated case for all episodes containing "in" from the first case and
> > "out" from the last, the "id" and "episode" variables, the
> > last "Dia" variable, and the first "c_level" variable.
> >
> >
> >
> > I also want to know e.g. how many different unique diagnoses occur within the
> > same episode, and whether the "c_level" has been the same for all cases
> > within one episode.
> >
> >
> >
> > I will look up AGGREGATE, RENAME VARIABLES, CASESTOVARS and MATCH
> > FILES, thanks for the tip.
> >
> >
> >
> > Best regards
> >
> > Lars
> >
> >
> > 2014-11-13 14:18 GMT+01:00 David Marso [via SPSSX Discussion] <[hidden
> > email]>:
> >
> >
> >
> > Lars,
> >   Please post a more illustrative data set with a before/after of how you
> > want the final result to appear.
> > Meanwhile look up AGGREGATE, RENAME VARIABLES, CASESTOVARS and MATCH FILES
> > commands.
> > HTH, David
> > ==
> >
> >  *nessie wrote*
> >
> > Thanks David
> > That worked!
> >
> > From the original question in this thread, I have made an episode variable
> > linking concurrent admissions.
> > I have then sorted this in chronological order and given it a case number
> > within the episode.
> >
> > For "aggregated" episodes I now want to make a new case with aggregated
> > data for all the same variables but pick some of the variables from the
> > first and some from the last case. I can sort my episode variables
> > according to casenumber and then try to aggregate with all the variables
> > as
> > break variables from either the first or the last case.
> > - Is this the best way to do it?
> > - What if I want to gather some data from one of the episodes in the middle?
> > Is there an easy way to do this?
> >
> > Best regards
> > Lars N.
> >
> >
> > 2014-11-13 10:54 GMT+01:00 David Marso [via SPSSX Discussion] <
> > [hidden email]>:
> >
> >
> > > See AGGREGATE in the FM.  There are FIRST and LAST functions.
> > > --
> > > AGGREGATE OUTFILE * MODE=ADDVARIABLES
> > > /BREAK.../f=FIRST(?)/l=LAST(?)..........
> > >
> > >  nessie wrote
> > > I'm picking up this old thread.
> > > You helped me a lot in finding related cases and marking them with
> > > chronological case number. Now I have a new problem!
> > >
> > > I want to make a new aggregated case containing some info from the first
> > > and some from the last related case.
> > > E.g. "In" from the first related and "Out" from the last related case.
> > Can
> > > you help me do this?
> > >
> > > Best regards
> > > Lars N.
> > > 2014-05-23 12:39 GMT+02:00 David Marso [via SPSSX Discussion] <
> >
> > > [hidden email]>:
> >
> > >
> >
> >
> > > > Good catch Rich!
> > > > Here is a version using #scratch variables and a slightly different
> > > > approach.
> > > > DATA LIST LIST / case id in out.
> > > > BEGIN DATA.
> > > > 1, 11, 13, 17
> > > > 2, 12, 14, 15
> > > > 3, 11, 14, 14
> > > > 4, 13, 15, 22
> > > > 5, 11, 17, 22
> > > > 6, 12, 17, 24
> > > > 7, 11, 27, 29
> > > > END DATA.
> > > >
> > > > SORT CASES BY id in out.
> > > > DO IF ($CASENUM EQ 1 OR id NE LAG(id) ).
> > > > +  COMPUTE  episode=SUM(1,LAG(episode)).
> > > > +  COMPUTE  #hiout= out.
> > > > ELSE.
> > > > +  COMPUTE episode=sum(lag(episode),NOT(range(in,lag(in),#hiout ))).
> > > > END IF.
> > > > COMPUTE #hiout= max(out, #hiout ).
> > > >
> > > >  Rich Ulrich wrote
> > > > If I see the problem right, logically, you only need to look at one
> > > > previous line.
> > > >
> > > > If it is the same episode, then you want to extend the testable OUT
> > date
> > > > whenever the new line has a higher one.  Since the file is sorted by
> > > > IN, the previous IN is always okay.
> > > >      This looks like it should work -
> > > >
> > > > SORT CASES BY id in out.
> > > >
> > > > DO IF ($CASENUM EQ 1)
> > > > +COMPUTE  episode=1.
> > > >
> > > > +COMPUTE  hiout= out.
> > > > END IF.
> > > >
> > > > DO IF (id=LAG(id)) AND RANGE(in,lag(in),lag(hiout) ) .
> > > > +COMPUTE episode=lag(episode).
> > > >
> > > > +COMPUTE hiout= max(out, lag(hiout) ).
> > > > END IF.
> > > >
> > > > DO IF MISSING(episode) .
> > > > +COMPUTE  episode=LAG(episode) + 1.
> > > >
> > > > +COMPUTE  hiout= out.
> > > > END IF.
> > > >
> > > > * That shows the logic explicitly.  Since temporary vars (#)  keep
> > their
> > > > * values until changed, across cases, the code should work the same
> > > > * if #hiout replaced both hiout and  lag(hiout). Conceivably, that
> > might
> > > > * run faster than using lag(hiout).
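Rich's running hiout logic amounts to a single forward pass. Here is a hypothetical Python sketch of the same idea (not the poster's code; field names are illustrative), assuming rows are pre-sorted by (id, in, out):

```python
def number_episodes(rows):
    """rows: (case, pid, t_in, t_out) tuples, pre-sorted by (pid, t_in, t_out).
    A case joins the current episode while its t_in does not exceed the
    running maximum t_out seen so far for that person; otherwise it opens
    a new episode (the single-lookback idea with an extending hiout)."""
    episodes = []
    episode = 0
    hi_out = None
    for i, (case, pid, t_in, t_out) in enumerate(rows):
        new_person = i == 0 or pid != rows[i - 1][1]
        if new_person or t_in > hi_out:
            episode += 1                      # start a new episode
            hi_out = t_out
        else:
            hi_out = max(hi_out, t_out)       # extend the testable "out"
        episodes.append(episode)
    return episodes
```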
> > > >
> > > > --
> > > > Rich Ulrich
> > > >
> > > >
> > > >
> > > > Date: Thu, 22 May 2014 13:43:25 -0700
> > > > From: [hidden email]
> > > > Subject: Re: Finding and marking related cases
> > > > To: [hidden email]
> > >
> > > >
> > > > Thanks a lot.
> > > > Is there any way to find the number of combined cases I should put in the
> > > > "lookback", or is there a syntax that can regulate this itself? I have a
> > > > couple of hundred thousand single cases in total and no idea how many
> > > > combined cases I will end up with. Is it possible to sort the constructed
> > > > combined case numbers based on in and not id first?
> > > > Best regards, Lars N.
> > > >  On 19 May 2014, 18:22, David Marso [via SPSSX Discussion]
> > > > <[hidden email]> wrote:
> > > >
> > > >         Something like the following?
> > > >
> > > > Note you may need to change the number of cases in the "lookback" (in
> > > this
> > > > case 4).
> > > >
> > > > --
> > > >
> > > > DATA LIST LIST / case id in out.
> > > >
> > > > BEGIN DATA.
> > > >
> > > > 1, 11, 13, 17
> > > >
> > > > 2, 12, 14, 15
> > > >
> > > > 3, 11, 14, 14
> > > >
> > > > 4, 13, 15, 22
> > > >
> > > > 5, 11, 17, 22
> > > >
> > > > 6, 12, 17, 24
> > > >
> > > > 7, 11, 27, 29
> > > >
> > > > END DATA.
> > > >
> > > >
> > > > SORT CASES BY id in out.
> > > >
> > > > IF ($CASENUM EQ 1) episode=1.
> > > >
> > > > DO REPEAT #=1 TO 4.
> > > >
> > > > +  IF (id=LAG(id,#)) AND RANGE(in,lag(in,#),lag(out,#))
> > > > episode=lag(episode).
> > > >
> > > > END REPEAT.
> > > >
> > > > IF MISSING(episode) episode=LAG(episode) + 1.
> > > >
> > > >
> > > > LIST.
> > > >
> > > >
> > > >
> > > >
> > > >     case       id       in      out  episode
> > > >
> > > >
> > > >
> > > >     1.00    11.00    13.00    17.00     1.00
> > > >
> > > >     3.00    11.00    14.00    14.00     1.00
> > > >
> > > >     5.00    11.00    17.00    22.00     1.00
> > > >
> > > >     7.00    11.00    27.00    29.00     2.00
> > > >
> > > >     2.00    12.00    14.00    15.00     3.00
> > > >
> > > >     6.00    12.00    17.00    24.00     4.00
> > > >
> > > >     4.00    13.00    15.00    22.00     5.00
> > > >
> > > >
> > > >
> > > >
> > > >
> > > > Number of cases read:  7    Number of cases listed:  7
> > > >
> > > >
> > > >
> > > >
> > > >                                 Please reply to the list and not to my
> > > > personal email.
> > > >
> > > > Those desiring my consulting or training services please feel free to
> > > > email me.
> > > >
> > > > ---
> > > >
> > > > "Nolite dare sanctum canibus neque mittatis margaritas vestras ante
> > > porcos
> > > > ne forte conculcent eas pedibus suis."
> > > >
> > > > Cum es damnatorum possederunt porcos iens ut salire off sanguinum
> > cliff
> > > in
> > > > abyssum?"
> > > >
> > > >
> > > >
> > > >
> > > >
> > > >
> > > >
> > > >
> > > >
> > > >
> >
> >
> > > >
> > >
> > >
> > >
> > >
> >
> >
> >
--
Bruce Weaver
bweaver@lakeheadu.ca
http://sites.google.com/a/lakeheadu.ca/bweaver/

"When all else fails, RTFM."

PLEASE NOTE THE FOLLOWING: 
1. My Hotmail account is not monitored regularly. To send me an e-mail, please use the address shown above.
2. The SPSSX Discussion forum on Nabble is no longer linked to the SPSSX-L listserv administered by UGA (https://listserv.uga.edu/).
Reply | Threaded
Open this post in threaded view
|

Re: Finding and marking related cases

David Marso
Administrator
This post was updated on .
In reply to this post by David Marso
<Addendum:>
Perhaps for logical clarity substitute
+  COMPUTE flagged=MAX(flagged,RANGE(#in,in,out) | RANGE(#out,in,out)).
rather than the existing SUM function in the MACRO.
That way it returns 0/1 rather than a count in {0,...,4}.
</Addendum>

Here is a conceptually simpler approach which has the advantage of also flagging cases where there is more than one qualifying overlap.  If this seems mysterious, look up SCRATCH variables in the FM and learn to love them.  The macro (DEFINE ... !ENDDEFINE) simply avoids redundant code.

DATA LIST FREE / ID in out x.
BEGIN DATA
1 1 2 0  2 3 3 0  2.1 3 3.5 0  3 3 4 1  4 4 5 0  5 6 8 0  6 7 9 0  7 7 12 0
8 13 13 1  8.1 13 13.1 0  9 13 13.2 0  9.1 13 13.3 0  9.2 13 13.5 0  9.3 13 14 0  10 14 16 0
END DATA.

DEFINE flag()
DO IF x.
+  COMPUTE #in = in.
+  COMPUTE #out=out.
ELSE.
+  COMPUTE flagged=SUM(flagged,RANGE(#in,in,out),RANGE(#out,in,out)).
END IF.
!ENDDEFINE.

COMPUTE @order=$CASENUM.
SORT CASES BY in (A) out (A).
flag.
SORT CASES BY out (D) in (D).
flag.
SORT CASES BY @order.
LIST.
 ID       in      out        x   @order  flagged
 
    1.00     1.00     2.00      .00     1.00      .00
    2.00     3.00     3.00      .00     2.00     1.00
    2.00     3.00     3.50      .00     3.00     1.00
    3.00     3.00     4.00     1.00     4.00      .
    4.00     4.00     5.00      .00     5.00     1.00
    5.00     6.00     8.00      .00     6.00      .00
    6.00     7.00     9.00      .00     7.00      .00
    7.00     7.00    12.00      .00     8.00      .00
    8.00    13.00    13.00     1.00     9.00      .
    8.00    13.00    13.10      .00    10.00     2.00
    9.00    13.00    13.20      .00    11.00     2.00
    9.00    13.00    13.30      .00    12.00     2.00
    9.00    13.00    13.50      .00    13.00     2.00
    9.00    13.00    14.00      .00    14.00     2.00
   10.00    14.00    16.00      .00    15.00      .00
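For readers outside SPSS, the two-pass scratch-variable trick can be sketched in Python (a hypothetical illustration using the Addendum's 0/1 MAX variant; the in/out fields become t_in/t_out since `in` is a Python keyword):

```python
def in_range(v, lo, hi):
    """SPSS RANGE(v, lo, hi): v between lo and hi inclusive."""
    return lo <= v <= hi

def flag_pass(cases):
    """One pass: remember the span of the last special case (x == 1) seen,
    and flag any later case whose span contains either end of it."""
    span = None                                  # the #in / #out scratch pair
    for c in cases:
        if c["x"]:
            span = (c["t_in"], c["t_out"])
        elif span is not None:
            lo, hi = span
            hit = in_range(lo, c["t_in"], c["t_out"]) or in_range(hi, c["t_in"], c["t_out"])
            c["flagged"] = max(c.get("flagged", 0), int(hit))

def flag_overlaps(cases):
    """Ascending pass, descending pass, then restore the original order."""
    cases.sort(key=lambda c: (c["t_in"], c["t_out"]))
    flag_pass(cases)
    cases.sort(key=lambda c: (-c["t_out"], -c["t_in"]))
    flag_pass(cases)
    cases.sort(key=lambda c: c["order"])         # the @order variable
    return cases
```

Special (x=1) cases never receive a flag, matching the system-missing values in the listing above.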

David Marso wrote
It would be interesting to see what you tried.
Here is what I was trying to direct you toward:
If the next case is X, FLAG the current case if its 'out' is between the 'in' and 'out' of that next case.
If the previous case is X, FLAG the current case if its 'in' is between the 'in' and 'out' of the previous case.
This can be concisely coded as a 'vector product' of the ranges and the 'xflags'.

DATA LIST FREE / ID in out x.
BEGIN DATA
1 1 2 0 2 3 3 0 3 3 4 1 4 4 5 0 5 6 8 0 6 7 9 0 7 7 12 0 8 13 13 1 9 13 14 0 10 14 16 0
END DATA.
CREATE inlead=LEAD(in,1)/outlead=LEAD(out,1)/xlead=LEAD(x,1).
COMPUTE flagged=SUM(RANGE(out,inlead,outlead) * xlead, RANGE(in,LAG(in),LAG(out))*LAG(x)).
LIST.
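The lead/lag "vector product" quoted above checks only the immediate neighbours. A hypothetical Python sketch of the same idea (illustrative names; rows as (id, t_in, t_out, x) tuples):

```python
def rng(v, lo, hi):
    """0/1 version of SPSS RANGE(v, lo, hi)."""
    return int(lo <= v <= hi)

def flag_neighbors(rows):
    """Flag each case if its t_out lies in a special next case's span
    (the LEAD terms) or its t_in lies in a special previous case's span
    (the LAG terms); the x values act as 0/1 multipliers."""
    flags = []
    for i, (_, t_in, t_out, _) in enumerate(rows):
        f = 0
        if i + 1 < len(rows):                       # LEAD(..., 1)
            _, n_in, n_out, n_x = rows[i + 1]
            f += rng(t_out, n_in, n_out) * n_x
        if i > 0:                                   # LAG(..., 1)
            _, p_in, p_out, p_x = rows[i - 1]
            f += rng(t_in, p_in, p_out) * p_x
        flags.append(f)
    return flags
```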

nessie wrote
I have tried to solve this myself, but I can't get it right.

I've used this example data:

DATA LIST FREE / ID in out x.
BEGIN DATA
1 1 2 0 2 3 3 0 3 3 4 1 4 4 5 0 5 6 8 0 6 7 9 0 7 7 12 0 8 13 13 1 9 13 14 0 10 14 16 0
END DATA.

id is case id
in is a fictitious date/time equivalent
out is a fictitious date/time equivalent
x is a marker (0=ordinary case, 1=special case)


To sum up, I want to mark all cases which overlap the in/out-span of the x=1 cases.
I have tried to experiment with the lag and lead functions in SHIFT VALUES (thanks for the tip David), but I haven't come up with a working solution.

Does anyone have any other tips or tricks to help me solve this problem?

Best regards
Lars


> On 5 Dec 2014, 16:07, David Marso [via SPSSX Discussion] <[hidden email]> wrote:
>
> Look up SHIFT VALUES (alternatively use the CREATE command).
> There is also a RANGE function in COMPUTE.
> --
> nessie wrote
> Sorry for my very late reply and for not providing example data when I
> posted my question. I feel bad!
> Thanks to both of you for two different but excellent solutions.
> Especially thanks to you David for taking the time to make and provide an
> illustrative example.
>
> *I have a new and different problem:*
> I have a big number of hospital admissions. Some of them are special and
> drain a lot of resources, and I want to mark all the other admissions
> within the time-frame of these admissions to check if they are influenced
> in a negative way. I modified your data, David, to give an example.
> The special cases are marked with the value 1 for variable x.
>
> I want to make a new variable that marks all the cases that overlap with
> the in/out time of these special cases. Here ID 3 and ID 8 are special
> cases, and I want to mark cases 2, 4 and 9.
>
> Best regards
> Lars
>
> DATA LIST FREE / ID in out x.
> BEGIN DATA
> 1 1 2 0 2 3 3 0 3 3 4 1 4 4 5 0 5 6 8 0 7 7 12 0 8 13 13 1 9 13 14 0
> END DATA.
>
> This gives these data:
>
> ID in out x
>
> 1,00 1,00 2,00 ,00
>
> 2,00 3,00 3,00 ,00
>
> 3,00 3,00 4,00 1,00
>
> 4,00 4,00 5,00 ,00
>
> 5,00 6,00 8,00 ,00
>
> 7,00 7,00 12,00 ,00
>
> 8,00 13,00 13,00 1,00
>
> 9,00 13,00 14,00 ,00
>
>
>
>
>
> 2014-11-24 16:08 GMT+01:00 Maguin, Eugene [via SPSSX Discussion] <
> [hidden email]>:
>
> >  Look at the aggregate function CIN. It returns a count of values between
> > two endpoints.
> >
> > Gene Maguin
> >
> >
> >
> > *From:* SPSSX(r) Discussion [mailto:[hidden email]
> > ] *On Behalf Of *
> > nessie
> > *Sent:* Friday, November 21, 2014 7:30 AM
> > *To:* [hidden email]
> > *Subject:* Re: Finding and marking related cases
> >
> >
> >
> > I have worked my way through the basic aggregate functions and I'm
> > starting to get the hang of it.
> >
> > I have just one basic question - is it possible to count the number of
> > different values a variable contains across the cases you aggregate and
> > present all the different values in the aggregated suffix variables?
> >
> >
> >
> > E.g. I have a patient who has been to several different hospital
> > departments during one aggregated stay. For each case/episode the
> > department variable returns a value. Let's say the patient has 10 different partial
> > stays (cases) from 7 different departments which I would like to aggregate.
> > How can I count the number of different departments the patient has been to
> > and present them in the aggregate variables (which I would like to put at the
> > end of each case/part of stay)?
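What is being asked for — a distinct count plus the distinct values spread into suffix-style slots — can be illustrated with a hypothetical Python sketch (field names are made up; this mirrors what an AGGREGATE plus CASESTOVARS combination produces):

```python
def distinct_departments(records):
    """records: (patient_id, stay, department) case rows.
    Returns, per (patient_id, stay), the distinct-department count and the
    distinct values in order of first appearance (the suffix variables)."""
    groups = {}
    for pid, stay, dept in records:
        groups.setdefault((pid, stay), []).append(dept)
    result = {}
    for key, depts in groups.items():
        uniq = list(dict.fromkeys(depts))          # order-preserving distinct
        result[key] = {"n_dept": len(uniq), "depts": uniq}
    return result
```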
> >
> >
> >
> > Best regards
> >
> > Lars
> >
> >
> >
> > 2014-11-13 15:20 GMT+01:00 Lars E. Næss-Pleym <[hidden email]
> > >:
> >
> >  Sorry about the missing spaces; it probably doesn't matter, but I will
> > try again to make it easier to read.
> >
> >
> >
> > DATA LIST LIST
> > / dia (A2) c_level (A2) case id in out .
> > BEGIN DATA.
> > "A1" "IP" 1 11 13 17
> > "B1" "OP" 2 12 15 15
> > "A1" "OP" 3 11 14 14
> > "A2" "IP" 4 13 15 22
> > "B2" "IP" 5 11 17 22
> > "C1" "IP" 6 12 17 24
> > "B3" "IP" 7 11 27 29
> > "C4" "IP" 8 13 22 29
> > "D1" "IP" 9 12 24 26
> > "D2" "IP" 10 12 28 30
> > END DATA.
> >
> > LIST.
> >
> >
> >
> > SORT CASES BY id in out.
> > IF ($CASENUM EQ 1) episode=1.
> > DO REPEAT #=1 TO 10.
> > +  IF (id=LAG(id,#)) AND RANGE(in,lag(in,#),lag(out,#))
> > episode=lag(episode).
> > END REPEAT.
> > IF MISSING(episode) episode=LAG(episode) + 1.
> >
> > LIST.
> >
> >
> >
> > SORT CASES BY episode(A) in(A) out(A).
> > MATCH FILES
> >   /FILE=*
> >   /BY episode
> >   /FIRST=FirstCase
> >   /LAST=PrimaryLast.
> > DO IF (FirstCase).
> > COMPUTE  CaseNr=1-PrimaryLast.
> > ELSE.
> > COMPUTE  CaseNr=CaseNr+1.
> > END IF.
> > LEAVE  CaseNr.
> > FORMATS  CaseNr (f7).
> > MATCH FILES
> >   /FILE=*
> >   /DROP=PrimaryLast.
> > VALUE LABELS  FirstCase 0 'Duplicate Case' 1 'Primary Case' 2 'Unique case'.
> > VARIABLE LEVEL  FirstCase (ORDINAL) /CaseNr (SCALE).
> > EXECUTE.
> >
> >
> >
> > IF (CaseNr=0) FirstCase=2.
> > EXECUTE.
> >
> > IF (FirstCase=2) CaseNr=1.
> > EXECUTE.
> >
> > 2014-11-13 15:17 GMT+01:00 Lars E. Næss-Pleym <[hidden email]
> > >:
> >
> >
> >
> >  Here is an example syntax. dia=diagnosis, c_level=care_level
> > (IP=In-Patient, OP=Out-Patient), case=unique case_ID, id=person_ID,
> > in=date_in, out=date_out
> >
> >
> >
> >  DATA LIST LIST
> > / dia (A2) c_level (A2) case id in out .
> > BEGIN DATA.
> > "A1" "IP" 1 11 13 17
> > "B1" "OP" 2 12 15 15
> > "A1" "OP" 3 11 14 14
> > "A2" "IP" 4 13 15 22
> > "B2" "IP" 5 11 17 22
> > "C1" "IP" 6 12 17 24
> > "B3" "IP" 7 11 27 29
> > "C4" "IP" 8 13 22 29
> > "D1" "IP" 9 12 24 26
> > "D2" "IP" 10 12 28 30
> > END DATA.
> >
> > LIST.
> >
> > SORT CASES BY id in out.
> > IF ($CASENUM EQ 1) episode=1.
> > DO REPEAT #=1 TO 10.
> > +  IF (id=LAG(id,#)) AND RANGE(in,lag(in,#),lag(out,#))
> > episode=lag(episode).
> > END REPEAT.
> > IF MISSING(episode) episode=LAG(episode) + 1.
> >
> > LIST.
> >
> > SORT CASES BY episode(A) in(A) out(A).
> > MATCH FILES
> >   /FILE=*
> >   /BY episode
> >   /FIRST=FirstCase
> >   /LAST=PrimaryLast.
> > DO IF (FirstCase).
> > COMPUTE  CaseNr=1-PrimaryLast.
> > ELSE.
> > COMPUTE  CaseNr=CaseNr+1.
> > END IF.
> > LEAVE  CaseNr.
> > FORMATS  CaseNr (f7).
> > MATCH FILES
> >   /FILE=*
> >   /DROP=PrimaryLast.
> > VALUE LABELS  FirstCase 0 'Duplicate Case' 1 'Primary Case' 2 'Unique case'.
> > VARIABLE LEVEL  FirstCase (ORDINAL) /CaseNr (SCALE).
> > EXECUTE.
> >
> > IF (CaseNr=0) FirstCase=2.
> > EXECUTE.
> >
> > IF (FirstCase=2) CaseNr=1.
> > EXECUTE.
> >
> > From this I want to keep all the unique cases but also make a new
> > aggregated case for all episodes containing "in" from the first case and
> > "out" from the last, the "id" and "episode" variables, the
> > last "Dia" variable, and the first "c_level" variable.
> >
> >
> >
> > I also want to know e.g. how many different unique diagnoses occur within the
> > same episode, and whether the "c_level" has been the same for all cases
> > within one episode.
> >
> >
> >
> > I will look up AGGREGATE, RENAME VARIABLES, CASESTOVARS and MATCH
> > FILES, thanks for the tip.
> >
> >
> >
> > Best regards
> >
> > Lars
> >
> >
> > 2014-11-13 14:18 GMT+01:00 David Marso [via SPSSX Discussion] <[hidden
> > email]>:
> >
> >
> >
> > Lars,
> >   Please post a more illustrative data set with a before/after of how you
> > want the final result to appear.
> > Meanwhile look up AGGREGATE, RENAME VARIABLES, CASESTOVARS and MATCH FILES
> > commands.
> > HTH, David
> > ==
> >
> >  *nessie wrote*
> >
> > Thanks David
> > That worked!
> >
> > From the original question in this thread, I have made an episode variable
> > linking concurrent admissions.
> > I have then sorted this in chronological order and given it a case number
> > within the episode.
> >
> > For "aggregated" episodes I now want to make a new case with aggregated
> > data for all the same variables but pick some of the variables from the
> > first and some from the last case. I can sort my episode variables
> > according to casenumber and then try to aggregate with all the variables
> > as
> > break variables from either the first or the last case.
> > - Is this the best way to do it?
> > - What if I want to gather some data from one of the episodes in the middle?
> > Is there an easy way to do this?
> >
> > Best regards
> > Lars N.
> >
> >
> > 2014-11-13 10:54 GMT+01:00 David Marso [via SPSSX Discussion] <
> > [hidden email]>:
> >
> >
> > > See AGGREGATE in the FM.  There are FIRST and LAST functions.
> > > --
> > > AGGREGATE OUTFILE * MODE=ADDVARIABLES
> > > /BREAK.../f=FIRST(?)/l=LAST(?)..........
> > >
> > >  nessie wrote
> > > I'm picking up this old thread.
> > > You helped me a lot in finding related cases and marking them with
> > > chronological case numbers. Now I have a new problem!
> > >
> > > I want to make a new aggregated case containing some info from the first
> > > and some from the last related case.
> > > E.g. "In" from the first related and "Out" from the last related case.
> > Can
> > > you help me do this?
> > >
> > > Best regards
> > > Lars N.
> > > 2014-05-23 12:39 GMT+02:00 David Marso [via SPSSX Discussion] <
> > > [hidden email]>:
> >
> > >
> >
> >
> > > > Good catch Rich!
> > > > Here is a version using #scratch variables and a slightly different
> > > > approach.
> > > > DATA LIST LIST / case id in out.
> > > > BEGIN DATA.
> > > > 1, 11, 13, 17
> > > > 2, 12, 14, 15
> > > > 3, 11, 14, 14
> > > > 4, 13, 15, 22
> > > > 5, 11, 17, 22
> > > > 6, 12, 17, 24
> > > > 7, 11, 27, 29
> > > > END DATA.
> > > >
> > > > SORT CASES BY id in out.
> > > > DO IF ($CASENUM EQ 1 OR id NE LAG(id) ).
> > > > +  COMPUTE  episode=SUM(1,LAG(episode)).
> > > > +  COMPUTE  #hiout= out.
> > > > ELSE.
> > > > +  COMPUTE episode=sum(lag(episode),NOT(range(in,lag(in),#hiout ))).
> > > > END IF.
> > > > COMPUTE #hiout= max(out, #hiout ).
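For anyone who wants to sanity-check this #scratch-variable logic outside SPSS, here is a rough Python sketch of the same episode numbering (an illustrative translation only; the function and field names are mine, not from the thread):

```python
def number_episodes(cases):
    """Assign episode numbers the way the #hiout logic above does.

    cases: list of (case, id, in, out) tuples.
    Returns a list of (case, episode) pairs in id/in/out order.
    """
    # SORT CASES BY id in out.
    ordered = sorted(cases, key=lambda c: (c[1], c[2], c[3]))
    ep = 0
    hi = prev_id = None   # hi plays the role of #hiout
    result = []
    for case, pid, in_, out in ordered:
        if pid != prev_id or in_ > hi:
            # new id, or a gap after the highest 'out' seen so far
            ep += 1
            hi = out
        else:
            # overlap: same episode; extend the 'out' frontier
            hi = max(hi, out)
        result.append((case, ep))
        prev_id = pid
    return result
```

Run on the seven-case example data, this reproduces the episode column of the listing below (cases 1, 3, 5 in episode 1; case 7 in episode 2; and so on).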
> > > >
> > > >  Rich Ulrich wrote
> > > > If I see the problem right, logically, you only need to look at one
> > > > previous line.
> > > >
> > > > If it is the same episode, then you want to extend the testable OUT
> > date
> > > > whenever the new line has a higher one.  Since the file is sorted by
> > > > IN, the previous IN is always okay.
> > > >      This looks like it should work -
> > > >
> > > > SORT CASES BY id in out.
> > > >
> > > > DO IF ($CASENUM EQ 1)
> > > > +COMPUTE  episode=1.
> > > >
> > > > +COMPUTE  hiout= out.
> > > > END IF.
> > > >
> > > > DO IF (id=LAG(id)) AND RANGE(in,lag(in),lag(hiout) ) .
> > > > +COMPUTE episode=lag(episode).
> > > >
> > > > +COMPUTE hiout= max(out, lag(hiout) ).
> > > > END IF.
> > > >
> > > > DO IF MISSING(episode) .
> > > > +COMPUTE  episode=LAG(episode) + 1.
> > > >
> > > > +COMPUTE  hiout= out.
> > > > END IF.
> > > >
> > > > * That shows the logic explicitly.  Since temporary vars (#)  keep
> > their
> > > > * values until changed, across cases, the code should work the same
> > > > * if #hiout replaced both hiout and  lag(hiout). Conceivably, that
> > might
> > > > * run faster than using lag(hiout).
> > > >
> > > > --
> > > > Rich Ulrich
> > > >
> > > >
> > > >
> > > > Date: Thu, 22 May 2014 13:43:25 -0700
> > > > From: [hidden email]
> > > > Subject: Re: Finding and marking related cases
> > > > To: [hidden email]
> > >
> > > >
> > > > Thanks a lot.
> > > > Is there any way to find the number of combined cases I shall put in the
> > > > "lookback", or is there a syntax that can regulate this itself? I have a
> > > > couple of hundred thousand single cases in total and no idea how many
> > > > combined cases I will end up with. Is it possible to sort the constructed
> > > > combined case numbers based on in and not id first?
> > > > Best regards, Lars N.
> > > > On 19 May 2014 at 18:22, David Marso [via SPSSX Discussion] <[hidden
> > > > email]> wrote:
> > > >
> > > >         Something like the following?
> > > >
> > > > Note you may need to change the number of cases in the "lookback" (in
> > > this
> > > > case 4).
> > > >
> > > > --
> > > >
> > > > DATA LIST LIST / case id in out.
> > > >
> > > > BEGIN DATA.
> > > >
> > > > 1, 11, 13, 17
> > > >
> > > > 2, 12, 14, 15
> > > >
> > > > 3, 11, 14, 14
> > > >
> > > > 4, 13, 15, 22
> > > >
> > > > 5, 11, 17, 22
> > > >
> > > > 6, 12, 17, 24
> > > >
> > > > 7, 11, 27, 29
> > > >
> > > > END DATA.
> > > >
> > > >
> > > > SORT CASES BY id in out.
> > > >
> > > > IF ($CASENUM EQ 1) episode=1.
> > > >
> > > > DO REPEAT #=1 TO 4.
> > > >
> > > > +  IF (id=LAG(id,#)) AND RANGE(in,lag(in,#),lag(out,#))
> > > > episode=lag(episode).
> > > >
> > > > END REPEAT.
> > > >
> > > > IF MISSING(episode) episode=LAG(episode) + 1.
> > > >
> > > >
> > > > LIST.
> > > >
> > > >
> > > >
> > > >
> > > >     case       id       in      out  episode
> > > >
> > > >
> > > >
> > > >     1.00    11.00    13.00    17.00     1.00
> > > >
> > > >     3.00    11.00    14.00    14.00     1.00
> > > >
> > > >     5.00    11.00    17.00    22.00     1.00
> > > >
> > > >     7.00    11.00    27.00    29.00     2.00
> > > >
> > > >     2.00    12.00    14.00    15.00     3.00
> > > >
> > > >     6.00    12.00    17.00    24.00     4.00
> > > >
> > > >     4.00    13.00    15.00    22.00     5.00
> > > >
> > > >
> > > >
> > > >
> > > >
> > > > Number of cases read:  7    Number of cases listed:  7
> > > >
> > > >
> > > >
> > > >
> > > >                                 Please reply to the list and not to my
> > > > personal email.
> > > >
> > > > Those desiring my consulting or training services please feel free to
> > > > email me.
> > > >
> > > > ---
> > > >
> > > > "Nolite dare sanctum canibus neque mittatis margaritas vestras ante
> > > porcos
> > > > ne forte conculcent eas pedibus suis."
> > > >
> > > > Cum es damnatorum possederunt porcos iens ut salire off sanguinum
> > cliff
> > > in
> > > > abyssum?"
> > > >
> > > >
> > > >
> > > >
> > > >
> > > >
> > > >
> > > > View this message in context: Re: Finding and marking related cases
> > > >
> > > > Sent from the SPSSX Discussion mailing list archive at Nabble.com.
> > > >
> > > >
> > > >
> > >
> > >
> >
> >
> > ===================== To manage your subscription to SPSSX-L, send a
> > message to [hidden email] (not to SPSSX-L),
> > with no body text except the command. To leave the list, send the command
> > SIGNOFF SPSSX-L. For a list of commands to manage subscriptions, send the
> > command INFO REFCARD.
> >
>
>
Reply | Threaded
Open this post in threaded view
|

Re: Finding and marking related cases

Bruce Weaver
Administrator
Ahem!  Please disregard the Rubish* method I posted earlier.  David's solution is far better.  

* Re the meaning of "Rubish", please see http://www.rubegoldberg.com/.  ;-)


David Marso wrote
<Addendum:>
Perhaps for logical clarity substitute
+  COMPUTE flagged=MAX(flagged,RANGE(#in,in,out) | RANGE(#out,in,out)).
rather than the existing SUM function in the MACRO.
That way it returns 0/1 rather than {0:4}
</Addendum>

Here is a conceptually simpler approach, which has the advantage of also flagging cases where there is more than a single qualifying overlap.  If this seems mysterious, then look up SCRATCH variables in the FM and learn to love them. The macro (DEFINE !ENDDEFINE) is simply to avoid redundant code.

DATA LIST FREE / ID in out x.
BEGIN DATA
1 1 2 0  2 3 3 0  2.1 3 3.5 0  3 3 4 1  4 4 5 0  5 6 8 0  6 7 9 0  7 7 12 0
8 13 13 1  8.1 13 13.1 0  9 13 13.2 0  9.1 13 13.3 0  9.2 13 13.5 0  9.3 13 14 0  10 14 16 0
END DATA.

DEFINE flag()
DO IF x.
+  COMPUTE #in = in.
+  COMPUTE #out=out.
ELSE.
+  COMPUTE flagged=SUM(flagged,RANGE(#in,in,out),RANGE(#out,in,out)).
END IF.
!ENDDEFINE.

COMPUTE @order=$CASENUM.
SORT CASES BY in (A) out (A).
flag.
SORT CASES BY out (D) in (D).
flag.
SORT CASES BY @order.
LIST.
 ID       in      out        x   @order  flagged
 
    1.00     1.00     2.00      .00     1.00      .00
    2.00     3.00     3.00      .00     2.00     1.00
    2.00     3.00     3.50      .00     3.00     1.00
    3.00     3.00     4.00     1.00     4.00      .
    4.00     4.00     5.00      .00     5.00     1.00
    5.00     6.00     8.00      .00     6.00      .00
    6.00     7.00     9.00      .00     7.00      .00
    7.00     7.00    12.00      .00     8.00      .00
    8.00    13.00    13.00     1.00     9.00      .
    8.00    13.00    13.10      .00    10.00     2.00
    9.00    13.00    13.20      .00    11.00     2.00
    9.00    13.00    13.30      .00    12.00     2.00
    9.00    13.00    13.50      .00    13.00     2.00
    9.00    13.00    14.00      .00    14.00     2.00
   10.00    14.00    16.00      .00    15.00      .00
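The two-pass scratch-variable logic in the macro can also be sketched in Python (a hypothetical translation; the record layout and names are mine, and it uses the MAX variant from the addendum, so flagged comes out 0/1 rather than the SUM counts shown in the listing; special x=1 cases keep flagged=None, mirroring SPSS system-missing):

```python
def flag_pass(ordered):
    # Walk the cases in the given order, remembering the most recent
    # special case's span in sp_in/sp_out (the role of #in / #out).
    sp_in = sp_out = None
    for c in ordered:
        if c["x"]:
            sp_in, sp_out = c["in"], c["out"]
        elif sp_in is not None:
            # RANGE(#in, in, out) | RANGE(#out, in, out)
            hit = c["in"] <= sp_in <= c["out"] or c["in"] <= sp_out <= c["out"]
            c["flagged"] = max(c["flagged"], int(hit))

def flag_overlaps(cases):
    for c in cases:
        c["flagged"] = None if c["x"] else 0
    flag_pass(sorted(cases, key=lambda c: (c["in"], c["out"])))    # ascending pass
    flag_pass(sorted(cases, key=lambda c: (-c["out"], -c["in"])))  # descending pass
    return cases
```

The two opposite sort orders matter: the forward pass catches cases that start at or after a special case, and the backward pass catches cases that end at or before one.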

David Marso wrote
It would be interesting to see what you tried.
Here is what I was trying to direct you toward:
If next case is X then
FLAG: If the 'out' of current case is between the 'in' and 'out' of this next case.
If previous case is X
FLAG if the 'in' of current case is between 'in' and 'out' of previous case.
This can be concisely coded as a 'vector product' of the ranges and the 'xflags'.

DATA LIST FREE / ID in out x.
BEGIN DATA
1 1 2 0 2 3 3 0 3 3 4 1 4 4 5 0 5 6 8 0 6 7 9 0 7 7 12 0 8 13 13 1 9 13 14 0 10 14 16 0
END DATA.
CREATE inlead=LEAD(in,1)/outlead=LEAD(out,1)/xlead=LEAD(x,1).
COMPUTE flagged=SUM(RANGE(out,inlead,outlead) * xlead, RANGE(in,LAG(in),LAG(out))*LAG(x)).
LIST.
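The single-neighbor version can likewise be sketched in Python (again a hypothetical translation; it inspects only the immediately adjacent rows, as the LEAD/LAG expressions do; SPSS's SUM ignores the missing LEAD on the last case, which the sketch treats as 0, equivalent here):

```python
def flag_neighbors(rows):
    # rows: list of (id, in, out, x), in file order.
    # flagged = RANGE(out, in_next, out_next) * x_next
    #         + RANGE(in,  in_prev, out_prev) * x_prev
    n = len(rows)
    flags = []
    for i, (_, in_, out, _) in enumerate(rows):
        f = 0
        if i + 1 < n:                       # LEAD(...,1)
            _, in_n, out_n, x_n = rows[i + 1]
            f += int(in_n <= out <= out_n) * x_n
        if i > 0:                           # LAG(...,1)
            _, in_p, out_p, x_p = rows[i - 1]
            f += int(in_p <= in_ <= out_p) * x_p
        flags.append(f)
    return flags
```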

nessie wrote
I have tried to solve this myself, but I can't get it right.

I've used this example data:

DATA LIST FREE / ID in out x.
BEGIN DATA
1 1 2 0 2 3 3 0 3 3 4 1 4 4 5 0 5 6 8 0 6 7 9 0 7 7 12 0 8 13 13 1 9 13 14 0 10 14 16 0
END DATA.

id is case id
in is a fictive date/time equivalent
out is a fictive date/time equivalent
x is a marker (0=ordinary case, 1=special case)


To sum up, I want to mark all cases that overlap the in/out span of the x=1 cases.
I have tried to experiment with the lag and lead functions in SHIFT VALUES (thanks for the tip, David), but I haven't come up with a working solution.

Does anyone have any other tips or tricks to help me solve this problem?

Best regards
Lars


> On 5 December 2014 at 16:07, David Marso [via SPSSX Discussion] <[hidden email]> wrote:
>
> Look up SHIFT VALUES (alternatively use the CREATE command).
> There is also a RANGE function in COMPUTE.
> --
> nessie wrote
> Sorry for my very late reply and for not providing example data when I
> posted my question. I feel bad!
> Thanks to both of you for two different but excellent solutions.
> Especially thanks to you, David, for taking the time to make and provide an
> illustrative example.
>
> *I have a new and different problem:*
> I have a large number of hospital admissions. Some of them are special and
> drain a lot of resources, and I want to mark all the other admissions
> within the time frame of these admissions to check if they are influenced
> in a negative way. I modified your data, David, to give an example.
> The special cases are marked with the value 1 for variable x.
>
> I want to make a new variable that marks all the cases that overlap with
> the in/out time of these special cases. Here ID 3 and ID 8 are special
> cases, and I want to mark cases 2, 4 and 9.
>
> Best regards
> Lars
>
> DATA LIST FREE / ID in out x.
> BEGIN DATA
> 1 1 2 0 2 3 3 0 3 3 4 1 4 4 5 0 5 6 8 0 7 7 12 0 8 13 13 1 9 13 14 0
> END DATA.
>
> This gives these data:
>
> ID    in     out    x
> 1,00  1,00   2,00   ,00
> 2,00  3,00   3,00   ,00
> 3,00  3,00   4,00   1,00
> 4,00  4,00   5,00   ,00
> 5,00  6,00   8,00   ,00
> 7,00  7,00   12,00  ,00
> 8,00  13,00  13,00  1,00
> 9,00  13,00  14,00  ,00
>
>
>
>
>
> 2014-11-24 16:08 GMT+01:00 Maguin, Eugene [via SPSSX Discussion] <
> [hidden email]>:
>
> >  Look at the aggregate function CIN. It returns a count of values between
> > two endpoints.
> >
> > Gene Maguin
> >
> >
> >
> > *From:* SPSSX(r) Discussion [mailto:[hidden email]] *On Behalf Of *
> > nessie
> > *Sent:* Friday, November 21, 2014 7:30 AM
> > *To:* [hidden email]
> > *Subject:* Re: Finding and marking related cases
> >
> >
> >
> > I have worked my way through the basic aggregate functions and I'm
> > starting to get the hang of it.
> >
> > I have just one basic question - is it possible to count the number of
> > different values a variable contains across the cases you aggregate and
> > present all the different values in the aggregated suffix variables?
> >
> >
> >
> > E.g. I have a patient who has been to several different hospital
> > departments during one aggregated stay. For each case/episode the
> > department variable returns a value. Let's say the patient has 10 different partial
> > stays (cases) from 7 different departments which I would like to aggregate.
> > How can I count the number of different departments the patient has been to
> > and present them in the aggregate variables (which I would like to put at the
> > end of each case/part of stay)?
> >
> >
> >
> > Best regards
> >
> > Lars
> >
> >
> >
> > 2014-11-13 15:20 GMT+01:00 Lars E. Næss-Pleym <[hidden email]>:
> >
> >  Sorry about the missing spaces; it probably doesn't matter, but I'll try
> > again to make it easier to read.
> >
> >
> >
> > DATA LIST LIST
> > / dia (A2) c_level (A2) case id in out .
> > BEGIN DATA.
> > "A1" "IP" 1 11 13 17
> > "B1" "OP" 2 12 15 15
> > "A1" "OP" 3 11 14 14
> > "A2" "IP" 4 13 15 22
> > "B2" "IP" 5 11 17 22
> > "C1" "IP" 6 12 17 24
> > "B3" "IP" 7 11 27 29
> > "C4" "IP" 8 13 22 29
> > "D1" "IP" 9 12 24 26
> > "D2" "IP" 10 12 28 30
> > END DATA.
> >
> > LIST.
> >
> >
> >
> > SORT CASES BY id in out.
> > IF ($CASENUM EQ 1) episode=1.
> > DO REPEAT #=1 TO 10.
> > +  IF (id=LAG(id,#)) AND RANGE(in,lag(in,#),lag(out,#))
> > episode=lag(episode).
> > END REPEAT.
> > IF MISSING(episode) episode=LAG(episode) + 1.
> >
> > LIST.
> >
> >
> >
> > SORT CASES BY episode(A) in(A) out(A).
> > MATCH FILES
> >   /FILE=*
> >   /BY episode
> >   /FIRST=FirstCase
> >   /LAST=PrimaryLast.
> > DO IF (FirstCase).
> > COMPUTE  CaseNr=1-PrimaryLast.
> > ELSE.
> > COMPUTE  CaseNr=CaseNr+1.
> > END IF.
> > LEAVE  CaseNr.
> > FORMATS  CaseNr (f7).
> > MATCH FILES
> >   /FILE=*
> >   /DROP=PrimaryLast.
> > VALUE LABELS  FirstCase 0 'Duplicate Case' 1 'Primary Case' 2 'Unique case'.
> > VARIABLE LEVEL  FirstCase (ORDINAL) /CaseNr (SCALE).
> > EXECUTE.
> >
> >
> >
> > IF (CaseNr=0) FirstCase=2.
> > EXECUTE.
> >
> > IF (FirstCase=2) CaseNr=1.
> > EXECUTE.
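The MATCH FILES /FIRST /LAST bookkeeping in the syntax above can be sketched in Python (a hypothetical translation, mine rather than the thread's; it folds in the two trailing IF statements, so a single-case episode comes out as FirstCase=2 with CaseNr=1):

```python
def mark_cases(episodes):
    """episodes: one episode key per case, pre-sorted by episode.

    Returns (FirstCase, CaseNr) per case:
    FirstCase 2 = unique case, 1 = primary case, 0 = duplicate case.
    """
    out = []
    for i, ep in enumerate(episodes):
        first = i == 0 or episodes[i - 1] != ep            # /FIRST=FirstCase
        last = i == len(episodes) - 1 or episodes[i + 1] != ep  # /LAST=PrimaryLast
        if first and last:
            out.append((2, 1))                 # only case in its episode
        elif first:
            out.append((1, 1))                 # first of several
        else:
            out.append((0, out[-1][1] + 1))    # running number within episode
    return out
```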
> >
> > 2014-11-13 15:17 GMT+01:00 Lars E. Næss-Pleym <[hidden email]>:
> >
> >
> >
> >  Here is an example syntax. dia=diagnosis, c_level=care_level
> > (IP=In-Patient, OP=Out-Patient), case=unique case_ID, id=person_ID,
> > in=date_in, out=date_out
> >
> >
> >
> >  DATA LIST LIST
> > / dia (A2) c_level (A2) case id in out .
> > BEGIN DATA.
> > "A1" "IP" 1 11 13 17
> > "B1" "OP" 2 12 15 15
> > "A1" "OP" 3 11 14 14
> > "A2" "IP" 4 13 15 22
> > "B2" "IP" 5 11 17 22
> > "C1" "IP" 6 12 17 24
> > "B3" "IP" 7 11 27 29
> > "C4" "IP" 8 13 22 29
> > "D1" "IP" 9 12 24 26
> > "D2" "IP" 10 12 28 30
> > END DATA.
> >
> > LIST.
> >
> > SORT CASES BY id in out.
> > IF ($CASENUM EQ 1) episode=1.
> > DO REPEAT #=1 TO 10.
> > +  IF (id=LAG(id,#)) AND RANGE(in,lag(in,#),lag(out,#))
> > episode=lag(episode).
> > END REPEAT.
> > IF MISSING(episode) episode=LAG(episode) + 1.
> >
> > LIST.
> >
> > SORT CASES BY episode(A) in(A) out(A).
> > MATCH FILES
> >   /FILE=*
> >   /BY episode
> >   /FIRST=FirstCase
> >   /LAST=PrimaryLast.
> > DO IF (FirstCase).
> > COMPUTE  CaseNr=1-PrimaryLast.
> > ELSE.
> > COMPUTE  CaseNr=CaseNr+1.
> > END IF.
> > LEAVE  CaseNr.
> > FORMATS  CaseNr (f7).
> > MATCH FILES
> >   /FILE=*
> >   /DROP=PrimaryLast.
> > VALUE LABELS  FirstCase 0 'Duplicate Case' 1 'Primary Case' 2 'Unique case'.
> > VARIABLE LEVEL  FirstCase (ORDINAL) /CaseNr (SCALE).
> > EXECUTE.
> >
> > IF (CaseNr=0) FirstCase=2.
> > EXECUTE.
> >
> > IF (FirstCase=2) CaseNr=1.
> > EXECUTE.
> >
> > From this I want to keep all the unique cases but also make a new
> > aggregated case for each episode, containing "in" from the first case and
> > "out" from the last, the "id" and "episode" variables, the
> > last "dia" variable, and the first "c_level" variable.
> >
> >
> >
> > I also want to know e.g. how many unique diagnoses occur within the
> > same episode, and whether "c_level" has been the same for all cases
> > within one episode.
> >
> >
> >
> > I will look up AGGREGATE, RENAME VARIABLES, CASESTOVARS and MATCH
> > FILES, thanks for the tip.
> >
> >
> >
> > Best regards
> >
> > Lars
> >
> >
> > 2014-11-13 14:18 GMT+01:00 David Marso [via SPSSX Discussion] <[hidden
> > email]>:
> >
> >
> >
> > Lars,
> >   Please post a more illustrative data set with a before/after of how you
> > want the final result to appear.
> > Meanwhile look up AGGREGATE, RENAME VARIABLES, CASESTOVARS and MATCH FILES
> > commands.
> > HTH, David
> > ==
> >
> >  *nessie wrote*
> >
> > Thanks David
> > That worked!
> >
> > From the original question in this thread, I have made an episode variable
> > linking concurrent admissions.
> > I have then sorted this in chronological order and given it a case number
> > within the episode.
> >
> > For "aggregated" episodes I now want to make a new case with aggregated
> > data for all the same variables but pick some of the variables from the
> > first and some from the last case. I can sort my episode variables
> > according to casenumber and then try to aggregate with all the variables
> > as
> > break variables from either the first or the last case.
> > - Is this the best way to do it?
> > - What if I want to gather some data from one of the episodes in the middle?
> > Is there an easy way to do this?
> >
> > Best regards
> > Lars N.
> >
> >
> > 2014-11-13 10:54 GMT+01:00 David Marso [via SPSSX Discussion] <
> > [hidden email]>:
> >
> >
> > > See AGGREGATE in the FM.  There are FIRST and LAST functions.
> > > --
> > > AGGREGATE OUTFILE * MODE=ADDVARIABLES
> > > /BREAK.../f=FIRST(?)/l=LAST(?)..........
> > >
> > >  nessie wrote
> > > I'm picking up this old thread.
> > > You helped me a lot in finding related cases and marking them with a
> > > chronological case number. Now I have a new problem!
> > >
> > > I want to make a new aggregated case containing some info from the first
> > > and some from the last related case.
> > > E.g. "In" from the first related and "Out" from the last related case.
> > Can
> > > you help me do this?
> > >
> > > Best regards
> > > Lars N.
> > > 2014-05-23 12:39 GMT+02:00 David Marso [via SPSSX Discussion] <
> > > [hidden email]>:
> >
> > >
> >
> >
> > > > Good catch Rich!
> > > > Here is a version using #scratch variables and a slightly different
> > > > approach.
> > > > DATA LIST LIST / case id in out.
> > > > BEGIN DATA.
> > > > 1, 11, 13, 17
> > > > 2, 12, 14, 15
> > > > 3, 11, 14, 14
> > > > 4, 13, 15, 22
> > > > 5, 11, 17, 22
> > > > 6, 12, 17, 24
> > > > 7, 11, 27, 29
> > > > END DATA.
> > > >
> > > > SORT CASES BY id in out.
> > > > DO IF ($CASENUM EQ 1 OR id NE LAG(id) ).
> > > > +  COMPUTE  episode=SUM(1,LAG(episode)).
> > > > +  COMPUTE  #hiout= out.
> > > > ELSE.
> > > > +  COMPUTE episode=sum(lag(episode),NOT(range(in,lag(in),#hiout ))).
> > > > END IF.
> > > > COMPUTE #hiout= max(out, #hiout ).
> > > >
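The #hiout episode logic above can be checked outside SPSS. Below is a hypothetical pure-Python re-implementation of the same rule (with the data sorted by id, in, out, a new episode starts when the id changes or when "in" exceeds the running maximum "out"). The function name and tuple layout are illustrative, not from the thread.

```python
def assign_episodes(rows):
    """rows: (case, id, in, out) tuples.
    Returns {case: episode}, numbering episodes the way the
    #hiout scratch-variable syntax does."""
    episodes = {}
    episode = 0
    prev_id = None
    hiout = None
    # Same ordering as SORT CASES BY id in out.
    for case, pid, start, end in sorted(rows, key=lambda r: (r[1], r[2], r[3])):
        if pid != prev_id or start > hiout:
            episode += 1      # new patient, or a gap past the running max "out"
            hiout = end
        else:
            hiout = max(hiout, end)
        episodes[case] = episode
        prev_id = pid
    return episodes
```

Running it on the seven-case example data reproduces the episode column of David's listing (cases 1, 3 and 5 in episode 1; case 7 in episode 2; and so on).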
> > > >  Rich Ulrich wrote
> > > > If I see the problem right, logically, you only need to look at one
> > > > previous line.
> > > >
> > > > If it is the same episode, then you want to extend the testable OUT
> > date
> > > > whenever the new line has a higher one.  Since the file is sorted by
> > > > IN, the previous IN is always okay.
> > > >      This looks like it should work -
> > > >
> > > > SORT CASES BY id in out.
> > > >
> > > > DO IF ($CASENUM EQ 1)
> > > > +COMPUTE  episode=1.
> > > >
> > > > +COMPUTE  hiout= out.
> > > > END IF.
> > > >
> > > > DO IF (id=LAG(id)) AND RANGE(in,lag(in),lag(hiout) ) .
> > > > +COMPUTE episode=lag(episode).
> > > >
> > > > +COMPUTE hiout= max(out, lag(hiout) ).
> > > > END IF.
> > > >
> > > > DO IF MISSING(episode) .
> > > > +COMPUTE  episode=LAG(episode) + 1.
> > > >
> > > > +COMPUTE  hiout= out.
> > > > END IF.
> > > >
> > > > * That shows the logic explicitly.  Since temporary vars (#)  keep
> > their
> > > > * values until changed, across cases, the code should work the same
> > > > * if #hiout replaced both hiout and  lag(hiout). Conceivably, that
> > might
> > > > * run faster than using lag(hiout).
> > > >
> > > > --
> > > > Rich Ulrich
> > > >
> > > >
> > > >
> > > > Date: Thu, 22 May 2014 13:43:25 -0700
> > > > From: [hidden email]
> > > > Subject: Re: Finding and marking related cases
> > > > To: [hidden email]
> > >
> > > >
> > > > Thanks a lot.
> > > > Is there any way to find the number of combined cases I shall put in the
> > > > "lookback", or is there a syntax that can regulate this itself? I have a
> > > > couple of hundred thousand single cases in total and no idea how many
> > > > combined cases I will end up with. Is it possible to sort the constructed
> > > > combined case numbers based on in and not id first?
> > > > Best regards, Lars N.
> > > > On 19 May 2014 at 18:22, David Marso [via SPSSX Discussion]
> > > > <[hidden email]> wrote:
> > > >
> > > >         Something like the following?
> > > >
> > > > Note you may need to change the number of cases in the "lookback" (in
> > > this
> > > > case 4).
> > > >
> > > > --
> > > >
> > > > DATA LIST LIST / case id in out.
> > > >
> > > > BEGIN DATA.
> > > >
> > > > 1, 11, 13, 17
> > > >
> > > > 2, 12, 14, 15
> > > >
> > > > 3, 11, 14, 14
> > > >
> > > > 4, 13, 15, 22
> > > >
> > > > 5, 11, 17, 22
> > > >
> > > > 6, 12, 17, 24
> > > >
> > > > 7, 11, 27, 29
> > > >
> > > > END DATA.
> > > >
> > > >
> > > > SORT CASES BY id in out.
> > > >
> > > > IF ($CASENUM EQ 1) episode=1.
> > > >
> > > > DO REPEAT #=1 TO 4.
> > > >
> > > > +  IF (id=LAG(id,#)) AND RANGE(in,lag(in,#),lag(out,#))
> > > > episode=lag(episode).
> > > >
> > > > END REPEAT.
> > > >
> > > > IF MISSING(episode) episode=LAG(episode) + 1.
> > > >
> > > >
> > > > LIST.
> > > >
> > > >
> > > >
> > > >
> > > >     case       id       in      out  episode
> > > >
> > > >
> > > >
> > > >     1.00    11.00    13.00    17.00     1.00
> > > >
> > > >     3.00    11.00    14.00    14.00     1.00
> > > >
> > > >     5.00    11.00    17.00    22.00     1.00
> > > >
> > > >     7.00    11.00    27.00    29.00     2.00
> > > >
> > > >     2.00    12.00    14.00    15.00     3.00
> > > >
> > > >     6.00    12.00    17.00    24.00     4.00
> > > >
> > > >     4.00    13.00    15.00    22.00     5.00
> > > >
> > > >
> > > >
> > > >
> > > >
> > > > Number of cases read:  7    Number of cases listed:  7
> > > >
> > > >
> > > >
> > > >
> > > >                                 Please reply to the list and not to my
> > > > personal email.
> > > >
> > > > Those desiring my consulting or training services please feel free to
> > > > email me.
> > > >
> > > > ---
> > > >
> > > > "Nolite dare sanctum canibus neque mittatis margaritas vestras ante
> > > porcos
> > > > ne forte conculcent eas pedibus suis."
> > > >
> > > > Cum es damnatorum possederunt porcos iens ut salire off sanguinum
> > cliff
> > > in
> > > > abyssum?"
> > > >
> > > >
> > > >
> > > >
> > > >
> > > >
> > > >
> > > > View this message in context: Re: Finding and marking related cases
> > > >
> > > > Sent from the SPSSX Discussion mailing list archive at Nabble.com.
> > > >
> > > >
> > > >
> > >
> > >
> >
> >
> >
> >
> >
> >
> >
> >
> > ===================== To manage your subscription to SPSSX-L, send a
> > message to [hidden email] (not to SPSSX-L), with no body text except the
> > command. To leave the list, send the command SIGNOFF SPSSX-L. For a list
> > of commands to manage subscriptions, send the command INFO REFCARD.
> >
--
Bruce Weaver
bweaver@lakeheadu.ca
http://sites.google.com/a/lakeheadu.ca/bweaver/

"When all else fails, RTFM."

PLEASE NOTE THE FOLLOWING: 
1. My Hotmail account is not monitored regularly. To send me an e-mail, please use the address shown above.
2. The SPSSX Discussion forum on Nabble is no longer linked to the SPSSX-L listserv administered by UGA (https://listserv.uga.edu/).
Reply | Threaded
Open this post in threaded view
|

Re: Finding and marking related cases

David Marso
Administrator
Doh:
Also disregard my first draft two-liner using CREATE and complicated logic involving LEAD and LAG.
My first version only covers a single overlap. In principle it is clean for one, but very messy if one has several possible overlapping cases.
The #scratch variable version is far more general and elegant (since it covers all possible overlaps), and it doesn't require much cogitation to understand.

Included here in its latest incarnation.
----
DEFINE flag(!POS !CMDEND !DEFAULT('A D') )

COMPUTE @order=$CASENUM.
!DO !I !IN (!1)
SORT CASES BY in (!I) out (!I).
DO IF x.
+  COMPUTE #in = in.
+  COMPUTE #out=out.
ELSE.
+  COMPUTE flagged=MAX(flagged,RANGE(#in,in,out) | RANGE(#out,in,out)).
END IF.
!DOEND.
SORT CASES BY @order.
DELETE VARIABLES @order.
!ENDDEFINE.

DATA LIST FREE / ID in out x.
BEGIN DATA
1 1 2 0  2 3 3 0  2.1 3 3.5 0  3 3 4 1  4 4 5 0  5 6 8 0  6 7 9 0  7 7 12 0
8 13 14 1  8.1 13 13.1 0  9 13 13.2 0  9.1 13 13.3 0  9.2 13 13.5 0  9.3 13 14 0  10 15 16 0
END DATA.

flag.
LIST.
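For readers who want to sanity-check what the macro computes, here is a hypothetical Python sketch of the underlying idea: an ordinary case is flagged when its [in, out] interval overlaps the interval of any special (x = 1) case. It returns a binary 0/1 flag (like the MAX variant). Note this is a full interval-overlap test, so it also catches a case lying entirely inside a special interval, which the endpoint-based RANGE checks would not; the names are illustrative, not from the thread.

```python
def flag_overlaps(cases):
    """cases: (id, in, out, x) tuples.
    Returns {id: flag}: 1 if an ordinary case overlaps any special case,
    0 otherwise, None for the special cases themselves (SPSS leaves them missing)."""
    specials = [(start, end) for _, start, end, x in cases if x == 1]
    flags = {}
    for cid, start, end, x in cases:
        if x == 1:
            flags[cid] = None
        else:
            # Closed-interval overlap test, matching RANGE's inclusive endpoints.
            flags[cid] = int(any(start <= s_end and end >= s_start
                                 for s_start, s_end in specials))
    return flags
```

On the data above this marks IDs 2, 2.1, 4 and the 8.1-9.3 block, and leaves 1, 5, 6, 7 and 10 at 0.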


Bruce Weaver wrote
Ahem!  Please disregard the Rubish* method I posted earlier.  David's solution is far better.  

* Re the meaning of "Rubish", please see http://www.rubegoldberg.com/.  ;-)


David Marso wrote
<Addendum:>
Perhaps for logical clarity substitute
+  COMPUTE flagged=MAX(flagged,RANGE(#in,in,out) | RANGE(#out,in,out)).
rather than the existing SUM function in the MACRO.
That way it returns 0/1 rather than {0:4}
</Addendum>

Here is a conceptually simpler approach which has the advantage of also flagging cases where there is more than a single qualifying overlap.  If this seems mysterious then look up SCRATCH variables in the FM and learn to love them. The macro (DEFINE !ENDDEFINE) is simply to avoid redundant code.

DATA LIST FREE / ID in out x.
BEGIN DATA
1 1 2 0  2 3 3 0  2.1 3 3.5 0  3 3 4 1  4 4 5 0  5 6 8 0  6 7 9 0  7 7 12 0
8 13 13 1  8.1 13 13.1 0  9 13 13.2 0  9.1 13 13.3 0  9.2 13 13.5 0  9.3 13 14 0  10 14 16 0
END DATA.

DEFINE flag()
DO IF x.
+  COMPUTE #in = in.
+  COMPUTE #out=out.
ELSE.
+  COMPUTE flagged=SUM(flagged,RANGE(#in,in,out),RANGE(#out,in,out)).
END IF.
!ENDDEFINE.

COMPUTE @order=$CASENUM.
SORT CASES BY in (A) out (A).
flag.
SORT CASES BY out (D) in (D).
flag.
SORT CASES BY @order.
LIST.
 ID       in      out        x   @order  flagged
 
    1.00     1.00     2.00      .00     1.00      .00
    2.00     3.00     3.00      .00     2.00     1.00
    2.00     3.00     3.50      .00     3.00     1.00
    3.00     3.00     4.00     1.00     4.00      .
    4.00     4.00     5.00      .00     5.00     1.00
    5.00     6.00     8.00      .00     6.00      .00
    6.00     7.00     9.00      .00     7.00      .00
    7.00     7.00    12.00      .00     8.00      .00
    8.00    13.00    13.00     1.00     9.00      .
    8.00    13.00    13.10      .00    10.00     2.00
    9.00    13.00    13.20      .00    11.00     2.00
    9.00    13.00    13.30      .00    12.00     2.00
    9.00    13.00    13.50      .00    13.00     2.00
    9.00    13.00    14.00      .00    14.00     2.00
   10.00    14.00    16.00      .00    15.00      .00

David Marso wrote
It would be interesting to see what you tried.
Here is what I was trying to direct you toward:
If next case is X then
FLAG: If the 'out' of current case is between the 'in' and 'out' of this next case.
If previous case is X
FLAG if the 'in' of current case is between 'in' and 'out' of previous case.
This can be concisely coded as a 'vector product' of the ranges and the 'xflags'.

DATA LIST FREE / ID in out x.
BEGIN DATA
1 1 2 0 2 3 3 0 3 3 4 1 4 4 5 0 5 6 8 0 6 7 9 0 7 7 12 0 8 13 13 1 9 13 14 0 10 14 16 0
END DATA.
CREATE inlead=LEAD(in,1)/outlead=LEAD(out,1)/xlead=LEAD(x,1).
COMPUTE flagged=SUM(RANGE(out,inlead,outlead) * xlead, RANGE(in,LAG(in),LAG(out))*LAG(x)).
LIST.

nessie wrote
I have tried to solve this myself, but I can't get it right.

I've used this example data:

DATA LIST FREE / ID in out x.
BEGIN DATA
1 1 2 0 2 3 3 0 3 3 4 1 4 4 5 0 5 6 8 0 6 7 9 0 7 7 12 0 8 13 13 1 9 13 14 0 10 14 16 0
END DATA.

id is case id
in is a fictive date/time equivalent
out is a fictive date/time equivalent
x is a marker (0=ordinary case, 1=special case)


To sum up, I want to mark all cases which overlap the in/out span of the x=1 cases.
I have tried to experiment with the lag and lead functions in SHIFT VALUES (thanks for the tip, David), but I haven't come up with a working solution.

Does anyone have any other tips or tricks to help me solve this problem?

Best regards
Lars


> On 5 Dec 2014 at 16:07, David Marso [via SPSSX Discussion] <[hidden email]> wrote:
>
> Look up SHIFT VALUES (alternatively use the CREATE command).
> There is also a RANGE function in COMPUTE.
> --
> nessie wrote
> Sorry for my very late reply and for not providing example data when I
> posted my question. I feel bad!
> Thanks to both of you for two different but excellent solutions.
> Especially thanks to you, David, for taking the time to make and provide an
> illustrative example.
>
> *I have a new and different problem:*
> I have a big number of hospital admissions. Some of them are special and
> drain a lot of resources, and I want to mark all the other admissions
> within the time-frame of these admissions to check if they are influenced
> in a negative way. I modified your data, David, to give an example.
> The special cases are marked with the value 1 for variable x.
>
> I want to make a new variable that marks all the cases that overlap with
> the in/out time of these special cases. Here ID 3 and ID 8 are special
> cases, and I want to mark cases 2, 4 and 9.
>
> Best regards
> Lars
>
> DATA LIST FREE / ID in out x.
> BEGIN DATA
> 1 1 2 0 2 3 3 0 3 3 4 1 4 4 5 0 5 6 8 0 7 7 12 0 8 13 13 1 9 13 14 0
> END DATA.
>
> This gives these data:
>
> ID in out x
>
> 1,00 1,00 2,00 ,00
>
> 2,00 3,00 3,00 ,00
>
> 3,00 3,00 4,00 1,00
>
> 4,00 4,00 5,00 ,00
>
> 5,00 6,00 8,00 ,00
>
> 7,00 7,00 12,00 ,00
>
> 8,00 13,00 13,00 1,00
>
> 9,00 13,00 14,00 ,00
>
>
>
>
>
> 2014-11-24 16:08 GMT+01:00 Maguin, Eugene [via SPSSX Discussion] <
> [hidden email]>:
>
> >  Look at the aggregate function CIN. It returns a count of values between
> > two endpoints.
> >
> > Gene Maguin
> >
> >
> >
> > *From:* SPSSX(r) Discussion [mailto:[hidden email]] *On Behalf Of *
> > nessie
> > *Sent:* Friday, November 21, 2014 7:30 AM
> > *To:* [hidden email]
> > *Subject:* Re: Finding and marking related cases
> >
> >
> >
> > I have worked my way through the basic aggregate functions and I'm
> > starting to get the hang of it.
> >
> > I have just one basic question - is it possible to count the number of
> > different values a variable contains across the cases you aggregate and
> > present all the different values in the aggregated suffix variables?
> >
> >
> >
> > E.g. I have a patient who has been to several different hospital
> > departments during one aggregated stay. For each case/episode the
> > department variable returns a value. Let's say the patient has 10 partial
> > stays (cases) in 7 different departments which I would like to aggregate.
> > How can I count the number of different departments the patient has been to
> > and present them in the aggregate variables (which I would like to put at
> > the end of each case/part of stay)?
> >
> >
> >
> > Best regards
> >
> > Lars
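The distinct-value count being asked about here is easy to express outside SPSS. As a hypothetical cross-check, in Python one can collect the set of department values per aggregated stay (all names here are illustrative, not from the thread):

```python
from collections import defaultdict

def distinct_values(pairs):
    """pairs: (stay_key, department) tuples, one per partial stay.
    Returns {stay_key: sorted list of distinct departments};
    len() of each list is the distinct-department count asked about."""
    groups = defaultdict(set)
    for key, dept in pairs:
        groups[key].add(dept)      # a set keeps only distinct values
    return {key: sorted(depts) for key, depts in groups.items()}
```

For example, three partial stays in departments A, B, A for one aggregated stay yield the list ["A", "B"] with a distinct count of 2.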
> >
> >
> >
> > 2014-11-13 15:20 GMT+01:00 Lars E. Næss-Pleym <[hidden email]>:
> >
> >  Sorry about the missing spaces; it probably doesn't matter, but I'll try
> > again to make it easier to read.
> >
> >
> >
> > DATA LIST LIST
> > / dia (A2) c_level (A2) case id in out .
> > BEGIN DATA.
> > "A1" "IP" 1 11 13 17
> > "B1" "OP" 2 12 15 15
> > "A1" "OP" 3 11 14 14
> > "A2" "IP" 4 13 15 22
> > "B2" "IP" 5 11 17 22
> > "C1" "IP" 6 12 17 24
> > "B3" "IP" 7 11 27 29
> > "C4" "IP" 8 13 22 29
> > "D1" "IP" 9 12 24 26
> > "D2" "IP" 10 12 28 30
> > END DATA.
> >
> > LIST.
> >
> >
> >
> > SORT CASES BY id in out.
> > IF ($CASENUM EQ 1) episode=1.
> > DO REPEAT #=1 TO 10.
> > +  IF (id=LAG(id,#)) AND RANGE(in,lag(in,#),lag(out,#))
> > episode=lag(episode).
> > END REPEAT.
> > IF MISSING(episode) episode=LAG(episode) + 1.
> >
> > LIST.
> >
> >
> >
> > SORT CASES BY episode(A) in(A) out(A).
> > MATCH FILES
> >   /FILE=*
> >   /BY episode
> >   /FIRST=FirstCase
> >   /LAST=PrimaryLast.
> > DO IF (FirstCase).
> > COMPUTE  CaseNr=1-PrimaryLast.
> > ELSE.
> > COMPUTE  CaseNr=CaseNr+1.
> > END IF.
> > LEAVE  CaseNr.
> > FORMATS  CaseNr (f7).
> > MATCH FILES
> >   /FILE=*
> >   /DROP=PrimaryLast.
> > VALUE LABELS  FirstCase 0 'Duplicate Case' 1 'Primary Case' 2'Unique case'.
> > VARIABLE LEVEL  FirstCase (ORDINAL) /CaseNr (SCALE).
> > EXECUTE.
> >
> >
> >
> > IF (CaseNr=0) FirstCase=2.
> > EXECUTE.
> >
> > IF (FirstCase=2) CaseNr=1.
> > EXECUTE.
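The MATCH FILES / LEAVE block above boils down to: within each (already sorted) episode, number the cases 1..n, label the first case "Primary Case" (or "Unique case" when the episode has only one case) and the rest "Duplicate Case". A hypothetical Python sketch of the same bookkeeping, with illustrative names:

```python
def number_within_episodes(episode_keys):
    """episode_keys: one episode key per case, in sorted order.
    Returns a parallel list of (case_nr, label)."""
    result = []
    i = 0
    while i < len(episode_keys):
        j = i
        while j < len(episode_keys) and episode_keys[j] == episode_keys[i]:
            j += 1                      # find the end of this episode's run
        size = j - i
        for k in range(size):
            if k > 0:
                label = "Duplicate Case"
            elif size == 1:
                label = "Unique case"
            else:
                label = "Primary Case"
            result.append((k + 1, label))
        i = j
    return result
```

A three-case episode followed by two one-case episodes comes out as Primary/Duplicate/Duplicate, then Unique, Unique, with case numbers restarting at 1 in each episode.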
> >
> > 2014-11-13 15:17 GMT+01:00 Lars E. Næss-Pleym <[hidden email]>:
> >
> >
> >
> >  Here is an example syntax. dia=diagnosis, c_level=care_level
> > (IP=In-Patient, OP=Out-Patient), case=unique case_ID, id=person_ID,
> > in=date_in, out=date_out
> >
> >
> >
> >  DATA LIST LIST
> > / dia (A2) c_level (A2) case id in out .
> > BEGIN DATA.
> > "A1" "IP" 1 11 13 17
> > "B1" "OP" 2 12 15 15
> > "A1" "OP" 3 11 14 14
> > "A2" "IP" 4 13 15 22
> > "B2" "IP" 5 11 17 22
> > "C1" "IP" 6 12 17 24
> > "B3" "IP" 7 11 27 29
> > "C4" "IP" 8 13 22 29
> > "D1" "IP" 9 12 24 26
> > "D2" "IP" 10 12 28 30
> > END DATA.
> >
> > LIST.
> >
> > SORT CASES BY id in out.
> > IF ($CASENUM EQ 1) episode=1.
> > DO REPEAT #=1 TO 10.
> > +  IF (id=LAG(id,#)) AND RANGE(in,lag(in,#),lag(out,#))
> > episode=lag(episode).
> > END REPEAT.
> > IF MISSING(episode) episode=LAG(episode) + 1.
> >
> > LIST.
> >
> > SORT CASES BY episode(A) in(A) out(A).
> > MATCH FILES
> >   /FILE=*
> >   /BY episode
> >   /FIRST=FirstCase
> >   /LAST=PrimaryLast.
> > DO IF (FirstCase).
> > COMPUTE  CaseNr=1-PrimaryLast.
> > ELSE.
> > COMPUTE  CaseNr=CaseNr+1.
> > END IF.
> > LEAVE  CaseNr.
> > FORMATS  CaseNr (f7).
> > MATCH FILES
> >   /FILE=*
> >   /DROP=PrimaryLast.
> > VALUE LABELS  FirstCase 0 'Duplicate Case' 1 'Primary Case' 2'Unique case'.
> > VARIABLE LEVEL  FirstCase (ORDINAL) /CaseNr (SCALE).
> > EXECUTE.
> >
> > IF (CaseNr=0) FirstCase=2.
> > EXECUTE.
> >
> > IF (FirstCase=2) CaseNr=1.
> > EXECUTE.
> >
> > From this I want to keep all the unique cases but also make a new
> > aggregated case for all episodes containing "in" from the first case and
> > "out" from the last, the "id" and "episode" variables, the
> > last "Dia" variable, and the first "c_level" variable.
> >
> >
> >
> > I also want to know e.g. how many unique diagnoses occur within the
> > same episode and whether the "c_level" has been the same for all cases
> > within one episode.
> >
> >
> >
> > I will look up AGGREGATE, RENAME VARIABLES, CASESTOVARS and MATCH
> > FILES, thanks for the tip.
> >
> >
> >
> > Best regards
> >
> > Lars
> >
> >
> > 2014-11-13 14:18 GMT+01:00 David Marso [via SPSSX Discussion] <[hidden
> > email]>:
> >
> >
> >
> > Lars,
> >   Please post a more illustrative data set with a before/after of how you
> > want the final result to appear.
> > Meanwhile look up AGGREGATE, RENAME VARIABLES, CASESTOVARS and MATCH FILES
> > commands.
> > HTH, David
> > ==
> >
> >  *nessie wrote*
> >
> > Thanks David
> > That worked!
> >
> > From the original question in this thread, I have made an episode variable
> > linking concurrent admissions.
> > I have then sorted this in chronological order and given it a case number
> > within the episode.
> >
> > For "aggregated" episodes I now want to make a new case with aggregated
> > data for all the same variables but pick some of the variables from the
> > first and some from the last case. I can sort my episode variables
> > according to casenumber and then try to aggregate with all the variables
> > as
> > break variables from either the first or the last case.
> > - Is this the best way to do it?
> > - What if I want to gather some data from one of the episodes in the middle?
> > Is there an easy way to do this?
> >
> > Best regards
> > Lars N.
> >
> >
> > 2014-11-13 10:54 GMT+01:00 David Marso [via SPSSX Discussion] <
> > [hidden email]>:
> >
> >
> > > See AGGREGATE in the FM.  There are FIRST and LAST functions.
> > > --
> > > AGGREGATE OUTFILE * MODE=ADDVARIABLES
> > > /BREAK.../f=FIRST(?)/l=LAST(?)..........
> > >
> > >  nessie wrote
> > > I'm picking up this old thread.
> > > You helped me a lot in finding related cases and marking them with a
> > > chronological case number. Now I have a new problem!
> > >
> > > I want to make a new aggregated case containing some info from the first
> > > and some from the last related case.
> > > E.g. "In" from the first related and "Out" from the last related case. Can
> > > you help me do this?
> > >
> > > Best regards
> > > Lars N.
> > > 2014-05-23 12:39 GMT+02:00 David Marso [via SPSSX Discussion]:
> >
> > >
> >
> >
> > > > Good catch Rich!
> > > > Here is a version using #scratch variables and a slightly different
> > > > approach.
> > > > DATA LIST LIST / case id in out.
> > > > BEGIN DATA.
> > > > 1, 11, 13, 17
> > > > 2, 12, 14, 15
> > > > 3, 11, 14, 14
> > > > 4, 13, 15, 22
> > > > 5, 11, 17, 22
> > > > 6, 12, 17, 24
> > > > 7, 11, 27, 29
> > > > END DATA.
> > > >
> > > > SORT CASES BY id in out.
> > > > DO IF ($CASENUM EQ 1 OR id NE LAG(id) ).
> > > > +  COMPUTE  episode=SUM(1,LAG(episode)).
> > > > +  COMPUTE  #hiout= out.
> > > > ELSE.
> > > > +  COMPUTE episode=sum(lag(episode),NOT(range(in,lag(in),#hiout ))).
> > > > END IF.
> > > > COMPUTE #hiout= max(out, #hiout ).
> > > >
> > > >  Rich Ulrich wrote
> > > > If I see the problem right, logically, you only need to look at one
> > > > previous line.
> > > >
> > > > If it is the same episode, then you want to extend the testable OUT date
> > > > whenever the new line has a higher one.  Since the file is sorted by
> > > > IN, the previous IN is always okay.
> > > >      This looks like it should work -
> > > >
> > > > SORT CASES BY id in out.
> > > >
> > > > DO IF ($CASENUM EQ 1).
> > > > +COMPUTE  episode=1.
> > > >
> > > > +COMPUTE  hiout= out.
> > > > END IF.
> > > >
> > > > DO IF (id=LAG(id)) AND RANGE(in,lag(in),lag(hiout) ) .
> > > > +COMPUTE episode=lag(episode).
> > > >
> > > > +COMPUTE hiout= max(out, lag(hiout) ).
> > > > END IF.
> > > >
> > > > DO IF MISSING(episode) .
> > > > +COMPUTE  episode=LAG(episode) + 1.
> > > >
> > > > +COMPUTE  hiout= out.
> > > > END IF.
> > > >
> > > > * That shows the logic explicitly.  Since temporary vars (#) keep their
> > > > * values until changed, across cases, the code should work the same
> > > > * if #hiout replaced both hiout and lag(hiout). Conceivably, that might
> > > > * run faster than using lag(hiout).
> > > >
> > > > --
> > > > Rich Ulrich
> > > >
> > > >
> > > >
> > > > Date: Thu, 22 May 2014 13:43:25 -0700
> > > > Subject: Re: Finding and marking related cases
> > >
> > > >
> > > > Thanks a lot.
> > > > Is there any way to find the number of combined cases I should put in the
> > > > "lookback", or is there a syntax that can regulate this by itself? I have a
> > > > couple of hundred thousand single cases in total and no idea how many
> > > > combined cases I will end up with. Is it possible to sort the constructed
> > > > combined case numbers based on in and not id first?
> > > > Best regards, Lars N.
> > > > On 19 May 2014 at 18:22, David Marso [via SPSSX Discussion] wrote:
> > > >
> > > > Something like the following?
> > > >
> > > > Note you may need to change the number of cases in the "lookback" (in
> > > > this case 4).
> > > >
> > > > --
> > > >
> > > > DATA LIST LIST / case id in out.
> > > > BEGIN DATA.
> > > > 1, 11, 13, 17
> > > > 2, 12, 14, 15
> > > > 3, 11, 14, 14
> > > > 4, 13, 15, 22
> > > > 5, 11, 17, 22
> > > > 6, 12, 17, 24
> > > > 7, 11, 27, 29
> > > > END DATA.
> > > >
> > > > SORT CASES BY id in out.
> > > > IF ($CASENUM EQ 1) episode=1.
> > > > DO REPEAT #=1 TO 4.
> > > > +  IF (id=LAG(id,#)) AND RANGE(in,lag(in,#),lag(out,#)) episode=lag(episode).
> > > > END REPEAT.
> > > > IF MISSING(episode) episode=LAG(episode) + 1.
> > > >
> > > > LIST.
> > > >
> > > >     case       id       in      out  episode
> > > >
> > > >     1.00    11.00    13.00    17.00     1.00
> > > >     3.00    11.00    14.00    14.00     1.00
> > > >     5.00    11.00    17.00    22.00     1.00
> > > >     7.00    11.00    27.00    29.00     2.00
> > > >     2.00    12.00    14.00    15.00     3.00
> > > >     6.00    12.00    17.00    24.00     4.00
> > > >     4.00    13.00    15.00    22.00     5.00
> > > >
> > > > Number of cases read:  7    Number of cases listed:  7
> > > >
> > > >
> > > >
> > > >
> > > >                                 Please reply to the list and not to my
> > > > personal email.
> > > >
> > > > Those desiring my consulting or training services please feel free to
> > > > email me.
> > > >
> > > > ---
> > > >
> > > > "Nolite dare sanctum canibus neque mittatis margaritas vestras ante
> > > porcos
> > > > ne forte conculcent eas pedibus suis."
> > > >
> > > > Cum es damnatorum possederunt porcos iens ut salire off sanguinum
> > cliff
> > > in
> > > > abyssum?"
> > > >
> > > >
> > > >
> > > >
> > > >
> > > >
> > > >
> >
> >
> >
> > ===================== To manage your subscription to SPSSX-L, send a
> > message to [hidden email] (not to SPSSX-L),
> > with no body text except the command. To leave the list, send the command
> > SIGNOFF SPSSX-L For a list of commands to manage subscriptions, send the
> > command INFO REFCARD
> >
>
>
Reply | Threaded
Open this post in threaded view
|

Re: Finding and marking related cases

nessie
Thanks a lot. You guys are fantastic. I will look into this at work tomorrow and give you real feedback.

Best regards



On 16 Dec 2014 at 19:21, David Marso [via SPSSX Discussion] wrote:

Doh:
Also disregard my first-draft two-liner using CREATE and the complicated logic involving LEAD and LAG.
My first version only covers one overlap. In principle it is clean for one, but very messy if one has several possible overlapping cases.
The #scratch variable version is far more general and elegant (since it covers all possible overlaps), and it doesn't require much cogitation to understand.

Included here in its latest incarnation.
----
DEFINE flag (!POS = !DEFAULT ('A D') !CMDEND)

COMPUTE @order=$CASENUM.
!DO !I !IN (!1)
SORT CASES BY in (!I) out (!I).
DO IF x.
+  COMPUTE #in = in.
+  COMPUTE #out=out.
ELSE.
+  COMPUTE flagged=MAX(flagged,RANGE(#in,in,out) | RANGE(#out,in,out)).
END IF.
!DOEND.
SORT CASES BY @order.
DELETE VARIABLES @order.
!ENDDEFINE.

DATA LIST FREE / ID in out x.
BEGIN DATA
1 1 2 0  2 3 3 0  2.1 3 3.5 0  3 3 4 1  4 4 5 0  5 6 8 0  6 7 9 0  7 7 12 0
8 13 14 1  8.1 13 13.1 0  9 13 13.2 0  9.1 13 13.3 0  9.2 13 13.5 0  9.3 13 14 0  10 15 16 0
END DATA.

flag.
LIST.


Bruce Weaver wrote
Ahem!  Please disregard the Rubish* method I posted earlier.  David's solution is far better.  

* Re the meaning of "Rubish", please see http://www.rubegoldberg.com/.  ;-)


David Marso wrote
<Addendum:>
Perhaps for logical clarity substitute
+  COMPUTE flagged=MAX(flagged,RANGE(#in,in,out) | RANGE(#out,in,out)).
rather than the existing SUM function in the MACRO.
That way it returns 0/1 rather than {0:4}
</Addendum>

Here is a conceptually simpler approach which has the advantage of also flagging cases where there is more than a single qualifying overlap. If this seems mysterious, then look up SCRATCH variables in the FM and learn to love them. The macro (DEFINE !ENDDEFINE) is simply to avoid redundant code.

DATA LIST FREE / ID in out x.
BEGIN DATA
1 1 2 0  2 3 3 0  2.1 3 3.5 0  3 3 4 1  4 4 5 0  5 6 8 0  6 7 9 0  7 7 12 0
8 13 13 1  8.1 13 13.1 0  9 13 13.2 0  9.1 13 13.3 0  9.2 13 13.5 0  9.3 13 14 0  10 14 16 0
END DATA.

DEFINE flag()
DO IF x.
+  COMPUTE #in = in.
+  COMPUTE #out=out.
ELSE.
+  COMPUTE flagged=SUM(flagged,RANGE(#in,in,out),RANGE(#out,in,out)).
END IF.
!ENDDEFINE.

COMPUTE @order=$CASENUM.
SORT CASES BY in (A) out (A).
flag.
SORT CASES BY out (D) in (D).
flag.
SORT CASES BY @order.
LIST.
      ID       in      out        x   @order  flagged
 
    1.00     1.00     2.00      .00     1.00      .00
    2.00     3.00     3.00      .00     2.00     1.00
    2.00     3.00     3.50      .00     3.00     1.00
    3.00     3.00     4.00     1.00     4.00      .
    4.00     4.00     5.00      .00     5.00     1.00
    5.00     6.00     8.00      .00     6.00      .00
    6.00     7.00     9.00      .00     7.00      .00
    7.00     7.00    12.00      .00     8.00      .00
    8.00    13.00    13.00     1.00     9.00      .
    8.00    13.00    13.10      .00    10.00     2.00
    9.00    13.00    13.20      .00    11.00     2.00
    9.00    13.00    13.30      .00    12.00     2.00
    9.00    13.00    13.50      .00    13.00     2.00
    9.00    13.00    14.00      .00    14.00     2.00
   10.00    14.00    16.00      .00    15.00      .00

David Marso wrote
It would be interesting to see what you tried.
Here is what I was trying to direct you toward:
If the next case is X:
FLAG if the 'out' of the current case is between the 'in' and 'out' of that next case.
If the previous case is X:
FLAG if the 'in' of the current case is between the 'in' and 'out' of the previous case.
This can be concisely coded as a 'vector product' of the ranges and the 'xflags'.

DATA LIST FREE / ID in out x.
BEGIN DATA
1 1 2 0 2 3 3 0 3 3 4 1 4 4 5 0 5 6 8 0 6 7 9 0 7 7 12 0 8 13 13 1 9 13 14 0 10 14 16 0
END DATA.
CREATE inlead=LEAD(in,1)/outlead=LEAD(out,1)/xlead=LEAD(x,1).
COMPUTE flagged=SUM(RANGE(out,inlead,outlead) * xlead, RANGE(in,LAG(in),LAG(out))*LAG(x)).
LIST.

nessie wrote
I have tried to solve this myself, but I can't get it right.

I've used this example data:

DATA LIST FREE / ID in out x.
BEGIN DATA
1 1 2 0 2 3 3 0 3 3 4 1 4 4 5 0 5 6 8 0 6 7 9 0 7 7 12 0 8 13 13 1 9 13 14 0 10 14 16 0
END DATA.

id is case id
in is a fictive date/time equivalent
out is a fictive date/time equivalent
x is a marker (0=ordinary case, 1=special case)


To sum up, I want to mark all cases which overlap the in/out span of the x=1 cases.
I have tried to experiment with the lag and lead functions in SHIFT VALUES (thanks for the tip, David), but I haven't come up with a working solution.

Does anyone have any other tips or tricks to help me solve this problem?

Best regards
Lars


> On 5 Dec 2014 at 16:07, David Marso [via SPSSX Discussion] wrote:
>
> Look up SHIFT VALUES (alternatively use the CREATE command).
> There is also a RANGE function in COMPUTE.
> --
> nessie wrote
> Sorry for my very late reply and for not providing example data when I
> posted my question. I feel bad!
> Thanks to both of you for two different but excellent solutions.
> Especially thanks to you, David, for taking the time to make and provide an
> illustrative example.
>
> *I have a new and different problem:*
> I have a big number of hospital admissions. Some of them are special and
> drain a lot of resources, and I want to mark all the other admissions
> within the time frame of these admissions to check if they are influenced
> in a negative way. I modified your data, David, to give an example.
> The special cases are marked with the value 1 for variable x.
>
> I want to make a new variable that marks all the cases that overlap with
> the in/out time of these special cases. Here ID 3 and ID 8 are special
> cases, and I want to mark cases 2, 4 and 9.
>
> Best regards
> Lars
>
> DATA LIST FREE / ID in out x.
> BEGIN DATA
> 1 1 2 0 2 3 3 0 3 3 4 1 4 4 5 0 5 6 8 0 7 7 12 0 8 13 13 1 9 13 14 0
> END DATA.
>
> This gives these data:
>
> ID     in     out     x
> 1,00   1,00   2,00   ,00
> 2,00   3,00   3,00   ,00
> 3,00   3,00   4,00  1,00
> 4,00   4,00   5,00   ,00
> 5,00   6,00   8,00   ,00
> 7,00   7,00  12,00   ,00
> 8,00  13,00  13,00  1,00
> 9,00  13,00  14,00   ,00
>
>
>
>
>
> 2014-11-24 16:08 GMT+01:00 Maguin, Eugene [via SPSSX Discussion] <
> [hidden email] <x-msg://1/user/SendEmail.jtp?type=node&node=5728120&i=0>>:
>
> >  Look at the aggregate function CIN. It returns a count of values between
> > two endpoints.
> >
> > Gene Maguin
> >
> >
> >
> > *From:* SPSSX(r) Discussion *On Behalf Of* nessie
> > *Sent:* Friday, November 21, 2014 7:30 AM
> > *Subject:* Re: Finding and marking related cases
> >
> >
> >
> > I have worked my way through the basic aggregate functions and I'm
> > starting to get the hang of it.
> >
> > I have just one basic question - is it possible to count the number of
> > different values a variable contains across the cases you aggregate, and
> > present all the different values in the aggregated suffix variables?
> >
> >
> >
> > E.g. I have a patient who has been to several different hospital
> > departments during one aggregated stay. For each case/episode the
> > department variable returns a value. Let's say he has 10 different partial
> > stays (cases) from 7 different departments which I would like to aggregate.
> > How can I count the number of different departments the patient has been to
> > and present them in the aggregate variables (which I would like to put at the
> > end of each case/part of stay)?
> >
> >
> >
> > Best regards
> >
> > Lars
> >
> >
> >
> > 2014-11-13 15:20 GMT+01:00 Lars E. Næss-Pleym:
> >
> > Sorry about the missing spaces; it probably doesn't matter, but I will try
> > again to make it easier to read.
> >
> >
> >
> > DATA LIST LIST
> > / dia (A2) c_level (A2) case id in out .
> > BEGIN DATA.
> > "A1" "IP" 1 11 13 17
> > "B1" "OP" 2 12 15 15
> > "A1" "OP" 3 11 14 14
> > "A2" "IP" 4 13 15 22
> > "B2" "IP" 5 11 17 22
> > "C1" "IP" 6 12 17 24
> > "B3" "IP" 7 11 27 29
> > "C4" "IP" 8 13 22 29
> > "D1" "IP" 9 12 24 26
> > "D2" "IP" 10 12 28 30
> > END DATA.
> >
> > LIST.
> >
> >
> >
> > SORT CASES BY id in out.
> > IF ($CASENUM EQ 1) episode=1.
> > DO REPEAT #=1 TO 10.
> > +  IF (id=LAG(id,#)) AND RANGE(in,lag(in,#),lag(out,#))
> > episode=lag(episode).
> > END REPEAT.
> > IF MISSING(episode) episode=LAG(episode) + 1.
> >
> > LIST.
> >
> >
> >
> > SORT CASES BY episode(A) in(A) out(A).
> > MATCH FILES
> >   /FILE=*
> >   /BY episode
> >   /FIRST=FirstCase
> >   /LAST=PrimaryLast.
> > DO IF (FirstCase).
> > COMPUTE  CaseNr=1-PrimaryLast.
> > ELSE.
> > COMPUTE  CaseNr=CaseNr+1.
> > END IF.
> > LEAVE  CaseNr.
> > FORMATS  CaseNr (f7).
> > MATCH FILES
> >   /FILE=*
> >   /DROP=PrimaryLast.
> > VALUE LABELS  FirstCase 0 'Duplicate Case' 1 'Primary Case' 2 'Unique case'.
> > VARIABLE LEVEL  FirstCase (ORDINAL) /CaseNr (SCALE).
> > EXECUTE.
> >
> >
> >
> > IF (CaseNr=0) FirstCase=2.
> > EXECUTE.
> >
> > IF (FirstCase=2) CaseNr=1.
> > EXECUTE.
> >
> > 2014-11-13 15:17 GMT+01:00 Lars E. Næss-Pleym:
> >
> >
> >
> >  Here is an example syntax. dia=diagnosis, c_level=care_level
> > (IP=In-Patient, OP=Out-Patient), case=unique case_ID, id=person_ID,
> > in=date_in, out=date_out
> >
> >
> >
> >  DATA LIST LIST
> > / dia (A2) c_level (A2) case id in out .
> > BEGIN DATA.
> > "A1" "IP" 1 11 13 17
> > "B1" "OP" 2 12 15 15
> > "A1" "OP" 3 11 14 14
> > "A2" "IP" 4 13 15 22
> > "B2" "IP" 5 11 17 22
> > "C1" "IP" 6 12 17 24
> > "B3" "IP" 7 11 27 29
> > "C4" "IP" 8 13 22 29
> > "D1" "IP" 9 12 24 26
> > "D2" "IP" 10 12 28 30
> > END DATA.
> >
> > LIST.
> >
> > SORT CASES BY id in out.
> > IF ($CASENUM EQ 1) episode=1.
> > DO REPEAT #=1 TO 10.
> > +  IF (id=LAG(id,#)) AND RANGE(in,lag(in,#),lag(out,#))
> > episode=lag(episode).
> > END REPEAT.
> > IF MISSING(episode) episode=LAG(episode) + 1.
> >
> > LIST.
> >
> > SORT CASES BY episode(A) in(A) out(A).
> > MATCH FILES
> >   /FILE=*
> >   /BY episode
> >   /FIRST=FirstCase
> >   /LAST=PrimaryLast.
> > DO IF (FirstCase).
> > COMPUTE  CaseNr=1-PrimaryLast.
> > ELSE.
> > COMPUTE  CaseNr=CaseNr+1.
> > END IF.
> > LEAVE  CaseNr.
> > FORMATS  CaseNr (f7).
> > MATCH FILES
> >   /FILE=*
> >   /DROP=PrimaryLast.
> > VALUE LABELS  FirstCase 0 'Duplicate Case' 1 'Primary Case' 2 'Unique case'.
> > VARIABLE LEVEL  FirstCase (ORDINAL) /CaseNr (SCALE).
> > EXECUTE.
> >
> > IF (CaseNr=0) FirstCase=2.
> > EXECUTE.
> >
> > IF (FirstCase=2) CaseNr=1.
> > EXECUTE.
> >
> > From this I want to keep all the unique cases but also make a new
> > aggregated case for each episode, containing "in" from the first case and
> > "out" from the last, the "id" and "episode" variables, the
> > last "dia" variable, and the first "c_level" variable.
> >
> >
> >
> > I also want to know e.g. how many different unique diagnoses there are within the
> > same episode, and whether the "c_level" has been the same for all cases
> > within one episode.
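> >
> > A hedged sketch of what this could look like, using the AGGREGATE FIRST and
> > LAST functions suggested elsewhere in this thread. The file name
> > 'episodes.sav' is invented here, and the syntax is untested:
> >
> > AGGREGATE OUTFILE='episodes.sav'
> >   /BREAK=id episode
> >   /in=FIRST(in)
> >   /out=LAST(out)
> >   /dia=LAST(dia)
> >   /c_level=FIRST(c_level).
> > ADD FILES /FILE=* /FILE='episodes.sav'.
> >
> > ADD FILES appends the aggregated episode cases back to the unaggregated
> > file, so the unique cases are kept alongside each new combined case.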
> >
> >
> >
> > I will look up AGGREGATE, RENAME VARIABLES, CASESTOVARS and MATCH
> > FILES, thanks for the tip.
> >
> >
> >
> > Best regards
> >
> > Lars
> >
> >
> > 2014-11-13 14:18 GMT+01:00 David Marso [via SPSSX Discussion]:
> >
> >
> >
> > Lars,
> >   Please post a more illustrative data set with a before/after of how you
> > want the final result to appear.
> > Meanwhile look up AGGREGATE, RENAME VARIABLES, CASESTOVARS and MATCH FILES
> > commands.
> > HTH, David
> > ==
> >
> >  *nessie wrote*
> >
> > Thanks David
> > That worked!
> >
> > From the original question in this thread, I have made an episode variable
> > linking concurrent admissions.
> > I have then sorted it in chronological order and given it a case number
> > within the episode.
> >
> > For "aggregated" episodes I now want to make a new case with aggregated
> > data for all the same variables, but pick some of the variables from the
> > first and some from the last case. I can sort my episode variables
> > according to case number and then try to aggregate, with all the variables
> > as break variables, from either the first or the last case.
> > - Is this the best way to do it?
> > - What if I want to gather some data from one of the episodes in the middle?
> > Is there an easy way to do this?
> >
> > Best regards
> > Lars N.
> >
> >
> > 2014-11-13 10:54 GMT+01:00 David Marso [via SPSSX Discussion]:
> >
> >
> > > See AGGREGATE in the FM.  There are FIRST and LAST functions.
> > > --
> > > AGGREGATE OUTFILE * MODE=ADDVARIABLES
> > > /BREAK.../f=FIRST(?)/l=LAST(?)..........
> > >
> > >  nessie wrote
> > > I'm picking up this old thread.
> > > You helped me a lot in finding related cases and marking them with
> > > chronological case numbers. Now I have a new problem!
> > >
> > > I want to make a new aggregated case containing some info from the first
> > > and some from the last related case.
> > > E.g. "In" from the first related and "Out" from the last related case. Can
> > > you help me do this?
> > >
> > > Best regards
> > > Lars N.
> > > 2014-05-23 12:39 GMT+02:00 David Marso [via SPSSX Discussion]:
> >
> > >
> >
> >
> > > > Good catch Rich!
> > > > Here is a version using #scratch variables and a slightly different
> > > > approach.
> > > > DATA LIST LIST / case id in out.
> > > > BEGIN DATA.
> > > > 1, 11, 13, 17
> > > > 2, 12, 14, 15
> > > > 3, 11, 14, 14
> > > > 4, 13, 15, 22
> > > > 5, 11, 17, 22
> > > > 6, 12, 17, 24
> > > > 7, 11, 27, 29
> > > > END DATA.
> > > >
> > > > SORT CASES BY id in out.
> > > > DO IF ($CASENUM EQ 1 OR id NE LAG(id) ).
> > > > +  COMPUTE  episode=SUM(1,LAG(episode)).
> > > > +  COMPUTE  #hiout= out.
> > > > ELSE.
> > > > +  COMPUTE episode=sum(lag(episode),NOT(range(in,lag(in),#hiout ))).
> > > > END IF.
> > > > COMPUTE #hiout= max(out, #hiout ).
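
For readers who don't think in SPSS transformations, the running-maximum logic above can be sketched in Python. This is a hypothetical illustration, not code from the thread; the function name and tuple layout are invented here, and the rows are assumed to be already sorted by id, in, out.

```python
def assign_episodes(rows):
    """Assign episode numbers to (id, in, out) tuples sorted by (id, in, out).

    Mirrors the #hiout scratch-variable logic: a row joins the current
    episode when its 'in' is at or before the highest 'out' seen so far
    for the same id; otherwise a new episode starts.
    """
    episodes = []
    episode = 0
    prev_id = hi_out = None
    for rid, r_in, r_out in rows:
        if rid != prev_id or r_in > hi_out:
            episode += 1                    # new id or a gap: new episode
            hi_out = r_out
        else:
            hi_out = max(hi_out, r_out)     # extend the testable OUT date
        episodes.append(episode)
        prev_id = rid
    return episodes

# The thread's sample data, sorted by id, in, out:
rows = [(11, 13, 17), (11, 14, 14), (11, 17, 22), (11, 27, 29),
        (12, 14, 15), (12, 17, 24), (13, 15, 22)]
```

`assign_episodes(rows)` yields [1, 1, 1, 2, 3, 4, 5], matching the episode column in David's earlier LIST output.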
> > > >
> > > >  Rich Ulrich wrote
> > > > If I see the problem right, logically, you only need to look at one
> > > > previous line.
> > > >
> > > > If it is the same episode, then you want to extend the testable OUT
> > date
> > > > whenever the new line has a higher one.  Since the file is sorted by
> > > > IN, the previous IN is always okay.
> > > >      This looks like it should work -
> > > >
> > > > SORT CASES BY id in out.
> > > >
> > > > DO IF ($CASENUM EQ 1).
> > > > +COMPUTE  episode=1.
> > > >
> > > > +COMPUTE  hiout= out.
> > > > END IF.
> > > >
> > > > DO IF (id=LAG(id)) AND RANGE(in,lag(in),lag(hiout) ) .
> > > > +COMPUTE episode=lag(episode).
> > > >
> > > > +COMPUTE hiout= max(out, lag(hiout) ).
> > > > END IF.
> > > >
> > > > DO IF MISSING(episode) .
> > > > +COMPUTE  episode=LAG(episode) + 1.
> > > >
> > > > +COMPUTE  hiout= out.
> > > > END IF.
> > > >
> > > > * That shows the logic explicitly.  Since temporary vars (#)  keep
> > their
> > > > * values until changed, across cases, the code should work the same
> > > > * if #hiout replaced both hiout and  lag(hiout). Conceivably, that
> > might
> > > > * run faster than using lag(hiout).
> > > >
> > > > --
> > > > Rich Ulrich
> > > >
> > > >
> > > >
> > > > Date: Thu, 22 May 2014 13:43:25 -0700
> > > > From: [hidden email]
> > > > Subject: Re: Finding and marking related cases
> > > > To: [hidden email]
> > >
> > > >
> > > > Thanks a lot.
> > > > Is there any way to find the number of combined cases I should put in
> > > > the "lookback", or is there a syntax that can determine this itself?
> > > > I have a couple of hundred thousand single cases in total and no idea
> > > > how many combined cases I will end up with. Is it possible to sort the
> > > > constructed combined case numbers based on in, and not id, first?
> > > > Best regards,
> > > > Lars N.
> > > > On 19 May 2014 at 18:22, David Marso [via SPSSX Discussion]
> > > > <[hidden email]> wrote:
> > > >
> > > >         Something like the following?
> > > >
> > > > Note you may need to change the number of cases in the "lookback" (in
> > > this
> > > > case 4).
> > > >
> > > > --
> > > >
> > > > DATA LIST LIST / case id in out.
> > > > BEGIN DATA.
> > > > 1, 11, 13, 17
> > > > 2, 12, 14, 15
> > > > 3, 11, 14, 14
> > > > 4, 13, 15, 22
> > > > 5, 11, 17, 22
> > > > 6, 12, 17, 24
> > > > 7, 11, 27, 29
> > > > END DATA.
> > > >
> > > > SORT CASES BY id in out.
> > > > IF ($CASENUM EQ 1) episode=1.
> > > > DO REPEAT #=1 TO 4.
> > > > +  IF (id=LAG(id,#)) AND RANGE(in,lag(in,#),lag(out,#)) episode=lag(episode).
> > > > END REPEAT.
> > > > IF MISSING(episode) episode=LAG(episode) + 1.
> > > >
> > > > LIST.
> > > >
> > > >
> > > >
> > > >
> > > >     case       id       in      out  episode
> > > >
> > > >     1.00    11.00    13.00    17.00     1.00
> > > >     3.00    11.00    14.00    14.00     1.00
> > > >     5.00    11.00    17.00    22.00     1.00
> > > >     7.00    11.00    27.00    29.00     2.00
> > > >     2.00    12.00    14.00    15.00     3.00
> > > >     6.00    12.00    17.00    24.00     4.00
> > > >     4.00    13.00    15.00    22.00     5.00
> > > >
> > > > Number of cases read:  7    Number of cases listed:  7
> > > >
> > > >
> > > >
> > > >
> > > > Please reply to the list and not to my personal email.
> > > > Those desiring my consulting or training services please feel free to
> > > > email me.
> > > > ---
> > > > "Nolite dare sanctum canibus neque mittatis margaritas vestras ante
> > > > porcos ne forte conculcent eas pedibus suis."
> > > > "Cum es damnatorum possederunt porcos iens ut salire off sanguinum
> > > > cliff in abyssum?"
> > > >
> > ===================== To manage your subscription to SPSSX-L, send a
> > message to [hidden email] (not to SPSSX-L), with no body text except the
> > command. To leave the list, send the command SIGNOFF SPSSX-L. For a list
> > of commands to manage subscriptions, send the command INFO REFCARD.

Re: Finding and marking related cases

nessie
In reply to this post by Bruce Weaver
Dear David and Bruce

This is so simple and exactly what I was trying to achieve. It works like a charm.

I made some basic attempts myself: I created two new dummy in/out variables with values only for the marked x=1 cases, then used LAG to carry those values to each following case until a new x=1 case appeared. I then checked each case to see whether its real in/out fell within the dummy in/out. This of course only captures information from preceding cases, so I had to make a similar variable for a lead check and integrate the results, and everything got very complex. The logic was too convoluted for my simple mind.

I am trying to get the hang of SHIFT VALUES, RANGE, #scratch variables, macros etc., but it is still new to me and I could never have achieved this elegant solution on my own. Thank you both very much.

Now I can mark our patients to see whether e.g. a simultaneous trauma in the E.D. has any influence on our general care.

Thanks again.
Best regards
Lars

2014-12-16 19:06 GMT+01:00 Bruce Weaver [via SPSSX Discussion] <[hidden email]>:
Ahem!  Please disregard the Rubish* method I posted earlier.  David's solution is far better.  

* Re the meaning of "Rubish", please see http://www.rubegoldberg.com/.  ;-)


David Marso wrote
<Addendum:>
Perhaps for logical clarity substitute
+  COMPUTE flagged=MAX(flagged,RANGE(#in,in,out) | RANGE(#out,in,out)).
rather than the existing SUM function in the MACRO.
That way it returns 0/1 rather than {0:4}
</Addendum>


Here is a conceptually simpler approach which has the advantage of also flagging cases with more than a single qualifying overlap. If this seems mysterious, then look up SCRATCH variables in the FM and learn to love them. The macro (DEFINE !ENDDEFINE) simply avoids redundant code.

DATA LIST FREE / ID in out x.
BEGIN DATA
1 1 2 0  2 3 3 0  2.1 3 3.5 0  3 3 4 1  4 4 5 0  5 6 8 0  6 7 9 0  7 7 12 0
8 13 13 1  8.1 13 13.1 0  9 13 13.2 0  9.1 13 13.3 0  9.2 13 13.5 0  9.3 13 14 0  10 14 16 0
END DATA.

DEFINE flag()
DO IF x.
+  COMPUTE #in = in.
+  COMPUTE #out=out.
ELSE.
+  COMPUTE flagged=SUM(flagged,RANGE(#in,in,out),RANGE(#out,in,out)).
END IF.
!ENDDEFINE.

COMPUTE @order=$CASENUM.
SORT CASES BY in (A) out (A).
flag.
SORT CASES BY out (D) in (D).
flag.
SORT CASES BY @order.
LIST.
      ID       in      out        x   @order  flagged
 
    1.00     1.00     2.00      .00     1.00      .00
    2.00     3.00     3.00      .00     2.00     1.00
    2.00     3.00     3.50      .00     3.00     1.00
    3.00     3.00     4.00     1.00     4.00      .
    4.00     4.00     5.00      .00     5.00     1.00
    5.00     6.00     8.00      .00     6.00      .00
    6.00     7.00     9.00      .00     7.00      .00
    7.00     7.00    12.00      .00     8.00      .00
    8.00    13.00    13.00     1.00     9.00      .
    8.00    13.00    13.10      .00    10.00     2.00
    9.00    13.00    13.20      .00    11.00     2.00
    9.00    13.00    13.30      .00    12.00     2.00
    9.00    13.00    13.50      .00    13.00     2.00
    9.00    13.00    14.00      .00    14.00     2.00
   10.00    14.00    16.00      .00    15.00      .00
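
As a cross-check outside SPSS, the two directional passes can be sketched in Python. This is a hypothetical rendering (function name and data layout invented here) of the 0/1 MAX variant from the addendum: each pass carries the last special case's endpoints in "scratch" variables and flags ordinary cases whose span contains either endpoint.

```python
def flag_overlaps(cases):
    """cases: (id, in, out, x) tuples. Returns a 0/1 flag per case.

    Pass 1 walks the cases sorted by in (A), out (A); pass 2 by
    out (D), in (D). In each pass the most recent special case's
    endpoints play the role of the #in/#out scratch variables.
    """
    flagged = [0] * len(cases)

    def one_pass(order):
        sp_in = sp_out = None                 # scratch: #in, #out
        for i in order:
            _id, c_in, c_out, x = cases[i]
            if x:
                sp_in, sp_out = c_in, c_out   # remember the special case
            elif sp_in is not None:
                hit = c_in <= sp_in <= c_out or c_in <= sp_out <= c_out
                flagged[i] = max(flagged[i], int(hit))

    idx = range(len(cases))
    one_pass(sorted(idx, key=lambda i: (cases[i][1], cases[i][2])))
    one_pass(sorted(idx, key=lambda i: (-cases[i][2], -cases[i][1])))
    return flagged

# The sample data from the macro example:
data = [(1, 1, 2, 0), (2, 3, 3, 0), (2.1, 3, 3.5, 0), (3, 3, 4, 1),
        (4, 4, 5, 0), (5, 6, 8, 0), (6, 7, 9, 0), (7, 7, 12, 0),
        (8, 13, 13, 1), (8.1, 13, 13.1, 0), (9, 13, 13.2, 0),
        (9.1, 13, 13.3, 0), (9.2, 13, 13.5, 0), (9.3, 13, 14, 0),
        (10, 14, 16, 0)]
```

One difference from the LIST output: the special cases themselves come out as 0 here rather than system-missing.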

David Marso wrote
It would be interesting to see what you tried.
Here is what I was trying to direct you toward:
If next case is X then
FLAG: If the 'out' of current case is between the 'in' and 'out' of this next case.
If previous case is X
FLAG if the 'in' of current case is between 'in' and 'out' of previous case.
This can be concisely coded as a 'vector product' of the ranges and the 'xflags'.

DATA LIST FREE / ID in out x.
BEGIN DATA
1 1 2 0 2 3 3 0 3 3 4 1 4 4 5 0 5 6 8 0 6 7 9 0 7 7 12 0 8 13 13 1 9 13 14 0 10 14 16 0
END DATA.
CREATE inlead=LEAD(in,1)/outlead=LEAD(out,1)/xlead=LEAD(x,1).
COMPUTE flagged=SUM(RANGE(out,inlead,outlead) * xlead, RANGE(in,LAG(in),LAG(out))*LAG(x)).
LIST.
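
The single-LAG/LEAD version can be sketched the same way: only the immediate neighbours are consulted, which is why it misses chains of overlapping ordinary cases. Hypothetical Python (names invented here; in SPSS the first and last rows would be missing rather than 0):

```python
def flag_adjacent(cases):
    """cases: (id, in, out, x) tuples in file order.

    A case is counted once if its 'out' lies in the next row's span and
    the next row is special (x == 1), and once more if its 'in' lies in
    the previous row's span and the previous row is special -- the
    'vector product' of the RANGE results and the x flags.
    """
    flags = []
    for i, (_id, c_in, c_out, _x) in enumerate(cases):
        nxt = cases[i + 1] if i + 1 < len(cases) else None
        prv = cases[i - 1] if i > 0 else None
        f = 0
        if nxt and nxt[3] and nxt[1] <= c_out <= nxt[2]:
            f += 1          # overlaps the following special case
        if prv and prv[3] and prv[1] <= c_in <= prv[2]:
            f += 1          # overlaps the preceding special case
        flags.append(f)
    return flags
```

On the ten-case data above this marks cases 2, 4 and 9, as requested.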

nessie wrote
I have tried to solve this myself, but I can't get it right.

I've used this example data:

DATA LIST FREE / ID in out x.
BEGIN DATA
1 1 2 0 2 3 3 0 3 3 4 1 4 4 5 0 5 6 8 0 6 7 9 0 7 7 12 0 8 13 13 1 9 13 14 0 10 14 16 0
END DATA.

id is case id
in is a fictive date/time equivalent
out is a fictive date/time equivalent
x is a marker (0=ordinary case, 1=special case)


To sum up, I want to mark all cases which overlap the in/out span of the x=1 cases.
I have tried to experiment with the LAG and LEAD functions in SHIFT VALUES (thanks for the tip, David), but I haven't come up with a working solution.

Does anyone have any other tips or tricks to help me solve this problem?

Best regards
Lars


> On 5 Dec 2014 at 16:07, David Marso [via SPSSX Discussion] <[hidden email]> wrote:
>
> Look up SHIFT VALUES (alternatively use the CREATE command).
> There is also a RANGE function in COMPUTE.
> --
> nessie wrote
> Sorry for my very late reply and for not providing example data when I
> posted my question. I feel bad!
> Thanks to both of you for two different but excellent solutions.
> Especially thanks to you, David, for taking the time to make and provide an
> illustrative example.
>
> *I have a new and different problem:*
> I have a big number of hospital admissions. Some of them are special and
> drain a lot of resources, and I want to mark all the other admissions
> within the time frame of these admissions to check whether they are
> influenced in a negative way. I modified your data, David, to give an example.
> The special cases are marked with the value 1 for variable x.
>
> I want to make a new variable that marks all the cases that overlap with
> the in/out time of these special cases. Here ID 3 and ID 8 are special
> cases, and I want to mark cases 2, 4 and 9.
>
> Best regards
> Lars
>
> DATA LIST FREE / ID in out x.
> BEGIN DATA
> 1 1 2 0 2 3 3 0 3 3 4 1 4 4 5 0 5 6 8 0 7 7 12 0 8 13 13 1 9 13 14 0
> END DATA.
>
> This gives these data:
>
>    ID     in    out      x
>  1.00   1.00   2.00    .00
>  2.00   3.00   3.00    .00
>  3.00   3.00   4.00   1.00
>  4.00   4.00   5.00    .00
>  5.00   6.00   8.00    .00
>  7.00   7.00  12.00    .00
>  8.00  13.00  13.00   1.00
>  9.00  13.00  14.00    .00
>
> 2014-11-24 16:08 GMT+01:00 Maguin, Eugene [via SPSSX Discussion] <[hidden email]>:
>
> >  Look at the aggregate function CIN. It returns a count of values between
> > two endpoints.
> >
> > Gene Maguin
> >
> >
> >
> > *From:* SPSSX(r) Discussion [mailto:[hidden email]] *On Behalf Of* nessie
> > *Sent:* Friday, November 21, 2014 7:30 AM
> > *To:* [hidden email]
> > *Subject:* Re: Finding and marking related cases
> >
> >
> >
> > I have worked my way through the basic aggregate functions and I'm
> > starting to get the hang of it.
> >
> > I have just one basic question - is it possible to count the number of
> > different values a variable contains across the cases you aggregate, and
> > present all the different values in the aggregated suffix variables?
> >
> > E.g. I have a patient who has been to several different hospital
> > departments during one aggregated stay. For each case/episode the
> > department variable returns a value. Let's say the patient has 10
> > different partial stays (cases) from 7 different departments which I
> > would like to aggregate. How can I count the number of different
> > departments the patient has been to and present them in the aggregate
> > variables (which I would like to put at the end of each case/part of stay)?
> >
> >
> >
> > Best regards
> >
> > Lars
> >
> >
> >
> > 2014-11-13 15:20 GMT+01:00 Lars E. Næss-Pleym <[hidden email]>:
> >
> > Sorry about the missing spaces; it probably doesn't matter, but I'll try
> > again to make it easier to read.
> >
> >
> >
> > DATA LIST LIST
> > / dia (A2) c_level (A2) case id in out .
> > BEGIN DATA.
> > "A1" "IP" 1 11 13 17
> > "B1" "OP" 2 12 15 15
> > "A1" "OP" 3 11 14 14
> > "A2" "IP" 4 13 15 22
> > "B2" "IP" 5 11 17 22
> > "C1" "IP" 6 12 17 24
> > "B3" "IP" 7 11 27 29
> > "C4" "IP" 8 13 22 29
> > "D1" "IP" 9 12 24 26
> > "D2" "IP" 10 12 28 30
> > END DATA.
> >
> > LIST.
> >
> >
> >
> > SORT CASES BY id in out.
> > IF ($CASENUM EQ 1) episode=1.
> > DO REPEAT #=1 TO 10.
> > +  IF (id=LAG(id,#)) AND RANGE(in,lag(in,#),lag(out,#))
> > episode=lag(episode).
> > END REPEAT.
> > IF MISSING(episode) episode=LAG(episode) + 1.
> >
> > LIST.
> >
> >
> >
> > SORT CASES BY episode(A) in(A) out(A).
> > MATCH FILES
> >   /FILE=*
> >   /BY episode
> >   /FIRST=FirstCase
> >   /LAST=PrimaryLast.
> > DO IF (FirstCase).
> > COMPUTE  CaseNr=1-PrimaryLast.
> > ELSE.
> > COMPUTE  CaseNr=CaseNr+1.
> > END IF.
> > LEAVE  CaseNr.
> > FORMATS  CaseNr (f7).
> > MATCH FILES
> >   /FILE=*
> >   /DROP=PrimaryLast.
> > VALUE LABELS  FirstCase 0 'Duplicate Case' 1 'Primary Case' 2'Unique case'.
> > VARIABLE LEVEL  FirstCase (ORDINAL) /CaseNr (SCALE).
> > EXECUTE.
> >
> >
> >
> > IF (CaseNr=0) FirstCase=2.
> > EXECUTE.
> >
> > IF (FirstCase=2) CaseNr=1.
> > EXECUTE.
> >
> > 2014-11-13 15:17 GMT+01:00 Lars E. Næss-Pleym <[hidden email]>:
> >
> >
> >
> >  Here is an example syntax. dia=diagnosis, c_level=care_level
> > (IP=In-Patient, OP=Out-Patient), case=unique case_ID, id=person_ID,
> > in=date_in, out=date_out
> >
> >
> >
> >
> > From this I want to keep all the unique cases, but also make a new
> > aggregated case for each episode containing "in" from the first case and
> > "out" from the last, the "id" and "episode" variables, the last "dia"
> > variable, and the first "c_level" variable.
> >
> > I also want to know e.g. how many different unique diagnoses occur within
> > the same episode, and whether "c_level" has been the same for all cases
> > within one episode.
> >
> > I will look up AGGREGATE, RENAME VARIABLES, CASESTOVARS and MATCH FILES;
> > thanks for the tip.
> >
> > Best regards
> >
> > Lars
> >
> >
> > 2014-11-13 14:18 GMT+01:00 David Marso [via SPSSX Discussion] <[hidden email]>:
> >
> >
> >
> > Lars,
> >   Please post a more illustrative data set with a before/after of how you
> > want the final result to appear.
> > Meanwhile look up AGGREGATE, RENAME VARIABLES, CASESTOVARS and MATCH FILES
> > commands.
> > HTH, David
> > ==
> >
> >  *nessie wrote*
> >
> > Thanks David
> > That worked!
> >
> > From the original question in this tread, I have made an episode variable
> > linking concurrent admissions.
> > I have then sorted this in cronological order av given it a case number
> > within the episode.
> >
> > For "aggregated" episodes I now want to make a new case with aggregated
> > data for all the same variables but pick some of the variables from the
> > first and some from the last case. I can sort my episode variables
> > according to casenumber and then try to aggregate with all the variables
> > as
> > break variables from either the first or the last case.
> > - Is this the best way to do it?
> > - What if I want to gather som data from on of the episodes in the middle?
> > Is there an easy way to do this?
> >
> > Best regards
> > Lars N.
> >
> >
> > 2014-11-13 10:54 GMT+01:00 David Marso [via SPSSX Discussion] <
> > [hidden email] <http://user/SendEmail.jtp?type=node&node=5727918&i=0> <http://user/SendEmail.jtp?type=node&node=5727918&i=0%3E>>:
> >
> >
> > > See AGGREGATE in the FM.  There are FIRST and LAST functions.
> > > --
> > > AGGREGATE OUTFILE * MODE=ADDVARIABLES
> > > /BREAK.../f=FIRST(?)/l=LAST(?)..........
> > >
> > >  nessie wrote
> > > I'm picking up this old tread.
> > > You helped me a lot in finding related cases and marking them with
> > > chronological case number.Now I have a new problem!
> > >
> > > I want to make a new aggregated case containing some info from the first
> > > and some from the last related case.
> > > E.g. "In" from the first related and "Out" from the last related case.
> > Can
> > > you help me do this?
> > >
> > > Best regards
> > > Lars N.
> > > 2014-05-23 12:39 GMT+02:00 David Marso [via SPSSX Discussion] <
> >
> > > [hidden email] <http://user/SendEmail.jtp?type=node&node=5727915&i=0> <http://user/SendEmail.jtp?type=node&node=5727915&i=0%3E>>:
> >
> > >
> >
> >
> > > > Good catch Rich!
> > > > Here is a version using #scratch variables and a slightly different
> > > > approach.
> > > > DATA LIST LIST / case id in out.
> > > > BEGIN DATA.
> > > > 1, 11, 13, 17
> > > > 2, 12, 14, 15
> > > > 3, 11, 14, 14
> > > > 4, 13, 15, 22
> > > > 5, 11, 17, 22
> > > > 6, 12, 17, 24
> > > > 7, 11, 27, 29
> > > > END DATA.
> > > >
> > > > SORT CASES BY id in out.
> > > > DO IF ($CASENUM EQ 1 OR id NE LAG(id) ).
> > > > +  COMPUTE  episode=SUM(1,LAG(episode)).
> > > > +  COMPUTE  #hiout= out.
> > > > ELSE.
> > > > +  COMPUTE episode=sum(lag(episode),NOT(range(in,lag(in),#hiout ))).
> > > > END IF.
> > > > COMPUTE #hiout= max(out, #hiout ).
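[For anyone wanting to check this single-pass running-maximum logic outside SPSS, here is a Python sketch of the same idea. The tuple layout and function name are mine, not from the thread.]

```python
def mark_episodes(rows):
    """Number overlapping stays per id with a running maximum 'out',
    mirroring the #hiout logic above.

    rows: iterable of (case, id, in, out) tuples; returns the rows
    sorted by id, in, out with an episode number appended.
    """
    rows = sorted(rows, key=lambda r: (r[1], r[2], r[3]))  # SORT CASES BY id in out.
    marked = []
    episode, prev_id, hi_out = 0, None, None
    for case, id_, in_, out in rows:
        # New episode on a new id, or when this stay begins after every
        # earlier stay of the episode has ended (in > running max of out).
        if id_ != prev_id or in_ > hi_out:
            episode += 1
            hi_out = out
        else:
            hi_out = max(hi_out, out)
        marked.append((case, id_, in_, out, episode))
        prev_id = id_
    return marked
```

[Run on the seven test cases above, this reproduces episodes 1, 1, 1, 2, 3, 4, 5 in the sorted order, matching the listing shown further down the thread.]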
> > > >
> > > >  Rich Ulrich wrote
> > > > If I see the problem right, logically, you only need to look at one
> > > > previous line.
> > > >
> > > > If it is the same episode, then you want to extend the testable OUT date
> > > > whenever the new line has a higher one.  Since the file is sorted by
> > > > IN, the previous IN is always okay.
> > > >      This looks like it should work -
> > > >
> > > > SORT CASES BY id in out.
> > > >
> > > > DO IF ($CASENUM EQ 1).
> > > > +COMPUTE  episode=1.
> > > >
> > > > +COMPUTE  hiout= out.
> > > > END IF.
> > > >
> > > > DO IF (id=LAG(id)) AND RANGE(in,lag(in),lag(hiout) ) .
> > > > +COMPUTE episode=lag(episode).
> > > >
> > > > +COMPUTE hiout= max(out, lag(hiout) ).
> > > > END IF.
> > > >
> > > > DO IF MISSING(episode) .
> > > > +COMPUTE  episode=LAG(episode) + 1.
> > > >
> > > > +COMPUTE  hiout= out.
> > > > END IF.
> > > >
> > > > * That shows the logic explicitly.  Since scratch variables (#) keep their
> > > > * values across cases until changed, the code should work the same
> > > > * if #hiout replaced both hiout and lag(hiout). Conceivably, that might
> > > > * run faster than using lag(hiout).
> > > >
> > > > --
> > > > Rich Ulrich
> > > >
> > > >
> > > >
> > > > Date: Thu, 22 May 2014 13:43:25 -0700
> > > > From: [hidden email]
> > > > Subject: Re: Finding and marking related cases
> > > > To: [hidden email]
> > >
> > > >
> > > > Thanks a lot.
> > > > Is there any way to find the number of combined cases I shall put in the
> > > > "lookback", or is there a syntax that can regulate this itself? I have a
> > > > couple of hundred thousand single cases in total and no idea how many
> > > > combined cases I will end up with. Is it possible to sort the constructed
> > > > combined case numbers based on in and not id first?
> > > > Best regards
> > > > Lars N.
> > > > On 19 May 2014 at 18:22, David Marso [via SPSSX Discussion] <[hidden email]> wrote:
> > > >
> > > >         Something like the following?
> > > >
> > > > Note you may need to change the number of cases in the "lookback" (in this
> > > > case 4).
> > > >
> > > > --
> > > >
> > > > DATA LIST LIST / case id in out.
> > > > BEGIN DATA.
> > > > 1, 11, 13, 17
> > > > 2, 12, 14, 15
> > > > 3, 11, 14, 14
> > > > 4, 13, 15, 22
> > > > 5, 11, 17, 22
> > > > 6, 12, 17, 24
> > > > 7, 11, 27, 29
> > > > END DATA.
> > > >
> > > > SORT CASES BY id in out.
> > > > IF ($CASENUM EQ 1) episode=1.
> > > > DO REPEAT #=1 TO 4.
> > > > +  IF (id=LAG(id,#)) AND RANGE(in,lag(in,#),lag(out,#)) episode=lag(episode).
> > > > END REPEAT.
> > > > IF MISSING(episode) episode=LAG(episode) + 1.
> > > >
> > > >
> > > > LIST.
> > > >
> > > >
> > > >
> > > >
> > > >     case       id       in      out  episode
> > > >
> > > >     1.00    11.00    13.00    17.00     1.00
> > > >     3.00    11.00    14.00    14.00     1.00
> > > >     5.00    11.00    17.00    22.00     1.00
> > > >     7.00    11.00    27.00    29.00     2.00
> > > >     2.00    12.00    14.00    15.00     3.00
> > > >     6.00    12.00    17.00    24.00     4.00
> > > >     4.00    13.00    15.00    22.00     5.00
> > > >
> > > > Number of cases read:  7    Number of cases listed:  7
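[The lookback version can likewise be sketched in Python to make explicit what the DO REPEAT loop does; the fixed window size is the part Lars asks about below. Names and tuple layout here are illustrative, not from the thread.]

```python
def mark_episodes_lookback(rows, lookback=4):
    """Python analogue of the DO REPEAT version: each case joins the
    episode of any of the previous `lookback` cases with the same id
    whose [in, out] interval contains this case's 'in'; otherwise it
    starts a new episode.

    rows: iterable of (case, id, in, out) tuples.
    """
    rows = sorted(rows, key=lambda r: (r[1], r[2], r[3]))  # SORT CASES BY id in out.
    episodes = []
    next_episode = 1
    for i, (case, id_, in_, out) in enumerate(rows):
        ep = None
        for k in range(1, lookback + 1):           # DO REPEAT # = 1 TO 4.
            j = i - k
            if j < 0:
                break
            _, pid, pin, pout = rows[j]
            if pid == id_ and pin <= in_ <= pout:  # RANGE(in, lag(in,#), lag(out,#))
                ep = episodes[j]                   # join the overlapping episode
        if ep is None:                             # IF MISSING(episode) ...
            ep = next_episode
            next_episode += 1
        episodes.append(ep)
    return [row + (ep,) for row, ep in zip(rows, episodes)]
```

[With the seven test cases and lookback=4 this reproduces the episodes in the listing above. If a case overlaps its episode only through a case more than `lookback` positions back, the window is too small, which is why the running-maximum (#hiout) variant elsewhere in the thread needs no such tuning.]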
> > > >
> > > >
> > > >
> > > >
> > > > Please reply to the list and not to my personal email.
> > > > Those desiring my consulting or training services please feel free to
> > > > email me.
> > > > ---
> > > > "Nolite dare sanctum canibus neque mittatis margaritas vestras ante
> > > > porcos ne forte conculcent eas pedibus suis."
> > > > "Cum es damnatorum possederunt porcos iens ut salire off sanguinum
> > > > cliff in abyssum?"
> > > >
> >
> > ===================== To manage your subscription to SPSSX-L, send a
> > message to [hidden email] (not to SPSSX-L), with no body text except the
> > command. To leave the list, send the command SIGNOFF SPSSX-L. For a list
> > of commands to manage subscriptions, send the command INFO REFCARD.
> >
--
Bruce Weaver
[hidden email]
http://sites.google.com/a/lakeheadu.ca/bweaver/

"When all else fails, RTFM."

NOTE: My Hotmail account is not monitored regularly.
To send me an e-mail, please use the address shown above.


