Hi all,
A couple of questions about multiple imputation that I hope you might be able to help with… :)
1. Does anyone know what type of multiple imputation SPSS uses? I’m hoping it’s either the Expectation Maximisation (EM) algorithm or Markov Chain Monte Carlo (MCMC).

2. I’ve been making a bit of a mistake re. my usage of SPSS multiply imputed data that may need correcting. Basically, I’m running an Exploratory Factor Analysis (EFA) on the imputed data which, with 20 imputations, produces 20 different versions of the EFA. I’ve been using the 20th version, due to a misunderstanding of the validity of each data set. I think I should be using a ‘pooled’ data set of all the imputed versions (ignoring the original one with the missing data), but where is this pooled version of the data to run the EFA on?
Thanks for any help you can give.
Alex Meredith, Nottingham Trent University
Hi Alex,
1. There are 2 different imputation methods: Fully conditional specification (which uses MCMC) and Monotone. See http://publib.boulder.ibm.com/infocenter/spssstat/v20r0m0/topic/com.ibm.spss.statistics.help/idh_idd_mi_method.htm for details.

2. I'm afraid Factor analysis does not currently support pooling of multiply imputed data: see http://publib.boulder.ibm.com/infocenter/spssstat/v20r0m0/topic/com.ibm.spss.statistics.help/mi_analysis.htm for a list of procedures that do. Procedures that do support pooling automatically generate pooled output when run on multiply imputed datasets.

Alex
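[For reference, a rough, untested sketch of the corresponding syntax, assuming the Missing Values module is installed; the variable names and dataset name are placeholders:]

DATASET DECLARE imputed.
MULTIPLE IMPUTATION v1 v2 v3 v4 v5
  /IMPUTE METHOD=FCS NIMPUTATIONS=20
  /OUTFILE IMPUTATIONS=imputed.
* METHOD=MONOTONE is the other option; METHOD=AUTO chooses based on the
* missing-data pattern. The output dataset gains an Imputation_ variable
* (0 = the original, incomplete data).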
I see that "Bivariate Correlations" is in the list of supported procedures. I assume that means that a correlation matrix is generated for each of the multiply imputed data sets, and that those estimates are pooled. So couldn't one use that pooled correlation matrix as input for the exploratory factor analysis?
Cheers, Bruce
--
Bruce Weaver
bweaver@lakeheadu.ca
http://sites.google.com/a/lakeheadu.ca/bweaver/
"When all else fails, RTFM."
PLEASE NOTE THE FOLLOWING:
1. My Hotmail account is not monitored regularly. To send me an e-mail, please use the address shown above.
2. The SPSSX Discussion forum on Nabble is no longer linked to the SPSSX-L listserv administered by UGA (https://listserv.uga.edu/).
This is an interesting idea. I think
I would be concerned about using the pooled correlations in much the same
way I'd be concerned about using a single imputation method. Even
if the pooled estimates provide a superior point estimate of the correlations,
we would then be using them as point estimate inputs to the factor analysis
and lose information about the variability in the correlation estimates.
Still, it's better than nothing.
I just checked whether the pooled estimates are saved with MATRIX OUT; they are not, so one would need to use OMS to collect the correlations from the output table for a sizeable correlation matrix.

Alex
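[For anyone trying the OMS route, a rough, untested sketch; the command/subtype identifiers and variable names are assumptions to verify against your own output (Utilities > OMS Identifiers):]

SORT CASES BY Imputation_.
SPLIT FILE LAYERED BY Imputation_.
OMS
  /SELECT TABLES
  /IF COMMANDS=['Correlations'] SUBTYPES=['Correlations']
  /DESTINATION FORMAT=SAV OUTFILE='pooled_corr.sav'.
CORRELATIONS /VARIABLES=v1 v2 v3 v4 v5.
OMSEND.
* The saved file mixes per-imputation and pooled rows; keep only the pooled
* rows and restructure them into ROWTYPE_/VARNAME_ matrix format for FACTOR.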
Hi,
Thanks for the replies; looks like a tricky situation re. the pooled data. Thanks, Alex.
In reply to this post by Alex Reutter
I am resurrecting this old thread because this same question (i.e., how to handle missing data in the context of exploratory factor analysis) has just come up for a couple of my colleagues. I had forgotten about this thread, but found it when I started digging into the topic again. But here's something else interesting I found:
http://www.stats.ox.ac.uk/~snijders/Graham2009.pdf

Here are some of the most relevant bits (with emphasis added).

"The EM covariance matrix is also an excellent basis for exploratory factor analysis with missing data."

"Although direct analysis of the EM covariance matrix can be useful, a more widely useful EM tool is to impute a single data set from EM parameters (with random error). This procedure has been described in detail in Graham et al. (2003). This single imputed data set is known to yield good parameter estimates, close to the population average. But more importantly, because it is a complete data set, it may be read in using virtually any software, including SPSS. Once read into the software, coefficient alpha and exploratory factor analyses may be carried out in the usual way. One caution is that this data set should not be used for hypothesis testing. Standard errors based on this data set, say from a multiple regression analysis, will be too small, sometimes to a substantial extent. Hypothesis testing should be carried out with MI or one of the FIML procedures. Note that the procedure in SPSS for writing out a single imputed data set based on the EM algorithm is not recommended unless random error residuals are added after the fact to each imputed value; the current implementation of SPSS, up to version 16 at least, writes data out without adding error (e.g., see von Hippel 2004). This is known to produce important biases in the data set (Graham et al. 1996)."

This leads me to the following questions:

1. My v21 CSR manual does not say anything about the addition of random error when MVA - EM is used to write out an imputed dataset. So I assume nothing has changed since v16. Can anyone confirm this? (Jon?)

2. Does anyone know if Stata's implementation of EM adds random error? (Marta, I hope you're reading this!) I ask, because my digging also turned up this solution to the problem: http://www.ats.ucla.edu/stat/stata/faq/factor_missing.htm

Thanks, Bruce
--
Bruce Weaver, bweaver@lakeheadu.ca
Some follow-up: I've had an e-mail from John Graham in which he assures me that "the EM estimates (means, standard deviations, correlations, covariances, etc.) in SPSS are fine", and that the problem is restricted to the imputed dataset. So for exploratory factor analysis (EFA), it would appear that the simplest approach is to obtain EM estimates of the correlations (or covariances) via the MVA procedure, and use them as input for FACTOR. (A bit of data management is required to get them into the right matrix file format, but that shouldn't be too difficult.)
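[To illustrate the matrix file format FACTOR expects, a toy sketch with made-up numbers; in practice the CORR rows would hold the EM correlations copied from the MVA output:]

MATRIX DATA VARIABLES=ROWTYPE_ v1 v2 v3.
BEGIN DATA
N    100 100 100
CORR 1
CORR .45 1
CORR .30 .55 1
END DATA.
FACTOR /MATRIX=IN(COR=*)
  /EXTRACTION=PAF
  /ROTATION=OBLIMIN.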
Graham did not know whether Stata adds random error correctly when writing out the imputed data (using EM), but suggested this method for checking: 1. Run EM and get the variances and covariances. 2. Write out the imputed data set, and use it to compute variances & covariances. 3. Compare. "If the random error is properly added to the imputed values, then variances based on the [imputed] data set will be very similar to the variances you see from the EM estimates." If the random error is not added properly, the variances computed from the imputed dataset are expected to be lower. HTH.
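[In SPSS terms, Graham's check might look like this — an untested sketch; file and variable names are placeholders:]

MVA VARIABLES=v1 TO v10
  /EM (OUTFILE='em_imputed.sav').
* Step 1: read the EM variances/covariances off the MVA output.
* Step 2: recompute variances from the EM-imputed file.
GET FILE='em_imputed.sav'.
DESCRIPTIVES VARIABLES=v1 TO v10
  /STATISTICS=MEAN VARIANCE.
* Step 3: compare. If random error were added properly, the two sets of
* variances should be very similar; if not, the second set will run lower.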
--
Bruce Weaver, bweaver@lakeheadu.ca
In reply to this post by Meredith, Alexander2
Hello!
Thank you so much for this interesting discussion. Unfortunately, as far as I understand, the suggestions on using EM for EFA don't solve the problem for categorical variables with missing values, since EM cannot be applied to categorical data in SPSS (and in general?). On top of that, SPSS won't compute Little's MCAR test for categorical variables. Does someone have any suggestions for dealing with missing values on categorical variables? My proximal goal is to build scales using EFA and reliability analyses. I know MI is the state of the art, but as has already been noted, SPSS doesn't produce pooled results for EFA yet. I appreciate any help/advice/comment!
You say you are building scales. What is the response scale on the items you are considering? Usually scales are built of strictly interval level items (e.g., Z's or temperature) or of items that are not severely discrepant from interval level, such as extent response scales or Likert response scales.

How extensive is your missing data? Do you know why data is missing for some items? Were you thinking of CATPCA rather than FACTOR for EFA?
Art Kendall
Social Research Consultants
Hello Art, thank you for your answer!
I have missing values on both my IVs and DVs. IVs and DVs were measured in separate sessions because I'm working on a prediction model.

The response scale of my IV items is a Likert scale; here I'm using an established questionnaire for two constructs. I have 3.3% missing values on the first and 0.4% on the second construct. Little's test showed that MCAR is present only for one of the constructs, and separate variance t tests could not be computed on the other construct because I only had single missing values on variables in that construct. Not sure what I will do with this result yet... Some missings are due to a software error (first construct) and some are random ones that I can't explain (second construct).

For the DVs, I constructed behavioral items, and the response scale for almost all of the items is picking between a prejudiced and a not-prejudiced alternative [0;1]. Other DV items have a Likert response scale and some an interval one. I understand that I have to z-transform the variables before building scales, but since z-transformation uses the mean of a variable I don't know if I can do this before imputation. I have 6.1% missing data total on my DVs, and I will drop variables with more than 10% missings. Yes, for the DVs I know that some of my variables allowed nonresponse too easily (that's the 3 I dropped), some missings are due to participants abandoning the experiment at the very end, and some are due to technical mistakes.

Yes, I considered CATPCA, but since I have a dataset with mixed scales I wasn't sure... but I will look into it again, thank you! On top of that, a CFA might be even more right for my procedure since I have a theory behind the item structure. Either way, if I built scales for my DVs, my missing values would lead to up to 1/3 missing values on my new scales. Do you think that a better procedure in my case would be running a CFA with the missing values, building scales, and then imputing missings on scale level?

Thank you very very much for any further advice! Regards, therp
Just a couple of comments/questions.

It seems that you mention a need to z-transform *items* at some point. Are you sure? Item-Response Theory (IRT) can use the inverse normal instead of its usual logistic function, but I have never seen it, and I didn't think that used the z-scores. Is that really so, or were you working from some other assumption?

In an earlier post, you worried about categorical variables, but you don't mention them here. Dichotomies are "equal interval" by convention (only one interval -- certainly not unequal intervals). Likert items are treated as continuous and equal-interval unless you move to an IRT model.

I was almost always comfortable using the average of the items that were present in order to account for missing values. The exceptions would be due to the meaning of some particular item. That could be indicated either by its extreme mean or its literal meaning. I mean, I could read some items and say, "Oh, (given the other answers?) that would be left blank because blah-blah-blah -- and therefore, score it XXX."

-- Rich Ulrich
In reply to this post by therp
Dichotomies are perhaps the only instance of all intervals being perfectly interval, because there is only 1 interval and it is necessarily equal to itself. Likert items are usually not severely discrepant from interval level. What is the response scale on your other interval level items?

If it is an established questionnaire, I would not want to impute values for one construct from another. This is especially so if one is an IV and one is a DV, or if they were derived as factors and divergent validity is important. If it is an established scale, I would only bother with a factor analysis and reliability run to check that I was using the scoring key correctly. If you use the mean.n function to get the score, that should work satisfactorily. If you are using correlation-based analysis, the correlation process itself standardizes (Zs) the scores.

For those variables that you are considering scoring by summing, you could check the factor structure to see if it is consistent with what you think should go together, by factoring with listwise deletion and with mean substitution, to see if there are meaningful differences in the scoring keys you would derive. If there are not, I would just go ahead with the mean.n function to form the summative score.

I would not worry about CATPCA unless you had some nominal level variables in the mix, and that would be unusual in scale construction.

Keep in mind that the rationale behind using summative scores (sums or means of a set of items) is that each item is an imperfect measure of a construct. Repeating measurement of the construct with several different items yields a summative measure that is a more credible measurement of the underlying construct. In factor analytic terms, we are interested in the common variance of the set of items as an operationalization of the underlying construct.
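[In syntax, that might look like the following untested sketch; the item names are hypothetical, and the items are assumed adjacent in the file:]

* Summative score, requiring at least (say) 8 of the 11 items valid.
COMPUTE scale1 = MEAN.8(item01 TO item11).
EXECUTE.
* Compare the factor structure under two missing-data treatments.
FACTOR VARIABLES=item01 TO item11
  /MISSING=LISTWISE
  /EXTRACTION=PAF /ROTATION=OBLIMIN.
FACTOR VARIABLES=item01 TO item11
  /MISSING=MEANSUB
  /EXTRACTION=PAF /ROTATION=OBLIMIN.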
Art Kendall
Social Research Consultants
Thank you again for your comments and advice!
To make sure I understood you correctly:

IV - questionnaire
Since I'm using an established questionnaire as my IV, I don't have to impute missing values and can just sum the items by mean.n. As far as I know, this procedure requires that at least 2/3, or better 3/4, of the items I use for summing are complete, which is not the case in my questionnaire; i.e., one scale has 11 items and I have missings on 4 of them (-> only 63.3% are complete). Can I still use the mean.n function or do I have to drop this scale? Can you advise me on literature for that procedure (since the analysis is for my thesis and I have to justify my procedure)? Also, are you implying that I don't have to check MCAR or MAR for that questionnaire? Indeed, without imputation, I could replicate the factor structure of that questionnaire.

DVs - behavioral measures
I was a little confused by Rich's comment that I don't mention categorical items. Most of my DV items have a response format, i.e., "not prejudiced behavior" vs. "prejudiced behavior". Doesn't that make them categorical? By z-transform I meant Fisher's z-transformation (my supervisor suggested that) because I will have to build scales, and 39 items are categorical, one is answered on a 7-point Likert scale, and one is the number of leaflets participants take with them (interval). I understand that I don't have to use z-transformation for correlational analyses and factor analysis, right?

So your advice, Art, is that I check the factor structure with CFA with listwise deletion and mean imputation and compare them. But before using the summative score or listwise deletion, don't I have to check if the data is MCAR or MAR? I understand from the literature that every method of imputation or deletion of cases assumes that data is MCAR/MAR.

Thank you so much for your help!
Comments interspersed below.

> ...one scale has 11 items and I have missings on 4 of them. Can I still use the mean.n function or do I have to drop this scale?

You could just use mean.7. That uses an assumption that the missing data is equal to the mean of the valid items in the case. You could also check how many cases you would lose if you use mean.8 or mean.9.

Aside: One lesson you should learn from this thesis is the importance of good data gathering (test administration). That greatly minimizes the amount of missing data.

> Can you advise me on literature for that procedure (since the analysis is for my thesis and I have to justify my procedure)? Also, are you implying that I don't have to check MCAR or MAR for that questionnaire?

Just use that as the justification. I do not know of an article that suggests mean substitution for items in a scale. Perhaps someone else has a cite for this ages-old practice.

> Most of my DV items have a response format, i.e., "not prejudiced behavior" vs. "prejudiced behavior". Doesn't that make them categorical?

Yes and no. They can be considered categorical, but they are also considered interval level. The single interval is perfectly equal to itself. Do FREQUENCIES on some of them. Look at the percentages and the means. Would you consider a spelling test that used 1 for right and 0 for wrong and summed the item scores an invalid test? Why would that be different?

Another lesson to take away from this thesis exercise: use as fine-grained a response scale as is practical under the circumstances. An extent scale that had more possible values on the response scale would restrict the variance less. It seems that the construct "prejudice" is a continuous variable. Why else would you use a summative scale? A dichotomy is the coarsest possible operationalization of a continuous construct.

> By z-transform I meant Fisher's z-transformation (my supervisor suggested that) because I will have to build scales... I understand that I don't have to use z-transformation for correlational analyses and factor analysis, right?

Are you putting those items into the same scale? Are you getting meaningful scoring keys from the factor analysis that includes items with very different response scales? If so, yes, you would z-transform the items before summing them. If there is not a mix of response scales, then there is no need to transform them.

> So your advice, Art, is that I check the factor structure with CFA with listwise deletion and mean imputation and compare them. But before using the summative score or listwise deletion, don't I have to check if the data is MCAR or MAR? I understand from the literature that every method of imputation or deletion of cases assumes that data is MCAR/MAR.

If you want to also try multiple imputation, only use contributors from items that are on the same scoring key. None of your data seems to be categorical. Before you create the imputed values, use the mean.n function to get scores. Then use the mean function without the .n. Scatterplot the scores with the missing assumed to be at the mean of the other items vs. those from the multiple imputation. How do they look? Subtract the scores using the mean.n from the scores using imputed items. What are the mean, min, and max differences? Use both sets of scores in your actual analysis model. How do the substantive conclusions compare?

If you have CFA available, that is fine. Do that with listwise deletion, imputed values, and mean substitution from items in the same scale. Most people do not have CFA available, so just do EFA with both options on each set of items. Do parallel analysis with both sets of data. Plot all 6 sets of eigenvalues. How do they look? How do the scoring keys compare across the three sets? How do the scoring keys you find compare to the scoring keys used by the original research, when there is some earlier research?
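[A sketch of that score comparison in syntax — untested, with hypothetical item names; MEAN without a suffix uses however many items are valid:]

COMPUTE score_req = MEAN.8(item01 TO item11).
COMPUTE score_all = MEAN(item01 TO item11).
COMPUTE score_dif = score_all - score_req.
DESCRIPTIVES VARIABLES=score_dif
  /STATISTICS=MEAN MIN MAX.
GRAPH /SCATTERPLOT(BIVAR)=score_req WITH score_all.
* The same difference/plot checks apply when comparing mean.n scores with
* scores built from multiply imputed items.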
Art Kendall
Social Research Consultants
In reply to this post by therp
About DVs.
If you have 3 or more responses that cannot be put into a meaningful order, THEN you have a categorical variable. I suggest you browse a book on psychometrics, for this and other background.

"Fisher's z-transformation" is the wrong name, since that name refers to the inverse hyperbolic tangent (arctanh) transformation that Fisher applied to correlations. See http://en.wikipedia.org/wiki/Fisher_transformation . Perhaps your supervisor used that term; Fisher's z distribution could be relevant to the approach you should be using, as I show below.

39 dichotomies; one item with Likert-type anchors and a 7-point range; and one numerical count. If you develop something called a *scale*, you develop it from the 39 items alone.

One simple and direct approach is to assume that you have 3 DVs, not one. There are ways to work with 3 outcomes -- either by choice of analysis, or by Bonferroni correction, or by creating a "composite score" as criterion. If you want a composite score that represents a single outcome, then you combine that 39-item scale with the two other, rather-independent outcomes. There are at least 3 obvious ways to do this.

1) Count the Likert-type item and the count as being merely equivalent, each, to *one* of the dichotomous items. To the extent that they seem to cover a part of the "universe" that is different from what is covered by the items, that is a foolish way to under-rate them. ("Universe" is a term you should learn from psychometrics.) Since this is a bad approach, I won't describe it further. You seemed to have had in mind one version of doing this.

2) Count the Likert-type item and the count as being equivalent to the scale developed from the dichotomies. Do you want to use raw counts, or do you transform them, for instance, by taking the square root, or by drawing in some large outlier or two (so 10 = "10 and above")? Anyway, for this, you could z-score the 39-item scale, the Likert item, and the count (means now 0, SDs = 1); add them together (mean still 0; SD greater) to get a composite score. You might look at this and think that it gives too much weight to those two items... see the "validity" comment in (3). For convenience of reading and interpreting, I take another step with a composite formed this way, to create a T-score with mean of 50 and SD of 10: divide by the SD (to make new SD = 1); multiply by 10 (to make new SD = 10); add 50 (to make new mean = 50).

3) From another aspect of psychometric theory, one might prefer that the parts of a composite should be weighted by their reliability, or, even better (but less accessible), by their validity. That is, if equal weights give too little weight to the 39 items (which would be my guess), you can assign (slightly) differing weights to these 3 parts of the composite, like (2,1,1) and not (4,1,1): COMPOSITE = 2*SCALE39 + LIKERT + COUNT. (Follow this by T-scoring.) I put in the "(slightly)" because even a statistician can be lulled into forgetting that the *effective* weighting is by variance, which increases with the square of the weight. Thus, if you use (2,1,1), you are already saying the first one is 4 times as important as each of the others. If you want more emphasis than that, then you might consider that your hypothesis testing is neater to describe if you ignore the other two entirely, for the purpose of your main test.

-- Rich Ulrich
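[The arithmetic in options (2) and (3), sketched in syntax; an untested sketch where the variable names are placeholders. DESCRIPTIVES /SAVE writes z-scores with a Z prefix:]

DESCRIPTIVES VARIABLES=scale39 likert7 leaflets /SAVE.
* Weighted composite, as in option (3); use weights (1,1,1) for option (2).
COMPUTE composite = 2*Zscale39 + Zlikert7 + Zleaflets.
EXECUTE.
* Rescale to a T-score: mean 50, SD 10.
DESCRIPTIVES VARIABLES=composite /SAVE.
COMPUTE tscore = 50 + 10*Zcomposite.
EXECUTE.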
Big thank you to both of you Art and Rich. I'm implementing your
advice/ideas right now. I have one question left, though: if my items are not categorical, meaning I have (unfortunately only) one interval and am measuring a continuous construct (prejudice), can I use the supposedly less biased (compared with mean substitution) Expectation Maximization? In SPSS, I could drag my items into the window for "continuous" variables and get estimates that look fine (of course separately for the 39 items, the numerical count, and the Likert scale item). Also, that way I could compute Little's MCAR test. What do you think? I would like to use a less biased imputation method. I understand that MI is the best, but I don't feel comfortable with it yet (running many many analyses with many data sets until they can be finally pooled in the last step, the regression).
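[For what it's worth, the MVA route looks roughly like this — an untested sketch with hypothetical item names; Little's MCAR test is reported with the EM estimates:]

MVA VARIABLES=item01 TO item39 likert7 leaflets
  /EM.
* Per the earlier posts in this thread: the EM *estimates* are fine, but the
* EM-imputed dataset SPSS writes out has no random error added, so don't use
* that dataset for hypothesis testing.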
Your items are very coarse, i.e., they are dichotomies. A mean or a total would have more possible values. The possible values would range from zero to the number of items on the scale.

If you want to try imputation, use only items that are in the same scoring key group as contributors. That way you preserve discriminative validity. You certainly want to avoid using IVs as contributors to DVs and vice versa.

You can find out whether or not there is any bias in your estimates by getting scores different ways and comparing the results. Just include the items and scores from the different approaches in the data set. Look at the difference of each imputed value from the mean value. Look at the difference in the means, regression coefficients, correlations, etc.

You are probably thinking that it is tedious to try the different approaches as a learning exercise. I guess that is mostly because you are envisioning going back to the menus for variations on the method of handling missing data and on which set of items (original vs. with imputed values). That would be not only tedious but very poor analysis handling. If you are so far along in SPSS that you are considering multiple imputation, then you should be using the menus to draft syntax but exiting them via <paste>. Then you should look at the possibilities for any procedure by looking at the <help> for the syntax. Then you can develop your syntax by copying and pasting blocks of syntax. E.g., run FACTOR with one method of handling missing data and the original items. Copy and paste that block of syntax in the syntax window. Change the specification on how missing data is handled. Copy it again and substitute the variable list that includes the items that had imputed values. As you learn about your data and about analysis, you will want to redraft your syntax. Also, in many disciplines there is an ethical obligation to make your data and syntax available to those who request it.

Any method of deriving scores (operationalizing a construct) will include some uncertainty (i.e., bias + noise). The purpose of trying different ways of handling missing data is to get information you can use to see whether you are (a) "straining at gnats", "polishing pig iron", or "trying to make a silk purse from a sow's ear", or (b) making meaningful improvements that result in making the narrative you are putting forward more complex.

If the situation is (a), using old-fashioned approaches, then you can just add a statement to your write-up that you tried other ways of forming the scores, including MI, but they did not make meaningful differences. This strengthens your conclusions. If the situation is (b), variations in dealing with missing data made substantive differences, then you have a longer write-up and weaker conclusions.

In today's parlance, we would say not to "put too fine a point on things" or "use too many sig figs". Aristotle in his Nicomachean Ethics put the idea well: "for it is the mark of an educated man to look for precision in each class of things just so far as the nature of the subject admits".
Art Kendall
Social Research Consultants