Like Bruce, I'm not a fan of tests of assumptions, but I do pay attention to the
shape of distributions. In my experience - which is heavily biased towards
using rating scales - 90% or 99% of apparent heteroscedasticity is the fault
of "wrong scaling" rather than underlying lumpiness. Scale items can /usually/ be
analyzed as they are; scale totals occasionally benefit from transformation. Item
Response Theory uses a logistic transformation, though the complication may seem
like overkill; on the cruder side, the square root is most common, after deciding
which end should represent "zero".
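To make the "decide which end is zero" point concrete, here is a rough Python
sketch; the data, item counts, and scale limits are all invented for
illustration:

  import numpy as np

  # Hypothetical totals from summing ten 1-5 rating items (possible
  # range 10-50); these numbers are made up purely for illustration.
  totals = np.array([12, 15, 18, 22, 25, 27, 30, 33, 35, 48], dtype=float)
  scale_min, scale_max = 10, 50  # assumed limits of the hypothetical scale

  # Square root treating the LOW end as zero (compresses a high tail):
  sqrt_low_zero = np.sqrt(totals - scale_min)

  # Square root treating the HIGH end as zero (compresses a low tail):
  sqrt_high_zero = np.sqrt(scale_max - totals)

Which version is appropriate depends on which end of the scale behaves like a
true zero point for the construct being measured.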
Is there big skewness? Are there big outliers? Do these features represent scores
that you would consider at "equal intervals"? Does taking a transformation give
something that is more Normal? If there is an outlier that represents a "real interval",
that raises the question of whether /that/ case actually belongs in a least-squares
analysis of these data, or whether it should be removed and discussed as a special case.
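As a quick way of answering those questions, one might compare skewness and a
rough normality check before and after transforming, and flag extreme cases. A
minimal sketch, assuming scipy is available and reusing the invented data from
above:

  import numpy as np
  from scipy import stats

  totals = np.array([12, 15, 18, 22, 25, 27, 30, 33, 35, 48], dtype=float)
  scale_min = 10  # assumed minimum possible total
  transformed = np.sqrt(totals - scale_min)

  # Does the transformation reduce skewness / look more Normal?
  for label, x in [("raw", totals), ("sqrt", transformed)]:
      w, p = stats.shapiro(x)  # Shapiro-Wilk as a rough normality check
      print(label, "skew = %+.2f" % stats.skew(x), "Shapiro p = %.3f" % p)

  # Crude z-score rule for outliers; a flagged case that reflects a
  # "real interval" may deserve separate discussion rather than
  # inclusion in the least-squares analysis.
  z = (totals - totals.mean()) / totals.std(ddof=1)
  print("cases with |z| > 2:", np.where(np.abs(z) > 2)[0])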