This is related to an earlier post, but I've since narrowed down some of the causes, so I've started a new topic.
I have a small time series of about 30 observations covering the years 2007-2010. For 2007 and 2008 we only have a value for the last month of each quarter. All of the values are between 0 and 1.
If I run TSMODEL on the data and ask it to forecast the series without automatic outlier detection, I get a good model. As soon as I turn outlier detection on, the predicted values wind up in the 100s, even though all of the original data is between 0 and 1.
Why would automatic outlier detection cause that? (For the record, it finds two outliers.) Is there anything I can do to address it? I'd prefer to leave outlier detection on in general: we model many other time series (we SPLIT FILE by country and type of score), and across those series the results are much better with it enabled. But for this one country and score, the results are way off; the R-squared is something like -3996.
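For reference, the job looks roughly like the sketch below. The variable names (country, score_type, score) are placeholders for the real ones, dates are already defined through Define Dates, and the only thing I change between the two runs is the AUTOOUTLIER setting:

* Placeholder names; the real job splits by country and score type.
SORT CASES BY country score_type.
SPLIT FILE LAYERED BY country score_type.

* Good results: expert modeler with outlier detection off.
TSMODEL
  /MODELSTATISTICS DISPLAY=YES MODELFIT=[SRSQUARE RSQUARE]
  /MODEL DEPENDENT=score PREFIX='Model'
  /EXPERTMODELER TYPE=[ARIMA EXSMOOTH]
  /AUTOOUTLIER DETECT=OFF.

* Forecasts blow up for this one country/score: same call with detection on.
TSMODEL
  /MODELSTATISTICS DISPLAY=YES MODELFIT=[SRSQUARE RSQUARE]
  /MODEL DEPENDENT=score PREFIX='Model'
  /EXPERTMODELER TYPE=[ARIMA EXSMOOTH]
  /AUTOOUTLIER DETECT=ON.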