I’m still unclear what the issue is here. First, why have some people suggested anything other than 50% as the chance rate? Isn’t random guessing of yes or no (no condition, just guessing, the null hypothesis) no different from flipping a coin, and thus 50:50? As for the statistic, you could use a nonparametric test for this, but if you code the responses as 1s and 0s, the sample mean is a proportion that is approximately normally distributed, so a one-sample t-test can work. Remember that it compares your sample mean against a fixed test value that you specify, in this case .5 (right?). If this is wrong, I’d like to know why, as I really don’t understand.
I’m fairly certain that the nature of the condition, in the question you asked, is unimportant. You want to test it against chance guessing, which would be equivalent to no condition at all. You would only want to adjust that value if you had a priori information that chance guessing was in fact biased in some way. It appears to me that no such evidence exists, so you would keep the test value at .5. It really seems simpler than people are making it out to be, but maybe I’m wrong on this.
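If it helps to see it concretely, here is a minimal sketch in SPSS syntax, assuming each trial’s response has been coded as a 0/1 variable that I’ll call correct (the variable name is just for illustration, not from the original question):

* Assumes one case per trial, with correct = 1 for a "yes" response and 0 for "no";
  the variable name is hypothetical.

* One-sample t-test of the sample proportion against the chance value of .50.
T-TEST
  /TESTVAL=0.5
  /MISSING=ANALYSIS
  /VARIABLES=correct
  /CRITERIA=CI(.95).

* Exact nonparametric alternative: binomial test against p = .50.
NPAR TESTS
  /BINOMIAL (0.50)=correct
  /MISSING ANALYSIS.

The NPAR TESTS binomial command gives the exact version of the same comparison against .50, which may be preferable with a small number of trials; with larger samples the two approaches will agree closely.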
Matthew J Poes
Research Data Specialist
Center for Prevention Research and Development
University of Illinois
510 Devonshire Dr.
Champaign, IL 61820
Phone: 217-265-4576
email: [hidden email]
From: SPSSX(r) Discussion [mailto:[hidden email]] On Behalf Of Fredric E. Rose, Ph.D.
Sent: Thursday, April 19, 2012 12:28 AM
To: [hidden email]
Subject: Re: Frequency analysis
Wow. What an incredibly condescending comment. Thank you for enriching everyone’s life with it, especially since you know little to nothing about what I do, who I am, or why I am asking the question. I spared the list the irrelevant background details of the question and focused on a desire for some insight on the statistical analysis of nonparametric data, and boy am I glad you were here to school me.
As to the paper you mentioned: yes, I have it and have read it, and others. They don’t address the question I asked, because those numbers all relate to norms for the DRM lists, and I was not asking how to determine whether the rate of false recall in one study differed from the rate in another using the same lists. Perhaps I didn’t express it clearly, or perhaps I should be faulted for not having read every single paper on false memory (shame on me; there probably aren’t that many), but thank you for informing me that the SPSS list is not the place to ask questions of a statistical nature. Imagine my surprise, given that I’ve been a subscriber to this list for 7+ years and have read countless questions of this type, all answered by other subscribers. Apparently, things have changed.
If you don’t mind, Rich, take a look in the upper right corner of your keyboard. You’ll see a key that is probably marked “Delete”. Should I ever choose to post to this list again, daring to ask for information about the application of SPSS to a statistical
problem, feel free to use that key so that you might be spared my stupidity.
To the rest of the list – I appreciate your insights and thank you for taking the time to answer a question that at least one of us feels was beneath him. I feel (somewhat naively, apparently) that it is an interesting question on probability but fear there
may not be an easy answer.
On 4/18/12 6:10 PM, "Rich Ulrich" <rich-ulrich@...> wrote:
I Googled on <Roediger and McDermott False Memory> and found, immediately, an article on "Factors that determine false recall..."
http://memory.wustl.edu/Pubs/2001_Roediger.pdf
And the intro mentions rates from 0.01 to 0.65.
If you are going to start into doing research, you really need to do a large amount of reading to prepare yourself, both in general (when you know little about research) and on your specific topic.
--
Fredric E. Rose, Ph.D.
Associate Professor of Psychology
Palomar College
(760) 744-1150 x2344
frose@...