
Now seems like a good time to remind folks of the Stanford dead fish fMRI study: https://law.stanford.edu/2009/09/18/what-a-dead-salmon-remin...

fMRI has always had folks highlighting how shaky the science is. It's not the strongest of experimental techniques.





This study was really highlighting a statistical issue which would occur with any imaging technique with noise (which is unavoidable). If you measure enough things, you'll inevitably find some false positives. The solution is to use procedures such as Bonferroni and FDR to correct for the multiple tests, now a standard part of such imaging experiments. It's a valid critique, but it's worth highlighting that it's not specific to fMRI or evidence of shaky science unless you skip those steps (other separate factors may indicate shakiness though).
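To make the multiple-testing point concrete, here is a minimal NumPy sketch (not from the study, just an illustration): simulate many pure-noise "voxels" and compare how many false positives survive with no correction, with Bonferroni, and with Benjamini-Hochberg FDR. The voxel count and alpha are arbitrary choices for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate 10,000 "voxels" of pure noise. Under the null hypothesis,
# p-values are uniform on [0, 1], so every "significant" voxel here
# is by construction a false positive.
n_voxels = 10_000
p_values = rng.uniform(0, 1, n_voxels)

alpha = 0.05

# Uncorrected: expect roughly alpha * n_voxels (~500) false positives.
uncorrected_hits = int(np.sum(p_values < alpha))

# Bonferroni: test each voxel at alpha / n_voxels, which controls the
# family-wise error rate across all tests.
bonferroni_hits = int(np.sum(p_values < alpha / n_voxels))

# Benjamini-Hochberg FDR: reject the k smallest p-values, where k is the
# largest index with p_(k) <= (k / m) * alpha.
sorted_p = np.sort(p_values)
thresholds = alpha * np.arange(1, n_voxels + 1) / n_voxels
passing = np.nonzero(sorted_p <= thresholds)[0]
fdr_hits = int(passing[-1] + 1) if passing.size else 0

print(uncorrected_hits, bonferroni_hits, fdr_hits)
```

Run on noise, the uncorrected count sits in the hundreds while both corrected counts are at or near zero, which is exactly the gap the salmon poster was dramatizing.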

When we published the salmon paper, approximately 25-35% of published fMRI results used uncorrected statistics. For myself and my co-authors, this was evidence of shaky science. The reader of a research paper could not say with certainty which results were legitimate and which might be false positives.

Hey, I know you got a lot of flak for the article. So, I just wanted to thank you for having the courage to publish it anyway and go through all of that for all of us.

I go back to the study frequently when looking at MRI studies, and it always holds up. It always reminds me to be careful with these things and to try to get others to be careful with their results too. Though to me it's a bit of a lampooning, surprisingly it has been the best reminder for me to be more careful with my work.

So thank you for putting yourself through all that. To me, it was worth it.


Many thanks - appreciate the kind words. Thanks also for always working with care in your science. It makes all the difference.

Among other challenges, when we first submitted the poster to the Human Brain Mapping conference we got kicked out of consideration because the committee thought we were trolling. One person on the review committee said we actually had a good point and brought our poster back in for consideration. The salmon poster ended up being on a highlight slide at the closing session of the conference!


Thank you for publishing that paper, which I think greatly helped address this problem at the time, which you accurately describe. I guess things have to be taken in their historical context, and science is a community project which may not uniformly follow best practices, but work like this can help get everyone in line! It's unfortunate, and no fault of the authors, that the general public has run wild with referencing this work to reject fMRI as an experimental technique. There are plenty of different ways to criticize it today, for sure.

> a statistical issue which would occur with any imaging technique

It sounds like it goes beyond that: if a certain mistake ruins outcomes, and a lot of people are ruining outcomes and not noticing, then there's some much bigger systematic problem going on.


Why are you calling Bennett et al "the Stanford... study" ? Not one person on that team went to Stanford.

Direct link to the poster presentation: http://prefrontal.org/files/posters/Bennett-Salmon-2009.pdf


Why are you phrasing your correction in the form of a question? I think it's pretty reasonable to infer that he mistakenly thought it was a Stanford study because the link was from Stanford.

Discussed here. Others?

Risk of false positives in fMRI of post-mortem Atlantic salmon (2010) [pdf] - https://news.ycombinator.com/item?id=15598429 - Nov 2017 (41 comments)

Scanning dead salmon in fMRI machine (2009) - https://news.ycombinator.com/item?id=831454 - Sept 2009 (1 comment)


fMRI methods and statistics have advanced quite a lot since the dead fish days; that critique does not really hold up today.

While I would agree that the prevalence of the problem has been minimized in fMRI during the last 15 years, I disagree that our critique does not hold up. The root of our concern was that proper statistical correction(s) need to be completed in order for research results to be interpretable. I am totally biased, but I think that remains worthwhile.

I immediately thought of it too. Didn't realize it was that long ago.


