This study really highlights a statistical issue that would occur with any imaging technique that has noise (and noise is unavoidable). If you run enough tests, you'll inevitably find some false positives. The solution is to correct for the multiple tests with procedures such as Bonferroni and FDR, now a standard part of such imaging experiments. It's a valid critique, but it's worth highlighting that it's not specific to fMRI, and it's not evidence of shaky science unless you skip those steps (other, separate factors may indicate shakiness, though).
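To make the multiple-comparisons point concrete, here's a minimal simulation sketch (assuming NumPy, SciPy, and statsmodels are available; the voxel count and thresholds are arbitrary illustrations, not values from the paper). It tests thousands of pure-noise "voxels" and counts how many look "active" before and after correction:

```python
# Minimal sketch of the multiple-comparisons problem: every "voxel" here is
# pure noise, yet many pass an uncorrected p < 0.05 threshold by chance.
import numpy as np
from scipy import stats
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(0)
n_voxels, n_scans, alpha = 10_000, 20, 0.05

# Pure noise: no voxel carries any real signal.
data = rng.standard_normal((n_voxels, n_scans))

# One-sample t-test per voxel against a true mean of zero.
_, p = stats.ttest_1samp(data, popmean=0.0, axis=1)

print("uncorrected:", np.sum(p < alpha))            # expect ~500 false positives
print("Bonferroni: ", np.sum(p < alpha / n_voxels)) # expect ~0
reject, *_ = multipletests(p, alpha=alpha, method="fdr_bh")
print("FDR (B-H):  ", reject.sum())                 # expect ~0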
When we published the salmon paper, approximately 25-35% of published fMRI results used uncorrected statistics. For my co-authors and me, this was evidence of shaky science. The reader of a research paper could not say with certainty which results were legitimate and which might be false positives.
Hey, I know you got a lot of flak for the article. So I just wanted to thank you for having the courage to publish it anyway and go through all of that for all of us.
I go back to the study frequently when looking at MRI studies, and it always holds up. It always reminds me to be careful with these things and to encourage others to be careful with their results too. Though it's a bit of a lampoon, it has surprisingly been the best reminder for me to be more careful with my own work.
So thank you for putting yourself through all that. To me, it was worth it.
Many thanks - appreciate the kind words. Thanks also for always striving to work with care in your science. It makes all the difference.
Among other challenges, when we first submitted the poster to the Human Brain Mapping conference we got kicked out of consideration because the committee thought we were trolling. One person on the review committee said we actually had a good point and brought our poster back in for consideration. The salmon poster ended up being on a highlight slide at the closing session of the conference!
Thank you for publishing that paper; I think it greatly helped address this problem at the time, which you describe accurately. I guess things have to be taken in their historical context, and science is a community project that may not uniformly follow best practices, but work like this can help get everyone in line! It's unfortunate, and no fault of the authors, that the general public has run wild with citing this work to reject fMRI as an experimental technique. There are plenty of ways to criticize it today, for sure.
> a statistical issue which would occur with any imaging technique
It sounds like it goes beyond that: if a certain mistake ruins outcomes, and a lot of people are making that mistake without noticing, then there's some much bigger systematic problem going on.