
– Ian Smith @EcologIan
1. Choosing A Methodology
Statistical advice was integral to the initial experimental design, but we decided the advice led to too much work.
– David C Logan @angerstusson
Selected to use open data for analysis because we didn’t manage to negotiate access to potentially more useful sources.
A Northern blot was run instead of real-time qPCR because the PI is old and does not trust results unless he sees a band.
– Adriana Heguy @AdrianaHeguy
I used a snowballing method to gather interviewees/I just asked my mates as no one else came forward.
We used default priors because we have no idea what the fuck they mean, and fiddling with them changed all our results.
2. Carefully Logging Fieldwork
N=99 because the 100th etherised fruit fly became conscious and walked off.
Samples from this hotspring are not represented because they were eaten by a bear as we watched in horror.
We carefully decontaminated our sampling spoons, but did nothing about all the slag dust blowing around on site.
We only measured it in 5 patients cos the software crashed/patient moved/I pressed the wrong button the other 20 times.
We only collected one core per tree because why don’t you try repeatedly screwing a hand-powered drill into solid oak?
– Scott St. George @scottstgeorge
3. Precision in the Lab
The samples were incubated at 37 degrees for the duration of one episode of the Daily Show.
– twoearsoneheart @twoearsoneheart
DNA band was excised from the gel, dropped on floor and then scraped into an epitube for extraction.
Protein digestion was performed using an automated system. Glitches were solved by hitting the robot with a spanner.
– Vibhuti Patel @VibhutiJPatel
Samples were stored at -80 degrees until required except for a day or so when our freezer compressor failed.
– David C Logan @angerstusson
The Eppendorf tubes were “shaken like a Polaroid picture” until that part of the song ended.
4. Evaluating Findings
We use this simple, intuitive method because we were scared of asking a real statistician how to do it properly.
We chose this model because we already tested the drug in 3 others but it didn’t work.
Subjects A, C, and F were excluded because they were fucking up the results we wanted.
The appropriate statistical analysis was selected using the “which one gives the smallest error bars” method.
Hypothesis was not redrafted to fit results, and results were not cherry picked to fit hypothesis. It was more mutual.
– Snarky Everything @BadPhysics
5. Presenting Them Thoughtfully
Most of the buzzwords in the title have nothing to do with the article, but they won’t publish it without them.
Our methodology is tentatively relevant to a range of other disease sites because saying so might get us more funding.
LITERATURE CITED (but not actually read).
– Dr. Twittenheimer @DrTwittenheimer
We didn’t read half of the papers we cite because they are behind a paywall.
From my supervisor: bear-bear interactions were significantly reduced in late season, after one of the two bears died.
Last But Not Least
What we lack in meaningful results we make up for with a witty pun in our title.
– Dr. Twittenheimer @DrTwittenheimer
Hooked? Check #overlyhonestmethods for more.
Comments
That’s amazing. Thanks for the tips too!
I’m afraid I wasted most of the day reading the hashtag. Brings back the days in the lab…
LOL. You’ve made my day. Reminds me of our less-than-stellar methods used while doing a site survey in engineering school. I have to credit the instructor with giving our group a “C” until we improved our technique.
The tag is funny, but also frightening: the truth is a lot of science is done like that…
Yes, that’s why there are checks and balances, like peer-reviewed journals, put in place as an imperfect but real-world attempt to keep science rooted in good data and not wishful thinking. I would be careful about judging these quotes as completely serious excerpts from real-world scientists, though, considering that Dr. Twittenheimer’s website states that this is the “comedic alter-ego of a real life scientist.” That’s one thing I learned from my science classes: always consider the source of your data. That’s why I can’t take these excerpts at face value.
As a scientist I unfortunately know how much of it is true.
For example, if one paper contains experiments with different values of p, especially values other than 1% or 5%, you know they were chosen after the fact to match the data. In peer-reviewed, published papers in biology, for instance, it is not uncommon to find something like p=0.003, or p=0.7, or both in the same paper.
Check out TED (9:25) or PLOS.
I’m not refuting your point that “bad science” exists in the real world. I’m saying that, ironically, it looks like you are including questionable data in your post to support your point.
I have worked in various labs and seen lots of papers, so I know from personal experience how “science” is done. The tweets are anecdotal, not “data”, and I’m not making an argument here.
My mistake. When I first commented on your post, I thought it was meant as humor, but then our conversation turned to the serious nature of what goes on in “real” laboratories. So then I thought these quotes were being used as evidence to support that discussion, albeit anecdotal. But since you’re not using them to mount an argument about how slap-dash and error-riddled real world science is, I’ll just take them in the humorous spirit they were offered. Thanks for the discussion!