1. Choosing A Methodology
Statistical advice was integral to the initial experimental design, but we decided the advice led to too much work.
We selected open data for analysis because we didn’t manage to negotiate access to potentially more useful sources.
A Northern blot was run instead of real-time qPCR because the PI is old and does not trust results unless he sees a band.
I used a snowballing method to gather interviewees/I just asked my mates as no one else came forward.
We used default priors because we have no idea what the fuck they mean, and fiddling with them changed all our results.
2. Carefully Logging Fieldwork
N=99 because the 100th etherised fruit fly became conscious and walked off.
Samples from this hotspring are not represented because they were eaten by a bear as we watched in horror.
We carefully decontaminated our sampling spoons, but did nothing about all the slag dust blowing around on site.
We only measured it in 5 patients cos the software crashed/patient moved/I pressed the wrong button the other 20 times.
We only collected one core per tree because why don’t you try repeatedly screwing a hand-powered drill into solid oak?
3. Precision in the Lab
The samples were incubated at 37 °C for the duration of one episode of the Daily Show.
DNA band was excised from the gel, dropped on floor and then scraped into an epitube for extraction.
Protein digestion was performed using an automated system. Glitches were solved by hitting the robot with a spanner.
Samples were stored at −80 °C until required, except for a day or so when our freezer compressor failed.
The eppendorf tubes were “shaken like a polaroid picture” until that part of the song ended.
4. Evaluating Findings
We use this simple, intuitive method because we were scared of asking a real statistician how to do it properly.
We chose this model because we already tested the drug in 3 others but it didn’t work.
Subjects A, C, and F were excluded because they were fucking up the results we wanted.
The appropriate statistical analysis was selected using the “which one gives the smallest error bars” method.
The hypothesis was not redrafted to fit the results, and the results were not cherry-picked to fit the hypothesis. It was more mutual.
5. Presenting Them Thoughtfully
Most of the buzzwords in the title have nothing to do with the article, but they won’t publish it without them.
Our methodology is tentatively relevant to a range of other disease sites because saying so might get us more funding.
LITERATURE CITED (but not actually read).
We didn’t read half of the papers we cite because they are behind a paywall.
From my supervisor: bear-bear interactions were significantly reduced in late season, after one of the two bears died.
Last But Not Least
What we lack in meaningful results we make up for with a witty pun in our title.
See #overlyhonestmethods for more.