Tuesday, July 29, 2014
I'm Done
I know, I haven't written anything here all July. No post about Okinawa or about anything else. I also haven't emailed friends, or worked on my side projects, or read any books, or studied Japanese. It's summer, again, with temperatures in the 35–40°C range, and my body shuts down in this heat. Constantly tired, constant headaches, no strength and no appetite. This is one time I wish we lived in cooler, breezier Okinawa.
I've long thought that I'd eventually get used to the summers here. But no, I realize that I won't. If anything it's worse this year than usual. Up until last week it was still OK, but now it's all I can do to push through at work, then quietly collapse the moment I get home. Yesterday I was falling asleep right at the dinner table.
Air conditioners are only marginally better than nothing. Belching ice-cold, clammy air into the room may make it cooler, but it sure doesn't make you comfortable. You can sit and sweat in the heat, while wearing thick, woollen socks because the floor is freezing. An effective cooling system would probably have to completely replace the air instead of just trying to mix cold and hot, but that would be like living in a wind tunnel; not sure it'd be any improvement.
I give up. Better to accept the weather than try to fight it. Sleep, rest and relax, and avoid any non-work pressures. I'll post, or keep in touch, if or when I feel up to it. If I don't, I won't. Other things can wait. Looking forward to autumn.
Tuesday, July 8, 2014
Are replication efforts useless?
A nice little dust-up is happening in neuroscience right now: an experimental neuroscientist claims that we should not waste our time replicating published results. Why? Because:
"unsuccessful experiments have no meaningful scientific value."
Richard Tomsett goes through the piece here: Are replication efforts pointless? And Neurosceptic has a good take-down too: On "On the emptiness of failed replications"
The gist of the argument is that experiments can fail for any number of reasons, and so they can't falsify the published result. Null findings should not, in his view, even be published at all. He only gives a cursory nod to the possibility that the initial positive result may be false, then proceeds to ignore it.
This sounds almost bizarre. But here is the unstated assumption that his entire argument rests on: "I already know my idea is right, and the experiment is only there to confirm what I already know." His whole chain of arguments depends on this, and would make no sense without it.
In his view, an experiment is simply there to give evidence for something we already know (or wish) to be true. If it works, that confirms what we already know. A failed replication must thus fail because of experimental error of some kind; since we already know our hypothesis must be true, that's the inescapable conclusion.
This attitude is the real danger here. If your base assumption is that your failures happen because of experimental error, not because your idea is wrong, then it can become ever so tempting to help an uncooperative experiment along just a bit. Add a few subjects — or remove a couple of "obviously" aberrant data points — to reach statistical significance. Clean up that blurry, messy picture a little. Don't include the failures in your analysis. Make the story clearer and neater. No need to actually run that time-consuming, expensive confirmatory experiment the reviewers wanted. We already know we're right after all.
I bet most cases of falsification and fraud in science started out from this assumption. People came into the lab knowing they had the right idea, and simply wanted to get the confirmation that would convince everybody else. Telling people — young people just starting out — that this is the right attitude for doing science is dangerous.