The promise and pitfalls of self-experimentation

Self-experimentation is risky but has led to major scientific breakthroughs. Can science and society learn to work together to get the best of both worlds?

A self-portrait within a self-portrait. "Self Portrait with Vanitas Symbols" by David Bailly circa 1651. Web Gallery of Art/Wikimedia Commons (public domain) 


Nobel laureate Barry Marshall is famous for using himself to test the prevailing view that bacteria could not survive in the acidic environment of the stomach. He ingested Helicobacter pylori, a bacterium found in many patients presenting with stomach ulcers, which can be life-threatening if they perforate. In the days after infecting himself, Marshall felt nauseous, vomited, and developed a severely inflamed stomach (gastritis). After taking an antibiotic, he made a rapid recovery, showing not only that bacteria could thrive in the stomach but also that they could contribute to gastritis, which can lead to gastric ulcers. The finding was a major breakthrough, but to make it he had put his life on the line. In recent years, antibiotics have become less effective in treating Helicobacter pylori. Using more traditional science, Marshall’s group recently identified the Achilles’ heel of the bacterium, which may open the door to treating it with natural products. Is such self-experimentation ethical and effective?

As a research student studying perception, it is not uncommon for me to pilot a study on myself as I fine-tune the method. In fact, many scientists have an incentive to run small informal “pilot” studies or to self-experiment, so that they can find the next big result without wasting time on dead-end studies. A recent review argues that rewarding researchers for their paper counts, citations, or grant funding can have the unintended effect of encouraging them to oversell positive results and undersell negative ones. Such a model likely drives the well-known publication bias, in which small, underpowered studies with apparently positive results tend to dominate the literature, making it difficult to publish a well-designed study with opposing findings.

With such incentives, it would not be surprising if a number of scientists do take risks and self-experiment. How does self-experimentation stack up against traditional science? Human experimentation often benefits from within-subjects designs, in which each participant receives every experimental intervention. If the intervention is more serious or invasive, the study is often designed with a large sample size and sham or placebo groups. These designs account for individual variability and for the beliefs of the participants. A single-subject design does away with these checks and balances: the experimenter is not blinded, and the outcome depends entirely on one person’s physiology.
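For the statistically inclined, a quick power calculation makes the contrast concrete. The sketch below, written in Python with the statsmodels library, is purely illustrative; the effect size and thresholds are conventional assumptions, not values from any study discussed here.

```python
# Illustrative power analysis: how many participants does a properly
# designed study need? (Assumed values: a medium effect and the
# conventional alpha and power thresholds.)
from statsmodels.stats.power import TTestPower, TTestIndPower

effect_size = 0.5   # assumed medium effect (Cohen's d)
alpha = 0.05        # conventional false-positive rate
power = 0.80        # conventional chance of detecting a true effect

# Within-subjects design: each participant receives every intervention,
# so the test is run on paired difference scores.
n_within = TTestPower().solve_power(effect_size=effect_size,
                                    alpha=alpha, power=power)

# Between-subjects design: each group receives one intervention.
n_between = TTestIndPower().solve_power(effect_size=effect_size,
                                        alpha=alpha, power=power)

print(f"Within-subjects:  ~{n_within:.0f} participants")   # ~34
print(f"Between-subjects: ~{n_between:.0f} per group")     # ~64
# A single-subject experiment (n = 1) cannot reach this power at all.
```

The within-subjects design needs roughly half the participants because each person serves as their own control, which is exactly the efficiency a self-experimenter gives up by being a sample of one.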

As long as the science itself is correct, one would think that self-experimentation is reliable, but is it ethical? And do the needs of the many justify the risk to the experimenter? These questions need to be assessed on a case-by-case basis. My own experiments are often relatively non-invasive, as they require participants to make responses to stimuli such as touch on the arm. Even so, a colleague of mine in perception research once told me that you should only conduct an experiment if you would be comfortable being a participant in it. However, the researcher’s opinion alone is not enough to decide where to draw the ethical line; modern ethics committees include a range of community members, representative of society, who decide whether a study is ethically designed. Before these checks were in place, studies were performed that society would certainly be uncomfortable with. This ethical discordance between a few extreme experimenters and society drove the creation of the Nuremberg Code for ethical experimentation. This code, along with the more recent Declaration of Helsinki, which is updated from time to time, governs how we should design experiments that use human participants. However, many instances of self-experimentation likely don’t abide by these codes.

A test subject at the US Air Force's Non-Invasive Brain Stimulation lab uses a controller to navigate a set of computer tasks. Electrodes for targeted brain stimulation are attached to his head. J.M. Eddins, Jr./US Air Force (CC BY 2.0)

In some cases, this default to self-experimentation is understandable, as some human experiments would struggle to find participants, ranging from the tediously boring to the incredibly invasive and dangerous. It is not uncommon for studies in visual perception to require participants to sit at a computer for hours over multiple days, clicking left or right on a mouse in response to a display on the screen. At the extremes of our exploration of human physiology are studies requiring participants to lie in bed face down for thirty days or live in near-isolation for eight months.

If scientists are investigating a question that would benefit society but cannot get anyone to participate, how are they to find an answer? Sometimes they go rogue, as Barry Marshall did. In other instances they may convince an ethics committee to give the go-ahead. For instance, a study in Sydney, Australia, in which researchers wanted to know more about how we breathe, was so invasive that the three researchers themselves were the only participants. Each participant received a full-body paralytic but remained fully conscious throughout the experiment (see here for video). We should be grateful for the work of past self-experimenters, but now we should look for ways to formalise this research and question the need for experimenters to act as their own guinea pigs.

Scientists are not the only people engaging in self-experimentation. In the last few years, evidence has emerged that stimulating the brain with electrical currents (a technique known as transcranial direct-current stimulation, or tDCS) can affect its performance. These effects are of interest to gamers, sportspeople, and those in cognitively demanding jobs. Anna Wexler, a PhD student at MIT specialising in brain stimulation, wrote in an email to me that in theory “it would be possible for data from home users to be transformed into scientifically legible knowledge." However, in practice there are many “methodological and ethical obstacles that would be difficult to overcome." Her paper on the topic, currently in press, lists these obstacles, a number of which also haunt traditional science: small sample sizes, lack of controls, and self-report biases.

Even if this worked, would it be ethical, and could scientists provide home users with a framework within which to experiment? Scientists currently deem tDCS relatively ‘safe’ with regard to short-term physical effects; the long-term effects are unknown. Wexler says that as a result “scientists would be unlikely to ever deem tDCS safe for use at home”.

Participant-led research could overcome some of the problems with both traditional science and self-experimentation. A big problem for society is that scientists are generally not good at communicating the importance and applicability of their science. Health policy researcher Effy Vayena from the University of Zurich says that participant-led research “will impact the barrier tremendously”, highlighting that those involved become “co-producers of knowledge and evidence, something that in a post-fact world is of great significance”.

But what if this model failed? Could it further segregate science from society? Vayena believes that “society needs the approach and therefore it must adapt”; the question is not whether change will happen but when. Her work suggests that, for the adaptation to be achieved, it will need the support of research institutions and funding agencies, as well as journals’ acceptance of papers produced using this method.

She continues, “Ultimately every gatekeeper in the knowledge production and its translation into any kind of product should acknowledge the value of these models and let them in.” How would research on minority groups, such as people with rare diseases, benefit from this model? Would they simply be ignored by the masses? Surprisingly not: there have already been many instances in which rare disease patients have “designed research, self-experimented, created coalitions with scientists and eventually made important scientific contributions”. There is a benefit to scientists, too, as studying these rare diseases can fill gaps in scientific knowledge.

Self-experimentation has led to many breakthroughs in the past, but many of them were risky, and the method is prone to bias. We might now see self-experimentation restricted to the occasional pilot study where necessary, subject to replication with more robust experimental methods. Alternatively, scientists with a more open mind might consider the various models of participant-led research, which offer a middle ground between self-experimentation and traditional science.

Edited by Diana Crow