Changing how clinical trials are designed and reported could save billions of dollars.
Every year, we spend $200 billion globally on health and medical research, more than the annual GDP of New Zealand. Yet up to 85% of this money is wasted on research that asks the wrong questions, is badly designed, goes unpublished, or is poorly reported. In addition, a 2005 study by John Ioannidis showed that claimed research findings are more likely to be false than true; that is, they will be proven incorrect when better-quality research is conducted later.
So is clinical research a waste of time, and therefore money? As a researcher myself, I’m inclined, as you might expect, to say no. Let me explain why clinical trials are so expensive, and how we can make these expenses count.
Clinical trials are affectionately termed the ‘gold standard’ method of evaluation in a healthcare setting, and are necessary for marketing approval for everything from the paracetamol you take to ease your hangover to treatments for cancer and Alzheimer’s disease. But they also require a huge amount of resources. Trials can take years to complete and often involve thousands of people across several countries to ensure that research questions are answered satisfactorily.
At the core of high-quality medical research are randomised controlled trials. In these trials, participants are randomly allocated to one of two or more treatment groups (referred to as arms). Most people think of trials involving drugs, but interventions might also include surgical procedures, medical devices, and lifestyle changes such as exercise or diet modification. Randomising participants ensures that outside influences, such as sex, age, or educational status, are distributed evenly across the treatment groups, effectively negating any bias these factors might otherwise introduce.
Randomised trials must also be ‘controlled’; that is, one of the treatment arms acts as a control group to which the treatments are compared. In most cases, this control group will be given the standard treatment option for their condition or disease. This allows us to see if the new treatment we’re testing is better than what is already available to patients.
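For readers curious how this allocation works in practice, here is a minimal sketch of one common approach, block randomisation, which keeps the arms balanced as participants enrol. The function name, arm labels, and block scheme are illustrative assumptions, not a description of any particular trial's method.

```python
import random

def randomise(participants, arms=("new treatment", "standard care"), seed=None):
    """Randomly allocate participants to trial arms in shuffled blocks.

    Each block contains exactly one slot per arm, shuffled independently,
    so the arm sizes never drift apart by more than the block size.
    Illustrative sketch only; real trials use audited randomisation systems.
    """
    rng = random.Random(seed)  # seeded here only so the example is reproducible
    allocation = {}
    block = []
    for person in participants:
        if not block:
            block = list(arms)   # start a fresh block with one slot per arm
            rng.shuffle(block)
        allocation[person] = block.pop()
    return allocation

# Eight participants end up split evenly: four per arm
groups = randomise([f"P{i}" for i in range(8)], seed=42)
```

Because every block holds one slot per arm, an even number of participants always yields equal group sizes, which is why block schemes are popular for smaller trials where simple coin-flip randomisation could produce lopsided arms.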
In a recent study, researchers looked at trials funded by Australia’s National Health and Medical Research Council between 2008 and 2010. These 77 studies required a total of A$59 million in public funding. Most people would consider this an acceptable price to pay for improved survival rates, but what if most of that A$59 million were wasted on correctable problems?
The estimate that 85% of all health research is avoidably ‘wasted’ is shocking. As an optimist, I focus on the ‘avoidably’ part of that sentence; we have a lot of work to do, but it’s all work ready to be done, rather than issues we hope to solve at some point in the distant future.
The problem of research waste has been a central focus of the health services research and evidence-based medicine communities since the publication of Ioannidis’ paper “Why most published research findings are false”, and there is a clear push to prevent research being wasted.
As a PhD student in the Health Services Research Unit at the University of Aberdeen, I am working to improve the efficiency of trials. There is a bizarre contradiction in the trials world; we do trials to generate good quality evidence, but the way we carry out certain aspects of trials is not remotely evidence-based.
Here’s an example. Recruiting participants for trials is a notoriously difficult process that wastes time, effort and money, but there is limited evidence that the methods we currently use to improve recruitment are particularly efficient. For example, many trial teams approach patients via existing healthcare infrastructure, but these systems are already overstretched without the addition of research tasks, and it may be that there’s a better way to find patients without the need to involve physicians. If recruitment falls short of the trial’s target, the validity of the trial as a whole can be at risk.
Many countries have introduced publicly accessible websites that allow people to search for trials currently in the process of recruiting. Patients can find trials that are relevant to their disease state, meaning the healthcare system does not need to be directly involved with recruitment. As yet we don’t have evidence to support or refute the effectiveness of these websites, so they are often used in conjunction with other recruitment strategies.
Other research groups are working to alleviate research waste by tackling poor reporting of experimental methods. “Most of us have probably tried to recreate a meal we enjoyed in a restaurant,” wrote epidemiologist Tammy Hoffmann in a recent article. “But would you attempt it without a recipe? And if you have to guess most of the ingredients, how confident would you be about the end result?”
It makes sense; for health research to be picked up and implemented in a clinical setting, we need to give clinicians the full recipe. Interventions used in trials might involve drugs or non-drug treatments like exercise, psychosocial or dietary advice, and giving partial details is a sure-fire way to ensure research doesn’t make its way to patients. Crucial details, such as the materials needed to carry out interventions, are lacking in up to 60% of trials of non-drug interventions, and the problem occurs in drug studies, too. These figures come from published trial reports and don’t discriminate between publicly and industry-funded trials; full recipes are lacking across both.
Research is an imperfect process, and with research funds getting increasingly scarce worldwide, it’s important that we make a concerted effort to reduce the intrinsic inefficiency of trials. At the very minimum, we must work to ensure trial results are published in a timely manner.
On a wider, and perhaps more optimistic, scale, it’s clear that researchers need to take responsibility for disseminating the results of the projects they are involved in. It’s no longer acceptable for results to be presented only at specialist conferences that few clinicians attend. Trials are conducted with the explicit aim of improving human health, and it’s down to researchers to ensure results are circulated, and to the public to hold researchers accountable.
Edited by Andrew Katsis and Sara Paradowski