Evaluating Digital Health...a tricky business

In the wake of our new Health Secretary's announcement that an impressive £500 million is to be spent on transformative technologies within the NHS (less impressive once we all realised it was coming out of the existing budget), and the controversial rise of Digital Health start-ups (ahem…Babylon), this month we've turned our attention to how we go about evaluating health technologies.

The evaluation of Digital Health is a minefield, posing a challenge not just to the technologists themselves but to academics, clinicians and policymakers. Unfortunately, all this indecision ultimately affects those we want to help the most: our patients and, in the wider scheme of things, our citizens.

Fortunately, it seems a few others had been pondering the same thing, and just this month articles in The Lancet and JAMA landed, tackling aspects of this issue. The articles describe how the Randomised Controlled Trial (RCT) has been, and still is, the gold standard for providing evidence of whether a new intervention is effective. Indeed, the majority of guidelines in medicine are based on evidence from RCTs, with their impact multiplied when combined (in the form of meta-analyses). But we also know how restrictive RCTs can be: they are costly and time-consuming, and given the often narrowly defined entry criteria for participation, their results aren't particularly generalisable to the wider population seeking to benefit from the intervention.
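
To make that "impact multiplied when combined" point concrete, here is a minimal sketch of a fixed-effect, inverse-variance meta-analysis, the standard way trial results are pooled. The three trial results below are invented for illustration only.

```python
# A minimal sketch of pooling RCTs: inverse-variance (fixed-effect)
# meta-analysis weights each trial by the precision of its estimate,
# so the combined estimate is tighter than any single trial's.
import math

trials = [  # (effect estimate, standard error) from three invented RCTs
    (0.30, 0.15),
    (0.25, 0.20),
    (0.40, 0.10),
]

weights = [1 / se ** 2 for _, se in trials]   # weight = 1 / variance
pooled = sum(w * eff for (eff, _), w in zip(trials, weights)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))       # pooled standard error

print(f"pooled effect = {pooled:.2f}, SE = {pooled_se:.3f}")
# The pooled SE (~0.077) is tighter than even the best single trial's
# 0.10: combining trials really does multiply their impact.
```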

Before we point the finger at academics or Pharma (pharmaceutical trials make up the lion's share of clinical trials) over strict candidate selection, it's worth remembering why it exists: keeping all other variables as identical as you possibly can is one of the most effective ways to demonstrate, within an acceptable statistical margin, whether an intervention has had an effect. Otherwise you simply can't make sense of the effect your intervention has had. A way around this might be to use eye-wateringly large numbers of trial participants at enormous cost, and even then, with an unselected trial population, the risk of diluting any effect the intervention had is large. With a lot riding on this (a new drug has already cost billions by the time it reaches the trial stage), this is a risk most won't take. Many have called for a more pragmatic way of doing things, where effectiveness trumps efficacy.

(Treweek and Zwarenstein, 2009)
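
To see why an unselected population is such a risk, consider the standard sample-size formula for a two-arm trial with a continuous outcome: the required number of participants scales with the inverse square of the effect size, so diluting the effect by half quadruples the trial. A minimal sketch, with purely illustrative numbers:

```python
# Illustrative only: how diluting an intervention's effect inflates the
# sample size an RCT needs to stay adequately powered.
from scipy.stats import norm

def n_per_arm(delta, sigma=1.0, alpha=0.05, power=0.8):
    """Participants per arm to detect a mean difference `delta` (in SD units)."""
    z_a = norm.ppf(1 - alpha / 2)  # two-sided significance threshold
    z_b = norm.ppf(power)          # desired statistical power
    return 2 * ((z_a + z_b) * sigma / delta) ** 2

# A tightly selected population might show the full effect (0.5 SD); an
# unselected one might dilute it to 0.25 SD (hypothetical figures).
print(round(n_per_arm(0.50)))  # ~63 per arm
print(round(n_per_arm(0.25)))  # ~251 per arm: half the effect, four times the trial
```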

In addition to this problem of external validity, the academic world has been rocked by difficulties in internal validity too. The term 'Reproducibility Crisis' was coined in 2010. Whilst it was widely accepted that results from trials were probably not replicable in real-world conditions, it was a firmly held belief that they could be reliably replicated under study conditions. After all, this is the cornerstone of clinical science: otherwise the outcome was the result of chance (and not your hypothesis). In recent years we've found that a worrying number of trials, even landmark studies, cannot be reproduced. This is the sort of thing that keeps academics up at night. It hints at a culture of poor study design and statistical methodology, of subconscious bias towards positive results and the desperate need to get something published in a high-impact journal. It is a problem for the academic world to address, and commendably many are doing so, starting with culture change.

For all their flaws, RCTs are still the best form of scientific evidence we have, but it's right for Health Tech to be wary.

But is it right that Health Tech simply avoids the issue altogether?

The relatively low barriers to market entry for Digital Health companies mean that anyone and everyone can have a go, with few repercussions (many products are classed as a low-risk medical device, Class I under the MHRA, and so don't have to go through any regulation beyond 'self-assessment').

The majority of Health Technologists I know are proud of their product and keen to demonstrate its efficacy, but find the thought of an RCT daunting. Beyond the limitations I have already described, the classic RCT model wasn't built with the modern interventions now available to us in mind. For example, it's commonplace for mobile phone apps to be updated constantly with new content and design features, in an iterative fashion based on feedback from the user (both explicit and implicit), but within the confines of an RCT with a strict study protocol this wouldn't necessarily be possible. We discovered how the National Testbeds Program, TIHM for Dementia, learnt this the hard way: having already registered their study protocol, they struggled to change the algorithms in their patient monitoring software to adapt to lessons they were learning from the live trial. By not adapting and amending their product in an iterative fashion, tech companies run the risk of producing an obsolete product at the end of the RCT (a few years later) whilst their competition has moved ahead.

I remain perplexed despite an informative session at the Digital Healthcare Show 2018

If academics and proponents of Evidence-Based Medicine (EBM) want these Digital Health companies to play ball, then we need to work together to design an evaluation model that works: ideally something that taps into the strengths of Digital Health and incorporates them into the study design. For starters, most mobile phone apps and web-based platforms are constantly collecting data; a study design that uses this valuable continuous stream of data in place of the single data points we might see in traditional RCTs (e.g. Quality of Life questionnaires) may well provide more information, as the sketch below illustrates. Until we do the work to validate some of these methods, we simply won't know if there's concordance with existing validated methods. As we move into an Internet of Things world, with devices collecting data all around us, and Artificial Intelligence matures to a point where it can process all this data in meaningful ways, these are the types of study designs the world will be looking for: the pragmatic trial.
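
As a sketch of that contrast, with entirely invented data: a single end-of-trial questionnaire score versus a trend estimated from the continuous stream an app collects as a matter of course.

```python
# Invented data: one end-of-trial score versus a trend fitted to the
# continuous stream a Digital Health app collects anyway.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
days = pd.date_range("2024-01-01", periods=84, freq="D")  # a 12-week trial
# Hypothetical daily in-app symptom score: gradual improvement plus noise.
daily = pd.Series(50 - 0.1 * np.arange(84) + rng.normal(0, 3, 84), index=days)

# Traditional single endpoint: one questionnaire score at the final visit.
final_score = daily.iloc[-1]

# Stream-based endpoints: weekly summaries and a trend over the whole trial.
weekly_means = daily.resample("W").mean()
slope = np.polyfit(np.arange(len(daily)), daily.to_numpy(), 1)[0]

print(f"final-visit score: {final_score:.1f}")
print(f"last weekly mean:  {weekly_means.iloc[-1]:.1f}")
print(f"fitted trend:      {slope:.3f} points/day across 84 days")
# One noisy data point versus an estimate drawn from 84 of them.
```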

The evaluation of Digital Health causes much debate, even amongst thought leaders.

With the advent of Digital Health challenging the status quo in both clinical and research paradigms, we may see other changes in the future.

Whether we move away from the current dogma of group-designed experimental trials to single-case experimental trials remains to be seen. Historically, single-case experimental design, which uses the single participant as both control and intervention (measured pre and post intervention, with repeated withdrawal and reintroduction of the intervention to increase the reliability of the effects seen), has come under criticism from the wider scientific community. However, it has been used for many years in Psychology and the Behavioural Sciences. Interestingly, a sizable share of the Digital Health market sits in this behaviour modification space, so it is one to watch; a toy simulation of the design follows.
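
Here is that withdrawal-and-reintroduction (ABAB) design in simulation. All the numbers are invented; the point is only the logic of the comparison.

```python
# Toy ABAB single-case simulation: the participant is their own control,
# with the intervention withdrawn (A phases) and reintroduced (B phases).
import numpy as np

rng = np.random.default_rng(1)
phases = ["A", "B", "A", "B"]                           # withdraw / reintroduce
baseline, effect, noise_sd, days = 10.0, -3.0, 1.0, 14  # invented values

observations = {}
for i, phase in enumerate(phases):
    mean = baseline + (effect if phase == "B" else 0.0)
    observations[f"{phase}{i + 1}"] = rng.normal(mean, noise_sd, days)

a_scores = np.concatenate([v for k, v in observations.items() if k[0] == "A"])
b_scores = np.concatenate([v for k, v in observations.items() if k[0] == "B"])
print(f"mean during withdrawal (A):   {a_scores.mean():.2f}")
print(f"mean during intervention (B): {b_scores.mean():.2f}")
# The effect is persuasive because it appears, disappears and reappears
# each time the intervention is reintroduced, with no control group needed.
```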

Finally, it reassured me to hear that, when faced with the difficulties conventional RCTs posed for their particular discipline, it was the surgeons who took it upon themselves to redefine what the evaluation of a surgical procedure should look like. I think this is what we must seek from technologists, with adequate support from academics, but perhaps it will take a tightening of the regulations to get there.