A Journey of a Thousand Miles.

Natalie Nelissen

2nd July 2018


Natalie, our evidence and evaluation lead, discusses some ideas to keep in mind when planning the evaluation of a digital health product.

Digital tools, such as apps, for health and care are currently very popular and the range on offer is expanding rapidly. In contrast, there is very little evidence that any of these tools are actually beneficial, or even safe, prompting some experts to call the field the Wild West (discussed in more detail in a previous blog post, Mobile health: the Good, the Bad and the Unknown). In this post I will focus on potential solutions to tackle this problem. I’m going to cover three topics I feel are quite ‘hot’ at the moment, based on recent conferences and seminars I’ve been to: bridging the gap between academia and industry, taking full advantage of the wealth of available data, and evaluating throughout the entire life cycle of a product.

Academia versus industry

There has always been a tension between academia and industry. Government and not-for-profit organisations, such as NHS Trusts or charities, fall somewhere in between these two extremes, but are currently not contributing as much to creating and testing digital tools. Let’s start by looking at the way both cultures differ.

In academia, in order to keep your job and/or the jobs of the people working for you, you spend a lot of time acquiring funding. The choice of topic you work on is influenced by what funding bodies deem worth investing in, and your chances of getting funding increase as you publish more high-quality peer-reviewed papers. Publishing such papers takes time, as it requires being rigorous and going into depth, since they need to be approved by experts. The typical academic standard of success for a digital health tool is therefore strong statistical evidence for its benefits and safety that convinces experts. There is no funding for, and little interest in, marketing and scaling up the resulting product.

In industry, the money to keep your job and that of your employees usually comes from the products or services you are creating. You pick a topic based on thorough market research, trying to provide something the customer really wants. You focus on getting your product ready as quickly as possible, and make sure it appeals to the customer. The standard of success for a digital health tool is whether the customer buys it, which depends on factors such as good publicity and user friendliness, not necessarily scientific evidence. Because industry knows how to sell, scale up and be sustainable, its products have the largest potential impact on society.

Most interactions between academia and industry are indirect, with both cultures staying fairly separate, such as industry sponsoring an academic project or academic ideas leading to a spin-off company. In an ideal world, we want to combine academic rigour, transparency (open data, open source) and evidence generation with industrial innovative thinking, quick turnarounds, user-centred design and long-term planning. In the field of digital health, this marriage between the two cultures is slowly starting to happen and is being actively encouraged. In particular, having employees from one culture move over to the other, on temporary or permanent placements, helps each side appreciate the strengths and weaknesses of the other.

All data, big and small

In addition to benefits and user satisfaction, other related components that can be evaluated include user engagement (do people use the tool correctly in order to get maximum benefit?), clinical risk (is the benefit worth the risk?) and cost effectiveness (is the benefit worth the cost?). It’s also important to remember that these components don’t exist in a vacuum. Even a highly beneficial, safe, cost-effective and user-friendly tool can fail due to the wider environment, such as a lack of patient and/or clinician buy-in, poor embedding in existing care pathways, or missing long-term support, strategy, senior leadership and governance. Finding the optimal way to combine and use this multitude of data is a challenge.

Furthermore, the sheer amount of data, even from a single source, can be staggering. Fitness trackers running at high sampling rates can produce gigabytes of raw data in just a few weeks. Web or app analytics can store every single interaction the user has with the tool, whether it’s a click, a scroll or simply the time spent waiting. We can now follow individuals not just when they are sick, but also when they are healthy and therefore, by definition, not tracked by the healthcare system. The trade-offs are data quality and lack of context. For example, clinical heart rate monitors offer more reliable measurements than fitness trackers. Or, when a user ‘waited’ 5 minutes on a webpage, did they read it or were they watching something on TV at the time?

It’s hard to go to a seminar nowadays without hearing the words Big Data and/or Artificial Intelligence as a solution to throw at this problem. While these certainly have an important place, we should not forget to collect and analyse the smaller, controlled datasets we used to work with in the past. This includes both qualitative data, such as interviews with and observations of users, and quantitative data, such as detailed validated measurements from a small set of users. Such Small Data (small enough for a human to understand) can be used to put Big Data into context and to test causation rather than correlation.

Evaluate for Life!

Both academia and industry already value the idea of evaluation from start to finished product and beyond; they just emphasise different aspects and use different methods.

In academia, in order to get funding for a project, you need a formal plan for how you will evaluate the benefits and safety of your tool, and to plan for any eventualities, months before you even get the resources or permission to start doing anything. At every step of the way, there is that critical voice in your head, or of your supervisor or expert reviewer, asking things like ‘is this evidence convincing, should we try something else to see if we can do better?’. There are several stages, from small-scale pilots to full-blown clinical trials, before a product is made available to the general public, and occasional updates, such as reviews, after that.

Industry is also a keen evaluator throughout the life cycle of a product, at least when it comes to data related to customer satisfaction. Before developing a product, market research is done to see what competition is out there and to gauge customer interest. During product creation, users are often involved in co-design and/or early testing. And once the product is live, customer feedback is anxiously tracked and responded to.

Ideally, all aspects of evaluation mentioned in the previous section (including user engagement, benefits, safety, value for money, adoption and sustainability) should be kept in mind from the very start and throughout the product’s entire life cycle. Any single one of these could be the point of failure for the product, and catching potential issues early and on a small scale can prevent a waste of resources later on. Frameworks such as MAPS and NASSS can help us reflect on, and potentially avoid, future failures across the lifespan of a product.

Another aspect of evaluation, particularly for digital tools, is not to see best practice as set in stone. Today’s guidelines may no longer apply next year, given the speed of digital progress and the current quest to create new, more suitable methodologies. For example, while most academics agree that the current gold standard, the randomised controlled trial in its current format (often requiring 5-10 years and millions of pounds), is not suited to digital health, there is no consensus yet on how to update or replace it. While you are waiting, have a look at the framework we’re proposing here. Aimed at non-scientists, it explains the ‘why’ and suggests some easy ways to get you started.

Takeaway

There are many challenges in evaluating the safety and benefits of digital tools. Our journey to implement digital health is one of a thousand miles. As Lao Tzu remarked, it begins with a single step. We need to start somewhere, and I suggest three guiding principles to keep in mind. First, consider borrowing the best parts from both academia and industry to create a new, but still rigorous, way of evaluating that is suited to this fast-paced environment. Second, take full advantage of the rich data on offer, considering all types and sizes. And third, think of evaluation at every step of the way for a new product, from idea to market release and beyond.

Natalie Nelissen

Research Fellow