By Natalie Banner, Understanding Patient Data Lead.
Earlier this year, we published a report on public views about what makes a fair health data partnership between the NHS and researchers, charities and industry. As part of the recommendations, we proposed what we think is a novel form of decision-making for access to data, which we termed ‘learning governance’, borrowed from the idea of a ‘learning health system’. The term is commonly used in the business learning and development world, but we’re using it to refer to decision-making on the use of data.
In this blog post, I expand on the ideas we first suggested in that report, sketching out components of a learning data governance model.
What happens at the moment?
Applicants who want to access health data often have to submit a proposal explaining what data they need and for what purposes, which is then assessed through a governance body or process. Examples include NHS Digital’s IGARD, the HRA’s CAG, and METADAC, which governs access to data for several UK longitudinal studies. These groups provide an important safeguard to ensure data is used responsibly and safely. Their requirements vary depending on the type and sensitivity of data needed, and they are often also responsible for making sure data is used in the ‘public interest’.
However, there are aspects of this type of governance that may make it hard to embed public and patient views in a sustainable way.
Firstly, the decision to allow access to data is usually a single point in a pipeline. The group typically knows little, if anything, about whether the proposals they approve actually succeed in their ambitions. This means there is little opportunity for the group to refine their criteria and decision-making process based on what has or hasn’t worked in previous uses of data.
Secondly, these groups or panels may include patient or public membership, but this involvement can sometimes be treated as tokenistic. Further, as people develop expertise in research and data issues they may start to adopt a more ‘professionalised’ perspective. Over time, this could make them less able to play the role of critical friend or representative of patient or public views. There are powerful counterexamples to this, such as the METADAC and the Genomics England panels, but it is a well-recognised problem for data initiatives seeking to embed public views into decision-making.
Thirdly, the “public interest” is not necessarily static, especially as novel uses of data emerge and public acceptability of the risk/benefit trade-offs shifts over time. There is also not a single public, and it is likely that the impact of how data is used will vary for different groups. Some may be harmed while others benefit. These nuances may not be possible to explore in one-off data access decisions made in a short time frame.
Learning from learning health systems
Learning health systems are built around a process of constant iteration and feedback, informed by continuous data collection and monitoring. The specifics vary but they follow the same broad cycle: capture data as part of care, analyse it, generate insights, inform practice and improve decision-making. This is all made possible through a learning community who benefit from the improvement process. The Nuffield Trust and the Health Foundation both provide good overviews of the concept.
Inspired by this idea, we suggest a learning data governance model for data access could follow a similar pathway, shifting from a one-way pipeline to a feedback cycle. Public views and values feed into the cycle of decision-making while remaining external to it, holding the decision-makers to account.
The model involves two key ideas to add a reflexive component to the governance system:
When data is accessed and used, the outcomes (whether positive, negative, null or unsuccessful) are reported back to inform future data access decisions.
A public/participant panel scrutinises previous data access decisions and their outcomes. It uses these insights to provide feedback, advice and recommendations to the access group or committee, to inform their future decision-making.
This panel could take a range of forms with either a fixed or evolving membership, and would draw on evidence to inform its views, from data stories and narratives to survey insights. It could be anything from an online platform with thousands of participants responding to a series of data use scenarios through to a dedicated group of engaged patients and research participants meeting regularly. This panel is in addition to public/participant involvement in the existing decision-making process; it does not replace it.
Fundamentally, it would seek to contribute to the access group’s future criteria and decision-making through answering these types of questions (these are suggestions, not a comprehensive list):
- Did we ask the right questions of the data applicants?
- What (or who) is missing from our decision-making process?
- How should the outcomes of previous projects inform our future thinking on potential risks and benefits from proposals to use data?
- What could help us make better decisions for future applications?
Shifting from governance as a barrier
This approach would have several advantages over existing systems.
Meaningful inclusion: The inclusion of patient and public views is not tokenistic: it is an embedded and vital part of the governance process. The panel provides external accountability and critique on decisions and recommends improvements or adjustments.
Learning from evidence: Assessing data access decisions retrospectively means the panel will know whether the benefits anticipated by the proposals have actually occurred. In some cases they inevitably will not; research is by nature experimental. However, the learning approach could make it easier to spot hype and overpromising in applications, as the outcomes from previous applications will feed into how future applications are assessed. Over time, this accountability for outcomes adds an evaluative dimension to data governance that is often missing.
Dynamic views on ‘public interest’: Public views and values about data use change over time, especially with the development of new and ever more sophisticated data-driven technologies. An assessment of what is acceptable might look very different now, in light of Covid-19, compared to six months ago. The learning governance feedback loop accommodates the fact that values, and what is or isn’t in the public interest, are dynamic.
Minimising risk aversion: At present, concern about potential public reactions may be creating risk-averse governance mechanisms, slowing down the data access stage by adding in more and more layers of process and paperwork. Through learning from insights and critique by the public/participant panel, the people making decisions about whether to grant access to data can have confidence that they are asking the right questions. They’ll also have a good grasp of public acceptability and risk tolerance for applications that might be edge cases.
Enabling not blocking: The model reflects a shift from conceptualising governance as a regulatory requirement and a barrier to accessing data, to seeing it as a vital part of creating an environment that promotes trustworthy practice and good performance.
What do you think?
This is very much a first iteration and there are bound to be problems we haven’t foreseen, whether conceptual (is the basic idea flawed?) or practical (could it ever actually work?). We’d welcome your views, especially critical ones. We’re keen to work with partners to identify possible places to refine this model and trial it if feasible. Please get in touch at email@example.com if you’d like to talk to us about any of the ideas we’ve raised here.
With thanks to Jeni Tennison, Tom King and Madeleine Murtagh for their insights and comments as we’ve been developing these ideas.