Recent insights show that only a small portion (<2%) of AI applications moves beyond the prototyping phase. What do we need to do to responsibly bring the benefits of AI to healthcare? And how can we best collaborate for scaled impact?
From March 17 to 19, the TU Delft Digital Ethics Centre and the Convergence Center for Responsible AI in Health Care hosted the Pre-Convening of the World Health Organization AI for Health Collaborating Consortium Network at Mondai | House of AI. The network brings together leading experts and institutions working at the frontier of AI research and application in health. Their mission? Collaborate to collectively advance the responsible development, deployment, and governance of AI solutions in health – with a focus on equity, safety, and sustainability – for improved healthcare outcomes globally.
Shaping the role of AI (at scale) in Health Care and fostering responsible innovation
During the session, DDEC and CRAIHC brought together the Dutch AI for Healthcare ecosystem with the WHO consortium. Through open dialogue, key themes were examined: What are the biggest barriers that need to be overcome for AI to make an impact on healthcare? How can we responsibly innovate and collaborate on operationalizing the guidelines for responsible AI in healthcare? How do we effectively scale innovations in healthcare?
In a panel discussion, experts from the regional ecosystem, including healthcare practitioners, members of the Dutch Ministry of Health, Welfare and Sport, ethics experts, and industry representatives, shared their learnings and experiences. They discussed challenges in the governance of AI innovation and in designing for important public values, such as strategic autonomy, public health, privacy, patients’ autonomy, accessibility, and inclusivity.
Annelien Bredenoord, President of the Executive Board of Erasmus University Rotterdam: “Universities that are making a real-world impact need to work in an interdisciplinary way. The responsible development and implementation of AI in health care is a highly complex problem. We need to build ecosystems where we work closely together. It’s important to map the ethical challenges, as has been done in the WHO reports, to get a clear conceptual understanding, but also to go in depth into real-world applications.”
Roland Driece, Director of International Affairs at the Dutch Ministry of Health, Welfare and Sport: “If you look at it from an international policy perspective, discussing these issues in policy arenas is rather new. We need better global diplomacy and collaboration for these complicated discussions. We also need people who believe in what they do, who do it from a scientific perspective, trying to make the world better. Just like the work being done in the collaborating centres and within the consortium.”
Hylke Kingma, Partner Digital Health at KPMG: “We cannot transform healthcare in just one hospital. We need a more system-wide approach when looking at the implementation of technology. Introducing new technology requires a very clear strategy, and the support of the boards. We should try to form better partnerships so we can learn from each other’s implementations and transformations, which is sometimes difficult in the commercial and scientific world. It is hard work to prove the value of a public-private network for all members. We need all the expertise that is out there.”
Vanda Almeida, Responsible AI Clinical Lead at Philips: “There are many trade-offs to be made and ethical challenges, such as whether to focus on niches, where we can have a big impact, or on broader solutions. What I really focus on every day is that people understand that today’s innovations are for someone we don’t know, but tomorrow it can be one of us, it can be a family member. So just bring the human to the numbers. And if we create a space for that, bring that consistently, systematically to the conversation, ethics becomes a little easier, because then data is not just a number, it’s a personal thing. In the end – and yes, for Philips the ROI is important – but we work for patients.”
