Towards Responsible AI for Mental Health & Well-being
Generative AI tools are frequently used by people seeking support for mental health issues and well-being. Young people in particular seem drawn to these tools, which are accessible and available around the clock. While the effectiveness of such applications requires further study, there have already been reports of cases where the use of these systems for mental health led to tragic outcomes. At a time when people increasingly turn to technology for support with mental health challenges, it is vital to advance the safety of AI for mental health & well-being. How can we ensure AI is genuinely helpful, and how can we reduce the likelihood of harm? What future research do we need to support policy and design in relation to AI and mental health?
On January 29th, 2026, around 30 international experts gathered to explore and discuss this problem. The workshop was organized by Dr. Stefan Buijsman, Dr. Caroline Figueroa and Leontien de Roode of the Delft Digital Ethics Centre (DDEC), a WHO Collaborating Centre on AI for health governance, including ethics. The workshop aimed to develop initial guidelines for the responsible implementation of (Generative) AI for mental health and to define directions for research. It brought together researchers, policymakers and advocates working at the intersection of AI and mental health. The online workshop was an official Pre-Summit Event of the AI Impact Summit 2026, to be held in New Delhi, India, on 19–20 February 2026.
The World Health Organization (WHO) advances the safe, effective, and equitable use of Artificial Intelligence (AI) for health, grounded in science, ethics, and appropriate governance. According to Ursula Yu Zhao, Technical Officer in the AI and Frontier Technologies Unit at WHO headquarters, the Organization promotes responsible AI through the Global Initiative on AI for Health (GI‑AI4H), a tripartite UN initiative structured around three strategic pillars: enable, facilitate, and implement. Through this initiative, WHO delivers high‑level normative guidance across both cross‑cutting domains and diverse health themes. For example, WHO published its Guidance on Ethics and Governance of AI for Health in 2021, followed by a subsequent document focused on large multimodal models. Prof. Jeroen van den Hoven, the scientific director of DDEC, provided critical contributions to both documents.
During the workshop, participants discussed the factors that lead to the responsible implementation and use of AI in mental healthcare. The experts aim to build knowledge that can inform policies and ethical principles, and that can guide the design of responsible AI used for mental health support. The workshop led to the following recommendations:
- Recognize Generative AI use as a public mental health risk.
- Integrate mental health impact into impact assessments and monitoring for Generative AI solutions.
- Co-design Generative AI mental health support for linkages between therapists and AI.
Recognize Generative AI use as a public mental health risk
Organizations across government, health systems, and industry should respond proportionately through strategy, capacity building, policy, research, and practice to promote health and well-being through the responsible use of AI. This applies to all Generative AI solutions, not only those intended for mental health services.
While introducing the issue of mental health during the workshop, Dr. Kenneth Carswell, Mental Health Specialist in the WHO Department of Noncommunicable Diseases & Mental Health, stressed the high number of people experiencing mental health problems and the need for a multi-sectoral response. “Digital technologies are often used as self-help tools and can be one part of a comprehensive mental health response. They can help address the treatment gap, which may have huge benefits, but there are concerns as well. For example, how do we ensure they are safe and effective, and how do we regulate them? We need research for this to help inform the decisions of key stakeholders.” As Cyra Anindya Alesha showed based on her research, teens widely use AI for social interaction: for example, to process a tough situation, to get another point of view, for self-discovery and for mental health advice.
Integrate mental health impact into impact assessments and monitoring for Generative AI solutions
GenAI solutions, whether intended for mental health services or revealed through real-world use to serve that purpose, should be monitored for their impact on determinants of health (e.g. social connectedness), short-term measures (e.g. depressive symptoms, proper referral to clinicians in crisis response) and long-term measures (e.g. emotional dependence on the GenAI solution).
Dr. Claudi Bockting, professor of Clinical Psychology in Psychiatry and one of the founders and directors of the interdisciplinary Centre for Urban Mental Health at the University of Amsterdam, put forward a clear call to action: we need independent investment to test the effects of generative AI chatbots on mental health globally. A regulatory perspective was provided by Holly Coole, mental health nurse and senior manager for digital mental health at the Medicines and Healthcare products Regulatory Agency (MHRA), the UK's medical device regulator. She presented the guidance available in the UK and explained the need for global consensus on the regulatory qualification and classification of tools, to better answer the question “When does a digital tool qualify as a medical device?”. In the UK, the focus is on the intended purpose and the claims made by the manufacturer, as well as on how the product is used in the real world, taken together with its marketing and promotional materials. She also stressed the importance of improving the quality of clinical evidence and of better understanding how to approach clinical evaluations.
Co-design Generative AI mental health support for linkages between therapists and AI
AI services used for mental health purposes – whether by intent or as revealed through naturalistic use – should be developed in line with the best available evidence and in consultation with mental health experts and people with lived experience. We must avoid creating distrust both in AI tools that do work and in other mental health services (e.g. a clinic or crisis line). This requires that the responses of GenAI solutions to critical situations (e.g. expressions of suicidal ideation) be tailored to the cultural, linguistic and contextual factors that shape how mental health support is offered.
Young people should be involved in the design of these systems, and we need structures that support their involvement, as Dr. Caroline Figueroa, a medical doctor by background and assistant professor at TU Delft, stressed. Her research focuses on the benefits and harms of young people's use of AI for emotional support, and on how they view the role of AI in mental health support. What we urgently need are referral frameworks for mental health crises, consensus on outcomes (how do we measure whether generative AI tools harm or help young people?), and clear accountability. Stephen Schueller, Professor of Psychology and Informatics at the University of California, Irvine, stressed the importance of consumer empowerment in these systems. Drawing on his experience and research on using AI in therapy, he argued that we need to understand why people use these tools, think about how we can better support users in their use, and build regulations around these tools. “We need to consider how to reduce harm rather than just behaviors; this includes putting appropriate guardrails in place and focusing on transparency and public education.”
These highlights of the discussion, together with the ethical principles and suggestions for future research, will serve as input for a roadmap on AI for mental health & well-being aimed at key actors such as the WHO. Together we will build more knowledge and gather evidence for the design of responsible AI systems for mental health and well-being.
Dr. Stefan Buijsman, managing director of the DDEC: “The Digital Ethics Centre aims to bridge the gap between philosophy and engineering for responsible AI systems. As a WHO Collaborating Centre on AI for health governance, including ethics, we can increase impact by collaborating with other experts around the world, domain experts and governments.”

