Global AI Policy Research Summit 2025 Delft
November 12 – November 14
Framing the Future of AI Governance: Leading with Evidence-based Policy
On 12 – 14 November, Mondai | House of AI is happy to host the next Global AI Policy Research Summit in Delft!
(this event will be held in English)
Predictions about the promises and perils of artificial intelligence (AI) are increasingly prevalent: from the future of work and education to breakthroughs in healthcare and public services, and the reconfiguration of warfare and national security. Such narratives profoundly influence how we imagine, initiate, and interrogate the development and deployment of AI innovations. Crucially, how we frame these issues shapes the decisions we make about the future of AI in society. Policy research allows us to highlight these underlying narratives and ask how they frame the economic, social, environmental and human rights impacts of AI systems.

The Global AI Policy Research Summit 2025 convenes a growing international network of research institutes and policymakers. Summit participants work to uncover the mechanisms of – and potential pathways for – effectively framing the future of responsible AI, drawing on evidence-based policy and good governance practices. Together they analyze how current dominant narratives serve to frame the global AI policy landscape, as well as jointly identify effective strategies and collaborations for the future of responsible AI governance.
Building on the AI Policy Research Roadmap, which was developed through collaborative discussions at the inaugural AI Policy Summit 2024 in Stockholm, summit participants can further advance a shared vision and concrete actions for the future of responsible AI governance through collaborative research and practice!
Programme
Wednesday 12 November – Welcome Drinks & ‘Indigenous Perspectives on AI’ @Vakwerkhuis
Before the summit starts, we would like to invite participants to join us for welcome drinks and a workshop on ‘Indigenous Perspectives on AI’.
Programme
18.00 – 20.00 Drinks and Workshop hosted by Anna Melnyk (Delft Design for Values Institute) & Lynnsey Chartrand (Head of Indigenous Initiatives at Mila, joining online).
This collaborative workshop invites participants to critically engage with Indigenous perspectives on artificial intelligence. Through reflective discussions, participants explore how Indigenous knowledges, governance practices, and relational worldviews can inform more responsible, equitable, and sustainable decision-making about AI futures. The session aims to expand awareness of diverse epistemologies and to foster dialogue on how AI systems can better serve communities, lands, and ecosystems.
You can read more about their work here: Design for Values and Critical Raw Materials: Decolonial Justice Perspective – Delft Design for Values Institute

Thursday 13 November – Day 1
General Programme
08.30 – 09.00 Walk-in and Welcome Coffee
09.00 – 13.00 Plenary Morning Programme
13.00 – 14.00 Lunch
14.00 – 17.30 Plenary Afternoon Programme
17.30 Dinner and Drinks @Firma van Buiten
Friday 14 November – Day 2
General Programme
08.30 – 09.00 Walk-in and Welcome Coffee
09.00 – 12.30 Plenary and Break-out Morning Programme
12.30 – 13.30 Lunch
13.30 – 15.30 Plenary Afternoon Programme
15.30 Close
Do you want to join this inspiring event? Please contact the organisers!
Full Programme November 13 – Day 1: Reframing AI Narratives
08.30 – 09.00 Welcome and Coffee
Summit Opening by Virginia Dignum (Umeå University), Geert-Jan Houben (TU Delft) and Isadora Hellegren Létourneau (Mila)
Network introductions and reflections led by Isadora Hellegren Létourneau (Mila)
11.00 – 11.30 Break
What can be learned from assessing current governance approaches to AI sovereignty and safety in the EU, Africa, Asia, and Canada?
Panel moderated by Frank Dignum (Umeå University).
Panellists (confirmed so far):
> Ayantola Alayande (Global Center on AI Governance), who will be joining online;
> Carolina Aguerre (Universidad Católica del Uruguay), who will be joining online;
> Lyantoniette Chua (AI Safety Asia), who will be joining online.
13.00 – 14.00 Lunch
Can novel insights from foresight methods and systems perspectives offer alternative narratives for the development of effective AI policies?
Panel moderated by Ginevra Castellano (Uppsala University).
Panellists (confirmed so far):
> Roel Dobbe (TU Delft);
> The Anh Han (Teesside University);
> Sam Bogerd (Centre for Future Generations).
15.30 – 16.00 Break
How could participatory and collaborative approaches foster a narrative that aligns governance and regulation with innovation?
Panel moderated by Mirko Schaefer (Utrecht University).
Panellists (confirmed so far):
> Kerstin Bach (Norwegian University of Science and Technology);
> Ley Muller (Women in AI Governance);
> David Lewis (Trinity College Dublin).
Networking opportunity
18.00 – 20.00 Dinner and Drinks at Firma van Buiten
Full Programme November 14 – Day 2: Moving Beyond High-Level Principles
08.30 – 09.00 Welcome and Coffee
Reflections on Day 1
Deep Dive Workshop led by Nitin Sawhney (University of the Arts Research Institute, Helsinki) and Petter Ericson (Umeå University).
This workshop is held in the Panorama @Mondai, ground floor.
For more information, please refer to the designated event page.
Deep Dive Workshop led by Jason Tucker (Umeå University), Fabian Lorig (Malmö University) and Stefan Buijsman (TU Delft).
This workshop is held in the Innovate @Mondai, 1st floor.
For more information, please refer to the designated event page.
Deep Dive Workshop moderated by Tina Comes (TU Delft), with expert speakers Thomas Kox (Weizenbaum Institute), Arkady Zgonnikov (TU Delft), and Duuk Baten (SURF).
This workshop is held in the Connect @Mondai, 1st floor.
For more information, please refer to the designated event page.
12.30 – 13.30 Lunch
Reflections on deep-dive workshop; proposals to build stronger collaborations with policy-makers moving forward
Moderated by Virginia Dignum (Umeå University) and Tina Comes (TU Delft)
Brief reflections from deep-dive leads followed by plenary discussion led by Isadora Hellegren Létourneau (Mila)
15.30 – 16.00 Close of Summit
Speakers
Virginia Dignum is Professor of Responsible Artificial Intelligence at Umeå University, Director of the AI Policy Lab, a member of the UN High-Level Advisory Body on AI, and senior advisor to the Wallenberg Foundations.
Sessions
> Summit opening & network round-table (November 13, 2025 at 09.00h)
> Session 1: Rethinking AI sovereignty (November 13, 2025 at 10.00h)
> Session 5: Deep-dive workshop – moving beyond high-level principles (November 14, 2025 at 09.30h)
> Session 6: Building bridges from research to policy (November 14, 2025 at 13.30h)

Geert-Jan Houben is Pro Vice Rector Magnificus Artificial Intelligence, Data and Digitalisation (PVR AI) at Delft University of Technology (TU Delft). He leads TU Delft’s activities in the field of AI, data and digitalisation, spanning education, research and valorisation, and the relevant support. This includes the establishment of the TU Delft AI Labs, which promote cross-fertilisation between AI experts and scientists who use AI in their research, as well as representing TU Delft in regional, national and international co-operation on this theme. He is also Full Professor of Web Information Systems (WIS) at the Software Technology (ST) department at TU Delft, where he leads the WIS research group and is involved in Computer Science education in Delft, with a focus on data-based information systems on the Web.
Sessions
> Summit opening & network round-table (November 13, 2025 at 09.00h)

Isadora Hellegren Létourneau leads multistakeholder and interdisciplinary AI policy research at Mila – Quebec Artificial Intelligence Institute, including the Mila AI Policy Fellowship. She works to bridge the gap between AI research and public policy to inform better AI policy – for the benefit of all. Before joining Mila, Isadora was a Senior Policy Specialist at the Swedish International Development Cooperation Agency (Sida), where she advised on human rights and ICTs, democratic governance, and gender equality. Her academic and professional background in internet governance and policy developments around emerging technologies and social movements continues to inform her dedication to advancing meaningful participation in AI and beyond. She chairs the newly founded Global AI Policy Research Network (GlobAIpol), is a former member of the Steering Committee of the Global Internet Governance Academic Network (GIGANET), and has published articles on related topics in the Oxford University Press Research Encyclopedia of Communication and in Internet Histories: Digital Technology, Culture and Society.
Sessions
> Opening (November 13, 2025 at 09.00h)
> Session 1: A year with the Roadmap for AI Policy Research & Network round-table (November 13, 2025 at 09.15h)
> Session 7: Next steps for the Global AI Policy Research Network (November 14, 2025 at 14:30h)
> Closing Summit (November 14, 2025 at 15:30h)
Frank Dignum is Professor at the Umeå University Department of Computer Science, where he leads a research group in the field of socially conscious AI. The group develops models that can provide insights into how society can respond to political changes or natural disasters.
Sessions
> Session 2: Rethinking AI safety and sovereignty – regional perspectives (November 13, 2025 at 11:30h)
Ayantola Alayande is a Researcher at the Global Center on AI Governance, where he works on international cooperation in AI policymaking and governance, AI development in low- and middle-income countries (LMICs), compute governance, AI security, and state-led AI governance in Africa. His broader interests span the geopolitics and geoeconomics of emerging technologies, global governance, technology and industrial policy, Africa in major-power competition, and digital methods and media. His writings have appeared in several notable research outlets, including Nature, the Atlantic Council, The Productivity Institute, The Productivity Monitor, the Bennett Institute for Public Policy, and Research ICT Africa. Ayantola holds graduate degrees in public policy and international development from the KDI School of Public Policy and the University of Edinburgh, respectively, and is currently a PhD candidate in AI Geopolitics and Governance at the University of Oxford’s Department of International Development (ODID), where he is researching approaches to sovereignty in the AI value chain of emerging power nations. He has previously worked in research and consulting roles at the Bennett Institute for Public Policy at the University of Cambridge, Kantar UK, Research ICT Africa (RIA), and Dataphyte.
Sessions
> Session 2: Rethinking AI safety and sovereignty – regional perspectives (November 13, 2025 at 11:30h)
Carolina Aguerre is Associate Professor at the Universidad Católica del Uruguay and honorary co-director at CETYS, Universidad de San Andrés (Argentina). Her research interests include theories and practices around the governance of communications technologies and infrastructures, particularly the Internet and artificial intelligence, and their intersection with political economy and North–South perspectives. In 2020 she was part of the UNESCO Ad Hoc Expert Working Group on the Recommendation on the Ethics of AI. She has been a member of the IGF Multistakeholder Advisory Group (2012–2014 and again in 2025). She was a resident fellow at the CGR21 (2020–2021) at the University of Duisburg-Essen (Germany).
Sessions
> Session 2: Rethinking AI safety and sovereignty – regional perspectives (November 13, 2025 at 11:30h)
Edward Tsoi is the founder of Connecting Myanmar and an experienced leader in technology startups and non-profits. He led the APAC business of a late-stage startup that raised over $100M. He is also a former board member of Amnesty International Hong Kong and an advisor to multiple corporate-NGO initiatives. He is now one of the co-founders of AI Safety Asia.
Sessions
> Session 2: Rethinking AI safety and sovereignty – regional perspectives (November 13, 2025 at 11:30h)
Ginevra Castellano is a Full Professor in Intelligent Interactive Systems at the Department of Information Technology of Uppsala University, Sweden, where she is the Founder and Director of the Uppsala Social Robotics Lab. Her research is in the area of social robotics and human-robot interaction, addressing how we can build human-robot interactions that are ethical and trustworthy. This spans robot ethics, robot autonomy and human oversight, gender fairness, robot transparency and trust, and human-robot relationship formation, both from the perspective of developing computational skills for robotic systems and through their evaluation with human users to study acceptance and social consequences. She has been the Principal Investigator of several national and EU-funded projects on ethical and trustworthy human-robot interaction, in application areas spanning education, healthcare, and transportation systems. She is currently the coordinator of the CHANSE-NORFACE MICRO (Measuring children’s wellbeing and mental health with social robots) project (2025-2028) and of the WASP-HS Research Group on Child Development in the Age of AI and Social Robots (2025-2030, funded by the Wallenberg AI, Autonomous Systems and Software Program – Humanity and Society, WASP-HS). Castellano was an invited speaker at the UN AI for Good Global Summit 2024 and a keynote speaker at the World Summit AI 2024. She was recently awarded the Thuréus Prize 2025 from the Royal Society of Sciences in Uppsala.
Sessions
> Session 3: Building alternative narratives (November 13, 2025 at 14:00h)
Roel Dobbe is an Assistant Professor in Technology, Policy & Management at Delft University of Technology focusing on Sociotechnical AI Systems. He received a MSc in Systems & Control from Delft (2010) and a PhD in Electrical Engineering and Computer Sciences from UC Berkeley (2018), where he received the Demetri Angelakos Memorial Achievement Award. He was an inaugural postdoc at the AI Now Institute and New York University. His research addresses the integration and implications of algorithmic technologies in societal infrastructure and democratic institutions, focusing on issues related to safety, sustainability and justice. His projects are situated in various domains, including energy systems, public administration, and healthcare. Roel’s system-theoretic lens enables addressing the sociotechnical and political nature of algorithmic and artificial intelligence systems across analysis, engineering design and governance, with an aim to empower domain experts and affected communities. His results have informed various policy initiatives, including environmental assessments in the European AI Act as well as the development of the algorithm watchdog in The Netherlands.
Sessions
> Session 3: Building alternative narratives (November 13, 2025 at 14:00h)
The Anh Han is a Full Professor of Computer Science and Director of the Center for Digital Innovation at the School of Computing, Engineering and Digital Technologies, Teesside University. His current research spans several topics in AI and behavioural research, including the dynamics of human cooperation, evolutionary game theory, agent-based simulations, behavioural economics, and AI governance modelling.
Sessions
> Session 3: Building alternative narratives (November 13, 2025 at 14:00h)
Sam Bogerd bridges foresight and policy, tackling the governance of advanced technologies. With a focus on innovation and long-term impact, he turns complex challenges into practical, future-ready solutions.
Sessions
> Session 3: Building alternative narratives (November 13, 2025 at 14:00h)
Mirko Schaefer is co-founder and Sciences Lead of the Data School. He is a member of the steering committee of the research area Governing the Digital Society and a member of the research area Applied Data Science. He is a Visiting Professor at the Helsinki Institute for Social Sciences & Humanities of the University of Helsinki, and serves on the Advisory Committee Analytics at the Ministry of Finance in the Netherlands. His research interests revolve around the socio-political impact of media technology and the responsible use of AI and data practices. Together with the Data School, he investigates AI and data practices in government organisations and develops processes and practices for responsible use and implementation, and public accountability.
His book Bastard Culture! How User Participation Transforms Cultural Production (Amsterdam University Press, 2011) was listed as a best-seller in the computer science section by Library Journal. Together with Karin van Es, he edited the volume The Datafied Society: Studying Culture through Data (Amsterdam University Press, 2017). His most recent publication (with Karin van Es and Tracey Lauriault) is the edited volume Collaborative Research in the Datafied Society: Methods and Practices for Investigation and Intervention (Amsterdam University Press, 2024). From 2012 to 2013 he was a research fellow at the University of Applied Arts in Vienna. In 2014 he was appointed post-doctoral research fellow at the Centre for Humanities at Utrecht University. In 2016 Mirko was a Mercator Research Fellow at the NRW School of Governance at the University of Duisburg-Essen.
Sessions
> Session 4: Governance for innovation (November 13, 2025 at 16:00h)
Kerstin Bach is a professor of Artificial Intelligence at the Norwegian University of Science and Technology (NTNU), Director of the Norwegian Open AI Lab, and Research Director at the Norwegian Research Center for AI Innovation (NorwAI). She holds a PhD from the University of Hildesheim and worked as a researcher at the German Research Center for AI (DFKI), where she developed decision support systems for various industries. After completing her Ph.D., Kerstin joined Verdande Technology, a Trondheim-based AI startup developing real-time case-based reasoning (CBR) technology for the oil and gas, financial services, and healthcare industries. At Verdande, she was both a research scientist and software engineer, working closely with partners exploring CBR in their technology stack. In 2015, Kerstin joined NTNU’s computer science department.
In recent years, Kerstin’s research has been primarily focused on crafting AI prototypes tailored for healthcare, intelligent sensing, and knowledge management. She managed an EU H2020 research grant, selfBACK, whose results are currently being developed as a product for patients with lower back pain. Presently, Kerstin is steering multiple interdisciplinary projects funded by the Norwegian Research Council and NTNU dedicated to AI-driven and patient-centered healthcare services.
Beyond her research contributions, she actively organizes workshops, conferences, and symposia on various aspects of AI research. Throughout her career, Kerstin has been the driving force behind myCBR, an open-source tool adopted in research and industry projects across Europe, and she is a board member of the Norwegian AI Society and the German AI Society. Her commitment to advancing AI extends to NTNU, where she promotes AI research among students and strongly emphasizes encouraging women to pursue technology careers. As an educator, she teaches AI and Machine Learning courses, guiding and involving master’s and Ph.D. students. Her role as NorwAI research director places her at the forefront of collaborative projects between industry and academia. Within this context, she established FEMAIS, a mentorship program tailored for aspiring female AI students, effectively bridging the gap between their final year of studies and the launch of their professional journeys. Kerstin’s commitment to AI outreach also extends to the Norwegian Open AI Lab, where she organizes events, gives talks, and participates in panels and seminars to discuss AI research with professionals and the broader public.
Sessions
> Session 4: Governance for innovation (November 13, 2025 at 16:00h)
Ley Muller is a transformational leader with ten years’ experience in evidence-based policy, public health, and AI. She currently leads Nordic Women in AI Governance, is the research lead for Women in AI Norway, and is keenly aware of how insufficient a gender-only lens is if AI governance is to properly address marginalized perspectives. She has experience in consulting, government, and academia from Norway, the US, Germany, and the WHO – and is very recently (and somewhat proudly) ex-tech.
Sessions
> Session 4: Governance for innovation (November 13, 2025 at 16:00h)
Dave Lewis is an Associate Professor at the School of Computer Science and Statistics at Trinity College Dublin, where he served as the head of its Artificial Intelligence Discipline. He is the Acting Director of Ireland’s ADAPT Centre for human centric AI and digital content technology research. He investigates open semantic models for trustworthy AI and data governance and contributes to international standards in digital content processing and trustworthy AI. His research focuses on the use of open semantic models to manage the Data Protection and Data Ethics issues associated with digital content processing. He has led the development of international standards in AI-based linguistic processing of digital content at the W3C and OASIS and is currently active in international standardisation of Trustworthy AI at ISO/IEC JTC1/SC42 and CEN/CENELEC JTC21.
Sessions
> Session 4: Governance for innovation (November 13, 2025 at 16:00h)

Tina Comes is the Scientific Director of the DLR Institute for Terrestrial Infrastructure Protection in Germany, and Full Professor in Decision Theory & ICT for Resilience at TU Delft in the Netherlands. Since her PhD, she has been determined to better understand the decision-making of individuals and groups in the context of climate risk and crises. Her work aims at using AI and information technology to support decisions of individuals and groups in complex, uncertain environments. She integrates behavioural insights with advanced computational approaches, including distributed AI, multi-agent systems, optimisation models, and digital twins. She serves on the editorial board of Nature Scientific Reports. Her research has received international recognition through awards and fellowships, and she is a member of Academia Europaea and the Norwegian Academy of Technological Sciences. Internationally, under the EU’s Scientific Advice Mechanism, she chaired the Working Group on Strategic Crisis Management in Europe and now chairs the Working Group for AI in Crisis Management.
Sessions
> Deep Dive Workshop – Contextualising AI Principles: Universal Guidelines or Domain-Specific Policy? (November 14, 2025 at 9:30h)
> Session 6: Building bridges from research to policy (November 14, 2025 at 13:30)
Nitin Sawhney is a visiting researcher at the University of the Arts Research Institute. He has a background in computational media, human-centered design and documentary film. He served as a Professor of Practice in the Department of Computer Science at Aalto University, leading the Critical AI and Crisis Interrogatives (CRAI-CIS) research group. He completed his doctoral dissertation at the MIT Media Lab, and taught in the Media Studies program at The New School and the MIT Program in Art, Culture and Technology (ACT). Working at the intersection of Human Computer Interaction (HCI), responsible AI, and participatory design research, he examines the critical role of technology, civic agency, and social justice in society and crisis contexts. He has co-curated exhibitions and co-directed documentaries in Gaza and Guatemala, focusing on creative resistance and historical memory in conditions of war and conflict. In October 2024 he co-organized the Contestations.AI Transdisciplinary Symposium on AI, Human Rights and Warfare in Helsinki. He is currently developing a transdisciplinary platform to foster critical dialogues and co-existence through science, technology, and the arts, and conceptualizing a new documentary film project critically examining the role of AI in warfare.
Sessions
> Deep Dive Workshop – AI in Warfare: Actions, Policies and Practices (November 14, 2025 at 9:30h in Panorama @Mondai)
Petter Ericson is a staff scientist in the Responsible AI research group, working on graph problems and formal descriptions of structured data, with a strong interest in ethics, music and society.
Sessions
> Deep Dive Workshop – AI in Warfare: Actions, Policies and Practices (November 14, 2025 at 9:30h in Panorama @Mondai)
Jason Tucker is a researcher at the Institute for Futures Studies and an Adjunct Associate Professor at the AI Policy Lab, the Department of Computing Science, Umeå University. He is also a Visiting Research Fellow at AI & Society, the Department of Technology and Society, LTH, Lund University. His research interests include AI and health, the global political economy of AI, public policy, citizenship, human rights and global governance. He currently leads the research project The Politics of AI and Health: From Snake Oil to Social Good, funded by WASP-HS. Within this project, he is particularly interested in developing interdisciplinary approaches to better support policymaking on the future role of AI in healthcare.
Previously he has worked on law and policy reform, citizenship and public sector digitalisation, having done so for the United Nations, civil society, industry and in academia.
Sessions
> Deep Dive Workshop – AI in Health Care (November 14, 2025 at 9:30h)

Fabian Lorig is an Associate Senior Lecturer (Biträdande Lektor) and Associate Professor (Docent) in Computer Science, with a focus on agent-based modelling, the use of AI in socio-technical systems, and the development of simulation-based decision and policy support. His research integrates computational methods with real-world applications in public health, mobility, and policy, aiming to understand and address the societal implications of AI and to support the development of responsible and impactful technologies. He has led and contributed to interdisciplinary research projects designing computational models that enable stakeholders and policy actors to better understand the complex dynamics of social systems and to anticipate the potential consequences of policy interventions and AI technologies. Through participatory approaches and social simulations, his research facilitates evidence-based decision-making in domains where digital technologies shape societal outcomes.
Sessions
> Deep Dive Workshop – AI in Health Care (November 14, 2025 at 9:30h)
Stefan Buijsman studied computer science and philosophy in Leiden and completed his PhD on the philosophy of mathematics at Stockholm University when he was 20. He continued his research on the intersection of philosophy of mathematics and cognitive science at Stockholm University and the Institute for Futures Studies on a research grant from Vetenskapsrådet.
Aside from research, he engages in popular science writing, with three books to his name; the most recent, on AI and its links to philosophy, appeared under the Dutch title ‘Alsmaar Intelligenter’. His research focus has since shifted to the philosophy of AI, on which he works at TU Delft.
He is co-founder of the Delft Digital Ethics Centre, which focuses on the translation of ethical values into design requirements that can be used by engineers, decision- and policy-makers, and regulators. There he works on a broad range of ethical challenges in projects with external stakeholders. His own research focuses mostly on the explainability of AI algorithms: How can we make these algorithms more transparent? What information do we need to use them responsibly in their many applications? He uses philosophical accounts from epistemology and philosophy of science to formulate design requirements on AI tools for these knowledge-related aspects. In collaboration with computer scientists, he also aims to develop new tools to improve the explainability of algorithms.
Sessions
> Deep Dive Workshop – AI in Health Care (November 14, 2025 at 9:30h)
About the Network
The Global AI Policy Research Network (GlobAIpol) organizes this annual event. GlobAIpol is a community of practice that serves as a platform for policy researchers and professionals to advance responsible AI policy research, evidence-based insights and actionable strategies for stakeholders across academia, industry, public sector, and civil society. AI policy research has emerged as an essential guide to navigating the complex interplay between technological innovation and societal impact. It ensures that we guide advancements in AI in alignment with ethical, legal, and social priorities.
The network was established following the inaugural AI Policy Research Summit in Stockholm in November 2024. The inaugural summit was a joint initiative led and organized by the AI Policy Lab, Umeå, Sweden, and Mila – Quebec AI Institute, Montreal, Canada. The summit brought together a community eager to address the need for better synergies between research, policy and impact to realize responsible, equitable and sustainable AI for the benefit of all.
A core objective of the GlobAIpol network is to inform global approaches to AI governance by sharing best practices and fostering collaboration on developing AI policy. This includes advancing responsible AI policy research that meets the growing need for governance grounded in ethical, transparent, and evidence-based practices to shape inclusive and trustworthy policies. The network takes an interdisciplinary and multistakeholder approach to holistically address the complex challenges and opportunities that arise with these developments. Through these objectives, the network works for effective knowledge exchange to bridge the gap between AI policy research and practice.
Read more about the network’s commitments in the Roadmap for AI Policy Research.


