BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Mondai - ECPv6.15.17//NONSGML v1.0//EN
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-WR-CALNAME:Mondai
X-ORIGINAL-URL:https://mondai.tudelftcampus.nl
X-WR-CALDESC:Events for Mondai
REFRESH-INTERVAL;VALUE=DURATION:PT1H
X-Robots-Tag:noindex
X-PUBLISHED-TTL:PT1H
BEGIN:VTIMEZONE
TZID:Europe/Amsterdam
BEGIN:DAYLIGHT
TZOFFSETFROM:+0100
TZOFFSETTO:+0200
TZNAME:CEST
DTSTART:20240331T010000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:+0200
TZOFFSETTO:+0100
TZNAME:CET
DTSTART:20241027T010000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:+0100
TZOFFSETTO:+0200
TZNAME:CEST
DTSTART:20250330T010000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:+0200
TZOFFSETTO:+0100
TZNAME:CET
DTSTART:20251026T010000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:+0100
TZOFFSETTO:+0200
TZNAME:CEST
DTSTART:20260329T010000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:+0200
TZOFFSETTO:+0100
TZNAME:CET
DTSTART:20261025T010000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:+0100
TZOFFSETTO:+0200
TZNAME:CEST
DTSTART:20270328T010000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:+0200
TZOFFSETTO:+0100
TZNAME:CET
DTSTART:20271031T010000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTART;TZID=Europe/Amsterdam:20251009T120000
DTEND;TZID=Europe/Amsterdam:20251009T140000
DTSTAMP:20260420T210907Z
CREATED:20250612T091244Z
LAST-MODIFIED:20251001T115158Z
UID:10000235-1760011200-1760018400@mondai.tudelftcampus.nl
SUMMARY:TU Delft AI Lunch: Next Career Steps for AI PhDs & Postdocs
DESCRIPTION:Mondai | House of AI is happy to host the new edition of the TU Delft AI Lunch:\nNext Career Steps for AI PhDs & Postdocs\nCharting Your AI Career Path: Insights from Experts in Innovation\, Research\, and Industry\nAre you an early-career researcher\, PhD\, or postdoc looking to carve out your next steps in AI? Join us for an inspiring lunchtime panel with leading voices from the worlds of startups\, academia\, and industry consulting! Meet entrepreneur and innovation coach Arthur Tolsma\, grant-winning researcher Savvas Zannettou\, academic mentor and counsellor Iliana Yocheva\, and Senior Data Science Consultant Lucas Bresser. \nHear how they navigated their career journeys\, gain practical tips on research grants\, entrepreneurship\, and thriving in academia or industry\, and get your questions answered! \nWhat does it take to transition from a PhD to consulting\, an academic track\, industry\, or entrepreneurship in AI-related fields\, and how can you navigate your career? Join us for a lunchtime panel and deep dive with experts sharing their experiences and advice. \nFind out more about the event at the following link. \nNote for TU Delft PhDs:\nThis AI Lunch qualifies for discipline-related skills (GSC) credit under the “Form for earning GSC for TU Delft AI(-related) seminars” scheme. Please check with your Faculty Graduate School & supervisors. \nDon’t miss this chance to connect\, learn\, and be inspired to take the next step in your AI career. Save the date! \nThis event includes a free lunch\, for which registration is required (help us reduce food waste!) \n(This event will be held in English) \nProgramme\n12.00 – 12.30 | Lunch & networking\n12.30 – 13.30 | Panel I. Careers\, options\, and outlooks: moderated by Marie-Therese Sekwenz\, featuring Arthur Tolsma\, Iliana Yocheva\, and Lucas Bresser\n13.30 – 14.00 | Panel II. Concrete next steps in academic career paths: with the panellists of Panel I 
and Savvas Zannettou \nThe Delft AI (Lab) Lunch series\nThis series is part of the monthly Delft AI (Lab) Lunches\, a recurring meet-up hosted by the TU Delft AI Labs & Talent community at Mondai | House of AI.\nEvery month\, we host a panel to discuss challenges and developments at the intersection of AI and a specific field. During these events\, you can participate\, learn\, make connections\, and inspire and be inspired by the Delft AI Community. We invite all interested staff and students from TU Delft to join these sessions. Please contact community manager Charlotte Boelens for more information about this series or the TU Delft AI Labs & Talent Programme.\n \nNote for TU Delft PhDs\nThe TU Delft AI Lunch series is eligible for earning discipline-related skills GSC with the ‘Form for earning GSC for TU Delft AI(-related) seminars’. Check with your local Faculty Graduate School (FGS) whether it offers this option for earning Discipline Related Skills GSC\, and with your supervisors whether they accept our seminars on your Doctoral Education (DE) list. If you already have a form\, don’t forget to bring it with you.
URL:https://mondai.tudelftcampus.nl/event/tu-delft-ai-lunch-next-career-steps-for-ai-phds-postdocs/
LOCATION:Mondai | House of AI – Delft\, Molengraaffsingel 29\, Delft\, 2629 JD
CATEGORIES:AI Lab Lunch
ATTACH;FMTTYPE=image/jpeg:https://mondai.tudelftcampus.nl/wp-content/uploads/sites/7/2025/06/TU250612_4059_0085-Verbeterd-NR_lowres.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;VALUE=DATE:20251112
DTEND;VALUE=DATE:20251115
DTSTAMP:20260420T210907Z
CREATED:20250630T140523Z
LAST-MODIFIED:20251113T083927Z
UID:10000238-1762905600-1763164799@mondai.tudelftcampus.nl
SUMMARY:Global AI Policy Research Summit 2025 Delft
DESCRIPTION:Global AI Policy Research Summit 2025 Delft\nFraming the Future of AI Governance: Leading with Evidence-based Policy\nOn 12 – 14 November Mondai | House of AI is happy to host the next Global AI Policy Summit in Delft! \n(This event will be held in English) \nPredictions about the promises and perils of artificial intelligence (AI) are increasingly prevalent; from the future of work and education to breakthroughs in healthcare and public services\, and the reconfiguration of warfare and national security. Such narratives profoundly influence how we imagine\, initiate\, and interrogate the development and deployment of AI innovations. Crucially\, how we frame these issues shapes the decisions we make about the future of AI in society. Policy research allows us to highlight these underlying narratives and ask how they frame the economic\, social\, environmental and human rights impacts of AI systems. \nThe Global AI Policy Research Summit 2025 convenes a growing international network of research institutes and policymakers. Summit participants work to uncover the mechanisms of – and potential pathways for – effectively framing the future of responsible AI\, drawing on evidence-based policy and good governance practices. Together they analyze how current dominant narratives serve to frame the global AI policy landscape\, and jointly identify effective strategies and collaborations for the future of responsible AI governance. \nBuilding on the AI Policy Research Roadmap\, which was developed through collaborative discussions at the inaugural AI Policy Summit 2024 in Stockholm\, summit participants can further advance a shared vision and concrete actions for the future of responsible AI governance through collaborative research and practice! 
\nLearn More About The Global AI Policy Research Network\nProgramme\nWednesday 12 November – Welcome Drinks & ‘Indigenous Perspectives on AI’ @Vakwerkhuis\nBefore the summit starts\, we would like to invite participants to join us for welcome drinks and a workshop on ‘Indigenous Perspectives on AI’.\n \nProgramme\n18.00 – 20.00 Drinks and Workshop hosted by Anna Melnyk (Delft Design for Values Institute) & Lynnsey Chartrand (Head of Indigenous Initiatives at Mila\, joining online).\nThis collaborative workshop invites participants to critically engage with Indigenous perspectives on artificial intelligence. Through reflective discussions\, participants explore how Indigenous knowledges\, governance practices\, and relational worldviews can inform more responsible\, equitable\, and sustainable decision-making about AI futures. The session aims to expand awareness of diverse epistemologies and to foster dialogue on how AI systems can better serve communities\, lands\, and ecosystems. \nYou can read more about their work here: Design for Values and Critical Raw Materials: Decolonial Justice Perspective – Delft Design for Values Institute \nThursday 13 November – Day 1\nGeneral Programme\n08.30 – 09.00 Walk-in and Welcome Coffee\n09.00 – 13.00 Plenary Morning Programme\n13.00 – 14.00 Lunch\n14.00 – 17.30 Plenary Afternoon Programme\n17.30 Dinner and Drinks @Firma van Buiten \nFriday 14 November – Day 2\nGeneral Programme\n08.30 – 09.00 Walk-in and Welcome Coffee\n09.00 – 12.30 Plenary and Break-out Morning Programme\n12.30 – 13.30 Lunch\n13.30 – 15.30 Plenary Afternoon Programme\n15.30 Close \nNovember 13 Detailed Programme\nSpeakers\nNovember 14 Detailed Programme\nDo you want to join this inspiring event? 
Please contact the organisers!\nTaylor Stone \nt.w.stone-1@tudelft.nl \nHelma Dokkum \nw.m.dokkum@tudelft.nl \nFull Programme November 13 – Day 1: Reframing AI Narratives\n08.30 – 09.00 Welcome and Coffee \n09.00 – 09.15: Summit Opening\nSummit Opening by Virginia Dignum (Umeå University)\, Geert-Jan Houben (TU Delft) and Isadora Hellegren Létourneau (Mila) \n09.15 – 11.00: Session 1 – A Year with the Roadmap for AI Policy Research & Network Round table\nNetwork introductions and reflections led by Isadora Hellegren Létourneau (Mila) \n11.00 – 11.30 Break \n11.30 – 13.00: Session 2 – Rethinking AI Safety and Sovereignty – Regional Perspectives (Hybrid session)\nWhat can be learned from assessing current governance approaches to AI sovereignty and safety in the EU\, Africa\, Asia\, and Canada? \nPanel moderated by Frank Dignum (Umeå University). \nPanellists (confirmed so far):\n> Ayantola Alayande (Global Center on AI Governance);\n> Carolina Aguerre (Universidad Católica del Uruguay)\, who will be joining online;\n> Edward Tsoi (AI Safety Asia)\, who will be joining online. \n13.00 – 14.00 Lunch \n14.00 – 15.30: Session 3 – Building Alternative Narratives\nCan novel insights from foresight methods and systems perspectives offer alternative narratives for the development of effective AI policies? \nPanel moderated by Ginevra Castellano (Uppsala University) \nPanellists (confirmed so far):\n> Roel Dobbe (TU Delft);\n> The Anh Han (Teesside University);\n> Sam Bogerd (Centre for Future Generations). \n15.30 – 16.00 Break \n16.00 – 17.30: Session 4 – Governance for Innovation\nHow could participatory and collaborative approaches foster a narrative that aligns governance and regulation with innovation? \nPanel moderated by Mirko Schaefer (Utrecht University). \nPanellists (confirmed so far):\n> Kerstin Bach (Norwegian University of Science and Technology);\n> Ley Muller (Women in AI Governance);\n> David Lewis (Trinity College Dublin). 
\n17.30 – 18.00: Close of Day 1 – Reframing AI Narratives\nNetworking opportunity \n18.00 – 20.00 Dinner and Drinks at Firma van Buiten \nFull Programme November 14 – Day 2: Moving Beyond High-Level Principles\n08.30 – 09.00 Welcome and Coffee \n09.00 – 09.30: Opening of Day 2\nReflections on Day 1 \n09.30 – 12.30: Deep Dive Workshop – AI in Warfare: Actions\, Policies and Practices\nDeep Dive Workshop led by Nitin Sawhney (University of the Arts Research Institute Helsinki) and Petter Ericson (Umeå University).\nThis workshop is held in the Panorama @Mondai\, ground floor. \nFor more information\, please refer to the designated event page. \n09.30 – 12.30: Deep Dive Workshop – AI in Health Care\nDeep Dive Workshop led by Jason Tucker (Umeå University)\, Fabian Lorig (Malmö University) and Stefan Buijsman (TU Delft).\nThis workshop is held in the Innovate @Mondai\, 1st floor. \nFor more information\, please refer to the designated event page. \n09.30 – 12.30: Deep Dive Workshop – Contextualising AI Principles: Universal Guidelines or Domain-Specific Policy?\nDeep Dive Workshop moderated by Tina Comes (TU Delft)\, with expert speakers Thomas Kox (Weizenbaum Institute)\, Arkady Zgonnikov (TU Delft)\, and Duuk Baten (SURF). \nThis workshop is held in the Connect @Mondai\, 1st floor. \nFor more information\, please refer to the designated event page. \n12.30 – 13.30 Lunch \n13.30 – 14.30: Session 6 – Building Bridges from Research to Policy\nReflections on the deep-dive workshops; proposals to build stronger collaborations with policy-makers moving forward \nModerated by Virginia Dignum (Umeå University) and Tina Comes (TU Delft) \n14.30 – 15.30: Session 7 – Next Steps for the Global AI Policy Research Network\nBrief reflections from deep-dive leads followed by plenary discussion led by Isadora Hellegren Létourneau (Mila) \n15.30 – 16.00 Close of Summit \nSpeakers\nVirginia Dignum. 
Professor in Responsible AI and Director of the AI Policy Lab\, Umeå University\nVirginia Dignum is a professor in Responsible Artificial Intelligence and the Director of the AI Policy Lab\, a member of the UN High-Level Advisory Body on AI\, and a senior advisor to the Wallenberg Foundations. \nSessions\n> Summit opening & network round-table (November 13\, 2025 at 09.00h)\n> Session 2: Rethinking AI safety and sovereignty – regional perspectives (November 13\, 2025 at 11.30h)\n> Session 5: Deep-dive workshop – moving beyond high-level principles (November 14\, 2025 at 09.30h)\n> Session 6: Building bridges from research to policy (November 14\, 2025 at 13.30h) \nGeert-Jan Houben. Pro Vice Rector Magnificus Artificial Intelligence\, Data and Digitalisation\, TU Delft\n \nGeert-Jan Houben is Pro Vice Rector Magnificus Artificial Intelligence\, Data and Digitalisation (PVR AI) at Delft University of Technology (TU Delft). He leads the TU Delft activities in the field of AI\, data and digitalisation\, for education\, for research and valorisation\, and for relevant support. This includes the establishment of TU Delft AI Labs to promote cross-fertilisation between AI experts and scientists who use AI in their research\, as well as the representation of TU Delft in regional\, national and international co-operation on this theme. He is also full professor of Web Information Systems (WIS) at the Software Technology (ST) department at TU Delft\, where he leads a research group on Web Information Systems and is involved in Computer Science education in Delft\, with a focus on data-based information systems on the Web. \nSessions\n> Summit opening & network round-table (November 13\, 2025 at 09.00h) \nIsadora Hellegren Létourneau. 
Senior Project Manager AI Policy Research\, Public Policy and Inclusion\, Mila\n\nIsadora Hellegren leads multistakeholder and interdisciplinary AI policy research at Mila – Quebec Artificial Intelligence Institute\, such as the Mila AI Policy Fellowship. She works to bridge the gap between AI research and public policy to inform better AI policy – for the benefit of all. Before joining Mila\, Isadora was a Senior Policy Specialist at the Swedish International Development Cooperation Agency (Sida)\, where she advised on human rights and ICTs\, democratic governance\, and gender equality. Her academic and professional background\, focused on internet governance and policy developments in relation to emerging technologies and social movements\, continues to inform her dedication to advancing meaningful participation in AI and beyond. She chairs the newly founded Global AI Policy Research Network (GlobAIpol)\, is a former member of the Steering Committee of the Global Internet Governance Academic Network (GIGANET)\, and has published articles on related topics in the Oxford University Press Research Encyclopedia of Communication and in Internet Histories: Digital Technology\, Culture and Society. \nSessions\n> Opening (November 13\, 2025 at 09.00h)\n> Session 1: A year with the Roadmap for AI Policy Research & Network round-table (November 13\, 2025 at 09.15h)\n> Session 7: Next steps for the Global AI Policy Research Network (November 14\, 2025 at 14.30h)\n> Closing Summit (November 14\, 2025 at 15.30h) \nFrank Dignum. Professor at Department of Computing Science\, Umeå University.\nFrank Dignum is Professor at the Umeå University Department of Computing Science\, leading a research group in the field of socially conscious AI. They develop models that can provide insights into how society can respond to political changes or natural disasters. \nSessions\n> Session 2: Rethinking AI safety and sovereignty – regional perspectives (November 13\, 2025 at 11.30h) \nAyantola Alayande. 
Researcher at the Global Center on AI Governance (Online)\nAyantola Alayande is a Researcher at the Global Center on AI Governance\, where he works on issues of international cooperation in AI policymaking and governance\, AI development in low- and middle-income countries (LMICs)\, compute governance\, AI security\, and state-led AI governance in Africa. His broader interests span the geopolitics/geoeconomics of emerging technologies\, global governance\, technology and industrial policy\, Africa in major-power competition\, and digital methods/media. His writings have appeared in several notable research outlets\, including Nature\, the Atlantic Council\, The Productivity Institute\, The Productivity Monitor\, the Bennett Institute for Public Policy\, and Research ICT Africa\, among others. Ayantola holds graduate degrees in public policy and international development\, respectively\, from the KDI School of Public Policy and the University of Edinburgh\, and is currently a PhD candidate in AI Geopolitics and Governance at the University of Oxford’s Department of International Development (ODID)\, where he is researching approaches to sovereignty in the AI value chain of emerging power nations. He has previously worked in research and consulting roles at the Bennett Institute for Public Policy at the University of Cambridge\, Kantar UK\, Research ICT Africa (RIA)\, and Dataphyte. \nSessions\n> Session 2: Rethinking AI safety and sovereignty – regional perspectives (November 13\, 2025 at 11.30h) \nCarolina Aguerre. Associate Professor\, Universidad Católica del Uruguay. (Online)\nCarolina Aguerre is Associate Professor at the Universidad Católica del Uruguay and honorary co-director at CETYS\, at Universidad de San Andrés (Argentina). 
Her research interests include theories and practices around the governance of communications technologies and infrastructures\, particularly the Internet and artificial intelligence\, and their intersection with political economy and north-south perspectives. In 2020 she was part of the UNESCO Ad Hoc Expert Working Group on the Recommendations on the Ethics of AI. She has been part of the IGF Multistakeholder Advisory Group (2012-2014) and again in 2025. She was a resident fellow at the GCR21 (2020-2021)\, University of Duisburg-Essen (Germany). \nSessions\n> Session 2: Rethinking AI safety and sovereignty – regional perspectives (November 13\, 2025 at 11.30h) \nEdward Tsoi. Co-Founder AI Safety Asia. (Online)\nEdward Tsoi is the founder of Connecting Myanmar and an experienced leader in technology startups and non-profits. He led the APAC business of a late-stage startup with over $100M raised. He is also a former board member of Amnesty International Hong Kong and advisor to multiple corporate-NGO initiatives. Now he is one of the co-founders of AI Safety Asia. \nSessions\n> Session 2: Rethinking AI safety and sovereignty – regional perspectives (November 13\, 2025 at 11.30h) \nGinevra Castellano. Professor at Department of Information Technology\, Uppsala University\nGinevra Castellano is a Full Professor in Intelligent Interactive Systems at the Department of Information Technology of Uppsala University\, Sweden\, where she is the Founder and Director of the Uppsala Social Robotics Lab. Her research is in the area of social robotics and human-robot interaction\, addressing questions on how we can build human-robot interactions that are ethical and trustworthy. This includes robot ethics\, robot autonomy and human oversight\, gender fairness\, robot transparency and trust\, and human-robot relationship formation\, both from the perspective of developing computational skills for robotic systems and of evaluating them with human users to study acceptance and social consequences. 
She has been the Principal Investigator of several national and EU-funded projects on ethical and trustworthy human-robot interaction\, in application areas spanning education\, healthcare\, and transportation systems. She is currently the coordinator of the CHANSE-NORFACE MICRO (Measuring children’s wellbeing and mental health with social robots) project (2025-2028)\, and of the WASP-HS Research Group on Child Development in the Age of AI and Social Robots (2025-2030\, funded by the WASP-HS Wallenberg AI\, Autonomous Systems and Software Program – Humanity and Society). Castellano was an invited speaker at the UN AI for Good Global Summit 2024 and a keynote speaker at the World Summit AI 2024. She was recently awarded the Thuréus prize 2025 from the Royal Society of Sciences in Uppsala. \nSessions\n> Session 3: Building alternative narratives (November 13\, 2025 at 14.00h) \nRoel Dobbe. Assistant Professor Sociotechnical AI Systems\, TU Delft\nRoel Dobbe is an Assistant Professor in Technology\, Policy & Management at Delft University of Technology\, focusing on Sociotechnical AI Systems. He received an MSc in Systems & Control from Delft (2010) and a PhD in Electrical Engineering and Computer Sciences from UC Berkeley (2018)\, where he received the Demetri Angelakos Memorial Achievement Award. He was an inaugural postdoc at the AI Now Institute at New York University. His research addresses the integration and implications of algorithmic technologies in societal infrastructure and democratic institutions\, focusing on issues related to safety\, sustainability and justice. His projects are situated in various domains\, including energy systems\, public administration\, and healthcare. Roel’s system-theoretic lens enables addressing the sociotechnical and political nature of algorithmic and artificial intelligence systems across analysis\, engineering design and governance\, with an aim to empower domain experts and affected communities. 
His results have informed various policy initiatives\, including environmental assessments in the European AI Act as well as the development of the algorithm watchdog in The Netherlands. \nSessions\n> Session 3: Building alternative narratives (November 13\, 2025 at 14.00h) \nThe Anh Han. Professor in Computer Science\, Teesside University\nThe Anh Han is a Full Professor of Computer Science and Director of the Center for Digital Innovation at the School of Computing\, Engineering and Digital Technologies\, Teesside University. His current research spans several topics in AI and behavioural research\, including the dynamics of human cooperation\, evolutionary game theory\, agent-based simulations\, behavioural economics\, and AI governance modelling. \nSessions\n> Session 3: Building alternative narratives (November 13\, 2025 at 14.00h) \nSam Bogerd. Technology Foresight Analyst\, Centre for Future Generations\nSam Bogerd bridges foresight and policy\, tackling the governance of advanced technologies. With a focus on innovation and long-term impact\, he turns complex challenges into practical\, future-ready solutions. \nSessions\n> Session 3: Building alternative narratives (November 13\, 2025 at 14.00h) \nMirko Schaefer. Associate Professor Media and Performance Studies\, Utrecht University\nMirko Tobias Schaefer is Associate Professor of AI Data & Society at Utrecht University. He leads the master’s programme Applied Data Science\, and he is the Science Lead at the Data School. Mirko also serves on the Advisory Committee Analytics of the Netherlands Ministry of Finance. \nHis research explores the social impact of datafication\, algorithmic governance\, and the politics of digital infrastructures. With the Data School he investigates how AI and data practices transform public institutions\, and he develops applicable processes for responsible and accountable governance of AI and big data. 
\nTogether with Karin van Es & Tracey Lauriault\, he edited the volume Collaborative Research in the Datafied Society: Methods and Practices for Investigation and Intervention (Amsterdam University Press\, 2024). \nSessions\n> Session 4: Governance for innovation (November 13\, 2025 at 16.00h) \nKerstin Bach. Professor of Artificial Intelligence\, Norwegian University of Science and Technology\nKerstin Bach is a professor of Artificial Intelligence at the Norwegian University of Science and Technology (NTNU)\, Director of the Norwegian Open AI Lab\, and Research Director at the Norwegian Research Center for AI Innovation (NorwAI). She holds a PhD from the University of Hildesheim and worked as a researcher at the German Research Center for AI (DFKI)\, where she developed decision support systems for various industries. After completing her Ph.D.\, Kerstin joined Verdande Technology\, a Trondheim-based AI startup developing real-time case-based reasoning (CBR) technology for the oil and gas\, financial services\, and healthcare industries. At Verdande\, she was both a research scientist and software engineer\, working closely with partners exploring CBR in their technology stack. In 2015\, Kerstin joined NTNU’s computer science department. \nIn recent years\, Kerstin’s research has been primarily focused on crafting AI prototypes tailored for healthcare\, intelligent sensing\, and knowledge management. She managed an EU H2020 research grant\, selfBACK\, whose results are currently being developed as a product for patients with lower back pain. Presently\, Kerstin is steering multiple interdisciplinary projects funded by the Norwegian Research Council and NTNU dedicated to AI-driven and patient-centered healthcare services. \nBeyond her research contributions\, she actively organizes workshops\, conferences\, and symposia that discuss various aspects of AI research. 
Throughout her career\, Kerstin has undertaken responsibilities such as being the driving force behind myCBR\, an open-source tool adopted in research and industry projects across Europe\, and she is a board member of the Norwegian AI Society and the German AI Society. Her commitment to advancing AI extends to NTNU\, where she promotes AI research among students and strongly emphasizes encouraging women to pursue technology careers. As an educator\, she imparts her knowledge through AI and Machine Learning courses\, guiding and involving master’s and Ph.D. students. Her role as NorwAI research director finds her at the forefront of collaborative projects between industry and academia. Within this context\, she established FEMAIS\, a mentorship program tailored for aspiring female AI students\, effectively bridging the gap between their final year of studies and the launch of their professional journeys. Kerstin’s commitment to AI outreach also extends to the Norwegian Open AI Lab\, where she organizes events\, gives talks\, and participates in panels and seminars to discuss AI research with professionals and the broader public. \nSessions\n> Session 4: Governance for innovation (November 13\, 2025 at 16.00h) \nLey Muller. Lead of Nordic Women in AI Governance and Research Lead for Women in AI Norway\nLey Muller is a transformational leader with 10 years’ experience in evidence-based policy\, public health\, and AI. She currently leads Nordic Women in AI Governance and is the research lead for Women in AI Norway\, and is very aware of how insufficient a gender-only lens is if AI governance is to properly address marginalized perspectives. She has experience in consulting\, government\, and academia from Norway\, the US\, Germany\, and the WHO – and is very recently (and somewhat proudly) ex-tech. \nSessions\n> Session 4: Governance for innovation (November 13\, 2025 at 16.00h) \nDavid Lewis. 
Associate Professor at the School of Computer Science and Statistics\, Trinity College Dublin\nDave Lewis is an Associate Professor at the School of Computer Science and Statistics at Trinity College Dublin\, where he served as the head of its Artificial Intelligence Discipline. He is the Acting Director of Ireland’s ADAPT Centre for human-centric AI and digital content technology research. He investigates open semantic models for trustworthy AI and data governance and contributes to international standards in digital content processing and trustworthy AI. His research focuses on the use of open semantic models to manage the Data Protection and Data Ethics issues associated with digital content processing. He has led the development of international standards in AI-based linguistic processing of digital content at the W3C and OASIS and is currently active in international standardisation of Trustworthy AI at ISO/IEC JTC1/SC42 and CEN/CENELEC JTC21. \nSessions\n> Session 4: Governance for innovation (November 13\, 2025 at 16.00h) \nTina Comes. Associate Professor in Resilience Engineering\, TU Delft\n \nTina Comes is the Scientific Director of the DLR Institute for Terrestrial Infrastructure Protection in Germany\, and Full Professor in Decision Theory & ICT for Resilience at TU Delft in the Netherlands. Since her PhD\, she has been determined to better understand the decision-making of individuals and groups in the context of climate risk and crises. Her work aims at using AI and information technology to support decisions of individuals and groups in complex\, uncertain environments. She integrates behavioural insights with advanced computational approaches\, including distributed AI\, multi-agent systems\, optimisation models\, and digital twins. She serves on the Editorial Board of Nature Scientific Reports. 
Her research has received international recognition through awards and fellowships\, and she is a member of Academia Europaea and the Norwegian Academy of Technological Sciences. Internationally\, under the EU’s Scientific Advice Mechanism\, she chaired the Working Group on Strategic Crisis Management in Europe and is now chairing the Working Group for AI in Crisis Management. \nSessions\n> Deep Dive Workshop – Contextualising AI Principles: Universal Guidelines or Domain-Specific Policy? (November 14\, 2025 at 09.30h)\n> Session 6: Building bridges from research to policy (November 14\, 2025 at 13.30h) \nNitin Sawhney. Visiting Researcher\, University of the Arts Research Institute Helsinki\nNitin Sawhney is a visiting researcher at the University of the Arts Research Institute Helsinki. He has a background in computational media\, human-centered design and documentary film. He served as a Professor of Practice in the Department of Computer Science at Aalto University\, leading the Critical AI and Crisis Interrogatives (CRAI-CIS) research group. He completed his doctoral dissertation at the MIT Media Lab\, and taught in the Media Studies program at The New School and the MIT Program in Art\, Culture and Technology (ACT). Working at the intersection of Human-Computer Interaction (HCI)\, responsible AI\, and participatory design research\, he examines the critical role of technology\, civic agency\, and social justice in society and crisis contexts. He has co-curated exhibitions and co-directed documentaries in Gaza and Guatemala\, focusing on creative resistance and historical memory in conditions of war and conflict. In October 2024 he co-organized the Contestations.AI Transdisciplinary Symposium on AI\, Human Rights and Warfare in Helsinki. 
He is currently developing a transdisciplinary platform to foster critical dialogues and co-existence through science\, technology\, and the arts\, and conceptualizing a new documentary film project critically examining the role of AI in warfare. \nSessions\n> Deep Dive Workshop – AI in Warfare: Actions\, Policies and Practices (November 14\, 2025 at 09.30h in Panorama @Mondai) \nPetter Ericson. Staff Scientist\, Umeå University\nPetter Ericson is a staff scientist in the research group for Responsible AI\, working on graph problems and formal descriptions of structured data\, with a strong interest in ethics\, music and society. \nSessions\n> Deep Dive Workshop – AI in Warfare: Actions\, Policies and Practices (November 14\, 2025 at 09.30h in Panorama @Mondai) \nJason Tucker. Researcher\, Institute for Futures Studies\, Sweden. Adjunct Associate Professor AI Policy Lab\, Umeå University\nJason Tucker is a researcher at the Institute for Futures Studies and an Adjunct Associate Professor at the AI Policy Lab\, the Department of Computing Science\, Umeå University. He is also a Visiting Research Fellow at AI & Society\, the Department of Technology and Society\, LTH\, Lund University. His research interests include AI and health\, the global political economy of AI\, public policy\, citizenship\, human rights and global governance. He currently leads the research project The Politics of AI and Health: From Snake Oil to Social Good\, funded by WASP-HS. Within this\, he is particularly interested in developing interdisciplinary approaches to better support policy making on the future role of AI in healthcare. \nPreviously\, he has worked on law and policy reform\, citizenship and public sector digitalisation\, having done so for the United Nations\, civil society\, industry and academia. \nSessions\n> Deep Dive Workshop – AI in Health Care (November 14\, 2025 at 09.30h) \nFabian Lorig. 
Associate Senior Lecturer and Associate Professor Computer Science\, Malmö University\n\nFabian Lorig is an Associate Senior Lecturer (Biträdande Lektor) and Associate Professor (Docent) in Computer Science\, with a focus on agent-based modelling\, the use of AI in socio-technical systems\, and the development of simulation-based decision and policy support. His research integrates computational methods with real-world applications in public health\, mobility\, and policy\, with an emphasis on understanding and addressing the societal implications of AI and on supporting the development of responsible and impactful technologies. He has led and contributed to interdisciplinary research projects in which he and his colleagues design computational models that enable stakeholders and policy actors to better understand the complex dynamics of social systems and to anticipate the potential consequences of policy interventions and AI technologies. Through participatory approaches and social simulations\, his research facilitates evidence-based decision-making in domains where digital technologies shape societal outcomes. \nSessions\n> Deep Dive Workshop – AI in Health Care (November 14\, 2025 at 9:30h) \nStefan Buijsman. Associate Professor Responsible AI\, TU Delft\nStefan Buijsman studied computer science and philosophy in Leiden and completed his PhD on the philosophy of mathematics at Stockholm University when he was 20. He continued his research on the intersection of philosophy of mathematics and cognitive science at Stockholm University and the Institute for Futures Studies on a research grant from Vetenskapsrådet. \nAside from research\, he engages in popular science writing\, with three books to his name. The most recent\, published under the Dutch title ‘Alsmaar Intelligenter’\, is on AI and its links to philosophy. His research focus has since shifted to the philosophy of AI\, on which he works at TU Delft. 
\nHe is co-founder of the Delft Digital Ethics Centre\, which focuses on the translation of ethical values into design requirements that can be used by engineers\, decision- and policy-makers\, and regulators. There he works on a broad range of ethical challenges in projects with external stakeholders. His own research focuses mostly on the explainability of AI algorithms. How can we make these algorithms more transparent? What information do we need to use them responsibly in their many applications? He uses philosophical accounts from epistemology and philosophy of science to formulate design requirements for AI tools on these knowledge-related aspects. In collaboration with computer scientists\, he also aims to develop new tools to improve the explainability of algorithms. \nSessions\n> Deep Dive Workshop – AI in Health Care (November 14\, 2025 at 9:30h) \nAbout the Network\nThe Global AI Policy Research Network (GlobAIpol) organizes this annual event. GlobAIpol is a community of practice that serves as a platform for policy researchers and professionals to advance responsible AI policy research\, evidence-based insights and actionable strategies for stakeholders across academia\, industry\, the public sector\, and civil society. AI policy research has emerged as an essential guide to navigating the complex interplay between technological innovation and societal impact. It ensures that advancements in AI are guided in alignment with ethical\, legal\, and social priorities. \nThe network was established following the inaugural AI Policy Research Summit in Stockholm in November 2024. The inaugural summit was a joint initiative led and organized by the AI Policy Lab\, Umeå\, Sweden\, and Mila – Quebec AI Institute\, Montreal\, Canada. It brought together a community eager to address the need for better synergies between research\, policy and impact to realize responsible\, equitable and sustainable AI for the benefit of all. 
\nA core objective of the GlobAIpol network is to inform global approaches to AI governance by sharing best practices and fostering collaboration on developing AI policy. This includes advancing responsible AI policy research that meets the growing need for governance grounded in ethical\, transparent\, and evidence-based practices to shape inclusive and trustworthy policies. The network takes an interdisciplinary and multistakeholder approach to holistically address the complex challenges and opportunities that arise with these developments. Through these objectives\, the network fosters effective knowledge exchange to bridge the gap between AI policy research and practice. \nRead more about the network’s commitments in the Roadmap for AI Policy Research.
URL:https://mondai.tudelftcampus.nl/event/global-ai-policy-research-summit-2025/
LOCATION:Mondai | House of AI – Delft\, Molengraaffsingel 29\, Delft\, 2629 JD
ATTACH;FMTTYPE=image/jpeg:https://mondai.tudelftcampus.nl/wp-content/uploads/sites/7/2025/06/marietjkeynote_sized.jpg
ORGANIZER;CN="Mondai | House of AI":MAILTO:mondai@tudelft.nl
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Amsterdam:20251114T093000
DTEND;TZID=Europe/Amsterdam:20251114T123000
DTSTAMP:20260420T210907
CREATED:20251020T133407Z
LAST-MODIFIED:20251030T095415Z
UID:10000246-1763112600-1763123400@mondai.tudelftcampus.nl
SUMMARY:Deep Dive Workshop - AI in Health Care
DESCRIPTION:Global AI Policy Summit Deep Dive Workshop\nAI in Health Care\nDespite progress in AI governance\, much of the current regulatory framework remains grounded in high-level principles and guidelines which\, while valuable\, often lack the specificity required for practical implementation – particularly at the intersection with the highly regulated and operationally complex domain of healthcare. This is especially urgent in “high-risk” areas of healthcare\, where decisions are irreversible\, outcomes are critical\, and resources are constrained. Such applications demand elevated levels of transparency\, accountability\, and ethical oversight. This workshop draws on examples from clinical practice\, public health\, health policy\, and global health to foster open discussion around the most pressing priority areas at the intersection of AI and healthcare. \nWorkshop Programme and Speakers\nThe workshop starts off with short talks by five expert presenters\, followed by an interactive round table discussion. \n> Fabian Lorig (Associate Professor Computer Science\, Malmö University)\n> Jason Tucker (Researcher at the Institute for Futures Studies and Adjunct Associate Professor at the AI Policy Lab\, Umeå University)\n> Stefan Buijsman (Associate Professor Responsible AI\, TU Delft)\n> Siri Helle (Psychologist and Award-winning Author of The Emotion Trap)\n> Marie-Therese Sekwenz (PhD candidate at the Faculty of Technology\, Policy and Management\, TU Delft and Deputy Director of the AI Futures Lab on Rights and Justice)
URL:https://mondai.tudelftcampus.nl/event/ddw-ai-in-health-care/
LOCATION:Innovate @Mondai | House of AI – Delft\, Molengraaffsingel 29\, Delft\, 2629 JD\, Netherlands
ATTACH;FMTTYPE=image/jpeg:https://mondai.tudelftcampus.nl/wp-content/uploads/sites/7/2025/10/AIinHealthCare_OrganDonation_StockImage.jpg
ORGANIZER;CN="Mondai | House of AI":MAILTO:mondai@tudelft.nl
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Amsterdam:20251114T093000
DTEND;TZID=Europe/Amsterdam:20251114T123000
DTSTAMP:20260420T210907
CREATED:20251020T134854Z
LAST-MODIFIED:20251113T131725Z
UID:10000245-1763112600-1763123400@mondai.tudelftcampus.nl
SUMMARY:Deep Dive Workshop - AI in Warfare: Actions\, Policies and Practices
DESCRIPTION:Global AI Policy Summit Deep Dive Workshop\nAI in Warfare: Actions\, Policies and Practices\nArtificial Intelligence (AI)\, Big Data analytics\, and Automated Decision Making (ADM) are increasingly being used for surveillance\, targeting\, and autonomous or semi-autonomous drone warfare\, in addition to proliferating misinformation on social media during wars and conflicts. Conversely\, related technologies are also leveraged for the investigation of human rights violations\, e.g.\, by members of Forensic Architecture\, Interpret\, Airwars and Bellingcat. Meanwhile\, campaigns such as Stop Killer Robots are working through the UN and other forums towards an international ban on\, at minimum\, Lethal Autonomous Weapon Systems (LAWS). \nHow should researchers\, scholars\, government actors and civil society engage and act critically to highlight\, investigate\, and prevent the use of AI-based systems in perpetuating human rights violations in and out of warfare\, and how can they devise critical policies and practices that mitigate harms to society today? \nThe current AI Act contains many exceptions for the use of AI in policing\, surveillance and military applications\, while there are hardly any enforceable provisions related to uses of EU technologies globally that violate human rights. This workshop engages insider and critical perspectives from military officers\, AI researchers and scholars in International Humanitarian Law (IHL)\, human rights activists\, Members of Parliament\, and NGOs dealing with these concerns. We will examine these aspects in the context of ongoing conflicts in Gaza and Ukraine\, among others globally\, from the role of AI in spreading misinformation in war\, to autonomous warfare\, and civic / human rights violations. 
Our aim is to encourage interdisciplinary and critical theorizing on what policies\, regulations and practices are urgently needed to address these emerging concerns\, while developing an action agenda for future research\, concrete policy proposal work\, and pragmatic societal outcomes. \nWorkshop Programme\nThis workshop takes place in the “Panorama @Mondai”\n09.30 – 09.40 Opening & Key Themes Presented by Nitin Sawhney & Petter Ericson\n09.40 – 11.00 Panel Presentations by Panellists + Q&A and Discussions\n11.00 – 11.10 Short Break\n11.10 – 11.30 Form Participant groups led by Invited Experts + Intros & Perspectives\n11.30 – 12.15 Workshop Deliberations and Formulating Key Outcomes\n12.15 – 12.30 Wrap-Up & Next Steps \nExpert Panellists \nVirginia Dignum\, Professor in Responsible Artificial Intelligence and Director of the AI Policy Lab\, Umeå University\, and Member of the UN High-Level Advisory Body on AI\n\nMartine Jaarsma\, Doctoral Researcher\, International Humanitarian Law\, Military uses of AI and Critical Legal Studies\, Department of Political Science\, University of Antwerp\nMegan Karlshoej-Pedersen\, Policy Specialist at Airwars (presenting online)\nRainer Rehak\, Research Associate\, Weizenbaum Institute\nIlse Verdiesen\, Research Fellow\, Netherlands Defence Academy (NLDA)\, Chief of Staff Joint IV Commando (Col)\nTaylor Kate Woodcock LL.M.\, PhD Researcher in Public International Law\, Asser Institute\n\nWorkshop Outcomes \nImplications for Policy Research Agenda\nBarriers and obstacles to enforcing / moderating use of AI in warfare – conventions\, regulations and international treaties? 
What can we do to highlight / change them?\nFostering new collaborations within the group for research\, policy action or advocacy\nPlanning the next Contestations.AI symposium (in 2026) and opportunities for similar workshops at other conferences?\nConcrete Action Items:\n\nGlobAIPol signing up to Stop Killer Robots?\nWhitepaper\, opinion piece or journal article?\nStakeholder deliberations (as a follow-up workshop)?\n\nRelated Events\, Articles and Reports\nEvents \nContestations.AI: Transdisciplinary Symposium on AI\, Human Rights and Warfare\, Helsinki\, Oct 23\, 2024: https://contestations.ai/ \nArticles and Reports \nOp-Ed: Regulating military use of AI is in everyone’s interest\, Michael C. Horowitz\, Financial Times\, October 13\, 2025. \nResponsible by Design: Strategic Guidance Report on the Risks\, Opportunities\, and Governance of Artificial Intelligence in the Military Domain. Global Commission on Responsible Artificial Intelligence in the Military Domain (GC REAIM)\, September 2025. \nCoveri\, Andrea\, et al. Big Tech and the US Digital-Military-Industrial Complex. Intereconomics\, vol. 60\, no. 2\, Sciendo\, 2025\, pp. 81-87.\n\nThe rolling text of the Group of Governmental Experts working under the Convention on Certain Conventional Weapons\, in particular on the legal status of Lethal Autonomous Weapon Systems.
URL:https://mondai.tudelftcampus.nl/event/ddw-ai-in-warfare/
LOCATION:Panorama @Mondai | House of AI – Delft\, Molengraaffsingel 29\, Delft\, 2629 JD\, Netherlands
ATTACH;FMTTYPE=image/png:https://mondai.tudelftcampus.nl/wp-content/uploads/sites/7/2025/10/TUD_Mondai_AI_MHC_workshop_v2.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Amsterdam:20251114T093000
DTEND;TZID=Europe/Amsterdam:20251114T123000
DTSTAMP:20260420T210907
CREATED:20251029T153703Z
LAST-MODIFIED:20251029T154004Z
UID:10000250-1763112600-1763123400@mondai.tudelftcampus.nl
SUMMARY:Deep Dive Workshop - Contextualising AI Principles: Universal Guidelines or Domain-Specific Policy?
DESCRIPTION:Global AI Policy Summit Deep Dive Workshop\nContextualising AI Principles: Universal Guidelines or Domain-Specific Policy?\nArtificial Intelligence principles and guidelines have proliferated in recent years – transparency\, fairness\, accountability\, and human oversight are widely endorsed. However\, when we attempt to implement and operationalise these principles in specific domains\, fundamental dilemmas emerge: \nIn crisis management: How do we balance the need for privacy with transparency when lives are at stake? Can we afford the time for human oversight in rapidly evolving disasters? \nIn education: How do we ensure fairness in AI-supported learning while respecting pedagogical autonomy and diverse learning needs? \nIn mobility: When does mandatory human oversight become a safety liability in time-critical traffic situations? How do we operationalise accountability when decisions are distributed across interconnected systems? \nThis workshop examines a fundamental question for AI policy: Which principles have to remain generic across domains\, and which should or must be contextualised? The EU AI Act attempts to address this through risk-based categories\, but how well does this approach capture domain-specific tensions? Through structured dialogue across domains\, we will map where universal principles break down\, why contextualisation is necessary\, and what this means for developing both sector-specific guidelines and cross-cutting policy frameworks. \nWorkshop Programme\nThis workshop takes place in the “Connect @Mondai” \n09.30 – 10.00 Opening & Domain Challenges.\nWelcome and short presentations: What are the specific dilemmas when applying AI principles in crisis management\, education\, and mobility? \n10.00 – 10.30 Plenary Principle Mapping.\nInteractive session: Starting from the EU Ethics Guidelines for Trustworthy AI and the Dutch Value Compass: Which principles do we prioritise? Where do different domains face irreconcilable conflicts? 
\n10.30 – 11.00 Coffee Break \n11.00 – 12.30 Domain Deep Dives\, Cross-Domain Synthesis\, and Next Steps\n> Structured discussions: Develop concrete scenarios where generic principles prove inadequate or create harm\, or where additional principles are needed. What makes each domain different? What adaptations are needed?\n> Bringing insights together: What patterns emerge? Where is universality possible\, and where is contextualisation essential?\n> Discussion of potential collaborative outputs and future dialogue \nSpeakers\nModerator: \n\nTina Comes\, Scientific Director\, DLR Institute for the Protection of Terrestrial Infrastructures; Professor in Decision Theory & ICT for Resilience\, Delft University of Technology\n\nExpert Speakers: \n\nThomas Kox\, Head of Research Group “Digitalisation and Networked Security”\, Weizenbaum Institute for the Networked Society\nArkady Zgonnikov\, Assistant Professor\, Human-Technology Interaction and Centre for Meaningful Human Control\, Delft University of Technology\nDuuk Baten\, Advisor on Digitalisation and AI in Education\, SURF
URL:https://mondai.tudelftcampus.nl/event/ddw-contextualising-ai-principles/
LOCATION:Connect @Mondai | House of AI – Delft\, Molengraaffsingel 29\, Delft\, 2629 JD\, Netherlands
ATTACH;FMTTYPE=image/jpeg:https://mondai.tudelftcampus.nl/wp-content/uploads/sites/7/2025/10/TU250612_4059_0283_lowres.jpg
ORGANIZER;CN="Mondai | House of AI":MAILTO:mondai@tudelft.nl
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Amsterdam:20251118T100000
DTEND;TZID=Europe/Amsterdam:20251119T170000
DTSTAMP:20260420T210907
CREATED:20251029T104344Z
LAST-MODIFIED:20251029T104344Z
UID:10000249-1763460000-1763571600@mondai.tudelftcampus.nl
SUMMARY:Symposium: Feminist AI and Collective Wellbeing
DESCRIPTION:In the ‘Feminist AI and Collective Wellbeing’ symposium\, we invite researchers\, artists\, and practitioners to explore and contest the promises and pitfalls of AI in shaping collective wellbeing. \nPromises of AI include better societal wellbeing through improved healthcare\, reduced workloads\, or efficient use of natural resources. Yet not everyone’s wellbeing counts equally\, as AI simultaneously depends on and disrupts collectivity\, for instance\, through its pressure on shared environmental resources and worker health\, and through data exploitation and extractivism. How can we reimagine these dynamics and centre collective wellbeing so that it becomes a basis for caring and sustaining relationships around AI development and implementation? \nOur goal is not to provide definitive answers or fixed definitions of wellbeing and collectivity\, but to open a shared space for inquiry\, provocation\, and speculation. By foregrounding feminist\, decolonial\, and ecological perspectives\, we aim to imagine futures in which AI development and adoption are aligned with collective wellbeing. \nThe symposium invites participants to explore how these relationships and entanglements might be reimagined\, and how AI can be critically reshaped\, reoriented\, or even refused in pursuit of more collective and caring futures. \nThrough international keynotes and a workshop on art-based AI inquiry\, we invite participants to reflect on these questions: \nWhat collectives are prioritized in the development of AI? Whose wellbeing is valued\, and whose is erased to maintain the wellbeing of others? \nHow can communities engage with AI on their own terms? What material resources\, infrastructures\, or types of data would they need to do so? \nWhat collective futures and imaginaries might we create together\, rooted in shared wellbeing rather than extractive logics? 
\nCan AI ever be truly aligned with collective wellbeing\, or are there cases where the most ‘caring’ act might be to refuse or resist AI altogether? \nRegister Here\n18 November – Part 1 | 10.00 – 15.00 \nWorkshop on art-based AI inquiry for collective knowledge generation \nOrganizers: Feminist Generative AI Lab with Virginia Tassinari and Vera van der Burg \nGuests: Soyun Park\, Mafalda Gamboa\, and Elvia Vasconcelos \nIn this workshop\, we explore art-based inquiry as an alternative form of knowledge generation\, which can complement and enrich traditional approaches to research in AI. We invite participants to engage with new\, unusual\, artistic\, and embodied forms of exploration to reflect on the symposium theme. \nPlease note that the workshop has limited spots. Lunch is included. \n18 November – Part 2 | 15.00 – 17.00 \nKeynotes + Discussion + Drinks \nPlease note you can choose to register for only this part of the symposium. \nRegister Here\n19 November | 10.00 – 17.00 \nPhD Day \nFollowing the symposium on November 18th\, we invite PhD candidates to join us for a dedicated day of peer exchange\, collaborative feedback\, and dialogue. The PhD Day offers PhD candidates working on topics such as AI\, feminism\, care\, collectivity\, sustainability\, digital labor\, and related themes an opportunity to continue exploring the theme of the symposium in relation to their own research and practices in an interdisciplinary environment. \nThe PhD Day program will include peer review sessions that allow participants to share work in progress and receive feedback from their peers; interactive activities that address the challenges of working as a PhD researcher; as well as community building and networking opportunities. \nMore info on PhD Day
URL:https://mondai.tudelftcampus.nl/event/symposium-feminist-ai-and-collective-wellbeing/
LOCATION:Next: Delft\, Molengraaffsingel 8\, Delft\, 2629 JD\, Netherlands
ATTACH;FMTTYPE=image/jpeg:https://mondai.tudelftcampus.nl/wp-content/uploads/sites/7/2025/10/event.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Amsterdam:20251125T150000
DTEND;TZID=Europe/Amsterdam:20251125T180000
DTSTAMP:20260420T210907
CREATED:20251105T124315Z
LAST-MODIFIED:20251112T091003Z
UID:10000254-1764082800-1764093600@mondai.tudelftcampus.nl
SUMMARY:Digital Sovereignty of EU Law in the Era of Big Tech
DESCRIPTION:On 25 November\, the Center for Law\, Design & AI (a collaboration between TU Delft and Erasmus School of Law) organises \nDigital Sovereignty of EU Law in the Era of Big Tech (limited number of places available) \nWhat is the importance of digital sovereignty in the deployment of AI in the legal sector\, and what is the influence of AI and Big Tech on the rule of law?\nWe are keen to discuss these topics with all the different players in the legal sector\, such as law firms\, legal tech firms and the judiciary. \nDetails:\n25 November\n15 – 18 h\nTU Delft The Hague Campus\nBezuidenhoutseweg 63\nDen Haag\, 2594 AC\n \nProgramme \n15.00 – Walk-in\n15:30 – 15:50 Introduction: \n\nCenter for Law\, Design & AI – Cees Zweistra\, Peter Lloyd \nDigital sovereignty and its implications for law\nIs ‘sovereign by design’ possible?\n\n15:50 – 16:30 – 10-minute introduction by each speaker \n\nPels Rijcken – Sandra van Heukelom-Verhage\nRathenau Instituut – Quirine van Eeden\nTNO – GPT-NL – Lieke Dom\nBits of Freedom – Lotje Beek\n\n16:30 – 16:40 – Break\n16:40 – 17:10 – Moderated panel discussion\n17:10 – 17:30 – Q&A – open for public discussion\n17:30 – 18:00 – Drinks
URL:https://mondai.tudelftcampus.nl/event/digitale-soevereiniteit-van-eu-recht-in-het-tijdperk-van-bigtech/
LOCATION:TU Delft The Hague Campus\, Bezuidenhoutseweg 63\, Den Haag\, 2594 AC\, Netherlands
ATTACH;FMTTYPE=image/png:https://mondai.tudelftcampus.nl/wp-content/uploads/sites/7/2025/06/CLDAI-CAIDD_1200x900px.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Amsterdam:20251127T143000
DTEND;TZID=Europe/Amsterdam:20251127T170000
DTSTAMP:20260420T210907
CREATED:20251008T110002Z
LAST-MODIFIED:20251106T100014Z
UID:10000243-1764253800-1764262800@mondai.tudelftcampus.nl
SUMMARY:Best AI-Related MSc Thesis Award 2025
DESCRIPTION:Mondai | House of AI and the TU Delft AI Initiative are happy to host\nthe AI-Related MSc Thesis Awards 2025!\nThe Best AI-Related MSc Thesis Award (short: AI Thesis Award) is a new award that celebrates outstanding master’s research at TU Delft dedicated to the development\, application or contexts of artificial intelligence. Master’s students from all faculties participated with their finished theses\, on the condition that the research is centered around or involves AI. The prize will be awarded in two categories: IN AI for research that advances AI itself\, and WITH AI for research that applies AI in a specific domain. One TU Delft graduate will be selected in each category after the top 3 candidates pitch their theses at this award ceremony. \nProgramme \n14.30 – Walk-in\n15.00 – Opening\n15.15 – Thesis pitches\n16.15 – Break + walk-in alumni community\n16.30 – Award ceremony & kick-off Alumni Community for AI\, Data & Digitalisation\n17.00 – Network drinks \nFinalists Best AI-Related MSc Thesis Award 2025 \nCategory IN AI \n\nKrzysztof Piotr Baran (Computer Science @Faculty of EEMCS): Federated MaxFuse: Diagonal Integration of Weakly Linked Spatial and Single-cell Data through Federated Learning\nPrajit Bhaskaran (Computer Science @Faculty of EEMCS): Transformers Can Do Bayesian Clustering\nSimon Gebraad (Robotics @Faculty of ME): LeAP: Label any Pointcloud in any domain using Foundation Models\n\nCategory WITH AI \n\nAntonio Magherini (Civil Engineering @Faculty of CEG): JamUNet: predicting the morphological changes of braided sand-bed rivers with deep learning\nIsa Oguz (Management of Technology @Faculty of TPM): Victim Blaming Bias in Traffic Accidents Using Large Language Models\nJeroen Hagenus (Robotics @Faculty of ME): Realistic Adversarial Attacks for Robustness Evaluation of Trajectory Prediction Models\n\nAI Alumni Community Kick-off \nIf you only want to attend the launch of the AI Alumni Community\, the walk-in is between 16:15 and 16:30\, and the drinks start around 17:00. \nDuring this event\, we also launch the new AI\, Data and Digitalisation Alumni Community (short: AI Alumni Community). By connecting TU Delft alumni across generations\, disciplines and sectors\, we aim to unlock new opportunities for innovation\, strengthen the bridge between research\, application and societal value\, and shape a digital future that benefits everyone. This community is open to all past\, present and future TU Delft alumni with an interest in or background in AI\, data and digitalisation. Community members include graduates from bachelor’s and master’s programmes as well as PhD alumni from across the university.
URL:https://mondai.tudelftcampus.nl/event/best-ai-related-msc-thesis-award-2025/
LOCATION:Mondai | House of AI – Delft\, Molengraaffsingel 29\, Delft\, 2629 JD
ATTACH;FMTTYPE=image/jpeg:https://mondai.tudelftcampus.nl/wp-content/uploads/sites/7/2025/10/TU250612_4059_0283_lowres.jpg
ORGANIZER;CN="Mondai | House of AI":MAILTO:mondai@tudelft.nl
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Amsterdam:20260213T090000
DTEND;TZID=Europe/Amsterdam:20260213T180000
DTSTAMP:20260420T210907
CREATED:20260115T145217Z
LAST-MODIFIED:20260202T104703Z
UID:10000258-1770973200-1771005600@mondai.tudelftcampus.nl
SUMMARY:AI Cup 2026
DESCRIPTION:AI Cup 2026\nPowered by Team Epoch and AIC4NL\nThis year\, Team Epoch – the AI Dreamteam – is organising their first AI Cup. The AI Cup 2026 is a new nationwide competition uniting the Netherlands’ brightest minds to tackle real-world challenges from society with AI. While most AI competitions prioritize model accuracy\, AI Cup 2026 will also evaluate the implementation proposal. The AI Cup is open to all students and recent graduates from Dutch universities. The team is currently working on the preparation of the competition platform and finalizing the technical setup. On February 13th the competition kicks off\, with Nomination Day taking place on March 28th and the Award Ceremony at the AIC4NL Congress on April 14th. \nInformation and Registration\nAI Cup 2026\nBuild meaningful AI \nYou design and train an implementable AI system that solves a real sustainability or societal challenge. \nCompete on a fair playing field \nDesigned for students. No GPU arms race\, no hidden metrics\, no leaderboard tricks. \nShow yourself on a national stage \nCompete against top engineering & AI students from 5+ universities – and present at the Dutch AI Congress of AIC4NL. \nWHO CAN JOIN\nThe AI Cup is open to all students and recent graduates from Dutch universities. You don’t need to be an AI expert\, but you should be excited to learn\, build\, and compete. Teams consist of 1-5 people. Please have everyone on your team register. \nLooking for team members? We’ll help you!
URL:https://mondai.tudelftcampus.nl/event/ai-cup-2026/
LOCATION:Mondai | House of AI – Delft\, Molengraaffsingel 29\, Delft\, 2629 JD
ATTACH;FMTTYPE=image/jpeg:https://mondai.tudelftcampus.nl/wp-content/uploads/sites/7/2026/01/AIC4NL-Webpage-_800-x-600-300ppi.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Amsterdam:20260219T120000
DTEND;TZID=Europe/Amsterdam:20260219T133000
DTSTAMP:20260420T210908
CREATED:20260122T124509Z
LAST-MODIFIED:20260122T124509Z
UID:10000261-1771502400-1771507800@mondai.tudelftcampus.nl
SUMMARY:TU Delft AI Lunch: AI Regulations (and how to work within them)
DESCRIPTION:Mondai | House of AI is happy to host the new edition of the TU Delft AI Lunch:\nAI Regulations (and how to work within them)\nThis edition of the Delft AI Lunch is focused on AI regulations and how to work within them. Contributing to the panel discussion are: \n\nAI Act expert Hannah Ruschemeier (Professor of Public Law at Universität Osnabrück)\,\nEmpirical law expert on the GDPR Julia Krämer (PhD at EUR)\,\nData steward Nicolas Dintzner (TPM Faculty)\, and a cybersecurity law expert (TBA).\n\nMarie-Therese Sekwenz (TPM\, AI Futures Lab) is moderating the panel. \nThis event includes free lunch for which registration is required (help us reduce food waste!) \n(This event will be held in English) \nProgramme\n12.00 – 12.30 | Lunch & networking\n12.30 – 13.30 | Panel AI Regulations and how to work within them: moderated by Marie-Therese Sekwenz\, featuring Hannah Ruschemeier\, Julia Krämer\, and Nicolas Dintzner \nThe Delft AI (Lab) Lunch series\nThis series is part of the Delft AI (Lab) Lunches\, a recurring meet-up hosted by the TU Delft AI Labs & Talent community at Mondai | House of AI.\nEvery session\, we host a panel to discuss challenges and developments at the intersection of AI and a specific field. During these events\, you can participate\, learn\, make connections\, inspire\, and be inspired by and with the Delft AI Community. We invite all interested staff and students from TU Delft to join these sessions. Please contact community manager Charlotte Boelens for more information about this series or the TU Delft AI Labs & Talent Programme. \nNote for TU Delft PhDs\nThe TU Delft AI Lunch series is eligible for earning discipline-related skills GSC with the ‘Form for earning GSC for TU Delft AI(-related) seminars’. Check with your local Faculty Graduate School (FGS) whether your FGS offers this option for earning Discipline-Related Skills GSC\, and with your supervisors whether they accept our seminars on your Doctoral Education (DE) list. 
If you already have a form\, don’t forget to bring it with you.
URL:https://mondai.tudelftcampus.nl/event/tu-delft-ai-lunch-ai-regulations/
LOCATION:Panorama @Mondai | House of AI – Delft\, Molengraaffsingel 29\, Delft\, 2629 JD\, Netherlands
CATEGORIES:AI Lab Lunch
ATTACH;FMTTYPE=image/jpeg:https://mondai.tudelftcampus.nl/wp-content/uploads/sites/7/2025/06/TU250612_4059_0085-Verbeterd-NR_lowres.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Amsterdam:20260316T100000
DTEND;TZID=Europe/Amsterdam:20260316T173000
DTSTAMP:20260420T210908
CREATED:20260129T095419Z
LAST-MODIFIED:20260304T134839Z
UID:10000263-1773655200-1773682200@mondai.tudelftcampus.nl
SUMMARY:Towards Democratized Academic LLM Hosting
DESCRIPTION:Towards Democratized Academic LLM Hosting\nA 4TU Joint Workshop on Local Inference Infrastructure (hosted at TU Delft)\nLarge Language Models (LLMs) have become core research infrastructure for academia\, underpinning research\, education\, and innovation across disciplines. Across Dutch and European academia\, multiple initiatives are currently underway to host\, operate\, and govern LLMs locally\, driven by concerns around data sovereignty\, sustainability\, cost\, transparency\, and responsible use. \nThis 4TU.NIRICT-funded workshop brings together currently siloed academic hosting initiatives across Dutch institutions (e.g.\, Delft\, TU/e\, VU\, WUR\, SURF) to foster knowledge exchange\, technical alignment\, and collaboration around LLM inference infrastructure. \nThe workshop addresses LLM hosting as a socio-technical challenge\, spanning infrastructure and deployment models\, energy efficiency\, governance and operational sustainability\, tooling ecosystems\, and integration into research and education. It lays the groundwork for a coordinated\, academic-first Dutch ecosystem for responsible\, open LLM hosting. \nFull Programme – March 16\n10.00 – 10.30 Welcome and Registration \n10.30 – 10.35 Opening Remarks \n10.35 – 11.20 Keynote 1 – Alexandre Strube – Lessons from the Blablador project\nIn this talk\, we will show some of the challenges and lessons learned while hosting an LLM inference service. BLABLADOR has been running as a one-person operation for most of its existence\, until very recently. 
\nThis talk is more of a conversation\, where we will explore themes such as: \n\nReliability\nSoftware Ecosystem and evolution\nFrom small clusters to supercomputers to cloud: quite a messy journey\nWho can use it and handling user abuse\nAdvertising the service\nChoosing which models to offer\nPolitical consequences of model choice\nHow to convince management that it’s worth it\nPresent and future of the service\nJAIF and JARVIS: An AI factory for Blablador\nQuestions from the audience\n\nDr. Alexandre Strube \n\nMaintainer of LMod\, the Supercomputers’ module system\nMaintainer of FastChat\nUbuntu Member\, Debian Developer\n\nDr. Alexandre Strube has been employed at the Juelich Supercomputing Centre for approximately 15 years. During this time\, he has engaged in a diverse range of activities\, encompassing raw performance modeling\, supercomputer benchmarking\, and teaching\, among many others. Notably\, he has been a member of the Helmholtz AI consultants team since 2019\, where he has contributed to the democratization of artificial intelligence within academic institutions. \nDr. Strube has been responsible for the development\, hosting\, and maintenance of BLABLADOR\, the LLM inference infrastructure for the Helmholtz Foundation and German Academia\, for approximately four years. 
\n11.20 – 11.35 Coffee Break \n11.35 – 12.35 Breakout Sessions (Round 1)\nMorning (Round 1) \n\nEnergy consumption: Benchmarking approaches; power-aware optimization strategies\nGovernance & Operational Sustainability: Service creation pathways; Financing and maintenance models; Security\, privacy\, and access control; Architectural quality assurance\nTooling around model hosting: Monitoring and observability; guardrails and safety layers; RAG and MCPs; Service abstractions to end-users\n\n12.35 – 13.35 Lunch \n13.35 – 14.20 Keynote 2 - Julio Alexandrino de Oliveira Filho – National perspectives and GPT-NL\nGPT-NL is a national initiative to build a large language model grounded in the Dutch language\, culture\, and public values from scratch. Beyond delivering a performant model\, the GPT-NL project set out to explore what it means in practice to develop a sovereign language model—one that is transparent\, inclusive\, compliant with Dutch and European law\, and embedded in a broader public-interest ecosystem. \nThis keynote reflects on the practical experiences accumulated throughout the GPT-NL project in building such a model while simultaneously fostering a diverse and sustainable LLM community. Drawing on lessons learned during data acquisition\, curation\, and training\, the talk will discuss legal and regulatory considerations\, approaches to fair and lawful data use\, and the organizational challenges of aligning academic\, public\, and commercial stakeholders. We give special attention to infrastructure-related questions\, including large-scale training\, hosting\, customization\, and the role of shared HPC and hosting facilities in enabling broader participation. \nBeyond the model itself\, the keynote will examine how the GPT-NL project has sought to build community across researchers\, LLM and AI developers\, data providers\, legal experts\, infrastructure operators\, and application developers—both nationally and in a broader European context. 
The talk will outline envisioned application domains\, strategies for responsible dissemination\, and open research directions that arise when a language model is treated as a shared societal asset rather than a closed product. \nBy sharing successes\, open challenges\, and unresolved questions\, this keynote aims to contribute concrete insights to the discussion on a democratized academic LLM community and to inform future national and European efforts toward sovereign\, community-centered language models. \nDr. rer. nat. Julio A. de Oliveira Filho researches and develops the next generation of intelligent autonomous systems—spanning robotics\, quantum networks\, IoT sensor systems\, automotive technologies\, and modern defense applications. \nAs the leading architect of GPT-NL\, the first Dutch-English large language model trained fully from scratch\, and one of the creators of NetSquid\, the world’s most widely used quantum network simulator\, Julio thrives at the intersection of advanced AI\, complex systems\, and cutting-edge engineering. \nHe is passionate about elevating AI engineering into a mature discipline—where autonomous systems are not only powerful\, but reliable\, safe\, and explainable by design. 
\n14.20 – 14.25 Room Change \n14.25 – 15.25 Breakout Sessions (Round 2)\nAfternoon (Round 2) \n\nAccelerate production-ready LLM deployment using commercial solutions: Why is this relevant: Possible use cases; NVIDIA AI Enterprise; Microsoft AI Foundry\nEducation & Research Use Cases: Practical implementations; researcher-education feedback loops; implications for curricula; Policy & ethics considerations\nInfrastructure & Deployment: Utilization strategies; deployment models: on-prem\, shared\, and hybrid\n\n15.25 – 15.40 Coffee Break \n15.40 – 16.40 Panel Discussion: Future of AI & LLM/model hosting in the Netherlands and Europe\nA moderated discussion with academic leaders and national stakeholders exploring how academic institutions should position themselves in a rapidly evolving landscape shaped by national and European initiatives\, open and proprietary models\, emerging AI hubs\, and increasing policy attention. Particular emphasis will be placed on the quality\, governance\, and sustainability of LLM services in an academic context. \n16.40 – 17.40 Borrel & Networking \nSpeakers \nAlexandre Strube (Helmholtz AI) - Keynote 1: Lessons from the Blablador project (see the abstract and biography in the programme above)\nJulio Alexandrino de Oliveira Filho (TNO) - Keynote 2: Sovereign Language Models in Practice: The GPT-NL Endeavour in LLM Community Building (see the abstract and biography in the programme above)\nCorry Wouters (TU/e) - Panelist\nCorry Wouters is CIO and Director of Library & Information Services at Eindhoven University of Technology (TU/e). With over 30 years in commercial tech and healthcare\, she has led major digital transformations in EHRs\, secure interoperability\, and data governance\, delivering scalable\, secure services. At TU/e\, she drives digital innovation to support education\, research\, and societal impact. In 2025\, she was named one of the most innovative leaders in the Netherlands. 
\nLinkedIn \nArie van Deursen (TU Delft) - Panelist\nArie van Deursen is a professor in software engineering at the department of Software Technology in the faculty of Electrical Engineering\, Mathematics\, and Computer Science. He served as head of department from 2016-2023\, and presently chairs the Software Engineering Research Group (SERG). \nHe is scientific co-director of AI4SE\, a five year collaboration between JetBrains and TU Delft investigating novel use of artificial intelligence in software engineering. This lab hosts 10 PhD students\, dozens of MSc and Bsc students\, and many TU Delft faculty members and JetBrains specialists. \nHe is co-founder of two companies: The Software Improvement Group (2000) and PerfectXL (2010). \nDamian Podareanu (SURF) - Panelist\nLinkedIn \nNiels Taatgen (RUG) - Panelist\nRead all about Niels and his work on the infopage of RUG \nDr. Thijs van der Plas (WUR)\nThijs van der Plas is a postdoctoral AI researcher at Wageningen University & Research in the newly formed AI group. He currently works on ecological data integration in the Dutch LTER-LIFE project and on explainable AI in the ESA-funded AETHER project. Thijs has a background in physics at Radboud University\, holds a DPhil in computational neuroscience from the University of Oxford and worked as Research Associate at the Alan Turing Institute. He currently builds multimodal and explainable AI methods for monitoring biodiversity at scale\, as well as LLM systems for improving data management. \nLink additional info
URL:https://mondai.tudelftcampus.nl/event/towards-democratized-academic-llm-hosting/
LOCATION:Mondai | House of AI – Delft\, Molengraaffsingel 29\, Delft\, 2629 JD
ATTACH;FMTTYPE=image/png:https://mondai.tudelftcampus.nl/wp-content/uploads/sites/7/2022/05/DO_Mondai_LOGO_futuristischblauw.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Amsterdam:20260324T123000
DTEND;TZID=Europe/Amsterdam:20260324T153000
DTSTAMP:20260420T210908
CREATED:20260203T112219Z
LAST-MODIFIED:20260223T103206Z
UID:10000264-1774355400-1774366200@mondai.tudelftcampus.nl
SUMMARY:AI Gigafactory Rotterdam Meeting
DESCRIPTION:Mondai | House of AI and the AI-hub Zuid-Holland\, together with Volt\,\nwarmly invite you to this meeting about the AI Gigafactory in Rotterdam (the language of this event is Dutch) \nAI is developing at breakneck speed. AI technologies and innovations are being widely applied in industry\, public services\, and education. Questions about the position of the Netherlands and Europe in this development concern everyone: how do we shape our digital future and autonomy? \nOn Tuesday 24 March\, Volt\, initiator of the European AI Gigafactory in Rotterdam\, the AI-hub Zuid-Holland\, and TU Delft – Mondai | House of AI are hosting a meeting on the plans for realising the AI Gigafactory. The session centres on a conversation about the importance of developing such infrastructure\, not only for the region\, the Netherlands\, and Europe\, but also for the sovereignty of your organisation. We also want to build a picture of potential use cases from your (future) practice and how they translate into compute needs and supporting facilities in the AI Gigafactory. \nWant to know more about the AI Gigafactory? Read more on Volt’s page. This event is invitation-only\, but we welcome all stakeholders who want to be part of this conversation. Register\, and we will get in touch. \nProgramme\n12.30 – 13.00 Walk-in and reception with lunch\n13.00 – 13.15 Opening and introduction by Joost Poort on behalf of TU Delft | Mondai House of AI and the AI-hub Zuid-Holland\n13.15 – 13.45 Presentation of the plans for and progress of the AI Gigafactory in Rotterdam by Han de Groot on behalf of Volt\n13.45 – 14.30 Insights on AI & Compute from different fields of practice \n\nErick Webbe\, CEO – Kickstart AI\nSven Hamelink\, Head of Science & Technology – Politie\nErik Scherff\, CIO | IT Director TU Delft\n\n14.30 – 15.30 Discussion led by moderator Tom Jessen \nGet in Touch\nQuestions about the AI Gigafactory? Feel free to contact us! 
\nJoost Poort\nDirector Mondai | House of AI\nAI Innovation Lead TU Delft\nHan de Groot\nCEO Volt\nInitiator of the AI Gigafactory
URL:https://mondai.tudelftcampus.nl/event/ai-gigafactory-bijeenkomst/
LOCATION:Panorama @Mondai | House of AI – Delft\, Molengraaffsingel 29\, Delft\, 2629 JD\, Netherlands
ATTACH;FMTTYPE=image/png:https://mondai.tudelftcampus.nl/wp-content/uploads/sites/7/2026/02/Volt-AIGF.png
ORGANIZER;CN="Mondai | House of AI":MAILTO:mondai@tudelft.nl
END:VEVENT
BEGIN:VEVENT
DTSTART;VALUE=DATE:20260413
DTEND;VALUE=DATE:20260415
DTSTAMP:20260420T210908
CREATED:20260121T121443Z
LAST-MODIFIED:20260202T104616Z
UID:10000260-1776038400-1776211199@mondai.tudelftcampus.nl
SUMMARY:AI4b.io Annual Symposium 2026
DESCRIPTION:AI4b.io Annual Symposium 2026\nWe are excited to invite you to the fifth AI4b.io Symposium\, where cutting-edge Artificial Intelligence meets the complex challenges of Bioscience. Taking place in the vibrant city of Delft\, this year’s symposium is themed: Living Systems and Learning Machines: Shaping the Future of Bioscience \nThis year’s symposium is hosted by Mondai | House of AI in Delft. This free-of-charge event offers a unique platform to exchange ideas\, present innovative research\, and forge meaningful connections across academia and industry. Whether you’re advancing theoretical frameworks or driving practical applications\, this symposium fosters collaboration to push the frontiers of AI in bioscience. \nWhat to Expect\nImmerse yourself in two days of: \n\nInspiring Presentations and Posters: Dive into diverse topics ranging from large-scale factory scheduling to the genetic manipulation of microorganisms.\nEngaging Interdisciplinary Discussions: Explore the intersection of AI and bioscience across fields like:\n\nScheduling in process industries\nFluid dynamic modeling\nLab automation\nOptimal experimental design\nMicrobiome precision feed\nMetabolic engineering\nMolecular machine learning\nHuman-aware constrained optimization\n\n\nExperience the power of AI at the AI4b.io symposium\, featuring today’s most advanced techniques—machine learning\, deep learning\, and generative AI—transforming ideas into impact.\n\nConfirmed invited speakers\n\nHalima Mouhib | Towards Artificial Olfaction\nAssociate Professor at Vrije Universiteit Amsterdam\nLukas Sgenger | A Digital Platform for the Design\, Control and Scale-Up of Bioprocesses\nChemical Engineer at SimVantage GmbH\n\nShowcase Your Work\nSubmit your abstract for selection to present your research orally or in the poster session. The deadline for submission is February 12th\, 2026. 
(abstract template) \nPoster pitches \nA dedicated time slot for poster pitches has been scheduled on Day 2 to help draw attention to the posters. Presenters have the opportunity to give a brief 2-minute pitch summarizing their work before the poster session begins. Participation is optional\, and slides will be compiled into a single presentation for smooth transitions. The poster session will follow immediately after the pitches. \nWhy Delft?\nDelft\, renowned for its historic charm and innovation culture\, provides the perfect backdrop for this event. Conveniently located near major airports: \n\nSchiphol Airport: Direct train connection (~40 minutes) to Delft’s main station\nRotterdam Airport: 20-minute taxi ride to the city center\n\nInformation and Registration AI4b.io Symposium
URL:https://mondai.tudelftcampus.nl/event/ai4b-io-annual-symposium-2026/
LOCATION:Mondai | House of AI – Delft\, Molengraaffsingel 29\, Delft\, 2629 JD
ATTACH;FMTTYPE=image/jpeg:https://mondai.tudelftcampus.nl/wp-content/uploads/sites/7/2026/01/TU-innovatie-centrum-nieuwe-locatie-Foto-Sander-Foederer-5004-scaled.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;VALUE=DATE:20260414
DTEND;VALUE=DATE:20260415
DTSTAMP:20260420T210908
CREATED:20260119T084602Z
LAST-MODIFIED:20260121T090650Z
UID:10000259-1776124800-1776211199@mondai.tudelftcampus.nl
SUMMARY:The Dutch AI Congress 2026
DESCRIPTION:Impact with AI\, the Fair Tech way\nOn Tuesday\, April 14\, 2026\, AIC4NL will again organize The Dutch AI Congress at DeFabrique\, Utrecht. The national stage where government\, knowledge institutions\, business and start-ups together give direction to the application of responsible AI in the Netherlands. \nWith the theme “Impact with AI\, the Fair Tech way\,” this edition revolves around one central question: how do we use AI to create social and economic value\, without losing the human touch? \nThe conference offers: \n\nA plenary morning program with highly topical keynotes and panel discussions.\nTwo rounds of breakout sessions on policy\, technology and practice.\nA plenary closing session with insights\, interaction\, and inspiration.\nA diverse exhibition floor featuring startups\, AI initiatives\, and creative AI applications.\nA pleasant networking event.\n\nThis is a paid event. AIC4NL participants will receive discounted or free tickets. More information will follow soon. \nAIC4NL Congres 2026
URL:https://mondai.tudelftcampus.nl/event/the-dutch-ai-congress-2026/
ATTACH;FMTTYPE=image/jpeg:https://mondai.tudelftcampus.nl/wp-content/uploads/sites/7/2024/12/AIC4NL-Centered-1.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Amsterdam:20260506T090000
DTEND;TZID=Europe/Amsterdam:20260507T170000
DTSTAMP:20260420T210908
CREATED:20251217T183837Z
LAST-MODIFIED:20260415T094057Z
UID:10000255-1778058000-1778173200@mondai.tudelftcampus.nl
SUMMARY:Designing and Developing Ethically Aligned Defence AI
DESCRIPTION:Mondai | House of AI is pleased to host the\nDesigning and Developing Ethically Aligned Defence AI Conference\norganised by the ELSA Defense Lab\, in collaboration with the TU Delft Digital Ethics Centre\nAdvances in artificial intelligence (AI) are enabling military systems to operate in environments where uncertainty\, adversarial dynamics\, and time-critical decision-making are the norm rather than the exception. In such contexts\, ethical design cannot rely solely on predictable scenarios\, assumptions of human oversight\, or static rule-based constraints; rather\, it requires careful and substantial ethical programming and design to ensure that AI-enabled systems behave in alignment with moral and legal principles throughout their operational lifecycle. \nThis conference explores how ethically aligned military AI can be conceived\, designed\, and developed for deployment in uncertain\, adversarial\, and time-critical environments. Across two days\, contributors examine normative and methodological foundations related to the embedding of moral and ethical constraints during the early stages of the lifecycle of military AI systems. 
\nConference Programme\nDay 1 – May 6 \n08.30 – 09.00 Walk in and Registration\n09.15 – 09.30 Welcoming Remarks \n09.30 – 10.15 From Principles to Practice: An Actionable Value-Based Risk Governance Dashboard for Defense AI by Jasper van der Waa\, Lotte Kerkkamp-de Rijcke and Birgit van der Stigchel (Netherlands Organization for Applied Scientific Research-TNO)\n10.15 – 11.00 Ethical Hazard Assessment: A Functional Approach to Assess Machine Learning Risks in Airborne Weapon Systems by Hauke Budig (Hamburg University of Technology)\, Volker Gollnick (Hamburg University of Technology)\, Nathan Gabriel Wood (Hamburg University of Technology / California Polytechnic State University San Luis Obispo / Center for Environmental and Technology Ethics – Prague) and Scott Robbins (Karlsruhe Institute of Technology) \n11.00 – 11.30 Break \n11.30 – 12.15 Meeting the Moral Responsibilities Associated with Dual-Use AI Through Three Practical Solutions by Daniel Trusilo (University of St. Gallen) and David Danks (University of Virginia)\n12.15 – 13.00 The Disinformation Bomb: Generative AI\, Deepfake Detection\, and the Industrialization of Deception by Mark Evenblij (DuckDuckGoose) \n13.00 – 14.00 Lunch Break \n14.00 – 14.45 Designing Responsible AI for Cognitive Warfare by Jurriaan van Diggelen\, Aletta Eikelboom\, Neill Bo Finlayson\, Jose Kerstholt\, Kimberley Kruijver (Netherlands Organization for Applied Scientific Research-TNO)\n14.45 – 15.30 Rights-Preserving Framework to Bot Detection in AI-Enabled Cognitive Warfare by Henning Lahmann (Leiden University) and Perica Jovchevski (Delft University of Technology) \n15.30 – 16.00 Break \n16.00 – 17.30 Keynote Lecture: Military AI\, Transdisciplinarity and the Politics of Design by Filippo Santoni de Sio (Eindhoven University of Technology) \nDay 2 – May 7 \n09.00 – 09.30 Walk in \n09.30 – 10.15 AI-Enabled Decision-Support Systems and the In Bello Trilemma: Recalibrating Feasible Precaution in Armed Conflict by Ann-Katrien Oimann 
(Tilburg University)\n10.15 – 11.00 Automated Adversariality: LLMs\, Objectivity\, and Autonomy in Intelligence Analysis by Nicholas Johnston (Delft University of Technology / Netherlands Organization for Applied Scientific Research-TNO) and Martin Sand (Delft University of Technology) \n11.00 – 11.30 Break \n11.30 – 12.15 Sabotage and Espionage in Grey Zone Warfare: Responsible Data Integrated Threat Assessment of Shadow Fleet Activities by Liselotte Polderman-Borst (Leiden University / Delft University of Technology / Netherlands Defense Academy) and Stefan Buijsman (Delft University of Technology)\n12.15 – 13.00 Encoded Values: Tradeoffs in Programming Language and Development Methods for Military Software by Joshua S. Greenberg and Varija Mehta (Cornell University) \n13.00 – 14.00 Lunch Break \n14.00 – 14.45 Meaningful Human Control in C-UAS and Swarming Strike by Lennart Bult and Flip van Wijk (Emergent Swarm Solutions B.V.)\n14.45 – 15.30 Authority\, Accountability and AI: The Case for Indexed Alignment by Bryce Goodman (University of Oxford) \n15.30 – 16.00 Break \n16.00 – 17.30 Keynote Lecture: Empirical Perspectives on the Ethical Use and Development of Military AI by Christine Boshuijzen-van Burken (Netherlands Defense Academy / Eindhoven University of Technology) \nOrganisation \nPerica Jovchevski\, Post-doctoral Researcher in the section of Ethics and Philosophy of Technology at TU Delft.\nStefan Buijsman\, Associate Professor Responsible AI at TU Delft.
URL:https://mondai.tudelftcampus.nl/event/elsa-defense-designing-developing-ethically-ai/
LOCATION:Connect @Mondai | House of AI – Delft\, Molengraaffsingel 29\, Delft\, 2629 JD\, Netherlands
ATTACH;FMTTYPE=image/jpeg:https://mondai.tudelftcampus.nl/wp-content/uploads/sites/7/2025/12/TUD_Mondai_AI_GlobAIPol_networking.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Amsterdam:20260513T090000
DTEND;TZID=Europe/Amsterdam:20260513T110000
DTSTAMP:20260420T210908
CREATED:20260415T130215Z
LAST-MODIFIED:20260416T080145Z
UID:10000266-1778662800-1778670000@mondai.tudelftcampus.nl
SUMMARY:Information Session: AI-Hub Zuid-Holland x Breaking Barriers
DESCRIPTION:Information Session: AI-Hub Zuid-Holland x Breaking Barriers\n(The event will be held in English) \nAI-Hub Zuid-Holland and Breaking Barriers are proud to host an Information Session\, where participants have the opportunity to learn more about the Breaking Barriers programme and how it supports ambitious AI startups in overcoming growth challenges and scaling into successful international players. \nBreaking Barriers\nAs an AI startup\, you want to break through\, grow\, and make an impact. Building a pioneering company comes with many challenges. How do you attract follow-up funding to accelerate your growth? How do you find and retain top talent to achieve your ambitions? And how do you ensure that your company reaches new and international markets? \nThe AiNed Breaking Barriers programme is designed to turn these kinds of challenges into opportunities. With a tailor-made programme\, we support you in accelerating your growth path. Our goal? Strengthen the Dutch AI ecosystem in such a way that promising Dutch startups become successful international players. To achieve this goal\, we work together with valuable partners. \nAre you an AI startup or scale-up interested in the programme? The event is mainly invite-only\, but we’re open to AI-driven startups and scale-ups who want to join the conversation. Register your interest and we’ll get in touch. \nAbout Breaking Barriers\nPreliminary Programme \n09.00 – 09.30 Walk in \n09.30 – 10.15 Information for Start-ups from Zuid-Holland about Breaking Barriers \n10.15 – 11.00 Short pitches by the startups and matchmaking with Breaking Barriers \n11.00 End of programme \nGet in Touch\nQuestions about this event? Feel free to contact us! \nJoost Poort\nDirector Mondai | House of AI\nAI Innovation Lead TU Delft\nDario Turelli\nBusiness Developer Mondai | House of AI
URL:https://mondai.tudelftcampus.nl/event/breaking-barriers-2/
LOCATION:Mondai | House of AI – Delft\, Molengraaffsingel 29\, Delft\, 2629 JD
ATTACH;FMTTYPE=image/png:https://mondai.tudelftcampus.nl/wp-content/uploads/sites/7/2024/10/Zuid-HollandAI-kleur.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Amsterdam:20260527T130000
DTEND;TZID=Europe/Amsterdam:20260527T173000
DTSTAMP:20260420T210908
CREATED:20260417T080624Z
LAST-MODIFIED:20260417T092614Z
UID:10000267-1779886800-1779903000@mondai.tudelftcampus.nl
SUMMARY:Bloopers of Brilliance: When Science Goes Sideways
DESCRIPTION:Mondai | House of AI is happy to host the next edition of the AI PhD & Postdoc Spring Symposium\, together with the TU Delft AI Initiative and the AI PhD Committee! \nBloopers of Brilliance: When Science Goes Sideways (the language of this event is English) \n\nAre you a PhD or postdoc researcher at TU Delft working on AI-related topics? You are cordially invited! \n\nHosted on May 27th (13:00 – 17:30) at Panorama XL@Mondai | House of AI\, this year’s event is going to shake things up and focus on a less-talked-about side of science – embracing scientific failures! The event includes poster pitches\, an interactive panel discussion on the importance of negative results\, errors\, and failures in science\, keynotes from final year PhDs\, and the ever-important borrel. It’s an excellent opportunity to present your research (particularly what did not go to plan) and network with fellow AI-focused scholars across campus. \nProgramme (preliminary) 13:00 – 17:30 \n\nKeynotes & talks from final year PhDs working in/with AI at TU Delft\nPanel discussion on embracing scientific failure\nPoster market: You are invited to contribute to this event with your own poster and/or abstract! See poster requirements below. Final deadline for submissions: May 18th\n\nThe afternoon session will end in a casual manner with drinks\, refreshments and an opportunity for networking. More details to be announced\, so keep an eye on this page for more updates on speakers and panellists! \nPosters and/or ‘failure’ abstract \nResearchers at all stages of their PhD and working in all different areas of AI are welcome: \n\nMachine learning and foundational AI techniques\nHuman-centered AI systems\nApplication of AI\nFairness\, bias\, legal\, and ethical considerations of AI\nEducation and AI\nDesign with AI\nReflexive and critical research on AI\nAnd more…\n\nPrizes for best poster and ‘failure abstracts’ to be announced! 
\nRequirements \nYou can submit either A) a poster with a short description of failure or B) a ‘failure abstract’\, aka a short description of the research you wanted to do\, but it didn’t quite work out. \n\nA) If you’re bringing a printed poster\, we request a short description of the efforts that went awry before the successful work. What went wrong in the project before it succeeded? It can be a misstep\, a small mistake\, or a significant error—it can be any way to show that research is rarely a smooth process.\n\nB) Alternatively\, you can submit a short description explaining the intended goal and how it did not go to plan. Here\, you don’t have to have succeeded; it can be an idea that you eventually abandoned! \n\nFor A) Poster A1 or A0 size \n\nPrinting is available via the AI Initiative for new posters in A0 format. Send in as PDF\, JPEG\, or PNG (portrait mode\, 300 DPI). \n\nFor both A) and B)\, we expect abstract submissions of up to 300 words. Send in as PDF or DOC(X).\nSubmissions can be made via the registration form available on this page\n\nRegister & submit your poster and/or ‘failure abstract’ by Monday\, 18 May. \nAny questions? Please contact the AI PhD Committee at AI-PhD-Committee@tudelft.nl \nSpeakers \nKeynotes & talks from final year PhDs \nModeling Discretization Error with the Bayesian Finite Element Method for Better Parameter Estimates by Anne Poot\nKeynote by Anne Poot (SLIMM Lab\, CEG)\nCan computation itself be probabilistic? In this talk\, I will give a crash course on the finite element method\, demonstrate the issue of discretization error\, and describe how this error can be modeled probabilistically. We will see that by reinterpreting the finite element method from a Bayesian point of view\, we can get better performance in downstream applications such as parameter estimation in inverse problems.\n \nCan neural networks design better structures faster? 
Neural parameterizations in topology optimization by Surya Manoj Sanu\n \nLayman talk by Surya Manoj Sanu (MACHINA Lab\, ME)\nAs engineers\, we constantly simplify problems so we can solve them faster. That’s why it sounds counterintuitive to add complexity to an already well-defined structural optimization problem. Why make things more complicated? In this talk\, we explore exactly that idea. We introduce an unsupervised neural network — the “extra complexity” — into a traditional topology optimization pipeline — the “simple” engineering workhorse. And surprisingly\, this added layer of intelligence can improve how we design structures. But\, as in all good science\, there’s not only the good. There’s also the bad — and sometimes the ugly. We will try to unpack all of this! \nSearch Machines for Architects by Casper van Engelenburg\nKeynote by Casper van Engelenburg (AiDAPT Lab\, A+BE)\nWhile image-based retrieval has drastically diversified the use cases of modern-day search engines\, their relevance judgments are far from optimal for disciplines like architecture\, which heavily rely on visual data that are fundamentally different from the natural photos most search engines are trained on. Where natural photo understanding focuses mainly on appearance (color\, texture)\, architectural drawing understanding is about interpreting graphic-like drawings—floor plans\, sections\, axonometric projections\, etc.—that emphasize the composition and organization of the spaces that we live in. Therefore\, to accurately judge relevance between architectural drawings\, we must rethink what it means to be similar and explore how to train domain-specific models or fine-tune pretrained large vision models on architectural data. In this talk\, I will present several of our recent works that highlight advancements in floor plan representation learning and the necessity of building high-quality architectural datasets. 
\nTensor decompositions for the analysis of functional ultrasound data by Sofia Kotti\nLayman talk by Sofia Kotti (DeTAIL Lab\, EEMCS)\nFunctional ultrasound indirectly measures brain activity through changes in cerebral blood flow. Tensor decompositions provide a natural framework for analysing the acquired data by exploiting their multidimensional structure and expressing them in terms of latent components. This can help identify underlying spatial and temporal patterns in brain activity\, supporting improved interpretation of functional ultrasound measurements. \nPanel: Embracing Scientific Failure \nHow can we think about and practically approach our failures in science? And how can we make them visible through\, for instance\, documentation? \n\n\nThis panel explores what scientific failure really means across different disciplines\, from rejected papers and failed grant applications\, to broader personal and professional setbacks. Speakers reflect on how failure is defined within their fields\, share their own experiences at both an individual and disciplinary level\, and discuss how these moments have shaped their work. By examining not just the challenges but also the lessons learned\, the panel aims to highlight how failure can be an essential and productive part of the scientific process. \nDuring this panel discussion\, Elvire Landstra (Tilburg University)\, Agostino Nickl (A+BE)\, Nazli Cila (IDE)\, and Megha Khosla (EEMCS) will shed light on different definitions of ‘scientific failure’ and how to deal with them.
URL:https://mondai.tudelftcampus.nl/event/ai-phd-postdoc-symposium-bloopers-brilliance/
LOCATION:Panorama @Mondai | House of AI – Delft\, Molengraaffsingel 29\, Delft\, 2629 JD\, Netherlands
ATTACH;FMTTYPE=image/jpeg:https://mondai.tudelftcampus.nl/wp-content/uploads/sites/7/2025/10/TU250612_4059_0283_lowres.jpg
ORGANIZER;CN="Mondai | House of AI":MAILTO:mondai@tudelft.nl
END:VEVENT
END:VCALENDAR