BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Mondai - ECPv6.15.17//NONSGML v1.0//EN
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-ORIGINAL-URL:https://mondai.tudelftcampus.nl/en/
X-WR-CALDESC:Events for Mondai
REFRESH-INTERVAL;VALUE=DURATION:PT1H
X-Robots-Tag:noindex
X-PUBLISHED-TTL:PT1H
BEGIN:VTIMEZONE
TZID:Europe/Amsterdam
BEGIN:DAYLIGHT
TZOFFSETFROM:+0100
TZOFFSETTO:+0200
TZNAME:CEST
DTSTART:20240331T010000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:+0200
TZOFFSETTO:+0100
TZNAME:CET
DTSTART:20241027T010000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:+0100
TZOFFSETTO:+0200
TZNAME:CEST
DTSTART:20250330T010000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:+0200
TZOFFSETTO:+0100
TZNAME:CET
DTSTART:20251026T010000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:+0100
TZOFFSETTO:+0200
TZNAME:CEST
DTSTART:20260329T010000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:+0200
TZOFFSETTO:+0100
TZNAME:CET
DTSTART:20261025T010000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:+0100
TZOFFSETTO:+0200
TZNAME:CEST
DTSTART:20270328T010000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:+0200
TZOFFSETTO:+0100
TZNAME:CET
DTSTART:20271031T010000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTART;TZID=Europe/Amsterdam:20251114T093000
DTEND;TZID=Europe/Amsterdam:20251114T123000
DTSTAMP:20260421T194204Z
CREATED:20251020T134854Z
LAST-MODIFIED:20251113T131729Z
UID:10000248-1763112600-1763123400@mondai.tudelftcampus.nl
SUMMARY:Deep Dive Workshop - AI in Warfare: Actions\, Policies and Practices
DESCRIPTION:Global AI Policy Summit Deep Dive Workshop\nAI in Warfare: Actions\, Policies and Practices\nArtificial Intelligence (AI)\, Big Data analytics\, and Automated Decision Making (ADM) are increasingly being used for surveillance\, targeting\, and autonomous or semi-autonomous drone warfare\, in addition to proliferating misinformation on social media during war and conflicts. Conversely\, related technologies are also leveraged for the investigation of human rights violations\, e.g. as done by members of Forensic Architecture\, Interpret\, Airwars and Bellingcat. Meanwhile\, campaigns such as Stop Killer Robots are working through the UN and other forums towards an international ban on\, at minimum\, Lethal Autonomous Weapon Systems (LAWS). \nHow should researchers\, scholars\, government actors and civil society engage and act critically to highlight\, investigate\, and prevent the use of AI-based systems in perpetuating human rights violations in and out of warfare\, and devise critical policies and practices that mitigate harms to society today? \nThe current AI Act has many exceptions for the use of AI in policing\, surveillance and military applications\, while there are hardly any enforceable provisions related to the use of EU technologies globally that violate human rights. This workshop engages insider and critical perspectives from military officers\, AI researchers and scholars in International Humanitarian Law (IHL)\, human rights activists\, Members of Parliament\, and NGOs dealing with these concerns. We will examine these aspects in the context of ongoing conflicts in Gaza and Ukraine\, among others globally\, from the role of AI in spreading misinformation in war\, to autonomous warfare\, and civic / human rights violations. 
Our aim is to encourage interdisciplinary and critical theorizing on what policies\, regulations and practices are urgently needed to address these emerging concerns\, while developing an action agenda for future research\, concrete policy proposal work\, and pragmatic societal outcomes. \nWorkshop Programme\nThis workshop takes place in the “Panorama @Mondai”\n09.30 – 09.40 Opening & Key Themes Presented by Nitin Sawhney & Petter Ericson\n09.40 – 11.00 Panel Presentations by Panellists + Q&A and Discussions\n11.00 – 11.10 Short Break\n11.10 – 11.30 Form Participant groups led by Invited Experts + Intros & Perspectives\n11.30 – 12.15 Workshop Deliberations and Formulating Key Outcomes\n12.15 – 12.30 Wrap-Up & Next Steps \nExpert Panellists \nVirginia Dignum\, Professor in Responsible Artificial Intelligence and Director of the AI Policy Lab\, Umeå University and Member of UN High Level Advisory Body on AI\n\nMartine Jaarsma\, Doctoral Researcher\, International Humanitarian Law\, Military uses of AI and Critical Legal Studies\, Department of Political Science\, University of Antwerp\nMegan Karlshoej-Pedersen\, Policy Specialist at Airwars (presenting online)\nRainer Rehak\, Research Associate\, Weizenbaum Institute\nIlse Verdiesen\, Research Fellow\, Netherlands Defense Academy (NLDA)\, Chief of Staff Joint IV Commando (Col)\nTaylor Kate Woodcock LL.M.\, PhD Researcher in Public International Law\, Asser Institute\n\nWorkshop Outcomes \nImplications for Policy Research Agenda\nBarriers and obstacles to enforcing / moderating use of AI in warfare – conventions\, regulations and international treaties? 
What can we do to highlight / change them?\nFostering new collaborations within the group for research\, policy action or advocacy\nPlanning the next Contestations.AI symposium (in 2026) and opportunities for similar workshops at other conferences?\nConcrete Action Items:\n\nGlobAIPol signing up to Stop Killer Robots?\nWhitepaper\, opinion piece or journal article?\nStakeholder deliberations (as follow-up workshop)?\n\n\n\nRelated Events\, Articles and Reports\nEvents \nContestations.AI: Transdisciplinary Symposium on AI\, Human Rights and Warfare\, Helsinki\, Oct 23\, 2024: https://contestations.ai/ \nArticles and Reports \nOp-Ed: Regulating military use of AI is in everyone’s interest\, Michael C. Horowitz\, Financial Times\, October 13\, 2025. \nResponsible by Design: Strategic Guidance Report on the Risks\, Opportunities\, and Governance of Artificial Intelligence in the Military Domain. Global Commission on Responsible Artificial Intelligence in the Military Domain (GC REAIM)\, September 2025. \nCoveri\, Andrea\, et al. Big Tech and the US Digital-Military-Industrial Complex. Intereconomics\, vol. 60\, no. 2\, Sciendo\, 2025\, pp. 81-87.\n\nThe rolling text of the Group of Governmental Experts working on the Convention on Certain Conventional Weapons\, in particular on the legal status of Lethal Autonomous Weapon Systems.
URL:https://mondai.tudelftcampus.nl/en/event/ddw-ai-in-warfare/
LOCATION:Panorama @Mondai | House of AI – Delft\, Molengraaffsingel 29\, Delft\, 2629 JD\, Netherlands
ATTACH;FMTTYPE=image/png:https://mondai.tudelftcampus.nl/wp-content/uploads/sites/7/2025/10/TUD_Mondai_AI_MHC_workshop_v2.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Amsterdam:20251114T093000
DTEND;TZID=Europe/Amsterdam:20251114T123000
DTSTAMP:20260421T194204Z
CREATED:20251029T153703Z
LAST-MODIFIED:20251029T154202Z
UID:10000252-1763112600-1763123400@mondai.tudelftcampus.nl
SUMMARY:Deep Dive Workshop - Contextualising AI Principles: Universal Guidelines or Domain-Specific Policy?
DESCRIPTION:Global AI Policy Summit Deep Dive Workshop\nContextualising AI Principles: Universal Guidelines or Domain-Specific Policy?\nArtificial Intelligence principles and guidelines have proliferated in recent years—transparency\, fairness\, accountability\, and human oversight are widely endorsed. However\, when we attempt to implement and operationalise these principles in specific domains\, fundamental dilemmas emerge: \nIn crisis management: How do we balance the need for privacy with transparency when lives are at stake? Can we afford the time for human oversight in rapidly evolving disasters? \nIn education: How do we ensure fairness in AI-supported learning while respecting pedagogical autonomy and diverse learning needs? \nIn mobility: When does mandatory human oversight become a safety liability in time-critical traffic situations? How do we operationalize accountability when decisions are distributed across interconnected systems? \nThis workshop examines a fundamental question for AI policy: What principles have to remain generic across domains\, and what should or must be contextualized? The EU AI Act attempts to address this through risk-based categories\, but how well does this approach capture domain-specific tensions? Through structured dialogue across domains\, we will map where universal principles break down\, why contextualisation is necessary\, and what this means for developing both sector-specific guidelines and cross-cutting policy frameworks. \nWorkshop Programme\nThis workshop takes place in the “Connect @Mondai” \n09.30 – 10.00 Opening & Domain Challenges.\nWelcome and short presentations: What are the specific dilemmas when applying AI principles in crisis management\, education\, and mobility? \n10.00 – 10.30 Plenary Principle Mapping.\nInteractive session: Starting from the EU Ethics Guidelines for Trustworthy AI and the Dutch Value Compass: Which principles do we prioritize? Where do different domains face irreconcilable conflicts? 
\n10.30 – 11.00 Coffee Break \n11.00 – 12.30 Domain Deep Dives\, Cross-Domain Synthesis\, and Next Steps\n> Structured discussions: Develop concrete scenarios where generic principles prove inadequate or create harm\, or where additional principles are needed. What makes each domain different? What adaptations are needed?\n> Bringing insights together: What patterns emerge? Where is universality possible\, and where is contextualization essential?\n> Discussion of potential collaborative outputs and future dialogue \nSpeakers\nModerator: \n\nTina Comes\, Scientific Director DLR Institute for the Protection of Terrestrial Infrastructures; Professor in Decision Theory & ICT for Resilience\, Delft University of Technology\n\nExpert Speakers: \n\nThomas Kox\, Head of Research Group “Digitalisation and Networked Security”\, Weizenbaum Institute for the Networked Society\nArkady Zgonnikov\, Assistant Professor\, Human-Technology Interaction and Centre for Meaningful Human Control\, Delft University of Technology\nDuuk Baten\, Advisor on Digitalisation and AI in Education\, SURF
URL:https://mondai.tudelftcampus.nl/en/event/ddw-contextualising-ai-principles/
LOCATION:Connect @Mondai | House of AI – Delft\, Molengraaffsingel 29\, Delft\, 2629 JD\, Netherlands
ATTACH;FMTTYPE=image/jpeg:https://mondai.tudelftcampus.nl/wp-content/uploads/sites/7/2025/10/TU250612_4059_0283_lowres.jpg
ORGANIZER;CN="Mondai | House of AI":MAILTO:mondai@tudelft.nl
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Amsterdam:20251118T100000
DTEND;TZID=Europe/Amsterdam:20251119T170000
DTSTAMP:20260421T194204Z
CREATED:20251029T104431Z
LAST-MODIFIED:20251029T104434Z
UID:10000251-1763460000-1763571600@mondai.tudelftcampus.nl
SUMMARY:Symposium: Feminist AI and Collective Wellbeing
DESCRIPTION:In the ‘Feminist AI and Collective Wellbeing’ symposium\, we invite researchers\, artists\, and practitioners to explore and contest the promises and pitfalls of AI in shaping collective wellbeing. \n\n\nPromises of AI include better societal wellbeing through improved healthcare\, relieved workloads\, or efficient usage of natural resources. Yet not everyone’s wellbeing counts evenly\, as AI simultaneously depends on and disrupts collectivity\, for instance\, through its pressure on shared environmental resources\, worker health\, and data exploitation and extractivism. How can we reimagine these dynamics\, and centre collective wellbeing so that it becomes a basis for caring and sustaining relationships around AI development and implementation? \nOur goal is not to provide definitive answers or fixed definitions of wellbeing and collectivity\, but to open a shared space for inquiry\, provocation\, and speculation. By foregrounding feminist\, decolonial\, and ecological perspectives\, we aim to imagine futures in which AI development and adoption are aligned with collective wellbeing. \nThe symposium invites participants to explore how these relationships and entanglements might be reimagined\, and how AI can be critically reshaped\, reoriented\, or even refused in pursuit of more collective and caring futures. \nThrough international keynotes and a workshop on art-based AI inquiry\, we invite participants to reflect on these questions: \n\n\nWhat collectives are prioritized in the development of AI? Whose wellbeing is valued\, and whose is erased to maintain the wellbeing of others? \n\n\nHow can communities engage with AI on their own terms? What material resources\, infrastructures\, or types of data would they need to do so? \n\n\nWhat collective futures and imaginaries might we create together\, rooted in shared wellbeing rather than extractive logics? 
\n\n\nCan AI ever be truly aligned with collective wellbeing\, or are there cases where the most ‘caring’ act might be to refuse or resist AI altogether? \n\nRegister Here\n18 November – Part 1 | 10.00 – 15.00 \nWorkshop on art-based AI inquiry for collective knowledge generation \nOrganizers: Feminist Generative AI Lab with Virginia Tassinari and Vera van der Burg \nGuests: Soyun Park\, Mafalda Gamboa\, and Elvia Vasconcelos. \nIn this workshop\, we explore art-based inquiry as an alternative form of knowledge generation\, which can complement and enrich traditional approaches to research in AI. We invite participants to engage with new\, unusual\, artistic\, and embodied forms of exploration\, to reflect on the symposium theme. \nPlease note that the workshop has limited spots. Lunch is included. \n18 November – Part 2 | 15.00 – 17.00 \nKeynotes + Discussion + Drinks \nPlease note you can choose to register for only this part of the symposium. \nRegister Here\n19 November | 10.00 – 17.00 \nPhD Day \nFollowing the symposium on November 18th\, we invite PhD candidates to join us for a dedicated day of peer exchange\, collaborative feedback\, and dialogue. The PhD Day offers PhD candidates working on topics such as AI\, feminism\, care\, collectivity\, sustainability\, digital labor\, and related themes an opportunity to continue exploring the theme of the symposium in relation to their own research themes and practices in an interdisciplinary environment. \nThe PhD Day programme will include peer review sessions that allow participants to share work in progress and to receive feedback from their peers; interactive activities that address the challenges of working as a PhD researcher; as well as community building and networking opportunities. \nMore info on PhD Day
URL:https://mondai.tudelftcampus.nl/en/event/symposium-feminist-ai-and-collective-wellbeing/
LOCATION:Next: Delft\, Molengraaffsingel 8\, Delft\, 2629 JD\, Netherlands
ATTACH;FMTTYPE=image/jpeg:https://mondai.tudelftcampus.nl/wp-content/uploads/sites/7/2025/10/event.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Amsterdam:20251127T143000
DTEND;TZID=Europe/Amsterdam:20251127T170000
DTSTAMP:20260421T194204Z
CREATED:20251008T110141Z
LAST-MODIFIED:20251106T100050Z
UID:10000244-1764253800-1764262800@mondai.tudelftcampus.nl
SUMMARY:Best AI-Related MSc Thesis Award 2025
DESCRIPTION:Mondai | House of AI and the TU Delft AI Initiative are happy to host\nthe AI-Related MSc Thesis Awards 2025!\nThe Best AI-Related MSc Thesis Award (short: AI Thesis Award) is a new award that celebrates outstanding master’s research at TU Delft dedicated to the development\, application or contexts of artificial intelligence. Master’s students from all faculties participated with their finished thesis\, on the condition that the research is centered on or involves AI. The prize will be awarded in two categories: IN AI for research that advances AI itself\, and WITH AI for research that applies AI in a specific domain. One TU Delft graduate will be selected in each category after the top 3 candidates pitch their thesis at this award ceremony. \nProgramme \n14.30 – Walk-in\n15.00 – Opening\n15.15 – Thesis pitches\n16.15 – Break + walk-in alumni community\n16.30 – Award ceremony & Kick Off Alumni Community for AI\, Data & Digitalisation\n17.00 – Network drinks \nFinalists Best AI-Related MSc Thesis Award 2025 \nCategory IN AI \n\nKrzysztof Piotr Baran (Computer Science @Faculty of EEMCS): Federated MaxFuse: Diagonal Integration of Weakly Linked Spatial and Single-cell Data through Federated Learning\nPrajit Bhaskaran (Computer Science @Faculty of EEMCS): Transformers Can Do Bayesian Clustering\nSimon Gebraad (Robotics @Faculty of ME): LeAP: Label any Pointcloud in any domain using Foundation Models\n\nCategory WITH AI \n\nAntonio Magherini (Civil Engineering @Faculty of CEG): JamUNet: predicting the morphological changes of braided sand-bed rivers with deep learning\nIsa Oguz (Management of Technology @Faculty of TPM): Victim Blaming Bias in Traffic Accidents Using Large Language Models\nJeroen Hagenus (Robotics @Faculty of ME): Realistic Adversarial Attacks for Robustness Evaluation of Trajectory Prediction Models\n\nAI Alumni Community Kick Off \nIf you only want to attend the launch of the AI Alumni Community\, the walk-in is between 16:15 – 16:30 and drinks start 
around 17:00 \nDuring this event\, we also launch the new AI\, Data and Digitalisation Alumni Community (short: AI Alumni Community). By connecting TU Delft alumni across generations\, disciplines and sectors\, we aim to unlock new opportunities for innovation\, strengthen the bridge between research\, application and societal value\, and shape a digital future that benefits everyone. This community is open to all past\, present and future TU Delft alumni with an interest in or background in AI\, data and digitalisation. Community members include graduates from bachelor’s and master’s programmes as well as PhD alumni from across the university.
URL:https://mondai.tudelftcampus.nl/en/event/best-ai-related-msc-thesis-award-2025/
LOCATION:Mondai | House of AI – Delft\, Molengraaffsingel 29\, Delft\, 2629 JD\, Netherlands
ATTACH;FMTTYPE=image/jpeg:https://mondai.tudelftcampus.nl/wp-content/uploads/sites/7/2025/10/TU250612_4059_0283_lowres.jpg
ORGANIZER;CN="Mondai | House of AI":MAILTO:mondai@tudelft.nl
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Amsterdam:20260219T120000
DTEND;TZID=Europe/Amsterdam:20260219T133000
DTSTAMP:20260421T194204Z
CREATED:20260122T124509Z
LAST-MODIFIED:20260122T124509Z
UID:10000262-1771502400-1771507800@mondai.tudelftcampus.nl
SUMMARY:TU Delft AI Lunch: AI Regulations (and how to work within them)
DESCRIPTION:Mondai | House of AI is happy to host the new edition of the TU Delft AI Lunch:\nAI Regulations (and how to work within them)\nThis edition of the Delft AI Lunch is focused on AI Regulations and how to work within them. Contributing to the panel discussion are: \n\nAI Act expert Hannah Ruschemeier (Professor of Public Law at Universität Osnabrück)\,\nEmpirical law expert on the GDPR Julia Krämer (PhD at EUR)\,\nData steward Nicolas Dintzner (TPM Faculty)\, and a cybersecurity law expert (TBA).\n\nMarie-Therese Sekwenz (TPM\, AI Futures Lab) is moderating the panel. \nThis event includes a free lunch\, for which registration is required (help us reduce food waste!) \n(This event will be held in English) \nProgramme\n12.00 – 12.30 | Lunch & networking\n12.30 – 13.30 | Panel AI Regulations and how to work within them: moderated by Marie-Therese Sekwenz\, featuring Hannah Ruschemeier\, Julia Krämer\, and Nicolas Dintzner \nThe Delft AI (Lab) Lunch series\nThis series is part of the Delft AI (Lab) Lunches\, a recurring meet-up hosted by the TU Delft AI Labs & Talent community at Mondai | House of AI.\nEvery session\, we host a panel to discuss challenges and developments at the intersection of AI and a specific field. During these events\, you can participate\, learn\, make connections\, inspire and be inspired by and with the Delft AI Community. We invite all interested staff and students from TU Delft to join these sessions. Please contact community manager Charlotte Boelens for more information about this series or the TU Delft AI Labs & Talent Programme. \nNote for TU Delft PhDs\nThe TU Delft AI Lunch series is eligible for earning Discipline Related Skills GSC with the ‘Form for earning GSC for TU Delft AI(-related) seminars’. Check with your local Faculty Graduate School (FGS) whether it offers this option for earning Discipline Related Skills GSC\, and with your supervisors whether they accept our seminars on your Doctoral Education (DE) list. 
If you already have a form\, don’t forget to bring it with you.
URL:https://mondai.tudelftcampus.nl/en/event/tu-delft-ai-lunch-ai-regulations/
LOCATION:Panorama @Mondai | House of AI – Delft\, Molengraaffsingel 29\, Delft\, 2629 JD\, Netherlands
CATEGORIES:AI Lab Lunch
ATTACH;FMTTYPE=image/jpeg:https://mondai.tudelftcampus.nl/wp-content/uploads/sites/7/2025/06/TU250612_4059_0085-Verbeterd-NR_lowres.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Amsterdam:20260324T123000
DTEND;TZID=Europe/Amsterdam:20260324T153000
DTSTAMP:20260421T194204Z
CREATED:20260203T113145Z
LAST-MODIFIED:20260223T103232Z
UID:10000265-1774355400-1774366200@mondai.tudelftcampus.nl
SUMMARY:Session AI Gigafactory Rotterdam
DESCRIPTION:Mondai | House of AI and the AI-hub Zuid-Holland\, together with Volt\,\nare pleased to host this session about the AI Gigafactory in Rotterdam\n(the language of this event is Dutch) \nAI is developing at a rapid pace. AI technologies and innovations are widely used in industry\, public services and education. Questions about the position of the Netherlands and Europe in this development are on everyone’s mind; how do we shape our digital future and autonomy? \nOn Tuesday 24 March\, Volt\, initiator of the European AI Gigafactory in Rotterdam\, the AI-hub Zuid-Holland and TU Delft – Mondai | House of AI host a session about the plans for the realisation of the AI Gigafactory. The session focuses on the importance of developing such infrastructure. Not only for the region\, for the Netherlands and for Europe\, but also for the sovereignty of your organisation. In addition\, we would like to get an idea of potential use cases from your (future) practice and how these can be translated into computing needs and supporting facilities in the AI Gigafactory. \nWant to know more about the AI Gigafactory? Read more on the Volt page. This event is by invitation only\, but we are certainly open to any stakeholders who want to be part of this conversation. Sign up and we will get in touch. \nProgramme\n12.30 – 13.00 Walk-in and lunch\n13.00 – 13.15 Opening and introduction by Joost Poort on behalf of TU Delft – Mondai | House of AI and the AI-hub Zuid-Holland;\n13.15 – 13.45 Presentation of plans for\, and the progress of\, the AI Gigafactory in Rotterdam by Han de Groot on behalf of Volt;\n13.45 – 14.30 Insights into AI & Compute from several domains \n\nErick Webbe\, CEO – Kickstart AI\,\nSven Hamelink\, Head Science & Technology – the Dutch Police\,\nErik Scherff\, CIO | IT Director TU Delft.\n\n14.30 – 15.30 Discussion moderated by Tom Jessen \nGet in Touch\nQuestions about the AI Gigafactory? Do get in touch with us! 
\nJoost Poort\nManaging Director Mondai | House of AI\nAI Innovation Lead TU Delft\nHan de Groot\nCEO Volt\nInitiator of the AI-Gigafactory
URL:https://mondai.tudelftcampus.nl/en/event/session-ai-gigafactory-rotterdam/
LOCATION:Panorama @Mondai | House of AI – Delft\, Molengraaffsingel 29\, Delft\, 2629 JD\, Netherlands
ATTACH;FMTTYPE=image/png:https://mondai.tudelftcampus.nl/wp-content/uploads/sites/7/2026/02/Volt-AIGF.png
ORGANIZER;CN="Mondai | House of AI":MAILTO:mondai@tudelft.nl
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Amsterdam:20260506T090000
DTEND;TZID=Europe/Amsterdam:20260507T170000
DTSTAMP:20260421T194204Z
CREATED:20251217T183837Z
LAST-MODIFIED:20260415T094100Z
UID:10000257-1778058000-1778173200@mondai.tudelftcampus.nl
SUMMARY:Designing and Developing Ethically Aligned Defence AI
DESCRIPTION:Mondai | House of AI is pleased to host the\nDesigning and Developing Ethically Aligned Defence AI Conference\norganised by the ELSA Defense Lab\, in collaboration with the TU Delft Digital Ethics Centre\nAdvances in artificial intelligence (AI) are enabling military systems to operate in environments where uncertainty\, adversarial dynamics\, and time-critical decision-making are the norm rather than the exception. In such contexts\, ethical design cannot rely solely on predictable scenarios\, assumptions of human oversight\, or static rule-based constraints; rather\, it requires careful and substantial ethical programming and design to ensure that AI-enabled systems behave in alignment with moral and legal principles throughout their operational lifecycle. \nThis conference explores how ethically aligned military AI can be conceived\, designed\, and developed for deployment in uncertain\, adversarial\, and time-critical environments. Across two days\, contributors examine normative and methodological foundations related to the embedding of moral and ethical constraints during the early stages of the lifecycle of military AI systems. 
\nConference Programme\nDay 1 – May 6\n08.30 – 09.00 Walk-in and Registration\n09.15 – 09.30 Welcoming Remarks \n09.30 – 10.15 From Principles to Practice: An Actionable Value-Based Risk Governance Dashboard for Defense AI by Jasper van der Waa\, Lotte Kerkkamp-de Rijcke and Birgit van der Stigchel (Netherlands Organization for Applied Scientific Research-TNO)\n10.15 – 11.00 Ethical Hazard Assessment: A Functional Approach to Assess Machine Learning Risks in Airborne Weapon Systems by Hauke Budig (Hamburg University of Technology)\, Volker Gollnick (Hamburg University of Technology)\, Nathan Gabriel Wood (Hamburg University of Technology / California Polytechnic State University San Luis Obispo / Center for Environmental and Technology Ethics – Prague) and Scott Robbins (Karlsruhe Institute of Technology) \n11.00 – 11.30 Break \n11.30 – 12.15 Meeting the Moral Responsibilities Associated with Dual-Use AI Through Three Practical Solutions by Daniel Trusilo (University of St. Gallen) and David Danks (University of Virginia)\n12.15 – 13.00 The Disinformation Bomb: Generative AI\, Deepfake Detection\, and the Industrialization of Deception by Mark Evenblij (DuckDuckGoose) \n13.00 – 14.00 Lunch Break \n14.00 – 14.45 Designing Responsible AI for Cognitive Warfare by Jurriaan van Diggelen\, Aletta Eikelboom\, Neill Bo Finlayson\, Jose Kerstholt\, Kimberley Kruijver (Netherlands Organization for Applied Scientific Research-TNO)\n14.45 – 15.30 Rights-Preserving Framework to Bot Detection in AI-Enabled Cognitive Warfare by Henning Lahmann (Leiden University) and Perica Jovchevski (Delft University of Technology) \n15.30 – 16.00 Break \n16.00 – 17.30 Keynote Lecture: Military AI\, Transdisciplinarity and the Politics of Design by Filippo Santoni de Sio (Eindhoven University of Technology) \nDay 2 – May 7\n09.00 – 09.30 Walk-in \n09.30 – 10.15 AI-Enabled Decision-Support Systems and the In Bello Trilemma: Recalibrating Feasible Precaution in Armed Conflict by Ann-Katrien Oimann 
(Tilburg University)\n10.15 – 11.00 Automated Adversariality: LLMs\, Objectivity\, and Autonomy in Intelligence Analysis by Nicholas Johnston (Delft University of Technology / Netherlands Organization for Applied Scientific Research-TNO) and Martin Sand (Delft University of Technology) \n11.00 – 11.30 Break \n11.30 – 12.15 Sabotage and Espionage in Grey Zone Warfare: Responsible Data Integrated Threat Assessment of Shadow Fleet Activities by Liselotte Polderman-Borst (Leiden University / Delft University of Technology / Netherlands Defense Academy) and Stefan Buijsman (Delft University of Technology)\n12.15 – 13.00 Encoded Values: Tradeoffs in Programming Language and Development Methods for Military Software by Joshua S. Greenberg and Varija Mehta (Cornell University) \n13.00 – 14.00 Lunch Break \n14.00 – 14.45 Meaningful Human Control in C-UAS and Swarming Strike by Lennart Bult and Flip van Wijk (Emergent Swarm Solutions B.V.)\n14.45 – 15.30 Authority\, Accountability and AI: The Case for Indexed Alignment by Bryce Goodman (University of Oxford) \n15.30 – 16.00 Break \n16.00 – 17.30 Keynote Lecture: Empirical Perspectives on the Ethical Use and Development of Military AI by Christine Boshuijzen-van Burken (Netherlands Defense Academy / Eindhoven University of Technology) \nOrganisation \nPerica Jovchevski\, Post-doctoral Researcher in the section of Ethics and Philosophy of Technology at TU Delft.\nStefan Buijsman\, Associate Professor Responsible AI at TU Delft.
URL:https://mondai.tudelftcampus.nl/en/event/elsa-defense-designing-developing-ethically-ai/
LOCATION:Connect @Mondai | House of AI – Delft\, Molengraaffsingel 29\, Delft\, 2629 JD\, Netherlands
ATTACH;FMTTYPE=image/jpeg:https://mondai.tudelftcampus.nl/wp-content/uploads/sites/7/2025/12/TUD_Mondai_AI_GlobAIPol_networking.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Amsterdam:20260527T130000
DTEND;TZID=Europe/Amsterdam:20260527T173000
DTSTAMP:20260421T194204Z
CREATED:20260417T080900Z
LAST-MODIFIED:20260417T092632Z
UID:10000268-1779886800-1779903000@mondai.tudelftcampus.nl
SUMMARY:Bloopers of Brilliance: When Science Goes Sideways
DESCRIPTION:Mondai | House of AI is happy to host the next edition of the AI PhD & Postdoc Spring Symposium\, together with the TU Delft AI Initiative and the AI PhD Committee! \nBloopers or Brilliance: When Science Goes Sideways (This event will be held in English) \n\nAre you a PhD or postdoc researcher at TU Delft working on AI-related topics? You are cordially invited! \n\nHosted on May 27th (13:00 – 17:30) at Panorama XL @Mondai | House of AI\, this year’s event is going to shake things up and focus on a less-talked-about side of science – embracing scientific failures! The event includes poster pitches\, an interactive panel discussion on the importance of negative results\, errors\, and failures in science\, keynotes from final-year PhDs\, and the ever-important drinks. It’s an excellent opportunity to present your research (particularly what did not go to plan) and network with fellow AI-focused scholars across campus. \nProgramme (preliminary) 13:00 – 17:30 \n\nKeynotes & talks from final-year PhDs working in/with AI at TU Delft\nPanel discussion on interdisciplinary research & publications\nPoster market: You are invited to contribute to this event with your own poster and/or abstract! See poster requirements below. Final submission deadline: 18 May\n\nThe afternoon session will end in a casual manner with drinks\, refreshments and an opportunity for networking. More details to be announced\, so keep an eye on this page for updates on speakers and panellists! \nPosters and/or ‘failure’ abstracts\nResearchers at all stages of their PhD\, working in all different areas of AI\, are welcome: \n\nMachine learning and foundational AI techniques\nHuman-centered AI systems\nApplication of AI\nFairness\, bias\, legal\, and ethical considerations of AI\nEducation and AI\nDesign with AI\nReflexive and critical research on AI\nAnd more…\n\nPrizes for best poster and ‘failure abstract’ to be announced! 
\nRequirements \nYou can submit either A) a poster with a short description of a failure or B) a ‘failure abstract’\, aka a short description of the research you wanted to do\, but that didn’t quite work out. \n\nA) If you’re bringing a printed poster\, we request a short description of the efforts that went awry before the successful work. What went wrong in the project before it succeeded? It can be a misstep\, a small mistake\, or a significant error—anything that shows that research is rarely a smooth process.\n\nB) Alternatively\, you can submit a short description explaining the intended goal and how it did not go to plan. Here\, you don’t have to have succeeded; it can be an idea that you eventually abandoned! \n\n\nFor A) Poster in A1 or A0 size \n\nPrinting is available via the AI Initiative for new posters in A0 format. Send in as PDF\, JPEG\, or PNG (portrait mode\, 300 DPI). \n\n\nFor both A) and B)\, we expect abstract submissions of up to 300 words. Send in as PDF or DOC(X).\nSubmissions can be made via the registration form available on this page\n\n\n\nRegister & submit your poster and/or ‘failure abstract’ by Monday\, 18 May. \nAny questions? Please contact the AI PhD Committee at AI-PhD-Committee@tudelft.nl \nSpeakers\nKeynotes & talks from final-year PhDs \nModeling Discretization Error with the Bayesian Finite Element Method for Better Parameter Estimates by Anne Poot\nKeynote by Anne Poot (SLIMM Lab\, CEG)\nCan computation itself be probabilistic? In this talk\, I will give a crash course on the finite element method\, demonstrate the issue of discretization error\, and describe how this error can be modeled probabilistically. We will see that by reinterpreting the finite element method from a Bayesian point of view\, we can get better performance in downstream applications such as parameter estimation in inverse problems.\n \nCan neural networks design better structures faster? 
Neural parameterizations in topology optimization by Surya Manoj Sanu\n \nLayman talk by Surya Manoj Sanu (MACHINA Lab\, ME)\nAs engineers\, we constantly simplify problems so we can solve them faster. That’s why it sounds counterintuitive to add complexity to an already well-defined structural optimization problem. Why make things more complicated? In this talk\, we explore exactly that idea. We introduce an unsupervised neural network (the “extra complexity”) into a traditional topology optimization pipeline (the “simple” engineering workhorse). And surprisingly\, this added layer of intelligence can improve how we design structures. But\, as in all good science\, there’s not only the good. There’s also the bad\, and sometimes the ugly\, and we will try to unpack all of this! \nSearch Machines for Architects by Casper van Engelenburg\nKeynote by Casper van Engelenburg (AiDAPT Lab\, A+BE)\nWhile image-based retrieval has drastically diversified the use cases of modern-day search engines\, their relevance judgments are far from optimal for disciplines like architecture\, which heavily rely on visual data that are fundamentally different from the natural photos most search engines are trained on. Where natural photo understanding focuses mainly on appearance (color\, texture)\, architectural drawing understanding is about interpreting graphic-like drawings (floor plans\, sections\, axonometric projections\, etc.) that emphasize the composition and organization of the spaces that we live in. Therefore\, to accurately judge relevance between architectural drawings\, we must rethink what it means to be similar and explore how to train domain-specific models or fine-tune pretrained large vision models on architectural data. In this talk\, I will present several of our recent works that highlight advancements in floor plan representation learning and the necessity of building high-quality architectural datasets. 
\nTensor decompositions for the analysis of functional ultrasound data by Sofia Kotti\nLayman talk by Sofia Kotti (DeTAIL Lab\, EEMCS)\nFunctional ultrasound indirectly measures brain activity through changes in cerebral blood flow. Tensor decompositions provide a natural framework for analysing the acquired data by exploiting their multidimensional structure and expressing them in terms of latent components. This can help identify underlying spatial and temporal patterns in brain activity\, supporting improved interpretation of functional ultrasound measurements. \nPanel: Embracing Scientific Failure \nHow can we think about and practically approach our failures in science? And how can we make them visible through\, for instance\, documentation? \n\nThis panel explores what scientific failure really means across different disciplines\, from rejected papers and failed grant applications\, to broader personal and professional setbacks. Speakers reflect on how failure is defined within their fields\, share their own experiences at both an individual and disciplinary level\, and discuss how these moments have shaped their work. By examining not just the challenges but also the lessons learned\, the panel aims to highlight how failure can be an essential and productive part of the scientific process. \nDuring this panel discussion\, Elvire Landstra (Tilburg University)\, Agostino Nickl (A+BE)\, Nazli Cila (IDE)\, and Megha Khosla (EEMCS) will shed light on different definitions of ‘scientific failure’ and how to deal with them.
URL:https://mondai.tudelftcampus.nl/en/event/bloopers-of-brilliance-when-science-goes-sideways/
LOCATION:Panorama @Mondai | House of AI – Delft\, Molengraaffsingel 29\, Delft\, 2629 JD\, Netherlands
ATTACH;FMTTYPE=image/jpeg:https://mondai.tudelftcampus.nl/wp-content/uploads/sites/7/2025/10/TU250612_4059_0283_lowres.jpg
ORGANIZER;CN="AI PhD Committee":MAILTO:ai-phd-committee@tudelft.nl
END:VEVENT
END:VCALENDAR