BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Mondai - ECPv6.15.12.2//NONSGML v1.0//EN
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-ORIGINAL-URL:https://mondai.tudelftcampus.nl/en/
X-WR-CALDESC:Events for Mondai
REFRESH-INTERVAL;VALUE=DURATION:PT1H
X-Robots-Tag:noindex
X-PUBLISHED-TTL:PT1H
BEGIN:VTIMEZONE
TZID:Europe/Amsterdam
BEGIN:DAYLIGHT
TZOFFSETFROM:+0100
TZOFFSETTO:+0200
TZNAME:CEST
DTSTART:20240331T010000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:+0200
TZOFFSETTO:+0100
TZNAME:CET
DTSTART:20241027T010000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:+0100
TZOFFSETTO:+0200
TZNAME:CEST
DTSTART:20250330T010000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:+0200
TZOFFSETTO:+0100
TZNAME:CET
DTSTART:20251026T010000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:+0100
TZOFFSETTO:+0200
TZNAME:CEST
DTSTART:20260329T010000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:+0200
TZOFFSETTO:+0100
TZNAME:CET
DTSTART:20261025T010000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:+0100
TZOFFSETTO:+0200
TZNAME:CEST
DTSTART:20270328T010000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:+0200
TZOFFSETTO:+0100
TZNAME:CET
DTSTART:20271031T010000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTART;TZID=Europe/Amsterdam:20250310T143000
DTEND;TZID=Europe/Amsterdam:20250310T173000
DTSTAMP:20260405T184819Z
CREATED:20250128T154423Z
LAST-MODIFIED:20250303T082557Z
UID:10000217-1741617000-1741627800@mondai.tudelftcampus.nl
SUMMARY:Climate & AI Workshop #2
DESCRIPTION:Mondai | House of AI is glad to host the second Climate & AI workshop with the Climate Action Programme and the TU Delft | AI Initiative. On Monday March 10th (14:30 – 17:30) at Mondai | House of AI\, the Climate Action Programme and the TU Delft AI Initiative are co-organising their second workshop together. The event will bring together local researchers working on AI and climate and explore relevant overlaps and related opportunities. The purpose is to (further) delve into generating and shaping innovative ideas together. The programme includes introductions on behalf of both university-wide programmes\, short pitches from researchers\, breakout groups on 3-5 themes and will be concluded with a borrel. A minimum of 5 participants per theme is required. \nThe following thematic crossroads will be further defined during this Climate & AI event and relevant opportunities explored: \n\nAI for Sustainable Energy Transition\nUrban Data Analytics\nAI & Earth (Systems)\nSustainable Materials & Manufacturing\nMachine Learning for Climate Policy Support\n\n(This event will be held in English) \nProgramme \n14.30 – 14.45 Walk-in and drinks\n14.45 – 15.30 Opening\n15.30 – 16.45 Breakout per theme\n17.00 Borrel & (interthematic) networking \nThis event is aimed at (early/earlier career) faculty staff of all TU Delft faculties. Are you a PhD or postdoc and interested in these themes? Or are you an interested faculty staff member who can’t join the event on March 10th but want to stay updated about follow-up? Get in touch with the organisation team via Charlotte Boelens. \nAI and Climate / Climate and AI at TU Delft \nRead the recap of the first Climate/AI Workshop (3 October 2024) here. \nIn March 2024\, the Climate Action Programme (CAP) dedicated their monthly lecture to ‘AI and Climate’ with a talk on “Machine-learning for understanding atmospheric physics” by Geet George (CEG) and Jing Sun (EEMCS) – recording and presentations available here. 
The Academic Career Trackers of CAP’s 17 flagships recently also delved into AI with Angela Meyer (CEG) and AidroLab. A growing number of AI researchers also contribute to climate-related topics. A great example of this was the poster by Damla Akoluk (TPM)\, a PhD candidate from the HIPPO Lab\, who presented her work on aggregation at the Climate Action Festival. Read more about how this inspiring day went here. The last TU Delft AI Lunch of 2023/2024 was themed “The unprecedented environmental impacts of AI: a transdisciplinary discussion” with panellists Benedetta Brevini (New York University)\, Olya Kudina (TPM)\, Fanny Hidvégi (Policy Director at the AI Collaborative) and moderator Roel Dobbe (TPM). \nAre you interested in the intersection of AI & Climate and want to join or learn more about this growing climate/AI community at TU Delft? Join this workshop by registering above or below\, or reach out to the Climate Action Programme (Climate-Action@tudelft.nl) or the AI Initiative (AI-Initiative@tudelft.nl).
URL:https://mondai.tudelftcampus.nl/en/event/climate-ai-workshop-2/
LOCATION:Mondai | House of AI – Delft\, Molengraaffsingel 29\, Delft\, 2629 JD
ATTACH;FMTTYPE=image/jpeg:https://mondai.tudelftcampus.nl/wp-content/uploads/sites/7/2025/02/ClimateAI_2_uitgelicht.jpg
ORGANIZER;CN="Mondai | House of AI":MAILTO:mondai@tudelft.nl
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Amsterdam:20250312T120000
DTEND;TZID=Europe/Amsterdam:20250312T133000
DTSTAMP:20260405T184819Z
CREATED:20250129T122939Z
LAST-MODIFIED:20250219T122430Z
UID:10000218-1741780800-1741786200@mondai.tudelftcampus.nl
SUMMARY:TU Delft AI Lunch - Inclusive AI: Approaches to Digital Inclusion
DESCRIPTION:Mondai | House of AI is happy to host the new edition of the TU Delft AI Lunch:\nInclusive AI: Approaches to Digital Inclusion\nRealising inclusive digital systems requires the development of tools and methods that can achieve identified goals. This often takes the form of mitigating negative effects that lead to digital exclusion\, for example biases. However\, it can also incorporate positive values as design requirements such as fairness\, contestability\, and accessibility. Ideally\, this leads to digital tools and systems that promote participatory processes and just outcomes. But by what metrics or standards should we evaluate these approaches? Is a fair system necessarily reliable and accurate\, or are trade-offs required? How can ‘digital inclusion’ be operationalised as a design requirement at different levels\, from algorithms to design processes to artifacts and infrastructures? \nWhat do you think about the current approaches to digital inclusion? Join us for an interactive discussion to explore how to make inclusive AI actionable. \nThis event includes free lunch\, for which registration is required (help us reduce food waste!) \n(This event will be held in English) \nProgramme\n12.00 – 12.30 Walk-in and Lunch\n12.30 – 13.30 Panel Discussion on Inclusive AI with Nazli Cila (IDE)\, Alessandro Bozzon (IDE)\, Kars Alfrink (IDE)\, Emir Demirović (EEMCS)\, Roberto Rocco (ABE)\, Marie-Therese Sekwenz (TPM) \nModerators & Panellists \n\nModerator: Nazli Cila\nAssistant Professor\, Faculty of Industrial Design Engineering\nNazli is an Assistant Professor at the Department of Human-Centered Design\, Faculty of Industrial Design Engineering. Her work combines interaction design with humanities\, integrating empirical work (i.e.\, experimentation\, future modelling\, and prototyping) with practical and ethical issues surrounding collaborations with agents. 
Nazli is co-director of the AI DeMoS Lab.\n\nModerator: Alessandro Bozzon\nProfessor\, Faculty of Industrial Design Engineering\nAlessandro is Professor of Human-Centered AI and head of the Department of Sustainable Design Engineering\, Faculty of Industrial Design Engineering. His research lies at the intersection of human-computer interaction\, human computation\, user modelling\, and machine learning – developing methods and tools that support the design\, development\, control\, and operation of AI-enabled systems that are well-situated to actual human characteristics\, values\, intentions\, and behaviours. \n\nPanellist: Kars Alfrink\nPostdoc\, Faculty of Industrial Design Engineering\nKars is a researcher in the Department of Sustainable Design Engineering\, focusing on contestable AI. His research investigates how to design public AI systems so that they remain subject to societal control. Before entering academia\, Kars spent over 15 years as an interaction design consultant\, entrepreneur\, and community organizer – experiences that now shape his research. \n\nPanellist: Emir Demirović\nAssistant Professor\, Faculty of Electrical Engineering\, Mathematics and Computer Science\nEmir is an Assistant Professor at the Algorithmics group. He leads the Constraint Solving (“ConSol”) research group\, is co-director of the Explainable AI in Transportation Lab (“XAIT”) as part of the Delft AI Labs\, and is an ELLIS Scholar. His primary research interest lies in solving complex real-world problems through combinatorial optimisation and its integration with machine learning. \n\nPanellist: Roberto Rocco\nAssociate Professor\, Faculty of Architecture and the Built Environment\nRoberto is an Associate Professor of Spatial Planning and Strategy at the Department of Urbanism. 
He specialises in governance for the built environment and social sustainability\, as well as issues of governance in regional planning and design. This includes issues of spatial justice as a crucial dimension of sustainability transitions. \n\nPanellist: Marie-Therese Sekwenz\nPhD Candidate\, Faculty of Technology\, Policy and Management\nMarie-Therese is a PhD candidate at the Department of Multi-Actor Systems and a member of the AI Futures Lab. In her research she asks questions addressing aspects of rights and justice\, focused on content moderation\, platform governance and regulation\, AI and socio-technical and legal system design. Marie-Therese is also active as a journalist for the Austrian Broadcasting Agency (ORF). \n\nAbout the series Inclusive AI\nInclusivity can be understood as a desirable quality of AI systems\, encompassing a broad range of pressing societal and technical challenges for the responsible development and deployment of AI systems. It manifests in machine learning through concerns related to fairness\, bias\, and trustworthiness; societal issues currently underrepresented in discourse (e.g.\, feminism\, neurodiversity\, disability studies\, care ethics\, intersectionality\, more-than-human perspectives); and in engineering and robotics application domains such as healthcare\, mobility\, urban AI\, and the future of work. This new series aims to bring together a growing research community on campus to explore these topics and foster an interdisciplinary exchange. \nThis lunch is the second in a series of discussions on Inclusive AI throughout 2024-25. Stay tuned for details on upcoming events! \nThe Delft AI (Lab) Lunch series\nThis series is part of the monthly Delft AI (Lab) Lunches\, a recurring meet-up hosted by the TU Delft AI Labs & Talent community at Mondai | House of AI.\nEvery month\, we host a panel to discuss challenges and developments made at the intersection of AI and a specific field. 
During these events\, you can participate\, learn\, make connections\, inspire and be inspired by and with the Delft AI Community. We invite all interested staff and students from TU Delft to join these sessions. Please contact community manager Charlotte Boelens for more information about this series or the TU Delft AI Labs & Talent Programme. \nNote for TU Delft PhDs\nThe TU Delft AI Lunch series is eligible for earning Discipline Related Skills GSC with the ‘Form for earning GSC for TU Delft AI(-related) seminars’. Check with your local Faculty Graduate School (FGS) whether your FGS offers this option for earning Discipline Related Skills GSC\, and with your supervisors whether they accept our seminars on your Doctoral Education (DE) list. If you already have a form\, don’t forget to bring it with you.
URL:https://mondai.tudelftcampus.nl/en/event/tudelft-ailunch-inclusiveai-digitalinclusion/
LOCATION:Mondai | House of AI – Delft\, Molengraaffsingel 29\, Delft\, 2629 JD
ORGANIZER;CN="Mondai | House of AI":MAILTO:mondai@tudelft.nl
END:VEVENT
BEGIN:VEVENT
DTSTART;VALUE=DATE:20250415
DTEND;VALUE=DATE:20250417
DTSTAMP:20260405T184819Z
CREATED:20250115T151635Z
LAST-MODIFIED:20250219T123023Z
UID:10000220-1744675200-1744847999@mondai.tudelftcampus.nl
SUMMARY:AI in Bioscience: Redefining Frontiers
DESCRIPTION:Mondai is pleased to once again host the AI4b.io Symposium\nAI for Bioscience: Redefining Frontiers\nWe are excited to invite you to the fourth AI4b.io Symposium\, where cutting-edge Artificial Intelligence meets the complex challenges of Bioscience. Immerse yourself in two days of inspiring presentations and posters that dive into diverse topics ranging from large-scale factory scheduling to the genetic manipulation of microorganisms. Explore the intersection of AI and bioscience with engaging interdisciplinary discussions across various fields\, such as: \n\nScheduling in process industries\nFluid dynamic modeling\nLab automation\nOptimal experimental design\nMicrobiome precision feed\nMetabolic engineering\nMolecular machine learning\nHuman-aware constrained optimization\n\nThis event offers a unique platform to exchange ideas\, present innovative research\, and forge meaningful connections across academia and industry. Whether you’re advancing theoretical frameworks or driving practical applications\, this symposium fosters collaboration to push the frontiers of AI in bioscience. \n(This event will be held in English) \nRegister here!\nConfirmed Speakers\nKaroline Faust\nAssociate Professor in Microbiological Bioinformatics at KU Leuven\nGabriel D. Weymouth\nProfessor of Ship Hydromechanics at TU Delft\nVitor A.P. Martins dos Santos\nProfessor of Biomanufacturing and Digital Twins\, Wageningen University & Research\nDaniel Probst\nAssistant Professor\, Wageningen University & Research\n\nCall for Abstracts\nAI4b.io invites AI practitioners\, researchers\, and innovators to share their work through poster and oral presentations. This is a valuable opportunity to showcase your expertise and contribute to the growing field of AI in bioscience! \nTo participate\, please submit your abstract following the required abstract format. Applications are welcome from academia\, established companies\, and start-ups alike. 
\nAbstract Submission Deadline: February 12th\, 2025 \nAbout AI4b.io \nThe Artificial Intelligence Laboratory for Bioscience (AI4b.io) is a collaboration between dsm-firmenich and Delft University of Technology. AI4b.io aims at long-term innovation in the fields of Artificial Intelligence and Bioscience to develop biobased products and to optimize biobased production technologies. Its mission is to develop a deep understanding of how novel AI technology (methods\, techniques\, theories\, and algorithms) can strengthen the effectiveness and efficiency of relevant research and/or business processes in the biotech industry. \nFor five years\, five PhD researchers will work in the lab. Research topics will include\, among other things\, Digital twin and smart plant scheduling\, Digital twin for large-scale fermentation\, Digital twin of lab automation processes and self-learning platforms\, and Machine learning for iterative metabolic engineering.
URL:https://mondai.tudelftcampus.nl/en/event/ai-in-bioscience-redefining-frontiers/
LOCATION:Mondai | House of AI – Delft\, Molengraaffsingel 29\, Delft\, 2629 JD
ORGANIZER;CN="Mondai | House of AI":MAILTO:mondai@tudelft.nl
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Amsterdam:20250520T153000
DTEND;TZID=Europe/Amsterdam:20250520T180000
DTSTAMP:20260405T184819Z
CREATED:20250424T105837Z
LAST-MODIFIED:20250424T105850Z
UID:10000225-1747755000-1747764000@mondai.tudelftcampus.nl
SUMMARY:Upstream - Women in AI Connect
DESCRIPTION:Registration\nWAI Connect: Women in AI Benelux X AI-Hub Zuid-Holland (side event of Upstream festival)\nAI-Hub Zuid-Holland and Women in AI Benelux are proud to host Women in AI Connect during #Upstream2025. \nJoin us and thousands of changemakers on May 20 in Delft as we create new connections to shape a future-proof economy.\nSign up for our event via the link and don’t forget to claim your Upstream Pass via www.upstreamfestival.com. \nWomen in AI\nWAI Connect events are networking gatherings organized by Women in AI Benelux with regional AI hubs in the Netherlands and other partners in the Benelux. These events feature expert presentations and networking sessions. They showcase women AI leaders sharing specialized knowledge across fields like healthcare and NLP\, connecting theory with practical applications. \nWe aim to highlight women’s contributions while fostering community within the Benelux AI ecosystem. \nProgramme\n15:30 | Walk-in \n16:00 – 16:30 | When AI Meets UX: What It Means for Researchers\, Designers\, and Product Managers\, Jie Li\, Chief Scientific Officer at Human-AI Symbiosis Alliance (H-AISA) \n16:30 – 16:40 | Break \n16:40 – 17:10 | Artificial (im)perfection\, Tessa Bruijne\, Research Scientist Digital Society at TNO Vector \n17:10 – 18:30 | Networking\, drinks and bites \nConfirmed Speakers\nJie Li\nHuman-Computer Interaction Researcher\, User Experience Researcher\, Columnist at ACM Interactions\, Cake Designer\nTessa Bruijne\nResearch Scientist at TNO\n\nAbout Upstream\nUpstream is where innovation meets community. Join us for an electrifying blend of inspiration\, collaboration\, and forward-thinking. Whether you’re a founder\, investor or a corporate changemaker\, this community-driven festival is your gateway to cutting-edge ideas\, meaningful connections\, and endless possibilities.
URL:https://mondai.tudelftcampus.nl/en/event/upstream-women-in-ai-connect/
LOCATION:Molengraaffsingel 29\, Delft\, Molengraaffsingel 29\, Delft\, 2629 JD
ATTACH;FMTTYPE=image/png:https://mondai.tudelftcampus.nl/wp-content/uploads/sites/7/2025/04/Copy-of-Upstream_blob_lilac-1.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Amsterdam:20250612T130000
DTEND;TZID=Europe/Amsterdam:20250612T173000
DTSTAMP:20260405T184819Z
CREATED:20250401T144700Z
LAST-MODIFIED:20250603T123348Z
UID:10000223-1749733200-1749749400@mondai.tudelftcampus.nl
SUMMARY:AI PhD & Postdoc Spring Symposium
DESCRIPTION:Mondai | House of AI is happy to host the AI PhD & Postdoc Spring Symposium\, together with the TU Delft AI Initiative and the AI PhD Committee! (This event will be held in English) \n\nAre you a PhD or postdoc researcher at TU Delft working on AI-related topics? You are cordially invited! \n\nWe are excited to announce the next edition of the Spring Symposium dedicated to the AI PhD & postdoc community\, including the yearly Poster Event! Hosted on June 12th (13:00 – 17:30) at Panorama XL@Mondai | House of AI\, the event includes poster pitches\, an interactive panel discussion on interdisciplinary publication\, keynotes from final-year PhDs\, and the ever-important borrel. It’s an excellent opportunity to present your research and network with fellow AI-focused scholars across campus. \nProgramme (preliminary) 13:00 – 17:30 \n\nKeynotes & talks from final-year PhDs working in/with AI at TU Delft\nPanel discussion on interdisciplinary research & publications\nPoster market: You are invited to contribute to this event with your own poster! See poster requirements below. Final deadline for participating with a poster: June 4th\n\nThe afternoon session will end in a casual manner with drinks\, refreshments and an opportunity for networking. More details to be announced\, so keep an eye on this page for more updates on speakers and panellists. 
\nPosters\nResearchers at all stages of their PhD and working in all different areas of AI are welcome: \n\nMachine learning and foundational AI techniques\nHuman-centered AI systems\nApplication of AI\nFairness\, bias\, legal\, and ethical considerations of AI\nEducation and AI\nDesign with AI\nReflexive and critical research on AI\nAnd more…\n\nRequirements \n\nA0 size – printing available via the AI Initiative\nPDF (portrait mode\, 300 DPI)\nThe reuse of existing A1 or A0 posters is allowed\nSubmissions can be made via the registration form available on this page\n\nRegister & submit your poster by Wednesday\, 4 June. \nAny questions? Please contact the AI PhD Committee at AI-PhD-Committee@tudelft.nl \nSpeakers\nKeynotes & talks from final-year PhDs \nThe Learning Curve: Lessons after 5 Years of Training by Alexander Garzón\nAlexander Garzón is a final-year PhD candidate at the AidroLab\, working at the intersection of AI and water engineering. Over the past few years\, he has been developing machine learning models – mainly graph neural networks – to help simulate drainage infrastructure more efficiently. In this talk\, he will share some of the highs and lows from his PhD journey\, the tools that made a difference\, and what he wishes he knew when he started. \nUnderstanding 3D Urban Environments from Point Cloud Data by Shenglan Du\nShenglan Du is a last-year PhD candidate from the Architecture faculty. She has a background in remote sensing and geo-information science from Wuhan\, China. Her research interests include deep learning for 3D data analysis\, 3D segmentation\, and 3D modelling. \nThe Development of a Reflective & Slow AI Design Practice by Vera van der Burg\nVera van der Burg is a designer and researcher pursuing her PhD at TU Delft’s Designing Intelligence Lab\, where she challenges conventional AI narratives by repositioning these systems as reflective tools rather than mere optimization machines. 
Viewing AI as a material to be disentangled and explored\, she emphasizes annotation and training phases as spaces for designers to examine their own practices and subjectivities. Through publications\, workshops\, talks\, and installations\, she reveals AI’s potential to create productive friction in creative processes\, reimagining human-AI interactions beyond automation. \nDesigning Trustworthy Human-AI Collaboration: The Interdependence and Trust Analysis Framework by Carolina Centeio Jorge\nCarolina Centeio Jorge (pronunciation [kɐɾulˈinɐ] [sẽ tˈɐju] [ʒˈɔɾʒɨ]) is a PhD candidate in the Interactive Intelligence group\, focusing on mental models in the context of human-AI teams. Her goal is to enable interactive and intelligent agents (e.g.\, robots) to understand their human teammates and respond to them transparently and effectively. Specifically\, she has been investigating the concept of artificial trust in decision-making within human-AI teamwork\, particularly in modelling context-dependent human trustworthiness for collaboration scenarios. \n\nPanel on AI-related interdisciplinary research & publishing \n\nPanellists \nThe aim of this panel is to give insights into AI-related interdisciplinary research. It can be challenging to find the right conferences or journals to publish such work\, for example because it is difficult to fit completely in one corner of the research scope. Join us to learn from the experiences of our panellists in finding suitable outlets for publishing AI-related interdisciplinary work\, finding the right narrative styles based on your audience (when facing people from either of two disciplines)\, and related research challenges. 
\nThis panel is moderated by Fatemeh Mostafavi (PhD candidate at the AiDAPT Lab and Faculty of A+BE) and other members of the AI PhD Committee. \n\nJie Yang (Faculty of Electrical Engineering\, Mathematics & Computer Science)\nJie Yang is an assistant professor at TU Delft and the manager of the ICAI Lab GENIUS on Generative AI development and usage in large organizations. Before joining TU Delft\, Jie was a scientist at Amazon (Seattle) and a senior researcher at the University of Fribourg (Switzerland). His research interests span computer science\, AI ethics\, and human-computer interaction\, with a focus on developing human-centered computation for robust AI systems\, especially for natural language processing (NLP) systems. His work has received six “best paper” awards or nominations at premier AI and information systems conferences\, including ACM TheWebConf/WWW (both 2022 and 2023)\, AAAI/ACM AIES (2023)\, AAAI HCOMP (2022)\, ACM SIGIR (2024)\, and ACM HT (2017). His work finds application across a wide range of societal domains via collaboration with medical centers\, libraries\, and industrial companies\, and is funded through national and international projects. Jie serves as an associate editor for the Journal of Human Computation and Frontiers in Artificial Intelligence\, and regularly serves on the senior program committees of TheWebConf/WWW\, AAAI\, and CIKM. \nCristina Zaga (University of Twente)\nCristina Zaga is an Assistant Professor in the Human-Centred Design group and DesignLab at the University of Twente. Cristina’s research aims to develop methodology to foster societal transitions towards justice\, care\, and solidarity. She has developed Responsible Futuring\, a transdisciplinary approach to imagining futures worth wanting and fostering more-than-human communities of belonging. She is currently working on approaches to Design for Resistance to contest and re-imagine the future of work and care with robots and AI. 
She leads the JEDAI network\, a transdisciplinary collective working towards mitigating the dehumanizing effects of AI and promoting social and environmental justice. Her award-winning work has received many accolades\, including the NWO Science Prize for DEI initiatives (2022)\, the Dutch Higher Education Award (2022)\, and the Google Women Techmakers Award and scholarship (2018). She was selected as one of the top 3 Diversity Leaders in AI in the Netherlands. \nMartin Sand (Faculty of Technology\, Policy & Management)\nMartin Sand works as an Assistant Professor of Ethics and Philosophy of Technology at TPM. He is interested in a broad range of topics\, from technological utopianism and justice to the problems of responsibility and moral luck in innovation. His work falls uncomfortably between philosophy and ethical research\, Science and Technology Studies\, Utopian Studies\, and Technology Assessment. \nSeyran Khademi (Faculty of Architecture and the Built Environment)\nSeyran Khademi is an Assistant Professor at the Faculty of Architecture and the Built Environment (ABE) at TU Delft\, where she also serves as co-director of the AiDAPT Lab. Her interdisciplinary research integrates computer vision into architectural design\, focusing on how data and deep learning can be applied to architectural representations – including drawings\, renders\, photographs\, 3D models\, and maps. In 2020\, she was awarded a Research-in-Residence Fellowship at the Royal Library of the Netherlands\, where she developed visual recognition tools for the children’s book collection. Prior to that\, in 2017\, she joined the Computer Vision Lab as a postdoctoral researcher on the ArchiMediaL project\, developing computer vision and deep learning methods for the automatic detection of buildings and architectural elements in archival and street-view imagery. Seyran earned her PhD in statistical signal processing and optimization from TU Delft in 2015. 
She continued her postdoctoral work in intelligent audio and speech algorithms before joining the Computer Vision Lab. She holds an MSc in Signal Processing from Chalmers University of Technology in Gothenburg\, Sweden\, awarded in 2010. \nLuciano Cavalcante Siebert (Faculty of Electrical Engineering\, Mathematics & Computer Science)\nLuciano Cavalcante Siebert is an assistant professor at TU Delft’s Faculty of Electrical Engineering\, Mathematics\, and Computer Science (INSY department/Interactive Intelligence group). His research focuses on Responsible Artificial Intelligence. Luciano serves as the director of technology at the Centre for Meaningful Human Control and co-director of the AiBLE lab. His interdisciplinary research aims to develop practical methods to ensure AI remains under meaningful human control. By integrating ethical and human behavior theories\, Luciano proposes interactive approaches that enable agents to elicit and align with human values and norms\, while empowering humans to maintain control and responsibility. \nArkady Zgonnikov (Faculty of Mechanical Engineering)\nArkady Zgonnikov is an interdisciplinary cognitive scientist specializing in cognitive modeling of human behavior in human-robot interactions\, with a particular focus on automated driving. He earned his MSc in Applied Mathematics from Saint Petersburg State University in 2009 and his PhD in Computer Science and Engineering from The University of Aizu in 2014. His early research concentrated on the mathematical modeling of intermittent motor control in human operators. In 2015\, Arkady joined the Department of Psychology at the University of Galway as an Irish Research Council Postdoctoral Fellow\, where he studied the response dynamics of decision making. In 2017\, he returned to The University of Aizu to explore the interplay between decision making and motor behavior. 
In 2019\, he joined the Department of Cognitive Robotics at Delft University of Technology as a postdoctoral researcher and was promoted to assistant professor in 2020. Arkady’s current research aims to understand human cognition in traffic interactions through both mathematical and data-driven modeling. He is deeply concerned with the ethical and societal impacts of robotics and AI technology\, striving to develop concrete methods that empower humans to meaningfully control artificial intelligent systems.
URL:https://mondai.tudelftcampus.nl/en/event/ai-phd-postdoc-spring-symposium/
LOCATION:Mondai | House of AI – Delft\, Molengraaffsingel 29\, Delft\, 2629 JD
ATTACH;FMTTYPE=image/jpeg:https://mondai.tudelftcampus.nl/wp-content/uploads/sites/7/2025/04/PhDPoster2024_uitgelichteafbeelding.jpg
ORGANIZER;CN="Mondai | House of AI":MAILTO:mondai@tudelft.nl
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Amsterdam:20250911T153000
DTEND;TZID=Europe/Amsterdam:20250911T190000
DTSTAMP:20260405T184819Z
CREATED:20250702T094307Z
LAST-MODIFIED:20250820T130914Z
UID:10000239-1757604600-1757617200@mondai.tudelftcampus.nl
SUMMARY:Grand Opening House for Robotics and AI\, Data & Digitalisation on TU Delft Campus
DESCRIPTION:All partners in the new House for Robotics and AI\, Data & Digitalisation cordially invite you to the official opening! (This event will be held in Dutch) \nOn Thursday 11 September\, we officially open the doors of the new House for Robotics and AI\, Data & Digitalisation on the TU Delft Campus. At Molengraaffsingel\, AI & Robotics come together. In collaboration with TU Delft Fieldlab DoIoT and pioneering start-ups MomoMedical\, DuckDuckGoose\, PercivAI\, Dalco Robotics and the Robot Engineers\, Mondai | House of AI and RoboHouse are building a strong ecosystem of research and innovation\, technology and practice. \nOver a snack and a drink\, the opening ceremony will be joined by Wouter Kolff (King’s Commissioner\, Province of South Holland)\, Tim van der Hagen (Rector Magnificus TU Delft)\, and Maaike Zwart (Alderman Sustainability\, Work & Income & Economy\, Municipality of Delft). \nGet Connected\nQuestions or inquiries about the opening? Don’t hesitate to contact us! \nRoos-Anne Albers\nProject Coordinator Mondai | House of AI\nOsman Akin\nMarketing & Communications Manager RoboHouse
URL:https://mondai.tudelftcampus.nl/en/event/grand-opening-house-for-ai-robotics/
LOCATION:Mondai & RoboHouse – Molengraaffsingel 29\, Molengraaffsingel 29\, Delft\, 2629 JD\, Netherlands
ATTACH;FMTTYPE=image/jpeg:https://mondai.tudelftcampus.nl/wp-content/uploads/sites/7/2025/05/Home_thuisbasis.jpg
ORGANIZER;CN="Mondai | House of AI":MAILTO:mondai@tudelft.nl
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Amsterdam:20250924T123000
DTEND;TZID=Europe/Amsterdam:20250926T143000
DTSTAMP:20260405T184819Z
CREATED:20250514T133800Z
LAST-MODIFIED:20250923T142954Z
UID:10000229-1758717000-1758897000@mondai.tudelftcampus.nl
SUMMARY:Both\, Between\, Beyond: Ethics and Epistemology of AI
DESCRIPTION:Mondai | House of AI is happy to host the upcoming workshop on the Ethics and Epistemology of AI: Both\, Between\, Beyond (De voertaal van dit evenement is Engels – this event will be held in English) \nGeneral Programme\nWednesday\, Sept 24th\n12:30 Welcome\n13:00 – 17:30 Keynote and Talks\n17:30 Networking Drinks \nThursday\, Sept 25th\n10.15 Welcome\n10.30 – 16.00 Keynotes and Talks\n18:00 Dinner \nFriday\, Sept 26th\n09.45 Welcome\n10.00 Keynote and Talks\n14.30 End \nThis workshop focuses on the interplay between epistemological and ethical questions arising from the use of AI systems. So far\, the central epistemologically and ethically relevant aspects of these technologies have largely been analyzed in isolation. For example\, epistemic limitations of these systems\, such as their opacity\, have been at the center of the epistemological debate but have only been marginally addressed in ethical studies. On the other hand\, issues of responsibility\, fairness\, and privacy\, among others\, have received considerable attention in discussions on the ethics of AI. However\, even though some efforts are present in the literature to bring these two dimensions together (Russo et al.\, 2023; Pozzi and Durán\, 2024)\, more needs to be said to tackle relevant and philosophically interesting issues that fall at their intersection. \nAgainst this background\, this workshop aims to bring together scholars working on topics at the intersection of the ethics and epistemology of AI\, focusing on different philosophical traditions and perspectives. \nProgramme (updated!) \nWednesday September 24th\n12.30 – 13.00 Welcome and walk-in\n13.00 – 14.00 Keynote by Claus Beisbart: “AI and Non-Epistemic Values. 
Insights from the Debate on Science and Values”\n14.00 – 14.20 Break\n14.20 – 15.00 Talk by Tuğba Yoldaş: “Virtue Epistemology and Responsible Knowing with Generative AI”\n15.00 – 15.40 Talk by Anna Smajdor and Yael Friedman: “Epistemological and Ethical Dimensions of Synthetic Data for Trustworthy AI”\n15.40 – 16.00 Break\n16.00 – 16.40 Talk by Giacomo Figà Talamanca and Niel Conradie: “The Significance of Vulnerability for being Trustworthy about AI”\nFrom 17.30 Informal gathering and drinks! \nKeynote: AI and Non-Epistemic Values. Insights from the Debate on Science and Values by Claus Beisbart (Bern University\, Switzerland)\nTo what extent do AI systems incorporate non-epistemic values\, such as moral values? And to what degree must they do so? To answer these questions\, I propose examining the debate on values in science. In this debate\, proponents of value-free science have tried to show that science can be kept free of non-epistemic values. Detractors have argued that this is neither possible nor desirable\, mainly drawing on Rudner’s argument. In this talk\, I will first summarize key insights and arguments from the debate on values in science. I’ll then draw consequences for the question of whether AI systems do\, or must\, incorporate tradeoffs between non-epistemic values. A key conclusion is that the answer depends on how AI is used. Some moves made in the debate on values in science can be used to propose uses of AI that minimize the impact of non-epistemic values. However\, it’s another question whether such uses are feasible in practice. \nThursday September 25th\n10.15 – 10.30 Walk-in\n10.30 – 11.10 Talk by Shaoyu Han: “Accidental Hate Speech and the Extended Mind: Rethinking Epistemic Responsibility in AI-Generated Content”\n11.10 – 11.50 Talk by Johan Largo: 
Human-AI interaction\, epistemic credit and moral responsibility\n12.00 – 13.00 Lunch Break\n13.00 – 13.40 Talk by Hatice Tülün: “The Invisible Third: The Ethical and Epistemic Role of Recommender Systems in Mediated Social Interactions”\n13.40 – 14.20 Talk by Karim Barakat: “Algorithmic Censorship and the Public Sphere”\n14.20 – 14.40 Break\n14.40 – 15.40 Keynote by Eva Schmidt: “Engineering a Concept of AI Trustworthiness as Competence and Character”\n15.40 – 16.00 Coffee\nFrom 18.30 Dinner \nKeynote: Engineering a Concept of AI Trustworthiness as Competence and Character by Eva Schmidt (TU Dortmund\, Germany)\nIn their paper\, they critique two prominent views on AI trustworthiness: the skeptical view\, which holds that the concept of trustworthiness cannot be applied to AI systems\, and the reductive view\, which maintains that AI trustworthiness can be reduced to reliability\, competence\, or well-functioning. They contest both views by pointing out that something like good character or goodwill is highly relevant when interacting with AI systems. They then propose that AI systems are trustworthy for a stakeholder just in case they meet two conditions: (1) the system’s goals align with the stakeholder’s goals and (2) the system is competent in pursuing these goals. The concept of AI trustworthiness can\, if conceptualized in this way\, fulfill several theoretical and practical functions that cannot be fulfilled by the concept of reliability alone. This becomes apparent as soon as one takes the issue to be a conceptual engineering problem. \nFriday September 26th\n09.45 – 10.00 Walk-in\n10.00 – 10.40 Talk by Joshua Hatherley: “Federation Opacity and the Promise of Federated Learning in Healthcare”\n10.40 – 11.20 Talk by Omkar Chattar: “What Is Phantom Trust? 
Ontology and Epistemology in AI Reliance”\n11.20 – 11.40 Break\n11.40 – 12.20 Talk by Eric Owens: “On Machines and Medical Schools: Machine Learning\, Epistemic Institutions and the Physician”\n12.20 – 13.30 Lunch Break\n13.30 – 14.30 Keynote by Karin Jongsma and Megan Milota: “Ethics\, Epistemology and Praxis: Making AI’s Impact on the Diagnostic Workflow Visible”\n14.30 Goodbye \nKeynote: Karin Jongsma and Megan Milota\, UMC Utrecht\, The Netherlands\nMachine learning and deep learning have proven to be particularly useful for pattern recognition. This may explain why the majority of current AI applications in medicine are used to aid image-based diagnostics in fields like radiology and pathology. In their interactions with these new technologies\, medical professionals will have to renegotiate their position and role in the digital transition; they will also have to critically consider which tasks they are willing to outsource to AI tools and which (new) competencies and expertise medical professionals need to use these technologies responsibly. \nWe conducted ethnographic work and produced an ethnographic film to study the ways in which AI influences the daily work of pathologists. The film shows the skills and knowledge pathologists and lab technicians require when conducting their work and provides a clearer image of the responsibilities they bear. In this session\, we will screen this film and facilitate an interactive discussion of questions related to the ethics and epistemology of AI in image-driven care. \nWorkshop Organisers\nGiorgia Pozzi\nTU Delft\nChirag Arora\nTU Delft\nJuan M. Durán\nTU Delft\nEmma-Jane Spencer\nErasmus MC & TU Delft
URL:https://mondai.tudelftcampus.nl/en/event/both-between-beyond-ethics-and-epistemology-of-ai/
LOCATION:Mondai | House of AI – Delft\, Molengraaffsingel 29\, Delft\, 2629 JD
ATTACH;FMTTYPE=image/jpeg:https://mondai.tudelftcampus.nl/wp-content/uploads/sites/7/2025/05/EthicsEpistomology_uitgelicht.jpg
ORGANIZER;CN="Mondai | House of AI":MAILTO:mondai@tudelft.nl
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Amsterdam:20251009T120000
DTEND;TZID=Europe/Amsterdam:20251009T140000
DTSTAMP:20260405T184819
CREATED:20250708T081418Z
LAST-MODIFIED:20251001T115253Z
UID:10000241-1760011200-1760018400@mondai.tudelftcampus.nl
SUMMARY:TU Delft AI Lunch: Next Career Steps for AI PhDs & Postdocs
DESCRIPTION:Mondai | House of AI is happy to host the new edition of the TU Delft AI Lunch:\nNext Career Steps for AI PhDs & Postdocs\nCharting Your AI Career Path: Insights from Experts in Innovation\, Research\, and Industry\nAre you an early-career researcher\, PhD\, or postdoc looking to carve out your next steps in AI? Join us for an inspiring lunchtime panel with leading voices from the worlds of startups\, academia\, and industry consulting! Meet entrepreneur and innovation coach Arthur Tolsma\, grant-winning researcher Savvas Zannettou\, academic mentor and counsellor Iliana Yocheva\, and Senior Data Science Consultant Lucas Bresser. \nHear how they navigated their career journeys\, gain practical tips on research grants\, entrepreneurship\, and thriving in academia or industry – and get your questions answered! \nWhat does it take to transition from a PhD to consulting\, an academic track\, industry\, or entrepreneurship in AI-related fields\, and how can I navigate my career? Join us for a lunchtime panel and deep dive with experts sharing their experiences and advice. \nFind out more about the event via the following link \nNote for TU Delft PhDs:\nThis AI Lunch qualifies for discipline-related skills (GSC) credit under the “Form for earning GSC for TU Delft AI(-related) seminars” scheme. Please check with your Faculty Graduate School & supervisors. \nDon’t miss this chance to connect\, learn\, and be inspired to take the next step in your AI career. Save the date! \nThis event includes a free lunch\, for which registration is required (help us reduce food waste!) \n(This event will be held in English) \nProgramme\n12.00 – 12.30 | Lunch & networking\n12.30 – 13.30 | Panel I. Careers\, options\, and outlooks: moderated by Marie-Therese Sekwenz\, featuring Arthur Tolsma\, Iliana Yocheva\, and Lucas Bresser\n13.30 – 14.00 | Panel II. Concrete next steps in academic career paths: with the Panel I speakers 
and Savvas Zannettou \nThe Delft AI (Lab) Lunch series\nThis series is part of the monthly Delft AI (Lab) Lunches\, a recurring meet-up hosted by the TU Delft AI Labs & Talent community at Mondai | House of AI.\nEvery month\, we host a panel to discuss challenges and developments at the intersection of AI and a specific field. During these events\, you can participate\, learn\, make connections\, and inspire and be inspired by the Delft AI Community. We invite all interested staff and students from TU Delft to join these sessions. Please contact community manager Charlotte Boelens for more information about this series or the TU Delft AI Labs & Talent Programme.\n \nNote for TU Delft PhDs\nThe TU Delft AI Lunch series is eligible for earning discipline-related skills GSC with the ‘Form for earning GSC for TU Delft AI(-related) seminars’. Check with your Faculty Graduate School (FGS) whether it offers this option for earning Discipline Related Skills GSC\, and with your supervisors whether they accept our seminars on your Doctoral Education (DE) list. If you already have a form\, don’t forget to bring it with you.
URL:https://mondai.tudelftcampus.nl/en/event/tu-delft-ai-lunch-next-career-steps-for-ai-phds-postdocs/
LOCATION:Mondai | House of AI – Delft\, Molengraaffsingel 29\, Delft\, 2629 JD
CATEGORIES:AI Lab Lunch
ATTACH;FMTTYPE=image/jpeg:https://mondai.tudelftcampus.nl/wp-content/uploads/sites/7/2025/06/TU250612_4059_0085-Verbeterd-NR_lowres.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;VALUE=DATE:20251112
DTEND;VALUE=DATE:20251115
DTSTAMP:20260405T184819
CREATED:20250708T070908Z
LAST-MODIFIED:20251107T095338Z
UID:10000240-1762905600-1763164799@mondai.tudelftcampus.nl
SUMMARY:Global AI Policy Research Summit 2025 Delft
DESCRIPTION:Global AI Policy Research Summit 2025 Delft\nFraming the Future of AI Governance: Leading with Evidence-based Policy\nOn 12 – 14 November Mondai | House of AI is happy to host the next Global AI Policy Summit in Delft! \n(this event will be held in English) \nPredictions about the promises and perils of artificial intelligence (AI) are increasingly prevalent: from the future of work and education to breakthroughs in healthcare and public services\, and the reconfiguration of warfare and national security. Such narratives profoundly influence how we imagine\, initiate\, and interrogate the development and deployment of AI innovations. Crucially\, how we frame these issues shapes the decisions we make about the future of AI in society. Policy research allows us to highlight these underlying narratives and ask how they frame the economic\, social\, environmental and human rights impacts of AI systems. \nThe Global AI Policy Research Summit 2025 convenes a growing international network of research institutes and policymakers. Summit participants work to uncover the mechanisms of – and potential pathways for – effectively framing the future of responsible AI\, drawing on evidence-based policy and good governance practices. Together they analyze how current dominant narratives frame the global AI policy landscape\, and jointly identify effective strategies and collaborations for the future of responsible AI governance. \nBuilding on the AI Policy Research Roadmap\, which was developed through collaborative discussions at the inaugural AI Policy Summit 2024 in Stockholm\, summit participants can further advance a shared vision and concrete actions for the future of responsible AI governance through collaborative research and practice! 
\nLearn More About The Global AI Policy Research Network\nProgramme\nWednesday 12 November – Welcome Drinks & ‘Indigenous Perspectives on AI’ @Vakwerkhuis\nBefore the summit starts\, we would like to invite participants to join us for welcome drinks and a workshop on ‘Indigenous Perspectives on AI’.\n \nProgramme\n18.00 – 20.00 Drinks and Workshop hosted by Anna Melnyk (Delft Design for Values Institute) & Lynnsey Chartrand (Head of Indigenous Initiatives at Mila\, joining online).\nThis collaborative workshop invites participants to critically engage with Indigenous perspectives on artificial intelligence. Through reflective discussions\, participants explore how Indigenous knowledges\, governance practices\, and relational worldviews can inform more responsible\, equitable\, and sustainable decision-making about AI futures. The session aims to expand awareness of diverse epistemologies and to foster dialogue on how AI systems can better serve communities\, lands\, and ecosystems. \nYou can read more about their work here: Design for Values and Critical Raw Materials: Decolonial Justice Perspective – Delft Design for Values Institute \nThursday 13 November – Day 1\nGeneral Programme\n08.30 – 09.00 Walk-in and Welcome Coffee\n09.00 – 13.00 Plenary Morning Programme\n13.00 – 14.00 Lunch\n14.00 – 17.30 Plenary Afternoon Programme\n17.30 Dinner and Drinks @Firma van Buiten \nFriday 14 November – Day 2\nGeneral Programme\n08.30 – 09.00 Walk-in and Welcome Coffee\n09.00 – 12.30 Plenary and Break-out Morning Programme\n12.30 – 13.30 Lunch\n13.30 – 15.30 Plenary Afternoon Programme\n15.30 Close \nNovember 13 Detailed Programme\nSpeakers\nNovember 14 Detailed Programme\nDo you want to join this inspiring event? 
Please contact the organisers!\nTaylor Stone \nt.w.stone-1@tudelft.nl \nHelma Dokkum \nw.m.dokkum@tudelft.nl \nFull Programme November 13 – Day 1: Reframing AI Narratives\n08.30 – 09.00 Welcome and Coffee \n09.00 - 09.15: Summit Opening\nSummit Opening by Virginia Dignum (Umeå University)\, Geert-Jan Houben (TU Delft) and Isadora Hellegren Létourneau (Mila) \n09.15 - 11.00: Session 1 - A Year with the Roadmap for AI Policy Research & Network Round table\nNetwork introductions and reflections led by Isadora Hellegren Létourneau (Mila) \n11.00 – 11.30 Break \n11:30 - 13:00: Session 2 - Rethinking AI Safety and Sovereignty – Regional Perspectives\nWhat can be learned from assessing current governance approaches to AI sovereignty and safety in the EU\, Africa\, Asia\, and Canada? \nPanel moderated by Frank Dignum (Umeå University). \nPanellists (confirmed so far):\n> Ayantola Alayande (Global Center on AI Governance)\, who will be joining online;\n> Carolina Aguerre (Universidad Católica del Uruguay)\, who will be joining online;\n> Lyantoniette Chua (AI Safety Asia)\, who will be joining online. \n13.00 – 14.00 Lunch \n14:30 - 15:30: Session 3 - Building Alternative Narratives\nCan novel insights from foresight methods and systems perspectives offer alternative narratives for the development of effective AI policies? \nPanel moderated by Ginevra Castellano (Uppsala University) \nPanellists (confirmed so far):\n> Roel Dobbe (TU Delft);\n> The Anh Han (Teesside University);\n> Sam Bogerd (Centre for Future Generations). \n15.30 – 16.00 Break \n16.00 - 17.30: Session 4 - Governance for Innovation\nHow could participatory and collaborative approaches foster a narrative that aligns governance and regulation with innovation? \nPanel moderated by Mirko Schaefer (Utrecht University). \nPanellists (confirmed so far):\n> Kerstin Bach (Norwegian University of Science and Technology);\n> Ley Muller (Women in AI Governance);\n> David Lewis (Trinity College Dublin). 
\n17.30 - 18.00: Close of Day 1 - Reframing AI Narratives\nNetworking opportunity \n18.00 – 20.00 Dinner and Drinks at Firma van Buiten \nFull Programme November 14 – Day 2: Moving Beyond High-Level Principles\n08.30 – 09.00 Welcome and Coffee \n09.00 - 09.30: Opening of Day 2\nReflections on Day 1 \n09.30 - 12:30: Deep Dive Workshop - AI in Warfare: Actions\, Policies and Practices\nDeep Dive Workshop led by Nitin Sawhney (University of the Arts Research Institute Helsinki) and Petter Ericson (Umeå University).\nThis workshop is held in the Panorama @Mondai\, ground floor. \nFor more information\, please refer to the designated event page. \n09.30 - 12:30: Deep Dive Workshop - AI in Health Care\nDeep Dive Workshop led by Jason Tucker (Umeå University)\, Fabian Lorig (Malmö University) and Stefan Buijsman (TU Delft).\nThis workshop is held in the Innovate @Mondai\, 1st floor. \nFor more information\, please refer to the designated event page. \n09.30 - 12:30: Deep Dive Workshop - Contextualising AI Principles: Universal Guidelines or Domain-Specific Policy?\nDeep Dive Workshop moderated by Tina Comes (TU Delft)\, with expert speakers Thomas Kox (Weizenbaum Institute)\, Arkady Zgonnikov (TU Delft)\, and Duuk Baten (SURF). \nThis workshop is held in the Connect @Mondai\, 1st floor. \nFor more information\, please refer to the designated event page. \n12.30 – 13.30 Lunch \n13.30 - 14.30: Session 6 - Building Bridges from Research to Policy\nReflections on the deep-dive workshops; proposals to build stronger collaborations with policy-makers moving forward \nModerated by Virginia Dignum (Umeå University) and Tina Comes (TU Delft) \n14.30 - 15.30: Session 7 - Next Steps for the Global AI Policy Research Network\nBrief reflections from deep-dive leads followed by plenary discussion led by Isadora Hellegren Létourneau (Mila) \n15.30 – 16.00 Close of Summit \nSpeakers\nVirginia Dignum. 
Professor in Responsible AI and Director of the AI Policy Lab\, Umeå University\nVirginia Dignum is Professor in Responsible Artificial Intelligence and Director of the AI Policy Lab at Umeå University\, a member of the UN High-Level Advisory Body on AI\, and senior advisor to the Wallenberg Foundations. \nSessions\n> Summit opening & network round-table (November 13\, 2025 at 09.00h)\n> Session 1: Rethinking AI sovereignty (November 13\, 2025 at 10.00h)\n> Session 5: Deep-dive workshop – moving beyond high-level principles (November 14\, 2025 at 09.30h)\n> Session 6: Building bridges from research to policy (November 14\, 2025 at 13.30h) \nGeert-Jan Houben. Pro Vice Rector Magnificus Artificial Intelligence\, Data and Digitalisation\, TU Delft\n \nGeert-Jan Houben is Pro Vice Rector Magnificus Artificial Intelligence\, Data and Digitalisation (PVR AI) at Delft University of Technology (TU Delft). He leads the TU Delft activities in the field of AI\, data and digitalisation\, covering education\, research and valorisation\, and the relevant support. This includes the establishment of the TU Delft AI Labs to promote cross-fertilisation between AI experts and scientists who use AI in their research\, as well as the representation of TU Delft in regional\, national and international co-operation on this theme. He is also full professor of Web Information Systems (WIS) in the Software Technology (ST) department at TU Delft\, where he leads a research group on Web Information Systems and is involved in Computer Science education in Delft\, with a focus on data-based information systems on the Web. \nSessions\n> Summit opening & network round-table (November 13\, 2025 at 09.00h) \nIsadora Hellegren Létourneau. 
Senior Project Manager AI Policy Research\, Public Policy and Inclusion\, Mila\n\nIsadora Hellegren leads multistakeholder and interdisciplinary AI policy research at Mila – Quebec Artificial Intelligence Institute\, such as the Mila AI Policy Fellowship. She works to bridge the gap between AI research and public policy to inform better AI policy – for the benefit of all. Before joining Mila\, Isadora was a Senior Policy Specialist at the Swedish International Development Cooperation Agency (Sida)\, where she advised on human rights and ICTs\, democratic governance\, and gender equality. Her academic and professional background\, focusing on internet governance and policy developments in relation to emerging technologies and social movements\, continues to inform her dedication to advancing meaningful participation in AI and beyond. She chairs the newly founded Global AI Policy Research Network (GlobAIpol)\, is a former member of the Steering Committee of the Global Internet Governance Academic Network (GIGANET)\, and has published articles on related topics in the Oxford University Press Research Encyclopedia of Communication and in Internet Histories: Digital Technology\, Culture and Society. \nSessions\n> Opening (November 13\, 2025 at 09.00h)\n> Session 1: A year with the Roadmap for AI Policy Research & Network round-table (November 13\, 2025 at 09.15h)\n> Session 7: Next steps for the Global AI Policy Research Network (November 14\, 2025 at 14:30h)\n> Closing Summit (November 14\, 2025 at 15:30h) \nFrank Dignum. Professor at Department of Computing Science\, Umeå University.\nFrank Dignum is Professor at the Umeå University Department of Computing Science\, leading a research group in the field of socially conscious AI. The group develops models that can provide insights into how society can respond to political changes or natural disasters. \nSessions\n> Session 2: Rethinking AI safety and sovereignty – regional perspectives (November 13\, 2025 at 11:30h) \nAyantola Alayande. 
Researcher at the Global Center on AI Governance (Online)\nAyantola Alayande is a Researcher at the Global Center on AI Governance\, where he works on issues of international cooperation in AI policymaking and governance\, AI development in low- and middle-income countries (LMICs)\, compute governance\, AI security\, and state-led AI governance in Africa. His broader interests span the geopolitics/geoeconomics of emerging technologies\, global governance\, technology and industrial policy\, Africa in major-power competition\, and digital methods/media. His writings have appeared in several notable research outlets\, including Nature\, the Atlantic Council\, The Productivity Institute\, The Productivity Monitor\, the Bennett Institute for Public Policy\, and Research ICT Africa. Ayantola holds graduate degrees in public policy and international development\, respectively\, from the KDI School of Public Policy and the University of Edinburgh\, and is currently a PhD candidate in AI Geopolitics and Governance at the University of Oxford’s Department of International Development (ODID)\, where he is researching approaches to sovereignty in the AI value chain of emerging power nations. He has previously worked in research and consulting roles at the Bennett Institute for Public Policy at the University of Cambridge\, Kantar UK\, Research ICT Africa (RIA)\, and Dataphyte. \nSessions\n> Session 2: Rethinking AI safety and sovereignty – regional perspectives (November 13\, 2025 at 11:30h) \nCarolina Aguerre. Associate Professor\, Universidad Católica del Uruguay. (Online)\nCarolina Aguerre is Associate Professor at the Universidad Católica del Uruguay and honorary co-director at CETYS\, at Universidad de San Andrés (Argentina). 
Her research interests include theories and practices around the governance of communications technologies and infrastructures\, particularly the Internet and artificial intelligence\, and their intersection with political economy and north-south perspectives. In 2020 she was part of the UNESCO Ad Hoc Expert Working Group on the Recommendations on the Ethics of AI. She has been part of the IGF Multistakeholder Advisory Group (2012-2014 and in 2025). She was a resident fellow at CGR21 (2020-2021) at the University of Duisburg-Essen (Germany). \nSessions\n> Session 2: Rethinking AI safety and sovereignty – regional perspectives (November 13\, 2025 at 11:30h) \nEdward Tsoi. Co-Founder AI Safety Asia. (Online)\nEdward Tsoi is the founder of Connecting Myanmar and an experienced leader in technology startups and non-profits. He led the APAC business of a late-stage startup with over $100M raised. He is also a former board member of Amnesty International Hong Kong and advisor to multiple corporate-NGO initiatives. Now he is one of the co-founders of AI Safety Asia. \nSessions\n> Session 2: Rethinking AI safety and sovereignty – regional perspectives (November 13\, 2025 at 11:30h) \nGinevra Castellano. Professor at Department of Information Technology\, Uppsala University\nGinevra Castellano is a Full Professor in Intelligent Interactive Systems at the Department of Information Technology of Uppsala University\, Sweden\, where she is the Founder and Director of the Uppsala Social Robotics Lab. Her research is in the area of social robotics and human-robot interaction\, addressing questions on how we can build human-robot interactions that are ethical and trustworthy. Topics include robot ethics\, robot autonomy and human oversight\, gender fairness\, robot transparency and trust\, and human-robot relationship formation\, approached both from the perspective of developing computational skills for robotic systems and through their evaluation with human users to study acceptance and social consequences. 
She has been the Principal Investigator of several national and EU-funded projects on ethical and trustworthy human-robot interaction\, in application areas spanning education\, healthcare\, and transportation systems. She is currently the coordinator of the CHANSE-NORFACE MICRO (Measuring children’s wellbeing and mental health with social robots) project (2025-2028)\, and of the WASP-HS Research Group on Child Development in the Age of AI and Social Robots (2025-2030\, funded by the WASP-HS Wallenberg AI\, Autonomous Systems and Software Program – Humanity and Society). Castellano was an invited speaker at the UN AI for Good Global Summit 2024 and a keynote speaker at the World Summit AI 2024. She was recently awarded the Thuréus prize 2025 from the Royal Society of Sciences in Uppsala. \nSessions\n> Session 3: Building alternative narratives (November 13\, 2025 at 14:00h) \nRoel Dobbe. Assistant Professor Sociotechnical AI Systems\, TU Delft\nRoel Dobbe is an Assistant Professor in Technology\, Policy & Management at Delft University of Technology\, focusing on Sociotechnical AI Systems. He received an MSc in Systems & Control from Delft (2010) and a PhD in Electrical Engineering and Computer Sciences from UC Berkeley (2018)\, where he received the Demetri Angelakos Memorial Achievement Award. He was an inaugural postdoc at the AI Now Institute at New York University. His research addresses the integration and implications of algorithmic technologies in societal infrastructure and democratic institutions\, focusing on issues related to safety\, sustainability and justice. His projects are situated in various domains\, including energy systems\, public administration\, and healthcare. Roel’s system-theoretic lens enables addressing the sociotechnical and political nature of algorithmic and artificial intelligence systems across analysis\, engineering design and governance\, with an aim to empower domain experts and affected communities. 
His results have informed various policy initiatives\, including environmental assessments in the European AI Act as well as the development of the algorithm watchdog in The Netherlands. \nSessions\n> Session 3: Building alternative narratives (November 13\, 2025 at 14:00h) \nThe Anh Han. Professor in Computer Science\, Teesside University\nThe Anh Han is a Full Professor of Computer Science and Director of the Center for Digital Innovation at the School of Computing\, Engineering and Digital Technologies\, Teesside University. His current research spans several topics in AI and behavioural research\, including the dynamics of human cooperation\, evolutionary game theory\, agent-based simulations\, behavioural economics\, and AI governance modelling. \nSessions\n> Session 3: Building alternative narratives (November 13\, 2025 at 14:00h) \nSam Bogerd. Technology Foresight Analyst\, Centre for Future Generations\nSam Bogerd bridges foresight and policy\, tackling the governance of advanced technologies. With a focus on innovation and long-term impact\, he turns complex challenges into practical\, future-ready solutions. \nSessions\n> Session 3: Building alternative narratives (November 13\, 2025 at 14:00h) \nMirko Schaefer. Associate Professor Media and Performance Studies\, Utrecht University\nMirko Schaefer is co-founder and the Sciences Lead of the Data School. He is a member of the steering committee of the research area Governing the Digital Society and a member of the research area Applied Data Science. He is a Visiting Professor at the Helsinki Institute for Social Sciences & Humanities of the University of Helsinki. Mirko serves on the Advisory Committee Analytics at the Ministry of Finance in the Netherlands. His research interest revolves around the socio-political impact of media technology and the responsible use of AI and data practices. 
Together with the Data School\, he investigates AI and data practices in government organisations and develops processes and practices for responsible use and implementation\, and public accountability. \nHis book Bastard Culture! How User Participation Transforms Cultural Production (Amsterdam University Press 2011) was listed as a best-seller in the computer science section by The Library Journal. Together with Karin van Es\, he edited the volume The Datafied Society. Studying Culture through Data (Amsterdam University Press 2017). His most recent publication (together with Karin van Es & Tracey Lauriault) is the edited volume Collaborative Research in the Datafied Society: Methods and Practices for Investigation and Intervention (Amsterdam University Press 2024). From 2012 to 2013 he was a research fellow at the University of Applied Arts in Vienna. In 2014 he was appointed post-doctoral research fellow at the Centre for Humanities at Utrecht University. In 2016 Mirko was a Mercator Research Fellow at the NRW School of Governance at the University of Duisburg-Essen. \nSessions\n> Session 4: Governance for innovation (November 13\, 2025 at 16:00h) \nKerstin Bach. Professor of Artificial Intelligence\, Norwegian University of Science and Technology\nKerstin Bach is a professor of Artificial Intelligence at the Norwegian University of Science and Technology (NTNU)\, Director of the Norwegian Open AI Lab\, and Research Director at the Norwegian Research Center for AI Innovation (NorwAI). She holds a PhD from the University of Hildesheim and worked as a researcher at the German Research Center for AI (DFKI)\, where she developed decision support systems for various industries. After completing her Ph.D.\, Kerstin joined Verdande Technology\, a Trondheim-based AI startup developing real-time case-based reasoning (CBR) technology for the oil and gas\, financial services\, and healthcare industries. 
At Verdande\, she was both a research scientist and a software engineer\, working closely with partners exploring CBR in their technology stack. In 2015\, Kerstin joined NTNU’s computer science department. \nIn recent years\, Kerstin’s research has been primarily focused on crafting AI prototypes tailored for healthcare\, intelligent sensing\, and knowledge management. She managed an EU H2020 research grant\, selfBACK\, whose results are currently being developed as a product for patients with lower back pain. Presently\, Kerstin is steering multiple interdisciplinary projects funded by the Norwegian Research Council and NTNU dedicated to AI-driven and patient-centered healthcare services. \nBeyond her research contributions\, she actively organizes workshops\, conferences\, and symposia that discuss various aspects of AI research. Throughout her career\, Kerstin has undertaken responsibilities such as being the driving force behind myCBR\, an open-source tool adopted in research and industry projects across Europe\, and she is a board member of the Norwegian AI Society and the German AI Society. Her commitment to advancing AI extends to NTNU\, where she promotes AI research among students and strongly emphasizes encouraging females to pursue technology careers. As an educator\, she imparts her knowledge through AI and Machine Learning courses\, guiding and involving master’s and Ph.D. students. Her role as NorwAI research director finds her at the forefront of collaborative projects between industry and academia. Within this context\, she established FEMAIS\, a mentorship program tailored for aspiring female AI students\, effectively bridging the gap between their final year of studies and the launch of their professional journeys. Kerstin’s commitment to AI outreach also extends to the Norwegian Open AI Lab\, where she organizes events\, gives talks\, and participates in panels and seminars to discuss AI research among professionals and the broader public. 
\nSessions\n> Session 4: Governance for innovation (November 13\, 2025 at 16:00h) \nLey Muller. Lead of Nordic Women in AI Governance and Research Lead for Women in AI Norway\nLey Muller is a transformational leader with 10 years’ experience in evidence-based policy\, public health\, and AI. She currently leads Nordic Women in AI Governance and is the research lead for Women in AI Norway\, and is keenly aware of how insufficient a gender-only lens is if AI governance is to properly address marginalized perspectives. She has experience in consulting\, government\, and academia from Norway\, the US\, Germany\, and the WHO – and is very recently (and somewhat proudly) ex-tech. \nSessions\n> Session 4: Governance for innovation (November 13\, 2025 at 16:00h) \nDavid Lewis. Associate Professor at the School of Computer Science and Statistics\, Trinity College Dublin\nDave Lewis is an Associate Professor at the School of Computer Science and Statistics at Trinity College Dublin\, where he served as the head of its Artificial Intelligence Discipline. He is the Acting Director of Ireland’s ADAPT Centre for human-centric AI and digital content technology research. He investigates open semantic models for trustworthy AI and data governance and contributes to international standards in digital content processing and trustworthy AI. His research focuses on the use of open semantic models to manage the Data Protection and Data Ethics issues associated with digital content processing. He has led the development of international standards in AI-based linguistic processing of digital content at the W3C and OASIS and is currently active in international standardisation of Trustworthy AI at ISO/IEC JTC1/SC42 and CEN/CENELEC JTC21. \nSessions\n> Session 4: Governance for innovation (November 13\, 2025 at 16:00h) \nTina Comes. 
Associate Professor in Resilience Engineering\, TU Delft\n \nTina Comes is the Scientific Director of the DLR Institute for Terrestrial Infrastructure Protection in Germany\, and Full Professor in Decision Theory & ICT for Resilience at TU Delft in the Netherlands. Since her PhD\, she has been determined to better understand decision-making of individuals and groups in the context of climate risk and crises. Her work aims to use AI and information technology to support the decisions of individuals and groups in complex\, uncertain environments. She integrates behavioural insights with advanced computational approaches—including distributed AI\, multi-agent systems\, optimisation models\, and digital twins. She serves on the Editorial Board of Nature Scientific Reports. Her research has received international recognition through awards and fellowships\, and she is a member of Academia Europaea and the Norwegian Academy of Technological Sciences. Internationally\, under the EU’s Scientific Advice Mechanism\, she chaired the Working Group on Strategic Crisis Management in Europe and is now chairing the Working Group for AI in Crisis Management. \nSessions\n> Deep Dive Workshop – Contextualising AI Principles: Universal Guidelines or Domain-Specific Policy? (November 14\, 2025 at 9:30h)\n> Session 6: Building bridges from research to policy (November 14\, 2025 at 13:30h) \nNitin Sawhney. Visiting Researcher\, University of the Arts Research Institute Helsinki\nNitin Sawhney is a visiting researcher at the University of the Arts Research Institute. He has a background in computational media\, human-centered design\, and documentary film. He served as a Professor of Practice in the Department of Computer Science at Aalto University\, leading the Critical AI and Crisis Interrogatives (CRAI-CIS) research group. 
He completed his doctoral dissertation at the MIT Media Lab\, and taught in the Media Studies program at The New School and the MIT Program in Art\, Culture and Technology (ACT). Working at the intersection of Human-Computer Interaction (HCI)\, responsible AI\, and participatory design research\, he examines the critical role of technology\, civic agency\, and social justice in society and crisis contexts. He has co-curated exhibitions and co-directed documentaries in Gaza and Guatemala\, focusing on creative resistance and historical memory in conditions of war and conflict. In October 2024 he co-organized the Contestations.AI Transdisciplinary Symposium on AI\, Human Rights and Warfare in Helsinki. He is currently developing a transdisciplinary platform to foster critical dialogues and co-existence through science\, technology\, and the arts\, and conceptualizing a new documentary film project critically examining the role of AI in warfare. \nSessions\n> Deep Dive Workshop – AI in Warfare: Actions\, Policies and Practices (November 14\, 2025 at 9:30h in Panorama @Mondai) \nPetter Ericson. Staff Scientist\, Umeå University\nPetter Ericson is a staff scientist in the research group for Responsible AI\, working on graph problems and formal descriptions of structured data\, with a strong interest in ethics\, music and society. \nSessions\n> Deep Dive Workshop – AI in Warfare: Actions\, Policies and Practices (November 14\, 2025 at 9:30h in Panorama @Mondai) \nJason Tucker. Researcher\, Institute for Futures Studies\, Sweden. Adjunct Associate Professor AI Policy Lab\, Umeå University\nJason Tucker is a researcher at the Institute for Futures Studies and an Adjunct Associate Professor at the AI Policy Lab\, the Department of Computing Science\, Umeå University. He is also a Visiting Research Fellow at AI & Society\, the Department of Technology and Society\, LTH\, Lund University. 
His research interests include AI and health\, the global political economy of AI\, public policy\, citizenship\, human rights and global governance. He currently leads the research project The Politics of AI and Health: From Snake Oil to Social Good\, funded by WASP-HS. Within this project\, he is particularly interested in developing interdisciplinary approaches to better support policy making on the future role of AI in healthcare. \nPreviously\, he has worked on law and policy reform\, citizenship and public sector digitalisation for the United Nations\, civil society\, industry and academia. \nSessions\n> Deep Dive Workshop – AI in Health Care (November 14\, 2025 at 9:30h) \nFabian Lorig. Associate Senior Lecturer and Associate Professor Computer Science\, Malmö University\n\nFabian Lorig is an Associate Senior Lecturer (Biträdande Lektor) and Associate Professor (Docent) in Computer Science\, with a focus on agent-based modelling\, the use of AI in socio-technical systems\, and the development of simulation-based decision and policy support. His research integrates computational methods with real-world applications in public health\, mobility\, and policy\, with a focus on understanding and addressing the societal implications of AI and on supporting the development of responsible and impactful technologies. He has led and contributed to interdisciplinary research projects designing computational models that enable stakeholders and policy actors to better understand the complex dynamics of social systems and to anticipate the potential consequences of policy interventions and AI technologies. Through participatory approaches and social simulations\, his research facilitates evidence-based decision-making in domains where digital technologies shape societal outcomes. \nSessions\n> Deep Dive Workshop – AI in Health Care (November 14\, 2025 at 9:30h) \nStefan Buijsman. 
Associate Professor Responsible AI\, TU Delft\nStefan Buijsman studied computer science and philosophy in Leiden and completed his PhD on the philosophy of mathematics at Stockholm University when he was 20. He continued his research at the intersection of philosophy of mathematics and cognitive science at Stockholm University and the Institute for Futures Studies on a research grant from Vetenskapsrådet. \nAside from research\, he engages in popular science writing\, with three books to his name. The most recent is on AI and its links to philosophy\, under the Dutch title ‘Alsmaar Intelligenter’. Since then\, his research focus has shifted to the philosophy of AI\, on which he works at TU Delft. \nHe is co-founder of the Delft Digital Ethics Centre\, which focusses on the translation of ethical values into design requirements that can be used by engineers\, decision and policy makers\, and regulators. There he works on a broad range of ethical challenges in projects with external stakeholders. His own research focusses mostly on the explainability of AI algorithms. How can we make these algorithms more transparent? What information do we need to use them responsibly in their many applications? He uses philosophical accounts from epistemology and philosophy of science to formulate design requirements for AI tools on these knowledge-related aspects. In collaboration with computer scientists\, he also aims to develop new tools to improve the explainability of algorithms. \nSessions\n> Deep Dive Workshop – AI in Health Care (November 14\, 2025 at 9:30h) \nAbout the Network\nThe Global AI Policy Research Network (GlobAIpol) organizes this annual event. GlobAIpol is a community of practice that serves as a platform for policy researchers and professionals to advance responsible AI policy research\, evidence-based insights and actionable strategies for stakeholders across academia\, industry\, the public sector\, and civil society. 
AI policy research has emerged as an essential guide to navigating the complex interplay between technological innovation and societal impact. It ensures that we guide advancements in AI in alignment with ethical\, legal\, and social priorities. \nThe network was established following the inaugural AI Policy Research Summit in Stockholm in November 2024. The inaugural summit was a joint initiative led and organized by the AI Policy Lab\, Umeå\, Sweden\, and Mila – Quebec AI Institute\, Montreal\, Canada. The summit brought together a community eager to address the need for better synergies between research\, policy and impact to realize responsible\, equitable and sustainable AI for the benefit of all. \nA core objective of the GlobAIpol network is to inform global approaches to AI governance by sharing best practices and fostering collaboration on developing AI policy. This includes advancing responsible AI policy research that meets the growing need for governance grounded in ethical\, transparent\, and evidence-based practices to shape inclusive and trustworthy policies. The network takes an interdisciplinary and multistakeholder approach to holistically address the complex challenges and opportunities that arise with these developments. Through these objectives\, the network fosters effective knowledge exchange to bridge the gap between AI policy research and practice. \nRead more about the network’s commitments in the Roadmap for AI Policy Research.
URL:https://mondai.tudelftcampus.nl/en/event/global-ai-policy-research-summit-2025/
LOCATION:Mondai | House of AI – Delft\, Molengraaffsingel 29\, Delft\, 2629 JD
ATTACH;FMTTYPE=image/jpeg:https://mondai.tudelftcampus.nl/wp-content/uploads/sites/7/2025/06/marietjkeynote_sized.jpg
ORGANIZER;CN="Mondai | House of AI":MAILTO:mondai@tudelft.nl
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Amsterdam:20251114T093000
DTEND;TZID=Europe/Amsterdam:20251114T123000
DTSTAMP:20260405T184819
CREATED:20251020T133407Z
LAST-MODIFIED:20251030T095418Z
UID:10000247-1763112600-1763123400@mondai.tudelftcampus.nl
SUMMARY:Deep Dive Workshop - AI in Health Care
DESCRIPTION:Global AI Policy Summit Deep Dive Workshop\nAI in Health Care\nDespite progress in AI governance\, much of the current regulatory framework remains grounded in high-level principles and guidelines which\, while valuable\, often lack the specificity required for practical implementation – particularly at the intersection with the highly regulated and operationally complex domain of healthcare. This is especially urgent in “high-risk” areas of healthcare\, where decisions are irreversible\, outcomes are critical\, and resources are constrained. Such applications demand elevated levels of transparency\, accountability\, and ethical oversight. This workshop draws on examples from clinical practice\, public health\, health policy\, and global health to foster open discussion around the most pressing priority areas at the intersection of AI and healthcare. \nWorkshop Programme and Speakers\nThe workshop starts off with short talks by five expert presenters\, followed by an interactive round table discussion. \n> Fabian Lorig (Associate Professor Computer Science\, Malmö University)\n> Jason Tucker (Researcher at the Institute for Futures Studies and Adjunct Associate Professor at the AI Policy Lab\, Umeå University)\n> Stefan Buijsman (Associate Professor Responsible AI\, TU Delft)\n> Siri Helle (Psychologist and Award-winning Author of The Emotion Trap)\n> Marie-Therese Sekwenz (PhD candidate at Faculty of Technology\, Policy and Management\, TU Delft and Deputy Director of the AI Futures Lab on Rights and Justice)
URL:https://mondai.tudelftcampus.nl/en/event/ddw-ai-in-health-care/
LOCATION:Innovate @Mondai | House of AI – Delft\, Molengraaffsingel 29\, Delft\, 2629 JD\, Netherlands
ATTACH;FMTTYPE=image/jpeg:https://mondai.tudelftcampus.nl/wp-content/uploads/sites/7/2025/10/AIinHealthCare_OrganDonation_StockImage.jpg
ORGANIZER;CN="Mondai | House of AI":MAILTO:mondai@tudelft.nl
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Amsterdam:20251114T093000
DTEND;TZID=Europe/Amsterdam:20251114T123000
DTSTAMP:20260405T184819
CREATED:20251020T134854Z
LAST-MODIFIED:20251113T131729Z
UID:10000248-1763112600-1763123400@mondai.tudelftcampus.nl
SUMMARY:Deep Dive Workshop - AI in Warfare: Actions\, Policies and Practices
DESCRIPTION:Global AI Policy Summit Deep Dive Workshop\nAI in Warfare: Actions\, Policies and Practices\nArtificial Intelligence (AI)\, Big Data analytics\, and Automated Decision Making (ADM) are increasingly being used for surveillance\, targeting\, and autonomous or semi-autonomous drone warfare\, in addition to proliferating misinformation on social media during wars and conflicts. Conversely\, related technologies are also leveraged for the investigation of human rights violations\, e.g. by members of Forensic Architecture\, Interpret\, Airwars and Bellingcat. Meanwhile\, campaigns such as Stop Killer Robots are working through the UN and other forums towards an international ban on\, at minimum\, Lethal Autonomous Weapon Systems (LAWS).  \nHow should researchers\, scholars\, government actors and civil society engage and act critically to highlight\, investigate\, and prevent the use of AI-based systems in perpetuating human rights violations in and out of warfare\, and devise critical policies and practices that mitigate harms to society today? \nThe current AI Act has many exceptions for the use of AI in policing\, surveillance and military applications\, while there are hardly any enforceable provisions related to the use of EU technologies globally in ways that violate human rights. This workshop engages insider and critical perspectives from military officers\, AI researchers and scholars in International Humanitarian Law (IHL)\, human rights activists\, Members of Parliament\, and NGOs dealing with these concerns. We will examine these aspects in the context of ongoing conflicts in Gaza and Ukraine\, among others globally\, from the role of AI in spreading misinformation in war\, to autonomous warfare\, and civic / human rights violations. 
Our aim is to encourage interdisciplinary and critical theorizing on what policies\, regulations and practices are urgently needed to address these emerging concerns\, while developing an action agenda for future research\, concrete policy proposal work\, and pragmatic societal outcomes. \nWorkshop Programme\nThis workshop takes place in the “Panorama @Mondai”\n09.30 – 09.40 Opening & Key Themes Presented by Nitin Sawhney & Petter Ericson\n09.40 – 11.00 Panel Presentations by Panellists + Q&A and Discussions\n11.00 – 11.10 Short Break\n11.10 – 11.30 Form Participant Groups led by Invited Experts + Intros & Perspectives\n11.30 – 12.15 Workshop Deliberations and Formulating Key Outcomes\n12.15 – 12.30 Wrap-Up & Next Steps \nExpert Panellists \nVirginia Dignum\, Professor in Responsible Artificial Intelligence and Director of the AI Policy Lab\, Umeå University\, and Member of the UN High-Level Advisory Body on AI\n\nMartine Jaarsma\, Doctoral Researcher\, International Humanitarian Law\, Military Uses of AI and Critical Legal Studies\, Department of Political Science\, University of Antwerp\nMegan Karlshoej-Pedersen\, Policy Specialist at Airwars (presenting online)\nRainer Rehak\, Research Associate\, Weizenbaum Institute\nIlse Verdiesen\, Research Fellow\, Netherlands Defence Academy (NLDA)\, Chief of Staff Joint IV Commando (Col)\nTaylor Kate Woodcock LL.M.\, PhD Researcher in Public International Law\, Asser Institute\n\nWorkshop Outcomes \nImplications for Policy Research Agenda\nBarriers and obstacles to enforcing / moderating use of AI in warfare – conventions\, regulations and international treaties? 
What can we do to highlight / change them?\nFostering new collaborations within the group for research\, policy action or advocacy\nPlanning the next Contestations.AI symposium (in 2026) and opportunities for similar workshops at other conferences?\nConcrete Action Items:\n\nGlobAIpol signing up to Stop Killer Robots?\nWhitepaper\, opinion piece or journal article?\nStakeholder deliberations (as a follow-up workshop)?\n\n\n\nRelated Events\, Articles and Reports\nEvents \nContestations.AI: Transdisciplinary Symposium on AI\, Human Rights and Warfare\, Helsinki\, Oct 23\, 2024: https://contestations.ai/  \nArticles and Reports \nOp-Ed: Regulating military use of AI is in everyone’s interest\, Michael C. Horowitz\, Financial Times\, October 13\, 2025. \nResponsible by Design: Strategic Guidance Report on the Risks\, Opportunities\, and Governance of Artificial Intelligence in the Military Domain. Global Commission on Responsible Artificial Intelligence in the Military Domain (GC REAIM)\, September 2025.  \nCoveri\, Andrea\, et al. Big Tech and the US Digital-Military-Industrial Complex. Intereconomics\, vol. 60\, no. 2\, Sciendo\, 2025\, pp. 81-87.\n\nThe rolling text of the Group of Governmental Experts working on the Convention on Certain Conventional Weapons\, in particular on the legal status of Lethal Autonomous Weapon Systems.
URL:https://mondai.tudelftcampus.nl/en/event/ddw-ai-in-warfare/
LOCATION:Panorama @Mondai | House of AI – Delft\, Molengraaffsingel 29\, Delft\, 2629 JD\, Netherlands
ATTACH;FMTTYPE=image/png:https://mondai.tudelftcampus.nl/wp-content/uploads/sites/7/2025/10/TUD_Mondai_AI_MHC_workshop_v2.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Amsterdam:20251114T093000
DTEND;TZID=Europe/Amsterdam:20251114T123000
DTSTAMP:20260405T184819
CREATED:20251029T153703Z
LAST-MODIFIED:20251029T154202Z
UID:10000252-1763112600-1763123400@mondai.tudelftcampus.nl
SUMMARY:Deep Dive Workshop - Contextualising AI Principles: Universal Guidelines or Domain-Specific Policy?
DESCRIPTION:Global AI Policy Summit Deep Dive Workshop\nContextualising AI Principles: Universal Guidelines or Domain-Specific Policy?\nArtificial Intelligence principles and guidelines have proliferated in recent years—transparency\, fairness\, accountability\, and human oversight are widely endorsed. However\, when we attempt to implement and operationalise these principles in specific domains\, fundamental dilemmas emerge: \nIn crisis management: How do we balance the need for privacy with transparency when lives are at stake? Can we afford the time for human oversight in rapidly evolving disasters? \nIn education: How do we ensure fairness in AI-supported learning while respecting pedagogical autonomy and diverse learning needs? \nIn mobility: When does mandatory human oversight become a safety liability in time-critical traffic situations? How do we operationalise accountability when decisions are distributed across interconnected systems? \nThis workshop examines a fundamental question for AI policy: Which principles have to remain generic across domains\, and which should or must be contextualised? The EU AI Act attempts to address this through risk-based categories\, but how well does this approach capture domain-specific tensions? Through structured dialogue across domains\, we will map where universal principles break down\, why contextualisation is necessary\, and what this means for developing both sector-specific guidelines and cross-cutting policy frameworks. \nWorkshop Programme\nThis workshop takes place in the “Connect @Mondai” \n09.30 – 10.00 Opening & Domain Challenges.\nWelcome and short presentations: What are the specific dilemmas when applying AI principles in crisis management\, education\, and mobility? \n10.00 – 10.30 Plenary Principle Mapping.\nInteractive session: Starting from the EU Ethics Guidelines for Trustworthy AI and the Dutch Value Compass: Which principles do we prioritise? Where do different domains face irreconcilable conflicts? 
\n10.30 – 11.00 Coffee Break \n11.00 – 12.30 Domain Deep Dives\, Cross-Domain Synthesis\, and Next Steps\n> Structured discussions: Develop concrete scenarios where generic principles prove inadequate or create harm\, or where additional principles are needed. What makes each domain different? What adaptations are needed?\n> Bringing insights together: What patterns emerge? Where is universality possible\, and where is contextualisation essential?\n> Discussion of potential collaborative outputs and future dialogue \nSpeakers\nModerator: \n\nTina Comes\, Scientific Director\, DLR Institute for the Protection of Terrestrial Infrastructures; Professor in Decision Theory & ICT for Resilience\, Delft University of Technology\n\nExpert Speakers: \n\nThomas Kox\, Head of Research Group “Digitalisation and Networked Security”\, Weizenbaum Institute for the Networked Society\nArkady Zgonnikov\, Assistant Professor\, Human-Technology Interaction and Centre for Meaningful Human Control\, Delft University of Technology\nDuuk Baten\, Advisor on Digitalisation and AI in Education\, SURF
URL:https://mondai.tudelftcampus.nl/en/event/ddw-contextualising-ai-principles/
LOCATION:Connect @Mondai | House of AI – Delft\, Molengraaffsingel 29\, Delft\, 2629 JD\, Netherlands
ATTACH;FMTTYPE=image/jpeg:https://mondai.tudelftcampus.nl/wp-content/uploads/sites/7/2025/10/TU250612_4059_0283_lowres.jpg
ORGANIZER;CN="Mondai | House of AI":MAILTO:mondai@tudelft.nl
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Amsterdam:20251118T100000
DTEND;TZID=Europe/Amsterdam:20251119T170000
DTSTAMP:20260405T184819
CREATED:20251029T104431Z
LAST-MODIFIED:20251029T104434Z
UID:10000251-1763460000-1763571600@mondai.tudelftcampus.nl
SUMMARY:Symposium: Feminist AI and Collective Wellbeing
DESCRIPTION:In the ‘Feminist AI and Collective Wellbeing’ symposium\, we invite researchers\, artists\, and practitioners to explore and contest the promises and pitfalls of AI in shaping collective wellbeing. \n\n\nPromises of AI include better societal wellbeing through improved healthcare\, reduced workloads\, or efficient use of natural resources. Yet not everyone’s wellbeing counts equally\, as AI simultaneously depends on and disrupts collectivity\, for instance\, through its pressure on shared environmental resources\, worker health\, and data exploitation and extractivism. How can we reimagine these dynamics and centre collective wellbeing so that it becomes a basis for caring and sustaining relationships around AI development and implementation? \nOur goal is not to provide definitive answers or fixed definitions of wellbeing and collectivity\, but to open a shared space for inquiry\, provocation\, and speculation. By foregrounding feminist\, decolonial\, and ecological perspectives\, we aim to imagine futures in which AI development and adoption are aligned with collective wellbeing.  \nThe symposium invites participants to explore how these relationships and entanglements might be reimagined\, and how AI can be critically reshaped\, reoriented\, or even refused in pursuit of more collective and caring futures. \nThrough international keynotes and a workshop on art-based AI inquiry\, we invite participants to reflect on these questions: \n\n\nWhat collectives are prioritized in the development of AI? Whose wellbeing is valued\, and whose is erased to maintain the wellbeing of others? \n\n\nHow can communities engage with AI on their own terms? What material resources\, infrastructures\, or types of data would they need to do so? \n\n\nWhat collective futures and imaginaries might we create together\, rooted in shared wellbeing rather than extractive logics? 
\n\n\nCan AI ever be truly aligned with collective wellbeing\, or are there cases where the most ‘caring’ act might be to refuse or resist AI altogether? \n\n\nRegister Here\n18 November – Part 1 | 10.00 – 15.00 \nWorkshop on art-based AI inquiry for collective knowledge generation \nOrganizers: Feminist Generative AI Lab with Virginia Tassinari and Vera van der Burg \nGuests: Soyun Park\, Mafalda Gamboa\, and Elvia Vasconcelos \nIn this workshop\, we explore art-based inquiry as an alternative form of knowledge generation\, which can complement and enrich traditional approaches to research in AI. We invite participants to engage with new\, unusual\, artistic\, and embodied forms of exploration to reflect on the symposium theme. \nPlease note that the workshop has limited spots. Lunch is included. \n18 November – Part 2 | 15.00 – 17.00 \nKeynotes + Discussion + Drinks \nPlease note you can choose to register only for this part of the symposium. \nRegister Here\n19 November | 10.00 – 17.00 \nPhD Day \nFollowing the symposium on November 18th\, we invite PhD candidates to join us for a dedicated day of peer exchange\, collaborative feedback\, and dialogue. The PhD Day offers PhD candidates working on topics such as AI\, feminism\, care\, collectivity\, sustainability\, digital labor\, and related themes an opportunity to continue exploring the theme of the symposium in relation to their own research themes and practices in an interdisciplinary environment. \nThe PhD Day programme will include peer review sessions that allow participants to share work in progress and to receive feedback from their peers; interactive activities that address the challenges of working as a PhD researcher; as well as community building and networking opportunities. \nMore info on PhD Day
URL:https://mondai.tudelftcampus.nl/en/event/symposium-feminist-ai-and-collective-wellbeing/
LOCATION:Next Delft\, Molengraaffsingel 8\, Delft\, 2629 JD\, Netherlands
ATTACH;FMTTYPE=image/jpeg:https://mondai.tudelftcampus.nl/wp-content/uploads/sites/7/2025/10/event.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Amsterdam:20251127T143000
DTEND;TZID=Europe/Amsterdam:20251127T170000
DTSTAMP:20260405T184819
CREATED:20251008T110141Z
LAST-MODIFIED:20251106T100050Z
UID:10000244-1764253800-1764262800@mondai.tudelftcampus.nl
SUMMARY:Best AI-Related MSc thesis Award 2025
DESCRIPTION:Mondai | House of AI and the TU Delft AI Initiative are happy to host\nthe AI-Related MSc Thesis Awards 2025!\nThe Best AI-Related MSc Thesis Award (short: AI Thesis Award) is a new award that celebrates outstanding master’s research at TU Delft dedicated to the development\, application or contexts of artificial intelligence. Master’s students from all faculties participated with their finished theses\, on the condition that the research is centered around or involves AI. The prize will be awarded in two categories: IN AI for research that advances AI itself\, and WITH AI for research that applies AI in a specific domain. One TU Delft graduate will be selected in each category after the top 3 candidates pitch their theses at this award ceremony. \nProgramme \n14.30 – Walk-in\n15.00 – Opening\n15.15 – Thesis pitches\n16.15 – Break + walk-in alumni community\n16.30 – Award ceremony & Kick Off Alumni Community for AI\, Data & Digitalisation\n17.00 – Network drinks \nFinalists Best AI-related MSc Thesis Award 2025 \nCategory IN AI \n\nKrzysztof Piotr Baran (Computer Science @Faculty of EEMCS): Federated MaxFuse: Diagonal Integration of Weakly Linked Spatial and Single-cell Data through Federated Learning\nPrajit Bhaskaran (Computer Science @Faculty of EEMCS): Transformers Can Do Bayesian Clustering\nSimon Gebraad (Robotics @Faculty of ME): LeAP: Label any Pointcloud in any domain using Foundation Models\n\nCategory WITH AI \n\nAntonio Magherini (Civil Engineering @Faculty of CEG): JamUNet: predicting the morphological changes of braided sand-bed rivers with deep learning\nIsa Oguz (Management of Technology @Faculty of TPM): Victim Blaming Bias in Traffic Accidents Using Large Language Models\nJeroen Hagenus (Robotics @Faculty of ME): Realistic Adversarial Attacks for Robustness Evaluation of Trajectory Prediction Models\n\nAI Alumni Community Kick Off \nIf you only want to attend the launch of the AI Alumni Community\, the walk-in is between 16:15 – 16:30 and the borrel starts 
around 17:00 \nDuring this event\, we also launch the new AI\, Data and Digitalisation Alumni Community (short: AI Alumni Community). By connecting TU Delft alumni across generations\, disciplines and sectors\, we aim to unlock new opportunities for innovation\, strengthen the bridge between research\, application and societal value\, and shape a digital future that benefits everyone. This community is open to all past\, present and future TU Delft alumni with an interest in or background in AI\, data and digitalisation. Community members include graduates from bachelor’s and master’s programmes as well as PhD alumni from across the university.
URL:https://mondai.tudelftcampus.nl/en/event/best-ai-related-msc-thesis-award-2025/
LOCATION:Mondai | House of AI – Delft\, Molengraaffsingel 29\, Delft\, 2629 JD
ATTACH;FMTTYPE=image/jpeg:https://mondai.tudelftcampus.nl/wp-content/uploads/sites/7/2025/10/TU250612_4059_0283_lowres.jpg
ORGANIZER;CN="Mondai | House of AI":MAILTO:mondai@tudelft.nl
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Amsterdam:20260219T120000
DTEND;TZID=Europe/Amsterdam:20260219T133000
DTSTAMP:20260405T184819
CREATED:20260122T124509Z
LAST-MODIFIED:20260122T124509Z
UID:10000262-1771502400-1771507800@mondai.tudelftcampus.nl
SUMMARY:TU Delft AI Lunch: AI Regulations (and how to work within them)
DESCRIPTION:Mondai | House of AI is happy to host a new edition of the TU Delft AI Lunch:\nAI Regulations (and how to work within them)\n This edition of the Delft AI Lunch focuses on AI Regulations and how to work within them. Contributing to the panel discussion are: \n\nAI Act expert Hannah Ruschemeier (Professor of Public Law at Universität Osnabrück)\,\nEmpirical law expert on the GDPR Julia Krämer (PhD at EUR)\,\nData steward Nicolas Dintzner (TPM Faculty)\, and a cybersecurity law expert (TBA).\n\nMarie-Therese Sekwenz (TPM\, AI Futures Lab) is moderating the panel. \nThis event includes a free lunch\, for which registration is required (help us reduce food waste!) \n(This event will be held in English) \nProgramme\n12.00 – 12.30 | Lunch & networking\n12.30 – 13.30 | Panel AI Regulations and how to work within them: moderated by Marie-Therese Sekwenz\, featuring Hannah Ruschemeier\, Julia Krämer\, and Nicolas Dintzner \nThe Delft AI (Lab) Lunch series\nThis series is part of the Delft AI (Lab) Lunches\, a recurring meet-up hosted by the TU Delft AI Labs & Talent community at Mondai | House of AI.\nEvery session\, we host a panel to discuss challenges and developments at the intersection of AI and a specific field. During these events\, you can participate\, learn\, make connections\, inspire and be inspired by and with the Delft AI Community. We invite all interested staff and students from TU Delft to join these sessions. Please contact community manager Charlotte Boelens for more information about this series or the TU Delft AI Labs & Talent Programme. \nNote for TU Delft PhDs\nThe TU Delft AI Lunch series is eligible for earning discipline-related skills GSC with the ‘Form for earning GSC for TU Delft AI(-related) seminars’. Check with your local Faculty Graduate School (FGS) whether it offers this option for earning Discipline Related Skills GSC\, and with your supervisors whether they accept our seminars on your Doctoral Education (DE) list. 
If you already have a form\, don’t forget to bring it with you.
URL:https://mondai.tudelftcampus.nl/en/event/tu-delft-ai-lunch-ai-regulations/
LOCATION:Panorama @Mondai | House of AI – Delft\, Molengraaffsingel 29\, Delft\, 2629 JD\, Netherlands
CATEGORIES:AI Lab Lunch
ATTACH;FMTTYPE=image/jpeg:https://mondai.tudelftcampus.nl/wp-content/uploads/sites/7/2025/06/TU250612_4059_0085-Verbeterd-NR_lowres.jpg
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Amsterdam:20260324T123000
DTEND;TZID=Europe/Amsterdam:20260324T153000
DTSTAMP:20260405T184819
CREATED:20260203T113145Z
LAST-MODIFIED:20260223T103232Z
UID:10000265-1774355400-1774366200@mondai.tudelftcampus.nl
SUMMARY:Session AI Gigafactory Rotterdam
DESCRIPTION:Mondai | House of AI and the AI-hub Zuid-Holland\, together with Volt\,\nare pleased to host this session about the AI Gigafactory in Rotterdam \n(This event will be held in Dutch) \nAI is developing at a rapid pace. AI technologies and innovations are widely used in industry\, public services and education. Questions about the position of the Netherlands and Europe in this development are on everyone’s mind: how do we shape our digital future and autonomy? \nOn Tuesday 24 March\, Volt\, initiator of the European AI Gigafactory in Rotterdam\, the AI-hub Zuid-Holland and TU Delft – Mondai | House of AI host a session about the plans for the realisation of the AI Gigafactory. The session focuses on the importance of developing such infrastructure\, not only for the region\, the Netherlands and Europe\, but also for the sovereignty of your organisation. In addition\, we would like to get an idea of potential use cases from your (future) practice and how these can be translated into computing needs and supporting facilities in the AI Gigafactory. \nWant to know more about the AI Gigafactory? Read more on the Volt page. This event is by invitation only\, but we are certainly open to any stakeholders who want to be part of this conversation. Sign up and we will get in touch. \nProgramme\n12.30 – 13.00 Walk-in and lunch\n13.00 – 13.15 Opening and introduction by Joost Poort on behalf of TU Delft – Mondai | House of AI and the AI-hub Zuid-Holland;\n13.15 – 13.45 Presentation of plans for\, and the progress of\, the AI Gigafactory in Rotterdam by Han de Groot on behalf of Volt;\n13.45 – 14.30 Insights into AI & Compute from several domains: \n\nErick Webbe\, CEO – Kickstart AI\,\nSven Hamelink\, Head Science & Technology – de Politie\,\nErik Scherff\, CIO | IT-director TU Delft.\n\n14.30 – 15.30 Discussion moderated by Tom Jessen \nGet in touch\nQuestions about the AI Gigafactory? Do get in touch with us! 
\nJoost Poort\nManaging Director Mondai | House of AI\nAI Innovation Lead TU Delft\nHan de Groot\nCEO Volt\nInitiator of the AI Gigafactory
URL:https://mondai.tudelftcampus.nl/en/event/session-ai-gigafactory-rotterdam/
LOCATION:Panorama @Mondai | House of AI – Delft\, Molengraaffsingel 29\, Delft\, 2629 JD\, Netherlands
ATTACH;FMTTYPE=image/png:https://mondai.tudelftcampus.nl/wp-content/uploads/sites/7/2026/02/Volt-AIGF.png
ORGANIZER;CN="Mondai | House of AI":MAILTO:mondai@tudelft.nl
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Amsterdam:20260506T090000
DTEND;TZID=Europe/Amsterdam:20260507T170000
DTSTAMP:20260405T184819
CREATED:20251217T183837Z
LAST-MODIFIED:20260204T101925Z
UID:10000257-1778058000-1778173200@mondai.tudelftcampus.nl
SUMMARY:Designing and Developing Ethically Aligned Defence AI
DESCRIPTION:Mondai | House of AI is pleased to host the\nDesigning and Developing Ethically Aligned Defence AI Conference\norganised by the ELSA Defense Lab\, in collaboration with the TU Delft Digital Ethics Centre \nAdvances in artificial intelligence (AI) are enabling military systems to operate in environments where uncertainty\, adversarial dynamics\, and time-critical decision-making are the norm rather than the exception. In such contexts\, ethical design cannot rely solely on predictable scenarios\, assumptions of human oversight\, or static rule-based constraints; rather\, it requires careful and substantial ethical programming and design to ensure that AI-enabled systems behave in alignment with moral and legal principles throughout their operational lifecycle. \nThis conference explores how ethically aligned military AI can be conceived\, designed\, and developed for deployment in uncertain\, adversarial\, and time-critical environments. Across two days\, contributors examine normative and methodological foundations related to the embedding of moral and ethical constraints during the early stages of the lifecycle of military AI systems. \nCall for Abstracts\nThis conference focuses specifically on the challenge of designing and developing ethically aligned military AI technologies. 
We invite contributions that address\, among others\, the following questions: \n\nWhat methods should be used to embed robust moral constraints into particular AI-enabled systems that must act adaptively under uncertainty?\nWhat technical architectures (e.g.\, constraint learning\, formal verification\, runtime monitoring) best support ethical and moral guardrails under environmental uncertainty for different military AI capabilities?\nWhat fail-safe behaviours and override mechanisms should be incorporated\, depending on the risk levels?\nHow should human–machine decision authority be allocated and dynamically recalibrated as operations evolve in real time?\nHow do data pipeline choices (dataset curation\, adversarial robustness\, bias mitigation) influence downstream ethical reliability for particular military AI decision-support systems?\nWhat forms of explainability are meaningful and practical in time-critical command settings for particular military AI decision-support systems?\nHow can testing\, validation\, and verification frameworks account for emergent behaviours in deployment environments?\n\nWe particularly welcome contributions that explore the relationship between normative ethical principles\, applied ethical solutions\, and concrete engineering methods\, ensuring that ethically relevant constraints are embedded from the design phase through deployment. \nHow to hand in\nAbstracts of no more than 500 words should be submitted to Perica Jovchevski by (the extended deadline) February 20\, 2026\, via email (p.j.jovchevski@tudelft.nl).\nNotifications of acceptance will be communicated by March 1\, 2026. \nAccepted abstracts will be allocated 25 minutes for presentation and 15 minutes for discussion.\nRevised conference papers (8\,000–10\,000 words)\, to be considered for publication\, are due by June 30\, 2026.\nFor further inquiries\, please also contact Perica Jovchevski via the button below. 
\nPapers presented at the conference will be considered for publication in an edited volume on the conference theme. Please note that acceptance to present at the conference does not guarantee publication. \nHand in Abstract\nConference Programme\nPlease keep an eye on this page for further updates! \nOrganisation \nPerica Jovchevski\, Post-doctoral Researcher in the section of Ethics and Philosophy of Technology at TU Delft.\nStefan Buijsman\, Associate Professor Responsible AI at TU Delft.
URL:https://mondai.tudelftcampus.nl/en/event/elsa-defense-designing-developing-ethically-ai/
LOCATION:Connect @Mondai | House of AI – Delft\, Molengraaffsingel 29\, Delft\, 2629 JD\, Netherlands
ATTACH;FMTTYPE=image/jpeg:https://mondai.tudelftcampus.nl/wp-content/uploads/sites/7/2025/12/TUD_Mondai_AI_GlobAIPol_networking.jpg
END:VEVENT
END:VCALENDAR