BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Mondai - ECPv6.15.12.2//NONSGML v1.0//EN
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-ORIGINAL-URL:https://mondai.tudelftcampus.nl
X-WR-CALDESC:Evenementen voor Mondai
REFRESH-INTERVAL;VALUE=DURATION:PT1H
X-Robots-Tag:noindex
X-PUBLISHED-TTL:PT1H
BEGIN:VTIMEZONE
TZID:Europe/Amsterdam
BEGIN:DAYLIGHT
TZOFFSETFROM:+0100
TZOFFSETTO:+0200
TZNAME:CEST
DTSTART:20230326T010000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:+0200
TZOFFSETTO:+0100
TZNAME:CET
DTSTART:20231029T010000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:+0100
TZOFFSETTO:+0200
TZNAME:CEST
DTSTART:20240331T010000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:+0200
TZOFFSETTO:+0100
TZNAME:CET
DTSTART:20241027T010000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:+0100
TZOFFSETTO:+0200
TZNAME:CEST
DTSTART:20250330T010000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:+0200
TZOFFSETTO:+0100
TZNAME:CET
DTSTART:20251026T010000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:+0100
TZOFFSETTO:+0200
TZNAME:CEST
DTSTART:20260329T010000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:+0200
TZOFFSETTO:+0100
TZNAME:CET
DTSTART:20261025T010000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:+0100
TZOFFSETTO:+0200
TZNAME:CEST
DTSTART:20270328T010000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:+0200
TZOFFSETTO:+0100
TZNAME:CET
DTSTART:20271031T010000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTART;TZID=Europe/Amsterdam:20260324T123000
DTEND;TZID=Europe/Amsterdam:20260324T153000
DTSTAMP:20260404T191708Z
CREATED:20260203T112219Z
LAST-MODIFIED:20260223T103206Z
UID:10000264-1774355400-1774366200@mondai.tudelftcampus.nl
SUMMARY:Bijeenkomst AI Gigafactory Rotterdam
DESCRIPTION:Mondai | House of AI and the AI-hub Zuid-Holland\, together with Volt\,\nwarmly invite you to this meeting about the AI Gigafactory in Rotterdam (the language of this event is Dutch). \nAI is developing at breakneck speed. AI technologies and innovations are being widely applied in industry\, public services and education. Questions about the position of the Netherlands and Europe in this development are on everyone's mind: how do we shape our digital future and autonomy? \nOn Tuesday 24 March\, Volt\, initiator of the European AI Gigafactory in Rotterdam\, the AI-hub Zuid-Holland and TU Delft – Mondai | House of AI will host a meeting about the plans to realise the AI Gigafactory. The session centres on a conversation about the importance of developing such infrastructure\, not only for the region\, for the Netherlands and for Europe\, but also for the sovereignty of your organisation. We would also like to build a picture of potential use cases from your (future) practice and their translation into compute needs and supporting facilities in the AI Gigafactory. \nWant to know more about the AI Gigafactory? Read more on Volt's page. This event is by invitation\, but we certainly welcome all stakeholders who want to be part of this conversation. Sign up and we will get in touch. \nProgramme\n12.30 – 13.00 Walk-in and reception with lunch\n13.00 – 13.15 Opening and introduction by Joost Poort on behalf of TU Delft | Mondai House of AI and the AI-hub Zuid-Holland;\n13.15 – 13.45 Presentation of the plans for and progress of the AI Gigafactory in Rotterdam by Han de Groot on behalf of Volt;\n13.45 – 14.30 Insights on AI & Compute in different practices \n\nErick Webbe\, CEO – Kickstart AI\,\nSven Hamelink\, Head of Science & Technology – Politie\,\nErik Scherff\, CIO | IT Director TU Delft.\n\n14.30 – 15.30 Discussion led by moderator Tom Jessen \nGet in Touch\nQuestions about the AI Gigafactory? Do get in touch with us! 
\nJoost Poort\nDirector Mondai | House of AI\nAI Innovation Lead TU Delft\nHan de Groot\nCEO Volt\nInitiator of the AI Gigafactory
URL:https://mondai.tudelftcampus.nl/event/ai-gigafactory-bijeenkomst/
LOCATION:Panorama @Mondai | House of AI – Delft\, Molengraaffsingel 29\, Delft\, 2629 JD\, Netherlands
ATTACH;FMTTYPE=image/png:https://mondai.tudelftcampus.nl/wp-content/uploads/sites/7/2026/02/Volt-AIGF.png
ORGANIZER;CN="Mondai | House of AI":MAILTO:mondai@tudelft.nl
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Amsterdam:20251127T143000
DTEND;TZID=Europe/Amsterdam:20251127T170000
DTSTAMP:20260404T191708Z
CREATED:20251008T110002Z
LAST-MODIFIED:20251106T100014Z
UID:10000243-1764253800-1764262800@mondai.tudelftcampus.nl
SUMMARY:Best AI-Related MSc Thesis Award 2025
DESCRIPTION:Mondai | House of AI and the TU Delft AI Initiative are happy to host\nthe AI-Related MSc Thesis Awards 2025! \nThe Best AI-Related MSc Thesis Award (short: AI Thesis Award) is a new award that celebrates outstanding master’s research at TU Delft dedicated to the development\, application or contexts of artificial intelligence. Master’s students from all faculties participated with their finished thesis\, on the condition that the research is centered on or involves AI. The prize will be awarded in two categories: IN AI for research that advances AI itself\, and WITH AI for research that applies AI in a specific domain. One TU Delft graduate will be selected in each category after the top 3 candidates pitch their thesis at this award ceremony. \nProgramme \n14.30 – Walk-in\n15.00 – Opening\n15.15 – Thesis pitches\n16.15 – Break + walk-in alumni community\n16.30 – Award ceremony & Kick Off Alumni Community for AI\, Data & Digitalisation\n17.00 – Network drinks \nFinalists Best AI-Related MSc Thesis Award 2025 \nCategory IN AI \n\nKrzysztof Piotr Baran (Computer Science @Faculty of EEMCS): Federated MaxFuse: Diagonal Integration of Weakly Linked Spatial and Single-cell Data through Federated Learning\nPrajit Bhaskaran (Computer Science @Faculty of EEMCS): Transformers Can Do Bayesian Clustering\nSimon Gebraad (Robotics @Faculty of ME): LeAP: Label any Pointcloud in any domain using Foundation Models\n\nCategory WITH AI \n\nAntonio Magherini (Civil Engineering @Faculty of CEG): JamUNet: predicting the morphological changes of braided sand-bed rivers with deep learning\nIsa Oguz (Management of Technology @Faculty of TPM): Victim Blaming Bias in Traffic Accidents Using Large Language Models\nJeroen Hagenus (Robotics @Faculty of ME): Realistic Adversarial Attacks for Robustness Evaluation of Trajectory Prediction Models\n\nAI Alumni Community Kick Off \nIf you only want to attend the launch of the AI Alumni Community\, the walk-in is between 16.15 – 16.30 and drinks 
start around 17.00. \nDuring this event\, we also launch the new AI\, Data and Digitalisation Alumni Community (short: AI Alumni Community). By connecting TU Delft alumni across generations\, disciplines and sectors\, we aim to unlock new opportunities for innovation\, strengthen the bridge between research\, application and societal value\, and shape a digital future that benefits everyone. This community is open to all past\, present and future TU Delft alumni with an interest or background in AI\, data and digitalisation. Community members include graduates from bachelor’s and master’s programmes as well as PhD alumni from across the university.
URL:https://mondai.tudelftcampus.nl/event/best-ai-related-msc-thesis-award-2025/
LOCATION:Mondai | House of AI – Delft\, Molengraaffsingel 29\, Delft\, 2629 JD
ATTACH;FMTTYPE=image/jpeg:https://mondai.tudelftcampus.nl/wp-content/uploads/sites/7/2025/10/TU250612_4059_0283_lowres.jpg
ORGANIZER;CN="Mondai | House of AI":MAILTO:mondai@tudelft.nl
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Amsterdam:20251114T093000
DTEND;TZID=Europe/Amsterdam:20251114T123000
DTSTAMP:20260404T191708Z
CREATED:20251029T153703Z
LAST-MODIFIED:20251029T154004Z
UID:10000250-1763112600-1763123400@mondai.tudelftcampus.nl
SUMMARY:Deep Dive Workshop - Contextualising AI Principles: Universal Guidelines or Domain-Specific Policy?
DESCRIPTION:Global AI Policy Summit Deep Dive Workshop\nContextualising AI Principles: Universal Guidelines or Domain-Specific Policy?\nArtificial Intelligence principles and guidelines have proliferated in recent years: transparency\, fairness\, accountability\, and human oversight are widely endorsed. However\, when we attempt to implement and operationalise these principles in specific domains\, fundamental dilemmas emerge: \nIn crisis management: How do we balance the need for privacy with transparency when lives are at stake? Can we afford the time for human oversight in rapidly evolving disasters? \nIn education: How do we ensure fairness in AI-supported learning while respecting pedagogical autonomy and diverse learning needs? \nIn mobility: When does mandatory human oversight become a safety liability in time-critical traffic situations? How do we operationalise accountability when decisions are distributed across interconnected systems? \nThis workshop examines a fundamental question for AI policy: Which principles must remain generic across domains\, and which should or must be contextualised? The EU AI Act attempts to address this through risk-based categories\, but how well does this approach capture domain-specific tensions? Through structured dialogue across domains\, we will map where universal principles break down\, why contextualisation is necessary\, and what this means for developing both sector-specific guidelines and cross-cutting policy frameworks. \nWorkshop Programme\nThis workshop takes place in the “Connect @Mondai”. \n09.30 – 10.00 Opening & Domain Challenges.\nWelcome and short presentations: What are the specific dilemmas when applying AI principles in crisis management\, education\, and mobility? \n10.00 – 10.30 Plenary Principle Mapping.\nInteractive session: Starting from the EU Ethics Guidelines for Trustworthy AI and the Dutch Value Compass: Which principles do we prioritise? Where do different domains face irreconcilable conflicts? 
\n10.30 – 11.00 Coffee Break \n11.00 – 12.30 Domain Deep Dives\, Cross-Domain Synthesis\, and Next Steps\n> Structured discussions: Develop concrete scenarios where generic principles prove inadequate or create harm\, or where additional principles are needed. What makes each domain different? What adaptations are needed?\n> Bringing insights together: What patterns emerge? Where is universality possible\, and where is contextualisation essential?\n> Discussion of potential collaborative outputs and future dialogue \nSpeakers\nModerator: \n\nTina Comes\, Scientific Director\, DLR Institute for the Protection of Terrestrial Infrastructures; Professor in Decision Theory & ICT for Resilience\, Delft University of Technology\n\nExpert Speakers: \n\nThomas Kox\, Head of Research Group “Digitalisation and Networked Security”\, Weizenbaum Institute for the Networked Society\nArkady Zgonnikov\, Assistant Professor\, Human-Technology Interaction and Centre for Meaningful Human Control\, Delft University of Technology\nDuuk Baten\, Advisor on Digitalisation and AI in Education\, SURF
URL:https://mondai.tudelftcampus.nl/event/ddw-contextualising-ai-principles/
LOCATION:Connect @Mondai | House of AI – Delft\, Molengraaffsingel 29\, Delft\, 2629 JD\, Netherlands
ATTACH;FMTTYPE=image/jpeg:https://mondai.tudelftcampus.nl/wp-content/uploads/sites/7/2025/10/TU250612_4059_0283_lowres.jpg
ORGANIZER;CN="Mondai | House of AI":MAILTO:mondai@tudelft.nl
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Amsterdam:20251114T093000
DTEND;TZID=Europe/Amsterdam:20251114T123000
DTSTAMP:20260404T191708Z
CREATED:20251020T133407Z
LAST-MODIFIED:20251030T095415Z
UID:10000246-1763112600-1763123400@mondai.tudelftcampus.nl
SUMMARY:Deep Dive Workshop - AI in Health Care
DESCRIPTION:Global AI Policy Summit Deep Dive Workshop\nAI in Health Care\nDespite progress in AI governance\, much of the current regulatory framework remains grounded in high-level principles and guidelines which\, while valuable\, often lack the specificity required for practical implementation – particularly at the intersection with the highly regulated and operationally complex domain of healthcare. This is especially urgent in “high-risk” areas of healthcare\, where decisions are irreversible\, outcomes are critical\, and resources are constrained. Such applications demand elevated levels of transparency\, accountability\, and ethical oversight. This workshop draws on examples from clinical practice\, public health\, health policy\, and global health to foster open discussion around the most pressing priority areas at the intersection of AI and healthcare. \nWorkshop Programme and Speakers\nThe workshop starts off with short talks by five expert presenters\, followed by an interactive round-table discussion. \n> Fabian Lorig (Associate Professor of Computer Science\, Malmö University)\n> Jason Tucker (Researcher at the Institute for Future Studies and Adjunct Associate Professor at the AI Policy Lab\, Umeå University)\n> Stefan Buijsman (Associate Professor of Responsible AI\, TU Delft)\n> Siri Helle (Psychologist and Award-winning Author of The Emotion Trap)\n> Marie-Therese Sekwenz (PhD candidate at the Faculty of Technology\, Policy and Management\, TU Delft\, and Deputy Director of the AI Futures Lab on Rights and Justice)
URL:https://mondai.tudelftcampus.nl/event/ddw-ai-in-health-care/
LOCATION:Innovate @Mondai | House of AI – Delft\, Molengraaffsingel 29\, Delft\, 2629 JD\, Netherlands
ATTACH;FMTTYPE=image/jpeg:https://mondai.tudelftcampus.nl/wp-content/uploads/sites/7/2025/10/AIinHealthCare_OrganDonation_StockImage.jpg
ORGANIZER;CN="Mondai | House of AI":MAILTO:mondai@tudelft.nl
END:VEVENT
BEGIN:VEVENT
DTSTART;VALUE=DATE:20251112
DTEND;VALUE=DATE:20251115
DTSTAMP:20260404T191708Z
CREATED:20250630T140523Z
LAST-MODIFIED:20251113T083927Z
UID:10000238-1762905600-1763164799@mondai.tudelftcampus.nl
SUMMARY:Global AI Policy Research Summit 2025 Delft
DESCRIPTION:Global AI Policy Research Summit 2025 Delft\nFraming the Future of AI Governance: Leading with Evidence-based Policy\nOn 12 – 14 November\, Mondai | House of AI is happy to host the next Global AI Policy Summit in Delft! \n(the language of this event is English) \nPredictions about the promises and perils of artificial intelligence (AI) are increasingly prevalent: from the future of work and education to breakthroughs in healthcare and public services\, and the reconfiguration of warfare and national security. Such narratives profoundly influence how we imagine\, initiate\, and interrogate the development and deployment of AI innovations. Crucially\, how we frame these issues shapes the decisions we make about the future of AI in society. Policy research allows us to highlight these underlying narratives and ask how they frame the economic\, social\, environmental and human rights impacts of AI systems. \nThe Global AI Policy Research Summit 2025 convenes a growing international network of research institutes and policymakers. Summit participants work to uncover the mechanisms of – and potential pathways for – effectively framing the future of responsible AI\, drawing on evidence-based policy and good governance practices. Together they analyse how current dominant narratives serve to frame the global AI policy landscape\, and jointly identify effective strategies and collaborations for the future of responsible AI governance. \nBuilding on the AI Policy Research Roadmap\, which was developed through collaborative discussions at the inaugural AI Policy Summit 2024 in Stockholm\, summit participants can further advance a shared vision and concrete actions for the future of responsible AI governance through collaborative research and practice! 
\nLearn More About The Global AI Policy Research Network\nProgramme\nWednesday 12 November – Welcome Drinks & ‘Indigenous Perspectives on AI’ @Vakwerkhuis\nBefore the summit starts\, we would like to invite participants to join us for welcome drinks and a workshop on ‘Indigenous Perspectives on AI’.\n \nProgramme\n18.00 – 20.00 Drinks and Workshop hosted by Anna Melnyk (Delft Design for Values Institute) & Lynnsey Chartrand (Head of Indigenous Initiatives at Mila\, joining online).\nThis collaborative workshop invites participants to critically engage with Indigenous perspectives on artificial intelligence. Through reflective discussions\, participants will explore how Indigenous knowledges\, governance practices\, and relational worldviews can inform more responsible\, equitable\, and sustainable decision-making about AI futures. The session aims to expand awareness of diverse epistemologies and to foster dialogue on how AI systems can better serve communities\, lands\, and ecosystems. \nYou can read more about their work here: Design for Values and Critical Raw Materials: Decolonial Justice Perspective – Delft Design for Values Institute \nThursday 13 November – Day 1\nGeneral Programme\n08.30 – 09.00 Walk-in and Welcome Coffee\n09.00 – 13.00 Plenary Morning Programme\n13.00 – 14.00 Lunch\n14.00 – 17.30 Plenary Afternoon Programme\n17.30 Dinner and Drinks @Firma van Buiten \nFriday 14 November – Day 2\nGeneral Programme\n08.30 – 09.00 Walk-in and Welcome Coffee\n09.00 – 12.30 Plenary and Break-out Morning Programme\n12.30 – 13.30 Lunch\n13.30 – 15.30 Plenary Afternoon Programme\n15.30 Close \nNovember 13 Detailed Programme\nSpeakers\nNovember 14 Detailed Programme\nDo you want to join this inspiring event? 
Please contact the organisers!\nTaylor Stone \nt.w.stone-1@tudelft.nl \nHelma Dokkum \nw.m.dokkum@tudelft.nl \nFull Programme November 13 – Day 1: Reframing AI Narratives\n08.30 – 09.00 Welcome and Coffee \n09.00 – 09.15: Summit Opening\nSummit Opening by Virginia Dignum (Umeå University)\, Geert-Jan Houben (TU Delft) and Isadora Hellegren Létourneau (Mila) \n09.15 – 11.00: Session 1 – A Year with the Roadmap for AI Policy Research & Network Round Table\nNetwork introductions and reflections led by Isadora Hellegren Létourneau (Mila) \n11.00 – 11.30 Break \n11.30 – 13.00: Session 2 – Rethinking AI Safety and Sovereignty – Regional Perspectives (Hybrid session)\nWhat can be learned from assessing current governance approaches to AI sovereignty and safety in the EU\, Africa\, Asia\, and Canada? \nPanel moderated by Frank Dignum (Umeå University). \nPanellists (confirmed so far):\n> Ayantola Alayande (Global Center on AI Governance);\n> Carolina Aguerre (Universidad Católica del Uruguay)\, who will be joining online;\n> Edward Tsoi (AI Safety Asia)\, who will be joining online. \n13.00 – 14.00 Lunch \n14.00 – 15.30: Session 3 – Building Alternative Narratives\nCan novel insights from foresight methods and systems perspectives offer alternative narratives for the development of effective AI policies? \nPanel moderated by Ginevra Castellano (Uppsala University) \nPanellists (confirmed so far):\n> Roel Dobbe (TU Delft);\n> The Anh Han (Teesside University);\n> Sam Bogerd (Centre for Future Generations). \n15.30 – 16.00 Break \n16.00 – 17.30: Session 4 – Governance for Innovation\nHow could participatory and collaborative approaches foster a narrative that aligns governance and regulation with innovation? \nPanel moderated by Mirko Schaefer (Utrecht University). \nPanellists (confirmed so far):\n> Kerstin Bach (Norwegian University of Science and Technology);\n> Ley Muller (Women in AI Governance);\n> David Lewis (Trinity College Dublin). 
\n17.30 – 18.00: Close of Day 1 – Reframing AI Narratives\nNetworking opportunity \n18.00 – 20.00 Dinner and Drinks at Firma van Buiten \nFull Programme November 14 – Day 2: Moving Beyond High-Level Principles\n08.30 – 09.00 Welcome and Coffee \n09.00 – 09.30: Opening of Day 2\nReflections on Day 1 \n09.30 – 12.30: Deep Dive Workshop – AI in Warfare: Actions\, Policies and Practices\nDeep Dive Workshop led by Nitin Sawhney (University of Arts Research Institute Helsinki) and Petter Ericson (Umeå University).\nThis workshop is held in the Panorama @Mondai\, ground floor. \nFor more information\, please refer to the designated event page. \n09.30 – 12.30: Deep Dive Workshop – AI in Health Care\nDeep Dive Workshop led by Jason Tucker (Umeå University)\, Fabian Lorig (Malmö University) and Stefan Buijsman (TU Delft).\nThis workshop is held in the Innovate @Mondai\, 1st floor. \nFor more information\, please refer to the designated event page. \n09.30 – 12.30: Deep Dive Workshop – Contextualising AI Principles: Universal Guidelines or Domain-Specific Policy?\nDeep Dive Workshop moderated by Tina Comes (TU Delft)\, with expert speakers Thomas Kox (Weizenbaum Institute)\, Arkady Zgonnikov (TU Delft)\, and Duuk Baten (SURF). \nThis workshop is held in the Connect @Mondai\, 1st floor. \nFor more information\, please refer to the designated event page. \n12.30 – 13.30 Lunch \n13.30 – 14.30: Session 6 – Building Bridges from Research to Policy\nReflections on the deep-dive workshops; proposals to build stronger collaborations with policy-makers moving forward \nModerated by Virginia Dignum (Umeå University) and Tina Comes (TU Delft) \n14.30 – 15.30: Session 7 – Next Steps for the Global AI Policy Research Network\nBrief reflections from deep-dive leads followed by plenary discussion led by Isadora Hellegren Létourneau (Mila) \n15.30 – 16.00 Close of Summit \nSpeakers\nVirginia Dignum. 
Professor in Responsible AI and Director of the AI Policy Lab\, Umeå University\nVirginia Dignum is a professor in Responsible Artificial Intelligence\, Director of the AI Policy Lab\, a member of the UN High-Level Advisory Body on AI\, and senior advisor to the Wallenberg Foundations. \nSessions\n> Summit opening & network round-table (November 13\, 2025 at 09.00h)\n> Session 2: Rethinking AI safety and sovereignty – regional perspectives (November 13\, 2025 at 11.30h)\n> Session 5: Deep-dive workshop – moving beyond high-level principles (November 14\, 2025 at 09.30h)\n> Session 6: Building bridges from research to policy (November 14\, 2025 at 13.30h) \nGeert-Jan Houben. Pro Vice Rector Magnificus Artificial Intelligence\, Data and Digitalisation\, TU Delft\n \nGeert-Jan Houben is Pro Vice Rector Magnificus Artificial Intelligence\, Data and Digitalisation (PVR AI) at Delft University of Technology (TU Delft). He leads the TU Delft activities in the field of AI\, data and digitalisation\, for education\, research and valorisation\, and for relevant support. This includes the establishment of the TU Delft AI Labs to promote cross-fertilisation between AI experts and scientists who use AI in their research\, as well as the representation of TU Delft in regional\, national and international co-operation on this theme. He is also a full professor of Web Information Systems (WIS) in the Software Technology (ST) department\, where he leads a research group on Web Information Systems and is involved in Computer Science education in Delft\, with a focus on data-based information systems on the Web. \nSessions\n> Summit opening & network round-table (November 13\, 2025 at 09.00h) \nIsadora Hellegren Létourneau. 
Senior Project Manager AI Policy Research\, Public Policy and Inclusion\, Mila\n\nIsadora Hellegren leads multistakeholder and interdisciplinary AI policy research at Mila – Quebec Artificial Intelligence Institute\, such as the Mila AI Policy Fellowship. She works to bridge the gap between AI research and public policy to inform better AI policy – for the benefit of all. Before joining Mila\, Isadora was a Senior Policy Specialist at the Swedish International Development Cooperation Agency (Sida)\, where she advised on human rights and ICTs\, democratic governance\, and gender equality. Her academic and professional background\, focusing on internet governance and policy developments in relation to emerging technologies and social movements\, continues to inform her dedication to advancing meaningful participation in AI and beyond. She chairs the newly founded Global AI Policy Research Network (GlobAIpol)\, is a former Steering Committee member of the Global Internet Governance Academic Network (GIGANET)\, and has published articles on related topics in the Oxford University Press Research Encyclopedia of Communication and in Internet Histories: Digital Technology\, Culture and Society. \nSessions\n> Opening (November 13\, 2025 at 09.00h)\n> Session 1: A year with the Roadmap for AI Policy Research & Network round-table (November 13\, 2025 at 09.15h)\n> Session 7: Next steps for the Global AI Policy Research Network (November 14\, 2025 at 14.30h)\n> Closing Summit (November 14\, 2025 at 15.30h) \nFrank Dignum. Professor at the Department of Computing Science\, Umeå University.\nFrank Dignum is a Professor at the Umeå University Department of Computing Science\, leading a research group in the field of socially conscious AI. The group develops models that can provide insights into how society can respond to political changes or natural disasters. \nSessions\n> Session 2: Rethinking AI safety and sovereignty – regional perspectives (November 13\, 2025 at 11.30h) \nAyantola Alayande. 
Researcher at the Global Center on AI Governance (Online)\nAyantola Alayande is a Researcher at the Global Center on AI Governance\, where he works on issues of international cooperation in AI policymaking and governance\, AI development in low- and middle-income countries (LMICs)\, compute governance\, AI security\, and state-led AI governance in Africa. His broader interests span the geopolitics/geoeconomics of emerging technologies\, global governance\, technology and industrial policy\, Africa in major-power competition\, and digital methods/media. His writings have appeared in several notable research outlets\, including Nature\, the Atlantic Council\, The Productivity Institute\, The Productivity Monitor\, the Bennett Institute for Public Policy\, and Research ICT Africa\, among others. Ayantola holds graduate degrees in public policy and international development from the KDI School of Public Policy and the University of Edinburgh\, respectively\, and is currently a PhD candidate in AI Geopolitics and Governance at the University of Oxford’s Department of International Development (ODID)\, where he is researching approaches to sovereignty in the AI value chain of emerging power nations. He has previously worked in research and consulting roles at the Bennett Institute for Public Policy at the University of Cambridge\, Kantar UK\, Research ICT Africa (RIA)\, and Dataphyte. \nSessions\n> Session 2: Rethinking AI safety and sovereignty – regional perspectives (November 13\, 2025 at 11.30h) \nCarolina Aguerre. Associate Professor\, Universidad Católica del Uruguay. (Online)\nCarolina Aguerre is an Associate Professor at the Universidad Católica del Uruguay and honorary co-director at CETYS\, at Universidad de San Andrés (Argentina). 
Her research interests include theories and practices around the governance of communications technologies and infrastructures\, particularly the Internet and artificial intelligence\, and their intersection with political economy and north-south perspectives. In 2020 she was part of the UNESCO Ad Hoc Expert Working Group on the Recommendation on the Ethics of AI. She has been part of the IGF Multistakeholder Advisory Group (2012-2014 and 2025). She was a resident fellow at the CGR21 (2020-2021)\, University of Duisburg-Essen (Germany). \nSessions\n> Session 2: Rethinking AI safety and sovereignty – regional perspectives (November 13\, 2025 at 11.30h) \nEdward Tsoi. Co-Founder\, AI Safety Asia. (Online)\nEdward Tsoi is the founder of Connecting Myanmar and an experienced leader in technology startups and non-profits. He led the APAC business of a late-stage startup with over $100M raised. He is also a former board member of Amnesty International Hong Kong and an advisor to multiple corporate-NGO initiatives. He is now one of the co-founders of AI Safety Asia. \nSessions\n> Session 2: Rethinking AI safety and sovereignty – regional perspectives (November 13\, 2025 at 11.30h) \nGinevra Castellano. Professor at the Department of Information Technology\, Uppsala University\nGinevra Castellano is a Full Professor in Intelligent Interactive Systems at the Department of Information Technology of Uppsala University\, Sweden\, where she is the Founder and Director of the Uppsala Social Robotics Lab. Her research is in the area of social robotics and human-robot interaction\, addressing questions on how we can build human-robot interactions that are ethical and trustworthy. Her work covers robot ethics\, robot autonomy and human oversight\, gender fairness\, robot transparency and trust\, and human-robot relationship formation\, both from the perspective of developing computational skills for robotic systems and from their evaluation with human users to study acceptance and social consequences. 
She has been the Principal Investigator of several national and EU-funded projects on ethical and trustworthy human-robot interaction\, in application areas spanning education\, healthcare\, and transportation systems. She is currently the coordinator of the CHANSE-NORFACE MICRO (Measuring children’s wellbeing and mental health with social robots) project (2025-2028) and of the WASP-HS Research Group on Child Development in the Age of AI and Social Robots (2025-2030\, funded by the WASP-HS Wallenberg AI\, Autonomous Systems and Software Program – Humanity and Society). Castellano was an invited speaker at the UN AI for Good Global Summit 2024 and a keynote speaker at the World Summit AI 2024. She was recently awarded the Thuréus prize 2025 from the Royal Society of Sciences in Uppsala. \nSessions\n> Session 3: Building alternative narratives (November 13\, 2025 at 14.00h) \nRoel Dobbe. Assistant Professor Sociotechnical AI Systems\, TU Delft\nRoel Dobbe is an Assistant Professor in Technology\, Policy & Management at Delft University of Technology\, focusing on Sociotechnical AI Systems. He received an MSc in Systems & Control from Delft (2010) and a PhD in Electrical Engineering and Computer Sciences from UC Berkeley (2018)\, where he received the Demetri Angelakos Memorial Achievement Award. He was an inaugural postdoc at the AI Now Institute at New York University. His research addresses the integration and implications of algorithmic technologies in societal infrastructure and democratic institutions\, focusing on issues related to safety\, sustainability and justice. His projects are situated in various domains\, including energy systems\, public administration\, and healthcare. Roel’s system-theoretic lens enables addressing the sociotechnical and political nature of algorithmic and artificial intelligence systems across analysis\, engineering design and governance\, with an aim to empower domain experts and affected communities. 
His results have informed various policy initiatives\, including environmental assessments in the European AI Act as well as the development of the algorithm watchdog in the Netherlands. \nSessions\n> Session 3: Building alternative narratives (November 13\, 2025 at 14.00h) \nThe Anh Han. Professor in Computer Science\, Teesside University\nThe Anh Han is a Full Professor of Computer Science and Director of the Center for Digital Innovation at the School of Computing\, Engineering and Digital Technologies\, Teesside University. His current research spans several topics in AI and behavioural research\, including the dynamics of human cooperation\, evolutionary game theory\, agent-based simulations\, behavioural economics\, and AI governance modelling. \nSessions\n> Session 3: Building alternative narratives (November 13\, 2025 at 14.00h) \nSam Bogerd. Technology Foresight Analyst\, Centre for Future Generations\nSam Bogerd bridges foresight and policy\, tackling the governance of advanced technologies. With a focus on innovation and long-term impact\, he turns complex challenges into practical\, future-ready solutions. \nSessions\n> Session 3: Building alternative narratives (November 13\, 2025 at 14.00h) \nMirko Schaefer. Associate Professor Media and Performance Studies\, Utrecht University\nMirko Tobias Schaefer is Associate Professor of AI\, Data & Society at Utrecht University. He leads the master’s programme in Applied Data Science and is the Science Lead at the Data School. Mirko also serves on the Advisory Committee Analytics of the Netherlands Ministry of Finance. \nHis research explores the social impact of datafication\, algorithmic governance\, and the politics of digital infrastructures. With the Data School he investigates how AI and data practices transform public institutions\, and he develops applicable processes for responsible and accountable governance of AI and big data. 
\nTogether with Karin van Es & Tracey Lauriault he edited the volume Collaborative Research in the Datafied Society. Methods and Practices for Investigation and Intervention (Amsterdam University Press 2024). \nSessions\n> Session 4: Governance for innovation (November 13\, 2025 at 16:00h) \nKerstin Bach. Professor of Artificial Intelligence\, Norwegian University of Science and Technology\nKerstin Bach is a professor of Artificial Intelligence at the Norwegian University of Science and Technology (NTNU)\, Director of the Norwegian Open AI Lab\, and is Research Director at the Norwegian Research Center for AI Innovation (NorwAI). She holds a PhD from the University of Hildesheim and worked as a researcher at the German Research Center for AI (DFKI)\, where she developed decision support systems for various industries. After completing her Ph.D.\, Kerstin joined Verdande Technology\, a Trondheim-based AI startup developing real-time case-based reasoning (CBR) technology for the oil and gas\, financial services\, and healthcare industries. At Verdande\, she was both a research scientist and software engineer\, working closely with partners exploring CBR in their technology stack. In 2015\, Kerstin joined NTNU’s computer science department. \nIn recent years\, Kerstin’s research has been primarily focused on crafting AI prototypes tailored for healthcare\, intelligent sensing\, and knowledge management. She managed an EU H2020 research grant\, selfBACK\, whose results are currently being developed as a product for patients with lower back pain. Presently\, Kerstin is steering multiple interdisciplinary projects funded by the Norwegian Research Council and NTNU dedicated to AI-driven and patient-centered healthcare services. \nBeyond her research contributions\, she actively organizes workshops\, conferences\, and symposia that discuss various aspects of AI research. 
Throughout her career\, Kerstin has undertaken responsibilities such as being the driving force behind myCBR\, an open-source tool adopted in research and industry projects across Europe\, and is a board member of the Norwegian AI Society and the German AI Society. Her commitment to advancing AI extends to NTNU\, where she promotes AI research among students and strongly emphasizes encouraging women to pursue technology careers. As an educator\, she imparts her knowledge through AI and Machine Learning courses\, guiding and involving master’s and Ph.D. students. Her role as NorwAI research director finds her at the forefront of collaborative projects between industry and academia. Within this context\, she established FEMAIS\, a mentorship program tailored for aspiring female AI students\, effectively bridging the gap between their final year of studies and the launch of their professional journeys. Kerstin’s commitment to AI outreach also extends to the Norwegian Open AI Lab\, where she organizes events\, gives talks\, and participates in panels and seminars to discuss AI research among professionals and the broader public. \nSessions\n> Session 4: Governance for innovation (November 13\, 2025 at 16:00h) \nLey Muller. Lead of Nordic Women in AI Governance and Research Lead for Women in AI Norway\nLey Muller is a transformational leader with 10 years’ experience in evidence-based policy\, public health\, and AI. She currently leads Nordic Women in AI Governance and is the research lead for Women in AI Norway\, and is very aware of how insufficient a gender-only lens is if AI governance is to properly address marginalized perspectives. She has experience in consulting\, government\, and academia from Norway\, the US\, Germany\, and the WHO – and is very recently (and somewhat proudly) ex-tech. \nSessions\n> Session 4: Governance for innovation (November 13\, 2025 at 16:00h) \nDavid Lewis. 
Associate Professor at the School of Computer Science and Statistics\, Trinity College Dublin\nDave Lewis is an Associate Professor at the School of Computer Science and Statistics at Trinity College Dublin\, where he served as the head of its Artificial Intelligence Discipline. He is the Acting Director of Ireland’s ADAPT Centre for human-centric AI and digital content technology research. He investigates open semantic models for trustworthy AI and data governance and contributes to international standards in digital content processing and trustworthy AI. His research focuses on the use of open semantic models to manage the Data Protection and Data Ethics issues associated with digital content processing. He has led the development of international standards in AI-based linguistic processing of digital content at the W3C and OASIS and is currently active in international standardisation of Trustworthy AI at ISO/IEC JTC1/SC42 and CEN/CENELEC JTC21. \nSessions\n> Session 4: Governance for innovation (November 13\, 2025 at 16:00h) \nTina Comes. Associate Professor in Resilience Engineering\, TU Delft\n \nTina Comes is the Scientific Director of the DLR Institute for Terrestrial Infrastructure Protection in Germany\, and Full Professor in Decision Theory & ICT for Resilience at TU Delft in the Netherlands. Since her PhD\, she has been determined to better understand decision-making of individuals and groups in the context of climate risk and crises. Her work aims at using AI and information technology to support decisions of individuals and groups in complex\, uncertain environments. She integrates behavioural insights with advanced computational approaches—including distributed AI\, multi-agent systems\, optimisation models\, and digital twins. She serves on the Editorial Board of Nature Scientific Reports. 
Her research has received international recognition through awards and fellowships\, and she is a member of Academia Europaea and the Norwegian Academy of Technological Sciences. Internationally\, under the EU’s Scientific Advice Mechanism\, she chaired the Working Group on Strategic Crisis Management in Europe and is now chairing the Working Group for AI in Crisis Management. \nSessions\n> Deep Dive Workshop – Contextualising AI Principles: Universal Guidelines or Domain-Specific Policy? (November 14\, 2025 at 9:30h)\n> Session 6: Building bridges from research to policy (November 14\, 2025 at 13:30) \nNitin Sawhney. Visiting Researcher\, University of the Arts Research Institute Helsinki\nNitin Sawhney is a visiting researcher at the University of the Arts Research Institute. He has a background in computational media\, human-centered design and documentary film. He served as a Professor of Practice in the Department of Computer Science at Aalto University\, leading the Critical AI and Crisis Interrogatives (CRAI-CIS) research group. He completed his doctoral dissertation at the MIT Media Lab\, and taught in the Media Studies program at The New School and the MIT Program in Art\, Culture and Technology (ACT). Working at the intersection of Human Computer Interaction (HCI)\, responsible AI\, and participatory design research\, he examines the critical role of technology\, civic agency\, and social justice in society and crisis contexts. He has co-curated exhibitions and co-directed documentaries in Gaza and Guatemala\, focusing on creative resistance and historical memory in conditions of war and conflict. In October 2024 he co-organized the Contestations.AI Transdisciplinary Symposium on AI\, Human Rights and Warfare in Helsinki. 
He is currently developing a transdisciplinary platform to foster critical dialogues and co-existence through science\, technology\, and the arts\, and conceptualizing a new documentary film project critically examining the role of AI in warfare. \nSessions\n> Deep Dive Workshop – AI in Warfare: Actions\, Policies and Practices (November 14\, 2025 at 9:30h in Panorama @Mondai) \nPetter Ericson. Staff Scientist\, Umeå University\nPetter Ericson is a staff scientist in the research group for Responsible AI\, working on graph problems and formal descriptions of structured data\, with a strong interest in ethics\, music and society. \nSessions\n> Deep Dive Workshop – AI in Warfare: Actions\, Policies and Practices (November 14\, 2025 at 9:30h in Panorama @Mondai) \nJason Tucker. Researcher\, Institute for Futures Studies\, Sweden. Adjunct Associate Professor AI Policy Lab\, Umeå University\nJason Tucker is a researcher at the Institute for Futures Studies and an Adjunct Associate Professor at the AI Policy Lab\, the Department of Computing Science\, Umeå University. He is also a Visiting Research Fellow at AI & Society\, the Department of Technology and Society\, LTH\, Lund University. His research interests include AI and health\, the global political economy of AI\, public policy\, citizenship\, human rights and global governance. He currently leads the research project The Politics of AI and Health: From Snake Oil to Social Good\, funded by WASP-HS. Within this he is particularly interested in developing interdisciplinary approaches to better support policy making on the future role of AI in healthcare. \nPreviously he has worked on law and policy reform\, citizenship and public sector digitalisation\, having done so for the United Nations\, civil society\, industry and in academia. \nSessions\n> Deep Dive Workshop – AI in Health Care (November 14\, 2025 at 9:30h) \nFabian Lorig. 
Associate Senior Lecturer and Associate Professor Computer Science\, Malmö University\n\nFabian Lorig is an Associate Senior Lecturer (Biträdande Lektor) and Associate Professor (Docent) in Computer Science\, with a focus on agent-based modelling\, the use of AI in socio-technical systems\, and the development of simulation-based decision and policy support. His research integrates computational methods with real-world applications in public health\, mobility\, and policy\, with a focus on understanding and addressing the societal implications of AI and on supporting the development of responsible and impactful technologies. He has led and contributed to interdisciplinary research projects designing computational models that enable stakeholders and policy actors to better understand the complex dynamics of social systems and to anticipate the potential consequences of policy interventions and AI technologies. Through participatory approaches and social simulations\, his research facilitates evidence-based decision-making in domains where digital technologies shape societal outcomes. \nSessions\n> Deep Dive Workshop – AI in Health Care (November 14\, 2025 at 9:30h) \nStefan Buijsman. Associate Professor Responsible AI\, TU Delft\nStefan Buijsman studied computer science and philosophy in Leiden and completed his PhD on the philosophy of mathematics at Stockholm University when he was 20. He continued his research on the intersection of philosophy of mathematics and cognitive science at Stockholm University and the Institute for Futures Studies on a research grant from Vetenskapsrådet. \nAside from research\, he engages in popular science writing\, with now three books to his name. The most recent is on AI and its links to philosophy\, under the Dutch title ‘Alsmaar Intelligenter’. From there\, his research focus has switched to the philosophy of AI\, on which he works at TU Delft. 
\nHe is co-founder of the Delft Digital Ethics Centre\, which focuses on the translation of ethical values into design requirements that can be used by engineers\, decision and policy makers\, and regulators. There he works on a broad range of ethical challenges in projects with external stakeholders. His own research focuses mostly on the explainability of AI algorithms. How can we make these algorithms more transparent? What information do we need to use them responsibly in their many applications? He uses philosophical accounts from epistemology and philosophy of science to formulate design requirements on AI tools for these knowledge-related aspects. In collaboration with computer scientists he also aims to develop new tools to improve the explainability of algorithms. \nSessions\n> Deep Dive Workshop – AI in Health Care (November 14\, 2025 at 9:30h) \nAbout the Network\nThe Global AI Policy Research Network (GlobAIpol) organizes this annual event. GlobAIpol is a community of practice that serves as a platform for policy researchers and professionals to advance responsible AI policy research\, evidence-based insights and actionable strategies for stakeholders across academia\, industry\, the public sector\, and civil society. AI policy research has emerged as an essential guide to navigating the complex interplay between technological innovation and societal impact. It ensures that we guide advancements in AI in alignment with ethical\, legal\, and social priorities. \nThe network was established following the inaugural AI Policy Research Summit in Stockholm\, November 2024. The inaugural summit was a joint initiative led and organized by the AI Policy Lab\, Umeå\, Sweden\, and Mila – Quebec AI Institute\, Montreal\, Canada. The summit brought together a community eager to address the need for better synergies between research\, policy and impact to realize responsible\, equitable and sustainable AI for the benefit of all. 
\nA core objective of the GlobAIpol network is to inform global approaches to AI governance by sharing best practices and fostering collaboration on developing AI policy. This includes advancing responsible AI policy research that meets the growing need for governance grounded in ethical\, transparent\, and evidence-based practices to shape inclusive and trustworthy policies. The network takes an interdisciplinary and multistakeholder approach to holistically address the complex challenges and opportunities that arise with these developments. Through these objectives\, the network works for effective knowledge exchange to bridge the gap between AI policy research and practice. \nRead more about the network’s commitments in the Roadmap for AI Policy Research.
URL:https://mondai.tudelftcampus.nl/event/global-ai-policy-research-summit-2025/
LOCATION:Mondai | House of AI – Delft\, Molengraaffsingel 29\, Delft\, 2629 JD
ATTACH;FMTTYPE=image/jpeg:https://mondai.tudelftcampus.nl/wp-content/uploads/sites/7/2025/06/marietjkeynote_sized.jpg
ORGANIZER;CN="Mondai | House of AI":MAILTO:mondai@tudelft.nl
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Amsterdam:20250924T123000
DTEND;TZID=Europe/Amsterdam:20250926T143000
DTSTAMP:20260404T191708
CREATED:20250514T131845Z
LAST-MODIFIED:20250916T083137Z
UID:10000228-1758717000-1758897000@mondai.tudelftcampus.nl
SUMMARY:Both\, Between\, Beyond: Ethics and Epistemology of AI
DESCRIPTION:Mondai | House of AI is happy to host the upcoming workshop on the Ethics and Epistemology of AI: Both\, Between\, Beyond (this event will be held in English) \nGeneral Programme\nWednesday\, September 24th\n12:30 Welcome\n13:00 – 17:30 Keynote and Talks\n17:30 Networking Drinks \nThursday\, September 25th\n10:15 Welcome\n10:30 – 16:00 Keynotes and Talks\n18:00 Dinner \nFriday\, September 26th\n09:45 Welcome\n10:00 Keynote and Talks\n14:30 End \nThis workshop focuses on the interplay between epistemological and ethical questions arising with the use of AI systems. So far\, central epistemologically and ethically relevant aspects pertaining to these technologies have largely been analyzed in isolation. For example\, epistemic limitations of these systems\, such as their opacity\, have been at the center of the epistemological debate but have only been marginally addressed in ethical studies. On the other hand\, issues of responsibility\, fairness\, and privacy\, among others\, have received considerable attention in discussions on the ethics of AI. However\, even though some efforts are present in the literature to bring these two dimensions together (Russo et al.\, 2023; Pozzi and Durán\, 2024)\, more needs to be said to tackle relevant and philosophically interesting issues that fall at their intersection. \nAgainst this background\, this workshop aims to bring together scholars working on topics at the intersection between the ethics and epistemology of AI\, focusing on different philosophical traditions and perspectives. \nProgramme (updated!)\nWednesday\, September 24th\n12:30 – 13:00 Welcome and walk-in\n13:00 – 14:00 Keynote by Claus Beisbart: “AI and Non-Epistemic Values. 
Insights from the Debate on Science and Values”\n14:00 – 14:20 Break\n14:20 – 15:00 Talk by Tuğba Yoldaş: “Virtue Epistemology and Responsible Knowing with Generative AI”\n15:00 – 15:40 Talk by Anna Smajdor and Yael Friedman: “Epistemological and Ethical Dimensions of Synthetic Data for Trustworthy AI”\n15:40 – 16:00 Break\n16:00 – 16:40 Talk by Giacomo Figà Talamanca and Niel Conradie: “The Significance of Vulnerability for being Trustworthy about AI”\nFrom 17:30 Informal gathering and drinks! \nKeynote: AI and Non-Epistemic Values. Insights from the Debate on Science and Values by Claus Beisbart (Bern University\, Switzerland)\nTo what extent do AI systems incorporate non-epistemic values\, such as moral values? And to what degree must they do so? To answer these questions\, I propose examining the debate on values in science. In this debate\, proponents of value-free science have tried to show that science can be kept free of non-epistemic values. Detractors have argued that this is neither possible nor desirable\, mainly drawing on Rudner’s argument. In this talk\, I will first summarize key insights and arguments from the debate on values in science. I’ll then draw consequences for the question of whether AI systems do\, or must\, incorporate tradeoffs between non-epistemic values. A key conclusion is that the answer depends on how AI is used. Some moves made in the debate on values in science can be used to propose uses of AI that minimize the impact of non-epistemic values. However\, it is another question whether such uses are feasible in practice. \nThursday\, September 25th\n10:15 – 10:30 Walk-in\n10:30 – 11:10 Talk by Shaoyu Han: “Accidental Hate Speech and the Extended Mind: Rethinking Epistemic Responsibility in AI-Generated Content”\n11:10 – 11:50 Talk by Johan Largo: 
Human-AI interaction\, epistemic credit and moral responsibility\n12:00 – 13:00 Lunch Break\n13:00 – 13:40 Talk by Hatice Tülün: “The Invisible Third: The Ethical and Epistemic Role of Recommender Systems in Mediated Social Interactions”\n13:40 – 14:20 Talk by Karim Barakat: “Algorithmic Censorship and the Public Sphere”\n14:20 – 14:40 Break\n14:40 – 15:40 Keynote by Eva Schmidt: “Engineering a Concept of AI Trustworthiness as Competence and Character”\n15:40 – 16:00 Coffee\nFrom 18:30 Dinner \nKeynote: Engineering a Concept of AI Trustworthiness as Competence and Character by Eva Schmidt (TU Dortmund\, Germany)\nIn their paper\, the authors critique two prominent views on AI trustworthiness: the skeptical view\, which holds that the concept of trustworthiness cannot be applied to AI systems\, and the reductive view\, which maintains that AI trustworthiness can be reduced to reliability\, competence\, or well-functioning. They contest both views by pointing out that something like good character or goodwill is highly relevant when interacting with AI systems. They then propose that AI systems are trustworthy for a stakeholder just in case they meet two conditions: (1) the system’s goals align with the stakeholder’s goals and (2) the system is competent in pursuing these goals. The concept of AI trustworthiness can\, if conceptualized in this way\, fulfill several theoretical and practical functions that cannot be fulfilled by the concept of reliability alone. This becomes apparent as soon as one takes the issue to be a conceptual engineering problem. \nFriday\, September 26th\n09:45 – 10:00 Walk-in\n10:00 – 10:40 Talk by Joshua Hatherley: “Federation Opacity and the Promise of Federated Learning in Healthcare”\n10:40 – 11:20 Talk by Omkar Chattar: “What Is Phantom Trust? 
Ontology and Epistemology in AI Reliance”\n11:20 – 11:40 Break\n11:40 – 12:20 Talk by Eric Owens: “On Machines and Medical Schools: Machine Learning\, Epistemic Institutions and the Physician”\n12:20 – 13:30 Lunch Break\n13:30 – 14:30 Keynote by Karin Jongsma and Megan Milota: “Ethics\, Epistemology and Praxis: Making AI’s Impact on the Diagnostic Workflow Visible”\n14:30 Goodbye \nKeynote: Ethics\, Epistemology and Praxis: Making AI’s Impact on the Diagnostic Workflow Visible by Karin Jongsma and Megan Milota (UMC Utrecht\, The Netherlands)\nMachine learning and deep learning have proven to be particularly useful for pattern recognition. This may explain why the majority of current AI applications in medicine are used to aid image-based diagnostics in fields like radiology and pathology. In their interactions with these new technologies\, medical professionals will have to renegotiate their position and role in the digital transition; they will also have to critically consider what tasks they are willing to outsource to AI tools and which (new) competencies and expertise medical professionals need to responsibly use these technologies. \nWe conducted ethnographic work and produced an ethnographic film to study the ways in which AI influences the daily work of pathologists. The film shows the skills and knowledge pathologists and lab technicians require when conducting their work and provides a clearer image of the responsibilities they bear. In this session\, we will screen this film and will facilitate an interactive discussion of questions related to the ethics and epistemology of AI in image-driven care. \nWorkshop Organisers\nGiorgia Pozzi\nTU Delft\nChirag Arora\nTU Delft\nJuan M. Durán\nTU Delft\nEmma-Jane Spencer\nErasmus MC & TU Delft
URL:https://mondai.tudelftcampus.nl/event/ethics-epistemology-of-ai/
LOCATION:Mondai | House of AI – Delft\, Molengraaffsingel 29\, Delft\, 2629 JD
ATTACH;FMTTYPE=image/jpeg:https://mondai.tudelftcampus.nl/wp-content/uploads/sites/7/2025/05/EthicsEpistomology_uitgelicht.jpg
ORGANIZER;CN="Mondai | House of AI":MAILTO:mondai@tudelft.nl
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Amsterdam:20250911T153000
DTEND;TZID=Europe/Amsterdam:20250911T190000
DTSTAMP:20260404T191708
CREATED:20250630T085841Z
LAST-MODIFIED:20250820T130805Z
UID:10000237-1757604600-1757617200@mondai.tudelftcampus.nl
SUMMARY:Grand Opening of the House for Robotics and AI\, Data & Digitalisation at TU Delft Campus
DESCRIPTION:All partners in the new house for Robotics and AI\, Data & Digitalisation warmly invite you to the official opening! (the language of this event is Dutch) \nOn Thursday 11 September we officially open the doors of the new house for Robotics and AI\, Data & Digitalisation on the TU Delft Campus. AI & Robotics come together at the Molengraaffsingel. In collaboration with TU Delft Fieldlab DoIoT and the pioneering start-ups MomoMedical\, DuckDuckGoose\, PercivAI\, Dalco Robotics and the Robot Engineers\, Mondai | House of AI and RoboHouse are building a strong ecosystem of research and innovation\, technology and practice. \nOver drinks and snacks\, the opening will be marked by Wouter Kolff (King’s Commissioner\, Province of Zuid-Holland)\, Tim van der Hagen (Rector Magnificus TU Delft)\, and Maaike Zwart (Alderman for Sustainability\, Work & Income & Economy\, Municipality of Delft). \nGet in Touch\nQuestions about the opening? Do get in touch with us! \nRoos-Anne Albers\nProject Coordinator Mondai | House of AI\nOsman Akin\nMarketing & Communications Manager RoboHouse
URL:https://mondai.tudelftcampus.nl/event/grand-opening-ai-robotica/
LOCATION:Mondai & RoboHouse – Molengraaffsingel 29\, Molengraaffsingel 29\, Delft\, 2629 JD\, Netherlands
ATTACH;FMTTYPE=image/jpeg:https://mondai.tudelftcampus.nl/wp-content/uploads/sites/7/2025/05/Home_thuisbasis.jpg
ORGANIZER;CN="Mondai | House of AI":MAILTO:mondai@tudelft.nl
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Amsterdam:20250703T100000
DTEND;TZID=Europe/Amsterdam:20250703T173000
DTSTAMP:20260404T191708
CREATED:20250528T140545Z
LAST-MODIFIED:20250702T122218Z
UID:10000234-1751536800-1751563800@mondai.tudelftcampus.nl
SUMMARY:Delft FinTech Summit 2025: Where Collaboration meets Innovation
DESCRIPTION:(Registrations are closed for this event)\nDelft FinTech Summit 2025: Sparking collaborative innovation \nWe are proud to announce the third edition of the Delft FinTech Summit\, a gathering that celebrates trusted partnerships and sparks new ideas to shape the future of finance with technology innovation. \nJoin leading experts from IBM\, Rabobank\, Robeco\, Erasmus University\, University of Luxembourg and TU Delft to explore how generative AI\, machine learning\, blockchain and privacy-preserving technologies are transforming finance in areas such as trading\, risk management\, knowledge extraction\, identity management\, crime detection and more. \nProgramme\n10:00 – 10:30 Workshop Walk-in\, Registration\, Coffee/Tea\nArrival\, check-in and refreshments \n10:30 – 12:00 Hands-on Workshop: Generative AI in Finance\nLed by: Prof. dr. D.G.J. (Dion) Bongaerts (RSM Erasmus University) & Dr. Avishek Anand (TU Delft)\nA practical session exploring the application of generative AI in financial contexts (max 25 participants). \n12:00 – 12:45 Registration & Networking Lunch\nArrivals and informal networking \n12:45 – 13:00 Opening Remarks\nSpeakers: Prof. dr. A. (Arie) van Deursen & Venkatesh Chandrasekar (TU Delft)\nWelcome\, vision and what’s ahead for FinTech at TU Delft. \n13:00 – 13:30 Keynote: Lighting the Darkness: Synthetic Data Opportunities for FinTech\nSpeaker: Dr. Erik Altman (IBM T.J. Watson Research Center)\nHow synthetic data generation can address key challenges in financial compliance\, risk and innovation. \n13:30 – 14:00 Decentralized Privacy-preserving Robust intelligent Machine Learning for fighting against Financial Crime\nSpeaker: Dr. Zekeriya Erkin (TU Delft)\nBuilding smarter\, safer AI to fight financial crime while protecting privacy and working across borders. 
\n14:00 – 14:30 Machine Learning in buy-side equity trading\nSpeaker: Martin van der Schans (Robeco)\nInsights into how Robeco leverages machine learning for trading strategies and investment research. \n14:30 – 15:00 Coffee Break \n15:00 – 15:30 Keynote: Looking Beyond Blockchain: A Technology in Context\nSpeaker: Prof.dr. Gilbert Fridgen (University of Luxembourg)\nExplores blockchain’s regulatory challenges\, privacy trade-offs and its convergence with digital identities\, zero-knowledge proofs and large language models. \n15:30 – 15:50 Breaking Consensus to Build Trust: Thorough Software Testing of Blockchain Systems \nSpeakers: Wishaal Kanhai and Lucas Witte \n15:50 – 16:20 Convergence: AI for Inclusive Finance\nSpeakers: Prof.dr. S. (Sjoerd) van Bekkum (Erasmus School of Economics) & Dr. Fang Fang (TU Delft)\nExploring how AI can support financial inclusion\, equitable access and resilient financial systems. \n16:20 – 16:50 GenAI Use Cases with FEC: From Idea to Production\nSpeaker: Mart Gombert (Rabobank\, Senior Tech Lead FEC)\nA practical look at deploying Generative AI to enhance financial economic crime investigations at Rabobank. \n17:00 onwards Closing & Networking Drinks\nWrap-up\, informal conversations and opportunities for collaboration.
URL:https://mondai.tudelftcampus.nl/event/delft-fintech-summit-25/
LOCATION:Mondai | House of AI – Delft\, Molengraaffsingel 29\, Delft\, 2629 JD
ATTACH;FMTTYPE=image/jpeg:https://mondai.tudelftcampus.nl/wp-content/uploads/sites/7/2024/04/Mondai-Fintech-Lab-013-scaled.jpg
ORGANIZER;CN="Mondai | House of AI":MAILTO:mondai@tudelft.nl
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Amsterdam:20250612T130000
DTEND;TZID=Europe/Amsterdam:20250612T173000
DTSTAMP:20260404T191708
CREATED:20250401T144629Z
LAST-MODIFIED:20250603T122759Z
UID:10000222-1749733200-1749749400@mondai.tudelftcampus.nl
SUMMARY:AI PhD & Postdoc Spring Symposium
DESCRIPTION:Mondai | House of AI is happy to host the AI PhD & Postdoc Spring Symposium\, together with the TU Delft AI Initiative and the AI PhD Committee! (this event will be held in English) \n\nAre you a PhD or postdoc researcher at TU Delft working on AI-related topics? You are cordially invited!  \n\nWe are excited to announce the next edition of the Spring Symposium dedicated to the AI PhD & postdoc community\, including the yearly Poster Event! Hosted on June 12th (13:00 – 17:30) at Panorama XL@Mondai | House of AI\, the event includes poster pitches\, an interactive panel discussion on interdisciplinary publication\, keynotes from final-year PhDs\, and the ever-important borrel. It’s an excellent opportunity to present your research and network with fellow AI-focused scholars across campus. \nProgramme (preliminary) 13:00 – 17:30 \n\nKeynotes & talks from final-year PhDs working in/with AI at TU Delft\nPanel discussion on interdisciplinary research & publications\nPoster market: You are invited to contribute to this event with your own poster! See poster requirements below. Final deadline for participating with a poster: June 4th\n\nThe afternoon session will end in a casual manner with drinks\, refreshments and an opportunity for networking. More details to be announced\, so keep an eye on this page for more updates on speakers and panellists. 
\nPosters\nResearchers at all stages of their PhD and working in all different areas of AI are welcome: \n\nMachine learning and foundational AI techniques\nHuman-centered AI systems\nApplication of AI\nFairness\, bias\, legal\, and ethical considerations of AI\nEducation and AI\nDesign with AI\nReflexive and critical research on AI\nAnd more…\n\nRequirements \n\nA0 size – printing available via the AI Initiative\nPDF (Portrait mode\, 300 DPI)\nThe reuse of existing A1 or A0 posters is allowed\nSubmissions can be made via the registration form available on this page\n\nRegister & submit your poster by Wednesday\, 4 June. \nAny questions? Please contact the AI PhD Committee at AI-PhD-Committee@tudelft.nl \nSpeakers\nKeynotes & talks from final-year PhDs \nThe Learning Curve: Lessons after 5 Years of Training by Alexander Garzón\nAlexander Garzón is a final-year Ph.D. candidate at the AIdroLab\, working at the intersection of AI and water engineering. Over the past few years\, he has been developing machine learning models—mainly graph neural networks—to help simulate drainage infrastructure more efficiently. In this talk\, he will share some of the highs and lows from his PhD journey\, the tools that made a difference\, and what he wishes he knew when he started. \nUnderstanding 3D Urban Environments from Point Cloud Data by Shenglan Du\n \nShenglan Du is a final-year PhD candidate from the Architecture faculty. She has a background in remote sensing and geo-information science from Wuhan\, China. Her research interests include deep learning for 3D data analysis\, 3D segmentation\, and 3D modelling. \nThe Development of a Reflective & Slow AI Design Practice by Vera van der Burg\nVera van der Burg is a designer and researcher pursuing her PhD in TU Delft’s Designing Intelligence Lab\, where she challenges conventional AI narratives by repositioning these systems as reflective tools rather than mere optimization machines. 
Viewing AI as a material to be disentangled and explored\, she emphasizes annotation and training phases as spaces for designers to examine their own practices and subjectivities. Through publications\, workshops\, talks\, and installations\, she reveals AI’s potential to create productive friction in creative processes\, reimagining human-AI interactions beyond automation. \nDesigning Trustworthy Human-AI Collaboration: The Interdependence and Trust Analysis Framework by Carolina Centeio Jorge\nCarolina Centeio Jorge (pronunciation [kɐɾulˈinɐ] [sẽ tˈɐju] [ʒˈɔɾʒɨ]) is a PhD candidate in the Interactive Intelligence group\, focusing on mental models in the context of human-AI teams. Her goal is to enable interactive and intelligent agents (e.g.\, robots) to understand their human teammates and respond to them transparently and effectively. Specifically\, she has been investigating the concept of artificial trust in decision-making within human-AI teamwork\, particularly in modelling context-dependent human trustworthiness for collaboration scenarios. \n\nPanel on AI-related interdisciplinary research & publishing \n\n\nPanelists \n\n\nThe aim of this panel is to give insights into AI-related interdisciplinary research. It can be challenging to find the right conferences or journals to publish such work\, for example because it is difficult to fit completely within one corner of the research scope. Join us to learn from the experiences of our panelists in finding suitable outlets for publishing AI-related interdisciplinary work\, finding the right narrative styles based on your audience (when facing people from either of two disciplines)\, and related research challenges. 
\nThis panel is moderated by Fatemeh Mostafavi (PhD candidate at the AiDAPT Lab and Faculty of A+BE) and other members of the AI PhD Committee \n\nJie Yang (Faculty of Electrical Engineering\, Mathematics & Computer Science)\nJie Yang is an assistant professor at TU Delft and the manager of the ICAI Lab GENIUS on Generative AI development and usage in large organizations. Before joining TU Delft\, Jie was a scientist at Amazon (Seattle) and a senior researcher at the University of Fribourg (Switzerland). His research interests span computer science\, AI ethics\, and human-computer interaction\, with a focus on developing human-centered computation for robust AI systems\, especially for natural language processing (NLP) systems. His work has received six “best paper” awards or nominations at premier AI and information systems conferences\, including ACM TheWebConf/WWW (both 2022 and 2023)\, AAAI/ACM AIES (2023)\, AAAI HCOMP (2022)\, ACM SIGIR (2024)\, and ACM HT (2017). His work finds application across a wide range of societal domains\, through collaboration with medical centers\, libraries\, and industrial companies\, and is funded through national and international projects. Jie serves as an associate editor for the Journal of Human Computation and Frontiers in Artificial Intelligence\, and regularly serves on the senior program committees of TheWebConf/WWW\, AAAI\, and CIKM. \nCristina Zaga (University of Twente)\nCristina Zaga is an Assistant Professor in the Human-Centred Design group and DesignLab at the University of Twente. Cristina’s research aims to develop methodologies that foster societal transitions towards justice\, care\, and solidarity. She has developed Responsible Futuring\, a transdisciplinary approach to imagining futures worth wanting and fostering more-than-human communities of belonging. She is currently working on approaches to Design for Resistance to contest and re-imagine the future of work and care with robots and AI. 
She leads the JEDAI network\, a transdisciplinary collective working towards mitigating the dehumanizing effects of AI and promoting social and environmental justice. Her work has received many accolades\, including the NWO Science Prize for DEI initiatives (2022)\, the Dutch Higher Education Award (2022)\, and the Google Women Techmakers Award and scholarship (2018). She was selected as one of the top 3 Diversity Leaders in AI in the Netherlands. \nMartin Sand (Faculty of Technology\, Policy & Management)\nMartin Sand works as an Assistant Professor of Ethics and Philosophy of Technology at TPM. He is interested in a broad range of topics\, from technological utopianism and justice to the problems of responsibility and moral luck in innovation. His work falls uncomfortably between philosophical and ethical research\, Science and Technology Studies\, Utopian Studies\, and Technology Assessment. \nSeyran Khademi (Faculty of Architecture and the Built Environment)\nSeyran Khademi is an Assistant Professor at the Faculty of Architecture and the Built Environment (ABE) at TU Delft\, where she also serves as co-director of the AiDAPT Lab. Her interdisciplinary research integrates computer vision into architectural design\, focusing on how data and deep learning can be applied to architectural representations\, including drawings\, renders\, photographs\, 3D models\, and maps. In 2020\, she was awarded a Research-in-Residence Fellowship at the Royal Library of the Netherlands\, where she developed visual recognition tools for the children’s book collection. Prior to that\, in 2017\, she joined the Computer Vision Lab as a postdoctoral researcher on the ArchiMediaL project\, developing computer vision and deep learning methods for the automatic detection of buildings and architectural elements in archival and street-view imagery. Seyran earned her PhD in statistical signal processing and optimization from TU Delft in 2015. 
She continued her postdoctoral work in intelligent audio and speech algorithms before joining the Computer Vision Lab. She holds an MSc in Signal Processing from Chalmers University of Technology in Gothenburg\, Sweden\, awarded in 2010. \nLuciano Cavalcante Siebert (Faculty of Electrical Engineering\, Mathematics & Computer Science)\nLuciano Cavalcante Siebert is an assistant professor at TU Delft’s Faculty of Electrical Engineering\, Mathematics\, and Computer Science (INSY department/Interactive Intelligence group). His research focuses on Responsible Artificial Intelligence. Luciano serves as the director of technology at the Centre for Meaningful Human Control and co-director of the AiBLE lab. His interdisciplinary research aims to develop practical methods to ensure AI remains under meaningful human control. By integrating ethical and human behavior theories\, Luciano proposes interactive approaches that enable agents to elicit and align with human values and norms\, while empowering humans to maintain control and responsibility. \nArkady Zgonnikov (Faculty of Mechanical Engineering)\nArkady Zgonnikov is an interdisciplinary cognitive scientist specializing in cognitive modeling of human behavior in human-robot interactions\, with a particular focus on automated driving. He earned his MSc in Applied Mathematics from Saint Petersburg State University in 2009 and his PhD in Computer Science and Engineering from The University of Aizu in 2014. His early research concentrated on the mathematical modeling of intermittent motor control in human operators. In 2015\, Arkady joined the Department of Psychology at the University of Galway as an Irish Research Council Postdoctoral Fellow\, where he studied the response dynamics of decision making. In 2017\, he returned to The University of Aizu to explore the interplay between decision making and motor behavior. 
In 2019\, he joined the Department of Cognitive Robotics at Delft University of Technology as a postdoctoral researcher and was promoted to assistant professor in 2020. Arkady’s current research aims to understand human cognition in traffic interactions through both mathematical and data-driven modeling. He is deeply concerned with the ethical and societal impacts of robotics and AI technology\, striving to develop concrete methods that empower humans to meaningfully control artificially intelligent systems.
URL:https://mondai.tudelftcampus.nl/event/ai-phd-postdoc-spring-symposium/
LOCATION:Mondai | House of AI – Delft\, Molengraaffsingel 29\, Delft\, 2629 JD
ATTACH;FMTTYPE=image/jpeg:https://mondai.tudelftcampus.nl/wp-content/uploads/sites/7/2025/04/PhDPoster2024_uitgelichteafbeelding.jpg
ORGANIZER;CN="Mondai | House of AI":MAILTO:mondai@tudelft.nl
END:VEVENT
BEGIN:VEVENT
DTSTART;VALUE=DATE:20250415
DTEND;VALUE=DATE:20250417
DTSTAMP:20260404T191709
CREATED:20250115T151135Z
LAST-MODIFIED:20250313T133841Z
UID:10000221-1744675200-1744847999@mondai.tudelftcampus.nl
SUMMARY:AI in Bioscience: Redefining Frontiers
DESCRIPTION:Mondai is pleased to once again host the AI4b.io Symposium\nAI for Bioscience: Redefining Frontiers\nWe are excited to invite you to the fourth AI4b.io Symposium\, where cutting-edge Artificial Intelligence meets the complex challenges of Bioscience. Immerse yourself in two days of inspiring presentations and posters that dive into diverse topics ranging from large-scale factory scheduling to the genetic manipulation of microorganisms. Explore the intersection of AI and bioscience with engaging interdisciplinary discussions across various fields\, such as \n\nScheduling in process industries\nFluid dynamic modeling\nLab automation\nOptimal experimental design\nMicrobiome precision feed\nMetabolic engineering\nMolecular machine learning\nHuman-aware constrained optimization\n\nThis event offers a unique platform to exchange ideas\, present innovative research\, and forge meaningful connections across academia and industry. Whether you’re advancing theoretical frameworks or driving practical applications\, this symposium fosters collaboration to push the frontiers of AI in bioscience. \n(This event will be held in English) \nRegister here!\nConfirmed Speakers\nKaroline Faust\nAssociate Professor in Microbiological Bioinformatics at KU Leuven\nGabriel D Weymouth\nProfessor of Ship Hydromechanics at TU Delft\nVitor A.P. Martins dos Santos\nProfessor of Biomanufacturing and Digital Twins\, Wageningen University & Research\nDaniel Probst\nAssistant Professor\, Wageningen University & Research\n\nCall for Abstracts\nAI4b.io invites AI practitioners\, researchers\, and innovators to share their work through poster and oral presentations. This is a valuable opportunity to showcase your expertise and contribute to the growing field of AI in bioscience! \nTo participate\, please submit your abstract following the required abstract format. Applications are welcome from academia\, established companies\, and start-ups alike. 
\nAbstract Submission Deadline: February 12th\, 2025 \nAbout AI4b.io \nThe Artificial Intelligence Laboratory for Bioscience (AI4b.io) is a collaboration between dsm-firmenich and Delft University of Technology. AI4b.io aims at long-term innovation in the fields of Artificial Intelligence and Bioscience\, developing biobased products and optimizing biobased production technologies. Its mission is to develop a deep understanding of how novel AI technology (methods\, techniques\, theories\, and algorithms) can strengthen the effectiveness and efficiency of relevant research and/or business processes in the biotech industry. \nFor five years\, five PhD researchers will work in the lab. Research topics will include\, among other things\, digital twin and smart plant scheduling\, digital twin for large-scale fermentation\, digital twin of lab automation processes and self-learning platforms\, and machine learning for iterative metabolic engineering.
URL:https://mondai.tudelftcampus.nl/event/ai4bio-redefining-frontiers/
LOCATION:Mondai | House of AI – Delft\, Molengraaffsingel 29\, Delft\, 2629 JD
ORGANIZER;CN="Mondai | House of AI":MAILTO:mondai@tudelft.nl
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Amsterdam:20250312T120000
DTEND;TZID=Europe/Amsterdam:20250312T133000
DTSTAMP:20260404T191709
CREATED:20250129T122909Z
LAST-MODIFIED:20250219T122358Z
UID:10000219-1741780800-1741786200@mondai.tudelftcampus.nl
SUMMARY:TU Delft AI Lunch - Inclusive AI: Approaches to Digital Inclusion
DESCRIPTION:Mondai | House of AI is happy to host the new edition of the TU Delft AI Lunch:\nInclusive AI: Approaches to Digital Inclusion\nRealising inclusive digital systems requires the development of tools and methods that can achieve identified goals. This often takes the form of mitigating negative effects that lead to digital exclusion\, such as biases. However\, it can also incorporate positive values as design requirements\, such as fairness\, contestability\, and accessibility. Ideally\, this leads to digital tools and systems that promote participatory processes and just outcomes. But by what metrics or standards should we evaluate these approaches? Is a fair system necessarily reliable and accurate\, or are trade-offs required? How can ‘digital inclusion’ be operationalised as a design requirement at different levels\, from algorithms to design processes to artifacts and infrastructures? \nWhat do you think about the current approaches to digital inclusion? Join us for an interactive discussion to explore how to make inclusive AI actionable. \nThis event includes a free lunch\, for which registration is required (help us reduce food waste!) \n(This event will be held in English) \nProgramme\n12.00 – 12.30 Walk-in and Lunch\n12.30 – 13.30 Panel Discussion on Inclusive AI with Nazli Cila (IDE)\, Alessandro Bozzon (IDE)\, Kars Alfrink (IDE)\, Emir Demirović (EEMCS)\, Roberto Rocco (ABE)\, Marie-Therese Sekwenz (TPM) \nModerators & Panellists\n\nModerator: Nazli Cila\nAssistant Professor\, Faculty of Industrial Design Engineering\nNazli is an Assistant Professor at the Department of Human-Centered Design\, Faculty of Industrial Design Engineering. Her work combines interaction design with the humanities\, integrating empirical work (e.g.\, experimentation\, future modelling\, and prototyping) with practical and ethical issues surrounding collaborations with agents. 
Nazli is co-director of the AI DeMoS Lab.\n\nModerator: Alessandro Bozzon\nProfessor\, Faculty of Industrial Design Engineering\nAlessandro is Professor of Human-Centered AI and head of the Department of Sustainable Design Engineering\, Faculty of Industrial Design Engineering. His research lies at the intersection of human-computer interaction\, human computation\, user modelling\, and machine learning\, developing methods and tools that support the design\, development\, control\, and operation of AI-enabled systems that are attuned to actual human characteristics\, values\, intentions\, and behaviours. \n\nPanellist: Kars Alfrink\nPostdoc\, Faculty of Industrial Design Engineering\nKars is a researcher in the Department of Sustainable Design Engineering\, focusing on contestable AI. His research investigates how to design public AI systems so that they remain subject to societal control. Before entering academia\, Kars spent over 15 years as an interaction design consultant\, entrepreneur\, and community organizer\, experiences that now shape his research. \n\nPanellist: Emir Demirović\nAssistant Professor\, Faculty of Electrical Engineering\, Mathematics and Computer Science\nEmir is an Assistant Professor in the Algorithmics group. He leads the Constraint Solving (“ConSol”) research group\, is co-director of the Explainable AI in Transportation Lab (“XAIT”) as part of the Delft AI Labs\, and is an ELLIS Scholar. His primary research interest lies in solving complex real-world problems through combinatorial optimisation and its integration with machine learning. \n\nPanellist: Roberto Rocco\nAssociate Professor\, Faculty of Architecture and the Built Environment\nRoberto is an Associate Professor of Spatial Planning and Strategy at the Department of Urbanism. 
He specialises in governance for the built environment and social sustainability\, as well as issues of governance in regional planning and design. This includes issues of spatial justice as a crucial dimension of sustainability transitions.\n\nPanellist: Marie-Therese Sekwenz\nPhD Candidate\, Faculty of Technology\, Policy and Management\nMarie-Therese is a PhD candidate at the Department of Multi-Actor Systems and a member of the AI Futures Lab. In her research she addresses questions of rights and justice\, focusing on content moderation\, platform governance and regulation\, and the design of AI and socio-technical and legal systems. Marie-Therese is also active as a journalist for the Austrian Broadcasting Agency (ORF). \n\nAbout the series Inclusive AI\nInclusivity can be understood as a desirable quality of AI systems\, encompassing a broad range of pressing societal and technical challenges for the responsible development and deployment of AI systems. It manifests in machine learning through concerns related to fairness\, bias\, and trustworthiness; in societal issues currently underrepresented in discourse (e.g.\, feminism\, neurodiversity\, disability studies\, care ethics\, intersectionality\, more-than-human perspectives); and in engineering and robotics application domains such as healthcare\, mobility\, urban AI\, and the future of work. This new series aims to bring together a growing research community on campus to explore these topics and foster interdisciplinary exchange. \nThis lunch is the second in a series of discussions on Inclusive AI throughout 2024-25. Stay tuned for details on upcoming events! \nThe Delft AI (Lab) Lunch series\nThis series is part of the monthly Delft AI (Lab) Lunches\, a recurring meet-up hosted by the TU Delft AI Labs & Talent community at Mondai | House of AI.\nEvery month\, we host a panel to discuss challenges and developments at the intersection of AI and a specific field. 
During these events\, you can participate\, learn\, make connections\, and inspire and be inspired by the Delft AI Community. We invite all interested staff and students from TU Delft to join these sessions. Please contact community manager Charlotte Boelens for more information about this series or the TU Delft AI Labs & Talent Programme.\n \nNote for TU Delft PhDs\nThe TU Delft AI Lunch series is eligible for earning Discipline Related Skills GSC with the ‘Form for earning GSC for TU Delft AI(-related) seminars’. Check with your local Faculty Graduate School (FGS) whether it offers this option for earning Discipline Related Skills GSC\, and with your supervisors whether they accept our seminars on your Doctoral Education (DE) list. If you already have a form\, don’t forget to bring it with you.
URL:https://mondai.tudelftcampus.nl/event/tudelft-ailunch-inclusiveai-3/
LOCATION:Mondai | House of AI – Delft\, Molengraaffsingel 29\, Delft\, 2629 JD
ORGANIZER;CN="Mondai | House of AI":MAILTO:mondai@tudelft.nl
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Amsterdam:20250310T143000
DTEND;TZID=Europe/Amsterdam:20250310T173000
DTSTAMP:20260404T191709
CREATED:20250128T154052Z
LAST-MODIFIED:20250306T142042Z
UID:10000216-1741617000-1741627800@mondai.tudelftcampus.nl
SUMMARY:Climate & AI Workshop #2
DESCRIPTION:Mondai | House of AI is glad to host the second Climate & AI workshop with the Climate Action Programme and the TU Delft | AI Initiative\nOn Monday March 10th (14:30 – 17:30) at Mondai | House of AI\, the Climate Action Programme and the TU Delft AI Initiative are co-organising their second workshop. The event will bring together local researchers working on AI and climate and explore relevant overlaps and related opportunities. The purpose is to (further) delve into generating and shaping innovative ideas together. The programme includes introductions on behalf of both university-wide programmes\, short pitches from researchers\, and breakout groups on 3-5 themes\, and will be concluded with drinks. A minimum of 5 participants per theme is required. \nThe following thematic crossroads will be further defined during this Climate & AI event and relevant opportunities explored: \n\nAI for Sustainable Energy Transition\nUrban Data Analytics\nAI & Earth (Systems)\nSustainable Materials & Manufacturing\nAI for Climate Policy Support\n\n(This event will be held in English) \nProgramme \n14.30 – 14.45 Walk-in and drinks\n14.45 – 15.30 Opening\n15.30 – 16.45 Breakout per theme\n17.00 Drinks & (inter-thematic) networking \nThis event is aimed at (early-career) faculty staff of all TU Delft faculties. Are you a PhD candidate or postdoc interested in these themes? Or are you an interested faculty staff member who can’t join the event on March 10th but wants to stay updated about the follow-up? Get in touch with the organisation team via Charlotte Boelens. \nAI and Climate / Climate and AI at TU Delft \nRead the recap of the first Climate/AI Workshop (3 October 2024) here. \nIn March 2024\, the Climate Action Programme (CAP) dedicated its monthly lecture to ‘AI and Climate’ with a talk on “Machine-learning for understanding atmospheric physics” by Geet George (CEG) and Jing Sun (EEMCS) – recording and presentations available here. 
The CAP Academic Career Trackers of its 17 flagships recently also delved into AI with Angela Meyer (CEG) and AidroLab. A growing number of AI researchers also contribute to climate-related topics. A great example of this was the poster by Damla Akoluk (TPM)\, a PhD candidate from the HIPPO Lab\, who presented her work on aggregation at the Climate Action Festival. Read more about how this inspiring day went here. The last TU Delft AI Lunch of 2023/2024 was themed “The unprecedented environmental impacts of AI: a transdisciplinary discussion” with panellists Benedetta Brevini (New York University)\, Olya Kudina (TPM)\, Fanny Hidvégi (Policy Director at the AI Collaborative) and moderator Roel Dobbe (TPM). \nAre you interested in the intersection of AI & Climate and want to join or learn more about this growing climate/AI community at TU Delft? Join this workshop by registering above or below\, or reach out to the Climate Action Programme (Climate-Action@tudelft.nl) or the AI Initiative (AI-Initiative@tudelft.nl).
URL:https://mondai.tudelftcampus.nl/event/climate-ai-workshop-2/
LOCATION:Mondai | House of AI – Delft\, Molengraaffsingel 29\, Delft\, 2629 JD
ATTACH;FMTTYPE=image/jpeg:https://mondai.tudelftcampus.nl/wp-content/uploads/sites/7/2025/02/ClimateAI_2_uitgelicht.jpg
ORGANIZER;CN="Mondai | House of AI":MAILTO:mondai@tudelft.nl
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Amsterdam:20241211T120000
DTEND;TZID=Europe/Amsterdam:20241211T133000
DTSTAMP:20260404T191709
CREATED:20241111T110257Z
LAST-MODIFIED:20241204T125902Z
UID:10000210-1733918400-1733923800@mondai.tudelftcampus.nl
SUMMARY:TU Delft AI Lunch - Inclusive AI: Caring with and for AI
DESCRIPTION:Mondai | House of AI is happy to host the new edition of the TU Delft AI Lunch:\nInclusive AI: Caring with and for AI\nWhat does it mean to realize care as a design requirement in the deployment of robots\, and more generally in human-machine interactions? And more broadly\, what is required to propel a care-based vision within the design of AI systems? Care can be understood as an ethical framework for responsible technology development\, as well as for engineering education. In practice\, it translates to embracing the principles of interrelation\, co-dependence\, diversity\, and inclusion – combining social\, technological\, and institutional levels of AI development. But how exactly does this translate into engineering and design practice? \nJoin us for an interactive discussion to explore what it means to care with\, and for\, AI\, and to kick off the new series on Inclusive AI! \nThis event includes a free lunch\, for which registration is required (help us reduce food waste!) \nProgramme\n12.00 – 12.30 Walk-in and Lunch\n12.30 – 13.30 Panel Discussion on Inclusive AI with Olya Kudina (TPM)\, Nazli Cila (IDE)\, Laura Marchal-Crespo (ME)\, Sara Colombo (IDE)\, and Arkady Zgonnikov (ME) \nPanellists\n\nOlya Kudina\nAssistant Professor of Ethics & Philosophy of Technology (Faculty of TPM)\nOlya is an interdisciplinary researcher in philosophy/ethics of technology who explores the relation between human values and technologies. Her recent focus has been on AI and democracy in the framework of the AI DeMoS Lab that she founded and co-leads. To anticipate the ethical challenges and opportunities of technologies\, Olya thinks it is essential to combine different academic practices and fields. \n\nNazli Cila\nAssistant Professor\, Faculty of Industrial Design Engineering\nNazli is an Assistant Professor at the Department of Human-Centered Design\, Faculty of Industrial Design Engineering. 
Her work combines interaction design with the humanities\, integrating empirical work (e.g.\, experimentation\, future modelling\, and prototyping) with practical and ethical issues surrounding collaborations with agents. Nazli is co-director of the AI DeMoS Lab.\n\nLaura Marchal-Crespo\nAssociate Professor\, Faculty of Mechanical Engineering\nLaura is an Associate Professor at the Department of Cognitive Robotics\, Faculty of Mechanical Engineering. She is also affiliated with the ARTORG Center for Biomedical Engineering Research\, University of Bern. Laura carries out research in the general areas of human-machine interfaces and biological learning\, and\, specifically\, in the use of robotic assistance and virtual reality to aid people in learning motor tasks and rehabilitating after neurologic injuries. \n\nSara Colombo\nAssistant Professor\, Faculty of Industrial Design Engineering\nSara’s research explores innovative approaches for the ethical design of AI applications and the critical examination of their societal impact. Her work involves engaging communities in envisioning AI futures with an emphasis on inclusivity and a participatory approach.\n\nArkady Zgonnikov\nAssistant Professor\, Faculty of Mechanical Engineering\nArkady’s research bridges the fields of cognitive science and robotics by developing cognitive models of human behaviour in human-robot interactions. He works in collaboration with some of the world’s best researchers in robotics and AI to incorporate cognitive models into the design of autonomous robots and automated driving systems. Arkady is co-director of the newly launched Centre for Meaningful Human Control. \n\nAbout the series Inclusive AI\nInclusivity can be understood as a desirable quality of AI systems\, encompassing a broad range of pressing societal and technical challenges for the responsible development and deployment of AI systems. 
It manifests in machine learning through concerns related to fairness\, bias\, and trustworthiness; in societal issues currently underrepresented in discourse (e.g.\, feminism\, neurodiversity\, disability studies\, care ethics\, intersectionality\, more-than-human perspectives); and in engineering and robotics application domains such as healthcare\, mobility\, urban AI\, and the future of work. This new series aims to bring together a growing research community on campus to explore these topics and foster interdisciplinary exchange. \nThis lunch will be the first in a series of three panel discussions on Inclusive AI throughout 2024-25. Stay tuned for details on the second lunch in spring 2025. \nThe Delft AI (Lab) Lunch series\nThis series is part of the monthly Delft AI (Lab) Lunches\, a recurring meet-up hosted by the TU Delft AI Labs & Talent community at Mondai | House of AI.\nEvery month\, we host a panel to discuss challenges and developments at the intersection of AI and a specific field. During these events\, you can participate\, learn\, make connections\, and inspire and be inspired by the Delft AI Community. We invite all interested staff and students from TU Delft to join these sessions. Please contact community manager Charlotte Boelens for more information about this series or the TU Delft AI Labs & Talent Programme.\n \nNote for TU Delft PhDs\nThe TU Delft AI Lunch series is eligible for earning Discipline Related Skills GSC with the ‘Form for earning GSC for TU Delft AI(-related) seminars’. Check with your local Faculty Graduate School (FGS) whether it offers this option for earning Discipline Related Skills GSC\, and with your supervisors whether they accept our seminars on your Doctoral Education (DE) list. If you already have a form\, don’t forget to bring it with you.
URL:https://mondai.tudelftcampus.nl/event/tudelftailunch-inclusiveai2/
LOCATION:Mondai | House of AI – Delft\, Molengraaffsingel 29\, Delft\, 2629 JD
ORGANIZER;CN="Mondai | House of AI":MAILTO:mondai@tudelft.nl
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Amsterdam:20241126T170000
DTEND;TZID=Europe/Amsterdam:20241126T200000
DTSTAMP:20260404T191709
CREATED:20241009T121605Z
LAST-MODIFIED:20241009T135300Z
UID:10000207-1732640400-1732651200@mondai.tudelftcampus.nl
SUMMARY:NL AIC Diner Pensant - AI-hub Zuid-Holland
DESCRIPTION:The Dutch AI Coalition (NL AIC) and the AI-hub Zuid-Holland are organising this Diner Pensant at Mondai | House of AI at TU Delft. The occasion for the makers & shapers of AI to come together!\nFollowing worldwide examples of discriminatory algorithms\, including the Dutch childcare benefits scandal (toeslagenaffaire) and DUO\, the call for risk-management legislation for AI has grown and been realised through the EU AI Act. In addition to regulation of the impact and ethical aspects of high-risk AI\, it is now also legally required that algorithms be validated on substantive technical grounds. \n\nTheme of the evening:\nHow can we\, together with industry\, lawyers\, scientists\, policymakers\, and practitioners\, ensure that AI development is substantively safeguarded? \n\nA recent review of 15 well-known AI impact assessments shows that mathematical validation is not taken into account at all. On top of that\, according to a recent study by the Algemene Rekenkamer (Netherlands Court of Audit)\, “it is striking that the organisations we examined do not routinely have the expertise needed for risk management of the algorithm.”\nTo date\, there is no implementation of an integral assessment that also takes into account the foundation of AI\, namely mathematics\, even though the law now requires it. \nIn view of this new legislation\, the growing call from industry\, and the responsibility of educators\, Marieke Kootte and Vandana Dwarka of TU Delft use this Diner Pensant to draw attention to the role of mathematical validation as a complement to the existing ethics- and impact-driven assessments. Over a good meal\, we will engage in conversation with AI experts and dinner guests to explore\, from different areas of expertise\, backgrounds\, and disciplines\, how these assessments (validation\, impact\, and ethics) could together offer integral protection against discriminatory algorithms. 
\nProgramme\n16.30 – 17.00 Walk-in and Reception\n17.00 – 17.15 Opening and Introduction\n17.15 – 18.00 Open theme discussion: “Safeguarding substantive AI development”\n18.00 Network Dinner \nNL AIC and the Diner Pensant\nFor the NL AIC\, it is important to gather insights\, challenges\, and suggestions in conversation with the makers & shapers of AI\, so that these can be carried into discussions with policymakers or into the shaping of departmental digital strategies. The 7 regional AI hubs in the Netherlands each organise a Diner Pensant with partners\, entrepreneurs\, governments\, and other stakeholders in the regional AI ecosystem. Alongside the substantive discussions\, there is of course also the opportunity to get to know each other better and explore possibilities for collaboration. \n\nTheme question: How do we jointly safeguard substantive AI development?\nHow can we\, together with industry\, lawyers\, scientists\, policymakers\, and practitioners\, ensure that AI development is substantively safeguarded? \nMany different disciplines are involved in this safeguard: mathematics\, law\, ethics\, computer science\, etc. These disciplines often fail to find each other\, and the role of mathematics is still subordinate in the development of AI. A recent FIOD vacancy at the Belastingdienst (Dutch Tax Administration)\, for example\, asked only for practical experience for the development of high-risk algorithms\, and no knowledge of statistics\, linear algebra\, or numerical mathematics. \nThe pressing question is: how can we unite and safeguard all these aspects? To harmonise these currently still separate areas of expertise\, AI engineers could\, for example\, follow a professional training programme\, just like lawyers\, doctors\, interpreters\, and accountants. There\, not only would the models be mathematically validated and stress-tested\, but engineers would also learn to take responsibility by experiencing the ethical consequences and impact of their actions in practice. 
\nWe would like to reflect on this together with you during this Diner Pensant. We bring together different disciplines and fields of work. Sub-questions we would like to raise include: \n• How do you\, from a practitioner’s perspective\, view the role of educators?\n• What role do we see for each sector in embedding the mathematical safeguard?\n• What concrete resources do we need per sector (technology\, law\, governance\, and education) to give shape to this?\n• What can we learn from other professions that have a major impact on citizens?\n• What practical objections are there to more mathematics on the AI work floor?\n• How do we get an engineer talking to a lawyer? \nRead more\nRecent examples\, such as the childcare benefits scandal (toeslagenaffaire) and the problems at DUO\, have shown that algorithms\, which fall under AI\, can discriminate. To prevent this\, more and more risk-management legislation is being drawn up\, including the European AI Act\, which also applies to the Netherlands. This act now legally requires that\, in addition to the impact and ethical aspects of AI\, high-risk algorithms be validated on substantive technical grounds\, such as the accuracy and robustness of the algorithm itself. \nAlthough we have already seen several impact assessments for AI come along\, there is still no framework for giving substantive meaning to concepts such as the accuracy and robustness of the algorithm. This\, while the law will now demand it and time is pressing. \nA recent review of 15 well-known impact assessments shows that mathematical validation is not taken into account at all. 
On top of that\, a recent study by the Algemene Rekenkamer (Netherlands Court of Audit) observes that “it is striking that the organisations we examined do not routinely have the expertise required for risk management of the algorithm.” \nWhatever the range of expertise that comes together in AI development\, for the substantive validation of algorithms and the interpretation of terms such as ‘accuracy’ and ‘robustness’ we must ultimately turn to mathematicians. This is because often only explicit forms of discrimination\, such as selecting on ethnicity or religion\, are recognisable. To make AI developers more aware of the implicit variants\, knowledge of the appropriate mathematical tests and the assumptions behind them is needed. Partly for this reason\, a covenant (MathMatters) was recently signed in Silicon Valley by all the major AI tech giants\, calling for a solid mathematical foundation among their future employees. This mathematical knowledge can offer inherent protection against discriminatory algorithms\, but in practice it is currently asked of AI developers rarely\, if at all. \nIn view of this new legislation\, the growing call from industry and the responsibility of educators\, this Diner Pensant draws attention to the role of (mathematical) validation as a complement to the existing ethical and impact-driven assessments. Together with various expertises and disciplines\, we want to explore how these assessments (validation\, impact and ethics) could offer integral protection against discriminatory algorithms.
URL:https://mondai.tudelftcampus.nl/event/diner-pensant-nlaic-aihubzh/
LOCATION:Mondai | House of AI – Delft\, Molengraaffsingel 29\, Delft\, 2629 JD
ATTACH;FMTTYPE=image/png:https://mondai.tudelftcampus.nl/wp-content/uploads/sites/7/2024/10/Zuid-HollandAI-kleur.png
ORGANIZER;CN="Mondai | House of AI":MAILTO:mondai@tudelft.nl
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Amsterdam:20241119T140000
DTEND;TZID=Europe/Amsterdam:20241119T170000
DTSTAMP:20260404T191709
CREATED:20240927T102843Z
LAST-MODIFIED:20241114T084859Z
UID:10000201-1732024800-1732035600@mondai.tudelftcampus.nl
SUMMARY:Symposium - Feminist AI: Shaping Ethical Futures
DESCRIPTION:Symposium – Feminist AI: Shaping Ethical Futures\nShaping a more inclusive and ethical AI landscape \nWe are excited to officially launch the Feminist Generative AI Lab through the Symposium “Feminist AI: Shaping Ethical Futures”. Please join the Feminist Generative AI Lab\, Mondai | House of AI and Convergence AI\, D&D on November 19th. \nWhen – Symposium on November 19th 14.00 – 17.00 and PhD workshop on November 20th in the morning \nWhere – Mondai | House of AI\, Next Delft (TU Delft Campus) and online (More info to come) \nWhat – We explore the intersections of artificial intelligence\, ethics\, and feminist approaches to technology in a dialogue about the future of AI through a feminist lens. It is a conversation for all genders about less dominant alternatives\, bridging binary oppositions\, and embracing pluralism and differences in the design and development of AI. Join us to network\, exchange ideas\, explore questions on how to adopt feminist theories as a force of change\, and collaborate across academia and practice for a more inclusive and ethical AI landscape! 
\nProgramme \n13.30 – 14.00 – Arrival and Check-in\n14.00 – 14.10 – Welcome & Introduction\n14.10 – 15.00 – Keynotes by: \nEleanor Drage\, Senior Research Fellow\, Leverhulme Centre for the Future of Intelligence\, University of Cambridge \nAbigail Oppong\, Independent Researcher\, Ghana \n15.00 – 15.15 – Break and refreshments\n15.15 – 16.00 – Keynotes by: \nLaura Forlano\, Professor\, College of Art\, Media and Design (CAMD)\, Northeastern University  \nJoana Varon\, Founder Executive Directress\, Coding Rights; Tech and Human Rights Fellow\, Harvard Kennedy School \n16.00 – 16.15 – Break and refreshments \n16.15 – 17.00 – Panel discussion + Q&A with audience \nPanel Moderator: Sally Wyatt\, Professor of Digital Cultures\, Faculty of Arts and Social Sciences\, Maastricht University  \n17.00 – 18.00 – Closing remarks\, followed by drinks and networking \nEleanor Drage\nBio: \nEleanor is a Senior Research Fellow at the University of Cambridge\, and Co-Director of the AI: Narratives and Justice Programme. She is PI on the HEAT project\, an AI ethics and regulation project that helps companies respond to the EU AI act. She also uses feminist and anti-racist ideas to improve society’s understanding of AI. She is the co-host of the award-winning The Good Robot Podcast\, where she interviews top scholars and technologists about AI ethics. She has also worked with Google DeepMind\, The Financial Times\, The United Nations Data Science & Ethics Groups\, CNN\, BNP Paribas\, The Open Data Institute (ODI)\, and the Institute of Science & Technology. She’s co-editor of The Good Robot: Feminist Voices on the Future of Technology (Feb 2024)\, and Feminist AI: Critical Perspectives on Algorithms\, Data and Intelligent Machines (Oct 2023).  \nKeynote Abstract: \nCan the EU AI Act be Feminist? \nIn this talk\, Eleanor introduces HEAT\, a somewhat anarchic regulation tool that takes a feminist approach to helping companies meet the EU AI Act’s obligations. 
She explains why feminism brings the AI Act to life\, addresses its shortcomings\, and makes the Act meaningful and interesting for technologists. We explore why addressing ‘bias’ isn’t enough\, why ‘diversity in tech teams’ needs to be properly defined and explained\, and why there’s no such thing as an AI ethics expert. \nLaura Forlano\nBio: \nLaura Forlano\, a Fulbright award-winning and National Science Foundation funded scholar\, is a disabled writer\, social scientist and design researcher. She is Professor in the departments of Art + Design and Communication Studies in the College of Arts\, Media\, and Design and Senior Fellow at The Burnes Center for Social Change at Northeastern University. She is the author of Cyborg (with Danya Glabau\, MIT Press 2024) and an editor of three books: Bauhaus Futures (MIT Press 2019)\, digitalSTS (Princeton University Press 2019) and From Social Butterfly to Engaged Citizen (MIT Press 2011). She received her Ph.D. in communications from Columbia University. \nKeynote Abstract: \nIn this keynote talk\, Laura Forlano will draw on her new book\, Cyborg (MIT Press)\, co-authored with Danya Glabau\, to examine whether 21st-century technologies – from smartphones to medical devices to the commonplace use of artificial intelligence – have made cyborgs of us all. The book takes feminist cyborg theory as its starting point to explore the myriad ways that technology traverses our daily lives and practices and to ask: how do social and cultural factors – from gender to race\, class to ability – affect how technologies are imagined\, developed\, put to use\, and\, crucially\, resisted? Forlano and Glabau present an approach called “critical cyborg literacy” that brings together insights from critical feminist\, race\, and disability thinkers in an effort to reframe popular and scholarly conversations around the affordances of cyborg theory and to reimagine the cyborg in light of emerging technologies like automation and AI. 
\nJoana Varon\nJoana Varon is the Executive Directress and Creative Chaos Catalyst at Coding Rights and a researcher affiliated with the Berkman Klein Center for Internet and Society at Harvard University. She is an alumna of the Mozilla Foundation and of the German Institut für Auslandsbeziehungen (IFA). Using creativity and hacker knowledge to disseminate feminist and decolonial approaches to technologies\, she is co-creator of several projects operating in the interplay between activism\, arts and technologies\, such as Tech Cartographies\, The Compost Engineers\, Oracle for Transfeminist Tech\, Musea Mami\, Chupadados – the data sucker\, Safer Nudes\, among others. \nAbigail Oppong\nBio: \nAbigail Oppong is a renowned advocate for AI ethics\, focusing on addressing biases in NLP and health systems and enhancing fairness in AI technologies\, especially for underserved communities. Named among the 100 Women in AI Ethics in 2023\, she collaborates with academia\, industry\, and NGOs to develop responsible AI systems tailored to local needs. Abigail’s expertise in data science and machine learning underpins her efforts to shape ethical AI governance\, particularly in Africa. Her interdisciplinary approach and previous experience in the nonprofit sector enrich her contributions to AI ethics\, emphasizing the importance of localization\, trust\, culture\, and representation in technological development. Her passion for community development influenced her research journey to investigate how local organizations can be empowered in the age of emerging technologies. \nAbstract: \nAn Invisible Lens on AI: Developing Inclusive Technologies for Diverse Communities \nAddressing gender bias in AI systems\, particularly for low-resourced African languages and the continent’s rich cultural diversity\, remains a challenge. 
In this talk\, I will explore how using methods like informal sessions\, participant observation\, digital content analysis\, and AI model character analysis could help mitigate these biases. The insights gained from this research extend to assessing the current landscape of AI technologies for marginalized communities in sectors such as health. Emphasizing a more feminist and community-centered approach\, this talk will highlight the importance of designing technologies that truly serve local needs\, drawing on case studies that reveal the power dynamics influencing AI development across various stakeholders in low-resourced settings and pointing out the ethical implications for sustainable impact. \nPhD workshop\nNovember 20th (morning) at Mondai | House of AI\, Next Delft at TU Delft campus \nThe PhD workshop investigates feminist approaches to generative AI by bringing together PhDs and other scholars from various fields to engage in cross- and inter-disciplinary discussions. Participants will have the opportunity to share their ideas and gain valuable feedback on their projects. More info to come! \nRegister here for the PhD Workshop\nLab Directors\nDr. Sara Colombo (Co-Director)\, Assistant Professor in Designing responsible AI at the Faculty of Industrial Design Engineering\, TU Delft. Dr. Colombo’s research explores innovative approaches for the ethical design of AI applications and the critical examination of their societal impact. Her work involves engaging communities in envisioning AI futures with an emphasis on inclusivity and a participatory approach. \nDr. Francisca Grommé (Co-Director)\, Assistant Professor in Digitalisation in Work and Society at the EUR department of Sociology\, and AIPact TopTalent Research fellow. As an ‘ethnographer of data and AI’ she follows these technologies across different domains of work to understand how they affect marginalized groups\, social justice\, governance arrangements and the quality of work. 
\nThe Feminist Generative AI Lab is an initiative of TU Delft and Erasmus University Rotterdam and is funded by Convergence. For more information on the programme or the lab please check Convergence AI\, D&D or Mondai | House of AI.
URL:https://mondai.tudelftcampus.nl/event/symposium-feminist-ai-shaping-ethical-futures/
LOCATION:Mondai | House of AI – Delft\, Molengraaffsingel 29\, Delft\, 2629 JD
ATTACH;FMTTYPE=image/jpeg:https://mondai.tudelftcampus.nl/wp-content/uploads/sites/7/2024/09/Feminist-AI_Pic_V2logos-scaled.jpg
ORGANIZER;CN="Mondai | House of AI":MAILTO:mondai@tudelft.nl
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Amsterdam:20241008T120000
DTEND;TZID=Europe/Amsterdam:20241008T133000
DTSTAMP:20260404T191709
CREATED:20240827T080121Z
LAST-MODIFIED:20241007T122716Z
UID:10000196-1728388800-1728394200@mondai.tudelftcampus.nl
SUMMARY:Seminar Series on Meaningful Human-AI Interactions for a Digital Society - #4: Meaningful human-AI Interactions in Healthcare
DESCRIPTION:A Seminar Series on Meaningful Human-AI Interactions for a Digital Society\nInterested in how to design AI to be meaningful\, transparent\, and ethical? Then join us for the continuation of our Meaningful Human-AI Interaction event series! Building off our previous events\, we will explore how to define meaningful human-AI interactions and deploy AI responsibly with a group of interdisciplinary experts. \nEvent #4: Meaningful Human-AI Interactions in Healthcare \nEngage in a discussion around how AI can support healthcare. Panelists Ujwal Gadiraju (TU Delft)\, Asra Aslam (University of Leeds)\, and Rik Wehrens (Erasmus University) will tackle the challenges of designing and deploying AI to meaningfully support healthcare workers and their patients.
URL:https://mondai.tudelftcampus.nl/event/seminar-series-on-meaningful-human-ai-interactions-for-a-digital-society-event4/
LOCATION:Social Data Lab at EEMCS 28\, Van Mourik Broekmanweg 6\, Delft
ORGANIZER;CN="Mondai | House of AI":MAILTO:mondai@tudelft.nl
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Amsterdam:20241003T143000
DTEND;TZID=Europe/Amsterdam:20241003T170000
DTSTAMP:20260404T191709
CREATED:20240815T125222Z
LAST-MODIFIED:20241003T112444Z
UID:10000193-1727965800-1727974800@mondai.tudelftcampus.nl
SUMMARY:Climate & AI Workshop Event
DESCRIPTION:On October 3rd 14:30 – 17:00 at Mondai | House of AI\, TU Delft will host the first joint event between the Climate Action Programme and the AI Initiative. The event will bring together local researchers working on AI- and climate-related domains to explore relevant overlaps and related opportunities. The purpose is to (further) delve into generating and shaping innovative ideas together. The programme includes introductions from the Pro Vice Rectors Herman Russchenberg (Climate) and Geert-Jan Houben (AI)\, short pitches from researchers\, breakout groups on 3 themes\, and will be concluded with drinks. \nThe following thematic crossroads will be further defined during this Climate & AI event and relevant opportunities explored: \n\nAI & Water and Infrastructure\nAI & Weather and Climate Risks\n(Green) AI & Sustainability\n\nThese previously announced themes will be on the programme of a secondary Climate & AI event (date to be announced): \n\nAI & Finance and Governance\nAI & City Development and Urban Mobility\n\nProgramme\n14:30 – 14:45 Walk-in and drinks\n14:45 – 15:30 Opening with introduction by Geert-Jan Houben & Herman Russchenberg\n15:30 – 16:30 Breakout per theme\n16:30 – 17:00 Plenary closing with call to action\n17:00 – 18:00 Drinks & (interthematic) networking \nThis event is aimed at (early/earlier career) faculty staff of all TU Delft faculties. Are you a PhD candidate or postdoc interested in these themes? Or are you an interested faculty staff member who can’t join the event on October 3rd but wants to stay updated about follow-up? Get in touch with the organisation team via Charlotte Boelens. \nAI and Climate / Climate and AI at TU Delft\nIn March 2024\, the Climate Action Programme (CAP) dedicated its monthly lecture to ‘AI and Climate’ with a talk on “Machine-learning for understanding atmospheric physics” by Geet George (CEG) and Jing Sun (EEMCS) – recording and presentations available here. 
The CAP Academic Career Trackers of its 17 flagships recently also delved into AI with Angela Meyer (CEG) and AidroLab. A growing number of AI researchers also contribute to climate-related topics. A great example of this was the poster by Damla Akoluk (TPM)\, a PhD candidate from the HIPPO Lab\, who presented her work on aggregation at the Climate Action Festival. Read more about how this inspiring day went here. The last TU Delft AI Lunch of 2023/2024 was themed “The unprecedented environmental impacts of AI: a transdisciplinary discussion” with panellists Benedetta Brevini (New York University)\, Olya Kudina (TPM)\, Fanny Hidvégi (Policy Director at the AI Collaborative) and moderator Roel Dobbe (TPM). \nAre you interested in the intersection of AI & Climate and do you want to join or learn more about this growing climate/AI community at TU Delft? Join this workshop by registering above or below\, or reach out to the Climate Action Programme (Climate-Action@tudelft.nl) or the AI Initiative (AI-Initiative@tudelft.nl).
URL:https://mondai.tudelftcampus.nl/event/climate-ai-workshop/
LOCATION:Mondai | House of AI – Delft\, Molengraaffsingel 29\, Delft\, 2629 JD
ORGANIZER;CN="Mondai | House of AI":MAILTO:mondai@tudelft.nl
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Amsterdam:20241001T090000
DTEND;TZID=Europe/Amsterdam:20241001T180000
DTSTAMP:20260404T191709
CREATED:20240904T054106Z
LAST-MODIFIED:20241001T074045Z
UID:10000198-1727773200-1727805600@mondai.tudelftcampus.nl
SUMMARY:Symposium and Opening Centre for Meaningful Human Control
DESCRIPTION:Mondai | House of AI is glad to host the Symposium and Opening of the new Centre for Meaningful Human Control at TU Delft!\nHosted in collaboration with the Centre for Meaningful Human Control and the TU Delft AI Initiative. \nAs AI algorithms rapidly revolutionize various sectors – from healthcare through public services to transportation – they also raise concerns over their potential to spiral beyond human control and responsibility. In 2018\, TU Delft launched the AiTech initiative to address the transdisciplinary challenges related to designing systems under Meaningful Human Control. Concurrently\, the project “Meaningful Human Control over Automated Driving Systems” (2017-2021) further developed and operationalised this concept in the context of driving automation. These and other initiatives have produced impactful research\, fostered community building – both nationally and internationally – and influenced key policy documents by Dutch and EU authorities. \nIn this event we invite you to celebrate with us two recent milestones from our MHC community: the release of the first Research Handbook on Meaningful Human Control and the launch of the Centre for Meaningful Human Control. \nRegister here!\nThis event is a unique opportunity to engage with leading academics and practitioners in the field! Exchange perspectives – philosophical\, legal\, and technical – on the challenges and approaches towards keeping meaningful human control over technology. 
\nProgramme\n08.30 – 09.00 Walk-in and Registration\n09.00 – 10.15 Welcome by David Abbink (Scientific Director CMHC) and Luciano Cavalcante Siebert (Co-Director CMHC)\, interactive session by Nazli Cila and Deborah Forster (Research Team CMHC)\, keynote by Carles Sierra (Artificial Intelligence Research Institute\, Spanish National Research Council)\n10.15 – 10.40 Break and Handbook Gallery\n10.40 – 12.30 Keynotes by Johannes Himmelreich (Syracuse University) and Tanya Krupiy (Newcastle University)\, and a Panel Discussion with Authors of the Handbook\n12.30 – 13.30 Lunch\n13.30 – 14.25 Welcome by Arkady Zgonnikov (Co-Director CMHC)\, keynotes by David Abbink (Scientific Director CMHC) and Kim van Sparrentak (European Parliament)\n14.25 – 14.50 Break\n14.50 – 16.40 Keynotes by Barbara Holder (Embry-Riddle Aeronautical University) and Ilse Harms (Dutch Vehicle Authority)\, and an interactive session on bringing MHC research to practice\n16.40 – 17.00 Official Launch of the Centre for Meaningful Human Control\n17.00 Celebratory Drinks! \nMorning Programme: Academic Challenges of Meaningful Human Control\nExciting keynotes provide an integrated overview from various academic perspectives – ethical\, legal\, design and engineering. And an interactive session with the authors and editors provides a chance to deep-dive into the Handbook on Meaningful Human Control. \nOn the Engineering of Social Values - Carles Sierra (Artificial Intelligence Research Institute)\nOn the Engineering of Social Values\nBy: Carles Sierra (Artificial Intelligence Research Institute) \nEthics in Artificial Intelligence is a wide-ranging field encompassing many open questions regarding the moral\, legal and technical issues of using and designing ethically-compliant autonomous agents. Under this umbrella\, the computational ethics area is concerned with formulating and codifying ethical principles into software components. 
In this talk\, I will look at a particular problem in computational ethics: engineering moral values into autonomous agents. I will focus on the essential role of human communities in defining social values and their associated norm-based social contracts. \n \nCarles Sierra is the Director of the Artificial Intelligence Research Institute (IIIA) of the Spanish National Research Council (CSIC) located in Barcelona. He is the President of EurAI\, the European Association of Artificial Intelligence. He has been contributing to Artificial Intelligence research since 1985 in the areas of Knowledge Representation\, Auctions\, Electronic Institutions\, Autonomous Agents\, Multiagent Systems and Agreement Technologies. He is a Fellow of the European Association of AI\, EurAI\, and recipient of the ACM/SIGAI Autonomous Agents Research Award 2019. \nFor Knowledge and Commitment — Or: What’s the Point of Meaningful Human Control? - Johannes Himmelreich (Syracuse University)\nFor Knowledge and Commitment — Or: What’s the Point of Meaningful Human Control?\nBy: Johannes Himmelreich (Syracuse University) \nJohannes suggests that Meaningful Human Control (MHC) may be missing the point. Theories of MHC typically concentrate on intention\, intervention\, and action; and the theories seek to warrant moral responsibility and avoid harms. That\, he takes it\, is the point of MHC. But this typical focus on intention\, action\, and intervention misses this point\, or so he argues. To ensure responsibility\, knowledge matters more than intention or intervention. And to avoid certain outcomes\, a commitment to refrain from acting may matter more than maintaining human control. In fact\, with partially superhuman Artificial Intelligence we need both. We need to *know* when AI outperforms humans to then *commit* to defer to AI. This often avoids harmful outcomes without undermining moral responsibility. \nJohannes Himmelreich is a philosopher who teaches and works in a policy school. 
He is an Assistant Professor in Public Administration and International Affairs in the Maxwell School at Syracuse University. He works in the areas of political philosophy\, applied ethics\, and philosophy of science. Currently\, he researches the ethical quandaries that data scientists face\, how the government should use AI\, and how to check for algorithmic fairness under uncertainty. He published papers on “Responsibility for Killer Robots\,” the trolley problem and the ethics of self-driving cars\, as well as on the role of embodiment in virtual reality. He holds a PhD in Philosophy from the London School of Economics (LSE). Prior to joining Syracuse\, he was a post-doctoral fellow at Humboldt University in Berlin and at the McCoy Family Center for Ethics in Society at Stanford University. During his time in Silicon Valley\, he consulted on tech ethics for Fortune 500 companies\, and taught ethics at Apple. \nGovernance of the Human-AI Coupling - Tanya Krupiy (Newcastle University)\n\nGovernance of the Human-AI Coupling\nBy: Tanya Krupiy (Newcastle University) \nJuliane Beck and Thomas Burri discuss the fact that the debate over what constitutes meaningful human control over artificial intelligence systems has been largely confined to the context of military applications of artificial intelligence. Scholars\, such as Jonathan Kwik and Frank Flemisch et al\, have proposed various definitions for the term meaningful human control in the context of the use of artificial intelligence systems. Article 14 of the Artificial Intelligence Act 2024 gives effect to the aspiration to have human oversight over the operation of high-risk artificial intelligence systems. 
This presentation will examine what duties the Netherlands has under the Convention on the Elimination of All Forms of Discrimination against Women (CEDAW) in regard to governing human-artificial intelligence coupling in the context of organisations employing artificial intelligence as part of the decision-making process. It will conclude that the Netherlands needs to enact legislation in order to comply with its international human rights law obligations under CEDAW. This stems from the fact that there is a tension between obligations flowing from CEDAW and the Artificial Intelligence Act 2024. \n\n \nTanya Krupiy is a lecturer in digital law\, policy and society at Newcastle University. She has expertise in international human rights law\, international humanitarian law and international criminal law. Tanya holds a Master of Laws with distinction in public international law from the London School of Economics and Political Science. She gained a Doctor of Philosophy in law from the University of Essex. She received funding from the Social Sciences and Humanities Research Council of Canada to carry out a postdoctoral fellowship at McGill University in Canada. Thereafter\, she was a postdoctoral fellow at Tilburg University. Tanya’s research appears in Oxford University Press\, University of Melbourne and Brill publications\, among others. \n  \nAfternoon Programme: Pragmatic Challenges of Meaningful Human Control\nIn the afternoon we focus on the practical challenges of meaningful human control. Via talks and interactive sessions around real-world challenges from international experts we take a closer look at sectors like the aviation and automotive industries\, and Dutch and European policy making. We also take the time to explore how you might interact with the Centre’s network and expertise. \nAt the end of the day we also officially launch and celebrate our Centre for Meaningful Human Control! 
\nTech Regulation: the way towards Ethical AI - Kim van Sparrentak (European Parliament)\n\nTech Regulation: the way towards Ethical AI\nBy: Kim van Sparrentak (European Parliament) \n\n \nKim van Sparrentak is a Dutch politician who has been serving as a Member of the European Parliament for the GroenLinks political party since 2019. She co-wrote legislation that limited the influence of major tech companies and that granted municipalities greater discretion in regulating which properties can be rented out for short-term homestays through platforms such as Airbnb. Van Sparrentak was re-elected in June 2024 as the fourth candidate on the shared GroenLinks–PvdA list\, which received a plurality in the Netherlands\, winning eight seats. \n  \nWho’s flying this thing?! Considerations for a shared Human-Automation Future - Barbara Holder (Embry-Riddle Aeronautical University)\nWho’s flying this thing?! Considerations for a shared Human-Automation Future\nBy: Barbara Holder (Embry-Riddle Aeronautical University) \n \nBarbara Holder is Associate Professor and Presidential Fellow in the College of Aviation at Embry-Riddle Aeronautical University (ERAU). Before joining ERAU in November 2021\, Holder had worked since 2015 as a fellow in Advanced Technology at Honeywell Aerospace\, where she studied human-machine issues across a wide range of aircraft. Earlier\, she spent 15 years with The Boeing Company. There\, she was an associate technical fellow and lead scientist of the Flight Deck Concept Center. Holder was a post-doctoral research fellow at NASA Ames Research Center\, where she investigated how pilots come to understand the auto-flight system of the Airbus A320 while flying the line. \nHolder is chair of the Human Factors Subcommittee to the U.S. Federal Aviation Administration’s (FAA) Research\, Engineering and Development Advisory Committee. She is also a member of the FAA’s Air Carrier Training Aviation Rulemaking Committee’s Flight Path Management Working Group. 
She is a fellow of the Royal Aeronautical Society. She has nine patents and multiple scholarly publications. \nThe Practical Challenges of Interacting with Automated Cars - Ilse Harms (Dutch Vehicle Authority)\nThe Practical Challenges of Interacting with Automated Cars\nBy: Ilse Harms (Dutch Vehicle Authority) \nDriving a car is a complex and dynamic task. It entails the execution of tasks varying from motor-executive tasks\, such as keeping the car within its lane\, to more cognitive tasks\, such as understanding the driving environment to decide whether it is safe to overtake\, to keeping track of which exit to take. These days\, in-vehicle systems are increasingly assisting with\, or taking over\, part of the driving tasks. Even up to the point that humans feel the car is actually driving itself. Under specific conditions\, some cars actually can take over full control of the driving task. This interplay between the human driver and the machine driver has design implications for the vehicle\, which need to be assessed in vehicle type approval. Drawing on her work at the Dutch Vehicle Authority\, Ilse will share with you some of the practical challenges related to human control in the context of assisted and automated driving. \n \n“Human Factors is an integral part of mobility.” This conviction is also the recurring theme in Ilse Harms’s career. Ilse is a traffic psychologist who enjoys working at the intersection of theory and practice. She conducted her PhD research at the University of Groningen while working for the Dutch government. \nCurrently she works at RDW – the Dutch Vehicle Authority – where she is a leading figure in the field of human factors and vehicle automation. Furthermore\, Ilse has successfully worked to get the topic of human factors into Euro NCAP’s Vision 2030. At Euro NCAP she is both the alternate director for the Netherlands and the Chair of the HMI & Human Factors Working Group. 
We are very excited to celebrate the official launch of the Centre for Meaningful Human Control over systems with autonomous capabilities with you. The mission of the Centre is to connect academics and practitioners to better conceptualise\, design\, implement\, and assess systems under meaningful human control. We strive to be a lighthouse for collaboration among multiple stakeholders: to leverage interdisciplinary expertise\, existing initiatives at TU Delft\, and an international network of collaborators at the forefront of research and practice on meaningful human control. \nRegister here!
URL:https://mondai.tudelftcampus.nl/event/symposium-and-opening-centre-of-meaningful-human-control/
LOCATION:Mondai | House of AI – Delft\, Molengraaffsingel 29\, Delft\, 2629 JD
ATTACH;FMTTYPE=image/jpeg:https://mondai.tudelftcampus.nl/wp-content/uploads/sites/7/2024/09/MHCC_mobile.jpg
ORGANIZER;CN="Mondai | House of AI":MAILTO:mondai@tudelft.nl
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Amsterdam:20240919T103000
DTEND;TZID=Europe/Amsterdam:20240919T160000
DTSTAMP:20260404T191709
CREATED:20240815T081821Z
LAST-MODIFIED:20240909T122243Z
UID:10000190-1726741800-1726761600@mondai.tudelftcampus.nl
SUMMARY:Legal Design Futures: Co-creating the New Center for Law\, Design & AI
DESCRIPTION:Mondai | House of AI is pleased to host the kick-off of the new Convergence Center for Law\, Design & AI!\nThe Convergence Center for Law\, Design & Artificial Intelligence is an initiative by Erasmus School of Law and TU Delft. Our mission is to explore areas where design methods and technology can be used to develop innovations in legal systems and processes. \nWe aim to do this by bringing together subject-matter experts from across disciplines to create ideas\, strategies and solutions in a collaborative\, human-centered and participatory way. \nBy examining how AI can be harnessed to democratise legal expertise\, practices\, and discourse\, our goal is to provide valuable insights into effective AI applications within legal design. Our center seeks to serve as a hub for thought leadership\, research\, and co-creation\, where the future of law\, design\, and AI is shaped with a shared commitment to positive societal impact. \nSpeakers: Sascha van Schendel\, Matthijs van Dijk and Arnoud Engelfriet \n \nProgramme: ‘Law\, Design & AI’\nThis center represents a collaborative effort between TU Delft and Erasmus University. We promise a day filled with insights & discussions on the intersection of law\, design and artificial intelligence. \n10:00 Walk in \n10:30 Opening and Introduction \n11:00 Perspectives of the Center \n12:30 Lunch \n13:30 Defining the Priorities of the Center \nPractical sessions on:\nBuilding Networks\nDeveloping Research Methods\nLearning from Cases/Practice \n15:30 Discussion and Closing \n16:00 Drinks
URL:https://mondai.tudelftcampus.nl/event/legal-design-futures-co-creating-for-law-design-ai/
LOCATION:Mondai | House of AI – Delft\, Molengraaffsingel 29\, Delft\, 2629 JD
ORGANIZER;CN="Mondai | House of AI":MAILTO:mondai@tudelft.nl
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Amsterdam:20240903T120000
DTEND;TZID=Europe/Amsterdam:20240903T133000
DTSTAMP:20260404T191709
CREATED:20240827T074634Z
LAST-MODIFIED:20240829T130226Z
UID:10000192-1725364800-1725370200@mondai.tudelftcampus.nl
SUMMARY:Seminar Series on Meaningful Human-AI Interactions for a Digital Society - #3: Defining “Meaningful” Human-AI Interactions for a Digital Society
DESCRIPTION:A Seminar Series on Meaningful Human-AI Interactions for a Digital Society\nInterested in how to design AI to be meaningful\, transparent\, and ethical? Then join us for the continuation of our Meaningful Human-AI Interaction event series! Building on our previous events\, we will explore how to define meaningful human-AI interactions and deploy AI responsibly with a group of interdisciplinary experts. \nEvent #3: Defining “Meaningful” Human-AI Interactions for a Digital Society \nExplore how to define and promote meaningful human-AI interactions with Marc Steen (TNO). Marc will present an “Extended Error Matrix” that places AI in its societal context and can help us define meaning and promote interdisciplinary collaboration towards responsible AI. The session will be moderated by Stefan Buijsman (TU Delft) and Birna van Riemsdijk (University of Twente). \nSpeaker\nMarc Steen\, Senior Research Scientist at TNO: Responsible Innovation \nTalk: We need better images of AI and better conversations about AI\nThis presentation offers a critique of the ways in which the people involved in the development and application of AI systems (and indeed journalists and the general public) often visualize and talk about AI systems. Often\, they visualize such systems as shiny humanoid robots or as free-floating electronic brains. Such images convey misleading messages\, as if AI works independently of people and can reason in ways superior to people. Instead\, we propose to visualize AI systems as parts of larger\, sociotechnical systems. Here\, we can learn\, for example\, from cybernetics. Similarly\, we propose that the people involved in the design and deployment of an algorithm would need to extend their conversations beyond the four boxes of the Error Matrix\, for example\, to critically discuss false positives and false negatives. We present two thought experiments\, with one practical example in each. 
We propose to understand\, visualize\, and talk about AI systems in relation to a larger\, complex reality. We also propose to enable people from diverse disciplines to collaborate around boundary objects\, for example: a drawing of an AI system in its sociotechnical context; or an ‘extended’ Error Matrix. Such interventions can promote meaningful human control\, transparency\, and fairness in the design and deployment of AI systems.
URL:https://mondai.tudelftcampus.nl/event/seminar-series-meaningful-human-ai-interactions-for-a-digital-society-event3/
LOCATION:Online
ORGANIZER;CN="Mondai | House of AI":MAILTO:mondai@tudelft.nl
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Amsterdam:20240828T140000
DTEND;TZID=Europe/Amsterdam:20240828T173000
DTSTAMP:20260404T191709
CREATED:20240711T134410Z
LAST-MODIFIED:20240819T150352Z
UID:10000188-1724853600-1724866200@mondai.tudelftcampus.nl
SUMMARY:Meet the Brightest Minds in AI in Engineering from Top Universities in Europe
DESCRIPTION:Meet the brightest minds in AI in Engineering from top universities in Europe! \n\nLearn about the latest research in Artificial Intelligence for Engineering\nGet inspired by key corporate players on the promises\, opportunities and challenges of Artificial Intelligence for Engineering\nMeet talent through inspiring pitches\, engaging poster sessions & network drinks\n\nThis event is organised by Mondai\, the ‘House of AI’ on the Campus of Delft University of Technology (TU Delft)\, together with our partners FME\, AI Hub South Holland and NLAIC\, as part of the IDEA League Summer School’s “Artificial Intelligence for Engineering Applications” led by professor Andrea Coraddu. \nThe event brings together some of the brightest minds in Europe from IDEA League universities. It is a great opportunity to learn about the latest developments in Artificial Intelligence for Engineering Applications\, meet upcoming European AI talent\, and hear from start-ups and corporates about their views and cutting-edge applications in Artificial Intelligence for Engineering. \nProgramme\n13.45 – 14.15 Walk-in\n14.15 – 15.35 Introduction and talk by prof. Andrea Coraddu (TU Delft)\, talk by prof. Luca Oneto (Università di Genova)\, Giacomo Lastrucci (TU Delft) and PhD pitches\n15.35 – 16.30 Industry presentations by Matthieu Worm (Siemens)\, Jurgen Bastiaansen (Festo)\, and PercivAI\n16.30 Drinks and poster session! \nAnd our already exciting programme has become even more interesting! With an extra pitch by PercivAI\, a startup with expertise in autonomous systems\, and a talk by TU Delft talent Giacomo Lastrucci\, who will tell us more about AI for chemical process engineering. \nSpeakers \nLuca Oneto\, Associate Professor in Computer Engineering at Università di Genova \nTrustworthy AI for Industrial Applications\nThe integration of Artificial Intelligence (AI) in industrial applications promises enhanced efficiency\, precision\, and innovation. 
However\, ensuring the trustworthiness of AI systems is paramount to their successful adoption and long-term viability. This seminar will explore the critical dimensions of trustworthy AI\, emphasizing fairness to prevent biases that could compromise ethical standards and operational integrity. Privacy will be examined to safeguard sensitive industrial data against breaches and misuse. Robustness will be highlighted to ensure AI systems maintain performance under diverse conditions. The importance of explainability will be discussed to facilitate transparency and accountability in AI-driven decisions. Additionally\, the seminar will delve into the concept of physics-informed AI\, which integrates physical laws into AI models to improve accuracy and reliability. Finally\, the implications of the AI Act on industrial applications will be reviewed\, outlining regulatory frameworks designed to foster safe and ethical AI deployment. Participants will gain a comprehensive understanding of how these elements collectively contribute to the development and implementation of trustworthy AI in industry. \nLuca Oneto\, born in 1986 in Rapallo\, Italy\, completed his BSc and MSc in Electronic Engineering at the University of Genoa in 2008 and 2010\, respectively. In 2014\, he earned his PhD in Computer Engineering from the same institution. From 2014 to 2016\, he worked as a Postdoc in Computer Engineering at the University of Genoa\, where he then served as an Assistant Professor from 2016 to 2019. Luca co-founded the company ZenaByte s.r.l. in 2018. In 2019\, he became an Associate Professor in Computer Science at the University of Pisa\, and from 2019 to 2024\, he held the position of Associate Professor in Computer Engineering at the University of Genoa. Currently\, he is an Associate Professor in Computer Engineering at the University of Genoa. He has been coordinator and local lead of numerous industrial\, H2020\, and Horizon Europe projects. 
He has received prestigious recognitions\, including the Amazon AWS Machine Learning Award and the Somalvico Award for the best young AI researcher in Italy. His primary research interests lie in Statistical Learning Theory and Trustworthy AI. Additionally\, he focuses on data science\, utilizing and improving cutting-edge machine learning and AI algorithms to tackle real-world problems. \nMatthieu Worm\, Distinguished key expert for Simulation & Digital Twin at Siemens \nAdvancing Industrial AI: Siemens’ Innovations in Engineering Solutions\nIndustrial AI refers to the application of AI in an industrial context\, designed to meet the rigorous requirements and standards of the most demanding industrial environments. With the ability to handle big data from machines and to detect complex patterns\, Industrial AI is helping to supercharge digital and sustainability transformation with speed and scale. \nIn his presentation Matthieu Worm will elaborate on how Siemens is using AI to improve efficiency in engineering tools. This has a wide scope\, ranging from support through industrial copilots and LLMs\, via optimization of products or complete processes\, to fully autonomous systems. Siemens will have a demonstration of Siemens Industrial CoPilot present during the event. \nMatthieu is a TU Delft alumnus\, graduating in Industrial Design in 2022. He is now part of the Siemens Corporate Core Technology team focusing on Simulation & Digital Twins. \nJurgen Bastiaansen\, Manager Innovation Unit at Festo \nAI in Engineering at Festo\nFesto has developed the artificial intelligence tool Festo Automation Experience – Festo AX for short – which allows engineers to extract high added value from the data produced by their systems using machine learning algorithms. It is designed to address three key areas: preventive maintenance\, energy optimization and quality optimization\, enabling customers to increase productivity and reduce costs. 
\nWhile AI in industry is currently used mainly to monitor industrial processes and explore technical (im)possibilities\, we also give a glimpse into the future\, where AI – in combination with high-end technology (including from Festo) – will be used to grow algae in a controlled way. Our earth’s natural resources are becoming increasingly scarce; imagine what can be achieved with these and future technologies to be developed. \nJurgen has over 20 years of experience in the automation industry. For the past seven years he has been responsible for the Innovation Unit within the North West Europe cluster at Festo. \nAndrea Coraddu\, Associate Professor\, Department of Maritime and Transport Technology\, TU Delft \nFrom Predictive to Prescriptive Analytics: Challenges and Advances for Sustainable Shipping Energy Systems\nPrescriptive Analytics describes automating a decision-making process starting from data\, without human intervention\, which requires knowledge drawn from multiple aspects of Artificial Intelligence informed by a multitude of data sources. For maritime applications\, these requirements are not always met\, and Prescriptive Analytics has limited applications as the sector still needs final decisions undertaken by human operators. Additionally\, incorporating constraints and operators’ preferences requires exploiting data that describes the specific context\, in the form of ontologies\, and leveraging unstructured data and information to achieve practical results. In this talk\, the path towards Prescriptive Analytics is discussed in the context of challenges and advances for sustainable shipping energy systems. \nAndrea Coraddu (Member\, IEEE) was born in Pietrasanta\, Italy\, in 1979. He received the Laurea degree in naval architecture and marine engineering from the University of Genoa\, Italy\, in 2006\, and the Ph.D. degree from the School of Fluid and Solid Mechanics\, University of Genoa\, in 2012. His Ph.D. 
dissertation was titled “Modeling and Control of Naval Electric Propulsion Plants.” He was an Associate Professor with the Department of Naval Architecture\, Ocean and Marine Engineering\, University of Strathclyde\, from October 2020 to August 2021. Currently\, he is an Associate Professor of intelligent and sustainable energy systems with the Maritime and Transport Technology Department\, Delft University of Technology\, Delft\, The Netherlands. His relevant professional and academic experience includes working as an Assistant Professor with the University of Strathclyde\, a Research Associate with the School of Marine Science and Technology\, Newcastle University\, and as a Research Engineer as part of the DAMEN Research and Development Department\, Singapore. He was also a Postdoctoral Research Fellow with the University of Genoa. He has been involved in a number of successful grant applications from research councils\, industry\, and international governmental agencies\, focusing on the design\, integration\, and control of complex marine energy and power management systems enabling the development of next-generation complex and multi-function vessels that can meet the pertinent social challenges regarding the environmental impact of human-related activities. \nAbout the IDEA League\nThe IDEA League\, a strategic alliance between five leading European universities of science and technology\, believes that we have the power to shape the future. By joining forces\, we will create valuable connections that inspire innovation and the pursuit of ambitious goals. \nThrough cross-border\, bottom-up collaboration\, we provide the environment for students\, researchers and staff at our partner universities to share a collective wealth of knowledge\, experience and resources. 
By doing so\, we aim to connect and inspire a new generation of European science and technology graduates\, champion innovation and entrepreneurship and steer Europe towards a more competitive and compassionate future.
URL:https://mondai.tudelftcampus.nl/event/save-the-date-meet-the-brightest-minds-in-ai-in-engineering-from-top-universities-in-europe/
LOCATION:Mondai | House of AI – Delft\, Molengraaffsingel 29\, Delft\, 2629 JD
ORGANIZER;CN="Mondai | House of AI":MAILTO:mondai@tudelft.nl
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Amsterdam:20240702T120000
DTEND;TZID=Europe/Amsterdam:20240702T140000
DTSTAMP:20260404T191709
CREATED:20240530T131015Z
LAST-MODIFIED:20240701T133558Z
UID:10000186-1719921600-1719928800@mondai.tudelftcampus.nl
SUMMARY:TU Delft AI Lunch - The unprecedented environmental impacts of AI: a transdisciplinary discussion
DESCRIPTION:Mondai | House of AI is happy to host the next edition of the TU Delft AI Lunch:\nThe unprecedented environmental impacts of AI: a transdisciplinary discussion\n \nWhile AI is frequently touted as a panacea for numerous global issues\, including tackling chronic diseases\, economic revitalization\, and addressing societal needs such as national security challenges and the Climate Crisis\, this narrative often overlooks the significant environmental impacts stemming from the increasing demand for AI tools. These costs include unparalleled demand for rare metals\, massive energy expenditure and an unprecedented impact on land use\, water consumption and energy systems. For instance\, Microsoft’s environmental report for 2022\, following the launch of OpenAI’s generative AI services\, reveals a significant 34% increase in its worldwide water consumption from 2021 to 2022\, reaching nearly 1.7 billion gallons. Given recent accelerations in investment in new data centers\, these dramatic numbers are expected to continue impacting ecosystems around the globe. During this panel\, we aim to open up a discussion to bring the environmental implications of AI into integral and actionable view. We will discuss how the growing dependence on AI functionalities and their underlying computational infrastructures contributes to environmental impacts\, and what is needed in science\, engineering\, policy making and (global) politics to acknowledge and address the steep costs associated with these impacts. We focus on how to bring together different disciplines and stakeholders across engineering\, computer science\, political economy and other fields to build an integral understanding of the problem\, and on what challenges and actionable strategies we should pursue to curb the environmental impacts of AI. \nThis event includes a free lunch\, for which registration is required (help us reduce food waste!) 
\nPanellists\n\nBenedetta Brevini\nVisiting Professor at the Institute for Public Knowledge\, New York University.\nBenedetta’s research is grounded in a critical political economy that investigates the social\, political\, economic and environmental implications of data-driven communication systems\, and in particular the relationship between Data Capitalism\, AI and the Climate Crisis. Before joining the academy\, she worked as a journalist in Milan\, New York and London for CNBC\, RAI and the Guardian. She is the author of several books including Is AI Good for the Planet (2022)\, Amazon: Understanding a Global Communication Giant (2020)\, Public Service Broadcasting Online (2013) and the editor of Beyond Wikileaks (2013)\, Carbon Capitalism and Communication: Confronting Climate Crisis (2017) and Climate Change and the Media (2018). She is currently working on a new volume for Polity entitled “Communication Systems\, Technology and the Climate Emergency”.\n\nOlya Kudina\nAssistant Professor of Ethics & Philosophy of Technology (Faculty of TPM)\nOlya is an interdisciplinary researcher in philosophy/ethics of technology who explores the relation between human values and technologies. Her recent focus has been on AI and democracy in the framework of the AI DeMoS Lab that she founded and co-leads. To anticipate the ethical challenges and opportunities of technologies\, Olya thinks it is essential to combine different academic practices and fields. \n\nJune Sallou\nPostdoc researcher on green software & green AI (Faculty of EEMCS)\nJune’s research focuses on software engineering and sustainability\, and more specifically on how Approximate Computing can be applied to develop more sustainable (software) systems and green AI. “The energy and computational costs of traditional AI practices are not only high in financial terms\, but they also have major impacts on our planet’s resources. 
The use of ICT is constantly increasing\, with estimates that its energy consumption could reach as much as 21% of the global total by 2030. And climate change is something I am personally very concerned about\, especially since it is largely caused by human activities. As a software engineer\, I feel a strong responsibility to contribute to the solution\, and not contribute to the problem.” \n\nModerator: Roel Dobbe\nAssistant professor in Data-driven/Algorithmic Systems and Safety\, Justice and Sustainability (Faculty of TPM)\nRoel got to know Silicon Valley while living in the San Francisco Bay Area and doing his PhD on AI and control engineering at the Berkeley AI Research (BAIR) Lab. He works on an integrative systems theory and design approach for artificial intelligence and algorithms\, which will allow for more accurate work on well-functioning systems\, as well as a better understanding of risks. He is also co-founder of the AI Now Institute in New York where\, among other things\, he researches the relationship between AI and climate change. \n\nAbout the Delft AI (Lab) Lunch series\nThe Delft AI (Lab) Lunch is a monthly meet-up hosted by the TU Delft AI Labs & Talent community at Mondai | House of AI.\nEvery month\, a Delft AI Talent moderates a panel to discuss challenges and developments in their field. During these events\, you can participate\, learn\, make connections\, inspire and be inspired by and with the Delft AI Labs. We invite all interested staff and students from TU Delft to join these sessions. Please contact community manager Charlotte Boelens for more information about this series or the TU Delft AI Labs & Talent Programme. \nJoin this series in 2024 on: 15 February | 21 March | 16 May | 2 July \nJoin the yearly Spring Symposium on 4 June 2024. 
Theme: AI Education \nNote for TU Delft PhDs\nThe TU Delft AI Lunch series is eligible for earning Discipline Related Skills GSC with the ‘Form for earning GSC for TU Delft AI(-related) seminars’. Check with your local Faculty Graduate School (FGS) whether your FGS offers this option for earning Discipline Related Skills GSC\, and with your supervisors whether they accept our seminars on your Doctoral Education (DE) list. If you already have a form\, don’t forget to bring it with you.
URL:https://mondai.tudelftcampus.nl/event/tu-delft-ai-lunch-unprecedented-environmental-impacts-of-ai/
LOCATION:Mondai | House of AI – Delft\, Molengraaffsingel 29\, Delft\, 2629 JD
ORGANIZER;CN="Mondai | House of AI":MAILTO:mondai@tudelft.nl
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Amsterdam:20240626T120000
DTEND;TZID=Europe/Amsterdam:20240626T180000
DTSTAMP:20260404T191709
CREATED:20240422T114440Z
LAST-MODIFIED:20240626T070107Z
UID:10000178-1719403200-1719424800@mondai.tudelftcampus.nl
SUMMARY:Delft FinTech Summit: Pioneering Finance by Collaboration and Learning
DESCRIPTION:The Delft FinTech Lab is pleased to host the second Delft FinTech Summit\, together with Mondai | House of AI! \nDelft FinTech Summit: Pioneering Finance by Collaboration and Learning. Join us for an extraordinary day dedicated to the confluence of finance and technology at the Delft FinTech Summit. Discover how finance shapes society and expedite the adoption of FinTech innovations. \nTechnology adoption is accelerating in the financial industry at an unprecedented rate\, transforming both FinTech companies and conventional financial institutions. Following the resoundingly successful launch last year\, the Delft FinTech Lab invites you to join us at Delft University of Technology for an unparalleled opportunity to engage with peers across various financial sectors and explore the future of finance driven by technological impetus. \nDiscover how finance shapes society through diverse topics such as sustainable investing strategies for a better tomorrow\, predicting household financial distress\, revolutionizing finance with AI\, building a resilient financial system and the digital euro\, presented by speakers from Robeco\, ING\, PerfectXL\, Delft University of Technology\, Erasmus University Rotterdam and many others. \nSign up now to secure one of the limited spots and discover your technological edge in finance. 
\nProgramme\n12.00 – 13.00 Walk-in and Lunch\n13.00 – 13.20 Welcome and Strategy Delft FinTech Lab by Arie van Deursen & Venkatesh Chandrasekar\n13.20 – 13.40 European FinTech: Past Progress and Future Prospects by Joachim Schwerin\, European Commission\n13.40 – 14.00 Using ML for a Resilient Financial System by Preventing Money Laundering by Kubilay Atasu\, TU Delft\n14.00 – 14.15 Convergence of Payment Infrastructure and Financial Markets by Leonard Franken\, AFM\n14.15 – 14.30 Break\n14.30 – 14.45 Incident Management in a Software-Defined Business by Eileen Kapel\, ING\n14.45 – 15.25 Startup presentations – Opportunities for AI in Finance with PerfectXL\, PocketCFO\, and Djeeni\n15.25 – 15.40 Break\n15.40 – 16.00 FinTech and RegTech: A Software Engineering Perspective by Domenico Bianculli\, University of Luxembourg\n16.00 – 16.20 How Collaborative Ecosystems Accelerate Innovation by Liakos Papapoulos\, LP Trading and Innovation Advisory Services\n16.20 – 16.40 Anatomy of Household Financial Distress with TU Delft and Erasmus University Rotterdam\n16.40 – 16.45 Closing\n16.45 Drinks!
URL:https://mondai.tudelftcampus.nl/event/delft-fintech-summit-2024/
LOCATION:Mondai | House of AI – Delft\, Molengraaffsingel 29\, Delft\, 2629 JD
ATTACH;FMTTYPE=image/jpeg:https://mondai.tudelftcampus.nl/wp-content/uploads/sites/7/2024/04/Mondai-Fintech-Lab-006_resized.jpg
ORGANIZER;CN="Mondai | House of AI":MAILTO:mondai@tudelft.nl
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Amsterdam:20240611T140000
DTEND;TZID=Europe/Amsterdam:20240611T160000
DTSTAMP:20260404T191709
CREATED:20240521T091400Z
LAST-MODIFIED:20240522T115845Z
UID:10000182-1718114400-1718121600@mondai.tudelftcampus.nl
SUMMARY:Book Launch: 'Human Freedom in the Age of AI' by Filippo Santoni de Sio
DESCRIPTION:Mondai | House of AI is pleased to host the book launch of ‘Human Freedom in the Age of AI’ by Filippo Santoni de Sio \nThe book Human Freedom in the Age of AI explains how artificial intelligence (AI) may affect our freedom at work\, in our daily life\, and in the political sphere. The author Filippo Santoni de Sio\, associate professor of ethics of technology at TU Delft and a leading expert in the field of AI ethics\, provides a philosophical framework to help make sense of and govern the ethical and political impact of AI in these domains. AI offers employers new forms of control of the workforce\, opening the door to new forms of domination and exploitation; it may reduce our capacity to remain in control of and responsible for our decisions and actions\, thereby affecting our free will and moral responsibility; and it may increase the power of governments and tech companies to steer political life\, thereby affecting the possibility of free and inclusive political participation. \nIt is still possible to promote human freedom in our interactions with AI\, but this requires designing AI systems that help promote workers’ freedom\, strengthen human control and responsibility\, and foster free\, active\, and inclusive democratic participation. From ‘freedom as non-domination’ to ‘meaningful human control’ and value-sensitive and critical design\, the book critically discusses and connects a broad variety of topics in ethics\, political philosophy and design studies\, paving the way for an original\, comprehensive and multidisciplinary approach to AI Ethics\, centred on the concept of social and political freedom. 
\nIn this event the author Filippo Santoni de Sio will present the book and engage in an in-person conversation with two Dutch-based experts in the ethics and politics of AI: \nAuthor: Filippo Santoni De Sio\nAssociate Professor in Ethics of Technology at the Faculty of Technology\, Policy\, and Management \nFilippo Santoni De Sio is an academic philosopher and professor specialized in interdisciplinary work in ethics of technology\, with a special interest in the ethics of AI. His drive to pave new paths in the field led him to write Human Freedom in the Age of AI\, published in March 2024. \nHis research focuses on the questions of how emerging technologies – especially AI systems – can affect our concepts and practices of freedom\, moral and legal responsibility\, and democracy; and how a philosophical understanding of these concepts and practices can in turn help understand and inform the design and development of emerging technologies. \nDiscussant: Cristina Zaga\nAssistant professor in the Human-Centred Design group and DesignLab at the University of Twente \nCristina’s research aims to foster societal transitions towards justice\, care\, and solidarity\, with a focus on care and the future of work with AI and robots. She develops transdisciplinary approaches\, blending in design justice\, speculation and more-than-human design. Cristina also leads the Social Justice and AI network\, working towards mitigating the dehumanizing effects of AI and promoting social and environmental justice. Her work has received many accolades nationally and internationally\, including the NWO Science Prize for DEI initiatives (2022)\, the Dutch Higher Education Award (2022)\, and the Google Women Techmakers Award and scholarship (2018). 
\nDiscussant: Uğur Aytaç\nAssistant Professor\, Department of Philosophy and Religious Studies at Utrecht University \nUğur’s research investigates how varying conceptualizations of power and domination should shape our normative judgments about the legitimacy of socio-political arrangements\, including digital platforms\, economic institutions\, and states. Fundamental questions such as ‘what is political power?’\, ‘how can we assess its legitimacy in different institutional domains?’ and ‘how should we address any legitimacy deficits arising from unaccountable powers?’ have driven his research from his PhD in Amsterdam to the projects he undertakes as an Assistant Professor in Utrecht. \nHe also takes academic citizenship seriously\, aiming to help cultivate a community among philosophers in the Netherlands. He is co-coordinator of the political philosophy study group at the Dutch Research School of Philosophy. In this capacity\, he co-organizes regular workshops where participants receive feedback on their research.
URL:https://mondai.tudelftcampus.nl/event/book-launch-filippo-santoni-de-sio/
LOCATION:Mondai | House of AI – Delft\, Molengraaffsingel 29\, Delft\, 2629 JD
ATTACH;FMTTYPE=image/png:https://mondai.tudelftcampus.nl/wp-content/uploads/sites/7/2024/05/FillipoBookLaunch.png
ORGANIZER;CN="Mondai | House of AI":MAILTO:mondai@tudelft.nl
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Amsterdam:20240605T113000
DTEND;TZID=Europe/Amsterdam:20240605T130000
DTSTAMP:20260404T191709
CREATED:20240523T132804Z
LAST-MODIFIED:20240530T083234Z
UID:10000184-1717587000-1717592400@mondai.tudelftcampus.nl
SUMMARY:Seminar Series on Meaningful Human-AI Interactions for a Digital Society - #2: Promoting Meaningful Human-AI Interactions: Societal and Legislative Perspectives
DESCRIPTION:A Seminar Series on Meaningful Human-AI Interactions for a Digital Society\nEvent #2: Promoting Meaningful Human-AI Interactions: Societal and Legislative Perspectives \nWith the increasing creation\, use\, and implementation of novel AI systems in every aspect of our society\, we must ensure that AI complements us by providing meaningful engagement\, centering systems on our human values\, and implementing such systems with due consideration of society at large. In this series\, we will bring together experts from diverse fields such as human-computer interaction (HCI)\, philosophy\, design\, and law with policymakers and practitioners in a mixture of single-speaker seminars\, multidisciplinary panels\, and expert-led workshops to define what it means to have meaningful human-AI interaction; how to design for these interactions in AI systems; and how to promote such interactions through legislation and policy making. \nThe second event of this series will be June 5th from 11:30am – 1pm and will discuss the role of legislation in promoting meaningful human-AI interaction. Through presentations and discussion with panelists Dr. Merel Noorman (Tilburg University)\, Dr. Els de Busser (Leiden University)\, and Dr. Dayana Spagnuelo (TNO)\, we will discuss: how can legislation support meaningful human-AI interactions? How do we ground AI interactions in the values and needs of public stakeholders? And: how should public bodies interact with AI? The event will be hosted by Drs. Giorgia Pozzi and Sarah Carter (TU Delft). \nAll experts\, policymakers\, and practitioners with an interest in human-AI interactions are welcome to attend! \nThis event is supported by the National Digital Society Programme\, Mondai | House of AI\, the TU Delft AI Initiative\, Web Information Systems (WIS)\, and the Digital Ethics Centre. 
\n(The language of this event is English) \nSpeakers\nMerel Noorman is assistant professor in AI\, Robotics\, and STS at the Tilburg Institute for Law\, Technology\, and Society (TILT)\, Tilburg University. Her research focuses on the regulation and governance of AI. She is interested in the relationships between AI and democracy in critical infrastructures\, such as energy networks. She studied Artificial Intelligence and Science & Technology Studies at the University of Amsterdam and Edinburgh University and received her PhD from Maastricht University. Since then\, she has co-initiated and worked on various research projects in the U.S. and the Netherlands\, studying the ethical and social aspects of complex and intelligent computer technologies. She has also worked as an advisor for the Dutch Council for Social Development (Raad voor Maatschappelijke Ontwikkeling) and was managing director for the software company VicarVision. \nDr. Els De Busser is Assistant Professor of Cyber Security Governance at the Institute of Security and Global Affairs at Leiden University. She is the Principal Investigator of the project Cyber Security by Integrated Design (C-SIDe project) funded by the Dutch Research Council. \nEls teaches students and practitioners on a broad range of topics including the effects of digitalization on human rights\, data protection and privacy\, legal aspects of cyber security\, and AI and human rights. She specializes in multidisciplinary education and research in these topics as well. Els is also a researcher in The Hague Program for International Cyber Security and a member of the Standing Committee of Experts on International Immigration\, Refugee and Criminal Law (also known as the Meijers Committee). \nDr. Dayana Spagnuelo is a researcher at TNO specialised in information security and its interplay with data protection and other legal requirements\, as well as privacy coordinator for TNO Unit ISP. 
Having researched in the past how some of the data protection principles\, such as Privacy\, Transparency and Accountability\, should be realised in technical systems\, she now turns her attention to how current solutions can help meet\, and how future ones should be tailored to\, the upcoming requirements of the new digital regulatory framework.
URL:https://mondai.tudelftcampus.nl/event/seminar-series-meaningful-human-ai-interactions-for-a-digital-society-event2/
LOCATION:TU Delft Building 36 – Lipkenszaal (01.150)\, Mekelweg 4\, Delft\, Nederland
ORGANIZER;CN="Mondai | House of AI":MAILTO:mondai@tudelft.nl
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Amsterdam:20240604T090000
DTEND;TZID=Europe/Amsterdam:20240604T170000
DTSTAMP:20260404T191709
CREATED:20240408T114234Z
LAST-MODIFIED:20240530T121507Z
UID:10000170-1717491600-1717520400@mondai.tudelftcampus.nl
SUMMARY:Spring Symposium - AI Education
DESCRIPTION:Spring Symposium – AI Education\nPlease reserve the 4th of June in your calendar for the Spring Symposium on AI Education! \nHosted by the TU Delft | AI Initiative at Mondai | House of AI\, this event will shine a spotlight on AI education. Join us for a day filled with insights from TU Delft’s leading experts\, as we delve into the fostering of AI education\, such as innovative best practices for teaching AI\, interdisciplinary AI education\, and AI skills for the future engineer. \nRegister now!\n(The language of this event is English) \nIn the morning session\, gain insights into ‘AI and the changing world for graduates’ with real-world examples of the expectations for future engineers. The afternoon session will focus on practical AI education with workshops\, discussions\, and actionable takeaways. \nProgramme\n09.00 – 09.30 Walk-in and Registration\n09.30 – 12.15 AI and the Changing World for Graduates\n12.15 – 13.00 Lunch and Exhibitions\n13.00 – 15.00 Workshops and Discussions\n15.00 – 16.15 Closing and a Fun Surprise!\n16.15 – 17.00 Networking Drinks \n\nSpring Event 2023 \nMorning Programme: ‘AI and the changing world for graduates’\nAn inspiring keynote programme to highlight the growing presence of AI in various disciplines.\nNotable speakers will discuss AI in their respective fields: \nOpen and Connected Futures: Reimagining Design Education in the Age of AI\nBy: Kees Kaan and Georg Vrachliotis (Faculty of Architecture and the Built Environment\, TU Delft) \nGeorg Vrachliotis and Kees Kaan\, leading experts in the built environment\, explore the intersection of architecture and AI. Georg Vrachliotis examines the shifting landscape of knowledge discovery in the AI era\, unveiling new horizons in architectural innovation. Meanwhile\, Kees Kaan offers a pragmatic and business-oriented outlook on AI’s impact within architecture\, providing invaluable real-world insights into this transformative technology. 
\nKees Kaan is a distinguished Professor of Architectural Design – Complex Projects and currently chairs the Department of Architecture at the Faculty of Architecture and the Built Environment. With a visionary approach\, his research and studio work focus on ‘The PROJECT’ within the contemporary city and region\, emphasizing critical thinking and narrative construction for practice. Beyond academia\, Kees Kaan is renowned as the founder of KAAN Architecten\, recognized for transformative projects like the Supreme Court of the Netherlands\, the Royal Museum of Fine Arts in Antwerp\, and the Netherlands Forensic Institute. \nGeorg Vrachliotis\, a distinguished full professor in the Theory of Architecture and Digital Culture at TU Delft\, brings a wealth of experience and insight to the intersection of AI and architecture. Vrachliotis heads the Design\, Data and Society Group (DDS) at TU Delft\, as well as the flagship project ‘The New Open’. He was previously dean of the Karlsruher Institut für Technologie (KIT) Faculty of Architecture (from 2016) and held the Chair for Architecture Theory there (2014-2020). \nAI for Physicists and Physicists for AI\nBy: Qian Tao (Faculty of Applied Sciences\, TU Delft) \nQian Tao will take us on a journey from working at the academic hospital where AI is making a huge impact on Radiology\, to the corridors of TU Delft Imaging Physics where she does pioneering work on developing trustworthy AI with physics principles. Qian advocates for integrating AI into the new physics education\, aiming to prepare graduates for the evolving scientific landscape and job market. Arguing that AI is more than a mysterious black box from an engineering perspective\, Qian provides her insights on the symbiotic relationship between AI and physics. \nDr. Qian Tao\, with extensive expertise in biomedical engineering and pioneering AI work in radiology\, brings a wealth of experience to her role. 
Having spent over a decade at LUMC conducting multidisciplinary research on cardiac MRI and AI-based medical image analysis\, she joined TU Delft to expand her exploration of data- and knowledge-driven AI. Since 2021\, Dr. Tao has led the CHEME AI Lab at TU Delft’s Department of Imaging Physics\, focusing on developing trustworthy AI methodologies for critical healthcare applications\, particularly in medical imaging\, to enhance AI’s reliability and impact on scientific and clinical advancements. \n\nEducating Tomorrow’s Lawyers\nBy: Cees Zweistra (Erasmus University Rotterdam) \n\nCees Zweistra explores the future of legal education shaped by developments in technology. As the legal landscape changes\, education too must follow suit with innovative programs that integrate AI into law studies. Learn about how AI developments create many opportunities\, while also fuelling discussions around ethical considerations. Cees Zweistra offers insight into why teaching students about AI’s technical aspects is essential for lawyers to stay relevant in today’s evolving legal landscape. \nCees Zweistra is assistant professor of ethics\, law and technology at Erasmus University Rotterdam. He studied both law and philosophy at Utrecht University and obtained a PhD in the ethics and philosophy of technology from Delft University of Technology. His research is focused on understanding how technologies are co-shaping the future of the law and legal profession. \nAfternoon Programme: practical AI education with workshops\, discussions and actionable takeaways\nAn enriching afternoon programme designed to equip educators with insights to foster AI education. Workshops and sessions include: \nVisualisation and AI: More than just communicating results\nBy: Thomas Höllt\, Assistant Professor at the Faculty of Electrical Engineering\, Mathematics & Computer Science \nHow to use data visualisation? 
Visualization is commonly used for communication of research results\, from plots in scientific papers to illustrations in science communication. In this lecture\, we will discuss the basics of creating good\, legible visualizations of such results\, but also go beyond just communication. We will discuss the benefits of using visualization in combination with machine learning (ML) to gain deeper insight into the data and to better understand ML methods themselves. \nAI Pedagogy through a Design Lens\nBy: Kars Alfrink\, researcher at the Faculty of Industrial Design Engineering \nHow can a design perspective shape the pedagogical approach to AI? Explore the intersection of design and AI education in this workshop. We will discuss practical insights from the five-year run of the AI & Society IDE master elective\, including the unique contributions of studio-based teaching and thinking-through-making learning activities. \nExploration of AI tools for research software\nBy: Carlos Utrilla Guerrero & Halford Dace\, Research Data & Software Team\, TU Delft Library \nGenerative Artificial Intelligence (AI) and Large Language Models (LLMs) are reshaping research tasks. This workshop will explore the use of generative AI in research\, including code assistance\, integration into research software development\, and the exchange of experiences and best practices. The agenda covers essential topics: introducing LLMs\, examining the impact of code assistants on research software development\, and exploring responsible use and legal implications of generative AI tools like GitHub Copilot\, concluding with a practical session using GitHub Copilot for software documentation.
\nMachine Learning Education in the different faculties: Best Practices\, Tips and Sharing Materials \nBy: Gosia Migut\, Assistant Professor at the Faculty of Electrical Engineering\, Mathematics & Computer Science \nBianca Giovanardi\, Assistant Professor in the Department of Aerospace Structures and Materials \nHow do we teach machine learning? Do you want to learn and share best practices\, tips and educational resources? Then this session is for you! In this workshop we share some conclusions from Machine Learning Teachers Community meetups and we invite you to also share your best practices and tips. We are also developing a platform (OER4AI) to share machine learning education materials among TU Delft teachers. We are looking for feedback to improve this platform and make it useful for you! In conclusion: an interactive session to foster more collaboration between teachers. \nImpact with your AI Education\nBy: Bertien Broekhans\, Life Long Learning portfolio manager AI\, Data & Digitalisation \nAre you passionate about AI education? Have you developed or are you developing AI courses for campus education? Want to expand your impact? Consider developing your course into a Massive Open Online Course (MOOC)! By developing online course materials for the edX platform\, you help professionals keep their skills up to date. Moreover\, these materials also support learning flexibility for your campus students. This can ease your workload while extending the reach of your AI education. Join our workshop to learn from educators who have successfully adapted campus courses into MOOCs. Let’s brainstorm together on future course development opportunities! \n(Re)defining education in the age of AI\nBy: Marcell Varkonyi\, Open Education\, TU Delft \nTechnological advancements\, such as AI\, present new challenges to education\, both in terms of how these technologies may be integrated into educational processes\, and in terms of the role these need to take up in curricula. 
While these are important considerations to make\, they often eclipse another equally important question: to what end do we educate? This session revolves around the fundamental question: what is the purpose of education\, and (how) do the developments around AI change the way we approach this question? We will explore this question in small-group and plenary discussions. \nRegister now for the Spring Symposium via the form above or below!
URL:https://mondai.tudelftcampus.nl/event/spring-symposium-ai-education/
LOCATION:Mondai | House of AI – Delft\, Molengraaffsingel 29\, Delft\, 2629 JD
ATTACH;FMTTYPE=image/jpeg:https://mondai.tudelftcampus.nl/wp-content/uploads/sites/7/2022/08/RoyBorghoutsFotografie-140622-TUDSpring2022-093.jpg
ORGANIZER;CN="Mondai | House of AI":MAILTO:mondai@tudelft.nl
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Amsterdam:20240516T120000
DTEND;TZID=Europe/Amsterdam:20240516T133000
DTSTAMP:20260404T191709
CREATED:20240425T144759Z
LAST-MODIFIED:20240429T074448Z
UID:10000180-1715860800-1715866200@mondai.tudelftcampus.nl
SUMMARY:TU Delft AI Lunch: AI4Freight
DESCRIPTION:Mondai | House of AI is happy to host the next edition of the TU Delft AI Lunch: AI4Freight\nThe role of AI in tackling the challenges faced by the logistics sector and exploring pathways for academia-industry collaboration \nThe complex landscape of logistics presents businesses with various operational challenges across the supply chain. Artificial Intelligence offers tools to transform this sector by breaking down these complexities into streamlined and smart operations. Through AI\, businesses gain the ability to predict\, adapt\, and respond dynamically and in real time\, revolutionizing how logistics operations are performed today. What is the role of academia in empowering the logistics sector in the face of opportunities and challenges using AI technologies? At the intersection of AI and freight logistics\, panellists will discuss both barriers and innovations from different perspectives: from challenges faced by industry to novel AI solutions and their practical implementation. \nThis event includes free lunch for which registration is required (help us reduce food waste!) \nPanellists\nModerator: Mahnam Saeednia\nAssistant Professor of Freight and Logistics (Faculty of CEG)\nMahnam’s area of research addresses current challenges of the freight transport sector\, specifically in the domains of railway freight\, intermodal freight transportation\, automation\, digitalization and energy transition. To achieve this\, she utilizes techniques such as agent technology\, distributed optimization\, discrete-event simulation\, and AI. Additionally\, she is keen on exploring novel and innovative methods in this domain\, including bio-inspired and self-organization algorithms. Previously\, she was R&D project lead at Siemens Mobility (Hacon Ingenieurgesellschaft mbH) and a postdoctoral researcher at the Swiss Federal Institute of Technology (ETH Zurich)\, where she was a recipient of a Swiss National Science Foundation (SNSF) grant. 
\nPanellist: Cornelis Versteegt\nPartner & project manager at Macomi\nCornelis Versteegt is the founder and owner of Macomi\, a company that provides Decision Support Systems based on simulation\, prescriptive analytics and Artificial Intelligence. He received his PhD from TU Delft on automated logistic systems under the supervision of prof.dr. H.G. Sol. He has an MSc in Systems Engineering\, Policy Analysis and Management. His expertise lies in the areas of logistics and transportation. \nPanellist: Neil Yorke-Smith\nAssociate Professor of Socio-Technical Algorithmics (Faculty of EEMCS)\nNeil Yorke-Smith directs the Socio-Technical Algorithmic Research (STAR) Lab at TU Delft. His research addresses a fundamental question of the AI era: how can technology help people make decisions in complex socio-technical situations? Neil is a Senior Member of AAAI\, a Senior Member of ACM\, and a member of CLAIRE and ELLIS. In addition to directing the STAR Lab\, Neil currently serves as manager of the Dutch Citizens and Society in the Energy Transition (CaSET) ELSA AI Lab.\nAbout the Delft AI (Lab) Lunch series\nThe Delft AI (Lab) Lunch is a monthly meet-up hosted by the TU Delft AI Labs & Talent community at Mondai | House of AI.\nEvery month\, a Delft AI Talent moderates a panel to discuss challenges and developments in their field. During these events\, you can participate\, learn\, make connections\, inspire and be inspired by and with the Delft AI Labs. We invite all interested staff and students from TU Delft to join these sessions. Please contact community manager Charlotte Boelens for more information about this series or the TU Delft AI Labs & Talent Programme. \nJoin this series in 2024 on: 15 February | 21 March | 16 May | 1 July\nJoin the yearly Spring Symposium on 4 June 2024. 
Theme: AI Education \nNote for TU Delft PhDs\nThe TU Delft AI Lunch series is eligible for earning Discipline Related Skills GSC with the ‘Form for earning GSC for TU Delft AI(-related) seminars’. Check with your local Faculty Graduate School (FGS) whether your FGS offers this option for earning Discipline Related Skills GSC\, and with your supervisors whether they accept our seminars on your Doctoral Education (DE) list. If you already have a form\, don’t forget to bring it with you.
URL:https://mondai.tudelftcampus.nl/event/tu-delft-ai-lunch-ai4freight/
LOCATION:Mondai | House of AI – Delft\, Molengraaffsingel 29\, Delft\, 2629 JD
ORGANIZER;CN="Mondai | House of AI":MAILTO:mondai@tudelft.nl
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Amsterdam:20240508T120000
DTEND;TZID=Europe/Amsterdam:20240508T130000
DTSTAMP:20260404T191709
CREATED:20240415T103250Z
LAST-MODIFIED:20240506T112429Z
UID:10000172-1715169600-1715173200@mondai.tudelftcampus.nl
SUMMARY:Seminar Series on Meaningful Human-AI Interactions for a Digital Society - #1: Designing for Meaningful Human-AI Interactions
DESCRIPTION:A Seminar Series on Meaningful Human-AI Interactions for a Digital Society\nEvent #1: Designing for Meaningful Human-AI Interactions \nWith the increasing creation\, use\, and implementation of novel AI systems in every aspect of our society\, we must ensure that AI complements us by providing meaningful engagement\, centering systems on our human values\, and implementing such systems with due consideration of society at large. In this series\, we will bring together experts from diverse fields such as human-computer interaction (HCI)\, philosophy\, design\, and law with policymakers and practitioners in a mixture of single-speaker seminars\, multidisciplinary panels\, and expert-led workshops to define what it means to have meaningful human-AI interaction; how to design for these interactions in AI systems; and how to promote such interactions through legislation and policy making. \nThe kickoff event will be May 8th at noon to discuss how to design meaningful human-AI interactions for a digital society. Through presentations and discussion with panelists Dr. Mike Lensink (The Green Land)\, Dr. Dorian Peters (University of Cambridge)\, and Dr. Matthew Dennis (TU Eindhoven)\, we will discuss: what makes human-AI interactions meaningful? What features must be considered to ensure “meaningfulness”? How do we design for meaning in an ethically responsible manner? The event will be moderated by Dr. Jie Yang (TU Delft). \nAll experts\, policymakers\, and practitioners with an interest in human-AI interactions are welcome to attend! \nThis event is supported by the National Digital Society Programme\, Mondai | House of AI\, the TU Delft AI Initiative\, Web Information Systems (WIS)\, and the Digital Ethics Centre. \n(The language of this event is English) \nSpeakers\nMike Lensink (34) is a researcher\, facilitator\, and advisor specialised in the ethics of technological innovation\, data and algorithmic decision making. 
Mike has a background in philosophy\, and has previously worked as an academic researcher at the University Medical Center Utrecht\, where he wrote a dissertation on the ethics and responsible governance of stem cell banks. He currently works at The Green Land\, an agency that supports public sector organisations in the Netherlands with responsible and humane use of data and algorithms. \nIn his work\, Mike is particularly concerned with assessing the impact of algorithmic systems on the lived experience of human beings\, and investigating the dynamics of the interaction between algorithmic systems\, their users (staff)\, and citizens\, in order to translate these insights into more humane technological innovations. Over the past years\, he has worked together with a variety of governmental organisations\, supporting them with various approaches to “putting ethics into practice”\, i.e. operationalising ethics within organisational processes\, developing strategy and governance\, and embedding ethical considerations in concrete technological solutions. \nDorian Peters is Associate Director at the Leverhulme Centre for the Future of Intelligence at the University of Cambridge as well as a Research Fellow at Imperial College London. A designer and design researcher\, she specialises in Human-Computer Interaction for learning\, health\, and wellbeing\, and in digital ethics in practice. Her books include Positive Computing: Technology for Wellbeing and Human Potential (MIT Press)\, and Interface Design for Learning (Pearson). \nMatthew J. Dennis is an assistant professor in ethics of technology at TU Eindhoven in the Netherlands. He is also the co-director of the Eindhoven Centre for the Philosophy of Artificial Intelligence\, and a Senior Researcher in the Ethics of Socially Disruptive Technologies research consortium (ESDiT). He specialises in the ethics of artificial intelligence\, autonomy and well-being\, and the future of work. 
His recent publications focus on intercultural perspectives (Buddhism & Confucianism) on how to flourish with emerging technologies. He has recently held visiting positions at the University of Oxford’s Institute for Ethics in Artificial Intelligence (2023)\, Erasmus Centre for Data Analytics (2023)\, University of Amsterdam’s Institute for Advanced Studies (2022)\, and University of Cambridge’s Centre for the Future of Intelligence (2020).
URL:https://mondai.tudelftcampus.nl/event/seminar-series-meaningful-human-ai-interactions-for-a-digital-society-event1/
LOCATION:Mondai | House of AI – Delft\, Molengraaffsingel 29\, Delft\, 2629 JD
ATTACH;FMTTYPE=image/png:https://mondai.tudelftcampus.nl/wp-content/uploads/sites/7/2024/04/Seminar_MeaningfulAI_blue_food_v3.pdf.png
ORGANIZER;CN="Mondai | House of AI":MAILTO:mondai@tudelft.nl
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Amsterdam:20240424T160000
DTEND;TZID=Europe/Amsterdam:20240424T180000
DTSTAMP:20260404T191709
CREATED:20240418T173506Z
LAST-MODIFIED:20240418T174106Z
UID:10000174-1713974400-1713981600@mondai.tudelftcampus.nl
SUMMARY:Escaping the Echo Chamber: The Quest for the Normative News Recommender Systems and a new notion of Computer Science
DESCRIPTION:Mondai | House of AI is pleased to host the new edition of the Distinguished Speaker Series: \nEscaping the Echo Chamber: The Quest for the Normative News Recommender Systems and a new notion of Computer Science\nRecommender systems and social networks are often blamed for creating Echo Chambers – environments where people mostly encounter news that matches their previous choices or that is popular among similar users\, resulting in their isolation inside familiar but insulated information silos. Echo chambers\, in turn\, have been identified as one cause of the polarization of society\, which makes it increasingly difficult to promote tolerance\, build consensus\, and forge compromises. To escape these echo chambers\, we propose to change the focus of recommender systems from optimizing prediction accuracy only to considering measures for social cohesion. \nThis proposition raises questions in three spheres: In the technical sphere\, we need to investigate how to build “socially considerate” recommender systems. To that end\, we develop a novel recommendation framework with the goal of improving information diversity using a modified random walk exploration of the user-item graph. In the social sphere\, we need to investigate whether the adapted recommender systems have the desired effect. To that end\, we present an empirical pilot study that exposed users to various sets (some diverse) of news with surprising results. Finally\, in the normative sphere\, these studies raise the question of what kind of diversity is desirable for the functioning of democracy. \nReflecting on the consequences of these findings for our discipline\, this talk highlights that computer science needs to increasingly engage with both the social and normative challenges of our work\, possibly producing a new understanding of our discipline. It proposes similar consequences for other disciplines in that they increasingly need to embrace all three spheres. 
\nDistinguished Speaker: Prof. Abraham Bernstein\, PhD\nAbraham Bernstein is Full Professor at the Department of Informatics (Institut für Informatik) of the University of Zurich. He mainly conducts research on the Semantic Web and Knowledge Discovery. His work draws from both social science (organizational psychology/sociology) and technical foundations (computer science\, artificial intelligence).\nBefore coming to Zurich\, he was an Assistant Professor at the Information Systems Department of New York University’s Stern School of Business and received a Ph.D. from MIT’s Sloan School of Management\, where he worked with Prof. Thomas W. Malone at the Center for Coordination Science.\nThe seminar starts at 16.00 sharp! Conversation and contemplation can continue over drinks and snacks after the lecture.\nWe hope to see you there!
URL:https://mondai.tudelftcampus.nl/event/escaping-the-echo-chamber-the-quest-for-the-normative-news-recommender-systems-and-a-new-notion-of-computer-science/
LOCATION:Mondai | House of AI – Delft\, Molengraaffsingel 29\, Delft\, 2629 JD
ATTACH;FMTTYPE=image/jpeg:https://mondai.tudelftcampus.nl/wp-content/uploads/sites/7/2024/04/avi-talk.jpg
ORGANIZER;CN="Mondai | House of AI":MAILTO:mondai@tudelft.nl
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=Europe/Amsterdam:20240321T120000
DTEND;TZID=Europe/Amsterdam:20240321T133000
DTSTAMP:20260404T191709
CREATED:20240305T102559Z
LAST-MODIFIED:20240318T091310Z
UID:10000168-1711022400-1711027800@mondai.tudelftcampus.nl
SUMMARY:TU Delft AI Lunch: Inclusive AI
DESCRIPTION:Mondai | House of AI is happy to host the TU Delft AI Lunch! \nOn Thursday 21 March (12:00 – 13:30) join us for AI Lunch: Inclusive AI. This panel lunch is moderated by Olya Kudina\, Assistant Professor in Ethics & Philosophy of Technology at the Faculty of TPM\, and Nazli Cila\, Assistant Professor of Human-Agent Partnerships at the Faculty of IDE. Join the conversation exploring how AI can be inclusive from different perspectives. This panel will delve into navigating challenges and strategies for creating more inclusive AI systems\, from addressing biases to offering different perspectives and lessons. Learn from the insights of AI researchers to broaden your understanding of the intersection between AI and inclusivity! \nThis event includes free lunch for which registration is required (help us reduce food waste!) \nPanellists\nModerator: Nazli Cila\nAssistant Professor of Human-Agent Partnerships (Faculty of Industrial Design Engineering)\nNazli focuses on collaborations with autonomous agents\, such as smart products\, robots\, and AI systems\, and their socio-technical implications. She integrates empirical work (i.e.\, experimentation\, future modelling\, and prototyping) with philosophical\, ethical\, and practical issues regarding trust\, responsibility\, control\, and intelligence. Her mission is to create foundational theory on Human-Agent Partnerships and reveal interaction patterns for meaningful\, aesthetic\, empowering collaborations with agents. \nModerator: Olya Kudina\nAssistant Professor of Ethics & Philosophy of Technology (Faculty of Technology\, Policy & Management)\nOlya is an interdisciplinary researcher in philosophy/ethics of technology who explores the relation between human values and technologies. Her recent focus has been on AI and democracy in the framework of the AI DeMoS Lab that she founded and co-leads. 
To anticipate the ethical challenges and opportunities of technologies\, Olya thinks it is essential to combine different academic practices and fields. \nPanellist: Cristina Zaga\nAssistant Professor of Human-Centered Design (ET\, DesignLab\, University of Twente) \nCristina’s research aims to foster societal transitions towards justice\, care\, and solidarity\, with a focus on health care\, automation and digitalization of work\, and health-transitions technology. She develops transdisciplinary approaches\, drawing from design research methodologies to tackle societal challenges through emergent embodied social learning and critical theories\, blending in a justice take on post-human/more-than-human design. Cristina also leads the Social Justice and AI networks\, working towards mitigating the dehumanizing effects of AI and promoting social and environmental justice. \nPanellist: Odette Scharenborg\nAssociate Professor of Intelligent Systems (EEMCS)\nOdette’s research aims to build inclusive speech technology\, i.e. making speech technology available for everyone irrespective of how they speak or what language they speak. In her research she considers technical aspects as well as ethical and societal aspects. She has an interest in anything and everything speech\, ranging from human to automatic speech processing. She uses different research techniques including human listening experiments\, EEG\, and deep neural networks. Odette is the president of the International Speech Communication Association (ISCA)\, the largest international society on speech science and technology. 
\nPanellist: Cynthia Liem\nAssociate Professor of Intelligent Systems (EEMCS)\nCynthia’s research focuses on the broadening of perspectives: the algorithmic surfacing of undiscovered information that may stimulate the development of new interests in users\, techniques to ensure measurement and construct validity in human-interpreted data-driven decision-making\, and methodological considerations in conducting transdisciplinary research into responsible and trustworthy AI. Cynthia is a member of the National Young Academy and recently won the Women in AI Netherlands Diversity Leader Award\, acknowledging her efforts in making AI more inclusive for underrepresented groups\, both in research and through public engagement. \nAbout the Delft AI (Lab) Lunch series\nThe Delft AI (Lab) Lunch is a monthly meet-up hosted by the TU Delft AI Labs & Talent community at Mondai | House of AI.\nEvery month\, a Delft AI Talent moderates a panel to discuss challenges and developments in their field. During these events\, you can participate\, learn\, make connections\, inspire and be inspired by and with the Delft AI Labs. We invite all interested staff and students from TU Delft to join these sessions. Please contact community manager Charlotte Boelens for more information about this series or the TU Delft AI Labs & Talent Programme. \nJoin this series in 2024 on: 15 February | 21 March | 18 April | 16 May | 4 June: Spring Symposium | 13 June | 11 July \nNote for TU Delft PhDs\nThe TU Delft AI Lunch series is eligible for earning Discipline Related Skills GSC with the ‘Form for earning GSC for TU Delft AI(-related) seminars’. Check with your local Faculty Graduate School (FGS) whether your FGS offers this option for earning Discipline Related Skills GSC\, and with your supervisors whether they accept our seminars on your Doctoral Education (DE) list. If you already have a form\, don’t forget to bring it with you.
URL:https://mondai.tudelftcampus.nl/event/tu-delft-ai-lunch-inclusive-ai/
LOCATION:Mondai | House of AI – Delft\, Molengraaffsingel 29\, Delft\, 2629 JD
ORGANIZER;CN="Mondai | House of AI":MAILTO:mondai@tudelft.nl
END:VEVENT
END:VCALENDAR