BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Mondai - ECPv6.15.17//NONSGML v1.0//EN
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-WR-CALNAME:Mondai
X-ORIGINAL-URL:https://mondai.tudelftcampus.nl
X-WR-CALDESC:Events for Mondai
REFRESH-INTERVAL;VALUE=DURATION:PT1H
X-Robots-Tag:noindex
X-PUBLISHED-TTL:PT1H
BEGIN:VTIMEZONE
TZID:Europe/Amsterdam
BEGIN:DAYLIGHT
TZOFFSETFROM:+0100
TZOFFSETTO:+0200
TZNAME:CEST
DTSTART:20230326T010000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:+0200
TZOFFSETTO:+0100
TZNAME:CET
DTSTART:20231029T010000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:+0100
TZOFFSETTO:+0200
TZNAME:CEST
DTSTART:20240331T010000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:+0200
TZOFFSETTO:+0100
TZNAME:CET
DTSTART:20241027T010000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:+0100
TZOFFSETTO:+0200
TZNAME:CEST
DTSTART:20250330T010000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:+0200
TZOFFSETTO:+0100
TZNAME:CET
DTSTART:20251026T010000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTART;TZID=Europe/Amsterdam:20241001T090000
DTEND;TZID=Europe/Amsterdam:20241001T180000
DTSTAMP:20260417T102331Z
CREATED:20240904T054106Z
LAST-MODIFIED:20241001T074045Z
UID:7508-1727773200-1727805600@mondai.tudelftcampus.nl
SUMMARY:Symposium and Opening Centre for Meaningful Human Control
DESCRIPTION:Mondai | House of AI is glad to host the Symposium and Opening of the new Centre for Meaningful Human Control of TU Delft!\nHosted in collaboration with the Centre for Meaningful Human Control and the TU Delft AI Initiative. \nAs AI algorithms rapidly revolutionize various sectors – from healthcare through public services to transportation – they also raise concerns over their potential to spiral beyond human control and responsibility. In 2018\, TU Delft launched the AiTech initiative to address the transdisciplinary challenges related to designing systems under Meaningful Human Control. Concurrently\, the project “Meaningful Human Control over Automated Driving Systems” (2017-2021) further developed and operationalised this concept in the context of driving automation. These and other initiatives have produced impactful research\, fostered community building – both national and international – and influenced key policy documents by Dutch and EU authorities. \nAt this event we invite you to celebrate two recent milestones from our MHC community with us: the release of the first Research Handbook on Meaningful Human Control and the launch of the Centre for Meaningful Human Control. \nRegister here!\nThis event is a unique opportunity to engage with leading academics and practitioners in the field! Exchange perspectives – philosophical\, legal\, and technical – on the challenges and approaches towards keeping meaningful human control over technology. 
\nProgramme\n08.30 – 09.00 Walk-in and Registration\n09.00 – 10.15 Welcome by David Abbink (Scientific Director CMHC) and Luciano Cavalcante Siebert (Co-Director CMHC)\, interactive session by Nazli Cila and Deborah Forster (Research Team CMHC)\, keynote by Carles Sierra (Artificial Intelligence Research Institute\, Spanish National Research Council)\n10.15 – 10.40 Break and Handbook Gallery\n10.40 – 12.30 Keynotes by Johannes Himmelreich (Syracuse University) and Tanya Krupiy (Newcastle University)\, and a Panel Discussion with Authors of the Handbook\n12.30 – 13.30 Lunch\n13.30 – 14.25 Welcome by Arkady Zgonnikov (Co-Director CMHC). Keynotes by David Abbink (Scientific Director CMHC) and Kim van Sparrentak (European Parliament)\n14.25 – 14.50 Break\n14.50 – 16.40 Keynotes by Barbara Holder (Embry-Riddle Aeronautical University) and Ilse Harms (Dutch Vehicle Authority)\, and an interactive session on bringing MHC research to practice\n16.40 – 17.00 Official Launch of the Centre for Meaningful Human Control\n17.00 Celebratory Drinks! \nMorning Programme: Academic Challenges of Meaningful Human Control\nExciting keynotes provide an integrated overview from various academic perspectives – ethical\, legal\, design and engineering. And an interactive session with the authors and editors provides a chance to deep-dive into the Handbook on Meaningful Human Control. \nOn the Engineering of Social Values - Carles Sierra (Artificial Intelligence Research Institute)\nEthics in Artificial Intelligence is a wide-ranging field encompassing many open questions regarding the moral\, legal and technical issues of using and designing ethically-compliant autonomous agents. Under this umbrella\, the computational ethics area is concerned with formulating and codifying ethical principles into software components. 
In this talk\, I will look at a particular problem in computational ethics: engineering moral values into autonomous agents. I will focus on the essential role of human communities in defining social values and their associated norm-based social contracts. \n \nCarles Sierra is the Director of the Artificial Intelligence Research Institute (IIIA) of the Spanish National Research Council (CSIC)\, located in Barcelona. He is the President of EurAI\, the European Association of Artificial Intelligence. He has been contributing to Artificial Intelligence research since 1985 in the areas of Knowledge Representation\, Auctions\, Electronic Institutions\, Autonomous Agents\, Multiagent Systems and Agreement Technologies. He is a Fellow of EurAI and recipient of the ACM/SIGAI Autonomous Agents Research Award 2019. \nFor Knowledge and Commitment — Or: What’s the Point of Meaningful Human Control? - Johannes Himmelreich (Syracuse University)\nJohannes suggests that Meaningful Human Control (MHC) may be missing the point. Theories of MHC typically concentrate on intention\, intervention\, and action\, and these theories seek to warrant moral responsibility and avoid harms. That\, he takes it\, is the point of MHC. But this typical focus on intention\, action\, and intervention misses this point\, or so he argues. To ensure responsibility\, knowledge matters more than intention or intervention. And to avoid certain outcomes\, a commitment to refrain from acting may matter more than maintaining human control. In fact\, with partially superhuman Artificial Intelligence we need both. We need to *know* when AI outperforms humans to then *commit* to defer to AI. This often avoids harmful outcomes without undermining moral responsibility. \nJohannes Himmelreich is a philosopher who teaches and works in a policy school. 
He is an Assistant Professor in Public Administration and International Affairs in the Maxwell School at Syracuse University. He works in the areas of political philosophy\, applied ethics\, and philosophy of science. Currently\, he researches the ethical quandaries that data scientists face\, how the government should use AI\, and how to check for algorithmic fairness under uncertainty. He has published papers on “Responsibility for Killer Robots\,” the trolley problem and the ethics of self-driving cars\, as well as on the role of embodiment in virtual reality. He holds a PhD in Philosophy from the London School of Economics (LSE). Prior to joining Syracuse\, he was a post-doctoral fellow at Humboldt University in Berlin and at the McCoy Family Center for Ethics in Society at Stanford University. During his time in Silicon Valley\, he consulted on tech ethics for Fortune 500 companies and taught ethics at Apple. \nGovernance of the Human-AI Coupling - Tanya Krupiy (Newcastle University)\nJuliane Beck and Thomas Burri note that the debate over what constitutes meaningful human control over artificial intelligence systems has been largely confined to military applications of artificial intelligence. Scholars such as Jonathan Kwik and Frank Flemisch et al. have proposed various definitions of the term meaningful human control in the context of the use of artificial intelligence systems. Article 14 of the Artificial Intelligence Act 2024 gives effect to the aspiration to have human oversight over the operation of high-risk artificial intelligence systems. 
This presentation will examine what duties the Netherlands has under the Convention on the Elimination of All Forms of Discrimination against Women (CEDAW) in regard to governing human-artificial intelligence coupling in the context of organisations employing artificial intelligence as part of the decision-making process. It will conclude that the Netherlands needs to enact legislation in order to comply with its international human rights law obligations under CEDAW. This stems from the fact that there is a tension between obligations flowing from the CEDAW and the Artificial Intelligence Act 2024. \n \nTanya Krupiy is a lecturer in digital law\, policy and society at Newcastle University. She has expertise in international human rights law\, international humanitarian law and international criminal law. Tanya holds a Master of Laws with distinction in public international law from the London School of Economics and Political Science. She gained a Doctor of Philosophy in law from the University of Essex. She received funding from the Social Sciences and Humanities Research Council of Canada to carry out a postdoctoral fellowship at McGill University in Canada. Thereafter\, she was a postdoctoral fellow at Tilburg University. Tanya’s research appears in Oxford University Press\, University of Melbourne and Brill publications\, among others. \n  \nAfternoon Programme: Pragmatic Challenges of Meaningful Human Control\nIn the afternoon we focus on the practical challenges of meaningful human control. Via talks and interactive sessions around real-world challenges\, international experts take a closer look at sectors such as the aviation and automotive industries\, and at Dutch and European policy making. We also take the time to explore how you might interact with the Centre’s network and expertise. \nAt the end of the day we also officially launch and celebrate our Centre for Meaningful Human Control! 
\nTech Regulation: the way towards Ethical AI - Kim van Sparrentak (European Parliament)\n \nKim van Sparrentak is a Dutch politician who has been serving as a Member of the European Parliament for the GroenLinks political party since 2019. She co-wrote legislation that limited the influence of major tech companies and that granted municipalities greater discretion in regulating which properties can be rented out for short-term homestays through platforms such as Airbnb. Van Sparrentak was re-elected in June 2024 as the fourth candidate on the shared GroenLinks–PvdA list\, which won a plurality of eight seats in the Netherlands. \n  \nWho’s flying this thing?! Considerations for a shared Human-Automation Future - Barbara Holder (Embry-Riddle Aeronautical University)\n \nBarbara Holder is Associate Professor and Presidential Fellow in the College of Aviation at Embry-Riddle Aeronautical University (ERAU). Before joining ERAU in November 2021\, Holder had worked since 2015 as a fellow in Advanced Technology at Honeywell Aerospace\, where she studied human-machine issues across a wide range of aircraft. Earlier\, she spent 15 years with The Boeing Company\, where she was an associate technical fellow and lead scientist of the Flight Deck Concept Center. Holder was a post-doctoral research fellow at NASA Ames Research Center\, where she investigated how pilots come to understand the auto-flight system of the Airbus A320 while flying the line. \nHolder is chair of the Human Factors Subcommittee to the U.S. Federal Aviation Administration’s (FAA) Research\, Engineering and Development Advisory Committee. She is also a member of the FAA’s Air Carrier Training Aviation Rulemaking Committee’s Flight Path Management Working Group. 
She is a fellow of the Royal Aeronautical Society and has nine patents and multiple scholarly publications. \nThe Practical Challenges of Interacting with Automated Cars - Ilse Harms (Dutch Vehicle Authority)\nDriving a car is a complex and dynamic task. It entails the execution of tasks varying from motor-executive tasks\, such as keeping the car within its lane\, to more cognitive tasks\, such as understanding the driving environment to decide whether it is safe to overtake\, to keeping track of which exit to take. These days\, in-vehicle systems are increasingly assisting with\, or taking over\, part of the driving tasks – even up to the point that humans feel the car is actually driving itself. Under specific conditions\, some cars actually can take full control of the driving task. This interplay between the human driver and the machine driver has design implications for the vehicle\, which need to be assessed in vehicle type approval. Drawing on her work at the Dutch Vehicle Authority\, Ilse will share some of the practical challenges related to human control in the context of assisted and automated driving. \n \n“Human Factors is an integral part of mobility.” This combination is the recurring theme in Ilse Harms’s career. Ilse is a traffic psychologist who enjoys working at the intersection of theory and practice. She conducted her PhD research at the University of Groningen while working for the Dutch government. \nCurrently she works at RDW – the Dutch Vehicle Authority – where she is a leading figure in the field of human factors and vehicle automation. Furthermore\, Ilse has successfully worked to get the topic of human factors into Euro NCAP’s Vision 2030. At Euro NCAP she is both the alternate director for the Netherlands and the Chair of the HMI & Human Factors Working Group. \nA Moment of Celebration! 
We are very excited to celebrate the official launch of the Centre for Meaningful Human Control over systems with autonomous capabilities with you. The mission of the Centre is to connect academics and practitioners to better conceptualise\, design\, implement\, and assess systems under meaningful human control. We strive to be a lighthouse for collaboration among multiple stakeholders: to leverage interdisciplinary expertise\, existing initiatives at TU Delft\, and an international network of collaborators at the forefront of research and practice on meaningful human control. \nRegister here!
URL:https://mondai.tudelftcampus.nl/event/symposium-and-opening-centre-of-meaningful-human-control/
LOCATION:Mondai | House of AI – Delft\, Molengraaffsingel 29\, Delft\, 2629 JD
ATTACH;FMTTYPE=image/jpeg:https://mondai.tudelftcampus.nl/wp-content/uploads/sites/7/2024/09/MHCC_mobile.jpg
ORGANIZER;CN="Mondai | House of AI":MAILTO:mondai@tudelft.nl
END:VEVENT
END:VCALENDAR