On June 2nd, 2022, at the IBM Innovation Studio in the Watson Center in Munich, IBM and the TUM IEAI co-hosted an event to discuss the need to bring together actors from research and business to facilitate the development of "Responsible Artificial Intelligence". The evening included research project pitches on AI explainability, AI fairness, and AI governance, along with a panel discussion with AI experts.
Wolfgang Rodler started by introducing IBM's role in the space of trustworthy AI. Globally, 43% of companies are exploring the topic of AI, and 31% have already deployed it. He explained that IBM has three guiding principles aimed at establishing trust and explainability in AI: (1) the purpose of AI is solely to support human beings, (2) the data and insights collected belong to their creator, and (3) technology must remain transparent and explainable. These principles inform IBM's research in governed data and AI technology and highlight the need for a strong, diverse ecosystem built on interdisciplinary collaboration between institutions and users.
Six current projects, three from the IEAI and three from IBM, were then "pitched" to the audience. Auxane Boch and Ellen Hohma presented their research on an Accountability Framework for AI Systems. They focus on who must be held accountable for intentional or unintentional damages caused by AI technology, and they propose an accountability framework with a risk-based approach (involving risk assessment, risk management, and responsibility assessment). Ultimately, they aim to address how we can approach ethical dilemmas in AI. They are currently developing the project in collaboration with Fujitsu.
Dr. Michael Vössing then presented his project on Human-AI Collaboration, which aims to design information systems that combine the complementary capabilities of human experts and artificial intelligence. Ideally, Michael explained, we should reach a point where AI and humans genuinely collaborate. He illustrated this with a case from IBM's industry partners in the construction sector, where object detection supports human experts.
Franziska Poszler and Max Geisslinger followed with their project on Autonomous Driving Ethics, an interdisciplinary research effort between engineering and ethics examining how ethical decision-making can be implemented in the trajectory planning of an autonomous vehicle. Through an analysis of the philosophy and ethics literature, they identified three pillars that guided their research question: the need for risk management, the importance of integrated approaches, and the translation of ethical and philosophical theories into code. They do this by selecting ethical principles, analyzing their descriptions, and integrating them into mathematical formulations. The resulting ethical algorithms can then be used for risk calculation and trajectory planning.
On the same theme, Francis Powlesland presented AI Explainability in Autonomous Driving with a demo showing the future of assisted driving systems: a machine learning model, built with explainability in mind, that can predict lane changes and explain its predictions. He noted that the central challenge in this project is designing the interface between complex explanations and human users. The demo was co-created by IBM and Fortiss.
Next, Jan Mikolon spoke on AI Explainability in Stamp Recognition, in this case a human-in-the-loop support system for the public sector that would help government agencies detect fraudulent documents. The system performs an AI-based validation check of documents, increasing security by prioritizing dubious papers for manual inspection, and would improve through continuous human feedback. Given the stakes, it is crucial that the system leave no margin for error.
Finally, Charlotte Haid and Charlotte Unruh presented their research on Human-Centred Algorithmic Scheduling. They explored questions around the availability of personal data in the wake of Industry 4.0, fairness in algorithmic decision-making within a company, and how autonomy at work can be maintained alongside AI technology. Through their research, they developed a shift-scheduling system in which a matching algorithm uses employee data to provide suggestions to managers. To address concerns about autonomy, privacy, and fairness, the system would improve through a feedback loop relying on voluntary worker participation in preference questionnaires that are kept transparent and adaptable.
The evening came to a close with a panel discussion on the responsible use of AI, featuring Professor Christoph Lütge, Ellen Hohma, Franziska Poszler, Jan Mikolon, and Dr. Michael Vössing. The discussion focused on the benefits and challenges of collaboration between research institutions and companies, and was followed by a reception where attendees could further discuss the pitches and network.
Thanks to IBM for the warm welcome!