Fujitsu and TUM enter Year 2 of their collaboration to move AI accountability from a conceptual stage to industry practice.

With the increasing prevalence of AI-based systems and the regulatory efforts emerging in response, the question of accountability for the consequences of developing and using these technologies becomes ever more pressing. However, concrete strategies and approaches for addressing the related challenges do not yet seem to have been suitably developed for, or communicated to, AI practitioners. Building on the results from our first year of research, the TUM Institute for Ethics in Artificial Intelligence (IEAI) aims to close the gap between the development and the practical use of AI accountability frameworks through the continuation of our research collaboration with Fujitsu Limited.

Year 1 of the joint project “Towards an Accountability Framework for AI Systems” focused on conceptualising a practical and unified accountability framework to support AI-providing organizations in transparent decision-making on AI risks. An overview of the current state of accountability for AI systems from a legal and ethical perspective was compiled and can be found in our Research Brief published in February 2022. Key questions of accountability were defined, such as who is accountable, for what and to whom, as well as how the identified responsibilities can be fulfilled in a transparent and traceable manner. The resulting backbone and proposed approach for the framework were published in our White Paper in August 2022. Further outcomes of the first year, including the reports of two co-creation workshops on accountability requirements and risk management for AI systems, can be found on the project webpage. The webpage also details the aspects targeted in the continuation of the project, taking various stakeholder interests into account.

With these developments in mind, the focus of Year 2 of our collaborative research lies in implementing the derived concepts by detailing, designing and refining answers to the guiding questions. More specifically, two workshops will be held with industry stakeholders to investigate responsibilities within an organization throughout the AI lifecycle, as well as ways for companies to ensure ethical compliance. With this, we aim to better understand the AI ecosystem and to educate actors on how to frame and practice accountability for AI systems. Additionally, the project will place a special focus on the financial sector by defining a specific case study to be examined through the accountability lens. Our approach includes investigating the individuals or groups required to take responsibility from a legal and ethical perspective, their expected compliance behavior, and the types of risk management, resolution actions and precise risks that need to be governed. By carefully outlining and defining these factors, the TUM-Fujitsu collaboration strives to move AI accountability from a conceptual stage to industry practice.

“The IEAI uses both its networks and partnerships to enhance its mission of promoting research on AI ethics and providing a platform for meaningful cooperation between a wide range of important stakeholders. Our collaboration with Fujitsu sets a perfect example of how academia and industry can join forces, bring different perspectives as well as address and tackle AI ethics-related challenges. I am thrilled to see how much we can achieve by breaking silos and working together.” – Prof. Dr. Christoph Lütge

The head of the Fujitsu Research Center for AI Ethics, Dr. Daisuke Fukuda, furthermore underscored his support for the project: “The first year of our collaboration unveiled the practical requirements for AI accountability that not only ensure compliance with the EU AI Act but also take the implications for society into account. Consolidating these two sides in one concept is a major insight yielded by multidisciplinary research, in contrast to policymaking. In the Year 2 collaboration with TUM, I am quite certain that putting the first deliverables into practice will unlock advanced use cases across sectors, paving the way to standards aligned with the legislation.”