Towards an Accountability Framework for AI Systems

Progress in Artificial Intelligence (AI) technology has been tremendous. Today, the first autonomous vehicles already drive thousands of kilometers on test routes without major intervention by human drivers. Developments in this area are accompanied by increasingly powerful algorithms and methods from the field of machine learning. However, the growing complexity of the techniques used also makes the decisions of these systems more opaque. This lack of transparency means that certain decisions made by an AI system are neither recognizable nor understandable to users, developers, or legislators.

AI’s non-transparent behavior is usually referred to as a “black box,” meaning that only the input and output variables are known to the developer. Explainable AI methods aim to resolve precisely this opacity and to make complex AI systems more understandable and interpretable. This goal often conflicts with developers’ and researchers’ search for quick solutions to technical problems, which leaves questions of transparency and accountability on the sidelines.
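To make the black-box notion concrete, the following is a minimal, illustrative sketch (not one of the project's outputs) of permutation feature importance, a standard model-agnostic explainable-AI technique that probes a model using only its inputs and outputs; the `predict` function and all names here are hypothetical stand-ins:

```python
# A minimal sketch of permutation feature importance: a model-agnostic
# explainable-AI technique that treats the model as a black box and
# uses only its input-output behavior.
import numpy as np

def predict(X):
    # Stand-in for an opaque model: the caller sees only inputs and
    # outputs. (Internally it weighs feature 0 heavily, feature 2 not at all.)
    return 3.0 * X[:, 0] + 0.5 * X[:, 1] + 0.0 * X[:, 2]

def permutation_importance(predict_fn, X, y, n_repeats=10, seed=0):
    """Score each feature by how much shuffling it increases the
    model's mean squared error, without inspecting the model itself."""
    rng = np.random.default_rng(seed)
    baseline = np.mean((predict_fn(X) - y) ** 2)
    importances = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        errors = []
        for _ in range(n_repeats):
            X_perm = X.copy()
            rng.shuffle(X_perm[:, j])  # break feature j's link to the output
            errors.append(np.mean((predict_fn(X_perm) - y) ** 2))
        importances[j] = np.mean(errors) - baseline  # error increase
    return importances

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 3))
y = predict(X)
print(permutation_importance(predict, X, y))
# Feature 0 dominates; feature 2 contributes nothing.
```

Techniques of this kind recover some insight into a black box, but they explain behavior rather than the model's internal reasoning, which is one reason accountability cannot rest on explainability alone.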

However, transparency is necessary for a broad market introduction of AI-accelerated systems, as it is the basis for trust and for the effective implementation of legislation. The aim of the research project is to develop a practical, unified accountability framework supported by transparent decisions regarding AI risks, taking the interests of various stakeholders into account. To work towards the overall research goal of defining accountability for AI systems, four main questions will be investigated:

(1) Who is accountable?

(2) For what is someone accountable, and to whom?

(3) How can you comply with your accountability duties?

(4) How can you give a satisfactory explanation?

The research project will develop new approaches and design comprehensive tools and frameworks that help navigate these questions and arrive at reasonable, defensible answers.

Research Output:

Investigating accountability for Artificial Intelligence through risk governance: A workshop-based exploratory study

Towards an Accountability Framework for AI: Ethical and Legal Considerations

Workshop “Accountability Requirements for AI Applications”

Workshop “Risk Management and Responsibility Assessment for AI Systems”

White Paper “Towards an Accountability Framework for Artificial Intelligence Systems”

Workshop “Risks of AI Systems” – Slides

Workshop “Risks of AI Systems” – Write-Up

Workshop “System of AI Accountability in Financial Services” – Handout

Workshop “System of AI Accountability in Financial Services” – Write-Up

Workshop “System of AI Accountability in Financial Services” – Slides

Related Thesis Work:
“Impacts of AI on Human Rights” by Immanuel Klein.

“The impact of risk perception on trust and acceptability of Artificial Intelligence systems” by Yusuf Olajide Ahmed.

“Risks of AI Systems for Sustainable Development Goals” by Jose Muriel.

“AI-ducation: Can Standardized AI Labels Effectively Enhance Public Understanding of AI?” by Nora Walkembach.

“Understanding Risks: The Impact of Risk Perception on Trust and Acceptability of AI Systems” by Abdullah Ejaz Ahmed (ongoing).

Principal Investigator

Christoph Lütge, TUM School of Social Sciences and Technology

Researchers

Auxane Boch, TUM School of Social Sciences and Technology
Ellen Hohma, TUM School of Social Sciences and Technology
Maria Pokholkova, TUM School of Social Sciences and Technology