Motivation
Given recent advances in Artificial Intelligence (AI) and Machine Learning (ML), not least in the context of deep learning, more and more intelligent systems are entering the spheres of professional and private life – from credit-rating systems to smartwatches. Given this growing involvement in (and influence over) people’s everyday activities, the question of the comprehensibility and/or explainability of intelligent systems, their decisions, and their behavior is receiving growing attention.
From the perspective of AI and ML research, the corresponding questions are varied, ranging from theory (What is an explanation in a technological context? What does it mean to comprehend a system?) to applied research (How can interfaces be made more comprehensible? How does information about system states and processing have to be presented in order to serve as an explanation?). These issues tie into several long-standing strands of research in AI and ML, including, among others, work in neural-symbolic integration and research into ontologies and knowledge representation, but also into human-computer interaction and actual system design.
Since it seems very likely that questions of comprehensibility and explanation in AI and ML will become even more pressing in the near future, and since the AI and ML research community is driving the development of the underlying technologies, a workshop on these topics seems timely and relevant for the large majority of AI*IA attendees.
CEx 2017: Goals
With the growing number of AI systems and applications deployed in everyday life, demands for more comprehensible and/or explainable AI and ML systems are being put forward ever more frequently – both in relation to questions of, e.g., liability and responsibility in (partially or fully) automated decision scenarios, and with regard to more efficient, more enjoyable, and/or more reliable approaches to human-machine interaction.
Unfortunately, as things currently stand, only very rudimentary answers to the corresponding questions are available. Research into the interpretability of ML systems, with few exceptions, stops long before addressing actual complex interaction scenarios with users, and few (if any) generally accepted definitions or characterizations of what it means for an intelligent system to be “comprehensible” or “explainable” are currently available.
CEx therefore addresses fundamental questions about the nature of “comprehensibility” and “explanation” in an AI and ML context, from both a theoretical and an applied perspective. Research into philosophical approximations of what an explanation in AI and ML is (or can be), or of how the comprehensibility of an intelligent system can be formally defined, will be presented alongside work addressing practical questions of how to assess a system’s comprehensibility from a psychological perspective, or how to design better explainable AI and ML systems.
To this end, the workshop brings together a diverse audience, ranging from participants from core areas of AI and ML, to ontologists, cognitive scientists, psychologists, and HCI researchers, as well as practitioners from industry.
The main aim of CEx is to trigger an active exchange of ideas and perspectives, with the aforementioned communities mutually informing each other about their respective approaches and results. The workshop shall thus serve as a starting point for a longer-lasting discussion and exchange, fostering future cross-disciplinary collaborations.