NO.166 Visualization for XAI (Explainable AI)
November 9-12, 2020 (Check-in: November 8, 2020)
- Seok-Hee Hong
- The University of Sydney, Australia
- Daniel Keim
- University of Konstanz, Germany
- Issei Fujishiro
- Keio University, Japan
Explainable AI (XAI) refers to artificial intelligence (AI) techniques whose decisions can be readily understood by humans, thereby increasing trust in those decisions. In contrast, many AI techniques produce "black box" models for which even their designers cannot explain why specific decisions are reached. Visualization of the data or of the generated models plays a significant role in making AI methods understandable and accessible to users.
The technical challenge of XAI is to make AI decisions understandable and interpretable, leading to a higher degree of trust by the users of AI methods. In general, AI methods learn useful rules from the training set; however, they may also learn incorrect rules, which fail to generalize beyond the training data, or even inappropriate rules, which lead to undesired social, political, or racial profiling. XAI methods help humans understand the learned models as well as their relationship to the training and application data. The goal is to assess how well AI decisions are likely to generalize to future real-world data beyond the training set, and to understand their economic, social, and political implications.
XAI is of interest to a wide variety of user groups, including
- AI method developers, who are interested in understanding and improving their AI algorithms but are generally application-agnostic,
- Application end users, who are interested in understanding the AI decisions because they are responsible for the consequences, but are generally model-agnostic,
- Data scientists, who are interested in understanding which AI methods they should apply to the concrete data set at hand.
The role of visualization in XAI has gained significant attention in recent years. Visualizations play an important role because they allow users to quickly get an overview of the learned models as well as their relationship to the various data sets involved (training, test, and application data). With the growing complexity of AI models, the need to understand their inner workings has become critical, and the complex relationships between training, test, and application data, and their effect on the usefulness and trustworthiness of AI decisions, have to be carefully taken into account. Visualizations are powerful techniques for meeting these needs.
The main goal of this workshop is to discuss the role of visualization for XAI methods. In particular, we aim to identify research opportunities for using visualization in XAI, focusing on the Asia-Pacific context. We believe the visualization community can leverage its expertise in creating visual narratives to bring new insight into the often-obfuscated complexity of AI systems.
This workshop aims to bring together world-renowned researchers in visualization and AI to collaboratively develop innovative visualization approaches for XAI, with specific applications to large and complex networks. These approaches address the scalability and complexity issues of analyzing big data arising from application domains including social networks, business intelligence, and network security.
Our specific objectives are:
- We will identify research opportunities in XAI, focusing on the visualization perspectives in the Asia-Pacific context.
- We will form a broader research community with cross-disciplinary collaboration, including computer science and machine learning, with a particular focus on visualization and visual analytics for XAI.
- We will foster exchange between visualization researchers and practitioners, and draw more researchers in the Asia-Pacific region to enter this rapidly growing area of research.
- We will assist emerging researchers in connecting with international researchers, finding industrial contacts, and applying for competitive research grants.
Significance and Innovation
XAI is one of the biggest fundamental challenges in IT research due to the widespread use of AI methods in research and industry. In many application contexts, however, the applicability of the results suffers from the black-box nature of many AI methods, which prevents understandability and interpretability, and ultimately limits trust in the results.
Visualization and visual analytics have great potential to overcome the current limits of AI methods and to significantly increase understandability, interpretability, and trust in AI methods. Innovative visualization- and visual analytics-based XAI methods may therefore be the key enabler for researchers and end users in many application domains and other disciplines.
We believe that this workshop has the potential to set a new research agenda for visualization and visual analytics based XAI research.
- Innovative visualization and visual analytics techniques and solutions for XAI, to be used by domain experts and end users in various applications.
- Joint publications at top conferences and journals in visualization and visual analytics, jointly authored by visualization and AI researchers.
- Joint funding applications for long-term research collaborations continuing the research beyond 2020.
- Academic impact: Research publications at top conferences and journals, with a high number of citations.
- Societal impact: Visualization and visual analytics techniques for XAI will be in high demand by researchers and practitioners in various applications and disciplines to solve complex problems in their domains.
Plan for Workshop Program
- Before the workshop: Organizers will distribute related research papers to participants as advance reading.
- Day 1: 6 Invited Talks on XAI and Open Problem Session (organizers).
- Day 2: Problem Solving Session I based on small group discussion.
- Day 3: Problem Solving Session II based on small group discussion.
- Day 4: Group Report Presentation (from each group) and Planning (for continuation of the research).