Parallel Session 1.2
Why is Explainable AI important for sustainable & efficient future cognitive computing solutions
- Welcome & Introduction - Martin Kaltenböck (Semantic Web Company) & Malte Beyer-Katzenberger (EC)
- Panel Discussion
- Summary and takeaways - Malte Beyer-Katzenberger (EC) & Martin Kaltenböck
With computing power, new methods, and algorithms becoming more widely available, Artificial Intelligence has become THE topic in and around data management. Huge amounts of (big) data are harvested and ingested into AI and cognitive computing engines to analyse data, detect patterns, and make predictions that enable powerful applications. One concern is that these engines are often “black boxes” built on self-learning algorithms; furthermore, input data is often noisy and not pre-selected according to the requirements of the output. This leads to AI solutions that (i) do not provide useful results, (ii) do not fulfil the requirements of their applications, and (iii) make it very difficult to explain the processes that led to a certain outcome or decision.
Explainable AI, or Transparent AI, refers to applications of artificial intelligence (AI) whose actions can be understood and explained by humans. It contrasts with “black box” AIs that employ complex, opaque algorithms, where even their designers cannot explain why the AI arrived at a specific decision. Explainable AI can be used to implement a right to explanation wherever such a right exists. The technical challenge of explaining AI decisions is sometimes known as the interpretability problem (Source: Wikipedia).
To enable Explainable AI and realise its full potential, semantic technologies can help: they provide better data quality, make it possible to configure the engine using Knowledge Graphs, and help AI engines understand language, thereby ensuring that context and meaning are taken into account to realise truly useful data-driven AI applications for the future.
This session introduces the concept of Explainable AI and explains why it is of such high importance and what benefits it brings to powerful industry applications. In the course of a panel discussion, the risks of “black box AI” and the opportunities of Explainable AI are discussed, as well as the challenges and bottlenecks. Transparency rarely comes for free; there are often tradeoffs between how “smart” an AI is and how transparent it is, and these tradeoffs are expected to grow larger as AI systems increase in internal complexity. Experts from industry, research, ethics and law will bring different viewpoints to the topic, and the audience is invited to join the discussion together with the panellists.
The results of this session will be summarised in the EBDVF 2018 Explainable AI report, which will be published via BDVA (and brought into the BDVA AI Action Group) and via social media (e.g. the XAI LinkedIn group).
Martin Kaltenböck is co-founder and managing partner of the Semantic Web Company and, as CFO, is responsible for financial, legal and organisational matters. Furthermore, he leads and works in several national and international research, industry and government projects, mainly in the areas of project management, requirements engineering, and communication and community activities. He teaches and publishes in the fields of semantic data, information and knowledge management, Linked (Open) Data, Open (Government) Data, and the social semantic web, and he is a frequent speaker at national and international conferences and business events on these topics.
Dr Malte Beyer-Katzenberger studied law & political sciences at Trier and Aix-en-Provence universities and at the College of Europe, Bruges.
After having worked at the Academy of European Law in Trier from 2007-2011, he joined the European Commission, DG CONNECT, in November 2011. He is a policy developer working on an enabling policy framework for data-driven innovation, including discussions on "data ownership", Open Data, and working towards a human-centric data economy.
Andreas Blumauer, MSc IT, studied Computer Science and Business Administration at the University of Vienna. Since 2004, he has been managing partner of the Semantic Web Company, an internationally acknowledged provider of semantic technologies. Andreas is experienced with large-scale IT projects in various industry sectors, and he is also responsible for the product management of the PoolParty Semantic Suite. He has been a pioneer in the area of the semantic web since 2002; he is co-founder of the SEMANTiCS conference series, co-editor of one of the first comprehensive books on the semantic web for the German-speaking community, and has given talks on linked data, knowledge management systems, social software and semantic technologies at numerous international events. Andreas has been a lecturer at the University of Applied Sciences Vienna and the Danube University Krems in the areas of Knowledge Management Systems and Semantic Technologies.
Since 2009, Sarah Spiekermann has chaired the Institute for Information Systems & Society at Vienna University of Economics and Business (WU Vienna), where she also founded the Privacy & Sustainable Computing Lab in 2016. She is the author of the books “Ethical IT Innovation: A Value-based System Design Approach”, “Networks of Control”, and “User Control in Ubiquitous Computing”. Currently, she also co-chairs the IEEE standardization effort P7000 (Model Process for Addressing Ethical Concerns during System Design) and is involved in the Council on Extended Intelligence (CXI) founded by MIT, Harvard and IEEE.
Dietmar Millinger is an IT enthusiast with experience as a founder, managing director, and CTO in the area of automotive embedded systems. He studied computer science at the Technical University of Vienna and received his doctorate in the area of fault-tolerant real-time systems. His specialities are distributed embedded systems and machine learning.
After 12 years in the automotive industry with DECOMSYS and Elektrobit, followed by some time as a consultant, he returned to exploring startup ideas, resulting in active participation in twingz and GREX Professional Makers. As part of his work for twingz, he enjoys hands-on activities related to machine learning, embedded Linux, Java, Python and natural language processing. At GREX Professional Makers, the team supports clients in developing and prototyping innovative ideas.
A strong personal interest of his is AI and the understanding of consciousness, since both topics promise fundamental changes to society in the near future.
Since 2014, he has been working as a lecturer at FH Hagenberg, teaching distributed real-time systems; in 2017 he extended his portfolio with Machine Learning and AI, and since 2018 he has also lectured at FH St. Pölten and FH Technikum Wien.
Sonja Zillner studied mathematics and psychology and completed her PhD in computer science, specializing in Semantics. She is a Senior Key Expert in the field of Semantic Technologies and Artificial Intelligence at Corporate Technology, Siemens AG, and the lead editor of the Strategic Research and Innovation Agenda of the Big Data Value Association (BDVA). She is the author of more than 20 patents in the area of semantics and data analytics. Her research focus lies in the areas of semantic technologies, AI and data-driven innovation.
Zbigniew Jerzak is the Head of the Deep Learning Center of Excellence and Machine Learning Research at SAP, whose mission is to research and develop the machine learning models behind existing and new SAP products. In his previous roles at SAP, Zbigniew was responsible for a number of research and development projects covering, among others, columnar store database engineering, elastic data stream processing, and data visualization. Zbigniew holds a PhD in Distributed Systems from TU Dresden, Germany.