Seminars

NO.178 Formal Methods for Trustworthy AI-based Autonomous Systems

Shonan Village Center

October 16 - 19, 2023 (Check-in: October 15, 2023)

Organizers

  • Ichiro Hasuo
    • National Institute of Informatics, Japan
  • Einar Broch Johnsen
    • University of Oslo, Norway
  • Martin Leucker
    • University of Lübeck, Germany

Overview

Background and Motivation

Formal methods (FM) have a long and successful history of contributing to the reliability and risk minimization of safety-critical software systems [1]. Like other engineering disciplines, they are based on rigorous mathematical modeling and analysis of system properties. Due to tremendous advances in artificial intelligence (AI), the situation is changing: more and more system functions are implemented and enabled by machine learning (ML) components; that is, they are not explicitly programmed by humans, but automatically generated from large sets of data. Such ML-enabled components are instrumental for the automation of functions in smart, autonomous cyber-physical systems, which already affect our society in many application domains, including self-driving cars, smart grids, smart healthcare, and smart manufacturing. For instance, deep neural networks (DNNs) have become an essential tool for tasks such as image perception and object classification in autonomous vehicles.

However, data-driven learning capabilities are inherently hard to characterize, and their combination with more traditional AI (e.g. rule-based) and system engineering tools is not well understood and can introduce additional risks. For example, DNNs are not robust to adversarial perturbations: even minimal changes to the input, often imperceptible to the human senses, can cause the network to completely misclassify it [2-5]. In safety-critical systems, such misclassification can have dramatic consequences, as in the fatal accident in which a car's self-driving sensor system failed to identify a large truck with a trailer crossing the highway [6]. Moreover, because the operational behavior of learning-enabled components is a function of the data they are trained on, it can be distorted by bias or crippled by gaps in the data set. An AI-based recruiting tool at Amazon, for example, had to be scrapped because it systematically favored men for technical jobs, mirroring the bias present in its training data [7]. A fundamental problem here is that ethical and legal principles like “all human beings are equal” cannot be learned from existing data alone; such guidelines need to be represented and ensured by other techniques, including logical formalisms.

As data-driven learning capabilities may cause unpredictable emergent behavior and can lead to harmful events or violations of social norms and conventions, recent initiatives such as the EU-HLEG Ethics Guidelines for Trustworthy AI [8] call for regulation of AI-based systems, demanding that they be designed and operated so that they are trustworthy and comply with safety, security, privacy, robustness, legal, ethical, and explainability requirements up to a level adequate for the application. Similar considerations are also beginning to find their way into pre-normative standards for system design, such as DIN SPEC 92001-1:2019-04 on AI Life Cycle Processes and Quality Requirements [9].
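To make the adversarial-perturbation problem concrete, the following is a minimal sketch of a fast-gradient-sign attack in the spirit of [4]; the toy PyTorch classifier, the random input, and the perturbation budget are illustrative placeholders, not a real perception model.

```python
# Minimal fast-gradient-sign (FGSM) sketch in the spirit of [4].
# The untrained toy classifier and random "image" are placeholders; with a
# trained perception model, a perturbation of this size can flip the label.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))  # toy classifier
loss_fn = nn.CrossEntropyLoss()

x = torch.rand(1, 1, 28, 28, requires_grad=True)  # stand-in input image
y = torch.tensor([3])                             # its (assumed) true label

# Gradient of the loss with respect to the *input*, not the weights.
loss_fn(model(x), y).backward()

epsilon = 0.05  # perturbation budget: small, visually negligible change
x_adv = (x + epsilon * x.grad.sign()).clamp(0.0, 1.0).detach()

print("clean prediction:    ", model(x).argmax(dim=1).item())
print("perturbed prediction:", model(x_adv).argmax(dim=1).item())
```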

State of the Art and Challenges

The current state of the art in safety and security assurance is based on model-driven system design, which consists in rigorously specifying a system's structure and behavior using formal models. These models enable static and dynamic analysis and verification techniques that can provide comprehensive guarantees about the correctness of systems [10]. While static techniques such as model checking can perform sophisticated analysis of software without actually executing it, their exhaustive nature makes it difficult to scale them without losing precision. Runtime verification (RV) [11] is a more lightweight verification technique that complements design-time analysis by observing the system at runtime, checking intended properties online, and possibly mitigating violations through specified recovery actions. Though model-driven and formal approaches are increasingly adopted in industry, they are generally still applicable only to non-learning systems operating in well-characterized environments, as data-driven learning components and environmental aspects that are not fully known or foreseen at design time introduce uncertainty into the design process in the form of “black boxes” in the system model.

As formal verification techniques for ML components are still quite limited, the difficulty of providing formal guarantees about their behavior is a major obstacle to using ML and AI methods in safety-critical autonomous systems engineering [12]. A recent approach is Reluplex [13], an SMT solver for verifying DNNs that has been successfully evaluated on a DNN implementation of a next-generation airborne collision avoidance system for unmanned aircraft. There has also been a significant effort in recent years to develop tools such as DeepFool [14] and cleverhans [15] that generate perturbations to add to the original training set in order to further test ML classifiers. Other, more sophisticated testing approaches [16] take a semantics-based route, using simulation environments to generate plausible adversarial examples rather than just small perturbations of the input image. However, because attackers can use inputs never considered during testing, testing alone cannot provide guarantees, and formally characterizing the space of inputs that will be processed correctly remains a major challenge that plays a fundamental role in improving the trustworthiness of autonomous systems. We thus need novel formal techniques that offer high levels of guarantee for large-scale autonomous systems with learning-enabled components. One promising direction is new mechanisms that enforce safety and dependability dynamically at runtime. A related architectural approach [17] in control theory uses a decision module to safeguard a high-performance controller (e.g. a DNN) by switching to a pre-certified baseline controller when safe operating conditions are close to being violated. Progress on this challenge could have a tremendous societal and economic impact by reducing the warranty, liability, certification, and design costs of autonomous systems and subsystems with learning components.
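To illustrate the runtime-switching idea behind [17], the sketch below shows a simplex-style decision module that falls back from a high-performance (possibly learned) controller to a pre-certified baseline controller when the monitored state approaches its safe operating boundary; the state variables, controllers, and threshold are hypothetical.

```python
# Schematic sketch of a simplex-style runtime assurance architecture in the
# spirit of [17]; all names, dynamics, and thresholds are hypothetical.
from dataclasses import dataclass

@dataclass
class State:
    distance_to_obstacle: float  # metres
    speed: float                 # metres per second

SAFETY_MARGIN = 5.0  # switch while recovery by the baseline is still possible

def advanced_controller(s: State) -> float:
    """Stand-in for a high-performance, learned controller (e.g. a DNN)."""
    return 1.0   # accelerate

def baseline_controller(s: State) -> float:
    """Stand-in for a simple, pre-certified fallback controller."""
    return -1.0  # brake

def decision_module(s: State) -> float:
    """Use the advanced controller only well inside the safe operating region;
    otherwise hand control to the certified baseline."""
    if s.distance_to_obstacle - s.speed > SAFETY_MARGIN:
        return advanced_controller(s)
    return baseline_controller(s)

print(decision_module(State(distance_to_obstacle=50.0, speed=10.0)))  # 1.0 (advanced)
print(decision_module(State(distance_to_obstacle=12.0, speed=10.0)))  # -1.0 (baseline)
```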

Objectives and Topics of the Workshop

The workshop will be a joint and multi-disciplinary effort to develop novel formal methods for the rigorous design of autonomous systems with learning components. While AI and data-driven learning are essential for achieving autonomy, we believe that they need to be properly combined and complemented with formal, knowledge- and model-based techniques to make system design and operation as trustworthy as needed. Because, in principle, we cannot guarantee that complex engineered systems are completely free of risks and design flaws, we seek to define different classes of trustworthiness and tradeoffs for deciding whether a system is trustworthy enough for its intended use in terms of functionality, safety, security, privacy, lawfulness, and comprehensibility for humans. The workshop aims to foster novel approaches, capable of capturing and effectively balancing these concerns in autonomous systems, by bringing together key people from the FM and AI communities.

Technically, we aim for a multi-stage approach in which trustworthiness is established as far as possible at design time, while ensuring that possible variations during the autonomous system’s operation - due to changes in the environmental context or the self-adaptation of learning capabilities - can be assessed and dealt with at runtime. The envisaged approach will determine which requirements can be guaranteed at design time and which ones are left to be monitored and possibly enforced at runtime; a minimal sketch of such an online monitor follows the topic list below. This requires tight collaboration between different research communities: FM (prominently the rigorous system design and runtime verification areas), control theory and robotics, ML and AI, and semantic technologies. We seek to establish and consolidate a group of experts, with different perspectives and from both academic and industrial research, to coordinate currently fragmented research activities and address the discussed challenges in diverse application areas such as autonomous driving, robotics, industrial IoT and smart manufacturing, smart medical devices and healthcare, and smart energy grids. Research topics and questions that we foresee being addressed during the meeting include:

  • Formal modelling languages capable of expressing key properties related to trustworthiness of AI-based autonomous systems with learning components;
  • Design-time analysis approaches for static verification of system properties expressed in formal languages, including e.g. techniques for model checking properties of DNNs;
  • Runtime verification techniques for AI-based systems including online generation of monitors, detection and diagnosis of critical events, and enforcement of requirements;
  • Novel methods to enforce correctness requirements both during design and at runtime to achieve overall autonomic correctness, including properly balancing between them;
  • Methods for testing and formal verification of learning and self-adapting capabilities of autonomous systems to provide high-level dependability and robustness guarantees;
  • Approaches to couple knowledge generated by runtime analysis with high-level autonomic decision processes (e.g. triggering selective re-learning of components).
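As a concrete illustration of the design-time/runtime split mentioned above, here is a minimal sketch of an online monitor for a bounded-response requirement; the property, event names, and bound are purely illustrative assumptions, not taken from any particular case study.

```python
# Minimal online runtime monitor for a hypothetical bounded-response property:
# "every 'obstacle_detected' event is followed by a 'brake' event within 3 steps".
class BoundedResponseMonitor:
    def __init__(self, trigger: str, response: str, bound: int):
        self.trigger, self.response, self.bound = trigger, response, bound
        self.pending = None  # steps left to observe the response, or None

    def step(self, event: str) -> str:
        """Consume one observed event and return the current verdict."""
        if self.pending is not None:
            if event == self.response:
                self.pending = None          # obligation discharged
            else:
                self.pending -= 1
                if self.pending < 0:         # deadline missed
                    self.pending = None
                    return "violation"       # e.g. trigger a recovery action
        if event == self.trigger and self.pending is None:
            self.pending = self.bound        # start a new obligation
        return "ok"

monitor = BoundedResponseMonitor("obstacle_detected", "brake", bound=3)
trace = ["cruise", "obstacle_detected", "cruise", "cruise", "cruise", "cruise"]
print([monitor.step(e) for e in trace])
# ['ok', 'ok', 'ok', 'ok', 'ok', 'violation']
```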

These topics will be addressed through mini-tutorials given by leading researchers in FM and machine learning/AI to familiarize everyone with the relevant terminology, research methodologies, and current approaches; a selection of shorter, in-depth technical presentations that report open problems and the state of the art (including tool demonstrations for diverse application domains); and focused group discussions moderated by the organizers.

We expect that the seminar will lead to some fundamental insights into FM for AI. In order to make these insights available to a wider scientific community, we plan to publish a post-meeting volume in the Communications of NII Shonan Meetings series by Springer.

Related Previous Workshops and Seminars

We acknowledge that there are both past and upcoming scientific events with related topics that we aim to build upon thematically. Some of them are:

  • Shonan seminar No. 153: The 3rd Controlled Adaptation of Self-adaptive Systems (CASaS) (January 2020): This seminar, the newest in a series, focused on self-adaptive systems (such as robots) that follow a Monitor-Analyze-Plan-Execute control scheme. Control-theoretic methods, together with formal guarantees about their properties, are one important aspect of our proposed seminar.
  • Shonan seminar No. 156: Software Engineering for Machine Learning Systems (SEMLS) (October 2019): This seminar focused on the implementation of complex ML systems from a software engineering perspective, including change and release management, design patterns, and security. It thus complements our effort of formally verified ML with software development aspects.
  • Shonan seminar No. 118: Modelling and Analysing Resilient Cyber-Physical Systems (CPS) (December 2018), No. 121: Towards industrial application of advanced formal methods for cyber-physical system engineering (November 2018), and No. 055: Science and Practise of Engineering Trustworthy Cyber-Physical Systems (TCPS) (October 2014): These events were concerned with theory, methodology, and tools for the systematic design and engineering of CPS, thus dealing with the system engineering aspects of intelligent systems, and formal methods for engineering dependable cyber-physical systems. While we are also leveraging mathematically rigorous formal methods, we will focus more on the learning-enabled components of such systems.
  • Shonan seminar No. 062: Static Analysis Meets Runtime Verification (March 2015): This event focused on runtime verification as a practical means of formal verification based on observing a system at runtime to check its expected behavior, and on static analysis as a fundamental tool for off-line program verification. Our proposed seminar will build on both of these formal techniques to achieve trustworthy autonomous systems.

References

[1] D. Bjørner and K. Havelund, 40 Years of Formal Methods: Some Obstacles and Some Possibilities. 19th International Symposium on Formal Methods (FM), Springer (2014)

[2] B. Biggio, et al., Evasion Attacks against Machine Learning at Test Time. European Conference on Machine Learning and Knowledge Discovery in Databases (ECML/PKDD), Springer (2013)

[3] N. Papernot et al., Practical Black-Box Attacks against Machine Learning. Asia Conference on Computer and Communications Security (AsiaCCS), ACM (2017)

[4] I.J. Goodfellow et al., Explaining and Harnessing Adversarial Examples. CoRR abs/1412.6572, arXiv (2014)

[5] K. Eykholt et al., Robust Physical-World Attacks on Deep Learning Visual Classification. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), (2018)

[6] https://www.theguardian.com/technology/2016/jun/30/tesla-autopilot-death-self-driving-car-elon-musk

[7] https://www.theguardian.com/technology/2018/oct/10/amazon-hiring-ai-gender-bias-recruiting-engine

[8] https://ec.europa.eu/digital-single-market/en/news/ethics-guidelines-trustworthy-ai

[9] https://www.beuth.de/en/technical-rule/din-spec-92001-1/303650673

[10] E. Bartocci et al., Specification-Based Monitoring of Cyber-Physical Systems: A Survey on Theory, Tools and Applications. Lectures on Runtime Verification, Springer (2018)

[11] M. Leucker and C. Schallhart, A Brief Account of Runtime Verification. Journal of Logic and Algebraic Programming 78(3), Elsevier (2009)

[12] X. Huang et al., Safety Verification of Deep Neural Networks. International Conference on Computer Aided Verification (CAV), Springer (2017)

[13] G. Katz et al., Reluplex: an Efficient SMT Solver for Verifying Deep Neural Networks. International Conference on Computer Aided Verification (CAV), Springer (2017)

[14] S. Moosavi-Dezfooli et al., DeepFool: A Simple and Accurate Method to Fool Deep Neural Networks. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), (2016)

[15] I.J. Goodfellow et al., Cleverhans v0.1: An Adversarial Machine Learning Library. CoRR abs/1610.00768, arXiv (2016)

[16] T. Dreossi et al., Semantic Adversarial Deep Learning. International Conference on Computer Aided Verification (CAV), Springer (2018)

[17] L. Sha, Using Simplicity to Control Complexity. IEEE Software 18(4), (2001)