Seminars

No. 139 Causal Reasoning in Systems

Shonan Village Center

June 24 - 27, 2019 (Check-in: June 23, 2019)

Organizers

  • Gregor Gössler
    • INRIA, France
  • Stefan Leue
    • University of Konstanz, Germany
  • Shin Nakajima
    • National Institute of Informatics, Japan

Overview

Abstract

The discussion of causality, which has its roots in the philosophy of science, has recently gained attention in relation to IT systems, in particular software, hardware, and cyber-physical systems. Determining causalities is of essential importance when asserting the safety of a system (what can cause a hazardous situation to occur?), when analyzing failures of critical systems (why and how did an accident occur?), or when performing fault localization in hardware or software, among others. The goal of this seminar is to gain and deepen an understanding of the available means to reason about causality, of the approaches that various disciplines inside computer science as well as adjacent fields use to determine causality, and of the notions of causality that need to be developed in order to deal with changing paradigms of computing.

Keywords. Causality, explanations, philosophy of science, artificial intelligence, counterfactual reasoning, failure analysis, fault localization and repair, systems engineering, formal verification, logics.

1 Background

Causal reasoning aims at establishing causal relationships between events deemed to be causes and other events considered to be effects. Historically, the scientific discourse on causality has been conducted in philosophy and dates back even to pre-Socratic times. Reasoning about causality is often seen as the construction of explanation models for real-life phenomena or situations. It is therefore an important concern in engineering (see, e.g., [19]), in the sciences [21], and in legal proceedings (see, e.g., [8, 16]). More recently, reasoning about causality has also become an important concern in the philosophy of science [10]. This is due to the fact that we are designing and implementing increasingly complex systems, and hence the discovery of causal relationships is of pivotal importance.

Causality notions are commonly based on some generally agreed-upon assumptions, in particular that there is a temporal order between the cause and the effect. Beyond that, various notions of causality have been proposed, see e.g. [15, 4, 18, 7]. Which of these is selected in a particular field depends on domain-specific conventions and on its suitability for reasoning in that domain. While it is easy to define a “wrong” notion of causality, for instance one that does not respect the temporal order of cause and effect, there is no universally agreed-upon “right” definition of causality.

Of particular importance in engineering science is the notion of counterfactual causality. It was first proposed by Hume in the 18th century [9], rephrased for use in a technical setting by Lewis in the 1970s [15], and further developed by Halpern and Pearl in the 2000s [18, 7, 6]. The counterfactual definition of causality has become a dominant foundation for causal reasoning in the sciences and in engineering. It goes as follows: an event a is considered to be a cause of another event b, called the effect, if (I) both a and b occur in the actual world (execution, log, …) w, and (II) whenever a does not occur in the worlds that are “close” to w, then b does not occur either. However, the notion of “closeness” in this seemingly simple definition has caused much ink to flow.
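
As a concrete illustration, the following minimal sketch checks plain but-for (counterfactual) dependence on a toy structural model in the spirit of the Halpern-Pearl framework [18, 7, 6]. The forest-fire scenario, the variable names, and the helper functions are assumptions made for this example only; the check implements only condition (II) above and deliberately omits the contingencies of the full Halpern-Pearl definition.

# Minimal sketch of a naive but-for (counterfactual) cause check on a toy
# structural model with binary variables.  Scenario (an assumption for this
# example): a forest fire (FF) starts if there is a lightning strike (L) or a
# dropped match (M).

# Structural equations for the endogenous variables, in evaluation order.
EQUATIONS = [
    ("FF", lambda w: int(w["L"] or w["M"])),
]

def evaluate(context, do=None):
    """Compute all variable values for an exogenous context, with optional
    do()-style interventions overriding both context values and equations."""
    do = do or {}
    world = dict(context, **{v: x for v, x in do.items() if v in context})
    for var, eq in EQUATIONS:
        world[var] = do[var] if var in do else eq(world)
    return world

def but_for_cause(context, cause_var, effect_var):
    """True iff flipping the actual value of cause_var flips the actual value
    of effect_var, i.e. the effect counterfactually depends on the cause."""
    actual = evaluate(context)
    flipped = evaluate(context, do={cause_var: 1 - actual[cause_var]})
    return flipped[effect_var] != actual[effect_var]

if __name__ == "__main__":
    # With only the match dropped, the match is a but-for cause of the fire.
    print(but_for_cause({"L": 0, "M": 1}, "M", "FF"))   # True
    # With both lightning and the match, neither is a but-for cause on its own;
    # this over-determination is what the full Halpern-Pearl definition
    # addresses with contingencies.
    print(but_for_cause({"L": 1, "M": 1}, "M", "FF"))   # False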

Halpern and Pearl’s binary definition of causality has subsequently been extended to notions related to causal relevance [1]: the concepts of responsibility and blame assign a quantitative measure of the degree to which events are causal factors.
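
To make the notion of responsibility concrete, the brute-force sketch below computes the degree of responsibility of a single vote for the outcome of a toy majority vote, using the 1/(k+1) measure of Chockler and Halpern [1], where k is, in this setting, the minimal number of other votes that must be flipped before the chosen vote becomes critical. The voting scenario and the helper names are illustrative assumptions, not part of any existing tool.

from itertools import combinations

def outcome(votes):
    """Candidate A wins iff it gets a strict majority of the votes (1 = A)."""
    return int(sum(votes) * 2 > len(votes))

def responsibility(votes, i):
    """Degree of responsibility of voter i for the actual outcome: 1/(k+1),
    where k is the size of the smallest set of other votes whose flipping
    (while preserving the outcome) makes voter i critical."""
    actual = outcome(votes)
    others = [j for j in range(len(votes)) if j != i]
    for k in range(len(others) + 1):
        for changed in combinations(others, k):
            # Contingency: flip the votes of the chosen k other voters.
            contingent = [1 - v if j in changed else v
                          for j, v in enumerate(votes)]
            if outcome(contingent) != actual:
                continue   # the contingency alone must not change the outcome
            # Voter i is critical if flipping its vote now flips the outcome.
            flipped = list(contingent)
            flipped[i] = 1 - flipped[i]
            if outcome(flipped) != actual:
                return 1.0 / (k + 1)
    return 0.0   # voter i can never become critical: no responsibility

if __name__ == "__main__":
    print(responsibility([1, 1, 1], 0))   # 3-0 vote: one other flip needed, 0.5
    print(responsibility([1, 1, 0], 0))   # 2-1 vote: already critical, 1.0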

2 State of the Art

The main focus of this seminar will be on discussing causal reasoning in the context of computing systems, that is, systems in which software or hardware plays an important role. Causal reasoning for computing systems has been investigated in the following areas:

  • In concurrent and distributed systems, notions of causality have been defined to relate causes and effects that do not occur on the same local machine. Notable in this area are the happened-before relation of Lamport [12], which establishes a temporal order on events occurring on different processes, as well as the analysis of causality in models of true concurrency such as event structures [17]; a small sketch of deciding the happened-before relation is given after this list.
  • Causal analysis plays a central role when localizing faults in software and systems engineering. These methods typically contrast fault-free and faulty system executions in order to identify and locate events or code statements that are causal for a system failure, thereby performing an implicit counterfactual analysis. Examples of such methods include Delta Debugging [22] and nearest neighbor methods [5]; a simplified sketch of Delta Debugging also follows this list.
  • In hardware systems, causal reasoning has been used to identify and explain property violations that have been detected by model checking [2].
  • Causal analysis, in particular counterfactual reasoning, has been used to support model-based safety analysis of system architecture models. This is helpful for generating fault trees, which are a dominant technique used in safety cases for critical systems [14, 13].
  • Similarly, counterfactual causal reasoning is used in the ex post facto analysis of accidents in safety-critical systems, which leads to models for failure explanation and responsibility attribution [11, 20, 3].
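
As an illustration of the first item above, the sketch below decides Lamport's happened-before relation [12] between two events of a small message-passing trace by computing vector timestamps and comparing them componentwise. The trace encoding and the helper names are assumptions made for this example; vector clocks are one standard way, though not the only one, to decide the relation.

def vector_timestamps(n_procs, trace):
    """Assign a vector timestamp to every event of a message-passing trace.

    trace is a list of events, each one of
      ("local", p, label)   internal event on process p
      ("send",  p, msg_id)  process p sends message msg_id
      ("recv",  p, msg_id)  process p receives message msg_id
    Returns a dict mapping each event's index in the trace to its timestamp.
    """
    clocks = [[0] * n_procs for _ in range(n_procs)]   # one clock per process
    msg_stamp = {}    # timestamp carried by each message
    stamps = {}
    for idx, (kind, p, arg) in enumerate(trace):
        if kind == "recv":
            # Merge the sender's knowledge before counting the receive event.
            clocks[p] = [max(a, b) for a, b in zip(clocks[p], msg_stamp[arg])]
        clocks[p][p] += 1                              # each event ticks its own entry
        stamps[idx] = list(clocks[p])
        if kind == "send":
            msg_stamp[arg] = list(clocks[p])
    return stamps

def happened_before(ts_e, ts_f):
    """True iff the event with timestamp ts_e happened before the one with ts_f."""
    return all(a <= b for a, b in zip(ts_e, ts_f)) and ts_e != ts_f

if __name__ == "__main__":
    # Two processes: P0 does a local step and sends m1; P1 works, then receives m1.
    trace = [("local", 0, "a"), ("send", 0, "m1"),
             ("local", 1, "b"), ("recv", 1, "m1"), ("local", 1, "c")]
    ts = vector_timestamps(2, trace)
    print(happened_before(ts[1], ts[4]))   # send of m1 precedes P1's last event: True
    print(happened_before(ts[0], ts[2]))   # events on P0 and P1 with no message
    print(happened_before(ts[2], ts[0]))   # between them are concurrent: False, False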
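
For the second item, the following simplified sketch conveys the core idea of Delta Debugging [22]: repeatedly remove chunks of a failure-inducing input as long as a test oracle reports that the failure persists. The test oracle here is a hypothetical stand-in, and the sketch keeps only the complement-removal step while simplifying the granularity handling of the full ddmin algorithm.

def ddmin_simplified(failing_input, test, granularity=2):
    """Return a smaller input that still makes test() report the failure.

    test(candidate) must return True iff the candidate still triggers the bug.
    """
    assert test(failing_input), "the initial input must reproduce the failure"
    current = list(failing_input)
    n = granularity
    while len(current) >= 2:
        chunk = max(1, len(current) // n)
        reduced = False
        for start in range(0, len(current), chunk):
            candidate = current[:start] + current[start + chunk:]
            if candidate and test(candidate):
                current = candidate          # the failure survives the removal
                n = max(n - 1, 2)
                reduced = True
                break
        if not reduced:
            if chunk == 1:
                break                        # no single element can be removed
            n = min(n * 2, len(current))     # refine granularity and retry
    return current

if __name__ == "__main__":
    # Hypothetical bug: the program fails whenever both 'a' and 'g' are present.
    buggy = lambda inp: "a" in inp and "g" in inp
    print("".join(ddmin_simplified(list("abcdefg"), buggy)))   # prints "ag"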

3 Objectives of the Seminar

Up to this point, there is no clearly identifiable community addressing causal reasoning in computing systems. A first step towards community building was achieved through the Workshop on Causal Reasoning for Embedded and safety-critical Systems Technologies (CREST), co-located with the ETAPS conferences and organized in 2016 by G. Gössler and O. Sokolsky and in 2017 by A. Groce and S. Leue. The third edition of this workshop was co-located with ETAPS 2018 in Thessaloniki.

A major goal of this seminar is to support community building in the area of causal reasoning in computing systems. This community building will be supported by an exchange on the methods used in causal reasoning and by considering application examples and case studies. Further key questions concern the ground truths on which causal reasoning is based and the insights that can be derived from causal analysis. A further important aspect will be the use of automation and tools in causal reasoning.

We envision that the following research questions will be addressed in the course of the seminar:

  • What are current research activities in the area of causal reasoning, and what application scenarios are considered? What new and promising application areas of causal reasoning can be identified? In the light of changing paradigms of computing, how will causal reasoning have to change? For instance, what is the impact of the autonomy of cyber-physical systems on the notion of causality? What impact does emergent behavior of large collections of computing devices have on causality? Can causality analysis help in explaining the result of a program, for instance, the decisions of deep neural networks? How to generate useful explanations?
  • How to characterize causality? Is there a better way to design “good” definitions of causality than relying on a trial-and-error scheme that assesses candidate definitions on a host of textbook examples?
  • What can causal reasoning about computing systems and causal reasoning in social contexts, for instance in litigation, tort law, or economics, learn from each other?
  • How can causal reasoning be applied to security and privacy properties, e.g., to determine the actors responsible for information leakage?
  • What calculi and tools are available to support causal reasoning? For which type of tools is there a demand, and what are the desiderata for such tools?
  • How to scale causal analysis techniques other than statistical approaches to real-world applications? How do causal analysis and abstraction compose? How to design systems for accountability, in the sense that in the case of a system failure the causes can be determined automatically?
  • Is there a compendium of open or unsatisfactorily solved problems?

We expect that the seminar will lead to some fundamental insights into causal reasoning for computing systems. In order to make these insights available to a wider scientific community, we plan to publish a post-meeting volume in the Communications of NII Shonan Meetings series of Springer.

References

[1] H. Chockler and J.Y. Halpern. Responsibility and blame: A structural-model approach. J. Artif. Intell. Res. (JAIR), 22:93–115, 2004.
[2] G. Fey, S. Staber, R. Bloem, and R. Drechsler. Automatic fault localization for property checking. IEEE Trans. on CAD of Integrated Circuits and Systems, 27(6):1138–1149, 2008.
[3] G. Gössler and D. Le Métayer. A general framework for blaming in component-based systems. Science of Computer Programming, 113(3):223–235, 2015.
[4] C.W.J. Granger. Testing for causality. Journal of Economic Dynamics and Control, 2:329–352, 1980.
[5] A. Groce, S. Chaki, D. Kroening, and O. Strichman. Error explanation with distance metrics. STTT, 8(3):229–247, 2006.
[6] J. Y. Halpern. A modification of the Halpern-Pearl definition of causality. In Qiang Yang and Michael Wooldridge, editors, Proc. Twenty-Fourth International Joint Conference on Artificial Intelligence, IJCAI 2015, Buenos Aires, Argentina, July 25-31, 2015, pages 3022–3033. AAAI Press, 2015.
[7] J.Y. Halpern and J. Pearl. Causes and explanations: A structural-model approach. Part I: Causes. British Journal for the Philosophy of Science, 56(4):843–887, 2005.
[8] H.L.A. Hart and T. Honoré. Causation in the Law. Oxford University Press, 2nd edition, 1985.
[9] D. Hume. A Treatise of Human Nature. 1739.
[10] M. Kistler. Causation in contemporary analytical philosophy. Quaestio-Annuario di storia della metafisica, 2:635–668, 2002.
[11] P. B. Ladkin. Causal reasoning about aircraft accidents. In Floor Koornneef and Meine van der Meulen, editors, Computer Safety, Reliability and Security: 19th International Conference, SAFECOMP 2000, Rotterdam, The Netherlands, October 24–27, 2000, Proceedings, pages 344–360, Berlin, Heidelberg, 2000. Springer Berlin Heidelberg.
[12] L. Lamport. Time, clocks, and the ordering of events in a distributed system. CACM, 21(7):558–565, 1978.
[13] F. Leitner-Fischer and S. Leue. Causality checking for complex system models. In VMCAI, volume 7737 of Lecture Notes in Computer Science, pages 248–267. Springer, 2013.
[14] F. Leitner-Fischer and S. Leue. Probabilistic fault tree synthesis using causality computation. IJCCBS, 4(2):119–143, 2013.
[15] D. Lewis. Counterfactuals. Blackwell, 1973.
[16] M.S. Moore. Causation and Responsibility. Oxford, 1999.
[17] M. Nielsen, G. D. Plotkin, and G. Winskel. Petri nets, event structures and domains, Part I. Theor. Comput. Sci., 13:85–108, 1981.
[18] J. Pearl. Causality: Models, Reasoning, and Inference. Cambridge University Press, 2000.
[19] M. Steinder and A.S. Sethi. A survey of fault localization techniques in computer networks. Science of Computer Programming, 53(2):165 – 194, 2004. Topics in System Administration.
[20] S. Wang, A. Ayoub, R. Ivanov, O. Sokolsky, and I. Lee. Contract-based blame assignment by trace analysis. In Linda Bushnell, Larry Rohrbough, Saurabh Amin, and Xenofon D. Koutsoukos, editors, 2nd ACM International Conference on High Confidence Networked Systems (part of CPS Week), HiCoNS 2013, Philadelphia, PA, USA, April 9-11, 2013, pages 117–126. ACM, 2013.
[21] D.S. Weld and J. de Kleer, editors. Readings in Qualitative Reasoning about Physical Systems, chapter 9: Causal Explanations of Behavior. Morgan Kaufmann, 1990.
[22] A. Zeller. Why Programs Fail. Elsevier, 2009.

*Special Web Page for participants of Seminar No. 139:
https://project.inria.fr/shonan139/

Report

No.139.pdf