- The ReX project page is updated regularly!
- Quantifying Harm, with Sander Beckers and Joe Halpern, appeared in IJCAI’23.
- On Testing for Discrimination Using Causal Models, with Joe Halpern, appeared in AAAI’22.
- A Causal Analysis of Harm, with Sander Beckers and Joe Halpern, appeared in NeurIPS’22.
Prior to joining King’s College in 2013, I was a Research Staff Member at IBM Research from 2005 to 2013. Between 2003 and 2005, I was a Postdoctoral Associate at Worcester Polytechnic Institute (WPI) and at Northeastern University, and a visiting scientist at the Computer Science and Artificial Intelligence Laboratory (CSAIL) of the Massachusetts Institute of Technology (MIT).
I obtained my PhD in Computer Science from the Hebrew University of Jerusalem in 2003.
My research interests lie in the area of causal reasoning and explainability. I am interested both in the theoretical concepts and in their applications to software engineering and machine learning systems (neural networks).
My current applied research project is an explainability platform for black-box AI: Causal Responsibility-based Explanations (ReX). I am looking for students and postdoctoral researchers for a variety of projects related to ReX and to extending it to other domains.
Historically, my interest in causality arose from investigating the reasons and causes behind the results of verifying hardware and software systems. Formal verification amounts to automatically proving that a mathematical model of a system satisfies its formal specification. Problems arise when answers are not accompanied by explanations, which reduces users’ trust in a positive answer and hinders debugging when errors are found. I brought the concepts of causality from AI to formal verification and demonstrated their usefulness for the causal analysis and explanation of verification results. Together with Joe Halpern, I wrote a paper that introduces quantification into the concept of causality, making it possible to rank potential causes from most to least influential and to focus on the most influential ones. I pioneered the use of causality concepts in software engineering, leading to the first industrial applications (explanations of counterexamples produced by an IBM hardware verification tool and efficient evaluation of hardware circuits at Intel).
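To give a flavour of what quantifying causality means, here is a small illustrative sketch of the degree-of-responsibility idea from that line of work, using the classic majority-voting example: a voter’s responsibility for the outcome is 1/(k+1), where k is the minimal number of other votes that must be flipped before that voter becomes critical. The brute-force search and simple-majority setting below are my own simplifications for illustration, not the general definition from the paper.

```python
from itertools import combinations

def responsibility(votes: list[int], voter: int) -> float:
    """Degree of responsibility of `voter` for the majority outcome:
    1 / (k + 1), where k is the minimal number of OTHER voters whose
    votes must be flipped (without changing the outcome) so that
    flipping `voter` alone would then change the outcome.
    Returns 0.0 if no such contingency exists (the voter is not a cause).
    """
    def majority(vs: list[int]) -> int:
        return 1 if sum(vs) > len(vs) // 2 else 0

    outcome = majority(votes)
    others = [i for i in range(len(votes)) if i != voter]
    for k in range(len(others) + 1):
        for subset in combinations(others, k):
            changed = votes[:]
            for i in subset:
                changed[i] = 1 - changed[i]
            if majority(changed) != outcome:
                continue  # the contingency itself must not change the outcome
            changed[voter] = 1 - changed[voter]
            if majority(changed) != outcome:
                return 1 / (k + 1)  # voter is critical under this contingency
    return 0.0

votes = [1] * 6 + [0] * 5   # a 6-5 vote: every "yes" voter is pivotal
print(responsibility(votes, 0))          # each majority voter: 1.0
print(responsibility([1] * 11, 0))       # unanimous 11-0: 1/6 per voter
print(responsibility(votes, 6))          # a "no" voter is not a cause: 0.0
```

In the 6-5 vote every majority voter is already critical (k = 0), while in a unanimous 11-0 vote five other votes must be flipped before any single voter becomes pivotal, so responsibility drops to 1/6; this is exactly the kind of ranking of causes by influence described above.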
In other directions, I have ongoing research activity in hardware synthesis, collaborating with TU Graz and the Technion, and in learning for software analysis and exploration, collaborating with Ben-Gurion University.
I am actively collaborating with a number of academic institutions worldwide, including Oxford, TU Graz, the Technion, Cornell, the University of Lugano, UCL, Imperial College London, and others.
I have been fortunate to work with a number of talented students and postdocs.
I am a co-inventor of several patents.
Notable professional activities and appointments
- Keynote speaker at FMCAD 2022.
- Editor-in-Chief, IET Software Journal, since 2016.
- Program co-Chair, Computer Aided Verification (CAV) 2018.
- Sponsorship Chair of Federated Logic Conferences (FLoC) 2018.
- Program Committee member of numerous conferences in formal verification and software engineering, including FSE, CAV, TACAS, ICSE, FASE, FMCAD, VMCAI, and others (see my CV for a full list).
- Reviewer of proposals for the ERC, EPSRC, the Netherlands Organisation for Scientific Research (NWO), the Austrian Science Fund (FWF), the French National Research Agency (ANR), and the Israel Science Foundation (ISF).