- Lightweight Statistical Explanations for Deep Neural Networks, with Youcheng Sun, Xiaowei Huang, and Daniel Kroening, has been accepted to ECCV 2020.
- Combining Experts’ Causal Judgments, with Dalal Alrajeh and Joseph Y. Halpern, has been accepted to the Artificial Intelligence Journal (AIJ).
- Learning the Language of Software Errors, with Pascal Kesseli, Daniel Kroening, and Ofer Strichman, has been published in the Journal of Artificial Intelligence Research (JAIR) 67: 881-903 (2020).
I am a Reader in the Department of Informatics, King’s College London. I am the Head of the Software Systems group and the coordinator of the Year in Industry programme in the department.
Prior to joining King’s College in 2013, I was a Research Staff Member at IBM Research from 2005 to 2013, and, from 2003 to 2005, a Postdoctoral Associate at Worcester Polytechnic Institute (WPI) and at Northeastern University, and a visiting scientist at the Computer Science and Artificial Intelligence Laboratory (CSAIL) of the Massachusetts Institute of Technology (MIT).
I obtained my PhD in Computer Science from the Hebrew University of Jerusalem in 2003.
My research interests are broadly in investigating the reasons, causes, and explanations behind the results of software engineering and machine learning procedures. Historically, I first investigated the reasons and causes for the results of verification of hardware and software systems. Formal verification amounts to automatically proving that a mathematical model of the system satisfies a formal specification. Problems arise when answers are not accompanied by explanations, which reduces users’ trust in a positive answer and hampers debugging when errors are found. I brought the concepts of causality from AI to formal verification and demonstrated their usefulness for the causal analysis and explanation of verification results. Together with Joe Halpern, I wrote a paper that introduces quantification into the concept of causality, making it possible to rank potential causes from the most to the least influential and to focus on the most influential ones. I pioneered the use of causality in software engineering, leading to the first industrial applications: explanations of counterexamples produced by an IBM hardware verification tool, and efficient evaluation of hardware circuits at Intel.
My current research focus in the area of causes and explanations is mostly on explaining the decisions of deep neural networks. I also have an ongoing research project on explaining reinforcement learning policies.
In other directions, I have ongoing research activity in hardware synthesis, collaborating with TU Graz and the Technion, and in learning for software analysis and exploration, collaborating with Ben-Gurion University.
My work is supported by a Royal Society International Exchanges grant, a Coleman-Cohen Exchange Programme grant, and a Google Faculty Award.
I actively collaborate with a number of academic institutions worldwide, including Oxford, TU Graz, the Technion, Cornell, UCL, Imperial College London, and Belfast University.
I have been fortunate to work with a number of talented students and postdocs.
I am a co-inventor of several patents.
Notable professional activities and appointments
- Editor-in-Chief, IET Software Journal, since 2016.
- Program co-Chair, Computer Aided Verification (CAV) 2018.
- Sponsorship Chair of Federated Logic Conferences (FLoC) 2018.
- Program Committee member of numerous conferences in formal verification and software engineering, including CAV, TACAS, ICSE, FASE, FMCAD, VMCAI, and others (see my CV for a full list).
- Reviewer of proposals for the ERC, EPSRC, the Netherlands Organisation for Scientific Research (NWO), the Austrian Science Fund (FWF), the French National Research Agency (ANR), and the Israel Science Foundation (ISF).