Research

I have developed a distinct international research profile by exploring the reasons, causes, and explanations behind the results of verification of hardware and software systems.

Formal verification amounts to automatically proving that a mathematical model of the system satisfies a formal specification. Problems arise when answers are not accompanied by explanations: this reduces users' trust in a positive answer and hampers their ability to debug the system when errors are found.
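
As a minimal illustration of this setting (a toy sketch, not any specific tool; the model, the property, and all names are invented for the example), checking a safety property amounts to exhaustively searching the model's reachable states for a violation, returning a counterexample trace if one exists:

    # Toy explicit-state check of a safety property ("no reachable bad state").
    # The model and all names here are illustrative, not from any real tool.
    from collections import deque

    def check_safety(init, successors, is_bad):
        """Return None if no bad state is reachable, else a counterexample trace."""
        parent = {s: None for s in init}   # breadth-first search bookkeeping
        queue = deque(init)
        while queue:
            state = queue.popleft()
            if is_bad(state):
                # Reconstruct the trace from an initial state -- this is the
                # counterexample that an explanation would then analyse.
                trace = []
                while state is not None:
                    trace.append(state)
                    state = parent[state]
                return list(reversed(trace))
            for nxt in successors(state):
                if nxt not in parent:
                    parent[nxt] = state
                    queue.append(nxt)
        return None  # property proved: no bad state is reachable

    # Example: a 3-bit counter that must never reach the value 7.
    trace = check_safety(init=[0],
                         successors=lambda s: [(s + 1) % 8],
                         is_bad=lambda s: s == 7)
    print(trace)  # [0, 1, 2, 3, 4, 5, 6, 7] -- a counterexample trace

A bare trace like this tells the user that the property fails, but not why; explaining which parts of the trace are actually responsible for the failure is where causal analysis enters.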

An ongoing direction of my research is applying concepts of causality from AI to formal verification and demonstrating their usefulness for causal analysis and explanation of verification results. I co-authored a paper that introduced quantification to the concept of causality, making it possible to rank potential causes from most to least influential and to construct explanations efficiently.
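
For context, the quantitative notion underlying that ranking can be sketched as follows (the standard degree-of-responsibility formulation, stated here informally):

    \[ \mathrm{dr}(X{=}x,\ \varphi) \;=\; \frac{1}{k+1} \]

where k is the size of the smallest change to the rest of the system under which the value of X becomes critical for the outcome φ. A direct counterfactual cause thus gets responsibility 1, and weaker contributors get values closer to 0, which induces the ranking from most to least influential.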

We were the first to use the concepts of causality in software engineering, resulting in the first industrial applications: explanations of counterexamples produced by an IBM verification tool and efficient evaluation of hardware circuits at Intel.

There is currently an annual workshop on Causal Reasoning for Embedded and Safety-Critical Systems and Technologies (CREST), affiliated with ETAPS, and the community is growing. I gave the first keynote talk at CREST in 2016.

My work benefits from 8 years of industrial research experience at IBM, where I had a unique opportunity to contribute to a range of the company’s flagship products.

In recent years, my research has broadened in the following directions:

  • Learning algorithms for formal verification, and for synthesis of hardware and software that captures the user's intent. Both directions integrate learning algorithms with formal verification and synthesis techniques, and aid automation, explanation, and better interaction between humans and tools.
  • Explainable AI, specifically explanations of the decisions of deep neural networks and of reinforcement learning policies, with several publications currently in preparation. We are exploring a possible collaboration with researchers in the Experimental Psychology department at UCL, who are interested in human perception of explanations.