I am now an AI Research Scientist at Invitae.

I am excited to announce the publication of the book Explainable Human-AI Interaction: A Planning Perspective that I co-wrote with Sarath Sreedharan and Prof. Subbarao Kambhampati.

I received my Ph.D. in Computer Science from Arizona State University. At ASU, I was a member of the Yochan research group, directed by Prof. Subbarao Kambhampati. My Ph.D. thesis shows how an AI agent can reason over the human’s mental model of the agent to synthesize human-aware AI behaviors, and it establishes a taxonomy of interpretable and obfuscatory AI behaviors.

Before joining ASU in 2015, I completed my M.S. in Computer Science at the University of Southern California. At USC, I worked with Dr. T. K. Satish Kumar on multi-agent path planning problems.

If you’d like to contact me, please drop me an email or find me on LinkedIn.

Research

Ph.D. Thesis

Synthesis of Interpretable and Obfuscatory Behaviors in Human-Aware AI Systems

[Dissertation]      [Video]

Advisor

Subbarao Kambhampati, Arizona State University

Committee Members

Ece Kamar, Microsoft Research
David E. Smith (Retired), NASA Ames Research Center
Siddharth Srivastava, Arizona State University
Yu Zhang, Arizona State University

Abstract

In settings where a human and an AI agent coexist, the agent must be able to reason with the human’s preconceived notions about the environment as well as with the human’s perception limitations. In addition, it should be capable of communicating its intentions and objectives effectively to the human-in-the-loop. When an embodied AI agent acts in the presence of human observers, it can synthesize interpretable behaviors, such as explicable, legible, and assistive behaviors, by accounting for the human’s mental model (inclusive of her sensor model) in its reasoning process. This thesis studies behavior synthesis algorithms that focus on improving the interpretability of the agent’s behavior in the presence of a human observer. Further, this thesis studies how environment redesign strategies can be leveraged to improve the overall interpretability of the agent’s behavior. At times, the agent’s environment may also contain purely adversarial entities or mixed entities (i.e., both adversarial and cooperative entities) that try to infer information from the AI agent’s behavior. In such settings, it is crucial for the agent to exhibit obfuscatory behavior that prevents sensitive information from falling into the hands of the adversarial entities. This thesis shows that it is possible to synthesize both interpretable and obfuscatory behaviors using a single underlying algorithmic framework.

Publications

Planning for Attacker Entrapment in Adversarial Settings
B. Cates, A. Kulkarni, S. Sreedharan
in Proceedings of International Conference on Automated Planning and Scheduling (ICAPS) 2023.

Trajectory Constraint Heuristics for Optimal Probabilistic Planning
J. Peterson, A. Kulkarni, E. Keyder, J. Kim, S. Zilberstein
in Proceedings of International Symposium on Combinatorial Search (SoCS) 2022.

Explainable Human-AI Interaction: A Planning Perspective
S. Sreedharan, A. Kulkarni, & S. Kambhampati
Synthesis Lectures on Artificial Intelligence and Machine Learning
Morgan & Claypool Publishers (184 pages) 2022.

Synthesis of Interpretable and Obfuscatory Behaviors in Human-Aware AI Systems
A. Kulkarni
Ph.D. Thesis, Arizona State University 2021.
Award: [ICAPS Best Dissertation Award Honorable Mention 2022]

A Unifying Bayesian Formulation of Measures of Interpretability in Human-AI Interaction
S. Sreedharan, A. Kulkarni, D. Smith, & S. Kambhampati
in Proceedings of the International Joint Conference on Artificial Intelligence (IJCAI) Survey Track 2021.

Planning for Proactive Assistance in Environments with Partial Observability
A. Kulkarni, S. Srivastava, & S. Kambhampati
in Proceedings of Workshop on Explainable AI Planning at International Conference on Automated Planning and Scheduling (ICAPS) 2021.

A Bayesian Account of Measures of Interpretability in Human-AI Interaction
S. Sreedharan, A. Kulkarni, T. Chakraborti, D. Smith, & S. Kambhampati
in Proceedings of Workshop on Cooperative AI at Conference on Neural Information Processing Systems (NeurIPS), 2020, also appeared in the Workshop on Explainable AI Planning at International Conference on Automated Planning and Scheduling (ICAPS) 2020.

Designing Environments Conducive to Interpretable Robot Behavior
A. Kulkarni, S. Sreedharan, S. Keren, T. Chakraborti, D. Smith, & S. Kambhampati
in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) 2020.

Signaling Friends and Head-Faking Enemies Simultaneously: Balancing Goal Obfuscation and Goal Legibility
A. Kulkarni, S. Srivastava, & S. Kambhampati
as an Extended Abstract in Proceedings of International Conference on Autonomous Agents and Multiagent Systems (AAMAS) 2020.

Explicability? Legibility? Predictability? Transparency? Privacy? Security? The Emerging Landscape of Interpretable Agent Behavior
T. Chakraborti, A. Kulkarni, S. Sreedharan, D. Smith, & S. Kambhampati
in Proceedings of International Conference on Automated Planning and Scheduling (ICAPS) 2019.

Design for Interpretability
A. Kulkarni, S. Sreedharan, S. Keren, T. Chakraborti, D. Smith, & S. Kambhampati
in Proceedings of Workshop on Explainable AI Planning at International Conference on Automated Planning and Scheduling (ICAPS) 2019.

Explicability as Minimizing Distance from Expected Behavior
A. Kulkarni, Y. Zha, T. Chakraborti, S. Vadlamudi, Y. Zhang, & S. Kambhampati
as an Extended Abstract in Proceedings of International Conference on Autonomous Agents and Multiagent Systems (AAMAS) 2019.

A Unified Framework for Planning in Adversarial and Cooperative Environments
A. Kulkarni, S. Srivastava, & S. Kambhampati
in Proceedings of AAAI 2019, also appeared in the International Conference on Automated Planning and Scheduling (ICAPS) 2018 Workshop on Planning and Robotics.
Media: [YouTube]

Resource Bounded Secure Goal Obfuscation
A. Kulkarni, M. Klenk, S. Rane, & H. Soroush
appeared in the AAAI 2018 Fall Symposium on Integrating Planning, Diagnosis and Causal Reasoning, and in AAAI 2019 Workshop on Plan, Activity and Intent Recognition.

Projection-Aware Task Planning and Execution for Human-in-the-Loop Operation of Robots in a Mixed-Reality Workspace
T. Chakraborti, S. Sreedharan, A. Kulkarni, & S. Kambhampati
in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) 2018, also appeared in HRI 2018 Workshop on Virtual, Augmented and Mixed Reality for Human-Robot Interaction, and in ICAPS 2018 Workshop on User Interfaces & Scheduling & Planning.
Media: [YouTube]

Explicability as Minimizing Distance from Expected Behavior
A. Kulkarni, Y. Zha, T. Chakraborti, S. Vadlamudi, Y. Zhang, & S. Kambhampati
in the International Conference on Automated Planning and Scheduling (ICAPS) 2018 Workshop on Explainable AI Planning.
Media: [YouTube]

Augmented Workspace for Human-in-the-Loop Plan Execution
T. Chakraborti, S. Sreedharan, A. Kulkarni, & S. Kambhampati
in ICAPS 2017 Workshop on User Interfaces & Scheduling & Planning; and ICAPS 2017 System Demonstrations and Exhibits.
Media: [YouTube] [U.S. Microsoft Imagine Cup 2017 Finalist] [PBS 8 Cronkite News] [ASU Fulton School News] [ACM Tech News]

Plan Explicability and Predictability for Robot Task Planning
Y. Zhang, S. Sreedharan, A. Kulkarni, T. Chakraborti, H. H. Zhuo, & S. Kambhampati
in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA) 2017, and also appeared in Robotics: Science and Systems (RSS) 2016 Workshop on Planning for Human-Robot Interaction: Shared Autonomy and Collaborative Robotics.
Media: [YouTube]

Explicable Plans for Human-Robot Teams
A. Kulkarni
in AIJ Student Spotlight, Robotics: Science and Systems (RSS) 2016 Workshop on Planning for Human-Robot Interaction: Shared Autonomy and Collaborative Robotics.
Media: [YouTube]