About me
I am a postdoctoral fellow (until Sept’24) at the Schwartz Reisman Institute for Technology and Society at the University of Toronto, in Prof. Gillian Hadfield’s group, and a research associate with the Vector Institute for Artificial Intelligence (until Dec’24).
My research focuses on human-centric multi-agent systems, spanning multi-agent systems, empirical/behavioural game theory, software engineering, and human-robot interaction. I am also interested in computational methods for safety in AI systems, both at the level of specific applications and at the level of society. This is closely connected to the research umbrella of Cooperative AI, which brings together artificial intelligence and game-theoretic foundations. Some specific themes I have explored are as follows:
Behavioural game theory: How do we formalize “boundedly rational” human behaviour for automated systems that rely primarily on rational-choice and utility-optimizing underpinnings? What safety risks can arise as a consequence? Sample publications: 1, 2, 3
Safety validation of autonomous systems: What new approaches are needed for safety validation that take into account the idiosyncrasies, diversity, and unpredictability of human behaviour? Sample publications: 1, 2, 3
Norms and institutions for cooperative AI agents: How do we design advanced ‘AI agents’ that increasingly interact with our economic, political, and social systems in unprecedented ways? What methodological frameworks are needed for AI agents to interact safely within existing human communities, norms, and institutions, in ways that improve welfare and well-being for all? Sample publications: 1, 2
I received my PhD from the David R. Cheriton School of Computer Science at the University of Waterloo. You can check out my thesis here.
Prior to starting grad school, I spent eight years working in industry, most of which was at IBM Software Labs in India.
News
(Summer 2024) I will be at the following conferences/workshops over the summer: CHAI @ Asilomar, California; SIOE @ UChicago; EC @ Yale. Please feel free to say Hi! if you are attending any of these and would like to chat.
(May 2024) New paper! Drawing on human social structures of norms and institutions, our new paper shows how to design LLM-based generative agents with the capacity for cooperative behaviour. Link
(Feb 2024) New paper out on arXiv with insights on opinion and rhetoric expression in online social systems.
Main takeaway: When there is ideological polarization within a population, ideological institutions (e.g., partisan media) can distort beliefs about the outgroup population and push expressed opinion to one extreme. Affective polarization is a by-product of this dynamic. In other words, “Institutions matter” - in online social systems too. arXiv
(Sep 2023) Attended the NBER Economics of Artificial Intelligence Conference, Fall 2023. Toronto, Canada.
(Aug 2023) I will be at the annual conference of the Society for Institutional and Organizational Economics (SIOE), workshopping our paper on norms and information stewarding. Frankfurt, Germany.
(July 2023) I gave an invited talk at the Cooperative AI Foundation on normative systems as a research agenda for cooperative AI. London, UK. Slides
(June 2023) I will be at the CHAI (Center for Human-Compatible Artificial Intelligence) workshop in California, USA.
(May-June 2023) I will be at AAMAS 2023 in London, UK.
(Feb 2023) New working paper on information stewards and normative persuasion in online social systems.
(Dec 2022) Our paper Revealed multi-objective utility aggregation in human driving (with Kate Larson and Krzysztof Czarnecki) accepted for AAMAS 2023.
(Feb 2022) Starting Sept’22, I will be joining the Schwartz Reisman Institute for Technology and Society at the University of Toronto as a postdoctoral fellow under Prof. Gillian Hadfield.
(Jan 2022) Our paper I Know You Can’t See Me: Dynamic Occlusion-Aware Safety Validation of Strategic Planners for Autonomous Vehicles Using Hypergames (with Maximilian Kahn and Krzysztof Czarnecki) accepted for ICRA 2022.
(Dec 2021) Our paper Generalized dynamic cognitive hierarchy models for strategic driving behavior (with Kate Larson and Krzysztof Czarnecki) accepted for AAAI 2022.
(Oct 2021) Paper on a taxonomy for better understanding the decisions made by an autonomous vehicle planner. A taxonomy of strategic human interactions in traffic conflicts (arXiv link) accepted for the NeurIPS 2021 Cooperative AI Workshop.
(Sept 2021) Paper on applying general theories of behavioral game theory, such as level-k, to dynamic games. Generalized dynamic cognitive hierarchy models for strategic driving behavior arXiv link code
(Sept 2021) Paper on a white-box safety validation framework for AV strategic planners (joint work with Maximilian Kahn). I Know You Can’t See Me: Dynamic Occlusion-Aware Safety Validation of Strategic Planners for Autonomous Vehicles Using Hypergames arXiv link video
(Aug 2021) Waterloo Multi-Agent (WMA) Traffic Dataset released. Dataset
(July 2021) Talk on solving hierarchical games using solution concepts from behavioral game theory, and their application to human driving behavior.
(March 2021) Our paper Solution Concepts in Hierarchical Games under Bounded Rationality with Applications to Autonomous Driving published at AAAI’21. paper code