Symposium at the AAAI 2023 Fall Symposium Series

Westin Arlington Gateway, Arlington, VA, USA
October 25-27, 2023

Talks and Presentations

The slides of the Keynote talks and paper presentations can be found at

About the symposium

This symposium focuses on agent teaming in mixed-motive situations that arise naturally when agents have different goals, incentives, and decision-making processes. Such multi-party interactions are common in various settings, ranging from organizational group decision-making to online social networks. In this symposium, we consider an “agent” to be either a goal-directed computational agent, whether embodied or not, or a human. Agents may be tempted to prioritize their individual interests over the long-term success of the group, leading to competition, cooperation, coordination, or indifference towards different subgroups. To navigate these dynamics, agents need to understand the goals and intentions of others, identify potential allies or adversaries, align their values with others, and carefully manage information sharing between allies and adversaries. When humans and computational AI agents collaborate in mixed teams, further complexities arise, including issues of language comprehension, decision-making transparency, and social cue interpretation.

We aim to bring together experts and researchers from various research communities with diverse backgrounds (multi-agent/multi-robot systems, human-agent/robot interaction, artificial intelligence, organizational behavior, etc.) to discuss key challenges in agent teaming within mixed-motive situations:

  • How should agents take actions, given that actions reveal information to both allies and adversaries?
  • How can agents actively align their values with their teammates or take actions to align the values of their teammates with theirs?
  • How can agents identify allies, adversaries, or other non-cooperative agents and model subteams and subteam motives within the mixed-motive scenario?
  • What kinds of representations best enable mixed-motive interactions?
  • How can agents assess the intent and proficiency of other agents?
  • How can agents assess the degree of cooperation by other agents?
  • How should agents communicate with cooperative and non-cooperative teammates?

Symposium format

The symposium is in-person and will include invited keynote talks, panels, breakout discussions, talks by authors of accepted papers, and poster sessions.

Keynote Speakers

  • Prof. Subbarao Kambhampati (Arizona State University)

Speaker Bio:
Dr. Subbarao Kambhampati is a professor of computer science at Arizona State University. He studies fundamental problems in planning and decision making, motivated in particular by the challenges of human-aware AI systems. He is a fellow of the Association for the Advancement of Artificial Intelligence, the American Association for the Advancement of Science, and the Association for Computing Machinery, and was an NSF Young Investigator. He has served as president of the Association for the Advancement of Artificial Intelligence, a trustee of the International Joint Conference on Artificial Intelligence, chair of AAAS Section T (Information, Communication and Computation), and a founding board member of the Partnership on AI. His research, as well as his views on the progress and societal impacts of AI, has been featured in multiple national and international media outlets. He can be followed on Twitter @rao2z.

Talk title: “Leveraging Mental Models for Cooperation & Competition in Human-AI Teams”
Reasoning with the mental models of the humans in the loop is critical for human-AI interaction, be it cooperative or adversarial. I will describe our mental-model-based framework for human-AI interaction and show how it can be used for cooperative or deceptive interactions. We will consider both settings without explicit communication beyond observations of behavior, which support explicable and obfuscative behavior, and settings with communication, which lead to explainable and manipulative behavior. Much of this work is described in our monograph on Explainable Human-AI Interaction.

  • Prof. Gita Sukthankar (University of Central Florida)

Speaker Bio:
Dr. Gita Sukthankar is a Professor in the Computer Science Department at the University of Central Florida. She received her Ph.D. from the Robotics Institute at Carnegie Mellon University and an A.B. in psychology from Princeton University. She is a recipient of AFOSR Young Investigator, DARPA CSSG, and NSF CAREER awards, as well as numerous UCF awards for research excellence. Her current research centers on multi-agent systems, computational social science, and human-robot interaction. She has served on the boards of the International Foundation for Autonomous Agents and Multiagent Systems (IFAAMAS) and DARPA’s Information Science and Technology (ISAT) advisory group, and has edited two books: Plan, Activity, and Intent Recognition and Social Interactions in Virtual Worlds.

Talk title: “Debugging Dysfunctional Teams”
Andrew Carnegie had a famous quote about teamwork: “Teamwork is the ability to work together toward a common vision. The ability to direct individual accomplishments toward organizational objectives. It is the fuel that allows common people to attain uncommon results.” However, the reality is that working in teams, particularly virtual ones, can be frustrating, and many teams consistently fail to reach their objectives. This talk examines teams across several different domains: software engineering, search and rescue, and coordinated game play. First, we present a case study on the productivity of software engineering teams based on their GitHub activities. We introduce a new sequential pattern mining technique for extracting patterns that distinguish good teams from bad ones and also examine the use of natural language processing to detect conflict in teams in a way that is generalizable across datasets. Our findings have implications for the development of automation to assist human teamwork.

  • Dr. Marc Steinberg (Office of Naval Research)

Speaker Bio:
Dr. Marc Steinberg has been the Science of Autonomy Program Officer at the Office of Naval Research (ONR) since the program’s creation in 2009, and is now also a member of the team that manages the Science of Artificial Intelligence program. His program funds highly multi-disciplinary research to develop the foundations of these areas in terms of rigorous mathematical methods, general scientific principles, new experimental paradigms, and theory-based tools to facilitate adoption, such as for verification and validation, safety, and robustness. The fields involved include dynamics and control theory, planning, optimization, machine learning, information theory, game theory, physics, and human factors, as well as related fields such as biology, oceanography, cognitive science, psychology, and neuroscience. Prior to coming to ONR, he worked for 20 years in multiple positions within the naval laboratory system, reaching the level of technical fellow. As a laboratory researcher, he worked on basic and applied research projects exploring neural-network and knowledge-based forms of artificial intelligence, autonomous control, vehicle management systems, prognostics and health management, aviation safety, and robust, adaptive, nonlinear, and reconfigurable control. He has authored or co-authored papers across this range of subjects and received numerous professional society awards for his contributions, including the Derek George Astridge Award for Contribution to Aerospace Safety (British Institution of Mechanical Engineers), the Dr. George Rappaport Best Paper Award (IEEE), the 2nd Best Paper of Conference Award at the AIAA Guidance, Navigation, and Control Conference, and two Pathfinder Best Paper awards at AUVSI Unmanned Systems North America. In 2014, he received the Navy Meritorious Service Award, the third-highest career award that can be received by a civilian. He holds B.S. and M.S. degrees in Mechanical Engineering from Lehigh University and a second M.S. degree in Industrial and Human Factors Engineering.

Talk Title: “Interactive Discussion: Research Challenges in Mixed-Motive Teams”

  • Prof. Jean Oh (Carnegie Mellon University)

Speaker Bio:
Dr. Jean Oh is an Associate Research Professor at the Robotics Institute at Carnegie Mellon University. Her research resides at the intersection of vision, language, and planning in robotics, with a current focus on developing high-level intelligence for robots in the domains of autonomous navigation and creativity. Her research group, the roBot Intelligence Group (BIG), includes members from diverse disciplines including Robotics, Language Technologies, Machine Learning, Computer Science, and Mechanical Engineering. Her team’s work has won several best paper awards at the IEEE International Conference on Robotics and Automation (ICRA) and the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). Her recent work on the FRIDA painting robot and an AI Pilot has been featured in media around the world, including The New York Times, The Telegraph, Petit Quotidien, the Australian Broadcasting Corporation, Aviation Today, Maeil Business Newspaper, and the Korean Broadcasting System. Jean received her Ph.D. in Language and Information Technologies from Carnegie Mellon University, her M.S. in Computer Science from Columbia University, and her B.S. in Biotechnology from Yonsei University in South Korea.

Talk Title: “Making Artificial Intelligence Measurable”

Evaluation is a key part of scientific research and technological development, and researchers strive to define standard evaluation metrics and methodologies. As the field of Artificial Intelligence (AI) gradually pervades areas previously considered uniquely human abilities, such as social compliance or creativity, we face the need for an innovative evaluation paradigm for measuring AI. For instance, the Turing test, introduced in 1950, was devised to evaluate whether an AI system’s behavior is indistinguishable from that of a human. In this talk, I invite the audience to think about how we can assess success in developing sophisticated AI beyond the Turing test, using two problem domains for the discussion: social robot navigation and creative AI. I will also briefly share ongoing efforts on developing metrics for measuring AI.


Panelists

  • Dr. Marc Steinberg (Office of Naval Research)
  • Prof. Matthew Taylor (University of Alberta)
  • Prof. Missy Cummings (George Mason University)
  • Dr. Samantha Dubrow (MITRE)
  • Dr. Laura Hiatt (Naval Research Lab)
  • Dr. Edmund Hunt (University of Bristol)

Call for Papers

The symposium invites submissions related, but not limited, to the following topics in mixed-motive situations involving computational AI agents (AI-AI/human-AI interactions):

  • Strategies for proficiency communication with allies and adversaries
  • Strategies for controlled deception (hiding information from adversaries but sharing information with allies)
  • Methods for explicit and implicit proficiency communication
  • Methods for goal alignment through goal and intention communication
  • Coalition formation among agents
  • Strategies for negotiation and consensus
  • Assessment of proficiency of self and other agents
  • Assessment of goals, intentions of self, and other agents
  • Identification of allies, adversaries, or other non-cooperative agents
  • Identification and modeling of subteams and subteam goals/intentions
  • Metrics for assessment of proficiency, degree of cooperation, and other related team measures


Please submit one of the following types of submissions via the AAAI FSS-23 EasyChair site.
– Regular papers (6 pages + references)
– Position papers (2 pages + references)
– Summary of previously published papers (2 pages)
The submission format is the standard double-column AAAI Proceedings Style. Submissions need not be anonymized.

We accept non-archival parallel submissions, i.e., papers submitted to other venues may also be submitted to the symposium but will not be included in the symposium proceedings.

Registration details and additional information can be found at the AAAI Fall Symposium Series website.

Important Dates

Paper submission deadline: August 25, 2023, extended from August 17, 2023 (submissions closed)

Paper notifications: September 1, 2023, extended from August 30, 2023

Camera-ready submission: September 15, 2023

Symposium Dates: October 25-27, 2023

Publication and Attendance

For those interested, accepted papers will be published by AAAI as part of the AAAI Fall Symposium Series. All accepted papers will be presented at the symposium both as short talks and as posters. At least one author of each accepted paper must be registered to attend the symposium in person.

Program Schedule

Day 1 – 10/25
Session 1
9:00 – 9:15 am    Welcome and Introductions
9:15 – 9:30 am    Short talk by organizers
9:30 – 10:30 am   Paper presentations – 1

Paper ID 4951: “Steps Towards Satisficing Distributed Dynamic Team Trust”
Edmund Hunt, Chris Baber, Mehdi Sobhani, Sanja Milivojevic, Sagir Yusuf, Mirco Musolesi, Patrick Waterson and Sally Maynard

Paper ID 2504: “Inferring the Goals of Communicating Agents from Actions and Instructions”
Lance Ying, Tan Zhi-Xuan, Vikash Mansinghka and Joshua Tenenbaum
10:30 – 11:00 am  Coffee Break

Session 2
11:00 am – 12:00 pm  Invited Keynote: Prof. Subbarao Kambhampati (ASU)

Talk title: “Leveraging Mental Models for Cooperation & Competition in Human-AI Teams”
12:00 – 12:30 pm  Paper presentations – 2

Paper ID 3536: “Hybrid Navigation Acceptability and Safety”
Benoit Clement, Marie Dubromel, Paulo Santos, Karl Sammut, Michelle Oppert and Feras Dayoub
12:30 – 2:00 pm   Lunch Break

Session 3
2:00 – 3:30 pm    Paper presentations – 3

Paper ID 1223: “Disentangling Interaction using Maximum Entropy Reinforcement Learning in Multi-Agent Systems”
David Rother, Thomas H. Weisswange and Jan Peters

Paper ID 1461: “Agent Assessment of Others Through the Lens of Self—A Position Paper”

Paper ID 1956: “Some Thoughts On Robustness in Multi-Agent Path Finding”
Roman Barták

Paper ID 7466: “Beyond Rejection Justification: the Case for Constructive Elaborations to Command Rejections by Autonomous Agents in Mixed-Motive Scenarios”
Gordon Briggs
3:30 – 4:00 pm    Coffee Break

Session 4
4:00 – 5:30 pm    Panel session on “Striking the balance between cooperation, non-cooperation, and competition in mixed-motive situations”
6:00 – 7:00 pm    AAAI Reception
Day 2 – 10/26
Session 5
9:00 – 10:00 am   Invited Keynote: Prof. Gita Sukthankar (UCF)

Talk Title: “Debugging Dysfunctional Teams”
10:00 – 10:30 am  Paper presentations – 4

Paper ID 7172: “Effect of Adapting to Human Preferences on Trust in Human-Robot Teaming”
Shreyas Bhat, Joseph Lyons, Cong Shi and X. Jessie Yang
10:30 – 11:00 am  Coffee Break

Session 6
11:00 am – 12:30 pm  Poster session
12:30 – 2:00 pm   Lunch

Session 7
2:00 – 3:30 pm    Panel session on “Explicit and implicit communication in human-AI teams in mixed-motive situations”
3:30 – 4:00 pm    Coffee Break

Session 8
4:00 – 5:30 pm    Breakout group discussion and report – 1
6:00 – 7:00 pm    AAAI Plenary session
Day 3 – 10/27
Session 9
9:00 – 10:00 am   Invited Talk by Dr. Marc Steinberg (ONR)

Talk Title: “Interactive Discussion: Research Challenges in Mixed-Motive Teams”
10:00 – 10:30 am  Invited Talk by Prof. Jean Oh (CMU)

Talk Title: “Making Artificial Intelligence Measurable”
10:30 – 11:00 am  Coffee Break

Session 10
11:00 am – 12:00 pm  Breakout group discussion and report – 2
12:00 – 12:30 pm  Closing Remarks

Organizing Committee

  • Suresh Kumaar Jayaraman (Carnegie Mellon University)
  • Jacob Crandall (Brigham Young University)
  • Jiaoyang Li (Carnegie Mellon University)
  • Gordon Briggs (Naval Research Laboratory)
  • Aaron Steinfeld (Carnegie Mellon University)
  • Michael A. Goodrich (Brigham Young University)
  • Reid Simmons (Carnegie Mellon University)
  • Holly Yanco (University of Massachusetts Lowell)

    Contact: Suresh Kumaar Jayaraman (email:


The organizational effort is supported under the SUCCESS MURI, a project funded by the Office of Naval Research (N00014-18-1-2503).