The inability of artificial intelligence (AI) to represent and model human partners is the single biggest challenge preventing effective human-machine teaming today. Current AI agents can respond to commands and carry out instructions within their training, but they cannot understand intentions, expectations, emotions, and other aspects of social intelligence that are inherent to their human counterparts. This lack of understanding stymies efforts to create safe, efficient, and productive human-machine collaboration.
“As humans, we are able to infer unobservable states, such as situational beliefs and goals, and use those to predict the subsequent actions, reactions, or needs of another individual,” said Dr. Joshua Elliott, a program manager in DARPA’s Information Innovation Office (I2O). “Machines need to be able to do the same if we expect them to collaborate with us in a useful and effective way or serve as trusted members of a team.”
Teaching machines social intelligence, however, is no small feat. Humans intuitively build mental models of the world around them that include approximations of the mental models of other humans, a skill called Theory of Mind (ToM). Humans use their ToM skill to infer the mental states of their teammates from observed actions and context, and are able to predict future actions based on those inferences. These models are built on each individual's existing sets of experiences, observations, and beliefs. Within a team setting, humans build shared mental models by aligning around key aspects of their environment, team, and strategies. ToM and shared mental models are key elements of human social intelligence that work together to enable effective human collaboration.
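To make the inference step above concrete, here is a minimal sketch of one common building block of a machine Theory of Mind: Bayesian goal inference. The toy one-dimensional world, the goal names, and the noisy-rationality parameter are all illustrative assumptions, not part of the ASIST program; the sketch only shows how an observer can turn observed actions into a belief over a teammate's unobservable goal and use it to predict behavior.

```python
# A teammate walks along a line toward one of two hidden goals. The observer
# keeps a probability distribution ("belief") over goals and updates it after
# each observed step, assuming the teammate acts noisily-rationally: they are
# more likely to step toward their true goal than away from it.

GOALS = {"left": 0, "right": 10}  # hypothetical goal positions
P_TOWARD = 0.8  # assumed probability that a step moves toward the true goal

def step_likelihood(pos, step, goal_pos):
    """P(observed step | goal): steps toward the goal are more likely."""
    toward = 1 if goal_pos > pos else -1
    return P_TOWARD if step == toward else 1 - P_TOWARD

def update_belief(belief, pos, step):
    """Bayesian update of P(goal) after observing one step."""
    posterior = {g: belief[g] * step_likelihood(pos, step, gp)
                 for g, gp in GOALS.items()}
    total = sum(posterior.values())
    return {g: p / total for g, p in posterior.items()}

belief = {"left": 0.5, "right": 0.5}  # uniform prior over goals
pos = 5
for step in (+1, +1, +1):  # teammate repeatedly steps to the right
    belief = update_belief(belief, pos, step)
    pos += step

# The inferred goal also serves as a prediction of future actions.
predicted_goal = max(belief, key=belief.get)
print(predicted_goal, round(belief["right"], 3))  # → right 0.985
```

After three observed rightward steps, the belief concentrates on the "right" goal, and the observer can predict that the teammate's next step will also be rightward. Real ToM agents face far richer state spaces, but the observe-update-predict loop is the same.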
DARPA’s Artificial Social Intelligence for Successful Teams (ASIST) program seeks to develop foundational AI theory and systems that demonstrate the basic machine social skills necessary to facilitate effective human-machine collaboration. ASIST aims to create AI agents that demonstrate a Machine ToM, as well as the ability to participate effectively in a team by observing and understanding their environment and human partners, developing useful context-aware actions, and executing those actions at appropriate times.
The agents developed under ASIST will need to operate across a number of scenarios, environments, and other variable circumstances, making their ability to evolve and adapt critical. As such, ASIST will work to develop agents that can operate in increasingly complex environments, adapt to sudden change, and use observations to develop complex inferences and predictions.
During the first phase of the program, ASIST plans to conduct experiments with single human-machine interactions to see how well the agents can infer human goals and situational awareness, then use those insights to predict a teammate’s actions and provide useful recommended actions. As the program progresses, the complexity will increase, with teams of up to 10 members interacting with the AI agents. During these experiments, ASIST will test the agents’ ability to understand the cognitive model of the team, not just that of a single human, and use that understanding to develop appropriate, situationally relevant actions.