Protecting artificial team-mates: more seems like less

Timothy Merritt, Kevin McGee

Publications: Contribution to conference › Paper › Research › peer-review

Abstract

Previous research on conversational, competitive, and cooperative systems suggests that people respond differently to humans and AI agents in terms of how they perceive and evaluate observed team-mate behavior. However, no research has examined the relationship between participants' protective behavior toward human/AI team-mates and their beliefs about that behavior. A study was conducted in which 32 participants played two sessions of a cooperative game, once with a "presumed" human and once with an AI team-mate; players could "draw fire" from a common enemy by "yelling" at it. Overwhelmingly, players claimed they "drew fire" on behalf of the presumed human more than for the AI team-mate; logged data indicates the opposite. The main contribution of this paper is to provide evidence of the mismatch between players' beliefs about their actions and their actual behavior toward humans or agents, and to offer possible explanations for the differences.
Original language: English
Publication date: 10 May 2012
Number of pages: 10
DOIs
Publication status: Published - 10 May 2012
Event: ACM annual conference on Human Factors in Computing Systems - ACM, Austin, United States
Duration: 5 May 2012 – 10 May 2012

Conference

Conference: ACM annual conference on Human Factors in Computing Systems
Location: ACM
Country/Territory: United States
City: Austin
Period: 05/05/2012 – 10/05/2012

Keywords

  • team-mates
  • artificial intelligence
  • cooperation
  • CSCW
  • CSCP
  • game studies

Artistic research

  • No
