Non-human Factors: Exploring Conformity and Compliance with Non-human Agents

Nicholas Hertz

Advisor: Eva Wiese, PhD

Committee Members: Tyler Shaw, Ewart de Visser

Research Hall, #161
November 29, 2018, 11:00 AM to 1:00 PM

Abstract:

As non-human agents become integrated into the workforce and into our daily lives, it is important to understand when advice from non-human agents is sought, used, misused, or disused. This project considers whether humans are willing to consider the advice of non-human agents, to what extent advice seeking depends on perceived agent-task fit, and the dynamics of group interaction with these agents. It explores whether non-human agents are distrusted in general or whether agent-task fit is considered when choosing and trusting human and non-human agents, both individually and in groups.

The first study examined advice seeking and advice compliance with human and non-human agents on a social and an analytical task, varying whether participants chose an agent before or after the task was known. When an agent was chosen before the task was known, human agents were chosen significantly more often than machines; when the task was known before agent selection, advisor choices were calibrated to perceived agent-task fit.

The second study expanded on this by examining whether a group of non-human agents (computers and robots) could bias individuals' decision making in the form of social conformity on the same set of tasks. Non-human agents were able to exert a social conformity effect, but, as in Study 1, this effect was modulated by the perceived match between agent and task type: participants conformed to a comparable degree with all agents on the analytical task but conformed increasingly strongly on the social task as the group's human-likeness increased.

The third study built on the findings of the two previous studies and explored whether mixed groups of humans and computers further modulated conformity. Results showed higher rates of conformity on the analytical task than on the social task; more importantly, on the social task the ratio of human to non-human agents in the group affected conformity.

Results of this project may have implications for human-machine interaction, in particular for designing automated agents so that trust in human-machine teams is maximized (see Salas, Cooke, & Rosen, 2008) and reliance on mechanistic decision-aid systems is optimized (see Oppenheimer & Kelso, 2015). Designers of such systems can draw on these results to anticipate how operators will perform when interacting with human versus non-human groups, and whether the task being performed moderates those performance outcomes.