Exploring Human-Agent Interactions in Complex Environments
Andres Rosero
Advisor: Elizabeth Phillips, PhD, Department of Psychology
Committee Members: Tyler H. Shaw, Gerald Matthews
Johnson Center, #325, Meeting Room A
June 27, 2025, 12:00 PM to 2:00 PM
Abstract:
Artificial agents and robots are no longer viewed merely as tools, but as team members that collaborate with humans to complete complex tasks. As human-agent teams become commonplace, it is important to provide research tools for examining interactions between humans and agents under increasingly complex conditions, and to examine how humans perceive artificial team members in morally ambiguous scenarios. This dissertation is composed of three manuscripts that advance these goals. In study one, I propose, construct, and validate a theoretical framework for identifying changes in task load in commercial off-the-shelf games used as testing environments for human-agent teams. In study two, I examine the effectiveness of justification as a trust repair strategy for fostering resilience to trust loss and moral criticism after a robot commits multiple norm violations. In study three, I explore three distinct robot deception behaviors theorized in the AI ethics literature and provide some of the first quantitative and qualitative data on human responses to robot deception. Collectively, these studies aim to give AI and robotics researchers a deeper understanding of how humans perceive and behave during complex human-agent interactions, and they provide a rich set of experimental stimuli for future research in this field.