Location: Online
September 14, 2020, 10:00 AM to 12:00 PM
As robots and AI are designed for social interactions and incorporated into the world, it is important to examine the extent to which humans can perceive such artificial agents as humanlike based solely on the observation of their behaviors. While psychologists, neuroscientists, computer scientists, and designers have studied related phenomena for many years, more work is required to develop a deep understanding of how people perceive the actions of AI compared to humans within complex, interactive environments, and how this varies as a function of the AI's competence in navigating those environments.
The experiments outlined in this dissertation attempt to clarify how individuals make judgments of human-likeness on the basis of observable behavior; how the perception of human-likeness relates to perceived competence at a given task, as well as to how predictable or explainable those behaviors appear to be; and finally how the context in which these judgments are made shapes expectations and overall perceptions of a complex mind.
The results from these experiments suggest (1) that increased competence at a task is accompanied by an increase in perceived human-likeness, (2) that the relationship between perceived predictability/explainability and human-likeness depends on how competent an entity is perceived to be, and (3) that the context of an interaction can lead to different expectations of human-likeness and can potentially affect the overall relationship individuals develop with human and AI entities.