Eileen Roesler

Assistant Professor


Eileen Roesler is a researcher in the field of human-automation interaction, with a specialized focus on the challenges that arise when people interact with emerging technologies such as artificial intelligence (AI) and robots. Driven by a deep curiosity about the evolving landscape of human-automation collaboration, Dr. Roesler investigates how individuals engage with and respond to novel technological advancements, examining the behavioral, cognitive, and social dimensions of these interactions.

Through their investigations at the Human-Agent Collaboration Laboratory (HAC-Lab), Dr. Roesler and her team shed light on the multifaceted challenges that characterize interactions between humans and novel (embodied) technologies, paving the way for a more intuitive and socially conscious relationship with the technologies that will shape our future.

Current Research

Our lab meetings on Tuesdays from 3:30 to 4:30 p.m. in David King Hall 2073A serve as a focal point for nurturing and advancing our research initiatives. Everyone is welcome to join a meeting if the topic aligns with their interests. If you are interested in exploring opportunities to conduct research at the HAC Lab, please reach out to Dr. Roesler for more information and guidance.

Selected Publications

All publications can be found on Google Scholar; this section features only the latest publications, from August 2023 onward.

Roesler, E. (2023). Anthropomorphic framing and failure comprehensibility influence different facets of trust towards industrial robots. Frontiers in Robotics and AI, 10. https://doi.org/10.3389/frobt.2023.1235017

  • Taken together, participants displayed no difference in general trust between technically and anthropomorphically framed robots. Nonetheless, the anthropomorphically framed robot was perceived as more transparent, even though there were no actual differences in transparency. This result suggests that trust in robots should be assessed along multiple dimensions, including transparency, rather than focusing solely on reliability.

Rieger*, T., Kugler°, L., Manzey, D., & Roesler*, E. (2023). The (Im)perfect Automation Schema: Who Is Trusted More, Automated or Human Decision Support? Human Factors. https://doi.org/10.1177/00187208231197347

  • In essence, the expertise of the human against whom the automated decision support is compared plays a crucial role in whether an (im)perfect automation schema emerges. Expert human support is trusted more than artificial intelligence for decision support, whereas this dynamic is reversed when the comparison is with novice human support.


* indicates equal contribution | ° indicates contribution by a graduate student