Eileen Roesler

Assistant Professor

Eileen Roesler is a researcher in the field of human-automation interaction, with a focus on the challenges that arise when people interact with emerging technologies such as artificial intelligence (AI) and robots. Dr. Roesler's work examines how individuals engage with and respond to novel technological advancements, analyzing the behavioral, cognitive, and social dynamics that emerge at the intersection of humans and these technologies.

Through the investigations at the human-agent collaboration laboratory (hac-lab), Dr. Roesler and her team shed light on the multifaceted challenges that characterize the interactions between humans and novel (embodied) technologies, thereby paving the way for a more intuitive and socially conscious relationship with the technologies that will shape our future.

Current Research

Our internal Lab Meetings on Thursdays from 4:30 p.m. - 5:30 p.m. in David King Hall 2073A serve as a focal point for nurturing and advancing our research initiatives. If you are interested in exploring opportunities to conduct research at the hac-lab, please feel free to reach out to Dr. Roesler for more information and guidance.

Moreover, you can find out what is going on at the hac-lab on our blog, "what the hac," and on X.

Explore the chance to gain research experience in our new human-robot interaction (hri) lab, which is open to all. The hri lab offers involvement opportunities for students at varying levels. Be part of the lab's inaugural event on January 31st!

Selected Publications

All publications can be found on Google Scholar; this section features only the latest publications, from August 2023 onwards.

Roesler, E., Rudolph°, S., & Siebert, F. W. (2024). Exploring the Role of Sociability, Ownership, and Affinity for Technology in Shaping Acceptance and Intention to Use Personal Assistance Robots. International Journal of Social Robotics. https://doi.org/10.1007/s12369-024-01098-1

  • Lower robot sociability and participants' higher affinity for technology were associated with increased acceptance of service robots. More sociable robots were favored when they featured higher levels of anthropomorphism.

Fahnenstich°, H., Rieger*, T., & Roesler*, E. (2023). Trusting under risk–comparing human to AI decision support agents. Computers in Human Behavior, 108107. https://doi.org/10.1016/j.chb.2023.108107 [50 days' free access] [Post Print]

  • The presence of risk was associated with heightened trust behavior toward an AI support agent compared to a human support agent. Additionally, individuals attributed greater responsibility for potential negative outcomes in collaborative tasks to human support agents than to AI support agents. Interestingly, self-reported trust in the support agent was unaffected by both the presence of risk and the type of support agent.

Rieger*, T., Manzey, D., Meussling°, B., Onnasch, L., & Roesler*, E. (2023). Be careful what you explain: Benefits and costs of explainable AI in a simulated medical task. Computers in Human Behavior: Artificial Humans, 100021. https://doi.org/10.1016/j.chbah.2023.100021

  • Explainability that provides contextual information about the AI's vulnerability to errors can enhance human-AI interaction in non-error-prone scenarios. In situations where errors are prevalent, however, explainability can detrimentally impact the interaction when the AI occasionally makes correct recommendations, illustrating its dual nature as a mixed blessing in such contexts.

Roesler, E., Vollmann°, M., Manzey, D., & Onnasch, L. (2023). The dynamics of human-robot trust attitude and behavior — Exploring the effects of anthropomorphism and type of failure. Computers in Human Behavior. https://doi.org/10.1016/j.chb.2023.108008 [50 days' free access] [Post Print]

  • Taken together, anthropomorphic appearance can reduce trust in task-related settings without leading to greater forgiveness compared to technical appearance. Moreover, failures in information acquisition/processing appeared to cause more trust dissolution than failures in action implementation.

Rieger*, T., Kugler°, L., Manzey, D., & Roesler*, E. (2023). The (Im)perfect Automation Schema: Who Is Trusted More, Automated or Human Decision Support? Human Factors. https://doi.org/10.1177/00187208231197347

  • In essence, the expertise of the human against whom automated decision support is compared plays a crucial role in determining whether an (im)perfect automation schema emerges. Expert human support is trusted more than artificial intelligence for decision support, whereas this dynamic is reversed for novice human support.

Roesler, E. (2023). Anthropomorphic framing and failure comprehensibility influence different facets of trust towards industrial robots. Frontiers in Robotics and AI, 10. https://doi.org/10.3389/frobt.2023.1235017

  • Taken together, participants displayed no difference in general trust between technically and anthropomorphically framed robots. Nonetheless, the anthropomorphic robot was perceived as more transparent, although there were no actual differences in transparency. This result suggests that trust in robots should be assessed along multiple dimensions, including transparency, rather than focusing solely on reliability.


* indicates equal contribution | ° indicates contribution of a graduate student