TRUMAN Lab Image

The Trustworthy and Mindful Automation (TRUMAN) Lab, led by Dr. Ewart de Visser, studies how automated agents interact with humans and how these agents affect performance, trust, reliance, and compliance during a task. Topics of interest include adaptive aiding, calibrated trust, human-robot interaction, trust cues, human performance and complacency under nonoptimal conditions, performance within supervisory control human-machine systems, effects of imperfect automation on human trust, individual differences in trust, how varying levels of risk affect trust, and ways to create extreme trust calibration using trust cues.

Dr. Ewart de Visser is a GMU graduate who earned his Ph.D. in Human Factors and Applied Cognition in 2012. Ewart is now a Senior Human Factors Scientist at Perceptronics Solutions Inc. and continues his research at Mason. In addition to supervising several undergraduate students, Ewart conducts various research projects with graduate students in the HFAC program.

Several members of the TRUMAN Lab are currently Ph.D. and M.A. students in the HFAC program. A few of these lab members are also involved in projects with Perceptronics Solutions Inc. and serve as interns for the company.

DICON Experiment

“I am currently collaborating on a project with a team of researchers at Perceptronics Solutions Inc. to assess the performance and trust levels of operators interacting with automated systems. The current study, also known as the DICON Experiment, will measure and model an operator’s trust in real time and adapt the display so that an operator who incorrectly trusts too much is encouraged to trust less, and an operator who incorrectly trusts too little is encouraged to trust more. The goal of the study is to enhance the operator’s decision making using adaptive trust cues to calibrate trust.” -Kaitlyn Marinaccio (Lead Researcher)

*Note: The DICON Experiment is part of an ongoing study with Perceptronics Solutions Inc. titled "Adaptive Trustworthiness Calibration Interface (ATCI)", which is currently funded by the U.S. Air Force Research Laboratory.

Effects of Levels of Risk on Trust Calibration

“I am currently looking at the effects of levels of risk on trust calibration. In other words, my research is aimed at understanding how increasing levels of risk affect trust.” -Kelly Satterfield (Lead Researcher)

Extreme Trust Calibration Training with Trust Cues

“I am interested in examining cues that can be used to calibrate an individual’s trust in an automated system to the extremes of complete trust and complete distrust. Ultimately, this information can be used to adjust an operator’s level of trust in a specific automated system to an appropriate level that optimizes productivity and minimizes errors.”-Ari Mandell (Lead Researcher)

Other Projects

“I am currently working on projects related to operator trust calibration in automated systems. Specifically, I am pursuing the design of visual cues that indicate the real-time reliability of an automated system, such that a human operator might recognize when to either increase or decrease trust in a given system-generated recommendation. Striking a balance between a rich information display and limited complexity requires an understanding of both visual design and human cognition. Beyond first compiling empirically effective cues, future plans include external user evaluation of the novel designs within the TRUMAN Lab. In addition, I am collaborating on the DICON Experiment and conducting data analysis and literature reviews for Human vs. Machine Agent-related work.” -Alix Dorfman (Lead Researcher)

“I am working on a number of projects related to improving human decision-making through novel and intuitive data displays. This research is designed to manage the increasingly complex information involved in operating high-tech systems. I am also using linear mixed effects models to analyze the role of 'humanness' (both appearance and behavior) in automated agents that offer decision advice to a human operator.” -Sam Monfort (Lead Researcher)

“I am currently a Research Assistant in the TRUMAN lab as part of the OSCAR work study program, where I am involved in research on human trust in varying levels of automation. I have always been interested in learning more about Psychology and I enjoy studying topics such as trust and perception. As a research assistant, I have many responsibilities such as collecting and analyzing data, searching for articles to summarize and present to the lab, and assisting in running experiments.” -Roohussaba Khairullah (Undergraduate RA)

“This is my first time assisting in a research project, and so far I have found the experience to be very rewarding. My current research is concerned with real-world problems of trust in automation and machines. I was particularly interested in joining this lab because I wanted to learn more about human factors psychology.” -Alexia Webster (Undergraduate RA)

“I am currently exploring the role of etiquette in human-automation interaction and assisting with data analysis. I am very interested in the application of cognitive science and human factors principles to solving real-world problems.” -Shiva Hassanzadeh (Undergraduate RA)