Auditory Research Group (Dr. Carryl Baldwin). The Auditory Research Group, led by Dr. Baldwin, is dedicated to research in all areas of Applied Auditory Cognition. This includes the design of auditory displays (particularly collision avoidance and navigation systems for air and ground vehicles), communication systems, and strategies for improving speech intelligibility in adverse listening conditions and among listeners with hearing impairments. The laboratory includes two acoustically shielded chambers for recording and testing auditory stimuli, a host of neurophysiological and physiological recording equipment, and a suite of high-quality sound generation, digital recording, analysis, and presentation equipment and software.
Cognitive Aging and Cognitive Training Research Group (Dr. Pam Greenwood). Dr. Greenwood has long investigated how the aging process affects the mind and brain in the context of normal genetic variation. She has recently begun to investigate ways to ameliorate cognitive decline late in life. The ultimate goal of training in healthy older people is to improve cognitive functioning in daily life. Dr. Greenwood has shown such transfer of training in concert with changes in functional connectivity and white matter integrity in healthy older people. She is currently investigating the elements of successful training (working memory, attention, executive functioning) in young people. She also investigates the use of non-invasive brain stimulation as a means of heightening the effects of training on cognition, regional brain activation, functional connectivity, and white matter integrity.
Hemodynamics, Automation, Resilience & Trust (HeART) Lab (Dr. Tyler Shaw). The HeART lab is focused on human-automation interaction and the effects of different levels and types of automation on human operator attention, decision-making, trust, resilience, and other aspects of cognition. Topics of interest include adaptive aiding, calibrated trust, trust cues, human performance and complacency during nonoptimal conditions, performance within supervisory control human-machine systems, effects of imperfect automation on human trust, individual differences in trust, how varying levels of risk affect trust, and ways to create extreme trust calibration using trust cues.
Mason Transportation Institute (Dr. Carryl Baldwin). Our driving simulation facilities include a high-fidelity, motion-based simulator and several lower-fidelity desktop simulators. The motion-based simulator is equipped with a digital dashboard and a touch-screen ancillary display console for examining new visual displays, as well as two custom-designed seat pans for presenting vibrotactile signals.
Measurement Research Methodology Evaluation Statistics (MRES) Lab (Dr. Patrick McKnight). MRES is a group of social and behavioral scientists dedicated to applying our methodological skills to real-world problems. The MRES (pronounced "mysteries") lab consults with government, educational, and private organizations and conducts independent research. Our collective interests cover most social and behavioral areas, with a primary focus on clinical, criminal justice, health, education, and science policy concerns.
Perception and Action Neuroscience Group (Dr. Jim Thompson). The ability to see how other people move is essential for many aspects of daily life, from things as simple as avoiding collisions to detecting suspicious behavior or recognizing someone else's emotions. The research efforts of the Perception & Action Neuroscience Group (PANGlab), led by Dr. Jim Thompson, are focused on examining how we recognize human movement and make sense of other people's actions, and how we code our own actions in relation to the external environment. We investigate these issues using a combination of behavioral paradigms, virtual reality, functional magnetic resonance imaging (fMRI), and electroencephalography (EEG). The goal of the group's research is to further the understanding of how we see and act with others as part of everyday life, in specialized settings such as surveillance, and in conditions in which human movement recognition may be impaired.
Predicting Cognition Lab (Dr. Greg Trafton). Why do people make errors? How do people interact with robots? We collect data on how and why people make errors and how they interact with robots. We then build theoretical models of people making errors and people interacting with robots, not only so that we can understand people, but also so that we can help prevent errors and help people interact with robots more effectively. Our theories are instantiated in ways that make predictions about what people will do in the future, and this information can then be used to change people's behavior.
Social Robotics & Embodied Cognition (SREC) Lab (Dr. Eva Wiese). The SREC Lab focuses on research in social attention and embodied cognition and its application to Social Robotics and Design Thinking. With regard to Social Robotics, the goal is to unravel what sort of information humans use when judging the degree of intentionality underlying the actions of social agents (e.g., robots) and how attributing a mind to others influences attention, perception, and performance. With regard to Design Thinking, the SREC lab is interested in the role of embodied cognition in design, and in particular how perception and action processes interact during design thinking. To investigate these questions, we use behavioral measures, eye tracking, and EEG.
Visual Attention and Cognition Lab (Dr. Matt Peterson). The Visual Attention and Cognition Lab, led by Dr. Matt Peterson, is concerned with how attention, working memory, and eye movements interact to affect cognition and perception in both well-controlled laboratory settings and more complex environments. Topics of interest include how environmental factors capture attention, how memory guides visual search, how attention affects scene perception, and how working memory is affected by eye movements. Our lab uses a variety of methods to study cognition, including psychophysical methods, high-speed eye tracking, EEG, brain-computer interfaces that utilize machine-learning algorithms to match patterns in ERP signals, transcranial direct-current stimulation (tDCS), and salivary cortisol measures of stress.