This proposed project aims to make machine intelligence-driven physical and non-physical interventions in computer-assisted driving more acceptable and dependable.
In this project, we will investigate how an intelligent automotive physical system that delivers relevant interactions affects drivers. We will use a motion platform and a VR headset to simulate a variety of driving scenarios in which the computer autonomously determines the vehicle's physical actions (i.e., self-driving or computer-assisted driving situations) (Figure 13). We will also employ a range of sensors to track driver cognition, attention, and behavior in real time so that the system can detect and predict the driver's condition and experience.
Figure 13. Simulated intelligent automotive physical systems in this project: (a) experiment setup; (b) driver activities in self-driving cars.
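To make the sensing component concrete, the following minimal Python sketch shows one way a real-time driver-state monitor could fuse such signals into a coarse attention estimate. The SensorSample fields, window length, and thresholds are illustrative assumptions rather than finalized design choices; the actual sensor suite and features will be selected during the project.

```python
from collections import deque
from dataclasses import dataclass
from statistics import pstdev


@dataclass
class SensorSample:
    """One hypothetical per-frame reading from the simulator's sensor suite."""
    gaze_x: float       # normalized horizontal gaze position, 0..1
    gaze_y: float       # normalized vertical gaze position, 0..1
    heart_rate: float   # beats per minute


class DriverStateMonitor:
    """Sliding-window estimate of a coarse driver-attention state.

    The window length, features, and thresholds are placeholders to be
    fitted from empirical driving-simulator data in this project.
    """

    def __init__(self, window_size: int = 120):
        self.samples = deque(maxlen=window_size)

    def update(self, sample: SensorSample) -> str:
        self.samples.append(sample)
        if len(self.samples) < self.samples.maxlen:
            return "calibrating"

        # Low gaze dispersion over the window suggests the driver is not
        # scanning the road scene (e.g., looking down or drowsy).
        gaze_dispersion = (pstdev(s.gaze_x for s in self.samples)
                           + pstdev(s.gaze_y for s in self.samples))
        mean_heart_rate = sum(s.heart_rate for s in self.samples) / len(self.samples)

        if gaze_dispersion < 0.02 and mean_heart_rate < 60.0:
            return "disengaged"      # candidate trigger for a system intervention
        if gaze_dispersion < 0.05:
            return "low_attention"
        return "attentive"
```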
In particular, this project addresses situations in which the machine intelligence of an intelligent automotive physical system cannot make sufficiently confident decisions about how to interact with other vehicles and in-situ contextual events (e.g., multiple jaywalkers approaching from different directions, vehicles with emergency lights on, damaged road signs, broken signal lights, or heavy fog or mist). More specifically, we focus on how the system itself can act to improve safety: by regulating the vehicle's physical behavior (e.g., slowing down, keeping its lane, leaving more space to the vehicle in front, or turning on the hazard lights), by requesting the human driver's intervention in the given situation, and, if necessary, by retaining control of the vehicle's physical actions so that the situation is managed as safely as possible until the human driver takes over.
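A minimal sketch of this mixed-initiative escalation logic is shown below, assuming a scalar decision-confidence signal from the machine intelligence and the coarse driver states estimated above; all names and thresholds are illustrative and would be determined empirically in the project.

```python
from enum import Enum, auto


class DriverState(Enum):
    ATTENTIVE = auto()
    LOW_ATTENTION = auto()
    DISENGAGED = auto()


class SystemAction(Enum):
    CONTINUE_AUTONOMOUS = auto()
    DEFENSIVE_MANEUVER = auto()        # slow down, keep lane, increase gap, hazard lights
    REQUEST_TAKEOVER = auto()
    SAFE_HOLD_UNTIL_TAKEOVER = auto()  # retain control until the driver is ready


def plan_intervention(decision_confidence: float,
                      driver_state: DriverState) -> SystemAction:
    """Choose the system's next action from its own decision confidence
    and the estimated driver state; the thresholds are placeholders."""
    if decision_confidence >= 0.9:
        # The machine intelligence can handle the situation on its own.
        return SystemAction.CONTINUE_AUTONOMOUS
    if decision_confidence >= 0.6:
        # Uncertain but manageable: regulate the vehicle's physical behavior.
        return SystemAction.DEFENSIVE_MANEUVER
    if driver_state is DriverState.ATTENTIVE:
        # Low confidence and an attentive driver: ask for intervention.
        return SystemAction.REQUEST_TAKEOVER
    # Low confidence and an inattentive driver: keep the vehicle in a
    # minimal-risk condition until the human driver takes over.
    return SystemAction.SAFE_HOLD_UNTIL_TAKEOVER
```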
Thus, this project aims to improve the system's intelligence so that it can help the driver quickly and accurately understand and address both the difficulties the system is experiencing and the conditions of the driving situation. To this end, we will explore how fast and how accurate the intelligent physical system must be to improve driver performance and satisfaction, given the safety-critical nature of driving, and we will identify which sensor features and contextual events allow interactions to be delivered effectively to the driver of an intelligent automotive physical system. The project deliverables include a design guideline for mixed-initiative vehicle control shared between the human driver and the computer driver, and a series of sensor-based models that actively adapt the physical and non-physical behaviors of such intelligent systems to drivers of different demographics and to drivers whose cognitive abilities change over time.
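In the simplest case, one way such an adaptive model could work is sketched below: a per-driver estimate of takeover reaction time, updated online, that stretches the lead time of takeover requests as a driver's responses slow. The class, parameter values, and update rule are assumptions for illustration only; the project will evaluate richer sensor-based models.

```python
class AdaptiveTakeoverTiming:
    """Per-driver lead time for takeover requests, updated online.

    An exponentially weighted average of observed reaction times is one
    simple way to track a driver whose cognitive abilities change over
    time; richer sensor-based models would replace this in practice.
    """

    def __init__(self, initial_reaction_s: float = 3.0,
                 safety_margin_s: float = 2.0, alpha: float = 0.2):
        self.reaction_estimate_s = initial_reaction_s
        self.safety_margin_s = safety_margin_s
        self.alpha = alpha  # weight given to the most recent observation

    def observe_takeover(self, measured_reaction_s: float) -> None:
        # Blend the newest measured reaction time into the running estimate.
        self.reaction_estimate_s = (self.alpha * measured_reaction_s
                                    + (1.0 - self.alpha) * self.reaction_estimate_s)

    def request_lead_time_s(self) -> float:
        # Issue the takeover request this many seconds before control is needed.
        return self.reaction_estimate_s + self.safety_margin_s


# Example: three simulator trials with gradually slowing reactions.
timing = AdaptiveTakeoverTiming()
for reaction_time in (2.8, 3.4, 4.1):
    timing.observe_takeover(reaction_time)
print(f"lead time: {timing.request_lead_time_s():.2f} s")
```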
In a prior user study, I examined the advantages and disadvantages of different types of visual feedback that indicate the motion of the computer-driven car and its decision-making states, and I discussed the expected effects of feedback-type combinations on intelligibility in a simulated autonomous driving environment. I will revise "Supporting Mobility Independence for Elderly Drivers Using Semi-Autonomous Vehicular Technologies Enhanced by Human-Centered Situational Awareness," which we submitted to the NSF Cyber-Physical Systems program, and resubmit it to the NSF Smart and Autonomous Systems program.