TY - GEN
T1 - Using Physiological Metrics to Improve Reinforcement Learning for Autonomous Vehicles
AU - Fleicher, Michael
AU - Musicant, Oren
AU - Azaria, Amos
N1 - Publisher Copyright:
© 2022 IEEE.
PY - 2022
Y1 - 2022
N2 - Thanks to recent technological advances, Autonomous Vehicles (AVs) are becoming available in some locations. The safety impacts of these vehicles have, however, been difficult to assess. In this paper we utilize physiological metrics to improve the performance of a reinforcement learning agent attempting to drive an autonomous vehicle in simulation. We measure the performance of our reinforcement learner in several respects, including the amount of stress imposed on potential passengers, the number of training episodes required, and a score reflecting the vehicle's speed as well as the distance it successfully travels without going off-track or hitting another vehicle. To that end, we compose a human model based on a dataset of physiological metrics of passengers in an autonomous vehicle. We embed this model in a reinforcement learning agent by providing negative reward to the agent for actions that cause the human model an increase in heart rate. We show that such a 'passenger-aware' reinforcement learning agent not only reduces the stress imposed on hypothetical passengers but, quite surprisingly, also drives more safely, and its learning process is more effective than that of an agent that does not obtain rewards from a human model.
AB - Thanks to recent technological advances, Autonomous Vehicles (AVs) are becoming available in some locations. The safety impacts of these vehicles have, however, been difficult to assess. In this paper we utilize physiological metrics to improve the performance of a reinforcement learning agent attempting to drive an autonomous vehicle in simulation. We measure the performance of our reinforcement learner in several respects, including the amount of stress imposed on potential passengers, the number of training episodes required, and a score reflecting the vehicle's speed as well as the distance it successfully travels without going off-track or hitting another vehicle. To that end, we compose a human model based on a dataset of physiological metrics of passengers in an autonomous vehicle. We embed this model in a reinforcement learning agent by providing negative reward to the agent for actions that cause the human model an increase in heart rate. We show that such a 'passenger-aware' reinforcement learning agent not only reduces the stress imposed on hypothetical passengers but, quite surprisingly, also drives more safely, and its learning process is more effective than that of an agent that does not obtain rewards from a human model.
KW - autonomous vehicles
KW - comfort
KW - driving style
KW - passengers
KW - physiological sensing
KW - reinforcement learning
UR - http://www.scopus.com/inward/record.url?scp=85156136427&partnerID=8YFLogxK
U2 - 10.1109/ICTAI56018.2022.00186
DO - 10.1109/ICTAI56018.2022.00186
M3 - Conference contribution
AN - SCOPUS:85156136427
T3 - Proceedings - International Conference on Tools with Artificial Intelligence, ICTAI
SP - 1223
EP - 1230
BT - Proceedings - 2022 IEEE 34th International Conference on Tools with Artificial Intelligence, ICTAI 2022
A2 - Reformat, Marek
A2 - Zhang, Du
A2 - Bourbakis, Nikolaos G.
PB - IEEE Computer Society
T2 - 34th IEEE International Conference on Tools with Artificial Intelligence, ICTAI 2022
Y2 - 31 October 2022 through 2 November 2022
ER -