Archive Number: 4.491
Status of Thesis: Open
Date of start: 2019-08-14
Automated, cooperative vehicles have to make decisions in road traffic in a highly dynamic, interacting, and only partially observable environment. Previous approaches are usually limited to situation assessment from an egocentric perspective and take neither cooperation aspects nor the interactions between other road users into account.
The goal of this work is to create reinforcement learning models capable of learning an RSS-compliant driving policy that stays within safety margins by choosing low-risk actions. For this, the state-space representation needs to account for model and sensor uncertainties. Building on this representation, previous works such as RSS (Responsibility-Sensitive Safety), which assure one-sided safety while driving, need to be extended to account for the uncertainty of the state space. RSS conducts risk assessment (classification of situations as safe or dangerous) as well as risk mitigation (generation of the proper response). Both aspects should supervise the actual learning process as well as constrain the final policy.
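To illustrate the risk-assessment step, the following is a minimal sketch of the RSS safe longitudinal distance check. The formula follows the published RSS definition; the parameter values (response time, acceleration and braking bounds) are placeholder assumptions, not values prescribed by this thesis:

```python
def rss_safe_longitudinal_distance(v_rear, v_front, rho=1.0,
                                   a_accel_max=3.0, a_brake_min=4.0,
                                   a_brake_max=8.0):
    """Minimal RSS-safe distance between a rear and a front vehicle.

    v_rear, v_front: current speeds [m/s]
    rho: response time of the rear vehicle [s] (assumed value)
    a_accel_max: max acceleration of the rear vehicle during rho [m/s^2]
    a_brake_min: min braking the rear vehicle is guaranteed to apply [m/s^2]
    a_brake_max: max braking the front vehicle might apply [m/s^2]
    """
    # Speed the rear vehicle may reach before it starts to respond.
    v_rho = v_rear + rho * a_accel_max
    d = (v_rear * rho
         + 0.5 * a_accel_max * rho ** 2
         + v_rho ** 2 / (2 * a_brake_min)
         - v_front ** 2 / (2 * a_brake_max))
    return max(d, 0.0)


def is_dangerous(gap, v_rear, v_front, **params):
    """Risk assessment: the situation is dangerous when the actual
    gap falls below the RSS minimum safe distance."""
    return gap < rss_safe_longitudinal_distance(v_rear, v_front, **params)
```

Under RSS, once `is_dangerous` fires, the proper response constrains the rear vehicle to brake with at least `a_brake_min`; in the learning setting sketched above, such a check could mask out actions whose successor states violate the safe-distance bound.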
- An interdisciplinary research environment with partners from science and industry
- A constructive collaboration with bright, motivated employees
- A pleasant working atmosphere
- In-depth and broad knowledge in the field of artificial intelligence (especially machine learning), game theory, or closely related areas
- Ability to implement both state-of-the-art and experimental algorithms
- Good Python skills
- Sound English skills
- High creativity and productivity
- Experience with reinforcement learning is a plus
- Current transcript of records