Safety Aware Reinforcement Learning by Identifying Comprehensible Constraints in Expert Demonstrations





Published: 2022

Book title: Proceedings of the Workshop on Artificial Intelligence Safety 2022 (SafeAI@AAAI'22)
Publisher: AAAI

Refereed publication

BibTeX

Abstract
When used in real-world environments, agents must meet high safety requirements, as errors have direct consequences. Besides safety, the explainability of such systems is of particular importance: not only should errors be avoided during the learning process, but the decision process should also be made transparent. Existing approaches are limited to solving a single one of these problems, whereas real-world use requires several criteria to be fulfilled at the same time. In this paper, we derive comprehensible rules from expert demonstrations that can be used to monitor the agent. The developed approach combines state-of-the-art classification and regression trees for deriving safety rules with concepts from association rule mining. The result is a compact and comprehensible rule set that explains the expert's behavior and ensures safety. We evaluate our framework in common OpenAI environments. Results show that the approach is able to identify safety-relevant rules and imitate expert behavior, especially in edge cases. Evaluations on higher-dimensional observation spaces and continuous action spaces highlight the transferability of the approach to new tasks while maintaining the compactness and comprehensibility of the rule set.
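The core idea of the abstract, deriving comprehensible if-then rules from expert (state, action) demonstrations with a classification tree, can be illustrated with a minimal sketch. This is not the paper's implementation: the greedy Gini-based splitter, the `derive_rules` helper, and the toy one-dimensional demonstrations are all assumptions made for illustration only.

```python
# Minimal sketch (illustrative, not the paper's code): fit a small
# CART-style tree to expert (state, action) pairs and read off the
# leaf paths as human-readable safety/behavior rules.
from collections import Counter

def gini(labels):
    """Gini impurity of a list of action labels."""
    n = len(labels)
    return 1.0 - sum((c / n) ** 2 for c in Counter(labels).values())

def best_split(X, y):
    """Greedy search for the (feature, threshold) split minimising weighted Gini."""
    best = None  # (score, feature, threshold)
    for f in range(len(X[0])):
        for t in sorted({x[f] for x in X}):
            left = [y[i] for i, x in enumerate(X) if x[f] <= t]
            right = [y[i] for i, x in enumerate(X) if x[f] > t]
            if not left or not right:
                continue
            score = (len(left) * gini(left) + len(right) * gini(right)) / len(y)
            if best is None or score < best[0]:
                best = (score, f, t)
    return best

def derive_rules(X, y, names, path=(), depth=2):
    """Recursively split until pure or depth-limited; yield (condition, action) rules."""
    split = best_split(X, y) if depth > 0 and gini(y) > 0.0 else None
    if split is None:
        action = Counter(y).most_common(1)[0][0]  # majority expert action at this leaf
        yield (" AND ".join(path) or "always", action)
        return
    _, f, t = split
    li = [i for i, x in enumerate(X) if x[f] <= t]
    ri = [i for i, x in enumerate(X) if x[f] > t]
    yield from derive_rules([X[i] for i in li], [y[i] for i in li], names,
                            path + (f"{names[f]} <= {t}",), depth - 1)
    yield from derive_rules([X[i] for i in ri], [y[i] for i in ri], names,
                            path + (f"{names[f]} > {t}",), depth - 1)

# Toy demonstrations (hypothetical): a 1-D position where the expert
# always steers back toward the centre (actions: 0 = left, 1 = right).
X = [[-0.9], [-0.5], [-0.1], [0.2], [0.6], [0.8]]
y = [1, 1, 1, 0, 0, 0]
rules = list(derive_rules(X, y, ["position"]))
for cond, action in rules:
    print(f"IF {cond} THEN action {action}")
# → IF position <= -0.1 THEN action 1
# → IF position > -0.1 THEN action 0
```

The resulting rule set is exactly the kind of compact, comprehensible artifact the abstract describes: each leaf path is one monitorable constraint, and violating rules at runtime can trigger a safety intervention.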

Download: Media:Safety-Aware_RL_SafeAI-AAAI2022.pdf
Further information: Link



Research group

Web Science


Research area

Artificial Intelligence, Trustworthy AI, Reliable Information Systems