Published: December 2008
Editors: Xiaodong Li, Michael Kirley, Mengjie Zhang, David Green, Vic Ciesielski, Hussein Abbass, Zbigniew Michalewicz, Tim Hendtlass, Kalyanmoy Deb, Kay Chen Tan, Jürgen Branke, and Yuhui Shi
Book title: Proceedings of the 7th International Conference on Simulated Evolution And Learning (SEAL 2008)
Learning Classifier Systems (LCS) are rule-based evolutionary reinforcement learning (RL) systems. Today, variants of Wilson's eXtended Classifier System (XCS) in particular are widely applied in machine learning. Despite their widespread application, LCSs have drawbacks: the number of reinforcement cycles an LCS requires for learning depends largely on the complexity of the learning task. A straightforward way to reduce this complexity is to split the task into smaller sub-problems. Wherever such a split is possible, learning performance should improve significantly. In this paper, a nature-inspired multi-agent scenario is used to evaluate and compare different distributed LCS variants. The results show that improvements in learning speed can be achieved by cleverly dividing a problem into smaller learning sub-problems.
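The intuition behind the speed-up from splitting can be sketched with a rough back-of-the-envelope calculation (this example is an illustration, not from the paper): an LCS condition over n binary inputs is typically a ternary string (0, 1, or '#' don't-care), so the rule space grows as 3^n, while two independent sub-tasks of n/2 bits each only expose 2·3^(n/2) candidate rules in total.

```python
# Hedged illustration (not from the paper): why splitting a learning task
# shrinks the space a ternary-rule LCS must search. The 12-bit task size
# is an arbitrary example, not a benchmark used by the authors.
def rule_space(n_bits: int) -> int:
    """Number of ternary conditions (0, 1, or '#') over n_bits inputs."""
    return 3 ** n_bits

joint = rule_space(12)       # one monolithic 12-bit task
split = 2 * rule_space(6)    # two independent 6-bit sub-tasks
print(joint, split)          # prints 531441 1458
```

Even though the real relationship between rule-space size and the number of reinforcement cycles is far more nuanced, the exponential gap motivates why a clever decomposition can pay off.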
DOI Link: 10.1007/978-3-540-89694-4_12