Effective Learning in Non-stationary Multiagent Environments

Author: Dong Ki Kim


Book Description
Multiagent reinforcement learning (MARL) provides a principled framework for a group of artificial intelligence agents to learn collaborative and/or competitive behaviors at the level of human experts. Multiagent learning settings are inherently more complex than single-agent learning because an agent interacts both with the environment and with other agents. In particular, multiple agents learn simultaneously in MARL, leading to natural non-stationarity in the experiences encountered and thus requiring each agent to adapt its behavior with respect to potentially large changes in other agents' policies. This thesis aims to address the non-stationarity challenge in multiagent learning through three important topics: 1) adaptation, 2) convergence, and 3) state space. The first topic addresses how an agent can learn effective adaptation strategies with respect to other agents' changing policies by developing a new meta-learning framework. The second topic addresses how agents can adapt to and influence the joint learning process such that policies converge to more desirable limiting behaviors by the end of learning, based on a new game-theoretic solution concept. The third topic addresses how the state space size can be reduced through knowledge sharing and context-specific abstraction, so that learning complexity is less affected by non-stationarity. In summary, this thesis develops theoretical and algorithmic contributions that provide principled answers to the aforementioned topics on non-stationarity. The algorithms developed in this thesis demonstrate their effectiveness in a diverse suite of multiagent benchmark domains covering the full spectrum of mixed-incentive, competitive, and cooperative environments.
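To make the non-stationarity issue concrete, the following is a minimal sketch (not taken from the thesis) of two independent Q-learners playing a repeated matching-pennies game. The payoff matrix, hyperparameters, and use of independent Q-learning are illustrative assumptions; the point is only that each agent's learning target drifts as the other agent's policy changes, even though the underlying game is fixed.

```python
# Minimal illustration of non-stationarity in simultaneous multiagent learning.
# Two independent epsilon-greedy Q-learners play repeated matching pennies;
# agent 0's expected payoff for a fixed action shifts as agent 1's policy adapts.
import numpy as np

rng = np.random.default_rng(0)

# Matching pennies: agent 0 gets +1 on a match, agent 1 gets +1 on a mismatch (zero-sum).
payoff_0 = np.array([[+1, -1],
                     [-1, +1]])
payoff_1 = -payoff_0

q = [np.zeros(2), np.zeros(2)]   # one Q-value per action for each agent
alpha, epsilon = 0.1, 0.2        # illustrative learning rate and exploration rate

def act(q_values):
    # Epsilon-greedy action selection.
    if rng.random() < epsilon:
        return int(rng.integers(2))
    return int(np.argmax(q_values))

for step in range(5000):
    a0, a1 = act(q[0]), act(q[1])
    r0, r1 = payoff_0[a0, a1], payoff_1[a0, a1]
    # Independent (single-agent style) updates: each agent treats the other as
    # part of the environment, so its reward distribution is non-stationary.
    q[0][a0] += alpha * (r0 - q[0][a0])
    q[1][a1] += alpha * (r1 - q[1][a1])
    if step % 1000 == 0:
        # Payoff agent 0 would receive for action 0 against agent 1's *current*
        # greedy action; this drifts over time as agent 1 learns.
        print(step, payoff_0[0, int(np.argmax(q[1]))])
```

Under these assumptions, neither agent faces a fixed Markov decision process: the quantity each Q-update is chasing moves whenever the other agent updates its policy, which is the core difficulty the thesis's adaptation, convergence, and state-space contributions target.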