Hybrid reasoning and RL to detect, adapt to, and recover from novelties.
"Open world" environments are those in which novel objects, agents, events, and more can appear and contradict previous understandings of the environment. This contradicts the "closed world" assumption used in most AI research, where the environment is assumed to be fully understood and unchanging.The types of environments AI agents can be deployed in arelimited by the inability to handle the novelties that occur in open world environments. In this project, we develop Cognitive architectures, and general algorithms and frameworks that enable agents to detect, adapt, and recover from novelties in open world environments. We use a combination of symbolic reasoning and reinforcement learning to achieve this. Our work includes the development of neurosymbolic cognitive architectures, goal-conditioned continual learning algorithms, and rapid recovery mechanisms.