Neurosymbolic Agents for Open-world Novelty Handling

Hybrid reasoning and RL to detect, adapt, and recover from novelties.

Goal-conditioned continual learning · Novelty detection and recovery · Cognitive architecture · Neurosymbolic
Description
"Open world" environments are those in which novel objects, agents, events, and more can appear and contradict an agent's prior understanding of the environment. This violates the "closed world" assumption used in most AI research, where the environment is assumed to be fully known and unchanging. The settings in which AI agents can be deployed are limited by their inability to handle the novelties that arise in open worlds. In this project, we develop cognitive architectures, along with general algorithms and frameworks, that enable agents to detect, adapt to, and recover from novelties in open-world environments, using a combination of symbolic reasoning and reinforcement learning. Our work includes the development of neurosymbolic cognitive architectures, goal-conditioned continual learning algorithms, and rapid recovery mechanisms.
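The detect-adapt-recover loop described above can be illustrated with a minimal sketch (this is an illustrative toy, not the project's actual code; the `SymbolicModel`, `run_episode`, and environment names are hypothetical): a symbolic model predicts the effect of each action, a mismatch between prediction and observation flags a novelty, and the agent recovers by updating the violated operator from the observed transition.

```python
# Illustrative sketch of novelty detection and recovery in an open world.
# A symbolic model predicts action effects; a contradiction between the
# prediction and the observation signals a novelty, and the agent repairs
# the model from experience (standing in for the RL-based adaptation step).

class SymbolicModel:
    """Maps (state, action) -> predicted next state; None if unknown."""
    def __init__(self, effects):
        self.effects = dict(effects)

    def predict(self, state, action):
        return self.effects.get((state, action))

    def patch(self, state, action, observed):
        # Recovery: overwrite the broken operator with the observed effect.
        self.effects[(state, action)] = observed


def run_episode(model, env_step, plan, start="start"):
    """Execute a plan; on a prediction violation, record the novelty
    and repair the symbolic model from the observed transition."""
    state, novelties = start, []
    for action in plan:
        predicted = model.predict(state, action)
        observed = env_step(state, action)
        if predicted != observed:                 # novelty detected
            novelties.append((state, action, predicted, observed))
            model.patch(state, action, observed)  # recover from the novelty
        state = observed
    return state, novelties


# Usage: the model believes opening the door works, but the environment
# has changed (the door is now locked) -- an open-world novelty.
model = SymbolicModel({("start", "open_door"): "door_open",
                       ("door_open", "walk"): "goal"})

def novel_env(state, action):
    if (state, action) == ("start", "open_door"):
        return "door_locked"      # contradicts the model's prediction
    if state == "door_open" and action == "walk":
        return "goal"
    return state                  # other actions have no effect

final, found = run_episode(model, novel_env, ["open_door", "walk"])
```

After the episode, `found` holds the detected contradictions and the model's effect for `("start", "open_door")` has been repaired to match the changed world. In the actual architectures, the patching step would instead launch a targeted reinforcement learner to acquire a new policy or operator.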
Related Publications
  • 2024. A neurosymbolic cognitive architecture framework for handling novelties in open worlds. AI Journal
  • 2024. A framework for neurosymbolic goal-conditioned continual learning in open world environments. IEEE IROS
  • 2022. Rapid-Learn: A framework for learning to recover for handling novelties in open-world environments. IEEE ICDL
  • 2021. Spotter: Extending symbolic planning operators through targeted reinforcement learning. AAMAS
  • 2021. A novelty-centric agent architecture for changing worlds. AAMAS
People
  • Shivam Goel
  • Yash Shukla
  • Panagiotis Lymperopoulos
  • Pierrick Lorang
  • Ravenna Thielstrom
  • Vasanth Sarathy