

Symbolic Policies for Knowledge Transfer in Sequential Decision-Making
Abstract: A key challenge in sequential decision-making is how to leverage prior experience to act effectively in new, unseen environments. In this talk, we examine the role of symbolic policies as one possible approach to capturing and transferring high-level knowledge. We discuss two complementary methods that integrate symbolic reasoning with decision-making under uncertainty.
The first combines event calculus with Partially Observable Markov Decision Processes (POMDPs), where macro-actions are learned as logical constructs from execution traces and used to guide exploration in Monte Carlo Tree Search-based planning.
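As a rough illustration of this idea (a minimal sketch, not the paper's implementation), one way a symbolic macro-action can guide tree-search exploration is by adding a prior bonus to the UCT score of actions the learned rule suggests. The rule here is a hand-written stand-in for the logical construct mined from execution traces:

```python
import math

def uct_select(stats, total_visits, preferred, c=1.4, bonus=0.5):
    """Pick the action maximising UCT plus a symbolic-prior bonus.

    stats: {action: (visit_count, total_value)}
    preferred: actions suggested by a symbolic macro-action rule
    (hypothetical interface, used only for illustration).
    """
    def score(action):
        n, w = stats[action]
        if n == 0:
            return float("inf")  # always expand unvisited actions first
        exploit = w / n
        explore = c * math.sqrt(math.log(total_visits) / n)
        prior = bonus if action in preferred else 0.0
        return exploit + explore + prior
    return max(stats, key=score)

# Toy example: two actions with identical statistics; the rule-preferred
# one wins thanks to the symbolic bonus.
stats = {"move": (10, 6.0), "grasp": (10, 6.0)}
print(uct_select(stats, 20, preferred={"grasp"}))  # -> grasp
```

The bonus only breaks ties and nudges early visits; as counts grow, the statistics dominate, so the symbolic knowledge biases rather than constrains the search.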
The second considers a reinforcement learning framework in which knowledge from simpler domains is provided as logical rules and incorporated into the learning process to guide the agent in the early stages of training.
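To make the second idea concrete (again a hedged sketch under assumed interfaces, not the authors' method), a logical rule transferred from a simpler domain can override exploration during a warm-up phase and then hand control back to the learned policy:

```python
import random

def guided_action(q_values, state, rule, episode, warmup=100, eps=0.1):
    """Epsilon-greedy action selection with a logical-rule prior.

    During the first `warmup` episodes, if the rule fires for the current
    state, follow its suggested action; afterwards the agent relies on its
    own Q-values. `rule` stands in for knowledge transferred from a
    simpler domain (hypothetical interface, for illustration only).
    """
    suggestion = rule(state)
    if episode < warmup and suggestion is not None:
        return suggestion
    if random.random() < eps:
        return random.choice(list(q_values[state]))
    return max(q_values[state], key=q_values[state].get)

# Toy rule from a simpler domain: "while left of the goal, move right".
rule = lambda s: "right" if s < 5 else None
q = {2: {"left": 0.0, "right": 0.0}}
print(guided_action(q, 2, rule, episode=0))  # -> right (rule fires early on)
```

Early on, the rule steers the agent toward informative states, which is exactly where such guidance helps under sparse rewards; once `warmup` passes, selection is purely epsilon-greedy on the learned values.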
These approaches provide concrete examples of how symbolic structure can be used to bias learning and planning, enabling the reuse of previously acquired knowledge to improve sample efficiency, convergence speed, and generalization in settings characterized by sparse rewards and long horizons.
Link to papers: https://proceedings.mlr.press/v284/veronese25a.html
https://arxiv.org/abs/2601.02850v2
Bio: Celeste is a PhD candidate in Computer Science at the University of Verona, Italy, advised by Dr. Daniele Meli and Prof. Alessandro Farinelli.
Her research lies at the intersection of neurosymbolic AI and agentic decision-making, with a focus on logic programming as a principled framework for acquiring, representing, and exploiting symbolic knowledge in learning-based agents. Her work explores how learned and human-provided logical abstractions can be integrated with deep reinforcement learning and planning to guide decision-making, improve data efficiency and generalization, and enhance the transparency and interpretability of agentic systems.