Neurosymbolic AI
Reinforcement Learning via Symbolic Feedback (RLSF)
TL;DR:
A novel fine-tuning paradigm in which symbolic reasoning tools (e.g., solvers, provers, RDKit) provide token-level feedback to LLMs, yielding large accuracy gains on code-synthesis, chemistry, and math tasks with models far smaller than closed-source alternatives.
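The feedback loop can be sketched with a toy stand-in: here Python's `eval` plays the role of the symbolic tool, and the reward it returns is uniform across tokens (the paper's tools localize faults more finely). The task, token set, and function name are illustrative assumptions, not taken from the paper.

```python
# Hedged sketch of RLSF-style token-level symbolic reward (toy stand-in,
# not the paper's implementation). A symbolic checker scores each token of
# a model-generated arithmetic expression.

def symbolic_token_rewards(tokens, target):
    """Toy 'symbolic tool': evaluate the expression and assign per-token reward.

    Returns +1.0 for every token if the expression evaluates to `target`,
    otherwise -1.0 for every token (a real symbolic tool would localize
    the fault to the offending tokens)."""
    try:
        value = eval("".join(tokens))  # stand-in for a solver/prover call
        ok = (value == target)
    except Exception:                  # malformed generations get no credit
        ok = False
    return [1.0 if ok else -1.0 for _ in tokens]

# Usage: a correct and a malformed generation for the prompt "2+3*4 = ?"
good = ["2", "+", "3", "*", "4"]
bad  = ["2", "+", "3", "*", ")"]
print(symbolic_token_rewards(good, 14))  # [1.0, 1.0, 1.0, 1.0, 1.0]
print(symbolic_token_rewards(bad, 14))   # [-1.0, -1.0, -1.0, -1.0, -1.0]
```

The resulting per-token reward vector is what a policy-gradient fine-tuning step would consume in place of a single scalar score.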
CoTran: An LLM-based Code Translator using Reinforcement Learning with Feedback from Compiler and Symbolic Execution
TL;DR:
Fine-tunes LLMs for end-to-end code translation using compiler and symbolic-execution feedback to improve compilability and functional equivalence.
Grounding Neural Inference with Satisfiability Modulo Theories
TL;DR:
Introduces SMTLayer, enabling neural networks to enforce logical constraints directly during both forward and backward passes without requiring the solver to be differentiable.
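A minimal sketch of the interface, with a trivial "exactly-one" constraint standing in for a real SMT query. Note the hedge: the paper derives a solver-informed backward pass, whereas this sketch uses the simpler straight-through estimator; all names and the constraint are illustrative assumptions.

```python
# Hedged sketch of the SMTLayer idea (toy stand-in; the paper uses a real
# SMT solver and a solver-derived backward pass, not this simplification).

def solve_exactly_one(soft):
    """Stand-in for an SMT call: return the one-hot assignment closest to
    `soft` that satisfies the constraint 'exactly one output is 1'."""
    k = max(range(len(soft)), key=lambda i: soft[i])
    return [1.0 if i == k else 0.0 for i in range(len(soft))]

def smtlayer_forward(soft):
    # Forward pass: snap the network's soft outputs to a constraint-
    # satisfying assignment via the (non-differentiable) solver.
    return solve_exactly_one(soft)

def smtlayer_backward(grad_out):
    # Straight-through estimator: pass gradients through unchanged, so the
    # solver itself never needs to be differentiable.
    return grad_out

soft = [0.2, 0.7, 0.1]
print(smtlayer_forward(soft))               # [0.0, 1.0, 0.0]
print(smtlayer_backward([0.5, -1.0, 0.3]))  # [0.5, -1.0, 0.3]
```

The key design point this illustrates is the interface: the solver only needs to map soft outputs to satisfying assignments; gradient flow is handled separately.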
CGDTest (Talk)
TL;DR:
Applies constrained gradient descent to systematically test deep neural networks, uncovering adversarial-robustness and fairness violations more effectively than existing DNN testing tools.
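The mechanism can be illustrated with a toy one-dimensional "network" and an L-infinity input constraint: ascend a property-violation loss by gradient steps, projecting each iterate back into the constrained region. The model, property, and parameters below are made-up stand-ins, not the tool's internals.

```python
# Hedged sketch of constrained gradient descent for DNN testing (toy
# stand-in for CGDTest's approach, with invented model and property).

def net(x):            # toy 1-D model standing in for a trained DNN
    return 3.0 * x - 1.0

def violation(x):      # property: net(x) <= 0.5; positive value = violated
    return net(x) - 0.5

def cgd_test(x0, eps=0.4, lr=0.05, steps=50):
    """Search for a property-violating input within the L-infinity ball
    of radius `eps` around the seed input `x0`."""
    x = x0
    for _ in range(steps):
        grad = 3.0                           # d(violation)/dx for the toy net
        x += lr * grad                       # ascend toward a violation
        x = max(x0 - eps, min(x0 + eps, x))  # project back into the ball
    return x

adv = cgd_test(x0=0.3)
print(violation(adv) > 0)   # True: a violating input exists inside the ball
```

The projection step is what encodes the test constraint; swapping in fairness or domain constraints changes only that line.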
Solver + Gradient Descent Training of Deep Neural Networks
TL;DR:
Combines gradient descent with MILP solver steps to escape poor local minima and accelerate convergence on classification and regression tasks.
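A toy one-dimensional version of the alternation, with a coarse grid search standing in for the MILP solve over network weights; the objective, schedule, and step sizes are all illustrative assumptions.

```python
# Hedged sketch of alternating gradient descent with a solver-style global
# step (1-D toy; the paper applies a MILP solver to network weights).

def f(x):    # nonconvex objective with a poor local minimum near x = 1.13
    return x**4 - 3*x**2 + x

def df(x):
    return 4*x**3 - 6*x + 1

def train(x, steps=100, lr=0.01, solver_every=25):
    for t in range(1, steps + 1):
        x -= lr * df(x)                  # ordinary gradient step
        if t % solver_every == 0:
            # "Solver" step: exact search over a coarse grid around x,
            # standing in for a MILP solve that can jump between basins.
            grid = [x + 0.1 * k for k in range(-30, 31)]
            x = min(grid, key=f)
    return x

x_final = train(x=1.0)   # pure GD from here would stall in the poor minimum
print(f(x_final) < -3.0)  # True: the solver step reached the better basin
```

Gradient steps do the fine local work; the periodic exact step supplies the basin-hopping that gradient descent alone cannot.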
Amnesiac Machine Learning
TL;DR:
Develops unlearning techniques that remove the influence of specific training data to satisfy privacy laws like GDPR while maintaining model performance.
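One recorded-updates flavor of unlearning can be sketched as follows: store each batch's parameter update during SGD, then subtract the updates of batches containing the data to be forgotten. This is a toy linear-regression stand-in; undoing the final batch is exact, while undoing earlier batches is approximate because later gradients depended on earlier parameters.

```python
# Hedged sketch of amnesiac-style unlearning (toy linear model, invented
# hyperparameters; not the paper's implementation).

def train(batches, lr=0.1):
    """SGD on a 1-D least-squares model y = w*x, recording per-batch updates."""
    w = 0.0
    deltas = []  # per-batch parameter updates, kept for later unlearning
    for xs, ys in batches:
        grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
        delta = -lr * grad
        w += delta
        deltas.append(delta)
    return w, deltas

def unlearn(w, deltas, forget_ids):
    # Undo the updates contributed by the forgotten batches.
    for i in forget_ids:
        w -= deltas[i]
    return w

batches = [([1.0, 2.0], [2.0, 4.0]), ([1.0], [10.0])]
w, deltas = train(batches)
w_forgot = unlearn(w, deltas, forget_ids=[1])  # undoes the last batch exactly
```

The trade-off this illustrates is storage versus retraining: keeping per-batch deltas makes forgetting cheap, at the cost of memory proportional to training length.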
xAI-GAN: Enhancing Generative Adversarial Networks via Explainable AI Systems
TL;DR:
Leverages explainability to guide GAN training, improving both the interpretability and quality of generated outputs.