Neuro-Symbolic Entropy Regularization

by Kareem Ahmed, et al.

In structured prediction, the goal is to jointly predict many output variables that together encode a structured object – a path in a graph, an entity-relation triple, or an ordering of objects. Such a large output space makes learning hard and requires vast amounts of labeled data. Different approaches leverage alternate sources of supervision. One approach – entropy regularization – posits that decision boundaries should lie in low-probability regions. It extracts supervision from unlabeled examples, but remains agnostic to the structure of the output space. Conversely, neuro-symbolic approaches exploit the knowledge that not every prediction corresponds to a valid structure in the output space. Yet, they do not further restrict the learned output distribution. This paper introduces a framework that unifies both approaches. We propose a loss, neuro-symbolic entropy regularization, that encourages the model to confidently predict a valid object. It is obtained by restricting entropy regularization to the distribution over only valid structures. This loss is efficiently computed when the output constraint is expressed as a tractable logic circuit. Moreover, it seamlessly integrates with other neuro-symbolic losses that eliminate invalid predictions. We demonstrate the efficacy of our approach on a series of semi-supervised and fully-supervised structured-prediction experiments, where we find that it leads to models whose predictions are more accurate and more likely to be valid.
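To make the idea concrete, here is a brute-force sketch of the loss described above: take a factorized output distribution, condition it on a symbolic constraint, and compute the entropy of the resulting distribution over valid structures only. The function name and the exactly-one example constraint are illustrative choices, not the paper's API; the paper computes this quantity tractably with logic circuits rather than by enumeration.

```python
import itertools
import math

def neuro_symbolic_entropy(probs, is_valid):
    """Entropy of the model's output distribution restricted to valid structures.

    probs:    per-variable Bernoulli probabilities from a factorized model.
    is_valid: predicate encoding the symbolic constraint on an assignment.

    Enumerates all assignments for clarity; this is exponential in len(probs),
    which is exactly why tractable circuit representations matter in practice.
    """
    n = len(probs)
    valid_mass = []
    for y in itertools.product([0, 1], repeat=n):
        if not is_valid(y):
            continue
        # Probability of this assignment under the factorized model.
        p = 1.0
        for yi, pi in zip(y, probs):
            p *= pi if yi else (1.0 - pi)
        valid_mass.append(p)
    z = sum(valid_mass)  # probability that the constraint is satisfied
    # Entropy of the renormalized (conditioned-on-valid) distribution.
    return -sum((p / z) * math.log(p / z) for p in valid_mass if p > 0.0)

# Example constraint: exactly one of three variables is on.
probs = [0.7, 0.2, 0.1]
h = neuro_symbolic_entropy(probs, lambda y: sum(y) == 1)
```

Minimizing this quantity (alongside a validity-enforcing loss such as semantic loss) pushes the model to place its confidence on a single valid structure, rather than merely being confident somewhere in the unconstrained output space.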


