Premise Selection for Theorem Proving by Deep Graph Embedding

09/28/2017
by Mingzhe Wang et al.

We propose a deep learning-based approach to the problem of premise selection: selecting mathematical statements relevant for proving a given conjecture. We represent a higher-order logic formula as a graph that is invariant to variable renaming but still fully preserves syntactic and semantic information. We then embed the graph into a vector via a novel embedding method that preserves the information of edge ordering. Our approach achieves state-of-the-art results on the HolStep dataset, improving the classification accuracy from 83% to 90.3%.
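The two key properties of the representation can be sketched concretely: every variable node gets the same anonymous label (so alpha-equivalent formulas map to identical graphs), while edges carry the argument position (so the ordering information is not lost). The following is a minimal illustrative sketch, not the authors' code; the names `Term` and `to_graph` are assumptions made for this example.

```python
# Minimal sketch (not the paper's implementation) of a formula-to-graph
# conversion that is invariant to variable renaming but preserves edge order.
from dataclasses import dataclass, field

@dataclass
class Term:
    head: str                          # constant, function, or variable name
    args: list = field(default_factory=list)
    is_var: bool = False

def to_graph(term):
    """Return (node_labels, edges). Each edge is (parent, child, arg_index),
    so argument ordering is preserved; every variable is relabeled 'VAR' and
    shares one node per binding, so renaming variables leaves the graph
    unchanged."""
    labels, edges = [], []
    seen_vars = {}                     # variable name -> shared node id

    def visit(t):
        if t.is_var:
            if t.head not in seen_vars:    # one shared node per variable
                seen_vars[t.head] = len(labels)
                labels.append("VAR")
            return seen_vars[t.head]
        nid = len(labels)
        labels.append(t.head)
        for i, a in enumerate(t.args):
            edges.append((nid, visit(a), i))   # i = argument position
        return nid

    visit(term)
    return labels, edges

# f(x, g(x)) and f(y, g(y)) are alpha-equivalent and yield identical graphs:
t1 = Term("f", [Term("x", is_var=True), Term("g", [Term("x", is_var=True)])])
t2 = Term("f", [Term("y", is_var=True), Term("g", [Term("y", is_var=True)])])
print(to_graph(t1) == to_graph(t2))    # True
```

Note that because edges are tagged with the argument index, f(a, b) and f(b, a) still produce different graphs, which is the edge-ordering information the embedding method is designed to retain.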

Related research

- Graph Representations for Higher-Order Logic and Theorem Proving (05/24/2019)
- An Ensemble Approach for Automated Theorem Proving Based on Efficient Name Invariant Graph Neural Representations (05/15/2023)
- Improving Graph Neural Network Representations of Logical Formulae with Subgraph Pooling (11/15/2019)
- DeepMath - Deep Sequence Models for Premise Selection (06/14/2016)
- A Study of Continuous Vector Representations for Theorem Proving (01/22/2021)
- Property Invariant Embedding for Automated Reasoning (11/27/2019)
