Mingzhe Wang; Yihe Tang; Jian Wang; Jia Deng

Abstract
We propose a deep learning-based approach to the problem of premise selection: selecting mathematical statements relevant for proving a given conjecture. We represent a higher-order logic formula as a graph that is invariant to variable renaming but still fully preserves syntactic and semantic information. We then embed the graph into a vector via a novel embedding method that preserves the information of edge ordering. Our approach achieves state-of-the-art results on the HolStep dataset, improving the classification accuracy from 83% to 90.3%.
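The key idea is that the formula graph drops variable names while still sharing a single node among all occurrences of the same variable, and keeps its outgoing edges ordered. The sketch below illustrates one way such a renaming-invariant, order-preserving graph could be built from a toy formula AST; the node labels, the lowercase-means-variable convention, and the nested-tuple input format are assumptions for illustration, not the authors' exact construction.

```python
# Illustrative sketch of a renaming-invariant formula graph (assumed conventions,
# not the paper's exact construction).
from dataclasses import dataclass, field

@dataclass
class Node:
    label: str                                     # operator/constant name, or "VAR" for variables
    children: list = field(default_factory=list)   # ordered outgoing edges (preserves argument order)

def formula_to_graph(ast, var_nodes=None):
    """Convert a nested-tuple AST such as ("forall", "x", ("=", "x", "x"))
    into a DAG. Variable leaves are labeled with the generic token "VAR",
    and all occurrences of the same variable share one node, so the graph
    is unchanged under renaming of bound variables."""
    if var_nodes is None:
        var_nodes = {}
    if isinstance(ast, str):
        if ast.islower():                          # assumption: lowercase atoms are variables
            if ast not in var_nodes:
                var_nodes[ast] = Node("VAR")       # name is discarded -> renaming invariance
            return var_nodes[ast]
        return Node(ast)                           # constant or operator leaf
    head, *args = ast
    node = Node(head)
    node.children = [formula_to_graph(a, var_nodes) for a in args]
    return node

# Both formulas below produce structurally identical graphs:
g1 = formula_to_graph(("forall", "x", ("=", "x", "x")))
g2 = formula_to_graph(("forall", "y", ("=", "y", "y")))
```

Because the children lists are ordered, an embedding model that reads them sequentially can distinguish, say, `f(x, y)` from `f(y, x)`, which is the edge-ordering information the abstract refers to.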
Benchmarks
| Benchmark | Model | Classification Accuracy |
|---|---|---|
| automated-theorem-proving-on-holstep | FormulaNet-basic | 0.891 |
| automated-theorem-proving-on-holstep | FormulaNet | 0.903 |
| automated-theorem-proving-on-holstep-1 | FormulaNet-basic | 0.890 |
| automated-theorem-proving-on-holstep-1 | FormulaNet | 0.900 |