Graph Classification On MNIST
Metric: Accuracy

Results: performance of various models on this benchmark.

| Model Name | Accuracy | Paper Title |
| --- | --- | --- |
| ESA (Edge set attention, no positional encodings, tuned) | 98.917 ± 0.020 | An end-to-end attention-based approach for learning on graphs |
| NeuralWalker | 98.760 ± 0.079 | Learning Long Range Dependencies on Graphs via Random Walks |
| ESA (Edge set attention, no positional encodings) | 98.753 ± 0.041 | An end-to-end attention-based approach for learning on graphs |
| GatedGCN+ | 98.712 ± 0.137 | Can Classic GNNs Be Strong Baselines for Graph-level Tasks? Simple Architectures Meet Excellence |
| CKGCN | 98.423 | CKGConv: General Graph Convolution with Continuous Kernels |
| Exphormer | 98.414 ± 0.038 | Exphormer: Sparse Transformers for Graphs |
| GCN+ | 98.382 ± 0.095 | Can Classic GNNs Be Strong Baselines for Graph-level Tasks? Simple Architectures Meet Excellence |
| EIGENFORMER | 98.362 | Graph Transformers without Positional Encodings |
| TIGT | 98.230 ± 0.133 | Topology-Informed Graph Transformer |
| EGT | 98.173 | Global Self-Attention as a Replacement for Graph Convolution |
| GRIT | 98.108 | Graph Inductive Biases in Transformers without Message Passing |
| GPS | 98.05 | Recipe for a General, Powerful, Scalable Graph Transformer |
| GatedGCN | 97.340 | Benchmarking Graph Neural Networks |