Common Sense Reasoning On Swag
Metric: Test

Performance results of various models on this benchmark:

| Model Name | Test | Paper Title |
| --- | --- | --- |
| DeBERTa-large | 90.8 | DeBERTa: Decoding-enhanced BERT with Disentangled Attention |
| RoBERTa | 89.9 | RoBERTa: A Robustly Optimized BERT Pretraining Approach |
| BERT-LARGE | 86.3 | BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding |
| ESIM + ELMo | 59.2 | SWAG: A Large-Scale Adversarial Dataset for Grounded Commonsense Inference |
| ESIM + GloVe | 52.7 | SWAG: A Large-Scale Adversarial Dataset for Grounded Commonsense Inference |
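The "Test" column is multiple-choice accuracy on the SWAG test set: each example has four candidate endings, a model scores them, and the prediction is the highest-scoring ending. A minimal sketch of that computation, using toy scores and labels rather than real SWAG data or a real model:

```python
def accuracy(predictions, labels):
    """Fraction of examples where the predicted ending matches the gold label."""
    correct = sum(p == l for p, l in zip(predictions, labels))
    return correct / len(labels)

# Toy model scores for 3 examples, 4 candidate endings each (not real SWAG data).
scores = [
    [0.1, 0.7, 0.1, 0.1],  # predicts ending 1
    [0.6, 0.2, 0.1, 0.1],  # predicts ending 0
    [0.2, 0.2, 0.5, 0.1],  # predicts ending 2
]
# Prediction = argmax over the four candidate endings.
predictions = [max(range(4), key=lambda i: s[i]) for s in scores]
labels = [1, 0, 3]  # toy gold endings

print(accuracy(predictions, labels))  # 2 of 3 correct
```

A leaderboard score of 90.8 corresponds to this fraction computed over the full test set, expressed as a percentage.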
Common Sense Reasoning On Swag | SOTA | HyperAI