Question Answering on TREC-QA
Metrics: MAP (Mean Average Precision) and MRR (Mean Reciprocal Rank).
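For reference, below is a minimal sketch of how these two metrics are typically computed for answer sentence selection, assuming each question comes with a fully ranked candidate list and binary relevance labels (1 = correct answer sentence). The function names and the toy data are illustrative only, not part of the official trec_eval tooling.

```python
from typing import List, Tuple

def average_precision(relevance: List[int]) -> float:
    """AP for one question: `relevance` holds the 0/1 labels of the
    ranked candidate answers, best-ranked first."""
    hits, precision_sum = 0, 0.0
    for rank, rel in enumerate(relevance, start=1):
        if rel:
            hits += 1
            precision_sum += hits / rank  # precision at this relevant rank
    return precision_sum / hits if hits else 0.0

def reciprocal_rank(relevance: List[int]) -> float:
    """RR for one question: inverse rank of the first correct answer."""
    for rank, rel in enumerate(relevance, start=1):
        if rel:
            return 1.0 / rank
    return 0.0

def map_mrr(rankings: List[List[int]]) -> Tuple[float, float]:
    """MAP and MRR: the per-question scores averaged over the benchmark."""
    n = len(rankings)
    return (sum(average_precision(r) for r in rankings) / n,
            sum(reciprocal_rank(r) for r in rankings) / n)

# Toy example with two questions (hypothetical labels):
print(map_mrr([[0, 1, 1, 0], [1, 0, 0, 0]]))  # -> (0.7916..., 0.75)
```

Because MRR only rewards the rank of the first correct answer while MAP averages precision over all correct answers, MRR is usually the higher of the two on this benchmark.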
Results: performance of various models on this benchmark.
| Model Name | MAP | MRR | Paper Title |
| --- | --- | --- | --- |
| TANDA DeBERTa-V3-Large + ALL | 0.954 | 0.984 | Structural Self-Supervised Objectives for Transformers |
| TANDA-RoBERTa (ASNQ, TREC-QA) | 0.943 | 0.974 | TANDA: Transfer and Adapt Pre-Trained Transformer Models for Answer Sentence Selection |
| DeBERTa-V3-Large + SSP | 0.923 | 0.946 | Pre-training Transformer Models with Sentence-Level Objectives for Answer Sentence Selection |
| Contextual DeBERTa-V3-Large + SSP | 0.919 | 0.945 | Context-Aware Transformer Pre-Training for Answer Sentence Selection |
| RLAS-BIABC | 0.913 | 0.998 | RLAS-BIABC: A Reinforcement Learning-Based Answer Selection Using the BERT Model Boosted by an Improved ABC Algorithm |
| RoBERTa-Base Joint + MSPP | 0.911 | 0.952 | Paragraph-based Transformer Pre-training for Multi-Sentence Inference |
| RoBERTa-Base + PSD | 0.903 | 0.951 | Pre-training Transformer Models with Sentence-Level Objectives for Answer Sentence Selection |
| Comp-Clip + LM + LC | 0.868 | 0.928 | A Compare-Aggregate Model with Latent Clustering for Answer Selection |
| NLP-Capsule | 0.7773 | 0.7416 | Towards Scalable and Reliable Capsule Networks for Challenging NLP Applications |
| HyperQA | 0.770 | 0.825 | Hyperbolic Representation Learning for Fast and Efficient Neural Question Answering |
| PWIN | 0.7588 | 0.8219 | - |
| aNMM | 0.750 | 0.811 | aNMM: Ranking Short Answer Texts with Attention-Based Neural Matching Model |
| CNN | 0.711 | 0.785 | Deep Learning for Answer Sentence Selection |