Question Answering on WikiQA
Evaluation metrics: MAP, MRR
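Both metrics score how well a model ranks the candidate answer sentences for each question. The usual definitions are given below in LaTeX notation; note that the exact evaluation script may differ in details such as how questions without a correct candidate are handled.

```latex
% Standard definitions of the two leaderboard metrics.
% Q: evaluated questions; rank_i: position of the first correct
% answer for question i; R_i: set of correct answers for question i;
% d_{i,k}: candidate ranked at position k for question i;
% Prec_i(k): precision among the top-k ranked candidates.
\[
\mathrm{MRR} = \frac{1}{|Q|} \sum_{i=1}^{|Q|} \frac{1}{\mathrm{rank}_i}
\qquad
\mathrm{MAP} = \frac{1}{|Q|} \sum_{i=1}^{|Q|} \frac{1}{|R_i|}
  \sum_{k \,:\, d_{i,k} \in R_i} \mathrm{Prec}_i(k)
\]
```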
Benchmark results
Performance of each model on this benchmark.
| Model | MAP | MRR | Paper |
|---|---|---|---|
| TANDA-DeBERTa-V3-Large + ALL | 0.927 | 0.939 | Structural Self-Supervised Objectives for Transformers |
| RLAS-BIABC | 0.924 | 0.908 | RLAS-BIABC: A Reinforcement Learning-Based Answer Selection Using the BERT Model Boosted by an Improved ABC Algorithm |
| TANDA-RoBERTa (ASNQ, WikiQA) | 0.920 | 0.933 | TANDA: Transfer and Adapt Pre-Trained Transformer Models for Answer Sentence Selection |
| DeBERTa-V3-Large + ALL | 0.909 | 0.920 | Pre-training Transformer Models with Sentence-Level Objectives for Answer Sentence Selection |
| DeBERTa-Large + SSP | 0.901 | 0.914 | Pre-training Transformer Models with Sentence-Level Objectives for Answer Sentence Selection |
| RoBERTa-Base + SSP | 0.887 | 0.899 | Pre-training Transformer Models with Sentence-Level Objectives for Answer Sentence Selection |
| RoBERTa-Base Joint MSPP | 0.887 | 0.900 | Paragraph-based Transformer Pre-training for Multi-Sentence Inference |
| Comp-Clip + LM + LC | 0.764 | 0.784 | A Compare-Aggregate Model with Latent Clustering for Answer Selection |
| RE2 | 0.7452 | 0.7618 | Simple and Effective Text Matching with Richer Alignment Features |
| HyperQA | 0.712 | 0.727 | Hyperbolic Representation Learning for Fast and Efficient Neural Question Answering |
| PWIM | 0.7090 | 0.7234 | - |
| Key-Value Memory Network | 0.7069 | 0.7265 | Key-Value Memory Networks for Directly Reading Documents |
| LDC | 0.7058 | 0.7226 | Sentence Similarity Learning by Lexical Decomposition and Composition |
| PairwiseRank + Multi-Perspective CNN | 0.7010 | 0.7180 | Noise Contrastive Estimation and Negative Sampling for Conditional Models: Consistency and Statistical Efficiency |
| AP-CNN | 0.6886 | 0.6957 | Attentive Pooling Networks |
| Attentive LSTM | 0.6886 | 0.7069 | Neural Variational Inference for Text Processing |
| LSTM (lexical overlap + dist output) | 0.682 | 0.6988 | Neural Variational Inference for Text Processing |
| MMA-NSE attention | 0.6811 | 0.6993 | Neural Semantic Encoders |
| SWEM-concat | 0.6788 | 0.6908 | Baseline Needs More Love: On Simple Word-Embedding-Based Models and Associated Pooling Mechanisms |
| LSTM | 0.6552 | 0.6747 | Neural Variational Inference for Text Processing |
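To make the metrics concrete, here is a minimal, self-contained Python sketch of how MAP and MRR are typically computed for WikiQA-style answer selection. The function names and the toy data are illustrative only; dropping questions with no correct candidate follows common WikiQA practice but may differ from the evaluation script behind this leaderboard.

```python
from typing import List


def reciprocal_rank(labels: List[int]) -> float:
    """Reciprocal rank of the first correct answer.

    `labels` holds the relevance (1 = correct, 0 = incorrect) of one
    question's candidates, sorted by model score in descending order.
    """
    for rank, rel in enumerate(labels, start=1):
        if rel:
            return 1.0 / rank
    return 0.0


def average_precision(labels: List[int]) -> float:
    """Average precision over one question's ranked candidates."""
    hits, score = 0, 0.0
    for rank, rel in enumerate(labels, start=1):
        if rel:
            hits += 1
            score += hits / rank  # precision at this cut-off
    return score / hits if hits else 0.0


def evaluate(ranked_labels_per_question: List[List[int]]) -> dict:
    """Macro-average MAP and MRR across questions.

    Questions with no correct candidate are skipped, as is common
    in WikiQA evaluation (an assumption, not leaderboard-verified).
    """
    qs = [q for q in ranked_labels_per_question if any(q)]
    return {
        "MAP": sum(average_precision(q) for q in qs) / len(qs),
        "MRR": sum(reciprocal_rank(q) for q in qs) / len(qs),
    }


# Two toy questions: the first has its correct answer ranked 2nd;
# the second has correct answers ranked 1st and 3rd.
print(evaluate([[0, 1, 0], [1, 0, 1]]))  # MAP ≈ 0.667, MRR = 0.75
```

On the toy data, question one contributes AP = RR = 1/2, and question two contributes AP = (1/1 + 2/3)/2 = 5/6 with RR = 1, giving the averages shown in the comment.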