Question Answering on WebQuestions
Metric: EM (exact match)
Results: performance of various models on this benchmark.
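For reference, EM here is the standard exact-match accuracy used in open-domain QA: a prediction counts as correct if, after normalization, it matches any of the gold answer strings for the question. The sketch below is an assumed implementation, not code from this page; the normalization steps (lowercasing, stripping punctuation and articles) and the function names are illustrative conventions.

```python
import re
import string

def normalize(text: str) -> str:
    """Lowercase, strip punctuation and English articles, collapse whitespace (assumed normalization)."""
    text = text.lower()
    text = "".join(ch for ch in text if ch not in string.punctuation)
    text = re.sub(r"\b(a|an|the)\b", " ", text)
    return " ".join(text.split())

def exact_match(prediction: str, gold_answers: list[str]) -> bool:
    """True if the normalized prediction equals any normalized gold answer."""
    pred = normalize(prediction)
    return any(pred == normalize(ans) for ans in gold_answers)

def em_score(predictions: list[str], gold: list[list[str]]) -> float:
    """Dataset-level EM, reported as a percentage (e.g. 70.7)."""
    correct = sum(exact_match(p, g) for p, g in zip(predictions, gold))
    return 100.0 * correct / len(predictions)

# Example: one question with two acceptable gold answers.
print(em_score(["Barack Obama"], [["barack obama", "Obama"]]))  # 100.0
```

WebQuestions questions can have several acceptable answer strings, which is why a prediction is compared against every gold answer rather than a single reference.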
| Model Name | EM | Paper Title |
| --- | --- | --- |
| CoA | 70.7 | Chain-of-Action: Faithful and Multimodal Question Answering through Large Language Models |
| CoA w/o actions | 64.7 | Chain-of-Action: Faithful and Multimodal Question Answering through Large Language Models |
| DSP | 59.4 | Chain-of-Action: Faithful and Multimodal Question Answering through Large Language Models |
| DSP | 59.4 | DSPy: Compiling Declarative Language Model Calls into Self-Improving Pipelines |
| FiE+PAQ | 56.3 | FiE: Building a Global Probability Space by Leveraging Early Fusion in Encoder for Open-Domain Question Answering |
| FiE | 52.4 | FiE: Building a Global Probability Space by Leveraging Early Fusion in Encoder for Open-Domain Question Answering |
| FiDO | 51.1 | FiDO: Fusion-in-Decoder optimized for stronger performance and faster inference |
| RAG | 45.2 | Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks |
| Few-shot | 44.7 | Language Models are Few-Shot Learners |
| Few-shot | 44.7 | Chain-of-Action: Faithful and Multimodal Question Answering through Large Language Models |
| PaLM-540B (Few-Shot) | 43.5 | PaLM: Scaling Language Modeling with Pathways |
| Zero-shot | 43.0 | Chain-of-Action: Faithful and Multimodal Question Answering through Large Language Models |
| Zero-shot | 43.0 | Language Models are Unsupervised Multitask Learners |
| T5.1.1-XXL+SSM | 42.8 | Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer |
| CoT | 42.5 | Chain-of-Thought Prompting Elicits Reasoning in Large Language Models |
| CoT | 42.5 | Chain-of-Action: Faithful and Multimodal Question Answering through Large Language Models |
| DPR | 42.4 | Dense Passage Retrieval for Open-Domain Question Answering |
| GPT-3-175B (Few-Shot) | 41.5 | Language Models are Few-Shot Learners |
| REALM | 40.7 | REALM: Retrieval-Augmented Language Model Pre-Training |
| ReAct | 38.3 | ReAct: Synergizing Reasoning and Acting in Language Models |