GSM8K on GSM8K
Metrics: 0-shot MRR

Results
Performance results of various models on this benchmark:
| Model Name | 0-shot MRR | Paper Title |
| --- | --- | --- |
| Orange-mini | 98 | MyGO Multiplex CoT: A Method for Self-Reflection in Large Language Models via Double Chain of Thought Thinking |
| AlphaLLM (with MCTS) | - | Toward Self-Improvement of LLMs via Imagination, Searching, and Criticizing |
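The 0-shot MRR column reports mean reciprocal rank under zero-shot prompting. As a general reference only (the page does not specify this leaderboard's exact evaluation harness), MRR is conventionally defined as:

```latex
% Standard mean reciprocal rank over N evaluation questions, where rank_i is
% the position of the first correct answer for question i.
% This is the textbook definition, not code taken from this leaderboard.
\mathrm{MRR} = \frac{1}{N} \sum_{i=1}^{N} \frac{1}{\mathrm{rank}_i}
```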