HyperAI | SOTA | Atari Games
Atari Games On Atari 2600 Gravitar
Metric: Score

Performance results of various models on this benchmark:
| Model Name | Score | Paper Title |
| --- | --- | --- |
| Agent57 | 19213.96 | Agent57: Outperforming the Atari Human Benchmark |
| R2D2 | 15680.7 | Recurrent Experience Replay in Distributed Reinforcement Learning |
| MuZero (Res2 Adam) | 8006.93 | Online and Offline Reinforcement Learning by Planning with a Learned Model |
| Go-Explore | 7588 | First return, then explore |
| SND-VIC | 6712 | Self-supervised network distillation: an effective approach to exploration in sparse reward environments |
| MuZero | 6682.70 | Mastering Atari, Go, Chess and Shogi by Planning with a Learned Model |
| GDI-H3 | 5915 | Generalized Data Distribution Iteration |
| GDI-I3 | 5905 | GDI: Rethinking What Makes Reinforcement Learning Different From Supervised Learning |
| GDI-I3 | 5905 | Generalized Data Distribution Iteration |
| SND-STD | 4643 | Self-supervised network distillation: an effective approach to exploration in sparse reward environments |
| RND | 3906 | Exploration by Random Network Distillation |
| DreamerV2 | 3789 | Mastering Atari with Discrete World Models |
| UCT | 2850 | The Arcade Learning Environment: An Evaluation Platform for General Agents |
| SND-V | 2741 | Self-supervised network distillation: an effective approach to exploration in sparse reward environments |
| CGP | 2350 | Evolving simple programs for playing Atari games |
| NoisyNet-Dueling | 2209 | Noisy Networks for Exploration |
| DNA | 2190 | DNA: Proximal Policy Optimization with a Dual Network Architecture |
| A2C + SIL | 1874.2 | Self-Imitation Learning |
| Ape-X | 1598.5 | Distributed Prioritized Experience Replay |
| FQF | 1406.0 | Fully Parameterized Quantile Function for Distributional Reinforcement Learning |
Showing 20 of 53 rows.