Atari Games on Atari 2600 Venture
Metric: Score

Results: performance of various models on this benchmark.
| Model Name | Score | Paper Title |
| --- | --- | --- |
| Agent57 | 2623.71 | Agent57: Outperforming the Atari Human Benchmark |
| Go-Explore | 2281 | First return, then explore |
| SND-VIC | 2188 | Self-supervised network distillation: an effective approach to exploration in sparse reward environments |
| SND-STD | 2138 | Self-supervised network distillation: an effective approach to exploration in sparse reward environments |
| GDI-I3 | 2035 | Generalized Data Distribution Iteration |
| GDI-H3 | 2000 | Generalized Data Distribution Iteration |
| GDI-H3 (200M frames) | 2000 | Generalized Data Distribution Iteration |
| R2D2 | 1970.7 | Recurrent Experience Replay in Distributed Reinforcement Learning |
| RND | 1859 | Exploration by Random Network Distillation |
| Ape-X | 1813 | Distributed Prioritized Experience Replay |
| SND-V | 1787 | Self-supervised network distillation: an effective approach to exploration in sparse reward environments |
| MuZero (Res2 Adam) | 1731.47 | Online and Offline Reinforcement Learning by Planning with a Learned Model |
| C51 noop | 1520.0 | A Distributional Perspective on Reinforcement Learning |
| RUDDER | 1350 | RUDDER: Return Decomposition for Delayed Rewards |
| IQN | 1318 | Implicit Quantile Networks for Distributional Reinforcement Learning |
| DQNMMCe+SR | 1241.8 | Count-Based Exploration with the Successor Representation |
| DDQN+Pop-Art noop | 1172.0 | Learning values across many orders of magnitude |
| Sarsa-φ-EB | 1169.2 | Count-Based Exploration in Feature Space for Reinforcement Learning |
| NoisyNet-Dueling | 815 | Noisy Networks for Exploration |
| ES FF (1 hour) noop | 760.0 | Evolution Strategies as a Scalable Alternative to Reinforcement Learning |
The table above lists 20 of the 55 leaderboard entries for this benchmark.
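For readers who want to sanity-check an agent on this environment, the sketch below shows one common way to roll it out through the Arcade Learning Environment via Gymnasium. The environment ID `ALE/Venture-v5`, the `ale-py`/`gymnasium[atari]` dependency, and the random policy are assumptions for illustration only; the published scores in the table come from each paper's own evaluation protocol (e.g. no-op starts, specific frame budgets), which this sketch does not reproduce.

```python
# Minimal sketch (assumption): rolling out a random policy on Atari 2600 Venture
# with Gymnasium + ale-py. Install with: pip install "gymnasium[atari,accept-rom-license]"
import gymnasium as gym


def evaluate_random_policy(episodes: int = 5, seed: int = 0) -> float:
    """Return the mean undiscounted episode score over a few rollouts."""
    env = gym.make("ALE/Venture-v5")  # assumed environment ID registered by ale-py
    returns = []
    for ep in range(episodes):
        obs, info = env.reset(seed=seed + ep)
        done, total = False, 0.0
        while not done:
            action = env.action_space.sample()  # placeholder for a trained agent's policy
            obs, reward, terminated, truncated, info = env.step(action)
            total += reward
            done = terminated or truncated
        returns.append(total)
    env.close()
    return sum(returns) / len(returns)


if __name__ == "__main__":
    print(f"Mean score over rollouts: {evaluate_random_policy():.1f}")
```

A random policy scores near zero on Venture, which is exactly why the game is a standard hard-exploration benchmark; the leaderboard methods above (Agent57, Go-Explore, RND, SND, etc.) replace the sampled action with outputs from exploration-driven agents.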