HyperAI
Atari Games on Atari 2600 Gopher
Metric: Score

Performance results of various models on this benchmark.
| Model Name | Score | Paper Title |
|---|---|---|
| GDI-I3 | 488830 | Generalized Data Distribution Iteration |
| GDI-H3 | 473560 | Generalized Data Distribution Iteration |
| MuZero | 130345.58 | Mastering Atari, Go, Chess and Shogi by Planning with a Learned Model |
| R2D2 | 124776.3 | Recurrent Experience Replay in Distributed Reinforcement Learning |
| MuZero (Res2 Adam) | 122882.5 | Online and Offline Reinforcement Learning by Planning with a Learned Model |
| Ape-X | 120500.9 | Distributed Prioritized Experience Replay |
| IQN | 118365 | Implicit Quantile Networks for Distributional Reinforcement Learning |
| Agent57 | 117777.08 | Agent57: Outperforming the Atari Human Benchmark |
| QR-DQN-1 | 113585 | Distributional Reinforcement Learning with Quantile Regression |
| Prior+Duel hs | 105148.4 | Deep Reinforcement Learning with Double Q-learning |
| Prior+Duel noop | 104368.2 | Dueling Network Architectures for Deep Reinforcement Learning |
| ASL DDQN | 103514.4 | Train a Real-world Local Path Planner in One Hour via Partially Decoupled Reinforcement Learning and Vectorized Diversity |
| DreamerV2 | 92282 | Mastering Atari with Discrete World Models |
| DNA | 80104 | DNA: Proximal Policy Optimization with a Dual Network Architecture |
| IMPALA (deep) | 66782.30 | IMPALA: Scalable Distributed Deep-RL with Importance Weighted Actor-Learner Architectures |
| DDQN+Pop-Art noop | 56218.2 | Learning values across many orders of magnitude |
| NoisyNet-Dueling | 38909 | Noisy Networks for Exploration |
| Prior hs | 34858.8 | Prioritized Experience Replay |
| C51 noop | 33641.0 | A Distributional Perspective on Reinforcement Learning |
| Prior noop | 32487.2 | Prioritized Experience Replay |
(Top 20 of 43 entries shown.)
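Raw Atari scores such as these are commonly rescaled into human-normalized scores, where 0% corresponds to a random policy and 100% to human-level play. The sketch below shows this rescaling applied to a few entries from the table; the random and human baseline values are illustrative placeholders, not official figures for Gopher.

```python
# Sketch: sorting leaderboard entries and computing human-normalized scores.
# RANDOM_BASELINE and HUMAN_BASELINE are assumed placeholder values,
# NOT the published baselines for Atari 2600 Gopher.

entries = [
    ("GDI-I3", 488830.0),
    ("MuZero", 130345.58),
    ("R2D2", 124776.3),
    ("Agent57", 117777.08),
]

RANDOM_BASELINE = 250.0   # placeholder: random-policy score
HUMAN_BASELINE = 2400.0   # placeholder: human-player score

def human_normalized(score, random=RANDOM_BASELINE, human=HUMAN_BASELINE):
    """Human-normalized score in percent: 0% = random play, 100% = human play."""
    return 100.0 * (score - random) / (human - random)

# Rank entries by raw score (descending) and report both raw and normalized values.
for name, score in sorted(entries, key=lambda e: e[1], reverse=True):
    print(f"{name:10s} raw={score:>10.1f}  normalized={human_normalized(score):>10.0f}%")
```

Scores far above 100% (as for every model in the table) indicate strongly superhuman play on this game, which is why leaderboards often report the raw score alongside aggregate normalized metrics.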