Atari Games On Atari 2600 Kangaroo
Metrics: Score
Results
Performance results of various models on this benchmark, ranked by score (a brief evaluation sketch follows the table).
| Model Name | Score | Paper Title |
|---|---|---|
| Agent57 | 24034.16 | Agent57: Outperforming the Atari Human Benchmark |
| MuZero | 16763.60 | Mastering Atari, Go, Chess and Shogi by Planning with a Learned Model |
| Prior noop | 16200.0 | Prioritized Experience Replay |
| IQN | 15487 | Implicit Quantile Networks for Distributional Reinforcement Learning |
| QR-DQN-1 | 15356 | Distributional Reinforcement Learning with Quantile Regression |
| NoisyNet-Dueling | 15227 | Noisy Networks for Exploration |
| Bootstrapped DQN | 14862.5 | Deep Exploration via Bootstrapped DQN |
| Duel noop | 14854.0 | Dueling Network Architectures for Deep Reinforcement Learning |
| GDI-H3 | 14636 | Generalized Data Distribution Iteration |
| GDI-I3 | 14500 | GDI: Rethinking What Makes Reinforcement Learning Different From Supervised Learning |
| GDI-I3 | 14500 | Generalized Data Distribution Iteration |
| DNA | 14373 | DNA: Proximal Policy Optimization with a Dual Network Architecture |
| R2D2 | 14130.7 | Recurrent Experience Replay in Distributed Reinforcement Learning |
| DreamerV2 | 14064 | Mastering Atari with Discrete World Models |
| MuZero (Res2 Adam) | 13838 | Online and Offline Reinforcement Learning by Planning with a Learned Model |
| DDQN+Pop-Art noop | 13150.0 | Learning values across many orders of magnitude |
| ASL DDQN | 13027 | Train a Real-world Local Path Planner in One Hour via Partially Decoupled Reinforcement Learning and Vectorized Diversity |
| DDQN (tuned) noop | 12992.0 | Dueling Network Architectures for Deep Reinforcement Learning |
| C51 noop | 12853.0 | A Distributional Perspective on Reinforcement Learning |
| Prior hs | 12185.0 | Prioritized Experience Replay |
The full leaderboard contains 47 entries; the top 20 are shown above. Model-name suffixes such as "noop" and "hs" indicate the evaluation protocol used in the source paper (random no-op starts vs. human starts).
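For context on what the Score column measures: it is the raw in-game score accumulated over an evaluation episode of Kangaroo, with papers typically averaging many such episodes under their stated start protocol. Below is a minimal sketch, assuming Gymnasium with the ale-py Atari environments installed (e.g. `pip install "gymnasium[atari]"`, plus Atari ROMs where your ale-py version requires them; none of this tooling is part of the leaderboard itself), of how a single episode score on `ALE/Kangaroo-v5` can be measured. The random policy is only a stand-in for a trained agent such as those listed above.

```python
# Minimal sketch: measure one episode's raw game score on Atari Kangaroo.
# Assumes Gymnasium with ale-py installed; the random policy below is a
# placeholder for a trained agent (e.g. one of the models in the table).
import gymnasium as gym
import ale_py  # noqa: F401  -- importing ale_py registers the ALE/* environments


def run_episode(env_id: str = "ALE/Kangaroo-v5", seed: int = 0) -> float:
    env = gym.make(env_id)
    obs, info = env.reset(seed=seed)
    total_reward, done = 0.0, False
    while not done:
        # Random action; replace with a trained agent's action selection.
        action = env.action_space.sample()
        obs, reward, terminated, truncated, info = env.step(action)
        total_reward += float(reward)
        done = terminated or truncated
    env.close()
    return total_reward


if __name__ == "__main__":
    print(f"Episode score: {run_episode():.1f}")
```

Published leaderboard numbers additionally average over many evaluation episodes and follow protocol details (no-op or human starts, frame skipping, episode time limits) defined in each paper, so a single-episode measurement like this is only illustrative.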