Atari Games On Atari 2600 Krull
Metrics: Score

Results
Performance results of various models on this benchmark.
| Model Name | Score | Paper Title |
|---|---|---|
| GDI-H3 | 594540 | Generalized Data Distribution Iteration |
| MuZero | 269358.27 | Mastering Atari, Go, Chess and Shogi by Planning with a Learned Model |
| Agent57 | 251997.31 | Agent57: Outperforming the Atari Human Benchmark |
| R2D2 | 218448.1 | Recurrent Experience Replay in Distributed Reinforcement Learning |
| GDI-I3 | 97575 | GDI: Rethinking What Makes Reinforcement Learning Different From Supervised Learning |
| GDI-I3 | 97575 | Generalized Data Distribution Iteration |
| MuZero (Res2 Adam) | 72570.5 | Online and Offline Reinforcement Learning by Planning with a Learned Model |
| DreamerV2 | 50061 | Mastering Atari with Discrete World Models |
| VPN | 15930 | Value Prediction Network |
| Ape-X | 11741.4 | Distributed Prioritized Experience Replay |
| Duel noop | 11451.9 | Dueling Network Architectures for Deep Reinforcement Learning |
| QR-DQN-1 | 11447 | Distributional Reinforcement Learning with Quantile Regression |
| DNA | 10956 | DNA: Proximal Policy Optimization with a Dual Network Architecture |
| NoisyNet-Dueling | 10754 | Noisy Networks for Exploration |
| IQN | 10707 | Implicit Quantile Networks for Distributional Reinforcement Learning |
| A2C + SIL | 10614.6 | Self-Imitation Learning |
| ASL DDQN | 10422.5 | Train a Real-world Local Path Planner in One Hour via Partially Decoupled Reinforcement Learning and Vectorized Diversity |
| Prior+Duel noop | 10374.4 | Dueling Network Architectures for Deep Reinforcement Learning |
| DDQN+Pop-Art noop | 9745.1 | Learning values across many orders of magnitude |
| C51 noop | 9735.0 | A Distributional Perspective on Reinforcement Learning |
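For readers who want to work with these leaderboard entries programmatically, below is a minimal sketch in plain Python. The `Entry` dataclass and its field names are illustrative assumptions, not part of any HyperAI API or export format; the values shown are copied from the table above, and the remaining rows can be added the same way.

```python
# Illustrative sketch: represent Krull leaderboard rows as plain Python
# records and print them ranked by score. Names are hypothetical, not a
# HyperAI API; scores are copied from the table above.
from dataclasses import dataclass


@dataclass
class Entry:
    model: str
    score: float
    paper: str


entries = [
    Entry("GDI-H3", 594540, "Generalized Data Distribution Iteration"),
    Entry("MuZero", 269358.27,
          "Mastering Atari, Go, Chess and Shogi by Planning with a Learned Model"),
    Entry("Agent57", 251997.31, "Agent57: Outperforming the Atari Human Benchmark"),
    Entry("R2D2", 218448.1,
          "Recurrent Experience Replay in Distributed Reinforcement Learning"),
    # ... remaining rows follow the same pattern ...
]

# Rank by score, highest first, matching the order the leaderboard displays.
for rank, e in enumerate(sorted(entries, key=lambda e: e.score, reverse=True), start=1):
    print(f"{rank:2d}. {e.model:<20} {e.score:>12,.2f}")
```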