# Atari Games On Atari 2600 Bowling
Metric: Score

Performance results of various models on this benchmark:

| Model Name | Score | Paper Title |
| --- | --- | --- |
| MuZero | 260.13 | Mastering Atari, Go, Chess and Shogi by Planning with a Learned Model |
| Go-Explore | 260 | First return, then explore |
| Agent57 | 251.18 | Agent57: Outperforming the Atari Human Benchmark |
| R2D2 | 219.5 | Recurrent Experience Replay in Distributed Reinforcement Learning |
| GDI-H3 | 205.2 | Generalized Data Distribution Iteration |
| GDI-I3 | 201.9 | GDI: Rethinking What Makes Reinforcement Learning Different From Supervised Learning |
| GDI-I3 | 201.9 | Generalized Data Distribution Iteration |
| DNA | 181 | DNA: Proximal Policy Optimization with a Dual Network Architecture |
| RUDDER | 179 | RUDDER: Return Decomposition for Delayed Rewards |
| MuZero (Res2 Adam) | 131.65 | Online and Offline Reinforcement Learning by Planning with a Learned Model |
| FQF | 102.3 | Fully Parameterized Quantile Function for Distributional Reinforcement Learning |
| DDQN+Pop-Art noop | 102.1 | Learning values across many orders of magnitude |
| IQN | 86.5 | Implicit Quantile Networks for Distributional Reinforcement Learning |
| CGP | 85.8 | Evolving simple programs for playing Atari games |
| C51 noop | 81.8 | A Distributional Perspective on Reinforcement Learning |
| Reactor 500M | 81.0 | The Reactor: A fast and sample-efficient Actor-Critic agent for Reinforcement Learning |
| QR-DQN-1 | 77.2 | Distributional Reinforcement Learning with Quantile Regression |
| Persistent AL | 71.59 | Increasing the Action Gap: New Operators for Reinforcement Learning |
| DDQN (tuned) hs | 69.6 | Deep Reinforcement Learning with Double Q-learning |
| DDQN (tuned) noop | 68.1 | Dueling Network Architectures for Deep Reinforcement Learning |
The table above lists the top 20 of 44 entries on this benchmark.
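Some model names carry an evaluation-protocol suffix: "noop" denotes the standard random no-op start protocol (each episode begins with up to 30 no-op actions), while "hs" denotes human-start evaluation, where episodes begin from states sampled from human play; scores under the two regimes are not directly comparable. The following is a minimal sketch of the no-op start evaluation loop for Bowling, assuming gymnasium and ale-py are installed; `evaluate_episode` and the random placeholder policy are illustrative, not taken from any of the listed papers.

```python
# Minimal sketch of the 30-no-op evaluation protocol (the "noop" rows above),
# assuming gymnasium + ale-py. The policy is a random placeholder, not a
# trained agent; evaluate_episode is an illustrative helper name.
import random

import ale_py
import gymnasium as gym

gym.register_envs(ale_py)  # registers ALE/* environment ids (ale-py >= 0.9)


def evaluate_episode(env, policy, max_noops=30):
    """Run one episode, prefixed by a random number of no-op actions."""
    obs, _ = env.reset()
    for _ in range(random.randint(1, max_noops)):
        obs, _, terminated, truncated, _ = env.step(0)  # action 0 is NOOP in ALE
        if terminated or truncated:
            obs, _ = env.reset()
    episode_return, terminated, truncated = 0.0, False, False
    while not (terminated or truncated):
        obs, reward, terminated, truncated, _ = env.step(policy(obs))
        episode_return += reward
    return episode_return


env = gym.make("ALE/Bowling-v5")


def random_policy(obs):
    return env.action_space.sample()


scores = [evaluate_episode(env, random_policy) for _ in range(10)]
print(sum(scores) / len(scores))  # leaderboard scores are typically means over many episodes
```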