OpenAI Gym on Walker2d-v4
Metric: Average Return

Results
Performance results of various models on this benchmark.
| Model Name | Average Return | Paper Title |
|---|---|---|
| SAC | 5745.27 | Soft Actor-Critic: Off-Policy Maximum Entropy Deep Reinforcement Learning with a Stochastic Actor |
| MEow | 5526.66 | Maximum Entropy Reinforcement Learning via Energy-Based Normalizing Flow |
| DDPG | 2994.54 | Continuous control with deep reinforcement learning |
| PPO | 2739.81 | Proximal Policy Optimization Algorithms |
| TD3 | 2612.74 | Addressing Function Approximation Error in Actor-Critic Methods |
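For reference, the Average Return metric on Walker2d-v4 is typically estimated by rolling out a trained policy for several episodes and averaging the undiscounted episode returns. The sketch below is a minimal, illustrative example using the Gymnasium API; the `policy` callable, episode count, and seeding scheme are assumptions for illustration and not the exact evaluation protocol behind the numbers above.

```python
# Minimal sketch: estimating Average Return on Walker2d-v4 with Gymnasium.
# Assumes Gymnasium with MuJoCo extras installed (pip install "gymnasium[mujoco]").
# The `policy(obs)` callable stands in for a trained agent; a random policy is used here.
import gymnasium as gym
import numpy as np

def average_return(policy, n_episodes=10, seed=0):
    env = gym.make("Walker2d-v4")
    returns = []
    for ep in range(n_episodes):
        obs, info = env.reset(seed=seed + ep)
        done, ep_return = False, 0.0
        while not done:
            action = policy(obs)
            obs, reward, terminated, truncated, info = env.step(action)
            ep_return += reward
            done = terminated or truncated
        returns.append(ep_return)
    env.close()
    return float(np.mean(returns))

if __name__ == "__main__":
    # Placeholder policy: uniform random actions drawn from the action space.
    probe_env = gym.make("Walker2d-v4")
    random_policy = lambda obs: probe_env.action_space.sample()
    print(f"Average return (random policy): {average_return(random_policy, n_episodes=5):.2f}")
    probe_env.close()
```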