Visual Navigation on Room-to-Room
Metric: SPL (Success weighted by Path Length)
Results: performance of various models on this benchmark, ranked by SPL (higher is better).
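For reference, SPL is the standard navigation metric from Anderson et al. (2018): it weights each episode's binary success by the ratio of the shortest-path length to the length of the path the agent actually took, so 1.0 means every episode succeeded along an optimal path. Below is a minimal illustrative sketch of how it is typically computed; the function name and arguments are assumptions for illustration, not part of this leaderboard's tooling.

```python
from typing import Sequence

def spl(successes: Sequence[bool],
        shortest_path_lengths: Sequence[float],
        agent_path_lengths: Sequence[float]) -> float:
    """Success weighted by Path Length (Anderson et al., 2018).

    SPL = (1/N) * sum_i S_i * l_i / max(p_i, l_i),
    where S_i is 1 if episode i succeeded, l_i is the shortest-path
    length from start to goal, and p_i is the length of the path the
    agent actually traversed.
    """
    total = 0.0
    for s, l, p in zip(successes, shortest_path_lengths, agent_path_lengths):
        if s:
            # Successful episodes contribute the optimality ratio; failures contribute 0.
            total += l / max(p, l)
    return total / len(successes)

# Example: one success with a slightly longer-than-optimal path, one failure.
# SPL = ((10/12) + 0) / 2 ≈ 0.4167
print(spl([True, False], [10.0, 8.0], [12.0, 20.0]))
```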
Model Name | SPL | Paper Title
---------- | --- | -----------
SUSA | 0.6383 | Agent Journey Beyond RGB: Unveiling Hybrid Semantic-Spatial Environmental Representations for Vision-and-Language Navigation
Meta-Explore | 0.61 | Meta-Explore: Exploratory Hierarchical Vision-and-Language Navigation Using Scene Object Spectrum Grounding
BEV-BERT | 0.60 | BEVBert: Multimodal Map Pre-training for Language-guided Navigation
NaviLLM | 0.60 | Towards Learning a Generalist Model for Embodied Navigation
HOP | 0.59 | HOP: History-and-Order Aware Pre-training for Vision-and-Language Navigation
VLN-PETL | 0.58 | VLN-PETL: Parameter-Efficient Transfer Learning for Vision-and-Language Navigation
DUET | 0.58 | Think Global, Act Local: Dual-scale Graph Transformer for Vision-and-Language Navigation
VLN-BERT | 0.57 | A Recurrent Vision-and-Language BERT for Navigation
Prevalent | 0.51 | Towards Learning a Generic Agent for Vision-and-Language Navigation via Pre-training
RCM+SIL (no early exploration) | 0.38 | Reinforced Cross-Modal Matching and Self-Supervised Imitation Learning for Vision-Language Navigation
Seq2Seq baseline | 0.18 | Vision-and-Language Navigation: Interpreting visually-grounded navigation instructions in real environments