Video Question Answering on MVBench
Metric: Avg.

Results
Performance of various models on this benchmark.
| Model Name | Avg. | Paper Title |
| --- | --- | --- |
| LinVT-Qwen2-VL (7B) | 69.3 | LinVT: Empower Your Image-level Large Language Model to Understand Videos |
| Tarsier (34B) | 67.6 | Tarsier: Recipes for Training and Evaluating Large Video Description Models |
| InternVideo2 | 67.2 | InternVideo2: Scaling Foundation Models for Multimodal Video Understanding |
| LongVU (7B) | 66.9 | LongVU: Spatiotemporal Adaptive Compression for Long Video-Language Understanding |
| Oryx (34B) | 64.7 | Oryx MLLM: On-Demand Spatial-Temporal Understanding at Arbitrary Resolution |
| VideoLLaMA2 (72B) | 62.0 | VideoLLaMA 2: Advancing Spatial-Temporal Modeling and Audio Understanding in Video-LLMs |
| mPLUG-Owl3 (7B) | 59.5 | mPLUG-Owl3: Towards Long Image-Sequence Understanding in Multi-Modal Large Language Models |
| PPLLaVA (7B) | 59.2 | PPLLaVA: Varied Video Sequence Understanding With Prompt Guidance |
| VideoGPT+ | 58.7 | VideoGPT+: Integrating Image and Video Encoders for Enhanced Video Understanding |
| PLLaVA | 58.1 | PLLaVA: Parameter-free LLaVA Extension from Images to Videos for Video Dense Captioning |
| ST-LLM | 54.9 | ST-LLM: Large Language Models Are Effective Temporal Learners |
| VideoChat2 | 51.9 | MVBench: A Comprehensive Multi-modal Video Understanding Benchmark |
| HawkEye | 47.55 | HawkEye: Training Video-Text LLMs for Grounding Text in Videos |
| SPHINX-Plus | 39.7 | SPHINX-X: Scaling Data and Parameters for a Family of Multi-modal Large Language Models |
| TimeChat | 38.5 | TimeChat: A Time-sensitive Multimodal Large Language Model for Long Video Understanding |
| LLaVA | 36.0 | Visual Instruction Tuning |
| Video-LLaMA | 34.1 | Video-LLaMA: An Instruction-tuned Audio-Visual Language Model for Video Understanding |
| VideoChat | 35.5 | VideoChat: Chat-Centric Video Understanding |
| Video-ChatGPT | 32.7 | Video-ChatGPT: Towards Detailed Video Understanding via Large Vision and Language Models |
| InstructBLIP | 32.5 | InstructBLIP: Towards General-purpose Vision-Language Models with Instruction Tuning |
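The Avg. column is the model's mean multiple-choice accuracy over MVBench's 20 temporal task categories (Action Sequence, Object Existence, Moving Count, and so on). As a minimal sketch only, and not the official MVBench or HyperAI evaluation code, the snippet below shows how such a per-task average would typically be computed; the per-task scores used here are hypothetical placeholders, not real results.

```python
# Sketch, assuming Avg. is the unweighted mean of per-task accuracies
# across MVBench's 20 task categories. The numeric values below are
# hypothetical placeholders, not reported results for any model.

per_task_accuracy = {
    "Action Sequence": 64.0,   # hypothetical accuracy (%) on this task
    "Object Existence": 58.5,  # hypothetical accuracy (%)
    "Moving Count": 42.0,      # hypothetical accuracy (%)
    # ... the remaining MVBench task categories would be listed here ...
}

# Unweighted mean over all task categories; this is the single number
# the leaderboard sorts by.
avg = sum(per_task_accuracy.values()) / len(per_task_accuracy)
print(f"Avg. = {avg:.1f}")
```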