Antoine Yang Antoine Miech Josef Sivic Ivan Laptev Cordelia Schmid

Abstract
We consider the problem of localizing a spatio-temporal tube in a video corresponding to a given text query. This is a challenging task that requires the joint and efficient modeling of temporal, spatial and multi-modal interactions. To address this task, we propose TubeDETR, a transformer-based architecture inspired by the recent success of such models for text-conditioned object detection. Our model notably includes: (i) an efficient video and text encoder that models spatial multi-modal interactions over sparsely sampled frames and (ii) a space-time decoder that jointly performs spatio-temporal localization. We demonstrate the advantage of our proposed components through an extensive ablation study. We also evaluate our full approach on the spatio-temporal video grounding task and demonstrate improvements over the state of the art on the challenging VidSTG and HC-STVG benchmarks. Code and trained models are publicly available at https://antoyang.github.io/tubedetr.html.
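The abstract's mention of "sparsely sampled frames" can be illustrated with a minimal sketch of uniform temporal subsampling, i.e. reducing a video from its native frame rate to a lower target rate before encoding. The function name and signature here are hypothetical illustrations, not the authors' code:

```python
def sample_frame_indices(num_frames: int, native_fps: float, target_fps: float) -> list[int]:
    """Uniformly subsample frame indices from native_fps down to target_fps.

    Assumes target_fps <= native_fps. Returns the indices of the frames
    that would be kept and fed to the video encoder.
    """
    step = native_fps / target_fps  # distance between kept frames, in frames
    return [int(i * step) for i in range(int(num_frames / step))]
```

For example, subsampling a 30-frame clip recorded at 30 FPS down to 5 FPS keeps every sixth frame: `sample_frame_indices(30, 30, 5)` returns `[0, 6, 12, 18, 24]`.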
Benchmarks
| Benchmark | Methodology | Metrics |
|---|---|---|
| spatio-temporal-video-grounding-on-hc-stvg1 | TubeDETR | |
| spatio-temporal-video-grounding-on-hc-stvg2 | TubeDETR | |
| spatio-temporal-video-grounding-on-vidstg | TubeDETR | Declarative: m_vIoU 30.4, [email protected] 42.5, [email protected] 28.2; Interrogative: m_vIoU 25.7, [email protected] 35.7, [email protected] 23.2 |
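The vIoU metric reported above can be sketched as follows, under its standard definition: per-frame box IoU averaged over the temporal intersection of the predicted and ground-truth tubes, normalized by the size of their temporal union ([email protected] is then the fraction of samples with vIoU above R). The function names and the tube-as-dict representation are illustrative choices, not the benchmark's evaluation code:

```python
def box_iou(a: tuple, b: tuple) -> float:
    """IoU of two boxes in (x1, y1, x2, y2) format."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def viou(pred_tube: dict, gt_tube: dict) -> float:
    """vIoU between two tubes, each a {frame_index: box} mapping.

    Sums per-frame box IoU over frames present in both tubes
    (temporal intersection) and divides by the number of frames
    present in either tube (temporal union).
    """
    t_inter = set(pred_tube) & set(gt_tube)
    t_union = set(pred_tube) | set(gt_tube)
    if not t_union:
        return 0.0
    return sum(box_iou(pred_tube[t], gt_tube[t]) for t in t_inter) / len(t_union)
```

A tube that matches the ground truth perfectly on one frame but misses one frame on either side scores 1/3: the single overlapping frame contributes an IoU of 1.0, divided by a temporal union of three frames.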