HiCM²: Hierarchical Compact Memory Modeling for Dense Video Captioning
Minkuk Kim¹, Hyeon Bae Kim¹, Jinyoung Moon², Jinwoo Choi¹*, Seong Tae Kim¹*

Abstract
With the growing demand for solutions to real-world video challenges, interest in dense video captioning (DVC) has been on the rise. DVC involves the automatic captioning and localization of events in untrimmed videos. Several studies highlight the challenges of DVC and introduce improved methods that utilize prior knowledge, such as pre-training and external memory. In this work, we propose a model that leverages the prior knowledge of a human-oriented hierarchical compact memory, inspired by the human memory hierarchy and cognition. To mimic human-like memory recall, we construct a hierarchical memory and a hierarchical memory reading module. We build an efficient hierarchical compact memory by clustering memory events and summarizing the clusters with large language models. Comparative experiments demonstrate that this hierarchical memory recall process improves DVC performance, achieving state-of-the-art results on the YouCook2 and ViTT datasets.
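The construction recipe described above (cluster memory events, summarize each cluster with a large language model, then read the memory from coarse to fine) can be sketched roughly as follows. This is a minimal illustration that assumes the memory bank is a set of caption embeddings; the function names (`build_hierarchy`, `summarize_cluster`, `read_memory`), the use of k-means clustering, and cosine-similarity reading are assumptions for illustration, not the authors' released implementation.

```python
# Sketch of a hierarchical compact memory, assuming the memory bank is a set of
# sentence embeddings for caption "events". All names and the clustering /
# retrieval choices below are illustrative assumptions, not the paper's code.
import numpy as np
from sklearn.cluster import KMeans


def summarize_cluster(captions):
    # Placeholder for the LLM call that compresses a cluster of captions into
    # one compact summary sentence; here we just keep a few representatives.
    return " / ".join(captions[:3])


def build_hierarchy(embeddings, captions, branching=8, levels=2):
    """Cluster memory events level by level; each level stores cluster
    centroids and (LLM-style) summaries of the member captions."""
    hierarchy = []
    cur_emb, cur_txt = embeddings, captions
    for _ in range(levels):
        k = max(1, len(cur_emb) // branching)
        km = KMeans(n_clusters=k, n_init=10).fit(cur_emb)
        centroids, summaries = [], []
        for c in range(k):
            idx = np.where(km.labels_ == c)[0]
            centroids.append(cur_emb[idx].mean(axis=0))
            summaries.append(summarize_cluster([cur_txt[i] for i in idx]))
        hierarchy.append({"emb": np.stack(centroids), "txt": summaries})
        cur_emb, cur_txt = np.stack(centroids), summaries
    return hierarchy  # hierarchy[0] = finest level, hierarchy[-1] = coarsest


def read_memory(query, hierarchy, top_k=2):
    """Coarse-to-fine reading: rank the most compact summaries first, then the
    finer levels; a full model would attend from a coarse node to its children."""
    retrieved = []
    for level in reversed(hierarchy):
        sims = level["emb"] @ query / (
            np.linalg.norm(level["emb"], axis=1) * np.linalg.norm(query) + 1e-8)
        best = np.argsort(-sims)[:top_k]
        retrieved.append([level["txt"][i] for i in best])
    return retrieved
```

In the full model, the hierarchical memory reading module would descend from the selected coarse summaries to their child events before conditioning the caption decoder; the sketch simply ranks entries at every level to convey the coarse-to-fine idea.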
Benchmarks
| Benchmark | Methodology | Metrics |
|---|---|---|
| dense-video-captioning-on-vitt | HiCM² | CIDEr: 51.2, METEOR: 9.6, SODA: 0.150 |
| dense-video-captioning-on-youcook2 | HiCM² | BLEU4: 6.11, CIDEr: 71.84, F1: 32.51, METEOR: 12.80, Precision: 32.51, Recall: 32.51, SODA: 10.73 |