ELEVATER: A Benchmark and Toolkit for Evaluating Language-Augmented Visual Models
Chunyuan Li∗1♠, Haotian Liu∗2, Liunian Harold Li3, Pengchuan Zhang1, Jyoti Aneja1, Jianwei Yang1, Ping Jin1, Houdong Hu1, Zicheng Liu1, Yong Jae Lee2, Jianfeng Gao1

Abstract
Learning visual representations from natural language supervision has recently shown great promise in a number of pioneering works. In general, these language-augmented visual models demonstrate strong transferability to a variety of datasets and tasks. However, it remains challenging to evaluate the transferability of these models due to the lack of easy-to-use evaluation toolkits and public benchmarks. To tackle this, we build ELEVATER (Evaluation of Language-augmented Visual Task-level Transfer), the first benchmark and toolkit for evaluating (pre-trained) language-augmented visual models. ELEVATER is composed of three components. (i) Datasets. As downstream evaluation suites, it consists of 20 image classification datasets and 35 object detection datasets, each of which is augmented with external knowledge. (ii) Toolkit. An automatic hyper-parameter tuning toolkit is developed to facilitate model evaluation on downstream tasks. (iii) Metrics. A variety of evaluation metrics are used to measure sample-efficiency (zero-shot and few-shot) and parameter-efficiency (linear probing and full model fine-tuning). ELEVATER is a platform for Computer Vision in the Wild (CVinW), and is publicly released at https://computer-vision-in-the-wild.github.io/ELEVATER/.
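To make the zero-shot evaluation protocol concrete, the sketch below runs an off-the-shelf CLIP model (the ViT-B/32 checkpoint that also appears in the benchmark table) on a single image via the Hugging Face `transformers` API. This is a minimal illustration of the general zero-shot classification protocol that ELEVATER's sample-efficiency metrics cover, not the ELEVATER toolkit's own API; the image path and label set are hypothetical placeholders.

```python
# Minimal sketch of zero-shot image classification with a language-augmented
# visual model (CLIP). Illustrative only: uses the Hugging Face `transformers`
# CLIP API, not the ELEVATER toolkit; "cat.jpg" and the labels are placeholders.
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("cat.jpg")             # placeholder image path
class_names = ["cat", "dog", "bird"]      # placeholder label set of a downstream dataset
prompts = [f"a photo of a {c}" for c in class_names]  # simple prompt template

inputs = processor(text=prompts, images=image, return_tensors="pt", padding=True)
outputs = model(**inputs)
# logits_per_image holds image-text similarity scores; softmax gives class probabilities
probs = outputs.logits_per_image.softmax(dim=-1)
print(class_names[probs.argmax().item()])
```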
Benchmarks
| Benchmark | Methodology | Metrics |
|---|---|---|
| object-detection-on-elevater | GLIP-T | AP: 62.6 |
| object-detection-on-odinw-full-shot-35-tasks | GLIP-T | AP: 62.6 |
| zero-shot-image-classification-on-icinw | CLIP (ViT B-32) | Average Score: 56.64 |
| zero-shot-image-classification-on-odinw | GLIP (Tiny A) | Average Score: 11.4 |
| zero-shot-object-detection-on-odinw | GLIP (Tiny A) | Average Score: 11.4 |
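The "Average Score" entries above aggregate per-dataset results over the benchmark's evaluation suites (20 datasets for ICinW classification, 35 for ODinW detection). Assuming a plain arithmetic mean over per-dataset scores, which is a hedged reading of how such leaderboard numbers are typically formed, the aggregation looks like this; the dataset names and values below are hypothetical placeholders.

```python
# Sketch of aggregating per-dataset scores into a benchmark-level "Average Score".
# ASSUMPTION: a plain arithmetic mean over datasets; the values are placeholders,
# not actual ELEVATER results.
per_dataset_scores = {"cifar10": 89.8, "food101": 84.0, "eurosat": 41.1}
average_score = sum(per_dataset_scores.values()) / len(per_dataset_scores)
print(f"Average Score: {average_score:.2f}")
```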