MMR-Benchmark: Multimodal Reading Benchmark Dataset
The Multimodal Reading (MMR) benchmark comprises 550 annotated question-answer pairs spanning 11 distinct tasks that cover text, fonts, visual elements, bounding boxes, and spatial relations, each paired with ground truth and well-designed evaluation metrics.
The benchmark was published in 2024 by researchers from the State University of New York at Buffalo and Adobe Research in the paper "MMR: Evaluating Reading Ability of Large Multimodal Models".