LooGLE Long Context Understanding Ability Benchmark Dataset

This dataset is LooGLE, a benchmark proposed by the Beijing Institute for General Artificial Intelligence (BIGAI) and the Peking University Institute for Artificial Intelligence for testing and evaluating the long-context understanding capabilities of large language models (LLMs).
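For readers who want a quick look at the data, the sketch below loads LooGLE with the Hugging Face datasets library and inspects one sample. The repository identifier bigainlco/LooGLE, the task name longdep_qa, and the field names are assumptions for illustration; check the official release for the exact identifiers.

```python
# Minimal sketch: inspect the LooGLE benchmark with Hugging Face `datasets`.
# The repo id "bigainlco/LooGLE", the config "longdep_qa", and the field
# names below are assumptions; consult the official release for the real ones.
from datasets import load_dataset

dataset = load_dataset("bigainlco/LooGLE", "longdep_qa", split="test")

sample = dataset[0]
print(f"Context length (chars): {len(sample['context'])}")  # assumed field name
print(f"Question: {sample['question']}")                     # assumed field name
```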
LooGLE was used to evaluate nine of the most popular long-context LLMs and found that they perform poorly on complex long-dependency tasks such as multi-information retrieval, timeline reordering, computation, and comprehension and reasoning. Commercial models (Claude3-200k, GPT4-32k, GPT4-8k, GPT3.5-turbo-16k, LlamaIndex) achieved an average accuracy of only 40%, while open-source models (ChatGLM2-6B, LongLLaMa-3B, RWKV-4-14B-pile, LLaMA-7B-32K) reached only 10%.
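The accuracy figures above are per-task averages over model outputs. As a rough illustration of how such a headline number is computed, here is a minimal sketch of per-task exact-match accuracy; the task names and prediction/answer pairs are made up for the example and are not from the LooGLE release.

```python
# Minimal sketch of per-task exact-match accuracy, the kind of headline
# metric quoted above. `results` maps each task to (prediction, answer)
# pairs; the names and data here are illustrative, not from LooGLE.
from collections import defaultdict

def exact_match(pred: str, gold: str) -> bool:
    # Normalize whitespace and case before comparing.
    return pred.strip().lower() == gold.strip().lower()

results = {
    "multi-information retrieval": [("A", "A"), ("B", "C")],
    "timeline reordering": [("1,2,3", "1,3,2"), ("2,1,3", "2,1,3")],
}

for task, pairs in results.items():
    accuracy = sum(exact_match(p, g) for p, g in pairs) / len(pairs)
    print(f"{task}: {accuracy:.0%}")
```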
The research paper is titled "LooGLE: Can Long-Context Language Models Understand Long Contexts?" and has been accepted by ACL 2024. The co-first authors are Jiaqi Li and Mengmeng Wang of the Beijing Institute for General Artificial Intelligence (BIGAI), and the corresponding authors are Zilong Zheng, a researcher at BIGAI, and Muhan Zhang, an assistant professor at Peking University.
LooGLE addresses the shortcomings of previous datasets by providing ultra-long texts, using relatively recent documents, and including carefully designed and annotated real long-dependency tasks. The benchmark not only provides a new tool for evaluating and improving long-context LLMs, but also points to a new direction for the development of language processing technology in artificial intelligence.