Who Gets Cited Most? Benchmarking Long-Context Numerical Reasoning on Scientific Articles

arXiv:2509.21028v3

Abstract: We introduce SciTrek, a diagnostic question-answering benchmark designed to probe long-context numerical reasoning in large language models (LLMs). Existing long-context benchmarks mostly focus on simple information retrieval, rely on artificial contexts, or leave numerical reasoning unexplored. SciTrek addresses these limitations through questions that require counting, sorting, aggregating, and comparing information across multiple full-text scientific articles. Questions are automatically generated by formulating them as SQL queries over a database constructed from article metadata (titles, authors, and references), with ground-truth answers obtained via query execution. This design provides verifiable reasoning traces for fine-grained error analysis and enables efficient scaling to longer contexts with minimal human supervision. Extensive experiments on thirteen frontier open-weight and proprietary LLMs reveal that SciTrek poses a significant challenge: even the best-performing model achieves only 46.5% exact match at 128K tokens, with performance declining as the context length increases. Models particularly struggle with citation-related questions and compound logical conditions, including negation.
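The question-generation design described in the abstract can be sketched as follows. This is a minimal illustrative example, not the benchmark's actual implementation: the schema, table names, and sample data are assumptions, and only the general idea (a metadata database, a SQL query as the question, and query execution as the verifiable ground truth) comes from the abstract.

```python
import sqlite3

# Toy metadata database: papers and a citation table (names are
# illustrative assumptions, not SciTrek's actual schema).
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE papers (id INTEGER PRIMARY KEY, title TEXT);
CREATE TABLE refs (citing_id INTEGER, cited_id INTEGER);
""")
cur.executemany("INSERT INTO papers VALUES (?, ?)",
                [(1, "Paper A"), (2, "Paper B"), (3, "Paper C")])
cur.executemany("INSERT INTO refs VALUES (?, ?)",
                [(2, 1), (3, 1), (3, 2)])  # A is cited twice, B once

# A "who gets cited most?" question expressed as SQL; executing it
# against the database yields the verifiable ground-truth answer.
query = """
SELECT p.title, COUNT(*) AS n_citations
FROM refs r JOIN papers p ON p.id = r.cited_id
GROUP BY r.cited_id
ORDER BY n_citations DESC
LIMIT 1;
"""
title, n_citations = cur.execute(query).fetchone()
print(title, n_citations)  # Paper A 2
```

A model is then asked the natural-language form of the question over the full-text articles, and its answer is scored against the executed query's result; the query itself doubles as a reasoning trace for error analysis.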