
ATLAS

OpenReward Environment · Hugging Face Dataset

Description

ATLAS (A High-Difficulty, Multidisciplinary Benchmark for Frontier Scientific Reasoning) is an environment for evaluating expert-level scientific reasoning capabilities. It contains 798 tasks across 7 core fields: Mathematics, Physics, Chemistry, Biology, Computer Science, Earth Science, and Materials Science. Questions are created by PhD-level experts from 25+ institutions.

Capabilities

  • Expert-level scientific reasoning
  • Multi-disciplinary problem solving
  • Bilingual evaluation (English and Chinese)
  • PhD-level question complexity

Compute Requirements

Agents are given a standard environment with no sandbox or file system access.

License

CC BY-SA 4.0.

Tasks

There are two splits in this environment:

  • val: 301 tasks
  • test: 497 tasks

Tasks span 7 scientific fields with sub-discipline categorization.

Reward Structure

This is a single-turn environment. The agent submits an answer via the submit_answer tool. An LLM grader (GPT-5) evaluates the submission using a three-label system:

  • A (CORRECT): Exact or semantically equivalent match (±0.1 tolerance for numeric answers)
  • B (INCORRECT): Any deviation from the standard answer
  • C (INVALID): Incomplete, repetitive, or a refusal to answer

Reward is binary: 1.0 if label A, 0.0 otherwise.
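A minimal sketch of the label-to-reward mapping and the numeric tolerance described above. The function names and the tolerance check are illustrative assumptions; the actual GPT-5 grader prompt is not part of this README.

```python
def label_to_reward(label: str) -> float:
    """Map the grader's three-label verdict to the binary reward.

    A (CORRECT) -> 1.0; B (INCORRECT) and C (INVALID) -> 0.0.
    """
    return 1.0 if label == "A" else 0.0


def numerically_equivalent(submitted: float, standard: float, tol: float = 0.1) -> bool:
    """Numeric answers within ±0.1 of the standard answer count as correct."""
    return abs(submitted - standard) <= tol
```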

Data

Data consists of Parquet files (atlas_val.parquet, atlas_test.parquet) sourced from the Hugging Face dataset opencompass/ATLAS. Each row contains a question, its standard answer(s), a subject, and a sub-subject. The data is stored on the OpenReward platform.

Tools

Tool            Description
submit_answer   Submit your final answer to the scientific question. Ends the episode.
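For illustration, here is what the submit_answer tool might look like as an OpenAI-style function-calling definition. The parameter name ("answer") and the schema shape are assumptions; the README only specifies the tool name and description.

```python
# Hypothetical tool definition for submit_answer in OpenAI function-calling
# format. OpenReward's actual schema may differ.
submit_answer_tool = {
    "type": "function",
    "function": {
        "name": "submit_answer",
        "description": (
            "Submit your final answer to the scientific question. "
            "Ends the episode."
        ),
        "parameters": {
            "type": "object",
            "properties": {
                "answer": {
                    "type": "string",
                    "description": "The final answer to the question.",
                },
            },
            "required": ["answer"],
        },
    },
}
```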

Time Horizon

Single-turn. The agent reads the scientific problem and submits one answer.

Environment Difficulty

ATLAS is designed to challenge frontier AI systems with PhD-level questions across 7 scientific fields. Reported accuracies:

Model              Accuracy
GPT-5-High         42.9%
Grok 4             34.1%
DeepSeek-R1-0528   26.4%

Other Environment Requirements

An OpenAI API key is required for LLM-based grading. Pass it via secrets={"openai_api_key": "..."}.
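A sketch of building the secrets dict from the environment. The dict shape follows the README; reading the key from the OPENAI_API_KEY environment variable is a common convention, not something this README specifies.

```python
import os

# Build the secrets dict expected by the environment; the fallback string is
# a placeholder, not a real key.
secrets = {"openai_api_key": os.environ.get("OPENAI_API_KEY", "sk-placeholder")}
```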

Safety

Agents in ATLAS solve expert-level scientific problems in a standard environment. The environment does not present direct safety risks.

Citation

@article{liu2025atlas,
  title={ATLAS: A High-Difficulty, Multidisciplinary Benchmark for Frontier Scientific Reasoning},
  author={Liu, Hongwei and Liu, Junnan and Liu, Shudong and Duan, Haodong and Li, Yuqiang and Su, Mao and Liu, Xiaohong and Zhai, Guangtao and Fang, Xinyu and Ma, Qianhong and Zhang, Taolin and Ma, Zihan and Zhao, Yufeng and Zhou, Peiheng and Xiao, Linchen and Zhang, Wenlong and Zhou, Shijie and Ma, Xingjian and Sun, Siqi and Ge, Jiaye and Li, Meng and Liu, Yuhong and Dong, Jianxin and Li, Jiaying and Wu, Hui and Liang, Hanwen and Lin, Jintai and Wang, Yanting and Dong, Jie and Zhu, Tong and Fu, Tianfan and He, Conghui and Zhang, Qi and Zhang, Songyang and Bai, Lei and Chen, Kai},
  journal={arXiv preprint arXiv:2511.14366},
  year={2025}
}