MMLU-Redux-2


Description

MMLU-Redux-2 is an expanded environment for evaluating language models on MMLU with enhanced quality annotations. It contains 5,700 re-annotated questions across 57 subjects, building on the original MMLU-Redux methodology with broader coverage and improved annotation quality.

Capabilities

  • Multi-domain knowledge assessment
  • Multiple-choice question answering
  • Subject-specific evaluation across 57 academic domains

Compute Requirements

Agents are given a standard environment with no sandbox or file system access.

License

CC BY 4.0.

Tasks

There is one split in this environment:

  • test: 5,700 tasks

Questions span 57 subjects with quality annotations for each question.

Reward Structure

This is a single-turn environment. The agent submits an answer letter (A, B, C, or D) via the submit_answer tool. Validation is deterministic exact match. Reward is binary: 1.0 if correct, 0.0 if incorrect.
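The scoring rule above can be sketched as follows. This is a minimal illustration, not the platform's actual implementation; the function name and the case/whitespace normalization are assumptions.

```python
def score(submitted: str, gold: str) -> float:
    """Binary exact-match reward as described: 1.0 if the submitted
    letter matches the gold answer, 0.0 otherwise. Stripping and
    uppercasing are illustrative assumptions."""
    letter = submitted.strip().upper()
    if letter not in {"A", "B", "C", "D"}:
        return 0.0
    return 1.0 if letter == gold.strip().upper() else 0.0

print(score("C", "C"))  # 1.0
print(score("a", "C"))  # 0.0
```

Because validation is a deterministic exact match, no model-based or fuzzy grading is involved.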

Data

Data consists of a Parquet file (mmlu_redux_2.0.parquet) sourced from the Hugging Face dataset edinburgh-dawg/mmlu-redux-2.0. Each row contains a question, its answer choices, the correct answer, the subject, and a quality annotation. Data is stored on the OpenReward platform.
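The per-row structure described above can be sketched with a small validator. The column names and types below (including `error_type` for the quality annotation and an integer `answer` index) are assumptions for illustration, not the Parquet file's exact schema.

```python
# Hypothetical field names; verify against the actual Parquet schema.
EXPECTED_FIELDS = {"question", "choices", "answer", "subject", "error_type"}

def validate_row(row: dict) -> bool:
    """Check a row has the expected fields, exactly four choices,
    and an answer index that points at one of them."""
    if not EXPECTED_FIELDS <= row.keys():
        return False
    return len(row["choices"]) == 4 and row["answer"] in range(4)

example = {
    "question": "Which gas makes up most of Earth's atmosphere?",
    "choices": ["Oxygen", "Nitrogen", "Carbon dioxide", "Argon"],
    "answer": 1,
    "subject": "conceptual_physics",
    "error_type": "ok",
}
print(validate_row(example))  # True
```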

Tools

Tool            Description
submit_answer   Submit your answer choice (A, B, C, or D). Ends the episode.

Time Horizon

Single-turn. The agent reads the question and options, then submits one answer.
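One plausible way to render a question and its options for that single turn is sketched below; the layout and the closing instruction line are assumptions, not the environment's exact prompt template.

```python
def format_prompt(question: str, choices: list[str]) -> str:
    """Render a question and its four lettered options, followed by
    a reminder to answer via the submit_answer tool (layout assumed)."""
    letters = "ABCD"
    lines = [question] + [f"{l}. {c}" for l, c in zip(letters, choices)]
    lines.append("Answer with submit_answer(A|B|C|D).")
    return "\n".join(lines)

print(format_prompt("2 + 2 = ?", ["3", "4", "5", "6"]))
```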

Environment Difficulty

MMLU-Redux-2 tests multi-domain knowledge across 57 academic subjects, with difficulty varying by subject as in the original MMLU. Quality-verified annotations reduce label noise relative to the unreviewed benchmark.

Other Environment Requirements

There are no further environment requirements; MMLU-Redux-2 works out of the box with the OpenReward endpoint without any external API keys.

Safety

Agents in MMLU-Redux-2 answer multiple-choice knowledge questions in a standard environment. The environment does not present direct safety risks.

Citation

@misc{gema2024mmlu,
  title={Are We Done with MMLU?},
  author={Aryo Pradipta Gema and Joshua Ong Jun Leang and Giwon Hong and Alessio Devoto and Alberto Carlo Maria Mancino and Rohit Saxena and Xuanli He and Yu Zhao and Xiaotang Du and Mohammad Reza Ghasemi Madani and Claire Barale and Robert McHardy and Joshua Harris and Jean Kaddour and Emile van Krieken and Pasquale Minervini},
  year={2024},
  eprint={2406.04127},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}