MMLU-Redux
Description
MMLU-Redux is an environment for evaluating language models on a curated subset of MMLU (Massive Multitask Language Understanding). It contains 3,000 manually re-annotated questions across 30 subjects, with quality annotations identifying issues like wrong ground truth, ambiguous questions, and multiple correct answers.
Capabilities
- Multi-domain knowledge assessment
- Multiple-choice question answering
- Subject-specific evaluation across 30 academic domains
Compute Requirements
Agents are given a standard environment with no sandbox or file system access.
License
Tasks
There are 31 splits in this environment:
- test: 3,000 tasks (all subjects)
- test-{subject}: 30 per-subject splits (100 tasks each)
Subjects include: College Mathematics, Virology, College Chemistry, High School Mathematics, Global Facts, Formal Logic, High School Physics, Professional Law, Machine Learning, and 21 more.
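For illustration, the sketch below shows one way the per-subject split names could be derived from the subject display names; the lowercase, underscore-separated slug format is an assumption and may not match the platform's actual split identifiers.

```python
# Hypothetical helper: map a subject display name to a test-{subject}
# split identifier. The slug format is an assumption, not confirmed
# by the platform.
SUBJECTS = [
    "College Mathematics",
    "Virology",
    "College Chemistry",
    "High School Mathematics",
]

def split_name(subject: str) -> str:
    """Return the assumed test-{subject} split name for a subject."""
    return "test-" + subject.lower().replace(" ", "_")

print([split_name(s) for s in SUBJECTS])
# ['test-college_mathematics', 'test-virology', 'test-college_chemistry',
#  'test-high_school_mathematics']
```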
Reward Structure
This is a single-turn environment. The agent submits an answer letter (A, B, C, or D) via the submit_answer tool. Validation is a deterministic exact match against the ground-truth letter. Reward is binary: 1.0 if correct, 0.0 if incorrect.
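A minimal sketch of this grading logic is shown below; the function name and the letter normalization are illustrative assumptions, not the environment's actual implementation.

```python
# Sketch of the binary exact-match reward described above.
# Normalizing case and whitespace is an assumption; the environment may
# compare the raw submitted letter directly.
def grade(submitted: str, ground_truth: str) -> float:
    """Return 1.0 if the submitted letter matches the ground truth, else 0.0."""
    return 1.0 if submitted.strip().upper() == ground_truth.strip().upper() else 0.0

assert grade("B", "B") == 1.0
assert grade("c", "B") == 0.0
```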
Data
Data consists of a Parquet file (mmlu_redux.parquet) sourced from the HuggingFace dataset edinburgh-dawg/mmlu-redux. Each row contains a question, four answer choices, the correct answer index, a subject label, and a quality annotation. Data is stored on the OpenReward platform.
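A minimal sketch of inspecting the Parquet file locally with pandas; the column names below are inferred from the field description above and may differ from the file's actual schema.

```python
# Inspect the local copy of the data. Column names ("question", "choices",
# "answer", "subject", "error_type") are assumptions based on the
# description, not a documented schema.
import pandas as pd

df = pd.read_parquet("mmlu_redux.parquet")
print(len(df))                 # expected: 3000 rows
print(df.columns.tolist())     # verify the actual column names

row = df.iloc[0]
print(row["subject"], "-", row["question"])
for letter, choice in zip("ABCD", row["choices"]):
    print(f"  {letter}. {choice}")
```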
Tools
| Tool | Description |
|---|---|
| submit_answer | Submit your answer choice (A, B, C, or D). Ends the episode. |
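An illustrative submit_answer call is sketched below; the payload structure and the "answer" argument name are assumptions rather than a documented API.

```python
# Hypothetical submit_answer tool call payload. The argument name and
# overall schema are assumptions about the platform's tool interface.
tool_call = {
    "name": "submit_answer",
    "arguments": {"answer": "C"},  # one of "A", "B", "C", "D"
}
```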
Time Horizon
Single-turn. The agent reads the question and options, then submits one answer.
Environment Difficulty
MMLU-Redux evaluates multi-domain knowledge with quality-verified questions. 91% of questions are verified correct, with the remaining 9% annotated for various quality issues.
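A sketch of separating verified from flagged questions is shown below; the "error_type" column name and the "ok" label follow the upstream edinburgh-dawg/mmlu-redux dataset and are assumptions about this environment's local schema.

```python
# Split rows by quality annotation. The "error_type" column and "ok"
# label are assumptions; check the actual schema before relying on them.
import pandas as pd

df = pd.read_parquet("mmlu_redux.parquet")
verified = df[df["error_type"] == "ok"]
flagged = df[df["error_type"] != "ok"]
print(f"verified: {len(verified)} ({len(verified) / len(df):.0%})")
print(f"flagged:  {len(flagged)} ({len(flagged) / len(df):.0%})")
```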
Other Environment Requirements
There are no further environment requirements; MMLU-Redux works out of the box with the OpenReward endpoint without any external API keys.
Safety
Agents in MMLU-Redux answer multiple-choice knowledge questions in a standard environment. The environment does not present direct safety risks.
Citation
@misc{gema2024mmlu,
title={Are We Done with MMLU?},
author={Aryo Pradipta Gema and Joshua Ong Jun Leang and Giwon Hong and Alessio Devoto and Alberto Carlo Maria Mancino and Rohit Saxena and Xuanli He and Yu Zhao and Xiaotang Du and Mohammad Reza Ghasemi Madani and Claire Barale and Robert McHardy and Joshua Harris and Jean Kaddour and Emile van Krieken and Pasquale Minervini},
year={2024},
eprint={2406.04127},
archivePrefix={arXiv},
primaryClass={cs.CL}
}