MathCanvas

Description

MathCanvas is an environment for evaluating Visual Chain-of-Thought (VCoT) capabilities on multimodal mathematical reasoning tasks. It contains 3,079 problems with interleaved text and images (diagrams, graphs, geometric figures) across 8 mathematical domains. The benchmark tests models' ability to reason about visual mathematical content spanning high school to undergraduate level.

Capabilities

  • Multimodal mathematical reasoning with interleaved text and images
  • Visual Chain-of-Thought evaluation across geometry, calculus, algebra, and more
  • GPT-based flexible answer grading for equivalent mathematical expressions

Compute Requirements

Agents are given a standard environment with no sandbox or file system access.

License

Apache 2.0.

Tasks

There is one split in this environment:

  • test: 3,079 tasks

Tasks span 8 mathematical domains:

| Domain | Description |
| --- | --- |
| Algebra | Algebraic manipulation and equations |
| Analytic Geometry | Coordinate geometry and curves |
| Calculus & Vector | Differentiation, integration, vectors |
| Plane Geometry | 2D geometric reasoning |
| Solid Geometry | 3D spatial reasoning |
| Statistics | Probability and data analysis |
| Transformational Geometry | Geometric transformations |
| Trigonometry | Trigonometric functions and identities |

Reward Structure

Single-turn evaluation with LLM-graded rewards. The agent submits an answer via the submit_answer tool. The answer is graded by gpt-5-mini, which evaluates mathematical equivalence across different representations (fractions, decimals, equivalent expressions). The reward is 1.0 if the answer is correct and 0.0 otherwise.
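
A minimal sketch of what such an equivalence grader could look like, using the OpenAI Python SDK; the exact prompt and parsing used by the environment are not documented here, so the details below are illustrative only.

```python
# Illustrative grader sketch (not the environment's actual implementation).
# Assumes the OpenAI Python SDK and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

GRADER_PROMPT = (
    "You are grading a math answer. Reply with exactly CORRECT or INCORRECT.\n"
    "Treat mathematically equivalent forms (e.g. 1/2 and 0.5) as the same answer.\n"
    "Reference answer: {reference}\n"
    "Submitted answer: {submitted}"
)

def grade(reference: str, submitted: str) -> float:
    """Return 1.0 if gpt-5-mini judges the answers equivalent, else 0.0."""
    response = client.chat.completions.create(
        model="gpt-5-mini",
        messages=[{
            "role": "user",
            "content": GRADER_PROMPT.format(reference=reference, submitted=submitted),
        }],
    )
    verdict = (response.choices[0].message.content or "").strip().upper()
    return 1.0 if verdict.startswith("CORRECT") else 0.0
```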

Data

test.parquet (327 MB, 3,079 problems), sourced from the Hugging Face dataset shiwk24/MathCanvas-Bench and stored on the OpenReward platform.
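
For local inspection, the source benchmark can be pulled from the Hugging Face Hub; the split name and field layout in this sketch are assumptions, since the environment serves its own copy of test.parquet.

```python
# Sketch: exploring the source benchmark locally with the `datasets` library.
# The split name ("test") and field names are assumptions; the environment
# itself serves its own copy (test.parquet) from the OpenReward platform.
from datasets import load_dataset

bench = load_dataset("shiwk24/MathCanvas-Bench", split="test")
print(len(bench))       # expected: 3079 problems
print(bench[0].keys())  # problem text plus interleaved images
```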

Tools

| Tool | Description |
| --- | --- |
| submit_answer | Submit a mathematical answer. LLM-graded for mathematical equivalence. Ends the episode. |
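
Since the episode ends with a single tool call, the agent's final action could look roughly like the sketch below; only the tool name submit_answer is documented here, so the argument name is an assumption.

```python
# Hypothetical shape of the episode-ending tool call; only the tool name
# submit_answer is documented -- the "answer" argument name is an assumption.
final_tool_call = {
    "name": "submit_answer",
    "arguments": {"answer": r"\frac{3}{2}"},
}
```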

Time Horizon

Single-turn. The agent reads the multimodal problem (text and images) and submits one answer.

Environment Difficulty

MathCanvas evaluates multimodal mathematical reasoning with visual chain-of-thought:

| Model | Weighted Score |
| --- | --- |
| Gemini-2.5-Pro | 69.9% |
| GPT-5 | 66.5% |
| Gemini-2.5-Flash | 64.6% |
| Seed-1.6-Thinking | 60.7% |
| GLM-4.5V | 59.8% |
| Qwen3-VL-Plus | 58.9% |
| Qwen-2.5-VL-72B | 48.9% |
| Claude-Sonnet-4 | 47.6% |

Even frontier multimodal models achieve under 70% accuracy, demonstrating the challenge of visual mathematical reasoning.

Other Environment Requirements

An OpenAI API key is required for LLM-based grading. Pass it via secrets={"openai_api_key": "..."} when creating a session.
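
A minimal sketch of wiring the key in; only the secrets mapping itself is specified by this environment, and the session-creation call is a placeholder for whichever OpenReward client you use.

```python
import os

# Only the shape of the `secrets` mapping is specified by this environment;
# reading from OPENAI_API_KEY is just the conventional local setup.
secrets = {"openai_api_key": os.environ["OPENAI_API_KEY"]}

# Placeholder: substitute your OpenReward client's session-creation call.
# session = client.create_session("GeneralReasoning/MathCanvas", secrets=secrets)
```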

Safety

Agents in MathCanvas solve multimodal mathematics problems in a standard environment. The environment does not present direct safety risks.

Citation

@misc{shi2025mathcanvasintrinsicvisualchainofthought,
  title={MathCanvas: Intrinsic Visual Chain-of-Thought for Multimodal Mathematical Reasoning},
  author={Weikang Shi and Aldrich Yu and Rongyao Fang and Houxing Ren and Ke Wang and Aojun Zhou and Changyao Tian and Xinyu Fu and Yuxuan Hu and Zimu Lu and Linjiang Huang and Si Liu and Rui Liu and Hongsheng Li},
  year={2025},
  eprint={2510.14958},
  archivePrefix={arXiv}
}