BFCL

Description

BFCL is an environment for evaluating function calling capabilities. Based on the Berkeley Function Calling Leaderboard, it presents agents with function schemas and user queries, from which the agents must produce correct function calls. Tasks span single-turn, multi-turn, parallel, and live function calling scenarios across multiple programming languages.

Capabilities

  • Function calling and tool use
  • Producing correctly formatted function invocations
  • Handling parallel and multiple function calls
  • Detecting when available functions are relevant or irrelevant to a query

Compute Requirements

Agents are given a standard environment with no sandbox or file system access.

License

Apache 2.0

Tasks

One split: test. Tasks span multiple categories:

  • Single-turn: simple, irrelevance, parallel, multiple, parallel_multiple, java, javascript
  • Live: live_simple, live_multiple, live_parallel, live_parallel_multiple, live_irrelevance, live_relevance
  • Multi-turn: multi_turn_base, multi_turn_miss_func, multi_turn_miss_param, multi_turn_long_context

Reward Structure

Single-turn. The agent submits its function call response via the answer tool. An LLM grader (gpt-4.1) evaluates correctness using category-specific grading templates. The reward is binary: 1.0 if correct, 0.0 otherwise.
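
The flow above can be sketched as follows; this is a minimal illustration, assuming a hypothetical grade_single_turn helper and a generic prompt in place of the environment's category-specific grading templates:

import os
from openai import OpenAI

client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])

# Minimal sketch of the binary grading step. The real environment uses
# category-specific grading templates; this generic prompt is an assumption.
def grade_single_turn(category: str, question: str, schemas: str, submission: str) -> float:
    prompt = (
        f"Category: {category}\n"
        f"Available function schemas:\n{schemas}\n\n"
        f"User query: {question}\n"
        f"Submitted function call(s): {submission}\n\n"
        "Is the submission a correct function call for the query? "
        "Reply with exactly CORRECT or INCORRECT."
    )
    response = client.chat.completions.create(
        model="gpt-4.1",  # grader model named in the docs above
        messages=[{"role": "user", "content": prompt}],
    )
    verdict = response.choices[0].message.content.strip().upper()
    return 1.0 if verdict == "CORRECT" else 0.0  # binary reward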

Data

JSONL files sourced from the Hugging Face dataset gorilla-llm/Berkeley-Function-Calling-Leaderboard and stored on the OpenReward platform.
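
As a sketch of pulling the raw data directly from the source dataset (the filename below is an assumption; check the repo's file listing for the category you want):

import json
from huggingface_hub import hf_hub_download

# Download one category file from the source dataset. The filename is an
# assumed example; the repo contains one file per task category.
path = hf_hub_download(
    repo_id="gorilla-llm/Berkeley-Function-Calling-Leaderboard",
    filename="BFCL_v3_simple.json",  # assumption: adjust to the category you need
    repo_type="dataset",
)

# Files are JSON Lines: one task per line, each with an id, a user
# question, and the function schemas available to the agent.
with open(path) as f:
    tasks = [json.loads(line) for line in f]

print(len(tasks), "tasks loaded; first id:", tasks[0].get("id"))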

Tools

  • answer — Submit a function calling response for evaluation.
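
For illustration, a submission through the answer tool might look like the sketch below; the source names only the tool, so the argument key and the call-string format are assumptions:

# Hypothetical answer-tool invocation. Only the tool name comes from the
# docs; the "response" key and the call-string format are assumptions.
tool_call = {
    "name": "answer",
    "arguments": {
        "response": "[calculate_triangle_area(base=10, height=5)]",
    },
}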

Time Horizon

Single-turn for most categories. Multi-turn categories involve extended conversation context but still conclude with a single submission.

Environment Difficulty

Selected results from the BFCL V4 leaderboard:

Model              Overall Accuracy
GLM-4.5 (FC)       70.85%
Claude Opus 4.1    70.36%
Claude Sonnet 4    70.29%
GPT-5              59.22%

Other Environment Requirements

An OpenAI API key is required for LLM-based grading. Pass it via secrets={"openai_api_key": "..."}.
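
For example, assuming a hypothetical openreward client with a load_environment entry point (only the secrets keyword and its openai_api_key key come from the note above):

import os

from openreward import load_environment  # assumed entry point, not confirmed by the docs

# Only the secrets dict and its "openai_api_key" key are specified above;
# the loader call itself is a sketch.
env = load_environment(
    "GeneralReasoning/BFCL",
    secrets={"openai_api_key": os.environ["OPENAI_API_KEY"]},
)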

Safety

Agents in BFCL produce function call strings in a standard environment. The environment does not present direct safety risks.

Citation

@inproceedings{patil2025bfcl,
  title={The Berkeley Function Calling Leaderboard (BFCL): From Tool Use to Agentic Evaluation of Large Language Models},
  author={Patil, Shishir G. and Mao, Huanzhi and Ji, Charlie Cheng-Jie and Yan, Fanjia and Suresh, Vishnu and Stoica, Ion and Gonzalez, Joseph E.},
  booktitle={Proceedings of the 42nd International Conference on Machine Learning (ICML)},
  year={2025}
}