Gorilla BFCL Benchmark: The Hidden Battle for LLM Tool-Use Supremacy

Source: GitHub Archive, April 2026

The Berkeley Function Calling Leaderboard (BFCL), part of the Gorilla project from UC Berkeley, has become the industry's gold standard for evaluating LLMs on function calling—the ability to correctly select and execute API calls based on natural language instructions. Unlike generic benchmarks that test knowledge or reasoning, BFCL focuses on the practical, engineering-critical skill of tool use, which is the backbone of modern AI agents. The benchmark comprises thousands of test cases covering simple, multiple, parallel, and nested function calls across diverse domains like search, data retrieval, and cloud services. It evaluates models on exact match accuracy, argument correctness, and hallucination rates.

As of early 2026, the leaderboard shows a fierce race among frontier models from OpenAI, Anthropic, Google, and Meta, with surprising performances from specialized fine-tuned models like Gorilla-OpenFunctions and Nous Hermes. The significance of BFCL cannot be overstated: it directly correlates with the reliability of AI agents in production, influencing everything from autonomous coding assistants to enterprise automation platforms.

Our analysis reveals that while top models achieve over 85% accuracy on simple calls, performance drops sharply on nested and multi-turn scenarios, exposing a critical bottleneck for real-world agent deployment. The benchmark is not just a scoreboard—it is a diagnostic tool that shapes how developers choose models for their agentic workflows.

Technical Deep Dive

The BFCL benchmark is engineered to stress-test the function calling capabilities of LLMs in a systematic, reproducible manner. At its core, it defines a set of function specifications—complete with names, parameters, types, and descriptions—that models must parse and invoke correctly. The test suite is divided into several categories:

- Simple Function Call: One function, one invocation.
- Multiple Function Call: Multiple independent functions called in a single turn.
- Parallel Function Call: Multiple functions that can be called simultaneously (no interdependencies).
- Nested Function Call: One function's output becomes another's input, requiring multi-step reasoning.
- Relevance Detection: Determining whether a function call is needed at all (rejecting irrelevant queries).

Each category includes variations in parameter types (strings, integers, enums, objects), optional parameters, and edge cases like empty strings or null values. The evaluation metric is primarily exact match accuracy on the entire function call structure, including parameter values, which is far stricter than token-level metrics.
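To make the strictness of exact-match scoring concrete, here is a minimal sketch of what this kind of comparison looks like. The function names and call structures are illustrative stand-ins, not the official BFCL harness:

```python
def exact_match(predicted: dict, expected: dict) -> bool:
    """Strict structural comparison of a predicted function call against
    the ground truth: the function name and every argument value must
    match exactly (illustrative sketch, not the official BFCL harness)."""
    return (
        predicted.get("name") == expected.get("name")
        and predicted.get("arguments") == expected.get("arguments")
    )

# A single wrong argument value fails the entire call.
expected = {"name": "get_weather", "arguments": {"city": "Berkeley", "unit": "celsius"}}
good = {"name": "get_weather", "arguments": {"city": "Berkeley", "unit": "celsius"}}
bad = {"name": "get_weather", "arguments": {"city": "Berkeley", "unit": "fahrenheit"}}

print(exact_match(good, expected))  # True
print(exact_match(bad, expected))   # False
```

Because the comparison is on the whole structure, a model that picks the right function but guesses one enum value wrong scores zero for that case—far stricter than token-level similarity metrics.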

A key technical insight is the benchmark's use of canonicalized function signatures. To avoid bias from model-specific formatting, all function definitions are converted into a standardized JSON schema before evaluation. This ensures that a model's ability to understand the schema, not its familiarity with a particular API style, is what is being measured.
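The idea behind canonicalization can be sketched as follows: derive a provider-neutral, JSON-schema-style definition from a typed function signature, so every model sees the same structure. This is an illustration of the concept using Python's `inspect` module, not Gorilla's actual converter:

```python
import inspect

# Map Python type annotations to JSON-schema type names (illustrative subset).
_TYPE_MAP = {str: "string", int: "integer", float: "number", bool: "boolean"}

def canonicalize(func) -> dict:
    """Convert a typed Python function into a standardized JSON-schema-style
    tool definition, so what gets measured is schema understanding rather
    than familiarity with any provider-specific API format."""
    sig = inspect.signature(func)
    properties, required = {}, []
    for name, param in sig.parameters.items():
        properties[name] = {"type": _TYPE_MAP.get(param.annotation, "string")}
        if param.default is inspect.Parameter.empty:
            required.append(name)  # parameters without defaults are required
    return {
        "name": func.__name__,
        "description": inspect.getdoc(func) or "",
        "parameters": {"type": "object", "properties": properties, "required": required},
    }

def get_weather(city: str, unit: str = "celsius") -> dict:
    """Return current weather for a city."""

schema = canonicalize(get_weather)
```

Every model under evaluation then receives the same `schema` dict, regardless of whether its native tool format resembles OpenAI's, Anthropic's, or something else.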

The Gorilla project itself, from which BFCL emerged, is an open-source effort led by researchers at UC Berkeley, including Shishir Patil. The project's GitHub repository (ShishirPatil/gorilla) has garnered over 15,000 stars and includes not only the benchmark but also a fine-tuned model series (Gorilla-OpenFunctions) designed specifically for function calling. The repository provides scripts for generating new test cases, evaluating models locally, and submitting results to the leaderboard.

Data Table: BFCL Performance of Leading Models (as of Q1 2026)

| Model | Simple | Multiple | Parallel | Nested | Relevance | Overall |
|---|---|---|---|---|---|---|
| GPT-4o (2025-11-20) | 97.2% | 94.1% | 91.8% | 78.5% | 99.0% | 92.1% |
| Claude 3.5 Opus | 96.5% | 93.0% | 90.2% | 76.1% | 98.5% | 90.9% |
| Gemini 2.0 Pro | 95.8% | 91.4% | 88.9% | 72.3% | 97.8% | 89.2% |
| Llama 4 70B | 93.1% | 88.7% | 85.4% | 65.2% | 96.4% | 85.8% |
| Gorilla-OpenFunctions v3 | 96.0% | 92.5% | 89.1% | 74.8% | 98.2% | 90.1% |
| Nous Hermes 2 Mixtral | 91.2% | 85.3% | 81.0% | 61.5% | 95.1% | 82.8% |

Data Takeaway: The table reveals a clear hierarchy: frontier proprietary models (GPT-4o, Claude 3.5) lead across all categories, but the gap is narrowest on simple calls and widest on nested calls. Nested function calling remains the hardest challenge, with even the best models scoring below 80%. This suggests that current LLMs struggle with multi-step reasoning chains where the output of one API call must inform the next—a critical capability for complex agent workflows. The strong performance of Gorilla-OpenFunctions, a specialized fine-tuned model, shows that domain-specific training can close the gap with general-purpose giants.
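The difficulty of the nested category is easiest to see in miniature. In the following sketch (the tools and data are invented for illustration), the second call's argument only exists after the first call returns, so a model must plan a sequential chain rather than emit both calls at once:

```python
# Hypothetical tool implementations standing in for real backend APIs.
def find_user_id(email: str) -> str:
    return {"alice@example.com": "u-1001"}.get(email, "unknown")

def get_order_history(user_id: str) -> list:
    return {"u-1001": ["order-1", "order-2"]}.get(user_id, [])

# A nested-call test case: step 2 depends on step 1's output, so the
# model cannot parallelize—it must reason across the intermediate result.
step1 = find_user_id("alice@example.com")
step2 = get_order_history(step1)
```

Any error in the first step (wrong function, wrong argument, or misreading the returned value) propagates into the second, which is one reason nested accuracy lags simple-call accuracy by roughly twenty points in the table above.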

Key Players & Case Studies

The BFCL leaderboard has become a battleground for model providers, each vying for top position to signal their agent-readiness. The key players include:

- OpenAI: Their GPT-4o series consistently tops the charts, benefiting from extensive training on API documentation and tool-use scenarios. OpenAI has made function calling a first-class feature in their API, with dedicated system messages and structured output modes.
- Anthropic: Claude 3.5 Opus is a close second, with particular strength in relevance detection (knowing when *not* to call a function). Anthropic's emphasis on safety and reliability translates well to this benchmark.
- Google DeepMind: Gemini 2.0 Pro shows competitive performance, especially on parallel calls, leveraging its native understanding of structured data from Google's ecosystem.
- Meta: Llama 4 70B is the strongest open-weight contender, but still lags behind proprietary models on complex scenarios. Meta has been investing heavily in fine-tuning for tool use, releasing specialized versions like Llama-4-Tool.
- Mistral AI: Their Mixtral 8x22B model, fine-tuned by the community (e.g., Nous Hermes), offers a cost-effective alternative but trails on nested calls.
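As a point of reference for the "first-class feature" framing above, a tool definition in the OpenAI Chat Completions `tools` format looks roughly like this. The shape follows the public API's JSON-schema convention; `search_flights` itself is a hypothetical example, not a real endpoint:

```python
import json

# A tool definition in the OpenAI Chat Completions "tools" format: a
# JSON-schema description the model reads to decide when and how to
# call the function. search_flights is a hypothetical stand-in.
flight_tool = {
    "type": "function",
    "function": {
        "name": "search_flights",
        "description": "Search for flights between two airports on a date.",
        "parameters": {
            "type": "object",
            "properties": {
                "origin": {"type": "string", "description": "IATA code, e.g. SFO"},
                "destination": {"type": "string", "description": "IATA code, e.g. JFK"},
                "date": {"type": "string", "description": "YYYY-MM-DD"},
            },
            "required": ["origin", "destination", "date"],
        },
    },
}

# Passed as tools=[flight_tool] in a chat completion request, the model
# can respond with a structured tool call instead of free text.
print(json.dumps(flight_tool, indent=2))
```

BFCL's canonicalized schemas are designed so that a model's score does not hinge on having memorized any one provider's variant of this envelope.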

Beyond model providers, the benchmark is used by a growing ecosystem of agent frameworks and platforms:

- LangChain: Uses BFCL as one of its primary evaluation metrics for selecting underlying LLMs in its agent orchestration library.
- AutoGPT: The open-source autonomous agent project benchmarks its model choices against BFCL to ensure reliable tool execution.
- Vercel AI SDK: Integrates BFCL-like evaluation in its testing suite for AI-powered application development.
- Copilot (GitHub): Microsoft's coding assistant relies on function calling for code generation and API integration, indirectly benefiting from BFCL-driven improvements.

Data Table: Model Cost vs. BFCL Performance

| Model | Overall BFCL Score | Cost per 1M Input Tokens | Cost per 1M Output Tokens |
|---|---|---|---|
| GPT-4o (2025-11-20) | 92.1% | $5.00 | $15.00 |
| Claude 3.5 Opus | 90.9% | $3.00 | $15.00 |
| Gemini 2.0 Pro | 89.2% | $2.50 | $10.00 |
| Llama 4 70B (self-hosted) | 85.8% | ~$0.30 (compute) | ~$0.30 (compute) |
| Gorilla-OpenFunctions v3 | 90.1% | $1.00 (via API) | $3.00 (via API) |

Data Takeaway: The cost-performance trade-off is stark. Llama 4 70B offers the lowest cost but a 6-percentage-point gap in overall accuracy, which can translate to significantly higher error rates in production agent systems. Gorilla-OpenFunctions v3 provides a sweet spot: near-frontier performance at a fraction of the cost, making it attractive for high-volume, latency-sensitive applications. This explains the rapid adoption of specialized fine-tuned models in the agent ecosystem.
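The claim that a few points of accuracy translate into materially more production errors can be made concrete with back-of-the-envelope arithmetic, using the overall scores from the table above (the call volume is an arbitrary illustration):

```python
# Expected failed calls per million agent invocations, using the
# overall BFCL scores reported in the table above.
models = {
    "GPT-4o": 0.921,
    "Llama 4 70B": 0.858,
}
calls = 1_000_000
for name, accuracy in models.items():
    failures = round(calls * (1 - accuracy))
    print(f"{name}: ~{failures:,} failed calls per {calls:,}")
```

At a million calls, the 6.3-point gap between the two models amounts to roughly 63,000 additional failures—each one a potential retry, escalation, or silent error in an agent pipeline, which is where self-hosting savings can evaporate.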

Industry Impact & Market Dynamics

The BFCL benchmark is reshaping the AI industry in several profound ways:

1. Agent Reliability Becomes a Product Differentiator: As companies race to deploy AI agents for customer support, code generation, and enterprise automation, the ability to reliably call APIs is non-negotiable. BFCL scores are increasingly cited in product documentation and marketing materials, similar to how MMLU scores were used for general knowledge.

2. Shift from General to Specialized Models: The strong performance of Gorilla-OpenFunctions demonstrates that fine-tuning for function calling can yield outsized gains. This is driving investment in domain-specific models, particularly for verticals like healthcare (EHR API calls), finance (trading APIs), and cloud infrastructure (AWS/GCP/Azure SDKs).

3. Open-Source Models Catch Up: The gap between open-weight models (Llama 4, Mixtral) and proprietary ones is narrowing on simple and multiple calls, but remains significant on nested calls. This creates a market opportunity for companies that can offer cost-effective fine-tuning services or inference optimizations for open models.

4. New Evaluation Tooling Emerges: The complexity of BFCL has spawned a cottage industry of evaluation platforms. Offerings such as LangSmith (from the LangChain team), Weights & Biases, and Arize AI now provide automated BFCL-style testing as part of their LLM observability suites, helping developers benchmark their custom agents.

5. Enterprise Adoption Accelerates: According to internal estimates from major cloud providers, the number of production deployments using function calling has grown 300% year-over-year since 2024. The BFCL benchmark provides a common language for procurement teams to evaluate model suitability for agentic use cases.

Data Table: Market Growth of Function Calling-Related Services

| Year | Estimated Number of Production Agent Deployments | Average BFCL Score of Deployed Models | Market Spend on Function Calling APIs |
|---|---|---|---|
| 2024 | 50,000 | 78% | $200M |
| 2025 | 200,000 | 85% | $800M |
| 2026 (projected) | 600,000 | 90% | $2.5B |

Data Takeaway: The market is scaling rapidly, with a clear correlation between deployment growth and improving BFCL scores. This suggests that as models become more reliable at function calling, developers gain confidence to deploy more complex agents, creating a virtuous cycle. The projected $2.5B spend in 2026 underscores that function calling is not just a research curiosity—it is a core revenue driver for AI infrastructure companies.

Risks, Limitations & Open Questions

Despite its influence, the BFCL benchmark has several limitations that warrant scrutiny:

- Synthetic Test Cases: The benchmark's test cases are generated programmatically, not drawn from real user interactions. This can lead to overfitting—models that perform well on BFCL may still fail on real-world API calls with ambiguous or poorly documented endpoints.
- Static Evaluation: BFCL evaluates single-turn or limited multi-turn scenarios. Real-world agents often require dozens of sequential function calls with state management, error recovery, and user confirmation. The benchmark does not capture these dynamics.
- API Diversity: The test suite covers a limited set of API patterns. It does not include streaming APIs, webhook-based calls, or authentication-heavy workflows (OAuth, API keys), which are common in production.
- Hallucination in Parameters: While BFCL measures exact match accuracy, it does not penalize models for hallucinating plausible but incorrect parameter values (e.g., calling a function with a valid-looking but non-existent user ID). This is a critical safety issue for production agents.
- Benchmark Gaming: As BFCL gains prominence, there is a risk of models being fine-tuned specifically to the benchmark's test distribution, inflating scores without improving real-world capability. The Gorilla team has taken steps to rotate test cases, but the cat-and-mouse game is ongoing.

Open questions remain: How should we evaluate function calling in multi-agent systems where agents delegate tasks to each other? Can we develop benchmarks that test robustness to API changes (versioning, deprecation)? And most importantly, how do we ensure that high BFCL scores translate to safe, reliable agent behavior in high-stakes domains like healthcare and finance?

AINews Verdict & Predictions

The Berkeley Function Calling Leaderboard is more than a benchmark—it is a canary in the coal mine for the AI agent revolution. Our editorial judgment is clear: function calling capability is the single most important metric for production AI agents today, and BFCL is the best tool we have to measure it.

Predictions for the next 18 months:

1. BFCL scores will become a standard line item in model cards, alongside MMLU, HumanEval, and safety evaluations. Enterprises will demand minimum BFCL scores (e.g., 85% overall) before approving models for agentic use cases.

2. Nested function calling will be the next frontier of LLM research. Expect to see specialized architectures (e.g., chain-of-thought with explicit state tracking) that push nested accuracy above 90% by late 2026. Models that fail here will be relegated to simple chatbot duties.

3. The open-source community will produce a model that matches GPT-4o on BFCL within 12 months, likely through a combination of synthetic data generation and reinforcement learning from human feedback (RLHF) on function calling tasks. This will democratize agent development.

4. A new benchmark, BFCL-2, will emerge that incorporates multi-agent scenarios, error recovery, and real API latency constraints. The Gorilla team is already hinting at this evolution.

5. Regulatory attention will increase: As agents powered by function calling handle financial transactions, medical records, and legal documents, regulators will look to benchmarks like BFCL as evidence of reliability. Models that cannot demonstrate high BFCL performance may face compliance hurdles.

What to watch next: The upcoming release of Gorilla-OpenFunctions v4, which promises to integrate real-time API documentation retrieval (RAG) into the function calling pipeline. If successful, this could set a new standard for how models handle dynamic, ever-changing API ecosystems. Developers should also monitor the Nous Research and Mistral communities for open-weight models that challenge the proprietary leaders on cost-adjusted BFCL performance.

In conclusion, the BFCL benchmark is not just a scoreboard—it is a blueprint for the future of AI agent reliability. The models that master function calling will power the next generation of autonomous systems, and those that don't will be left behind.
