Technical Deep Dive
The second wave of API openness is not a simple rehash of the 2011-era API boom. The core technical driver is the emergence of LLMs as reasoning engines that require external tools to interact with the world. This is enabled by a new architectural pattern: the function-calling or tool-use paradigm.
At its simplest, an LLM is given a list of available API functions, each described by a JSON schema (name, parameters, description). When the model determines that a user's request requires an action (e.g., "book a flight to Tokyo"), it outputs a structured JSON object specifying which function to call and with what arguments. The host application then executes that API call and returns the result to the model, which incorporates it into its response. This loop—reason, call, observe, respond—is the foundation of agentic workflows.
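The loop described above can be sketched in a few lines. This is a minimal illustration, not a real provider SDK: the "model" is a hard-coded stand-in for an LLM, and `search_flights` is a hypothetical tool.

```python
from typing import Optional

def search_flights(destination: str, date: str) -> dict:
    # Hypothetical tool; a real agent would make an HTTP call here.
    return {"destination": destination, "date": date, "price_usd": 820}

TOOLS = {"search_flights": search_flights}

def mock_model(user_request: str, observation: Optional[dict]) -> dict:
    # Turn 1: the model decides a tool is needed and emits structured JSON.
    if observation is None:
        return {"type": "tool_call", "name": "search_flights",
                "arguments": {"destination": "Tokyo", "date": "2025-06-01"}}
    # Turn 2: the model folds the tool result into a final answer.
    return {"type": "answer",
            "text": f"Found a flight to {observation['destination']} "
                    f"for ${observation['price_usd']}."}

def agent_loop(user_request: str) -> str:
    observation = None
    while True:
        step = mock_model(user_request, observation)   # reason
        if step["type"] == "answer":
            return step["text"]                        # respond
        fn = TOOLS[step["name"]]
        observation = fn(**step["arguments"])          # call + observe

print(agent_loop("Book a flight to Tokyo"))
```

Real systems add retries, multi-turn planning, and safety checks around this loop, but the reason-call-observe-respond skeleton is the same.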
Key Technical Components:
1. Function Calling APIs: OpenAI's GPT-4o, Anthropic's Claude 3.5, and Google's Gemini all support native function calling. The model is trained to output structured JSON for tool invocation. The quality of the function schema (clear descriptions, precise parameter types) directly impacts the model's accuracy.
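To make "quality of the function schema" concrete, here is what a well-described tool looks like. The layout follows the common OpenAI-style JSON-Schema format; exact field names vary slightly by provider, and the tool itself is illustrative.

```python
# Clear descriptions and precise parameter types are all the model
# "sees" when deciding whether and how to call this tool.
flight_tool = {
    "name": "search_flights",
    "description": "Search for one-way flights between two airports "
                   "on a given date. Returns fares sorted by price.",
    "parameters": {
        "type": "object",
        "properties": {
            "origin": {"type": "string",
                       "description": "IATA airport code, e.g. 'SFO'"},
            "destination": {"type": "string",
                            "description": "IATA airport code, e.g. 'HND'"},
            "date": {"type": "string",
                     "description": "Departure date, ISO 8601 (YYYY-MM-DD)"},
        },
        "required": ["origin", "destination", "date"],
    },
}
```

A vague description ("Searches flights") or an untyped parameter forces the model to guess, which shows up directly as the accuracy degradation discussed below.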
2. Agent Orchestration Frameworks: Frameworks like LangChain, AutoGPT, and Microsoft's Semantic Kernel provide the scaffolding for managing multi-step agent loops, including memory, planning, and error handling. LangChain (over 100k stars on GitHub) is the most widely adopted, offering integrations with hundreds of APIs and tools. Another notable project is CrewAI (over 25k stars), which focuses on multi-agent collaboration.
3. API Standardization: The OpenAPI Specification (formerly Swagger) is becoming the de facto standard for describing RESTful APIs. Tools like OpenAPI-to-GraphQL and APIMatic help convert legacy APIs into formats that LLMs can more easily consume. The key challenge is that many existing APIs were designed for human developers, not machine agents—they lack clear, unambiguous documentation and consistent error handling.
4. Authentication & Authorization: OAuth 2.0 remains the standard, but agentic workflows introduce new challenges. An AI agent may need to act on behalf of a user for multiple steps, requiring long-lived tokens or delegated authorization. Services like Auth0 and Okta are developing agent-specific authentication flows.
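The long-lived-session problem above usually reduces to keeping an access token fresh via the OAuth 2.0 refresh_token grant (RFC 6749, section 6). Below is a minimal sketch under that assumption; `refresh_fn` stands in for a POST to the provider's token endpoint, and all names are illustrative.

```python
import time

class DelegatedToken:
    """Keep an access token fresh across a long-running agent session."""

    def __init__(self, access_token, refresh_token, expires_at, refresh_fn):
        self.access_token = access_token
        self.refresh_token = refresh_token
        self.expires_at = expires_at      # unix timestamp
        self.refresh_fn = refresh_fn      # stand-in for the token endpoint

    def get(self, skew=60):
        # Refresh ahead of expiry so an in-flight agent step never
        # fails mid-call with a 401.
        if time.time() >= self.expires_at - skew:
            tok = self.refresh_fn(self.refresh_token)
            self.access_token = tok["access_token"]
            self.refresh_token = tok.get("refresh_token", self.refresh_token)
            self.expires_at = time.time() + tok["expires_in"]
        return self.access_token

# Fake token endpoint for demonstration only.
def fake_refresh(refresh_token):
    return {"access_token": "new-token", "expires_in": 3600}

tok = DelegatedToken("old-token", "r1", expires_at=0, refresh_fn=fake_refresh)
print(tok.get())  # token was expired, so the refreshed one is returned
```

Real agent flows layer scope narrowing and revocation on top of this, which is exactly where the emerging agent-specific schemes differ from plain OAuth.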
Benchmark Data: Function Calling Accuracy
| Model | Function Calling Accuracy (Simple) | Function Calling Accuracy (Complex) | Average Latency (per call) |
|---|---|---|---|
| GPT-4o | 94.2% | 82.1% | 1.2s |
| Claude 3.5 Sonnet | 93.8% | 80.5% | 1.5s |
| Gemini 1.5 Pro | 91.5% | 76.3% | 1.8s |
| Llama 3 70B (local) | 87.1% | 68.9% | 3.4s |
Data Takeaway: While frontier models achieve high accuracy on simple function calls, complex multi-step workflows (e.g., booking a flight that requires checking multiple dates and comparing prices) still see significant degradation. Latency is also a critical factor—each API call adds a round-trip delay, making real-time agentic applications challenging.
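The latency point compounds quickly. A back-of-envelope calculation using the per-call averages from the table above, applied to a hypothetical five-step booking workflow (check three dates, compare, book):

```python
# Per-call latencies from the benchmark table; workflow length is assumed.
latencies = {"GPT-4o": 1.2, "Claude 3.5 Sonnet": 1.5,
             "Gemini 1.5 Pro": 1.8, "Llama 3 70B (local)": 3.4}
steps = 5
for model, per_call in latencies.items():
    print(f"{model}: {steps * per_call:.1f}s of model latency alone")
```

Even before counting the external API round-trips themselves, the fastest model spends 6 seconds on a five-step task, which is why real-time agentic UX remains hard.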
Key Players & Case Studies
The second wave is being driven by a mix of established platforms and startups that are rethinking their API strategy.
1. OpenAI & the ChatGPT Plugin Ecosystem (Now Deprecated but Influential)
OpenAI's ChatGPT plugins, launched in March 2023, were the first major attempt to create an agentic API marketplace. Developers could register their APIs as plugins, and ChatGPT would automatically invoke them when relevant. While the plugin system was deprecated in April 2024 in favor of the GPT Store and native function calling, it proved the concept. Services like Expedia, Kayak, and Zapier saw millions of API calls from AI agents. The key lesson: the API must be designed for zero-shot discovery—the model must understand what the API does without human intervention.
2. Zapier's AI-Powered Automation
Zapier, the king of no-code automation, has pivoted hard into AI. Its Zapier Central product allows users to create AI agents that connect to over 6,000 apps via its existing API integrations. Zapier's advantage is its massive library of pre-built connectors, each with clear action schemas. The company reports that AI-triggered automations now account for over 20% of new Zaps created monthly.
3. Stripe's API-First Approach
Stripe has long been the gold standard for API design. Its payment APIs are now being consumed by AI agents for tasks like invoicing, subscription management, and fraud detection. Its Stripe Connect platform is particularly relevant, as it allows agents to handle marketplace payments on behalf of users. The company has published best practices for building APIs that are "agent-friendly," emphasizing idempotency keys, clear error messages, and pagination.
4. Notion's AI Integration
Notion has opened its API to allow AI agents to create, edit, and search documents. The Notion API is used by agents like Mem and Taskade to automatically populate project databases, summarize meeting notes, and generate reports. Notion's API documentation includes explicit examples of function calling schemas for LLMs.
Comparison: API Design for Humans vs. Agents
| Aspect | Human-Focused API | Agent-Focused API |
|---|---|---|
| Documentation | Prose, examples, tutorials | Machine-readable OpenAPI spec, JSON schemas |
| Error Handling | Human-readable error messages | Structured error codes with machine-readable details |
| Rate Limiting | Fixed limits per API key | Dynamic limits based on agent identity and task priority |
| Authentication | OAuth with user consent | Delegated authorization with long-lived tokens |
| Response Format | Flexible, may include HTML | Strict JSON with consistent field types |
Data Takeaway: The shift from human-focused to agent-focused API design is not optional. Platforms that fail to optimize their APIs for machine consumption will be bypassed by agents that prefer cleaner, more predictable interfaces.
Industry Impact & Market Dynamics
The second wave of API openness is reshaping competitive dynamics across multiple industries.
1. The Rise of the "API-as-Product" Model
Companies like Twilio, Stripe, and Plaid have long operated on an API-as-product model. Now, this model is expanding to traditional SaaS. For example, Salesforce is aggressively promoting its MuleSoft API platform, which allows AI agents to access CRM data. Shopify has opened its GraphQL API to enable AI-powered store management. The market for API management platforms is projected to grow from $5.1 billion in 2024 to $13.9 billion by 2029, according to industry estimates.
2. The Emergence of "Agent Marketplaces"
Just as the first API wave gave rise to app marketplaces (e.g., Salesforce AppExchange, Shopify App Store), the second wave is spawning agent marketplaces. OpenAI's GPT Store is the most prominent, but others like AgentGPT and AutoGPT's plugin store are emerging. These marketplaces allow developers to publish agents that consume specific APIs, creating a new distribution channel for API providers.
3. The Threat to Traditional SaaS Interfaces
If AI agents can perform tasks directly via APIs, the traditional graphical user interface (GUI) becomes less critical. This threatens companies whose primary value is their UI, not their underlying data or logic. For example, a travel booking agent that calls Expedia's API directly could bypass Expedia's website entirely, capturing the user's intent before they ever visit the site. This is a direct challenge to the "front-end moat" that many SaaS companies have built.
Market Size & Growth Data
| Segment | 2024 Market Size | 2029 Projected Size | CAGR |
|---|---|---|---|
| API Management Platforms | $5.1B | $13.9B | 22.3% |
| AI Agent Platforms | $3.2B | $18.4B | 41.7% |
| API Security | $1.8B | $4.7B | 21.1% |
| Function-as-a-Service (FaaS) | $12.5B | $30.1B | 19.2% |
Data Takeaway: The API management and AI agent platform markets are growing at explosive rates, reflecting the convergence of these two trends. The FaaS market, which underpins many serverless API backends, is also growing steadily.
Risks, Limitations & Open Questions
Despite the promise, the second wave of API openness faces significant challenges.
1. Security & Abuse
APIs that allow AI agents to execute actions are inherently more dangerous than read-only data APIs. A poorly designed agent could accidentally delete data, make unauthorized purchases, or leak sensitive information. The OAuth 2.0 Device Authorization Grant (device flow) is being explored for agent authentication, but it is not yet widely adopted. In 2024, a security researcher demonstrated a proof-of-concept attack where an agent was tricked into calling a malicious API endpoint, leading to data exfiltration.
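A common guardrail against the kind of attack described above is to validate every outbound tool call against an explicit host allowlist before executing it. A minimal sketch (hostnames are illustrative):

```python
from urllib.parse import urlparse

# Only hosts the operator has explicitly approved may be called.
ALLOWED_HOSTS = {"api.example-flights.com", "api.example-payments.com"}

def is_allowed(url: str) -> bool:
    host = urlparse(url).hostname or ""
    return host in ALLOWED_HOSTS

print(is_allowed("https://api.example-flights.com/v1/search"))  # True
print(is_allowed("https://evil.example.net/exfiltrate"))        # False
```

This does not stop prompt injection itself, but it bounds the blast radius: a tricked agent can only reach endpoints the operator already trusts.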
2. Reliability & Error Handling
LLMs are probabilistic, and their function calling is not always accurate. A model might call the wrong API, pass incorrect parameters, or fail to handle an error response. This is particularly problematic in high-stakes domains like healthcare or finance. Companies like Fixie.ai and Vellum are building observability tools specifically for agentic workflows, but the field is nascent.
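One practical mitigation is to validate model-emitted arguments against the tool's schema before executing anything, rather than trusting the LLM's output. Production systems typically use a full JSON Schema validator; the hand-rolled check below is a sketch covering required fields and basic types.

```python
# Hypothetical schema for a flight-search tool.
SCHEMA = {"required": {"destination": str, "date": str}}

def validate_call(args: dict, schema: dict) -> list:
    """Return a list of validation errors; empty means the call is safe."""
    errors = []
    for field, ftype in schema["required"].items():
        if field not in args:
            errors.append(f"missing field: {field}")
        elif not isinstance(args[field], ftype):
            errors.append(f"{field}: expected {ftype.__name__}")
    return errors

good = {"destination": "Tokyo", "date": "2025-06-01"}
bad = {"destination": "Tokyo", "date": 20250601}   # model emitted an int

print(validate_call(good, SCHEMA))  # []
print(validate_call(bad, SCHEMA))   # ['date: expected str']
```

On a validation failure, the error list can be fed back to the model as an observation, giving it a chance to correct the call instead of executing a malformed one.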
3. The "Cold Start" Problem
For an API to be useful to an AI agent, it must be well-documented and have a clear schema. Many legacy APIs lack this. The effort required to retrofit existing APIs for agent consumption is substantial, and many companies may not prioritize it.
4. Vendor Lock-In
As platforms make their APIs more agent-friendly, they also create a new form of lock-in. An agent that is deeply integrated with Stripe's payment APIs, for example, would be costly to migrate to a competitor. This could lead to a new era of "API moats" that are even harder to break than traditional platform lock-in.
AINews Verdict & Predictions
The second wave of API openness is not a trend—it is a structural shift in how software is built and consumed. Here are our key predictions:
1. By 2027, the majority of new SaaS products will launch with an "agent-first" API as their primary interface, the GUI relegated to a secondary role: a surface for debugging and human oversight rather than the primary interaction point.
2. A new standard for agent-to-API authentication will emerge, likely based on the OAuth 2.0 Device Authorization Grant with extensions for long-running agent sessions. This will be critical for enterprise adoption.
3. The first major security incident involving an AI agent abusing an API will occur within 18 months, triggering a wave of regulation and insurance requirements for agentic workflows. This will slow adoption in regulated industries but accelerate it in others.
4. The biggest winners will be platforms that own both a popular LLM and a rich set of APIs—think OpenAI + Microsoft Graph API, or Google Gemini + Google Workspace APIs. The integration between model and API will become a key competitive moat.
5. The biggest losers will be middle-layer SaaS companies whose primary value is a UI that sits on top of commodity APIs. They will be disintermediated by agents that call the underlying APIs directly.
The second API wave is here. The question is not whether to participate, but how quickly you can redesign your API for an agentic world.