Are Your APIs Ready for AI? Preparing Your Landscape for Intelligent Consumption
- Erik Wilde
- Head of Enterprise Strategy, Jentic
Getting APIs to work with AI has become one of the major themes in the API space recently. That is not surprising, because APIs are at the core of an AI's ability to reach out into the world: to access data and information, and to invoke commands and workflows in order to act. This is what APIs have always been for, but in this article we will dive a little deeper into what that evolution looks like, and what it means for API governance and management.
Specifically, we want to go deeper than the current excitement around the Model Context Protocol (MCP). MCP is relevant and exciting because it is the current de facto standard for how LLMs interact with external capabilities (not just APIs). It has become standard practice to expose APIs through MCP servers so that LLMs can easily access them.
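To make this concrete, the sketch below shows the shape of an MCP tool definition that wraps a single API operation. The field names (`name`, `description`, `inputSchema`) follow the MCP tool-listing format; the order-status API itself is a hypothetical example.

```python
# Illustrative sketch: an MCP-style tool descriptor wrapping one API
# operation. Field names follow the MCP tool format; the API and its
# values are invented for illustration.
get_order_tool = {
    "name": "get_order_status",
    "description": (
        "Look up the current fulfilment status of a customer order. "
        "Returns one of: 'pending', 'shipped', 'delivered', 'cancelled'."
    ),
    "inputSchema": {
        "type": "object",
        "properties": {
            "order_id": {
                "type": "string",
                "description": "Order identifier, e.g. 'ORD-1234'.",
            }
        },
        "required": ["order_id"],
    },
}
```

Note how much of the descriptor is prose: the LLM selects and calls the tool based on these descriptions, which is exactly why description quality becomes a first-class concern.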
There’s nothing wrong with that general model, but as we can already see, MCP is only part of the solution. It doesn’t answer the question of how to best design and describe capabilities for AI consumers. Nor does it answer the question of how to manage environments where LLMs can potentially access a large number of capabilities.
And that’s fine because MCP wasn’t invented to tackle or solve these questions. But we have to keep these questions in mind if we want to have a strategy of how to do AI enablement at scale.
1. Why AI changes the API game
APIs have always been the “digital surface” of organizations. They define what can be accessed and what can be done in a programmable way. To some extent, you could even argue that AI hasn’t changed things all that much: we’re still in the business of accessing information, and of invoking capabilities. Only this time we have LLMs acting instead of traditional business applications.
But one could argue that while that’s true in principle, in practice AI does indeed change the game. AI’s model and mode of consumption is fundamentally different from the traditional view, where APIs were created for developers so that they could write applications with them.
- AI has less context and needs more clarity: Humans are good at dealing with fuzzy information and filling in missing pieces, based on their knowledge and experience. AI is less forgiving and may either fail or, arguably worse, may start hallucinating to fill in gaps in its knowledge.
- AI needs capabilities at runtime: Developers consume APIs to write applications. Once these are designed, developed, tested, and deployed, there is a fixed relationship between what these applications do and which capabilities they access. Agentic AI is different: it makes and executes plans for each task it is given, meaning that APIs are selected and accessed at runtime. That is a rather different overall model, because every time an AI works on a task, it needs to find and use the right capabilities for that task.
There are more nuanced points as well, but these two major points alone make it clear that AI creates new challenges in the API space. The recent wave of reports citing a relatively high failure rate for AI initiatives [MIT, McKinsey, Deloitte] may be an indication that AI is not “just another API consumer”. The difficulty of AI integration (i.e., connecting the AI to your organization’s digital capabilities) consistently ranks high on the lists of reasons why many AI initiatives fail to deliver satisfactory ROI.
2. From endpoints to capabilities: Designing for machine understanding
In order to address the challenges laid out in the previous section, the question is how to move forward. We see three important paths, each harder to execute than the last, but also delivering increasingly better results in terms of AI enablement.
- Better API descriptions. Most APIs have been designed and described for human developers. That focus remains important, but it is supplemented by a focus on agentic consumption. Using AI-focused API scoring, you can make sure that your APIs are well-suited for AI scenarios.
- Improving API selection. While better-described APIs help with AI consumption, they can only get you so far when there are too many APIs for an AI to fit into its context. A high-quality mechanism through which agents can ask for APIs specific to their tasks, and get back a small and solid set of results, increases the success rate of AI applications substantially.
- Providing workflows. Even with good descriptions and search, agents may still be presented with too many overly fine-grained APIs. In that case, introducing a layer of workflows can reduce complexity for agents substantially: instead of having 20 APIs to orchestrate, there is just one workflow that does the orchestration job for them.
This, once again, is only following good practices we have had in the API space for quite a while: Align your APIs with business capabilities, so that consumers are able to consume business-level capabilities instead of having to orchestrate low-level system APIs.
AI is just reinforcing this message. Adding higher-level APIs reduces complexity for agents, and allows you to better describe the business outcomes that agents need to discover to accomplish their tasks.
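The difference between low-level APIs and a business-level capability can be sketched in a few lines. The three low-level functions below are hypothetical stand-ins for fine-grained system APIs; the workflow function is the single capability an agent would see instead of orchestrating all three itself.

```python
# Sketch of the "one workflow instead of N APIs" idea. The low-level
# calls are invented stubs standing in for fine-grained system APIs.

def find_customer(email: str) -> dict:           # low-level API 1 (stub)
    return {"customer_id": "C-42", "email": email}

def list_open_orders(customer_id: str) -> list:  # low-level API 2 (stub)
    return [{"order_id": "ORD-7", "status": "shipped"}]

def get_tracking(order_id: str) -> dict:         # low-level API 3 (stub)
    return {"order_id": order_id, "carrier": "DHL", "eta": "2 days"}

def where_is_my_order(email: str) -> list:
    """Business-level workflow: the one capability exposed to agents,
    replacing the orchestration of the three calls above."""
    customer = find_customer(email)
    orders = list_open_orders(customer["customer_id"])
    return [get_tracking(order["order_id"]) for order in orders]
```

The agent now discovers and invokes one well-named business outcome (“where is my order”) instead of reasoning about customer lookup, order listing, and tracking as separate steps.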
3. Making APIs discoverable in large-scale landscapes
As detailed above, once organizations move past isolated experiments, scaling AI integration becomes more challenging. The reason is that API landscapes can become large and complex, making them hard to navigate for any consumer, be it human or machine.
This of course is a very typical challenge for larger organizations, where IT landscapes have been growing constantly alongside increasing digitalization. Having just hundreds of APIs these days amounts to a very modestly sized IT estate; oftentimes the numbers go up into the thousands or even into five-figure territory. Now multiply the number of APIs by the number of endpoints each of them has, and it becomes clear that scalability is a challenge.
The first way to address this challenge is to make descriptions better, so that even in large landscapes, it is easier to tell what individual APIs are doing, so that they can be combined and orchestrated more safely.
Going forward, as soon as your API landscape grows beyond a relatively modest size, there are two approaches to limiting context size so that API consumers can still deal with the complexity of the landscape:
- If you have well-delineated business domains, you can classify APIs by domain and then expose to agents only those APIs they need for the domain they are operating in. This reduces the complexity of the API surface exposed to agents, but domains can still be large, and when agents span multiple domains, you quickly reach the limits of this approach.
- A more scalable approach is a robust search facility. It allows any agent with any scope (search ideally can still be scoped by domain) to be exposed only to those APIs that are relevant to the current task. With robust search in place, the number of APIs you can handle is virtually unlimited, although you always have to keep a close eye on whether search precision and recall remain at the quality levels you need.
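A toy version of such a search facility can be sketched with simple token overlap between the agent's task and the API descriptions. A production facility would use embeddings and domain scoping; the catalog entries here are made up for illustration.

```python
# Toy sketch of task-scoped API search: rank API descriptions by token
# overlap with the agent's task and return only the top few, so the
# agent's context holds a handful of relevant APIs instead of thousands.
import re

CATALOG = {
    "getInvoice": "Retrieve a single invoice by its identifier.",
    "listInvoices": "List all invoices for a customer account.",
    "createShipment": "Create a shipment for a confirmed order.",
    "getOrderStatus": "Look up the fulfilment status of an order.",
}

def tokens(text: str) -> set:
    """Lowercase word tokens, ignoring punctuation and numbers."""
    return set(re.findall(r"[a-z]+", text.lower()))

def search(task: str, top_k: int = 2) -> list:
    """Return the names of the top_k APIs most relevant to the task."""
    task_tokens = tokens(task)
    ranked = sorted(
        CATALOG,
        key=lambda name: len(task_tokens & tokens(CATALOG[name])),
        reverse=True,
    )
    return ranked[:top_k]
```

The precision/recall caveat from the bullet above applies directly: if `search` misranks, the agent never even sees the right API, which is why search quality needs ongoing measurement.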
Using these two approaches, it is possible to scale to relatively large landscapes. However, it also becomes apparent that at some point the approach to API design may have to change: what you expose to AI consumers are no longer the foundational APIs, but more task-oriented ones that are easier to understand and consume for AI applications.
4. Leveling up: API workflows
In the API space, there has long been the idea of various levels of APIs, with some of them being more atomic, and others more composite and aligned with business-level concepts and processes.
One isolated example (and there are many others) in this space is the backend-for-frontend (BFF) pattern, which is based on the assumption that it makes sense to build dedicated backends for specific frontends. But even in this scenario, the question is what the BFF is working with. Is it based on rather atomic APIs, or can it already take advantage of more business-aligned APIs, which are then stitched together for a specific frontend?
One thing becomes apparent: starting from relatively low-level abstractions, potentially wrapping those in MCP, and then letting agents do the orchestration does not work well once the number of APIs grows into what is normal in larger organizations.
There are different ways to get around this, but MCP alone isn’t the answer. Recent developments in this space include Cloudflare’s Code Mode, Anthropic’s Advanced Tool Use, and Anthropic’s Agent Skills. We also see organizations building their own bespoke solutions when it comes to matching AI applications with the right tools.
The approaches mentioned above differ slightly and come with different advantages and disadvantages. We don’t want to pick a “winner” here; the jury is still out, and we will probably see a number of other approaches being developed and published.
One approach that’s interesting, because it uses AI itself to better serve AI’s needs, is to observe the decisions that agents make, to surface those orchestrations that happen more frequently and that can be labeled as successful, and to use those as guiding models for future attempts to solve similar tasks.
You can then use these workflows, represent them in a deterministic way, review and refine them, and finally provide them to agents for future problem-solving. That way, AI starts using deterministic models at a level where business processes can be represented by workflows, and the inherent uncertainty of AI can be reduced because of this deterministic layer.
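The mining step described above can be sketched in a few lines: group traces by the exact sequence of API calls, keep only the successful runs, and surface the frequent sequences as candidate workflows for review. The trace data and the support threshold are invented for illustration; real workflow mining would also handle near-matches and partial overlaps.

```python
# Sketch of workflow mining from agent traces: frequent, successful
# call sequences become candidate workflows for review and refinement.
from collections import Counter

traces = [
    {"calls": ["findCustomer", "listOrders", "getTracking"], "success": True},
    {"calls": ["findCustomer", "listOrders", "getTracking"], "success": True},
    {"calls": ["listOrders", "getTracking"], "success": False},
    {"calls": ["findCustomer", "getTracking"], "success": True},
]

def mine_candidate_workflows(traces: list, min_support: int = 2) -> list:
    """Return successful call sequences seen at least min_support times,
    most frequent first."""
    counts = Counter(tuple(t["calls"]) for t in traces if t["success"])
    return [seq for seq, n in counts.most_common() if n >= min_support]
```

The output of this step is exactly the deterministic artifact the text describes: a reviewable, refinable representation of an orchestration that agents previously had to improvise each time.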
5. Safe experimentation: Sandboxes for AI agents
In the scenario above, we leave it open how exactly the traces of agents solving problems are created. Are these logs of agents doing work in production, or are these traces of test runs in environments where workflows are being mined and minted, but not with actual production data?
We believe that both are valid models, and the question of which one to choose is very much a function of what the agents are doing, and of your appetite for risk. If you have agents in the customer service domain, you may accept instances where they do not follow the optimal route. If, on the other hand, you have core business processes being invoked by agents, then you want those to follow exactly the rules that are in place for these processes.
For that reason, it often is helpful to have sandboxes where AI agents can use APIs, but in an environment that is isolated from production data and processes. That way, agent behavior can be observed and monitored, and this data then serves as input for workflow mining and minting.
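Such a sandbox can be sketched as a proxy that agents call instead of the real APIs: it serves mock responses, never touches production, and records every call so the trace can later feed workflow mining. The class, API names, and mock data below are illustrative assumptions.

```python
# Sketch of a sandbox layer: agents call APIs through a proxy that
# returns canned data instead of touching production, while recording
# every call for later workflow mining. All names are illustrative.

class Sandbox:
    def __init__(self, mock_responses: dict):
        self.mock_responses = mock_responses
        self.trace = []  # recorded (api_name, arguments) pairs

    def call(self, api_name: str, **kwargs):
        """Record the call and return canned data; never hits production."""
        self.trace.append((api_name, kwargs))
        if api_name not in self.mock_responses:
            raise KeyError(f"API not available in this sandbox: {api_name}")
        return self.mock_responses[api_name]

sandbox = Sandbox({"getOrderStatus": {"order_id": "ORD-7", "status": "shipped"}})
```

Because the sandbox controls which APIs exist and what they return, it doubles as a guardrail: an agent simply cannot invoke a capability the sandbox does not expose.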
These kinds of guardrails combine the best of both worlds: you can have agents using their creativity and unique approaches to solve problems, but within an architecture that sets clear guardrails for what AI can and cannot do.
In the most risk-averse environments, AI agents possibly never get to act in production. But they are tirelessly working in the sandbox, allowing the organization to then define workflows, which are reviewed and refined before they go into production. That way, the traditional model of writing business code by hand (or increasingly through agentic coding) is replaced or at least supplemented by a more data-driven approach, fed by behavioral observations against real or simulated business capabilities.
As we can see, the path towards scalable AI at enterprise scale has a number of interesting steps to it. MCP is one small step and not one that helps a lot with large-scale scenarios. For the path outlined here, a number of other capabilities are required, starting with more AI-friendly APIs, moving on to workflows, and then adding in a sandbox.
6. Your AI-aware API strategy
This is by no means a complete overview of where AI currently stands, or of how to best bring your investment in your API landscape into the AI age. But this article should convince you that there are many more things to be done than just exposing individual capabilities through MCP and declaring victory.
Some of the aspects discussed here take more work than others, and not everybody needs to follow that exact path. But we believe that the considerations underlying the path presented here are valid for any slightly larger organization, so looking at these and thinking about how much they are already part of your current strategy hopefully gives you some interesting food for thought!
Our companies
- WSO2 is a European-headquartered technology company owned by EQT, a purpose-driven global investment organization. Since 2005, we have served as a critical infrastructure partner to the world’s largest enterprises, providing the foundational technologies required to connect systems, protect digital identities, and accelerate innovation.
- Jentic is building the enterprise AI enablement platform. It starts with existing APIs, takes these through a scoring and improvement process for AI consumers, adds workflows on top of it, and uses a sandbox for mining and minting workflows in a safe way.
References
- [Deloitte] Jim Rowan, Beena Ammanath, Nitin Mittal, and Costi Perricos, "State of AI in the Enterprise: The untapped edge", Deloitte, January 2026, [`www.deloitte.com/us/en/what-we-do/capabilities/applied-artificial-intel…`](https://www.deloitte.com/us/en/what-we-do/capabilities/applied-artifici…).
- [MIT] Aditya Challapally, Chris Pease, Ramesh Raskar, and Pradyumna Chari, "The GenAI Divide: State of AI in Business 2025", MIT NANDA, July 2025, [`nanda.media.mit.edu/ai_report_2025.pdf`](https://nanda.media.mit.edu/ai_report_2025.pdf).
- [McKinsey] Alex Singla, Alexander Sukharevsky, Lareina Yee, and Michael Chui, "The state of AI in 2025: Agents, innovation, and transformation", McKinsey, November 2025, [`www.mckinsey.com/capabilities/quantumblack/our-insights/the-state-of-ai`](https://www.mckinsey.com/capabilities/quantumblack/our-insights/the-sta…).