Trend 4: Finding the AI RoI for API developers - API Economy Trends for 2025

2025 API Economy Prediction 4: AI over-hype begins to settle as balance sought between AI usage and RoI
Throughout 2024, Artificial Intelligence (AI), and in particular generative AI (GenAI), was held up as creating a pivotal turning point for industries of all kinds, offering the opportunity to create solutions faster by enabling coding copilots, automating content generation, sifting through data, responding to customer and workforce queries, brokering knowledge, and performing other tasks.
But as organisations of all sizes started testing AI, and the tech media hype machine went into overdrive suggesting you were "missing out" if you were not adopting AI, a number of concerns began to arise, which I highlighted in a keynote at apidays at the end of 2023 and which came into sharper focus throughout 2024:
- The high environmental cost of compute power required1,
- The propensity for AI to 'hallucinate' and provide incorrect answers,
- The risk of model collapse as AI results were fed back into AI datasets,
- The mediocre quality of the content produced,
- The need to introduce strong data, API, and AI governance,
- The opaque copyright legal adherence,
- The exploitation of workers required to view disturbing content2, and
- The poor code outputs provided by GenAI autopilots.
Studies regularly found that AI-generated code had security flaws, required deployment rollbacks, or had to be refactored at a higher rate than developer-written code. AI-generated code also carries a higher level of technical debt, with code duplication ten times higher than two years ago, before AI coding assistants became prevalent3.

Potential AI benefits for API tech leads
Despite these limitations, many developers report benefits in using GenAI to create OpenAPI Specifications, write code, and answer coding or spreadsheet-formula questions, and have been able to use AI outputs to create solutions. By the end of 2024, this gave rise to chatbots and emergent agentic AI tooling, including assistants that help developers search documentation or understand process steps. Some report that this is beneficial to developer productivity.
Sterling Davis, of DX, noted at the start of 2025 that "developers using AI-pairing tools complete tasks up to 55% faster and reduce code review times by an average of 19.3 hours".
In API design, new tools like Blackbird from Ambassador have been able to enhance API design practices and speed up some of the more tedious tasks involved in working through an API design process and converting outputs into an OpenAPI definition. James Higginbotham, from LaunchAny, for example, was able to generate an OpenAPI document and then apply API design-first processes to generate boilerplate API code.
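To make the design-first flow described above concrete, a team might run a basic structural check over an AI-generated OpenAPI document before handing it to a code generator. The helper and sample document below are a minimal, hypothetical sketch, not a full validator (a real project would use a dedicated OpenAPI validation library); the field names follow the OpenAPI 3.x specification:

```python
# Basic sanity check for an AI-generated OpenAPI document before it is
# fed into design-first boilerplate generation. Hypothetical example only.

def check_openapi_document(doc: dict) -> list[str]:
    """Return a list of problems found; an empty list means the basics are in place."""
    problems = []
    if not str(doc.get("openapi", "")).startswith("3."):
        problems.append("missing or non-3.x 'openapi' version field")
    info = doc.get("info", {})
    for field in ("title", "version"):
        if field not in info:
            problems.append(f"info.{field} is required")
    paths = doc.get("paths", {})
    if not paths:
        problems.append("no paths defined")
    for path, operations in paths.items():
        for method, operation in operations.items():
            if "responses" not in operation:
                problems.append(f"{method.upper()} {path} has no responses")
    return problems

# Hypothetical AI-generated document for illustration.
generated = {
    "openapi": "3.0.3",
    "info": {"title": "Orders API", "version": "1.0.0"},
    "paths": {"/orders": {"get": {"responses": {"200": {"description": "OK"}}}}},
}

print(check_openapi_document(generated))  # prints []
```

A check like this is cheap to run in CI, catching structural gaps in AI output before any boilerplate is generated from it.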
In addition, we saw some neat examples of the potential for AI in coding work. One of our favorites is a demonstration by Frank Kilcommins showing how Arazzo workflow documentation can be spun up using AI tools.
Other AI use cases for API management, such as using AI to analyse and monitor traffic, are also emerging that show significant promise. New open source AI governance tools like PAIG are also emerging to support developers.
AI across the API Landscape
While we expect Q1 2025 data to show a jump, as of the end of 2024, 13% of API tools providers had introduced AI features into their tooling.
At present count, the API Landscape tracks 182 AI-specific tools aimed at supporting developer workflow tasks as part of API-related infrastructure and processes:

In addition, we also count 212 AI products available to use via API (that is, that use an API-as-a-product business model):

As 2024 concluded, AI appeared to support developers in their work, with AI tooling on the cusp of becoming a new partner in supporting the work of API teams.
Analysing API AI trends in 2025
At the start of 2025, Agentic AI has been the main focus for AI experimentation. However, there are serious doubts that Agentic AI will be implementable this year. Writing in The New Stack, Barr Moses, co-founder of Monte Carlo, highlights the limitations of Agentic AI and the need to focus on data governance first. This prediction follows work like that from Goldman Sachs in mid-2024, which forecast a limited economic upside from AI over the next decade and argued that AI isn't designed to solve the complex problems that would justify its costs.
At Platformable, we are hearing of wholesale changes to budgets and business models as more companies — often larger enterprises — seek to apply AI across their functions. One large consultancy firm mentioned stopping subscriptions to major tech support and resource services as they shifted to reliance on AI, and were pausing hiring of new consultants as they tried to understand how AI agents might replace this work effort. Others have mentioned that a significant proportion of their team are not currently working on client (and revenue)-facing projects as clients wait to see if AI could replace their consultancy contract purchasing.
We are still seriously confused and disappointed by the widespread acceptance of OpenAI as a vendor in this space. As we have discussed in our previous analysis of AI open ecosystems, OpenAI has not proven itself to be a reputable company. They are involved in the exploitation of African workers, there are serious concerns about their role in stealing copyrighted content, there are multiple instances of them changing terms of service opaquely, and they have no real governance oversight. Statements by their leadership, pricing changes, and current losses and investment returns all suggest they do not have a viable business model. Their focus on Agentic AI is a good example: their recent forecasts suggest they will generate billions from an Agentic AI product that is only available on their premium subscription plan and has so far involved a very poor demo, with the AI working pretty miserably at booking the most generic of travel plans possible from a website, without any personalisation.
It is still challenging to understand the true value being fostered by AI. Work we have completed with clients at Platformable over the past six months could not have been done by AI. This includes interoperability mapping, health data policy analysis, ecosystem mindset training, and our banking/finance data collection. Several times we ran experiments comparing our outputs with GenAI, as did some of our clients (one client even asked us to justify why we thought our model was superior to the model proposed by ChatGPT). In each case, our work was more nuanced, used more examples and case studies, had fewer errors, made more strategically aligned recommendations, and balanced business goals with social and sustainability concerns. The quality of our writing was more engaging (and included less marketing-speak), and our conclusions were more creative and better honed to the context we had analysed.
But while our work focuses on the business side of APIs (the ecosystem mindset in particular, and the supports that enable ecosystem participation and growth), our focus for API Economy trends is on how AI can support API technical and product management work, including for solution architects, platform leads, developers, developer relations leads and product managers.
The API Economy trend we believe will unfold further in 2025 is that AI will find its greatest value as a support tool for developers. However, how AI enhances and supports the work of developers without introducing shortcuts that erode their critical thinking and creative skills will need to be closely monitored. Recent research from Microsoft shows that generative AI can reduce critical thinking when results are taken at face value rather than reviewed. This suggests the need for AI code tooling to include features that encourage developers to review and confirm code rather than simply integrate it into their existing code base or other processes; we will be watching for the development of those kinds of features within tooling.
This year, our API Economy trends tracking will be focused on:
- How AI governance can be improved and aligned with data and API governance practices
- Where AI is proving beneficial to API design, development and deployment
- How AI is shifting the API Economy.
In addition, from Platformable's point of view, we will be looking deeper at the impact of AI on open digital ecosystems. For this work, we will be drawing on Abeba Birhane's work with the AI Accountability Lab, where she maps "concrete steps forward" for AI in public services. Many of her recommendations align with principles and strategies that would apply to fostering open digital ecosystems that ensure responsible and ethical uses of AI. This includes nurturing knowledge ecosystems, fostering data and AI literacy, creating accountability and evaluation tools, and regulating and acting against power imbalances.
Challenges in tech policy and analysis in the AI era
As a purpose-focused business with close to fifteen years' experience working across the API Economy and thirty years working in data advocacy and data systems, we always look for ways to keep advancing our values, which focus on ensuring that open digital ecosystems flourish so that everyone can participate and co-create their own value. While we have strong diversity, equity and inclusion (DEI) policies and practices, we have needed to frame the benefits of DEI around the business goals of clients.

For example, in open banking and open finance, we regularly highlight that the international remittances sector, with a global market size of $905 billion (25% of that representing remittance flows to low- and middle-income countries)4, is driven in large part by migrants sending money to families and friends in their country of origin. Yet there are very few products that aim to provide other financial services to this target segment, despite the evidence of its viability as a consumer market. Similarly, women-owned businesses are often more economically stable than startups founded by men, yet VC and investment funding does not reflect this, despite women-owned businesses being a more reliable investment. So while our values are focused on ensuring services are designed targeting these consumer segments, our arguments focus on the business case for doing so. On more than one occasion, we have been told to tone down our focus on the societal and equity benefits of open digital ecosystems in favor of demonstrating the business case. From a business point of view, we have gotten more client work when we have made equity an additional lens in the analysis we provide rather than a central pillar (although we do provide substantial discounts to organisations and prioritise work on projects with an equity or sustainability focus).
We mention this because we understand we will have to find a balance with our AI analysis work in order to preserve client relationships.
We believe our trends analysis is already, and will continue to be, more thorough, honest and contextualised than what you will read from many larger tech analyst companies, including Accenture, Gartner and McKinsey. Accenture went all in on the metaverse, recommending heavy investment by companies to build out their metaverse real estate. Companies that followed that advice lost significant time and budget pursuing a dead end that we had strongly cautioned against; we instead promoted open digital ecosystems that leverage APIs, an approach that has since become more mainstream. While we have a lot of respect for Gartner, we find their analysis lacks wider socio-political context. They have been completely silent on the conflict of interest in AI investment by cloud providers, who benefit significantly if AI is adopted. They have not discussed the governance challenges and shady business practices of AI companies. They are silent on how AI companies are aligning themselves with the coup currently happening in the United States. Their analysis has been unable to find a way to describe the growing interest in digital sovereignty, or the global activities around digital public infrastructure and the emergence of projects like the IndiaStack and EuroStack. (We were also fairly nonplussed by the way they pushed Backstage as the internal developer portal when it was not ready for widespread adoption and would not solve the API governance and platform engineering issues many enterprises faced, and we feel they could be applying that same tactic to AI.) And McKinsey has done little governance work or reorganisation to address the fact that they worked with the U.S. Food and Drug Administration to prevent opioid abuse while also working with pharmaceutical companies to increase opioid consumption. I have read and seen nothing to suggest they are not playing both sides in the same way with their AI tech analysis and recommendations. And don't get me started on BCG!
In the current economic climate, I do feel some fear in being forthright about the limitations of AI. (This climate should be exciting and dynamic: digital solutions have boosted economic development, created greater choice, and solved some of the friction in previous analog processes. Instead it is precarious, because tech CEOs and shareholders amass wealth, do not fairly distribute revenue to workers or reduce costs for consumers, and take a monopolistic approach to the market that now includes fighting against regulations aimed at protecting local communities and economies.) But I need to remember that Platformable has been ahead of the curve: promoting greater use of open data, encouraging open digital ecosystems, mapping data and API governance processes, looking at how APIs and technologies can support health interoperability and sustainability decision-making, rejecting cryptocurrency and the metaverse, and even cautioning against Alexa API workflows back when everyone thought that would be the next big thing. We have consistently promoted the tech policy and adoption directions that have since become mainstream, and avoided the tech hypes that larger consultancy firms and the tech media promote.
We will be working in AI this year: we have landscape mapping and industry survey work already planned, for example, and we also want to continue looking at how AI governance should be managed as part of an overall open digital ecosystem approach. But, to be honest, I also face a business fear that being outspoken and opinionated on AI will reduce some client opportunities. We urge greater conversation and assessment of AI in a wider tech and societal context. The days of separating tech from its wider positioning as an influence in geopolitical contexts, and from its role in creating wider societal and economic impacts, need to end.
Addressing this API Economy Trend: Where to start
Our predictions for 2025
The poor quality of content generated entirely by AI will reduce interest in using GenAI as a replacement for developer-focused content generation beyond simple documentation. However, a new balance will be found with tooling that uses AI to assist developers to work more effectively and productively by reducing the repetitive, duplicative and more tedious aspects of their work. More organisations will start to make adoption decisions based on the business value they are able to generate from AI, if not on its environmental footprint (except where this influences costs). There will be a shift towards smaller language models that focus on specific ontologies and can be used for specific industry and business needs, as larger models increasingly generate incorrect answers.

Ensure you have good AI governance and consumption-management practices in place before integrating AI tools into your API consumption value chain.
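As one small illustration of what consumption management can mean in practice, here is a minimal sketch of a guard that tracks token usage against a budget before permitting further calls to an AI tool. The class name and budget figures are hypothetical, not taken from any vendor SDK:

```python
# Minimal sketch of AI consumption management: track token usage against a
# monthly budget before permitting further calls to an AI tool.
# Hypothetical class and limits for illustration only.

class TokenBudget:
    def __init__(self, monthly_limit: int):
        self.monthly_limit = monthly_limit
        self.used = 0

    def can_spend(self, tokens: int) -> bool:
        """Check whether a request fits within the remaining budget."""
        return self.used + tokens <= self.monthly_limit

    def record(self, tokens: int) -> None:
        """Record usage, blocking the request if it would exceed the budget."""
        if not self.can_spend(tokens):
            raise RuntimeError("AI token budget exceeded; request blocked")
        self.used += tokens

budget = TokenBudget(monthly_limit=100_000)
budget.record(40_000)            # e.g. spec-generation tasks
budget.record(55_000)            # e.g. code-review assistance
print(budget.can_spend(10_000))  # prints False: only 5,000 tokens remain
```

In a real deployment this kind of check would sit in an API gateway or middleware layer, alongside logging of which teams and use cases are consuming the budget.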

In addition to documenting AI governance, work with developer teams to see AI as an aid to developer coding work rather than as a replacement. Look at AI tooling that helps developers do tedious work faster (like creating API specification files and designing integration tests).

Look at opportunities to deliver features that enable users to reduce the tedious and repetitive aspects of their work with AI (for example, creating integration tests). Create features that support developers to review code and other outputs rather than simply integrate results.
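The recommendations above can be made concrete with a sketch of the kind of integration test AI tooling might draft for a developer to review. The endpoint shape and handler here are hypothetical, and the handler is an in-process stand-in so the example is self-contained; a real test would call the deployed API, and a developer should still review the assertions rather than accept them wholesale:

```python
# Sketch of an AI-drafted integration test presented for developer review.
# The handler below is a hypothetical in-process stand-in for a real API call.
import unittest

def get_order(order_id: str) -> tuple[int, dict]:
    """Stand-in for calling GET /orders/{order_id}; returns (status_code, body)."""
    if order_id == "42":
        return 200, {"id": "42", "status": "shipped"}
    return 404, {"error": "not found"}

class OrderEndpointTests(unittest.TestCase):
    def test_known_order_returns_200_with_expected_fields(self):
        status, body = get_order("42")
        self.assertEqual(status, 200)
        self.assertEqual(set(body), {"id", "status"})

    def test_unknown_order_returns_404(self):
        status, body = get_order("999")
        self.assertEqual(status, 404)
        self.assertIn("error", body)
```

Run with `python -m unittest` in the test file's directory. The review step matters: a developer confirming that the asserted fields actually match the API contract is exactly the kind of "review rather than integrate" behaviour the tooling should encourage.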
Article references

Mark Boyd
DIRECTOR
mark@platformable.com