This week, we’re sharing some AI-related security news, with several reports highlighting vulnerabilities in trusted AI platforms. We also review a blog post claiming an API BOLA vulnerability at Mercury Energy New Zealand and cover a recent interview exploring a range of API security topics. First up though, news of a new OWASP Top 10 list just released.
Industry News: OWASP Top 10 2025 now available
A release candidate for the new OWASP Top 10:2025 vulnerability list is available from the OWASP website. This is the general OWASP Top 10 list, not the API-specific one. But there are a couple of notable trends that I think could also influence the next API vulnerability list.
First, Security Misconfiguration has been bumped up from #5 to #2, signaling that insecure or unexpected application behavior is increasingly caused by configuration issues rather than just coding flaws.
That has important implications for security testing. Static Application Security Testing (SAST) tools that scan only source code may be blind to vulnerabilities introduced through configuration settings outside the application. These issues often surface only during manual configuration reviews or dynamic testing (assuming you can define upfront the expected behavior to test against), so the trend may warrant a shift in emphasis toward dynamic testing.
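As a hypothetical illustration of why SAST can miss this class of issue, consider the Python sketch below: the source code itself looks clean, and the vulnerability only exists when a deployment setting flips debug mode on in production. The app and environment variable names are invented for the example.

```python
# Illustrative sketch only: a SAST scan of this file finds no hard-coded
# debug flag, yet a single environment setting outside the codebase can
# expose stack traces and an interactive debugger in production.
import os
from flask import Flask

app = Flask(__name__)

@app.route("/")
def index():
    return "hello"

if __name__ == "__main__":
    # The risky behavior is decided by configuration, not code: setting
    # FLASK_DEBUG=1 in the deployment environment enables debug mode.
    app.run(debug=os.environ.get("FLASK_DEBUG", "0") == "1")
```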
There’s also a new entry: Software Supply Chain Failures. In the API space, this likely reflects the growing complexity of API ecosystems and their interdependencies. APIs today often act as just one node in a chain of upstream and downstream API calls, with LLMs and Model Context Protocol (MCP) servers now thrown in for good measure as consumers and providers of API data.
It’ll be interesting to see if this drives any movement in the OWASP API Security Top 10 list, particularly for API10:2023 “Unsafe Consumption of APIs” in its next release. Read the OWASP Top 10 introduction.
Vulnerability: Researcher Claims API Flaw at Mercury Energy NZ
A recent blog post spotlights a potential API flaw at Mercury Energy in New Zealand, where broken object-level authorization (BOLA) may have allowed access to customer records. This type of API authorization vulnerability is a common root cause in API attacks and highlights the importance of enforcing strict access controls at every API endpoint.
BOLA is often exploited by authenticated users who gain access to other users’ resources or data by manipulating resource identifiers in API requests. To prevent it, API developers should add authorization checks that verify the requesting user owns the resource or has the right permissions to access it.
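Here’s a minimal sketch of such an object-level check, using FastAPI with a placeholder in-memory store and a stubbed-out auth dependency (the endpoint and field names are made up for illustration):

```python
from dataclasses import dataclass
from fastapi import Depends, FastAPI, HTTPException

app = FastAPI()

@dataclass
class Invoice:
    id: int
    owner_id: int

# Placeholder store and auth dependency -- stand-ins for a real database
# and token validation, purely for illustration.
INVOICES = {1: Invoice(id=1, owner_id=42)}

def get_current_user_id() -> int:
    return 42  # a real app would derive this from the validated token

@app.get("/invoices/{invoice_id}")
def get_invoice(invoice_id: int, user_id: int = Depends(get_current_user_id)):
    invoice = INVOICES.get(invoice_id)
    if invoice is None:
        raise HTTPException(status_code=404)
    # The object-level check: authentication alone is not enough; the
    # caller must own the record they are asking for.
    if invoice.owner_id != user_id:
        raise HTTPException(status_code=403)
    return invoice
```

The key point is that the ownership check runs on every request, even when the caller is fully authenticated and the identifier is valid.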
Testing and validating that these authorization controls are in place and effective can help teams remove object-level authorization vulnerabilities that often carry very high risk. Read the post.
Vulnerability: ChatGPT API Exposes Access Tokens
Security researcher Jacob Krut uncovered a high-severity API vulnerability in ChatGPT’s custom actions feature, which allows a custom GPT to connect to external tools and services by calling your own APIs to pull in additional context for an AI agent or LLM. In effect, it’s similar to Anthropic’s Model Context Protocol (MCP).
The researcher found that instead of pointing to some benign external API, the custom actions feature would accept a URL for an internal API, allowing him to trick the system into returning data, and sensitive data at that, from its own file system. This is a classic example of server-side request forgery (SSRF): always validate and restrict user-supplied URLs and other input before an API acts on them.
ChatGPT did appear to have some input validation in place to check the user-supplied URL, but it was limited. With some clever bypass tricks, the researcher was able to launch a successful SSRF attack, access an internal metadata service on the hosting cloud platform, and ultimately generate privileged access tokens for the Azure management API, the proverbial keys to the kingdom! Read the researcher’s full report.
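A minimal sketch of the kind of URL validation that helps here: allowlist the scheme and host, then refuse anything that resolves to a private, loopback, or link-local address, which is where cloud metadata services (such as 169.254.169.254) typically live. The allowlist contents below are made up for the example.

```python
import ipaddress
import socket
from urllib.parse import urlparse

ALLOWED_HOSTS = {"api.example.com"}  # hypothetical allowlist

def is_safe_url(url: str) -> bool:
    parsed = urlparse(url)
    if parsed.scheme != "https":
        return False
    if parsed.hostname not in ALLOWED_HOSTS:
        return False
    # Resolve and check every address: DNS may map an innocent-looking
    # name to an internal IP (DNS rebinding is a further concern).
    try:
        infos = socket.getaddrinfo(parsed.hostname, None)
    except socket.gaierror:
        return False
    for info in infos:
        addr = ipaddress.ip_address(info[4][0])
        if addr.is_private or addr.is_link_local or addr.is_loopback:
            return False
    return True
```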
Vulnerability: Risk from AI Products
Cybersecurity firm Wiz recently found critical vulnerabilities in the GitHub repositories of some of the biggest and most influential AI companies on the Forbes AI 50 list. The exposed data included API keys, access tokens, and credentials, but also model data that can reveal insights about the training data used by these AI systems.
Leaked credentials from GitHub and other public repositories aren’t exclusive to AI companies. It’s been a recurring problem across industries for years. I came across a similar case we reported on in this newsletter from 2019, and there have been many others since then. It’s clearly a systemic issue for collaborative software development in general.
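A common mitigation is to scan repositories for secrets before they’re pushed. The toy Python sketch below illustrates the idea with a few well-known patterns; it’s a stand-in for purpose-built scanners such as gitleaks or truffleHog, not a replacement for them.

```python
# Toy secret scanner: walk a directory tree and flag files matching a few
# illustrative (not exhaustive) secret patterns.
import re
import sys
from pathlib import Path

PATTERNS = {
    "AWS access key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "Generic API key": re.compile(r"(?i)api[_-]?key\s*[:=]\s*['\"][^'\"]{16,}"),
    "Private key": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
}

def scan(path: Path) -> bool:
    found = False
    for file in path.rglob("*"):
        if not file.is_file():
            continue
        try:
            text = file.read_text(errors="ignore")
        except OSError:
            continue
        for name, pattern in PATTERNS.items():
            if pattern.search(text):
                print(f"{file}: possible {name}")
                found = True
    return found

if __name__ == "__main__":
    # Non-zero exit so the scan can gate a CI job or pre-commit hook.
    sys.exit(1 if scan(Path(".")) else 0)
```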
That said, AI tools and platforms are increasingly embedded across organizations in software development, testing, and critical business workflows, so any compromise of these tools can have an outsized impact. And with the growing popularity and adoption of MCP servers, AI platforms are also moving into the critical paths of organizations’ service and data delivery, where vulnerabilities can quickly spread across systems to multiply risk and business impact. Tread carefully with AI enablement. Read the news article.
Article: API Risk is a board-level business continuity issue
A discussion on Betanews about the risks from APIs as the fastest-growing class of security incidents is worth a read. It covers a range of topics, from the impact of agentic AI on API security, to the essentials of an API-first security strategy, to the impact of a single API vulnerability on the broader supply chain.
“APIs are not just developer conveniences, they are business-critical assets that demand the same rigor as financial systems or customer databases”
In the article, Scott Wheeler talks about the need for organizations to make API security central to their overall architecture, and to integrate security into the very design and development of APIs, rather than waiting until they’re already out in the world, when it’s too late.
There’s also a discussion on using behavior-based threat detection to identify out-of-bounds behavior. That’s certainly an improvement on the traditional signature-based defenses of a WAF or gateway, but behavioral detection based on machine learning often struggles to keep up with the pace of API delivery and the rate of API updates, since those tools require constant retraining. During that retraining window, the API traffic is essentially unprotected.
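To make that retraining gap concrete, here’s a toy Python sketch of rate-based behavioral detection (the endpoints and thresholds are invented for illustration). Note how a newly shipped endpoint has no baseline at all, so the detector is blind to it until retrained:

```python
class RateBaseline:
    """Flag callers whose request rate exceeds the learned per-endpoint mean."""

    def __init__(self, multiplier: float = 3.0):
        self.baseline: dict[str, float] = {}  # endpoint -> mean req/min
        self.multiplier = multiplier

    def train(self, samples: dict[str, list[float]]) -> None:
        for endpoint, rates in samples.items():
            self.baseline[endpoint] = sum(rates) / len(rates)

    def is_anomalous(self, endpoint: str, rate: float) -> bool:
        mean = self.baseline.get(endpoint)
        if mean is None:
            return False  # unseen endpoint: no baseline, no protection
        return rate > mean * self.multiplier

detector = RateBaseline()
detector.train({"/invoices": [10.0, 12.0, 11.0]})
print(detector.is_anomalous("/invoices", 50.0))     # True: well above baseline
print(detector.is_anomalous("/v2/invoices", 50.0))  # False: new endpoint, blind spot
```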
“Once the scope is understood, the priority becomes shifting security left, embedding protection into the design and development lifecycle rather than bolting it on after deployment”
A security-by-design approach for APIs, one that evolves along with API development, is essential to build APIs that are reliable data providers for AI models as well as resilient to the unprecedented speed of AI-powered attacks. Read the article.