This week, we cover the promise and pitfalls of using AI for API security, along with newly discovered vulnerabilities in Web Application Firewalls and emerging Vibe Coding platforms. We explore strategies for building APIs optimized for AI integration, and highlight a critical vulnerability in a popular API development framework that developers should be aware of.
Vulnerability: WAF security hacked by HTTP parameter pollution
Researchers at Ethiack uncovered security bypass vulnerabilities in a number of WAF products, demonstrating a cross-site scripting attack using a relatively simple technique to bypass WAF security. ‘HTTP parameter pollution’ exploits the fact that some application technologies interpret duplicate parameters in different ways:
https://example.com/path?myParam=val1&myParam=val2
For example, an ASP.NET application automatically combines the values of a duplicated parameter into a comma-separated list or array:
myParam=val1,val2
Hackers can use this feature to break a cross-site scripting (XSS) attack pattern into multiple pieces in order to obfuscate the full attack pattern.
The team found that while WAFs will recognize the malicious pattern in its entirety, if the pattern is spread across multiple input parameters most WAFs fail to recognize the risk and allow the malicious request through to the server, where it is combined back into the full attack pattern.
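To illustrate the mechanics, here is a minimal Python sketch (with a hypothetical signature check standing in for a WAF rule). Each fragment of the split payload passes the per-value check, but a backend that recombines duplicate values reassembles the full attack pattern. For clarity the sketch concatenates the duplicates directly; real backends may instead join them with commas or bind them to an array.

```python
from urllib.parse import parse_qs

# Hypothetical WAF-style signature check applied to each parameter value.
ATTACK = "<script>alert(1)</script>"

def waf_allows(value: str) -> bool:
    return ATTACK not in value

# The attacker splits the payload across duplicate parameters.
query = "myParam=<script>alert(1)&myParam=</script>"
values = parse_qs(query)["myParam"]

# Each fragment passes the signature check in isolation...
assert all(waf_allows(v) for v in values)

# ...but a backend that concatenates duplicate values
# reassembles the original payload server-side.
recombined = "".join(values)
assert recombined == ATTACK
```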
This is a tricky attack for APIs to prevent, and clearly most WAFs aren’t up to the task. But the ambiguity around duplicate parameters can be removed by defining a parameter’s data type up front, to clarify that it is not a list or array of values and so should appear only once in a request.
If the API does need to support multiple values for a parameter as an array, it is best to set further constraints on the parameter values, limiting the room for attackers to smuggle scripts, commands, and other invalid input through in a parameter pollution attack.
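As a sketch of that idea (the schema and parameter names here are hypothetical), a server-side check can reject duplicated scalar parameters outright and constrain the values of genuine array parameters:

```python
import re
from urllib.parse import parse_qs

# Hypothetical parameter schema: declares which parameters may repeat
# (arrays) and what values each may take.
SCHEMA = {
    "userId": {"array": False, "pattern": r"^\d+$"},
    "tags":   {"array": True,  "pattern": r"^[a-z0-9-]{1,32}$"},
}

def validate_query(query: str) -> bool:
    params = parse_qs(query)
    for name, values in params.items():
        rule = SCHEMA.get(name)
        if rule is None:
            return False                      # unknown parameter
        if not rule["array"] and len(values) > 1:
            return False                      # duplicated scalar: reject
        if not all(re.fullmatch(rule["pattern"], v) for v in values):
            return False                      # value outside constraints
    return True

assert validate_query("userId=42&tags=api&tags=security")
assert not validate_query("userId=1&userId=2")        # polluted scalar
assert not validate_query("tags=<script>alert(1)")    # invalid value
```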
One to watch out for!
Article: API Security – guided by tools but driven by expertise
This article outlines a good example of how developers can use AI tools effectively to produce secure APIs: a developer’s domain knowledge guides an AI tool to add the appropriate security controls to the API code, rather than offloading security expertise to the AI entirely.
While the article focuses on building secure APIs for the Fintech industry, this really should be recommended practice for any secure API development.
First, the article makes a great case for API security fundamentals: validate all inputs against strict rules, enforce rate limits to prevent abuse, and handle errors without leaking details. These are solid security recommendations, since they build in API resilience to common attacks.
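Those three fundamentals can be sketched in a few lines of Python. This is a minimal, stdlib-only illustration (the endpoint, account-ID format, and rate-limit numbers are all assumptions, not from the article): strict input validation, a fixed-window rate limit per client, and error responses that reveal nothing about internals.

```python
import re
import time
from collections import defaultdict

ACCOUNT_ID = re.compile(r"^[A-Z0-9]{8}$")   # assumed input format

# Fixed-window rate limiter: at most 5 requests per minute per client.
WINDOW, LIMIT = 60, 5
hits = defaultdict(list)

def allow(client: str) -> bool:
    now = time.monotonic()
    hits[client] = [t for t in hits[client] if now - t < WINDOW]
    if len(hits[client]) >= LIMIT:
        return False
    hits[client].append(now)
    return True

def get_account(client: str, account_id: str) -> dict:
    if not allow(client):
        return {"status": 429, "error": "Too many requests"}
    if not ACCOUNT_ID.fullmatch(account_id):
        # Reject bad input without echoing it back or leaking internals.
        return {"status": 400, "error": "Invalid request"}
    return {"status": 200, "account": account_id}

assert get_account("c1", "ABCD1234")["status"] == 200
assert get_account("c1", "../etc/passwd")["status"] == 400
for _ in range(5):
    get_account("c2", "ABCD1234")
assert get_account("c2", "ABCD1234")["status"] == 429
```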
Next, the developer leverages an AI tool (GitHub Copilot in this case) to enrich the API code with the security controls, applying the critical eye of an experienced practitioner to catch AI mistakes.
“A basic security check was performed using the prompt generated by Copilot’s code. However, we need to implement a more robust error handling check to enhance security.”
An interesting article and worth a read.
Vulnerability: API flaws in Base44’s vibe coding platform
Reports of vulnerabilities in AI agents, LLMs, and especially MCP solutions have surged in recent months. As with many emerging technologies, the rush to deploy AI-powered tools and integrations has often outpaced security considerations.
The latest example comes from Wiz researchers, who discovered critical API vulnerabilities in the Base44 vibe coding platform, a tool used by enterprises to build applications with the help of AI. Despite offering access control features that allow organizations to restrict application usage to invited users only, researchers found that the platform’s APIs allowed anyone to self-register for any hosted application.
The only requirement was an application ID, which was easily discoverable even for private enterprise applications. This bypassed access controls entirely, exposing private applications and potentially sensitive data to unauthorized access.
Authorization flaws remain one of the most common and dangerous vulnerabilities in API development. As AI-driven platforms become more widely adopted, they must be held to the same security standards as any production system.
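The flaw class is easy to state in code. In this hypothetical sketch (names and IDs invented, not taken from the Wiz write-up), the vulnerable path treats knowledge of an application ID as sufficient to register, while the fixed path decides authorization from server-side state, an invite list, rather than from anything the client supplies:

```python
# Server-side invite list: the only source of truth for who may register.
invited = {("app-123", "alice@example.com")}

def register_vulnerable(app_id: str, email: str) -> bool:
    # Broken: any syntactically valid app ID grants registration,
    # so a discoverable ID bypasses access control entirely.
    return app_id.startswith("app-")

def register_fixed(app_id: str, email: str) -> bool:
    # Authorization is decided by server-side state, not client input.
    return (app_id, email) in invited

assert register_vulnerable("app-123", "attacker@evil.test")   # bypass
assert not register_fixed("app-123", "attacker@evil.test")
assert register_fixed("app-123", "alice@example.com")
```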
Article: Define APIs with precision for AI integration
A recent New Stack article explores how agentic AI systems discover and call APIs autonomously. There are some useful API design tips here that apply beyond AI clients, but they are particularly relevant for autonomous systems, where precision limits the room for integration errors and malicious attacks.
The key message from the article is that vague or loosely defined APIs lead to unexpected behavior from both the calling agent and the API itself.
Without the precision of clear endpoint names, parameter constraints, and schemas, agents can misuse input fields or misinterpret responses, causing unintended API behavior.
Some recommendations to make APIs AI-ready include:
- Publish a well-defined OpenAPI contract with complete request/response schemas, parameter constraints, authentication schemes, error formats, and success codes. Agents can rely on that spec as a source of truth.
- Add rich, natural-language descriptions at both API and operation levels. Include business intent and context, and relevant examples that help agents choose endpoints correctly.
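To make the first recommendation concrete, here is a hypothetical OpenAPI operation expressed as a Python dict (the operation, parameter, and pattern are invented for illustration), together with a small check showing how a declared constraint lets a caller, human or agent, verify a value before sending it:

```python
import re

# A minimal, hypothetical OpenAPI operation: explicit constraints,
# described intent, and declared responses.
operation = {
    "operationId": "getOrderStatus",
    "description": "Return the fulfilment status of a single order. "
                   "Use when the caller already holds an order ID.",
    "parameters": [{
        "name": "orderId",
        "in": "path",
        "required": True,
        "schema": {"type": "string", "pattern": "^ord_[a-z0-9]{10}$"},
    }],
    "responses": {
        "200": {"description": "Order status payload"},
        "404": {"description": "Unknown order ID"},
    },
}

def conforms(order_id: str) -> bool:
    """Check a candidate value against the declared constraint."""
    pattern = operation["parameters"][0]["schema"]["pattern"]
    return re.fullmatch(pattern, order_id) is not None

assert conforms("ord_a1b2c3d4e5")
assert not conforms("anything-goes")   # a freely guessed value is rejected
```

A spec this explicit leaves an agent no room to improvise parameter formats, which is exactly where integration errors creep in.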
High-quality OpenAPI contracts are essential to clarify how APIs work, enabling consistent interpretation by clients, especially autonomous agents, and reducing the risk of misuse, hallucinations, or unpredictable API behavior.
Vulnerability: API developers exposed to attacks
Developers are already under pressure to secure production APIs, but now their local development environments are becoming attack surfaces too.
According to recent reports, a package from the popular Next.js framework contained a critical flaw that allowed remote code execution (RCE) attacks against the developer’s environment.
“it exposes a local HTTP server with an API endpoint at /inspector/graph/interact that accepts and executes JavaScript code within an unsafe sandbox environment”
Attackers could send commands to launch applications on the developer’s local machine and potentially steal sensitive data.
A key issue was the failure to validate the Origin header, which tells the API server where a request originated, allowing it to verify the legitimacy of requests and to enforce CORS protections. Without checking the header against a trusted list, the server accepted requests from any source, enabling exploitation from malicious origins.
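The fix is a simple allowlist check. This is a minimal sketch (the allowed origin is an assumption, and it is not the actual Next.js patch): anything not on the trusted list, including a missing Origin header, is rejected before the request body is processed.

```python
# Trusted origins for a local development server (assumed value).
ALLOWED_ORIGINS = {"http://localhost:3000"}

def check_origin(headers: dict) -> bool:
    # Browsers attach the Origin of the page that issued a cross-site
    # request; only requests from trusted pages should be served.
    return headers.get("Origin") in ALLOWED_ORIGINS

assert check_origin({"Origin": "http://localhost:3000"})
assert not check_origin({"Origin": "https://attacker.example"})
assert not check_origin({})   # missing Origin is rejected too
```

Note that rejecting a missing Origin is a deliberately strict choice for a sketch; a real server might exempt same-origin or non-browser clients.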
The case highlights key lessons: always apply strict validation to headers like Origin and Content-Type, and keep software dependencies updated to avoid known vulnerabilities.