Securing GenAI: Addressing the Top OWASP LLM Risks with Lunar’s AI Gateway

Securing GenAI: Addressing the Top OWASP LLM Risks with Lunar’s AI Gateway

Large Language Models introduce new security challenges, including prompt injection, data exposure, and misuse of model functionality. Lunar's AI Gateway provides a defense-in-depth approach to mitigate these risks, enabling safe and reliable use of generative AI in production environments.

Eyal Solomon, Co-Founder & CEO

Eyal Solomon, Co-Founder & CEO

June 4, 2025

MCP

MCPX

AI Gateways

As enterprises rapidly scale GenAI deployments, the conversation around AI security has moved from theoretical to urgent. AI-native systems are beginning to connect to sensitive internal APIs, invoke real-world actions, and interact with regulated data. The OWASP Top 10 for LLM Applications provides a much-needed framework for understanding these risks.

In this post, we’ll focus on five of the most crucial OWASP risks—the ones we at Lunar.dev see most frequently in the field, and the ones our platform is purpose-built to mitigate. We’ll explore how Lunar’s AI Gateway offers a security enforcement layer that governs outbound LLM traffic with precision, accountability, and policy-based controls.

The 10 OWASP LLM Risks (2024)

To establish context, here’s the full list:

  1. Prompt Injection
  2. Sensitive Information Disclosure
  3. Supply Chain Vulnerabilities
  4. Training Data Poisoning
  5. Improper Output Handling
  6. Excessive Agency
  7. System Prompt Leakage
  8. Embedding/Vector Weaknesses
  9. Misinformation
  10. Unbounded Consumption

Now let’s dive deeper into the five risks that Lunar.dev helps address most effectively:

1. Unbounded Consumption

The Risk: AI workloads are unpredictable by nature. A single prompt can fan out into recursive requests or trigger heavy-duty API calls. OWASP calls this "Denial of Wallet"—where excessive consumption creates real business risk via ballooning costs or resource exhaustion.

Real World: In multiple real-world deployments, we’ve seen LLM agents loop endlessly across APIs like OpenAI or Google Maps. In minutes, these agents rack up thousands of dollars in usage or cause infrastructure strain.

How Lunar Helps: Lunar’s Client-Side Limiting Flow enforces per-request token quotas, burst rate limits, and fair queuing. Teams can allocate usage budgets per agent or per application tier and monitor them in real time. Combined with our Quota Management strategies, this gives platform teams guardrails that align consumption with cost accountability.

2. Excessive Agency

The Risk: GenAI agents often operate autonomously, selecting tools and executing real API calls. But many platforms lack sufficient controls over which tools they can access or under what conditions.

Real World: A high-profile example from Simon Willison shows how an AI agent using GitHub's MCP plugin was exploited to access private repositories by invoking a tool that had overly broad access. This is a textbook case of excessive agency.

How Lunar Helps: With Endpoint Access Control and MCPX Access Policies, Lunar allows teams to define precise access rules by tool, method, user, or request context. Tools can be explicitly gated, scoped per customer, and dynamically enabled or disabled. Our model ensures that no agent can call sensitive internal APIs unless explicitly authorized—by policy, not assumption.

3. Prompt Injection

The Risk: LLMs are vulnerable to adversarial inputs—crafted text that hijacks the model’s behavior. This can override system prompts, exfiltrate data, or force unintended API calls. OWASP places this risk at the top of its list.

Real World: Users have embedded hidden instructions within natural-looking queries to jailbreak ChatGPT or manipulate agent behavior in complex chains.

How Lunar Helps: Our Data Sanitation Flow inspects all outbound traffic, filtering payloads for known injection patterns and suspicious structures. Combined with Transform Flow, we enable rewriting or rejecting prompts based on compliance, source, or structure. These layers let you enforce semantic boundaries before the prompt ever hits the model.

4. Sensitive Information Disclosure

The Risk: LLMs can inadvertently leak private or regulated data—either through prompts containing PII or through completions that echo sensitive material.

Real World: We’ve worked with companies who discovered their prompts included customer account info, secrets, and even bearer tokens—all of which were being sent to third-party LLM providers without redaction.

How Lunar Helps: Lunar’s Data Sanitation Flow and header filtering policies strip secrets, redact PII, and log outbound payloads for auditing. Teams can apply pattern-based scrubbing of both request and response data. Our gateway ensures that no sensitive fields ever transit to external LLMs unless explicitly whitelisted.

5. Improper Output Handling

The Risk: LLM completions can include unsafe commands, untrusted code, or malformed responses. When applications trust those outputs blindly—e.g. passing them to tools—unexpected behavior or data corruption can result.

How Lunar Helps: With Transform Flow, Lunar intercepts and validates LLM outputs before any downstream system acts on them. Responses can be normalized, sanitized, or blocked entirely based on business rules. This gives teams last-mile enforcement at the point of response delivery, a key step in building safe AI-powered automation.

Why This Matters: The Egress Layer Is the Security Layer

Most organizations are focused on model safety—prompt engineering, evals, fine-tuning. But the real-world risks that OWASP identifies don’t live inside the model—they live at the integration points between the LLM and the external world.

That’s why Lunar focuses on egress control—governing the requests your systems send to models, the APIs your agents can call, and the payloads that flow through both.

As security frameworks for GenAI continue to mature, we believe every organization will need:

  • Consumption controls to manage cost and scale
  • Access controls to prevent agent misuse
  • Data policies to protect information boundaries
  • Traffic observability to detect and trace issues across LLM pipelines

We’ve built Lunar.dev to provide exactly that.

Next Steps: Make AI Safer in Production

If your team is moving beyond experimentation and into real GenAI applications, it’s time to invest in security infrastructure purpose-built for AI.

Try the Lunar AI Gateway, or book a demo to learn how we help platform teams secure their AI stack.

‍

Ready to Start your journey?

Manage a single service and unlock API management at scale