Next Steps for API Consumption Policies and Chaining Remedies


Here’s the TL;DR: simplified consumption policies give developers an indispensable tool for quickly addressing the root causes of production issues.

Eyal Solomon, Co-Founder & CEO


August 7, 2023

Optimization

Quick error recovery is essential for companies that rely heavily on real-time APIs (flight and mobility tech, demand tech, payments, blockchain, and more) while scaling at the same time.

During our journey building lunar.dev, we noticed a pattern that kept repeating itself. Most companies that built a middleware component for third-party API consumption (usually for optimization and quick issue recovery) faced a continuous maintenance burden. To simplify this daunting task, we started thinking in terms of composable plugins rather than standalone policies, and we began to see their potential as a sort of recipe or blueprint that could be emulated and then shared or configured with a click, no code needed.

This post explores the uses of API consumption policies and the case for using plugins to easily implement remediation in production. 

What is API Error Remediation? 

Maintaining API performance in production and preventing incidents usually means going deep into middleware and configuration logic. By the time we find the incident and work out what logic should be implemented, we’re crossing our fingers and hoping the issue gets solved before a Slack message arrives from someone on the customer success team.

The first step to fixing errors in production is identifying them and understanding their root causes. Interacting with an API often produces unwanted errors in production for various reasons: incorrect input, server issues and downtime, authentication failures, consumption misconfigurations that hit rate limits or quotas, and many other scale errors you couldn't foresee without the heavy traffic of production.

Remediation typically involves: 

  • Understanding the error code returned by the failed API request
  • Identifying the root cause
  • Applying a temporary, ad-hoc fix if the issue is urgent, until a permanent solution is designed, built, and deployed in a later version
  • Implementing the relevant permanent fixes
  • Monitoring and alerting
  • Setting up resilience mechanisms that either prevent the error from recurring or surface it immediately when it does; this takes time and effort to reach
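The steps above can be sketched as a small helper that classifies the returned status code and picks a remediation path. This is a generic illustration, not Lunar's implementation; `request_fn`, the retry count, and the status thresholds are assumptions made for the example.

```python
import random
import time

def call_with_remediation(request_fn, max_retries=3):
    """Classify the error code, remediate where possible, escalate otherwise.

    `request_fn` is a hypothetical callable returning (status_code, body).
    """
    for attempt in range(max_retries + 1):
        status, body = request_fn()
        if status < 400:
            return body                      # success: nothing to remediate
        if status == 429 or status >= 500:   # rate limit / server issue: transient
            # Exponential backoff with jitter before retrying.
            time.sleep((2 ** attempt) * 0.1 + random.random() * 0.05)
            continue
        if status == 401:                    # authentication failure
            raise PermissionError("credentials need rotation")
        raise ValueError(f"client error {status}: fix the request itself")
    raise TimeoutError("retries exhausted; escalate via monitoring/alerting")
```

The transient branch retries with backoff; everything else fails fast so the root cause surfaces immediately instead of being masked.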

What Are API Consumption Policies?

Are you hitting rate limits when consuming the Azure API, or making too many calls to Google Maps? Do you need to manage tokens in real time regardless of downtime imposed by providers?

API consumption policies typically require the consumer to comply with the provider’s usage guidelines. The consumer puts these policies in place to protect their application from misuse or excessive consumption.


Policies include business logic and mechanisms to limit or configure specific consumption behaviors. 

Entry-Level Policies: define a specific policy on an API provider's endpoint.

Here are some usage examples: 

  • Rate Limiting - consumers implement throttling logic on their end to restrict the number of API calls within a specific time frame, avoiding the API provider’s rate limits.
  • Caching - this reduces the number of redundant API calls and improves application performance for the consumer, especially when consumption is scattered across multiple services.
  • Error Unification - instead of trying to address every different error you catch with endless switch cases in your service, move the error-handling logic outside of your code, write it once, and support all your services.
  • API Key Management - instead of handling authentication, secret management, and access token validation and invalidation manually in your code, do it once with Lunar and provide a secure, always-authenticated connection to your services.
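As an illustration of the first item, client-side throttling often boils down to a sliding-window limiter like the following sketch. The class name and parameters are hypothetical, not Lunar's actual configuration schema.

```python
import time
from collections import deque

class Throttler:
    """Sliding-window rate limiter: at most `max_calls` per `period` seconds."""

    def __init__(self, max_calls, period):
        self.max_calls = max_calls
        self.period = period
        self.calls = deque()          # timestamps of recent calls

    def allow(self, now=None):
        now = time.monotonic() if now is None else now
        # Drop timestamps that have aged out of the window.
        while self.calls and now - self.calls[0] >= self.period:
            self.calls.popleft()
        if len(self.calls) < self.max_calls:
            self.calls.append(now)
            return True
        return False                  # caller should queue or back off
```

A caller checks `allow()` before each outbound request; a `False` means the call should be queued or delayed rather than sent and rejected by the provider.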

Advanced Policies: 

The more complex your service and its scaling needs, the more advanced your policies and logic will need to be.

Such examples include:

  • Distributed throttling
  • Distributed caching
  • Unified error handling
  • Prioritized queueing for API calls
  • Unified token management
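To make one of these concrete, prioritized queueing for API calls can be sketched with a binary heap that drains the most urgent calls first. The names here are invented for the example and are not Lunar's API.

```python
import heapq
import itertools

class PriorityCallQueue:
    """Drain API calls highest-priority first (lower number = more urgent).

    A tiebreaker counter preserves FIFO order among equal-priority calls.
    """

    def __init__(self):
        self._heap = []
        self._counter = itertools.count()

    def enqueue(self, priority, call):
        heapq.heappush(self._heap, (priority, next(self._counter), call))

    def dequeue(self):
        _, _, call = heapq.heappop(self._heap)
        return call
```

Under a rate limit, a queue like this lets revenue-critical calls (say, payments) go out before background traffic instead of competing with it on a first-come basis.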

Policy chaining is one example of the most advanced level of policies: combining the effects of multiple remedies on a single request or response. With Lunar, this means users can define multiple remedies for a single endpoint; Lunar then prioritizes the remedies based on the type of action they take and combines their effects in a specific order.

What this is useful for:

  • Enhanced flexibility: By allowing the combination of multiple remedies, Lunar provides users with greater flexibility in customizing the behavior of requests and responses. Different remedies can be applied to different endpoints, and their effects can be combined to achieve desired outcomes.

  • Efficient request processing: Prioritizing policies ensures that the most critical actions are taken first, such as generating an immediate response or modifying requests/responses. By prioritizing remedial policy plugins, Lunar optimizes the processing of requests and responses, reducing latency and improving overall performance.

  • Customizable behavior: The ability to define and chain remedies allows users to tailor the behavior of Lunar according to their specific use cases and requirements. 

API Account Orchestration and Caching

In this example, Lunar Proxy applies two remedies:

  • AccountOrchestration: It modifies the request by adding or overwriting the Authorization header with a token from the list of tokens defined for the account in a round-robin manner. If the request already contains an Authorization header, it is overwritten. If the request does not contain an Authorization header, it is added. The next remedy in the chain will see the modified request.

  • Caching: It checks if there's a cached response for the current Authorization header that is less than an hour old and within the allowed cache size. If it exists, Lunar Proxy immediately serves this cached response, skipping the subsequent remedies. If there's no valid cached response, the request is forwarded to the original API provider. Note that the Authorization header is affected by the previous remedy, so the cache key will be different for each account.
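A rough sketch of how these two remedies might compose, in the order described above: the token is rotated first, so the cache is keyed per account. All class and function names are illustrative; this is not Lunar Proxy's actual code.

```python
import itertools
import time

class AccountOrchestration:
    """Rotate Authorization tokens round-robin across requests."""

    def __init__(self, tokens):
        self._tokens = itertools.cycle(tokens)

    def apply(self, request):
        # Add or overwrite the Authorization header with the next token.
        request["headers"]["Authorization"] = next(self._tokens)
        return request

class Caching:
    """Serve a stored response keyed by the current Authorization header."""

    def __init__(self, ttl_seconds=3600):
        self._ttl = ttl_seconds
        self._store = {}   # auth header -> (timestamp, response)

    def lookup(self, request, now=None):
        now = time.monotonic() if now is None else now
        hit = self._store.get(request["headers"]["Authorization"])
        if hit and now - hit[0] < self._ttl:
            return hit[1]          # fresh cached response: skip upstream
        return None

    def store(self, request, response, now=None):
        now = time.monotonic() if now is None else now
        self._store[request["headers"]["Authorization"]] = (now, response)

def send(request, orchestrator, cache, upstream):
    """Apply the chained remedies in order: rotate the token, then check cache."""
    request = orchestrator.apply(request)
    cached = cache.lookup(request)
    if cached is not None:
        return cached
    response = upstream(request)
    cache.store(request, response)
    return response
```

Because orchestration runs before caching, each account gets its own cache entries, exactly as the note above describes.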

What's the Future of Policies?

The future of API consumption management includes: 

1. Modular composition of plugins - each consumption policy is a Lego brick, a building block. Composing different, configurable building blocks will address the vast range of problems in the ever-growing API economy.

2. Dynamic plugin configuration - today, we are building a shared understanding of how to optimize each API provider we consume. In time, we will have a large enough dataset to enable machine learning that adjusts plugin configuration dynamically, so API consumption responds to the provider's behavior in real time.

3. Decoupled business logic from API providers - today, the way you consume data is specific to the API you've integrated with. In the future, your integration will be a mere facade over the API providers. You won't need to think about whether a given provider is efficient, because dynamic load balancing will route calls between providers according to your needs, taking costs, latency, availability, and other parameters into account. The future will decouple business logic from API providers.

Lunar.dev is changing the way companies consume third-party APIs.

Get in touch at info@lunar.dev to learn more about lunar.dev's policy plugins.

Ready to start your journey?

Manage a single service and unlock API management at scale