Lunar Flow | Switching requests from the OpenAI API to Anthropic’s Claude APIs

This Lunar flow makes it simple to introduce an API “reroute” in your environment and switch LLM providers seamlessly and without any code changes.

In certain scenarios, you may wish to reroute the requests your environment makes from one LLM provider (such as OpenAI) to another (such as Anthropic). There are several reasons to do this:

  • Model Performance: You may find that one provider's model performs significantly better than another's for your specific use case, warranting a reroute to take advantage of the superior model capabilities.

  • Price: Pricing models can vary between providers, and rerouting requests may allow you to optimize costs by leveraging a more cost-effective option.

  • Context Window / Prompt Size: Different LLM providers may offer varying maximum prompt sizes or context windows, so rerouting could enable you to work with a model that supports your specific requirements more effectively.

  • Reliability: If you find that one provider's model is more reliable, consistent, or less prone to unexpected behaviors, rerouting your requests could help ensure more stable and trustworthy outputs.

  • Specialization: Certain providers may offer models that are specialized for specific domains or tasks, so rerouting could allow you to leverage a more tailored solution for your needs.


Relevant APIs:

  • OpenAI Chat Completion

  1. [REQUEST] Filter - we filter out requests made to the OpenAI Chat Completion API. It is important to include this filter to ensure you are only switching over requests made to that specific endpoint; any other requests made from your environment to the OpenAI API will continue flowing to OpenAI unimpacted.
    a. In advanced use cases, you may want to filter requests made using specific models - read on to the section “Filtering based on Model” below for more information.

  2. [REQUEST] Request Switch - this processor switches the request over from the OpenAI endpoint to the Anthropic endpoint. Because we use the request switch processor, the modified request is treated as a new request and passes through all the flows in your system. So if you have global flows or flows specific to the Anthropic API, they will be processed for this new, modified request.
    Thankfully, the OpenAI and Anthropic APIs offer very similar interfaces, so we only minimally rewrite the request. There are two major parameters we insert at this point:
    a. Model (in request body) - the Anthropic model you want to use. Refer to the list of models on Anthropic's website to find the ID of the appropriate model for you.
    b. X-API-Key (in request headers) - your Anthropic API key.
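To make the rewrite concrete, here is an illustrative Python sketch of what the request switch does to an OpenAI Chat Completions request. This is not Lunar's internal implementation - the helper name and default model ID are our own choices - but the endpoint URLs, the `x-api-key` header, and Anthropic's extra requirements (an `anthropic-version` header, a mandatory `max_tokens`, and a top-level `system` field instead of a system message) come from the two providers' public APIs:

```python
# Illustrative sketch of the request switch: an OpenAI Chat Completions
# request rewritten into an Anthropic Messages request.
# Not Lunar's implementation; model ID and key below are placeholders.

def switch_to_anthropic(url: str, headers: dict, body: dict,
                        anthropic_api_key: str,
                        anthropic_model: str = "claude-3-opus-20240229") -> tuple:
    """Rewrite an OpenAI chat completion request for Anthropic's endpoint."""
    new_url = "https://api.anthropic.com/v1/messages"
    new_headers = {
        "x-api-key": anthropic_api_key,     # replaces the OpenAI Bearer token
        "anthropic-version": "2023-06-01",  # required by the Anthropic API
        "content-type": "application/json",
    }
    new_body = dict(body)
    new_body["model"] = anthropic_model     # swap in the Anthropic model ID
    # Anthropic requires max_tokens; OpenAI treats it as optional.
    new_body.setdefault("max_tokens", 1024)
    # Anthropic takes the system prompt as a top-level field, not a message.
    system = [m["content"] for m in body.get("messages", [])
              if m.get("role") == "system"]
    if system:
        new_body["system"] = "\n".join(system)
        new_body["messages"] = [m for m in body["messages"]
                                if m.get("role") != "system"]
    return new_url, new_headers, new_body
```

The message format itself carries over unchanged, which is why the flow only needs to touch the URL, one header, and a couple of body fields.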

Filtering based on Model:

Sometimes you may only wish to route to Anthropic those requests that use specific OpenAI models. A common driver for this is the use of fine-tuned models: if some of your applications use fine-tuned OpenAI models, you may want to exclude them from the reroute entirely, as Anthropic does not currently offer model fine-tuning.

This is a common pattern in mixed environments where some applications use fine-tuned models and others use general-purpose models. You may find that you prefer Anthropic for the general-purpose models, but want to keep OpenAI (or map to a different provider altogether) for the fine-tuned ones.

Luckily, this can easily be achieved with this Lunar flow. You can modify the flow filter to capture only requests made to a specific model. Here is an example capturing only OpenAI requests made to the gpt-4 model:
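The exact filter syntax depends on your Lunar flow configuration, so the selection logic is sketched here in plain Python rather than in Lunar's own format: only chat completion requests whose body names exactly the gpt-4 model are picked up for the reroute, which also leaves fine-tuned variants (whose IDs use the `ft:` prefix) on OpenAI:

```python
import json

# Illustrative predicate only - in practice this filtering is expressed
# in your Lunar flow configuration, not in application code.
OPENAI_CHAT_URL = "https://api.openai.com/v1/chat/completions"

def should_reroute(url: str, raw_body: bytes, model: str = "gpt-4") -> bool:
    """Return True only for OpenAI chat completion requests using `model`."""
    if url != OPENAI_CHAT_URL:
        return False            # other OpenAI endpoints flow on untouched
    try:
        body = json.loads(raw_body)
    except ValueError:
        return False
    # Fine-tuned models look like "ft:gpt-4:org::id", so an exact match
    # keeps them out of the reroute.
    return body.get("model") == model
```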

About the OpenAI API:

OpenAI is a prominent artificial intelligence research company founded in 2015. They are known for developing some of the most advanced large language models (LLMs) in the world, including GPT-3 and the more recent GPT-4.

The GPT (Generative Pre-trained Transformer) models are a series of powerful autoregressive language models that can be used for a wide variety of natural language processing tasks. GPT-3, released in 2020, was a groundbreaking model that demonstrated impressive language generation capabilities. GPT-4, the latest iteration, was released in 2023 and is even more capable, with enhancements in areas like multimodal understanding and task-completion.

OpenAI offers access to their GPT models through the OpenAI API, which allows developers and researchers to integrate these advanced language models into their own applications and projects. The API provides a straightforward interface for sending text prompts to the models and receiving generated responses, enabling a wide range of use cases such as content creation, question answering, translation, and more.

By making their cutting-edge AI technology available through the API, OpenAI has empowered a global community of users to push the boundaries of what is possible with large language models.

About the Anthropic API:

Anthropic is an artificial intelligence research company founded in 2021 with the mission of developing safe and ethical AI systems. One of their flagship products is the Claude language model, which was released in 2023.

Claude is a large language model (LLM) that has been trained on a vast corpus of text data using advanced machine learning techniques. Like OpenAI's GPT models, Claude is capable of engaging in a wide variety of natural language processing tasks such as text generation, question answering, summarization, and more.

What sets Claude apart is Anthropic's focus on developing AI systems that are aligned with human values and interests. The Claude model has been imbued with a strong sense of ethics and a commitment to being helpful and truthful in its interactions. Anthropic has also placed a heavy emphasis on ensuring Claude's outputs are safe, coherent, and free of biases or harmful content.

Anthropic offers access to the Claude model through the Claude API, which allows developers and researchers to integrate this advanced language AI into their own applications. The API provides a simple and intuitive interface for sending prompts to the model and receiving high-quality responses tailored to the user's needs.

By making the Claude model available through the API, Anthropic hopes to empower a global community of users to explore the potential of safe and ethical AI technology. The Claude API represents an important step forward in the development of AI systems that can be reliably and responsibly used to augment and enhance human intelligence.

OpenAI API vs Anthropic API:

The OpenAI API and the Anthropic Claude API both provide access to powerful large language models, but there are some key differences in the capabilities and technical features of the underlying models.

Model Capabilities

  • GPT-3 vs GPT-4: OpenAI's GPT-3 model, released in 2020, was a groundbreaking achievement in language AI. However, the more recently released GPT-4 model demonstrates even more impressive natural language understanding and generation abilities across a wide range of tasks.

  • Claude: Anthropic's Claude model was purpose-built with a focus on safety, coherence, and ethical behavior. While perhaps not quite as broad in its raw capabilities as GPT-4, Claude is known for producing highly reliable and trustworthy outputs. Recent Claude 3 models are claimed to be as powerful as, or even more powerful than, the OpenAI GPT series of models (reference).

Technical API Features

  • Prompting: Both APIs allow users to provide natural language prompts to the models, which then generate relevant text responses. However, OpenAI's API provides more fine-grained control over prompting, including parameters like temperature, top-p, and presence penalty.

  • Multimodal Support: The OpenAI API recently added support for processing image inputs in addition to text, while the Anthropic Claude API gained image-input support with the Claude 3 models.

  • Pricing: Pricing models differ, with OpenAI offering a pay-as-you-go structure and Anthropic focusing more on custom enterprise-level pricing.

  • Latency: Users have generally reported lower latency and faster response times when using the Anthropic Claude API compared to the OpenAI API.
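The prompting differences above mostly come down to parameter names. As a rough sketch - based on each provider's public API reference, not on any official mapping table - the most common OpenAI sampling parameters translate to Anthropic's Messages API as follows, with the penalty parameters having no Anthropic counterpart (and Anthropic's own `top_k` having none on the OpenAI side):

```python
# Rough correspondence between OpenAI Chat Completions and Anthropic
# Messages API sampling parameters. None marks parameters with no
# direct Anthropic counterpart.
OPENAI_TO_ANTHROPIC = {
    "temperature": "temperature",   # randomness of sampling, both sides
    "top_p": "top_p",               # nucleus sampling, both sides
    "max_tokens": "max_tokens",     # optional for OpenAI, required for Anthropic
    "stop": "stop_sequences",       # custom stop strings
    "presence_penalty": None,       # no Anthropic equivalent
    "frequency_penalty": None,      # no Anthropic equivalent
}

def translate_params(openai_params: dict) -> dict:
    """Carry over only the sampling parameters Anthropic understands."""
    out = {}
    for key, value in openai_params.items():
        target = OPENAI_TO_ANTHROPIC.get(key)
        if target is not None:
            out[target] = value
    return out
```

Dropping the untranslatable parameters silently, as above, is one design choice; a stricter flow might instead reject requests that rely on them.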

Ultimately, the choice between the OpenAI API and the Anthropic Claude API will depend on the specific needs and requirements of the user's application or project. Developers should carefully evaluate the tradeoffs between model capabilities, technical features, and pricing to determine the best fit.

About Lunar:

Lunar is your go-to solution for egress API controls and API consumption management at scale.
With Lunar, engineering teams of any size gain instant, unified controls to effortlessly manage, orchestrate, and scale API egress traffic across environments - all without the need for code changes. Lunar is agnostic to any API provider and enables full egress traffic observability and real-time controls for cost spikes or issues in production, all through an egress proxy, an SDK installation, and a user-friendly UI management layer. Lunar offers solutions for quota management across environments, prioritizing API calls, centralizing API credentials management, and mitigating rate limit issues.
