Helicone supports 100+ LLM providers and integrates seamlessly with popular AI frameworks. Choose the integration method that works best for your setup.

Supported Providers

Helicone works with all major LLM providers:

OpenAI

GPT-4, GPT-3.5, and more

Anthropic

Claude models

Azure OpenAI

Enterprise OpenAI deployment

Google Vertex AI

Gemini and PaLM models

AWS Bedrock

Multiple model providers

Together AI

Open source models

Integration Methods

Proxy Integration

The simplest way to integrate: point your API base URL at Helicone's proxy and add the Helicone auth header:
import os

from openai import OpenAI

client = OpenAI(
    api_key=os.getenv("OPENAI_API_KEY"),
    base_url="https://oai.helicone.ai/v1",
    default_headers={
        "Helicone-Auth": f"Bearer {os.getenv('HELICONE_API_KEY')}",
    },
)
Benefits:
  • No code changes beyond configuration
  • Works with any SDK
  • Real-time logging
  • Full request/response capture

Async Integration

Log requests asynchronously without affecting latency:
import { HeliconeAsyncLogger } from '@helicone/helicone';
import { OpenAI } from 'openai';

const logger = new HeliconeAsyncLogger({
  apiKey: process.env.HELICONE_API_KEY,
  providers: {
    openAI: OpenAI,
  },
});
logger.init();

const openai = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY,
});

// Use OpenAI normally - logging happens in the background
Benefits:
  • Zero latency impact
  • Uses your existing provider keys
  • Background processing

Gateway Integration

Route through multiple providers with failover and load balancing:
const openai = new OpenAI({
  apiKey: process.env.HELICONE_API_KEY,
  baseURL: "https://ai-gateway.helicone.ai",
});

// Use multiple models with automatic fallback
const response = await openai.chat.completions.create({
  model: "claude-3-7-sonnet-20250219/anthropic,gpt-4o-mini",
  messages: [{ role: "user", content: "Hello!" }],
});
Benefits:
  • Automatic failover between providers
  • Load balancing
  • Single API key for all providers
  • Cost optimization
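As the example shows, the `model` field is a comma-separated priority list, where an entry may carry a `/provider` qualifier; the first model is tried first and later entries are used on failover. A tiny helper can assemble that string (an illustrative sketch; `fallback_models` is not part of the gateway API):

```python
def fallback_models(*models: str) -> str:
    """Join model names into the gateway's comma-separated fallback list.

    The first entry is tried first; later entries are used on failover.
    """
    return ",".join(models)


# fallback_models("claude-3-7-sonnet-20250219/anthropic", "gpt-4o-mini")
# → "claude-3-7-sonnet-20250219/anthropic,gpt-4o-mini"
```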

Framework Integrations

Helicone integrates with popular AI frameworks:

LangChain

Full chain observability

Vercel AI SDK

Streaming and edge support

LlamaIndex

RAG pipeline tracking

Instructor

Structured output logging

Quick Start by Provider

OpenAI

import os

from openai import OpenAI

client = OpenAI(
    api_key=os.getenv("OPENAI_API_KEY"),
    base_url="https://oai.helicone.ai/v1",
    default_headers={
        "Helicone-Auth": f"Bearer {os.getenv('HELICONE_API_KEY')}",
    },
)
Full OpenAI guide →
Anthropic

import os

from anthropic import Anthropic

client = Anthropic(
    api_key=os.getenv("ANTHROPIC_API_KEY"),
    base_url="https://anthropic.helicone.ai",
    default_headers={
        "Helicone-Auth": f"Bearer {os.getenv('HELICONE_API_KEY')}",
    },
)
Full Anthropic guide →
Azure OpenAI

import os

from openai import AzureOpenAI

client = AzureOpenAI(
    api_key=os.getenv("AZURE_OPENAI_API_KEY"),
    # Route requests through Helicone; the real Azure endpoint goes
    # in the Helicone-Target-URL header below. (AzureOpenAI does not
    # accept base_url and azure_endpoint together.)
    azure_endpoint="https://oai.helicone.ai",
    api_version="2024-02-01",
    default_headers={
        "Helicone-Auth": f"Bearer {os.getenv('HELICONE_API_KEY')}",
        "Helicone-Target-URL": os.getenv("AZURE_OPENAI_ENDPOINT"),
    },
)

Getting Your API Key

To use any integration method, you’ll need a Helicone API key:
1. Sign up: Create an account at helicone.ai
2. Generate API key: Go to Settings > API Keys and create a new key
3. Store securely: Add the key to your environment variables:

export HELICONE_API_KEY="sk-helicone-..."
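Before wiring up an integration, it can help to verify the key is actually exported. The sketch below assumes the `sk-helicone-` prefix shown in the example above; `check_helicone_key` is a hypothetical helper name:

```python
import os


def check_helicone_key() -> str:
    """Return the configured Helicone API key, failing loudly if missing."""
    key = os.environ.get("HELICONE_API_KEY", "")
    if not key.startswith("sk-helicone-"):
        raise RuntimeError("HELICONE_API_KEY is missing or malformed")
    return key
```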

Next Steps

OpenAI Integration

Complete setup guide for OpenAI

Anthropic Integration

Integrate with Claude models

Custom Headers

Add metadata and properties

Caching

Enable request caching