Integrate Helicone with OpenAI to track, monitor, and optimize your GPT-4, GPT-3.5, and other OpenAI model usage.
## Quick Start

Integrate Helicone with OpenAI by changing your base URL and adding your API key:

```python
import os

from openai import OpenAI

client = OpenAI(
    api_key=os.getenv("OPENAI_API_KEY"),
    base_url="https://oai.helicone.ai/v1",
    default_headers={
        "Helicone-Auth": f"Bearer {os.getenv('HELICONE_API_KEY')}",
    },
)

# Use the client normally
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "user", "content": "What is the capital of France?"}
    ],
)
print(response.choices[0].message.content)
```
## Installation
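No Helicone-specific package is required — Helicone is a proxy, so the official OpenAI SDK is the only dependency:

```shell
pip install openai
```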
## Configuration

### Basic Setup

Only two changes are needed:

- Change the base URL to `https://oai.helicone.ai/v1`
- Add the `Helicone-Auth` header with your Helicone API key
### Environment Variables

Set up your environment:

```shell
OPENAI_API_KEY=sk-...
HELICONE_API_KEY=sk-helicone-...
```
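A missing key produces confusing downstream auth errors, so it can help to fail fast at startup. A small sketch (`require_env` is an illustrative helper, not part of any SDK):

```python
import os


def require_env(*names):
    """Raise early if any required environment variable is missing."""
    missing = [name for name in names if not os.getenv(name)]
    if missing:
        raise RuntimeError(f"Missing environment variables: {', '.join(missing)}")


# Call once at startup, before constructing the client:
# require_env("OPENAI_API_KEY", "HELICONE_API_KEY")
```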
## Features

### Streaming Support

Helicone fully supports streaming responses:

```python
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Write a story"}],
    stream=True,
    stream_options={"include_usage": True},
)

for chunk in response:
    # The final usage chunk has an empty choices list, so guard before indexing
    if chunk.choices and chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="")
```

Enable `"include_usage": True` in `stream_options` to capture token usage for streaming requests.
### Function Calling

Track function calls and tool usage:

```python
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get the current weather",
            "parameters": {
                "type": "object",
                "properties": {
                    "location": {
                        "type": "string",
                        "description": "The city and state",
                    },
                },
                "required": ["location"],
            },
        },
    }
]

# Helicone automatically logs function calls
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "What's the weather in Paris?"}],
    tools=tools,
)
```
### Vision Models

Log image inputs with vision-capable models such as GPT-4o:

```python
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "What's in this image?"},
                {
                    "type": "image_url",
                    "image_url": {
                        "url": "https://example.com/image.jpg",
                    },
                },
            ],
        }
    ],
)
# Helicone tracks image assets automatically
```
### Caching

Enable caching to reduce costs and latency:

```python
client = OpenAI(
    api_key=os.getenv("OPENAI_API_KEY"),
    base_url="https://oai.helicone.ai/v1",
    default_headers={
        "Helicone-Auth": f"Bearer {os.getenv('HELICONE_API_KEY')}",
        "Helicone-Cache-Enabled": "true",
    },
)
```
Learn more about caching →
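Helicone reports cache status back in the response headers; per its caching docs the header is `Helicone-Cache` with values like `HIT`/`MISS` (treat the exact name and values as assumptions and verify against current docs). A small helper sketch:

```python
def is_cache_hit(headers) -> bool:
    """Return True when Helicone reports the response was served from cache.

    Assumes the "Helicone-Cache" response header carries "HIT" or "MISS".
    """
    return headers.get("Helicone-Cache", "").upper() == "HIT"


# Usage with the SDK's raw-response interface:
# raw = client.chat.completions.with_raw_response.create(...)
# print(is_cache_hit(raw.headers))
```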
### Custom Properties

Add metadata to your requests:

```python
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Hello"}],
    extra_headers={
        "Helicone-Property-Session": "session-123",
        "Helicone-Property-User": "user@example.com",
        "Helicone-Property-Environment": "production",
    },
)
```

Learn more about custom properties →
## Advanced Usage

### User Tracking

Track requests by user:

```python
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Hello"}],
    extra_headers={
        "Helicone-User-Id": "user-123",
    },
)
```
### Session Tracking

Group related requests into sessions:

```python
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Hello"}],
    extra_headers={
        "Helicone-Session-Id": "session-abc",
        "Helicone-Session-Name": "Customer Support Chat",
        "Helicone-Session-Path": "/chat/support",
    },
)
```
### Prompt Tracking

Track prompt versions:

```python
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Hello"}],
    extra_headers={
        "Helicone-Prompt-Id": "prompt-v2",
    },
)
```
## Azure OpenAI

Helicone also supports Azure OpenAI deployments:

```python
import os

from openai import AzureOpenAI

# The SDK treats base_url and azure_endpoint as mutually exclusive, so route
# through Helicone via base_url and point Helicone at your Azure resource
# with the Helicone-Target-URL header.
client = AzureOpenAI(
    api_key=os.getenv("AZURE_OPENAI_API_KEY"),
    base_url="https://oai.helicone.ai/v1",
    api_version="2024-02-01",
    default_headers={
        "Helicone-Auth": f"Bearer {os.getenv('HELICONE_API_KEY')}",
        "Helicone-Target-URL": os.getenv("AZURE_OPENAI_ENDPOINT"),
    },
)
```
## Troubleshooting

### Requests not appearing in dashboard

Check these common issues:

- Verify your `HELICONE_API_KEY` is correct
- Ensure the base URL is `https://oai.helicone.ai/v1` (with `/v1`)
- Check that the `Helicone-Auth` header includes the `Bearer ` prefix
- Wait 10-30 seconds for requests to appear
Still not working? Check the response headers for `Helicone-Status` using the SDK's raw-response interface:

```python
raw = client.chat.completions.with_raw_response.create(...)
print(raw.headers.get("Helicone-Status"))
```
If you encounter SSL errors, ensure you’re using the latest version of the OpenAI SDK:

```shell
pip install --upgrade openai
```
Helicone does not add rate limits. If you’re hitting rate limits, they’re from OpenAI. The error message will be passed through from OpenAI’s API.
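Because rate-limit errors originate from OpenAI and pass through unchanged, you can handle them exactly as you would without Helicone. A minimal retry-with-backoff sketch (`retry_with_backoff` is illustrative, not a Helicone or OpenAI API; in practice catch `openai.RateLimitError` rather than a generic exception):

```python
import random
import time


def retry_with_backoff(call, max_retries=5, base_delay=1.0):
    """Retry a callable with exponential backoff and jitter."""
    for attempt in range(max_retries):
        try:
            return call()
        except Exception:  # in practice: except openai.RateLimitError
            if attempt == max_retries - 1:
                raise
            delay = base_delay * (2 ** attempt) + random.uniform(0, 0.1)
            time.sleep(delay)


# Usage:
# result = retry_with_backoff(lambda: client.chat.completions.create(...))
```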
## Next Steps

- **Custom Properties**: Add metadata to your requests
- **Caching**: Enable intelligent caching
- **Sessions**: Track multi-turn conversations
- **Dashboard**: Explore your analytics