```
Feature: data-extraction
Time Range: Last 7 days
Sort by: Recent first
```
2. Review Request/Response
Click on a request to see:
```
// Request
{
  "model": "gpt-4o",
  "messages": [
    { "role": "system", "content": "Extract data as JSON" },
    { "role": "user", "content": "Name: John Doe, Age: 30" }
  ]
}

// Response (incorrect)
"The person's name is John Doe and they are 30 years old."

// Expected
{"name": "John Doe", "age": 30}
```
3. Identify the Issue
The prompt is too vague. The model needs clearer instructions.
4. Test Fix in Dashboard
Use Helicone’s prompt testing feature or test locally:
```javascript
// Improved prompt
const response = await client.chat.completions.create(
  {
    model: "gpt-4o",
    messages: [
      {
        role: "system",
        content: `Extract data and respond ONLY with valid JSON.
Format: {"name": "string", "age": number}
Do not include any explanation or additional text.`,
      },
      { role: "user", content: "Name: John Doe, Age: 30" },
    ],
    temperature: 0, // More deterministic
  },
  {
    headers: {
      "Helicone-Property-Version": "v2.2.0", // Track new version
    },
  }
);
```
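Even with a JSON-only prompt, it is worth validating the model's output before using it downstream. A minimal sketch (the `parsePerson` helper and its field checks are illustrative, not part of Helicone):

```javascript
// Parse the model's raw text (e.g. response.choices[0].message.content)
// and verify it matches the expected {"name": string, "age": number} shape.
function parsePerson(raw) {
  let data;
  try {
    data = JSON.parse(raw);
  } catch {
    throw new Error("Model did not return valid JSON: " + raw);
  }
  if (typeof data.name !== "string" || typeof data.age !== "number") {
    throw new Error("JSON is missing required fields");
  }
  return data;
}

// With the improved prompt, the expected response parses cleanly:
const person = parsePerson('{"name": "John Doe", "age": 30}');
```

Failing loudly here surfaces regressions in your Helicone logs instead of silently propagating prose answers through your pipeline.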
5. Compare Versions
After deploying, compare old vs. new:
```
Filter 1: Version = v2.1.0
Filter 2: Version = v2.2.0
Compare success rates, costs, latencies
```
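If you export the filtered requests, the same comparison can be computed programmatically. A rough sketch, assuming a simplified log shape (`version`, `success`, `cost`, `latencyMs`) rather than Helicone's actual export format:

```javascript
// Aggregate success rate, average cost, and average latency per version tag.
function summarizeByVersion(logs) {
  const groups = {};
  for (const log of logs) {
    const g = (groups[log.version] ??= { n: 0, ok: 0, cost: 0, latency: 0 });
    g.n += 1;
    g.ok += log.success ? 1 : 0;
    g.cost += log.cost;
    g.latency += log.latencyMs;
  }
  const summary = {};
  for (const [version, g] of Object.entries(groups)) {
    summary[version] = {
      successRate: g.ok / g.n,
      avgCost: g.cost / g.n,
      avgLatencyMs: g.latency / g.n,
    };
  }
  return summary;
}
```

Comparing `summary["v2.1.0"]` against `summary["v2.2.0"]` gives you the same before/after view as the dashboard filters, in a form you can check in CI.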
```javascript
// Before: Including entire document
const largePrompt = `Analyze this document:\n${fullDocument}`;

// After: Summarize or chunk first
const optimizedPrompt = `Analyze this summary:\n${summarizeDocument(fullDocument)}`;
```
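The `summarizeDocument` helper is left to your application. One common building block is splitting the document on paragraph boundaries before summarizing each piece; a sketch, with the `maxChars` limit chosen arbitrarily:

```javascript
// Split text into chunks of at most maxChars characters,
// preferring paragraph boundaries (blank lines) as break points.
function chunkDocument(text, maxChars = 2000) {
  const paragraphs = text.split(/\n\s*\n/);
  const chunks = [];
  let current = "";
  for (const p of paragraphs) {
    // Start a new chunk if appending this paragraph would exceed the limit.
    if (current && current.length + p.length + 2 > maxChars) {
      chunks.push(current);
      current = "";
    }
    current = current ? current + "\n\n" + p : p;
  }
  if (current) chunks.push(current);
  return chunks;
}
```

Summarizing each chunk and concatenating the results keeps every prompt well under the context limit, which shows up directly as lower per-request cost in the dashboard.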
Problem: “User user-456 says chatbot gave wrong answer yesterday”
1. Find User's Requests
```
Filter by:
- User ID: user-456
- Date: [Yesterday]
- Feature: chatbot
```
2. Review Conversation
If using sessions:
Go to: SessionsFilter: User ID = user-456Find relevant session by timestamp
View entire conversation flow to understand context.
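If you are not yet tagging sessions, each turn of a conversation can be grouped by sending session headers with the request. A sketch assuming Helicone's `Helicone-User-Id` and `Helicone-Session-Id` headers (check the current docs for the exact names your plan supports); the `"chatbot"` label is a made-up example:

```javascript
// Build the per-request headers that group all turns of one
// conversation into a single Helicone session for a given user.
function sessionHeaders(userId, sessionId) {
  return {
    "Helicone-User-Id": userId,
    "Helicone-Session-Id": sessionId,
    "Helicone-Session-Name": "chatbot", // hypothetical feature label
  };
}

// Passed as request options, e.g.:
// await client.chat.completions.create(
//   { /* ... */ },
//   { headers: sessionHeaders("user-456", conversationId) }
// );
```

With these headers in place, the Sessions view above shows the whole conversation in order instead of isolated requests.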
3. Identify Issue
Review the specific request/response:
- Was context missing?
- Did the model hallucinate?
- Was there a misunderstanding?
Share findings with user:
```
Found the issue: The chatbot didn't have access to the latest
product pricing, which was updated yesterday morning. We're
adding a knowledge base refresh to fix this.
```
Correlate Helicone logs with your application logs:
```javascript
const appRequestId = generateId(); // Your app's ID

// Log in your application
logger.info("Starting LLM request", { requestId: appRequestId });

// Use same ID in Helicone
await client.chat.completions.create(
  { /* ... */ },
  {
    headers: {
      "Helicone-Request-Id": appRequestId,
    },
  }
);

// Later, search Helicone by your ID
// URL: https://helicone.ai/requests?requestId=your-app-id-123
```