
Overview

Lava’s forward proxy sits transparently between your application and AI providers, automatically tracking usage and costs without requiring provider-specific integrations. This guide shows you how to route requests through the proxy with working code examples for multiple providers.

What you’ll learn:
  • How to construct proxy URLs with URL encoding
  • Provider-specific request examples (OpenAI, Anthropic, Google)
  • Extracting request IDs for debugging
  • Error handling and troubleshooting strategies
Prerequisites: You need a Lava forward token. New to Lava? See Quickstart: Route Your First Request for complete environment setup, including .env.local configuration and forward token creation.

URL Construction

The Forward Proxy Pattern

Lava uses a query parameter (?u=) to specify the upstream provider endpoint:
https://api.lavapayments.com/v1/forward?u=<ENCODED_PROVIDER_URL>
Key requirements:
  • The provider URL must be URL-encoded
  • Your forward token goes in the Authorization header
  • Request body remains unchanged from provider’s API spec

URL Encoding Helper

Always URL-encode the provider endpoint to handle special characters correctly:
function buildProxyUrl(providerUrl) {
  const baseUrl = 'https://api.lavapayments.com/v1/forward';
  const encodedUrl = encodeURIComponent(providerUrl);
  return `${baseUrl}?u=${encodedUrl}`;
}

// Example usage
const openaiUrl = buildProxyUrl('https://api.openai.com/v1/chat/completions');
// Result: https://api.lavapayments.com/v1/forward?u=https%3A%2F%2Fapi.openai.com%2Fv1%2Fchat%2Fcompletions
Most HTTP libraries encode query parameters automatically, but when you build the full proxy URL by hand you must call encodeURIComponent() explicitly.
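If you prefer to let the platform do the encoding, the standard WHATWG URL API produces the same result; this variant of the helper is a sketch using only built-in APIs:

```javascript
// Alternative: build the proxy URL with the URL API.
// searchParams.set() percent-encodes the value automatically,
// so no explicit encodeURIComponent() call is needed.
function buildProxyUrlWithUrlApi(providerUrl) {
  const url = new URL('https://api.lavapayments.com/v1/forward');
  url.searchParams.set('u', providerUrl);
  return url.toString();
}

// Example usage
const openaiProxyUrl = buildProxyUrlWithUrlApi('https://api.openai.com/v1/chat/completions');
// Result: https://api.lavapayments.com/v1/forward?u=https%3A%2F%2Fapi.openai.com%2Fv1%2Fchat%2Fcompletions
```

Note that searchParams serializes spaces as `+` rather than `%20`; for provider endpoints (which contain no spaces) the output is identical to encodeURIComponent().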

Provider Examples

OpenAI Chat Completions

Route OpenAI chat requests through Lava:
async function callOpenAI(forwardToken, messages) {
  const proxyUrl = 'https://api.lavapayments.com/v1/forward?u=' +
    encodeURIComponent('https://api.openai.com/v1/chat/completions');

  const response = await fetch(proxyUrl, {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      'Authorization': `Bearer ${forwardToken}`
    },
    body: JSON.stringify({
      model: 'gpt-4o-mini',
      messages: messages,
      temperature: 0.7
    })
  });

  if (!response.ok) {
    const error = await response.text();
    throw new Error(`OpenAI request failed: ${error}`);
  }

  const data = await response.json();

  // Extract request ID for debugging
  const requestId = response.headers.get('x-lava-request-id');

  return {
    data,
    requestId,
    usage: data.usage // OpenAI's native usage field
  };
}

// Example usage
const result = await callOpenAI(process.env.LAVA_FORWARD_TOKEN, [
  { role: 'system', content: 'You are a helpful assistant.' },
  { role: 'user', content: 'Explain quantum computing in simple terms.' }
]);

console.log('Response:', result.data.choices[0].message.content);
console.log('Usage:', result.usage);
console.log('Request ID:', result.requestId);
// Output:
// Usage: { prompt_tokens: 23, completion_tokens: 133, total_tokens: 156 }
What changed:
  • ✅ URL: https://api.openai.com/v1/chat/completions → Lava proxy URL with the encoded ?u= parameter
  • ✅ Authorization: OpenAI API key → Lava forward token
  • ✅ Request body: unchanged (identical to OpenAI’s API spec)

Anthropic Messages API

Route Anthropic Claude requests through Lava:
async function callAnthropic(forwardToken, messages) {
  const proxyUrl = 'https://api.lavapayments.com/v1/forward?u=' +
    encodeURIComponent('https://api.anthropic.com/v1/messages');

  const response = await fetch(proxyUrl, {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      'Authorization': `Bearer ${forwardToken}`,
      'anthropic-version': '2023-06-01'  // Required by Anthropic
    },
    body: JSON.stringify({
      model: 'claude-3-5-sonnet-20241022',
      max_tokens: 1024,
      messages: messages
    })
  });

  if (!response.ok) {
    const error = await response.text();
    throw new Error(`Anthropic request failed: ${error}`);
  }

  const data = await response.json();

  // Extract request ID for debugging
  const requestId = response.headers.get('x-lava-request-id');

  return {
    data,
    requestId,
    usage: data.usage // Anthropic's native usage field
  };
}

// Example usage
const result = await callAnthropic(process.env.LAVA_FORWARD_TOKEN, [
  { role: 'user', content: 'Write a haiku about coding.' }
]);

console.log(result.data.content[0].text);
Anthropic-specific headers:
  • anthropic-version: Anthropic’s required API version header
  • Lava passes provider-specific headers through to the provider unchanged
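Anthropic’s usage field reports input and output tokens separately rather than a single total; a small helper can sum them when you need one number (field names follow Anthropic’s Messages API response):

```javascript
// Anthropic reports usage as { input_tokens, output_tokens } with
// no total field; sum the two for a combined count.
function anthropicTotalTokens(usage) {
  const input = usage?.input_tokens ?? 0;
  const output = usage?.output_tokens ?? 0;
  return input + output;
}

// Example usage
// anthropicTotalTokens({ input_tokens: 12, output_tokens: 48 }) → 60
```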

Google Gemini

Route Google Gemini requests through Lava:
async function callGemini(forwardToken, prompt) {
  // Note: Google uses API key in URL, not Authorization header
  const geminiEndpoint = 'https://generativelanguage.googleapis.com/v1beta/models/gemini-pro:generateContent';

  const proxyUrl = 'https://api.lavapayments.com/v1/forward?u=' +
    encodeURIComponent(geminiEndpoint);

  const response = await fetch(proxyUrl, {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      'Authorization': `Bearer ${forwardToken}`
    },
    body: JSON.stringify({
      contents: [{
        parts: [{ text: prompt }]
      }]
    })
  });

  if (!response.ok) {
    const error = await response.text();
    throw new Error(`Gemini request failed: ${error}`);
  }

  return await response.json();
}

// Example usage
const result = await callGemini(
  process.env.LAVA_FORWARD_TOKEN,
  'Explain the difference between async and await in JavaScript.'
);

console.log(result.candidates[0].content.parts[0].text);
Google-specific behavior:
  • Google’s API typically uses ?key= query parameter for authentication
  • Lava handles provider authentication automatically using your forward token
  • No need to manage provider-specific API keys
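Gemini reports token counts in a usageMetadata field rather than a top-level usage object; a sketch of reading it into the same shape used elsewhere in this guide (field names follow the public generateContent response format):

```javascript
// Gemini's generateContent response carries token counts in
// usageMetadata: promptTokenCount, candidatesTokenCount, totalTokenCount.
function geminiUsage(responseBody) {
  const meta = responseBody?.usageMetadata ?? {};
  return {
    promptTokens: meta.promptTokenCount ?? 0,
    completionTokens: meta.candidatesTokenCount ?? 0,
    totalTokens: meta.totalTokenCount ?? 0
  };
}
```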

Response Header Parsing

Request ID Header

Lava adds a request ID header to every response for debugging and tracking:
function extractRequestId(response) {
  // Request ID for debugging and dashboard correlation
  return response.headers.get('x-lava-request-id') || null;
}
Usage tracking: Lava doesn’t add usage information to response headers. Instead, usage data comes from the provider’s response body (e.g., OpenAI’s usage field) and is tracked automatically in your Lava dashboard at Monetize > Explore.

Usage Tracking Helper

Create a reusable helper for consistent request tracking:
class LavaClient {
  constructor(forwardToken) {
    this.forwardToken = forwardToken;
    this.baseUrl = 'https://api.lavapayments.com/v1/forward';
  }

  buildProxyUrl(providerUrl) {
    return `${this.baseUrl}?u=${encodeURIComponent(providerUrl)}`;
  }

  extractRequestId(response) {
    return response.headers.get('x-lava-request-id') || null;
  }

  async request(providerUrl, options = {}) {
    const proxyUrl = this.buildProxyUrl(providerUrl);

    const response = await fetch(proxyUrl, {
      ...options,
      headers: {
        ...options.headers,
        'Authorization': `Bearer ${this.forwardToken}`
      }
    });

    if (!response.ok) {
      const error = await response.text();
      throw new Error(`Proxy request failed: ${error}`);
    }

    const data = await response.json();
    const requestId = this.extractRequestId(response);

    return {
      data,
      requestId,
      usage: data.usage // From provider's response body
    };
  }
}

// Example usage
const lava = new LavaClient(process.env.LAVA_FORWARD_TOKEN);

const { data, requestId, usage } = await lava.request(
  'https://api.openai.com/v1/chat/completions',
  {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      model: 'gpt-4o-mini',
      messages: [{ role: 'user', content: 'Hello!' }]
    })
  }
);

console.log('Request ID:', requestId);
console.log('Usage:', usage);
// Output:
// Request ID: req_abc123...
// Usage: { prompt_tokens: 10, completion_tokens: 5, total_tokens: 15 }

Error Handling

Common Error Scenarios

Handle different error types with appropriate retry logic:
async function makeProxyRequest(forwardToken, providerUrl, body) {
  const proxyUrl = 'https://api.lavapayments.com/v1/forward?u=' +
    encodeURIComponent(providerUrl);

  try {
    const response = await fetch(proxyUrl, {
      method: 'POST',
      headers: {
        'Content-Type': 'application/json',
        'Authorization': `Bearer ${forwardToken}`
      },
      body: JSON.stringify(body)
    });

    // Parse error responses
    if (!response.ok) {
      const errorData = await response.json().catch(() => ({}));

      switch (response.status) {
        case 401:
          throw new Error('Invalid forward token. Check your credentials.');

        case 402:
          throw new Error('Insufficient balance. Please add funds to your Lava wallet.');

        case 429:
          // Rate limit - retry with exponential backoff
          throw new Error('Rate limit exceeded. Please retry after a delay.');

        case 500:
        case 502:
        case 503:
          // Provider error - retry may help
          throw new Error(`Provider error: ${errorData.message || response.statusText}`);

        default:
          throw new Error(`Request failed: ${response.statusText}`);
      }
    }

    return await response.json();

  } catch (error) {
    console.error('Proxy request error:', error.message);
    throw error;
  }
}

Retry Logic with Exponential Backoff

Implement retry logic for transient errors:
async function fetchWithRetry(
  forwardToken,
  providerUrl,
  body,
  maxRetries = 3
) {
  let lastError;

  for (let attempt = 0; attempt < maxRetries; attempt++) {
    try {
      return await makeProxyRequest(forwardToken, providerUrl, body);
    } catch (error) {
      lastError = error;

      // Don't retry on auth or balance errors
      if (error.message.includes('Invalid forward token') ||
          error.message.includes('Insufficient balance')) {
        throw error;
      }

      // Retry on rate limits and server errors
      if (attempt < maxRetries - 1) {
        const delay = Math.pow(2, attempt) * 1000; // Exponential backoff
        console.log(`Retry attempt ${attempt + 1} after ${delay}ms delay...`);
        await new Promise(resolve => setTimeout(resolve, delay));
      }
    }
  }

  throw lastError;
}

Troubleshooting

Cause: Provider URL not properly encoded in the ?u= parameter
Solution:
  • Always use encodeURIComponent() on the full provider URL
  • Check that the encoded URL doesn’t have double encoding
  • Verify the URL structure: https://api.lavapayments.com/v1/forward?u=<ENCODED_URL>
Example:
// ✅ Correct
const url = 'https://api.lavapayments.com/v1/forward?u=' +
  encodeURIComponent('https://api.openai.com/v1/chat/completions');

// ❌ Wrong (not encoded)
const url = 'https://api.lavapayments.com/v1/forward?u=https://api.openai.com/v1/chat/completions';
Cause: The provider (OpenAI, Anthropic, etc.) returned an error
Solution:
  • Check the error message for provider-specific issues
  • Common provider errors: invalid model name, missing required fields, malformed request body
  • Lava passes through provider error messages unchanged
  • Verify your request body matches the provider’s API specification
Debugging:
  • Look at the error response body for provider details
  • Check provider’s API documentation for required fields
  • Test the same request directly with the provider to isolate Lava vs provider issues
Cause: Request ID header not being read correctly
Solution:
  • The request ID appears on all responses (successful or failed)
  • Check that you’re reading headers from the Response object, not the parsed JSON
  • Header name is lowercase: x-lava-request-id
Example:
const response = await fetch(proxyUrl, { /* ... */ });

// ✅ Correct: Read headers from the Response object
const requestId = response.headers.get('x-lava-request-id');

// ❌ Wrong: Headers aren't part of the parsed JSON body
const data = await response.json();
const missingId = data.headers; // undefined
Cause: Attempting to call the Lava proxy from client-side JavaScript
Solution:
  • Lava blocks direct browser requests to prevent token exposure
  • Always call Lava from your backend server
  • Create a backend API route that proxies requests to Lava
  • Your frontend calls your backend, your backend calls Lava
Architecture:
Frontend → Your Backend API → Lava Proxy → AI Provider
This keeps your forward token secure on the server.
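The flow above can be sketched as a pure helper your backend route would call before forwarding to Lava. The function name and client request shape are illustrative, not part of Lava’s API; only the proxy URL and Authorization header follow the pattern from this guide:

```javascript
// Builds the server-side request your backend sends to Lava on
// behalf of a browser client. The forward token is read on the
// server and never appears in frontend code.
function buildBackendForwardRequest(clientBody, forwardToken) {
  const proxyUrl = 'https://api.lavapayments.com/v1/forward?u=' +
    encodeURIComponent('https://api.openai.com/v1/chat/completions');

  return {
    url: proxyUrl,
    options: {
      method: 'POST',
      headers: {
        'Content-Type': 'application/json',
        'Authorization': `Bearer ${forwardToken}`
      },
      body: JSON.stringify({
        model: 'gpt-4o-mini',
        messages: clientBody.messages
      })
    }
  };
}

// Example usage inside a backend route handler:
//   const { url, options } = buildBackendForwardRequest(req.body, process.env.LAVA_FORWARD_TOKEN);
//   const upstream = await fetch(url, options);
```

Keeping the builder pure makes it easy to unit test without a network call.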
How usage tracking works:
  • Usage data comes from the provider’s response body, not Lava headers
  • Each provider has their own usage format in the response
  • Lava automatically tracks all usage in your dashboard at Monetize > Explore
Provider-specific usage formats:
  • OpenAI: usage.total_tokens in response body
  • Anthropic: usage.input_tokens + usage.output_tokens in response body
  • Google: Token counts in usageMetadata field
Where to view usage:
  • Real-time: Parse the provider’s native usage field from response body
  • Historical: View all requests in Lava dashboard at Monetize > Explore
  • Request correlation: Use x-lava-request-id header to match API responses to dashboard entries
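As a sketch, the three provider formats listed above can be normalized into one shape for your own logging; the helper name and output fields are our own convention, while the per-provider field names follow each provider’s documented response body:

```javascript
// Normalize OpenAI, Anthropic, and Google usage formats into
// one { promptTokens, completionTokens, totalTokens } object.
function normalizeUsage(provider, body) {
  switch (provider) {
    case 'openai': {
      const u = body.usage ?? {};
      return {
        promptTokens: u.prompt_tokens ?? 0,
        completionTokens: u.completion_tokens ?? 0,
        totalTokens: u.total_tokens ?? 0
      };
    }
    case 'anthropic': {
      // Anthropic has no total field; sum input and output.
      const u = body.usage ?? {};
      const input = u.input_tokens ?? 0;
      const output = u.output_tokens ?? 0;
      return { promptTokens: input, completionTokens: output, totalTokens: input + output };
    }
    case 'google': {
      const m = body.usageMetadata ?? {};
      return {
        promptTokens: m.promptTokenCount ?? 0,
        completionTokens: m.candidatesTokenCount ?? 0,
        totalTokens: m.totalTokenCount ?? 0
      };
    }
    default:
      throw new Error(`Unknown provider: ${provider}`);
  }
}
```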

What’s Next?

Deep Dive

Ready to understand how Lava works internally?

Architecture Overview

Learn about Lava’s transparent proxy architecture, request flow, and system design