Lava’s AI Gateway enables usage-based billing automatically, but you can instead report usage to Lava manually via POST /v1/requests without routing through the Gateway. The Gateway is still the recommended integration, because it adds enforcement capabilities that post-request reporting cannot match.
| | Gateway routing | Post-request reporting |
| --- | --- | --- |
| When Lava sees the request | Before, during, and after | After only |
| Usage tracking | Automatic (tokens, cost, model) | You extract and report it |
| Pre-request balance checks | Yes — block if insufficient credits | No — request already ran |
| Works with | Supported AI providers | Anything — AI, APIs, compute, storage |
| Endpoint | POST /v1/forward | POST /v1/requests |
Post-request reporting enables billing, while Gateway routing enables billing + enforcement + automatic tracking. If you’re billing AI usage and starting fresh, start with the gateway.

What you get with gateway routing

Block requests when customers are out of credits

Gateway: The customer exhausted their plan and hit their card limit. They try to make a request. Lava rejects it before it reaches OpenAI, Claude, or Gemini. No wasted API call.
Post-request: The request already ran. You’re out the provider cost for a customer who can’t pay.

Know if a customer can afford an operation before running it

Gateway: Before an expensive agent loop, Lava checks if they have budget. If not, the request is rejected and you can show “please upgrade.”
Post-request: You find out after you’ve already burned provider costs.

One source of truth for usage

Gateway: The request that hit the provider is the same request that generated the billing record. Token counts, cost, and model are captured automatically. Nothing to reconcile.
Post-request: You have provider logs and Lava records. You’re extracting usage data from each provider’s response format, normalizing it, and reporting it yourself. If the two don’t match, you’re debugging two systems.
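To make the normalization burden concrete, here is a minimal sketch of the kind of helper you end up writing with post-request reporting. The `normalizeUsage` function and the two response shapes (OpenAI-style `prompt_tokens`/`completion_tokens`, Anthropic-style `input_tokens`/`output_tokens`) are assumptions for illustration, not part of Lava’s API:

```typescript
// Hypothetical normalizer: maps two common provider response shapes
// onto the input_tokens / output_tokens fields that POST /v1/requests expects.
type NormalizedUsage = { input_tokens: number; output_tokens: number };

function normalizeUsage(response: any): NormalizedUsage {
  const u = response.usage ?? {};
  return {
    // OpenAI-style responses use prompt_tokens / completion_tokens;
    // Anthropic-style responses use input_tokens / output_tokens.
    input_tokens: u.prompt_tokens ?? u.input_tokens ?? 0,
    output_tokens: u.completion_tokens ?? u.output_tokens ?? 0,
  };
}
```

Every provider you add means another branch in a helper like this — work the Gateway does for you.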

Works across all providers with one integration

Gateway: Same billing flow whether you’re calling OpenAI, Claude, Gemini, Mistral, or any of 30+ supported providers. One integration, all providers metered the same way.
Post-request: Each provider returns usage differently. You normalize and report each one.

When post-request reporting makes sense

Post-request reporting via POST /v1/requests is the right choice when:
  • Non-AI usage — You’re billing for API calls, compute time, storage, or any operation that doesn’t go through an AI provider
  • Your own API keys — You call AI providers directly with your own keys (BYOK) and want Lava to handle billing only
  • Unsupported providers — You use a provider Lava doesn’t proxy yet
  • Can’t change the request path — You’re deeply integrated with provider SDKs and can’t swap the base URL
It works well. But for AI usage, you’re responsible for extraction and enforcement logic that gateway routing gives you for free.
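For the cases above, the report you send can be very small. The following sketch builds a flat per-request payload — no usage fields, since the meter’s per-request price handles it. The `buildFlatReport` helper and the example connection and meter values are hypothetical; only the three field names come from the request example in this page:

```typescript
import { randomUUID } from 'node:crypto';

// Hypothetical payload builder for flat per-request pricing:
// no usage fields are needed -- the meter's price per request handles it.
interface RequestReport {
  request_id: string;    // idempotency key
  connection_id: string; // which customer to charge
  meter_slug: string;    // which pricing to apply
}

function buildFlatReport(connectionId: string, meterSlug: string): RequestReport {
  return {
    request_id: randomUUID(),
    connection_id: connectionId,
    meter_slug: meterSlug,
  };
}
```

You would pass an object like this to `lava.requests.create(...)` after each billable operation, whether it is an API call, a compute job, or a storage write.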

How it works

After each billable operation, report it to Lava:
import { Lava } from '@lavapayments/nodejs';

const lava = new Lava();

// After your operation completes
await lava.requests.create({
  request_id: crypto.randomUUID(),    // Idempotency key
  connection_id: connectionId,         // Which customer to charge
  meter_slug: 'my-meter',             // Which pricing to apply
  input_tokens: response.usage.prompt_tokens,
  output_tokens: response.usage.completion_tokens
});
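Since request_id is the idempotency key, a fresh UUID per attempt means a retried reporting call creates a second billing record. One way around this — assuming Lava dedupes on request_id, as the comment above suggests — is to derive the id deterministically from an operation id you already track. The `stableRequestId` helper is a sketch, not part of the SDK:

```typescript
import { createHash } from 'node:crypto';

// Derive a stable request_id from your own operation id so that a
// retried reporting call reuses the same idempotency key instead of
// double-billing the customer.
function stableRequestId(operationId: string): string {
  return createHash('sha256').update(operationId).digest('hex');
}
```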
Populate only the fields relevant to your usage type:
| Fields | Use case |
| --- | --- |
| input_tokens, output_tokens | LLM APIs (GPT-4, Claude, etc.) |
| input_characters, output_seconds | Text-to-speech |
| input_seconds, output_characters | Speech-to-text |
| output_seconds | Video generation |
| (none) | Flat per-request pricing (meter handles it) |
See the Create a Request API reference for the full field list.
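The field selection in the table above can be expressed as a small mapping. This `fieldsFor` helper and its measurement parameter names are hypothetical — only the output field names come from the table:

```typescript
// Hypothetical mapping from usage type to the request fields you'd
// populate, following the table above.
type UsageFields = Record<string, number>;

function fieldsFor(
  kind: 'llm' | 'tts' | 'stt' | 'video',
  m: Record<string, number>,
): UsageFields {
  switch (kind) {
    case 'llm':   // LLM APIs: token counts in and out
      return { input_tokens: m.inputTokens, output_tokens: m.outputTokens };
    case 'tts':   // text-to-speech: characters in, audio seconds out
      return { input_characters: m.inputCharacters, output_seconds: m.outputSeconds };
    case 'stt':   // speech-to-text: audio seconds in, characters out
      return { input_seconds: m.inputSeconds, output_characters: m.outputCharacters };
    case 'video': // video generation: seconds of output only
      return { output_seconds: m.outputSeconds };
  }
}
```

Spread the result into the body you pass to `lava.requests.create(...)`; for flat per-request pricing, send no usage fields at all.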

Next steps