Step-by-step instructions for connecting Lava to popular AI tools. In every case, the pattern is the same: set the API key to your lava_sk_* key, and point the base URL to Lava.
Which API format? Tools that use the OpenAI SDK or OpenAI-compatible APIs need the OpenAI format. Tools that use the Anthropic SDK (like Claude Code) need the Anthropic format. When in doubt, start with OpenAI.
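The distinction comes down to which endpoint path the tool ultimately calls. A minimal sketch (the `endpointFor` helper is illustrative only, not part of any SDK):

```typescript
// Illustrative only: maps each API format to the Lava endpoint it implies.
type ApiFormat = 'openai' | 'anthropic';

function endpointFor(format: ApiFormat): string {
  return format === 'openai'
    ? 'https://api.lava.so/v1/chat/completions' // OpenAI SDK and compatible tools
    : 'https://api.lava.so/v1/messages';        // Anthropic SDK, Claude Code
}
```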
Vercel AI SDK
Use Lava as your OpenAI-compatible backend in any Vercel AI SDK app.
Create a Spend Key
Go to Dashboard > AI Spend and click Create Spend Key. Set the API format to OpenAI (/v1/chat/completions), choose the model(s) you want, and copy the key.
Install dependencies
npm install ai @ai-sdk/openai
Configure Lava in your app
Add your spend key as an environment variable:

export LAVA_SPEND_KEY="lava_sk_your_key_here"
Then create an OpenAI client pointed at Lava:

import { streamText } from 'ai';
import { createOpenAI } from '@ai-sdk/openai';
const lava = createOpenAI({
apiKey: process.env.LAVA_SPEND_KEY,
baseURL: 'https://api.lava.so/v1',
});
const result = await streamText({
model: lava('claude-opus-4-6'),
messages: [{ role: 'user', content: 'Write a one-line release note.' }],
});
for await (const textPart of result.textStream) {
process.stdout.write(textPart);
}
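Note that the base URL keeps its /v1 segment here: OpenAI-compatible clients append /chat/completions (not /v1/chat/completions) to whatever base URL you give them. A rough sketch of how the request URL is built:

```typescript
// Rough sketch: OpenAI-style clients join baseURL + '/chat/completions'.
const endpoint = (baseURL: string) =>
  `${baseURL.replace(/\/+$/, '')}/chat/completions`;

endpoint('https://api.lava.so/v1'); // → https://api.lava.so/v1/chat/completions
```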
Run and verify usage
Send a test prompt from your app. You’ll see spend and usage on your AI Spend dashboard for that key.
Claude Code
Route Claude Code requests through Lava using Anthropic-format spend keys.
Create a Spend Key in Anthropic format
Go to Dashboard > AI Spend, click Create Spend Key, and set the API format to Anthropic (/v1/messages).
Point Claude Code to Lava
Set these environment variables before launching Claude Code:

export ANTHROPIC_AUTH_TOKEN="lava_sk_your_key_here"
export ANTHROPIC_BASE_URL="https://api.lava.so"
Run Claude Code
Start Claude Code as normal. Requests now go through Lava’s /v1/messages path and usage is tracked in AI Spend.
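If you'd rather not export variables in every shell, Claude Code can also read them from the env map in its settings file; assuming the standard ~/.claude/settings.json location:

```json
{
  "env": {
    "ANTHROPIC_AUTH_TOKEN": "lava_sk_your_key_here",
    "ANTHROPIC_BASE_URL": "https://api.lava.so"
  }
}
```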
Anthropic SDK
Use a Lava spend key with any Anthropic-style client or agent runtime.
Create a Spend Key for Anthropic format
Go to Dashboard > AI Spend, click Create Spend Key, and set the API format to Anthropic (/v1/messages).
Set Anthropic environment variables
export ANTHROPIC_API_KEY="lava_sk_your_key_here"
export ANTHROPIC_BASE_URL="https://api.lava.so"
Keep the base URL at https://api.lava.so (without /v1). Anthropic clients automatically append /v1/messages.
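A rough sketch of why an extra /v1 would double the path (string concatenation, which is effectively what the client does):

```typescript
// Anthropic-style clients join baseURL + '/v1/messages'.
const endpoint = (baseURL: string) =>
  `${baseURL.replace(/\/+$/, '')}/v1/messages`;

endpoint('https://api.lava.so');    // → https://api.lava.so/v1/messages (correct)
endpoint('https://api.lava.so/v1'); // → https://api.lava.so/v1/v1/messages (doubled)
```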
Use the Anthropic client as usual
import Anthropic from '@anthropic-ai/sdk';
const client = new Anthropic({
apiKey: process.env.ANTHROPIC_API_KEY,
baseURL: process.env.ANTHROPIC_BASE_URL,
});
const response = await client.messages.create({
model: 'claude-opus-4-6',
max_tokens: 512,
messages: [{ role: 'user', content: 'Summarize this in one paragraph.' }],
});
console.log(response.content);
Most Anthropic-based agent SDKs use the same ANTHROPIC_API_KEY and base URL settings, so once those are set, requests route through Lava automatically.
Cursor
Create a Spend Key
Go to Dashboard > AI Spend and create a key with OpenAI (/v1/chat/completions) format.
Configure Cursor
In Cursor, go to Settings > Models > OpenAI API Key. Paste your lava_sk_* key and set the base URL to https://api.lava.so/v1.
Start coding
Select a model and start coding. Usage appears in your AI Spend dashboard.
Raycast
Create a Spend Key
Go to Dashboard > AI Spend and create a key with OpenAI (/v1/chat/completions) format.
Configure Raycast
Open Raycast > Settings > Extensions > AI. Under provider settings, add a custom OpenAI-compatible provider:
- API Key: your lava_sk_* key
- Base URL: https://api.lava.so/v1
Start chatting
Pick a model and start chatting. Usage is tracked in AI Spend.
OpenClaw
Use Lava as a custom OpenAI-compatible provider in OpenClaw.
Create a Spend Key
Go to Dashboard > AI Spend and create a key with OpenAI (/v1/chat/completions) format.
Add Lava to your agent's model config
In your agent’s model config (e.g. ~/.openclaw/agents/<agentId>/agent/models.json):

{
  "models": {
    "mode": "merge",
    "providers": {
      "lava": {
        "baseUrl": "https://api.lava.so/v1",
        "apiKey": "lava_sk_your_key_here",
        "api": "openai-completions",
        "models": [
          { "id": "claude-opus-4-6", "name": "Claude Opus 4.6" },
          { "id": "gpt-4o", "name": "GPT-4o" }
        ]
      }
    }
  }
}
Select a Lava model
Choose one of the model IDs you configured. OpenClaw will route calls through Lava with spend controls and usage tracking.
LangChain
Connect LangChain to Lava using ChatOpenAI custom base URL support.
Create a Spend Key
Go to Dashboard > AI Spend and create a key with OpenAI (/v1/chat/completions) format.
Install LangChain OpenAI package
npm install @langchain/openai @langchain/core
Use ChatOpenAI with Lava base URL
import { ChatOpenAI } from "@langchain/openai";
const llm = new ChatOpenAI({
model: "claude-opus-4-6",
apiKey: process.env.LAVA_SPEND_KEY,
configuration: {
baseURL: "https://api.lava.so/v1",
},
});
const response = await llm.invoke("Give me 3 bullet points about this repo.");
console.log(response.content);
Verify in AI Spend
Run a request and confirm usage appears on your AI Spend dashboard.
Cline
Create a Spend Key
Go to Dashboard > AI Spend and create a key with OpenAI (/v1/chat/completions) format.
Authenticate Cline against Lava
cline auth -p openai -k lava_sk_your_key_here -b https://api.lava.so/v1 -m claude-opus-4-6
Run Cline normally
Start Cline and run a prompt. Calls route through Lava with your key limits and usage tracking.
Codex CLI
Use Lava as a custom model provider in OpenAI Codex CLI.
Create a Spend Key
Go to Dashboard > AI Spend and create a key with OpenAI (/v1/chat/completions) format.
Set the environment variable
export LAVA_SPEND_KEY="lava_sk_your_key_here"
Add Lava to Codex config
In ~/.codex/config.toml:

model = "claude-opus-4-6"
model_provider = "lava"
[model_providers.lava]
name = "Lava"
base_url = "https://api.lava.so/v1"
env_key = "LAVA_SPEND_KEY"
wire_api = "chat"
Start Codex
Run codex as normal. Requests route through Lava, and usage appears in AI Spend.