Quick Reference

  • Base URL: https://api.deepseek.com/v1
  • Authentication: Bearer token
  • API Format: OpenAI-compatible (standard messages array)
  • Usage Tracking: data.usage (prompt_tokens, completion_tokens)
  • BYOK Support: ✓ Supported

Current Models (October 2025)

Available Models

  • deepseek-chat - General conversational AI model
  • deepseek-coder - Code-specialized model for programming tasks
  • deepseek-v3 - Third-generation general model
  • deepseek-v3.2 - Enhanced v3 with improved reasoning

Specialization

DeepSeek models are optimized for coding tasks including:
  • Code generation and completion
  • Code explanation and documentation
  • Debugging and error analysis
  • Algorithm design and optimization

Integration Example

Prerequisites

  1. Get your Lava forward token from the dashboard (Build > Secret Keys).
  2. Set up environment variables in .env.local:

LAVA_BASE_URL=https://api.lavapayments.com/v1
LAVA_FORWARD_TOKEN=your_forward_token_from_dashboard

  3. Run the example from a backend server (CORS blocks frontend requests for security).
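
Given those two environment variables, the forward-proxy URL for any provider endpoint is just the base URL plus a u query parameter. A minimal sketch (the helper name buildForwardUrl is ours, not part of any Lava SDK; the endpoint is DeepSeek's chat-completions path from this guide):

```javascript
// Build a Lava forward-proxy URL for a given provider endpoint.
// Assumes LAVA_BASE_URL is set as in .env.local above.
function buildForwardUrl(lavaBaseUrl, providerEndpoint) {
  return `${lavaBaseUrl}/forward?u=${providerEndpoint}`;
}

const url = buildForwardUrl(
  'https://api.lavapayments.com/v1',
  'https://api.deepseek.com/v1/chat/completions'
);
console.log(url);
// → https://api.lavapayments.com/v1/forward?u=https://api.deepseek.com/v1/chat/completions
```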

Complete Example

/**
 * DeepSeek Chat Completion via Lava
 *
 * Prerequisites:
 * - Lava forward token (get from dashboard: Build > Secret Keys)
 * - Backend server (CORS blocks frontend calls for security)
 *
 * Current Models (October 2025):
 * - deepseek-v3.2: Latest enhanced model with improved reasoning
 * - deepseek-v3: Third-generation general model
 * - deepseek-coder: Code-specialized model
 * - deepseek-chat: General conversational model
 *
 * Response Data:
 * - Usage tracking: Available in response body at `data.usage`
 * - Request ID: Available in `x-lava-request-id` response header
 */

// Load environment variables
require('dotenv').config({ path: '.env.local' });

async function callDeepSeekViaLava() {
  // 1. Define the DeepSeek endpoint
  const PROVIDER_ENDPOINT = 'https://api.deepseek.com/v1/chat/completions';

  // 2. Build the Lava forward proxy URL
  const url = `${process.env.LAVA_BASE_URL}/forward?u=${PROVIDER_ENDPOINT}`;

  // 3. Set up authentication headers
  const headers = {
    'Content-Type': 'application/json',
    'Authorization': `Bearer ${process.env.LAVA_FORWARD_TOKEN}`
  };

  // 4. Define the request body (standard OpenAI format)
  const requestBody = {
    model: 'deepseek-v3.2',  // Use latest enhanced model
    messages: [
      { role: 'system', content: 'You are an expert programmer.' },
      { role: 'user', content: 'Write a Python function to check if a number is prime.' }
    ],
    temperature: 0.7,
    max_tokens: 1024
  };

  // 5. Make the request
  try {
    const response = await fetch(url, {
      method: 'POST',
      headers: headers,
      body: JSON.stringify(requestBody)
    });

    // Fail fast on HTTP errors so we don't read usage from an error body
    if (!response.ok) {
      throw new Error(`HTTP ${response.status}: ${await response.text()}`);
    }

    // 6. Parse the response
    const data = await response.json();

    // 7. Extract usage data (from response body)
    const usage = data.usage;
    console.log('\nUsage Tracking:');
    console.log(`  Prompt tokens: ${usage.prompt_tokens}`);
    console.log(`  Completion tokens: ${usage.completion_tokens}`);
    console.log(`  Total tokens: ${usage.total_tokens}`);

    // 8. Extract request ID (from response header)
    const requestId = response.headers.get('x-lava-request-id');
    console.log(`\nLava Request ID: ${requestId}`);
    console.log('  (Use this ID to find the request in your dashboard)');

    // 9. Display the AI response
    console.log('\nAI Response:');
    console.log(data.choices[0].message.content);

    return data;
  } catch (error) {
    console.error('Error calling DeepSeek via Lava:', error.message);
    throw error;
  }
}

// Run the example
callDeepSeekViaLava();

Request/Response Formats

Request Format

DeepSeek uses the standard OpenAI format with no modifications required:
{
  "model": "deepseek-v3.2",
  "messages": [
    { "role": "system", "content": "You are an expert programmer." },
    { "role": "user", "content": "Write a Python function to check if a number is prime." }
  ],
  "temperature": 0.7,
  "max_tokens": 1024,
  "stream": false
}

Response Format

Standard OpenAI-compatible response:
{
  "id": "chatcmpl-xxx",
  "object": "chat.completion",
  "created": 1234567890,
  "model": "deepseek-v3.2",
  "choices": [{
    "index": 0,
    "message": {
      "role": "assistant",
      "content": "Here's a Python function to check if a number is prime:\n\n```python\ndef is_prime(n):\n    if n <= 1:\n        return False\n    for i in range(2, int(n**0.5) + 1):\n        if n % i == 0:\n            return False\n    return True\n```"
    },
    "finish_reason": "stop"
  }],
  "usage": {
    "prompt_tokens": 25,
    "completion_tokens": 85,
    "total_tokens": 110
  }
}
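
Because the shape is the standard OpenAI one, a small helper can pull out the pieces most callers need. A sketch (the helper name extractCompletion is ours, not part of any SDK; the field paths are exactly those in the response above):

```javascript
// Extract the assistant message and token usage from a standard
// OpenAI-compatible chat-completion response body.
function extractCompletion(data) {
  return {
    content: data.choices?.[0]?.message?.content ?? null,
    finishReason: data.choices?.[0]?.finish_reason ?? null,
    usage: data.usage ?? { prompt_tokens: 0, completion_tokens: 0, total_tokens: 0 }
  };
}

// Example with a response shaped like the one above:
const sample = {
  choices: [{
    index: 0,
    message: { role: 'assistant', content: 'Here is the function.' },
    finish_reason: 'stop'
  }],
  usage: { prompt_tokens: 25, completion_tokens: 85, total_tokens: 110 }
};
const result = extractCompletion(sample);
console.log(result.content);            // → Here is the function.
console.log(result.usage.total_tokens); // → 110
```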

Key Features

Code Specialization

DeepSeek models excel at:
  • Code generation: Writing functions, classes, and complete programs
  • Code explanation: Breaking down complex algorithms
  • Debugging: Identifying and fixing code errors
  • Refactoring: Improving code structure and efficiency
  • Documentation: Generating docstrings and comments

OpenAI Compatibility

DeepSeek is fully OpenAI-compatible, which means:
  • ✓ Standard messages array format
  • ✓ Standard usage object in responses
  • ✓ Support for stream: true with usage tracking
  • ✓ Compatible with OpenAI client libraries

Streaming Support

Enable streaming for real-time responses:
const requestBody = {
  model: 'deepseek-coder',
  messages: [
    { role: 'user', content: 'Explain async/await in JavaScript.' }
  ],
  stream: true,
  stream_options: {
    include_usage: true  // Include token usage with streaming
  }
};
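
With stream: true the body arrives as server-sent events: one data: line per chunk, terminated by data: [DONE], and with include_usage set, a final chunk carrying the usage object. A minimal parser for already-decoded SSE lines, sketched against the standard OpenAI streaming format (this is our helper, not a Lava or DeepSeek API):

```javascript
// Accumulate streamed delta text and capture the trailing usage chunk.
// `lines` are decoded SSE lines of the form "data: {...}" or "data: [DONE]".
function parseSseLines(lines) {
  let text = '';
  let usage = null;
  for (const line of lines) {
    if (!line.startsWith('data: ')) continue;
    const payload = line.slice('data: '.length);
    if (payload === '[DONE]') break;
    const chunk = JSON.parse(payload);
    // Content deltas live at choices[0].delta.content; the usage-only
    // chunk has an empty choices array and a top-level usage object.
    text += chunk.choices?.[0]?.delta?.content ?? '';
    if (chunk.usage) usage = chunk.usage;
  }
  return { text, usage };
}

const { text, usage } = parseSseLines([
  'data: {"choices":[{"delta":{"content":"Async/await "}}]}',
  'data: {"choices":[{"delta":{"content":"simplifies promises."}}]}',
  'data: {"choices":[],"usage":{"prompt_tokens":12,"completion_tokens":5,"total_tokens":17}}',
  'data: [DONE]'
]);
console.log(text);               // → Async/await simplifies promises.
console.log(usage.total_tokens); // → 17
```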

Usage Tracking

Usage data is available in the response body at data.usage:
{
  "usage": {
    "prompt_tokens": 100,
    "completion_tokens": 50,
    "total_tokens": 150
  }
}
Access in code:
const usage = data.usage;
console.log(`Prompt tokens: ${usage.prompt_tokens}`);
console.log(`Completion tokens: ${usage.completion_tokens}`);
console.log(`Total tokens: ${usage.total_tokens}`);
Request ID tracking:
const requestId = response.headers.get('x-lava-request-id');
console.log(`Request ID: ${requestId}`);
// Use this ID to find the request in your Lava dashboard

BYOK Support

DeepSeek fully supports Bring Your Own Key (BYOK) mode. Your forward token format:
${LAVA_SECRET_KEY}.${CONNECTION_SECRET}.${PRODUCT_SECRET}.${YOUR_DEEPSEEK_API_KEY}
Note: When using BYOK, Lava meters usage but does not charge your Lava wallet. Costs are billed directly to your DeepSeek account. For detailed BYOK setup, see the BYOK guide.
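
Assembling the four parts is plain dot-joining. A hedged sketch (the helper name buildByokToken is ours; the guard assumes the individual secrets contain no literal "." since the format is dot-delimited):

```javascript
// Join the four secrets into a BYOK forward token:
// ${LAVA_SECRET_KEY}.${CONNECTION_SECRET}.${PRODUCT_SECRET}.${YOUR_DEEPSEEK_API_KEY}
function buildByokToken(lavaSecretKey, connectionSecret, productSecret, providerApiKey) {
  const parts = [lavaSecretKey, connectionSecret, productSecret, providerApiKey];
  // Assumption: parts themselves must not contain "." in this format.
  if (parts.some(p => !p || p.includes('.'))) {
    throw new Error('Each token part must be non-empty and contain no "."');
  }
  return parts.join('.');
}

console.log(buildByokToken('sk_lava', 'conn_abc', 'prod_xyz', 'sk_deepseek'));
// → sk_lava.conn_abc.prod_xyz.sk_deepseek
```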

Official Documentation