What You’ll Learn
This guide shows you how to build flexible AI applications that work with multiple providers. You’ll learn to:
Abstract provider logic for provider-agnostic code
Implement configuration-based provider switching
Build fallback strategies (primary → secondary provider)
Handle provider-specific features (function calling, vision, etc.)
Multi-provider architecture provides resilience. By supporting multiple AI providers, your application can gracefully handle rate limits, outages, and pricing changes without code changes.
Abstracting Provider Logic
Provider Configuration Pattern
Create a configuration-driven approach instead of hardcoding providers:
// config/providers.ts
export interface ProviderConfig {
  name: string;
  baseUrl: string;
  models: string[];
  features: string[];
  priority: number;
}

export const PROVIDERS: Record<string, ProviderConfig> = {
  openai: {
    name: 'OpenAI',
    baseUrl: 'https://api.openai.com/v1/chat/completions',
    models: ['gpt-4', 'gpt-4-turbo', 'gpt-3.5-turbo'],
    features: ['streaming', 'function-calling', 'vision'],
    priority: 1
  },
  anthropic: {
    name: 'Anthropic',
    baseUrl: 'https://api.anthropic.com/v1/messages',
    models: ['claude-3-opus', 'claude-3-sonnet', 'claude-3-haiku'],
    features: ['streaming', 'function-calling', 'vision'],
    priority: 2
  },
  google: {
    name: 'Google',
    baseUrl: 'https://generativelanguage.googleapis.com/v1beta/models',
    models: ['gemini-pro', 'gemini-pro-vision'],
    features: ['streaming', 'vision', 'multimodal'],
    priority: 3
  },
  groq: {
    name: 'Groq',
    baseUrl: 'https://api.groq.com/openai/v1/chat/completions',
    models: ['llama-3-70b', 'mixtral-8x7b'],
    features: ['streaming', 'low-latency'],
    priority: 4
  }
};
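The registry can also drive capability lookups. As a minimal sketch (the helper name and the trimmed two-entry registry are ours, re-declared here so the snippet is self-contained), this picks the highest-priority provider that supports a given feature:

```typescript
// Minimal stand-in for the full PROVIDERS registry defined above.
interface ProviderConfig {
  name: string;
  baseUrl: string;
  models: string[];
  features: string[];
  priority: number;
}

const PROVIDERS: Record<string, ProviderConfig> = {
  openai: {
    name: 'OpenAI',
    baseUrl: 'https://api.openai.com/v1/chat/completions',
    models: ['gpt-4'],
    features: ['streaming', 'function-calling', 'vision'],
    priority: 1
  },
  groq: {
    name: 'Groq',
    baseUrl: 'https://api.groq.com/openai/v1/chat/completions',
    models: ['llama-3-70b'],
    features: ['streaming', 'low-latency'],
    priority: 4
  }
};

// Hypothetical helper: highest-priority provider that supports a feature.
function bestProviderFor(feature: string): string | undefined {
  return Object.entries(PROVIDERS)
    .filter(([, cfg]) => cfg.features.includes(feature))
    .sort(([, a], [, b]) => a.priority - b.priority)
    .map(([key]) => key)[0];
}
```

Because priority is part of the config, routing decisions like this stay declarative: adding a provider means adding one registry entry, not touching routing code.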
Provider Abstraction Class
Build a unified interface for all providers:
// lib/ai-provider.ts
import { PROVIDERS, ProviderConfig } from '@/config/providers';

export interface CompletionRequest {
  messages: Array<{ role: string; content: string }>;
  model?: string;
  stream?: boolean;
  temperature?: number;
  maxTokens?: number;
}

export interface CompletionResponse {
  content: string;
  usage: {
    inputTokens: number;
    outputTokens: number;
    totalTokens: number;
  };
  requestId: string | null;
  cost?: number; // populated separately once pricing is applied to usage
  provider: string;
  model: string;
}

export class AIProvider {
  private config: ProviderConfig;
  private forwardToken: string;

  constructor(providerName: string, forwardToken: string) {
    const config = PROVIDERS[providerName];
    if (!config) {
      throw new Error(`Unknown provider: ${providerName}`);
    }
    this.config = config;
    this.forwardToken = forwardToken;
  }

  async createCompletion(request: CompletionRequest): Promise<CompletionResponse> {
    const model = request.model || this.config.models[0];

    const response = await fetch(
      `https://api.lavapayments.com/v1/forward?u=${encodeURIComponent(this.config.baseUrl)}`,
      {
        method: 'POST',
        headers: {
          'Authorization': `Bearer ${this.forwardToken}`,
          'Content-Type': 'application/json'
        },
        body: JSON.stringify({
          model,
          messages: request.messages,
          stream: request.stream || false,
          temperature: request.temperature,
          max_tokens: request.maxTokens
        })
      }
    );

    if (!response.ok) {
      throw new Error(`Provider ${this.config.name} error: ${response.statusText}`);
    }

    const data = await response.json();

    // Extract usage from response body
    const usage = {
      inputTokens: data.usage?.prompt_tokens || 0,
      outputTokens: data.usage?.completion_tokens || 0,
      totalTokens: data.usage?.total_tokens || 0
    };

    const requestId = response.headers.get('x-lava-request-id');

    return {
      content: this.extractContent(data),
      usage,
      requestId,
      provider: this.config.name,
      model
    };
  }

  private extractContent(data: any): string {
    // Handle different provider response formats
    if (data.choices && data.choices[0]?.message?.content) {
      return data.choices[0].message.content; // OpenAI, Groq format
    }
    if (data.content && data.content[0]?.text) {
      return data.content[0].text; // Anthropic format
    }
    if (data.candidates && data.candidates[0]?.content?.parts?.[0]?.text) {
      return data.candidates[0].content.parts[0].text; // Google format
    }
    throw new Error('Unknown response format');
  }

  hasFeature(feature: string): boolean {
    return this.config.features.includes(feature);
  }

  getModels(): string[] {
    return this.config.models;
  }
}
Usage Example
// Using the abstraction
const provider = new AIProvider('openai', forwardToken);

const response = await provider.createCompletion({
  messages: [
    { role: 'user', content: 'Explain quantum computing' }
  ],
  temperature: 0.7,
  maxTokens: 500
});

console.log('Response:', response.content);
console.log('Request ID:', response.requestId);
console.log('Provider:', response.provider);
Single interface, multiple providers. The abstraction layer handles provider-specific response formats, allowing you to switch providers by changing one line of code.
Provider Switching with Configuration
Environment-Based Selection
Switch providers via environment variables:
// lib/get-provider.ts
export function getProvider(): AIProvider {
  const providerName = process.env.AI_PROVIDER || 'openai';
  const forwardToken = process.env.LAVA_FORWARD_TOKEN!;
  return new AIProvider(providerName, forwardToken);
}

// Usage
const provider = getProvider();
const response = await provider.createCompletion({ messages });
Environment configuration:
# .env.production
AI_PROVIDER=openai

# .env.development
AI_PROVIDER=groq  # Use faster Groq for development
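A small guard can make a typo in `AI_PROVIDER` fail fast at startup instead of surfacing as a confusing runtime error. This helper is a sketch (the name `resolveProviderName` is ours); it takes the raw env value so the validation logic stays pure:

```typescript
// Hypothetical validation helper: reject unknown AI_PROVIDER values early.
// Keys mirror the PROVIDERS registry from config/providers.ts.
const KNOWN_PROVIDERS = ['openai', 'anthropic', 'google', 'groq'];

function resolveProviderName(envValue: string | undefined): string {
  const name = envValue ?? 'openai'; // same default as getProvider()
  if (!KNOWN_PROVIDERS.includes(name)) {
    throw new Error(
      `Invalid AI_PROVIDER "${name}". Expected one of: ${KNOWN_PROVIDERS.join(', ')}`
    );
  }
  return name;
}
```

Call it as `resolveProviderName(process.env.AI_PROVIDER)` inside `getProvider()` so misconfiguration is caught once, at construction time.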
Dynamic Provider Selection
Choose provider based on request characteristics:
export function selectProvider(request: {
  requiresVision?: boolean;
  requiresFunctionCalling?: boolean;
  prioritizeSpeed?: boolean;
  prioritizeCost?: boolean;
}): string {
  // Vision required
  if (request.requiresVision) {
    return 'google'; // Gemini Pro Vision
  }

  // Speed priority (low latency)
  if (request.prioritizeSpeed) {
    return 'groq'; // Ultra-fast inference
  }

  // Cost priority
  if (request.prioritizeCost) {
    return 'groq'; // Most cost-effective
  }

  // Default: OpenAI for quality
  return 'openai';
}

// Usage
const providerName = selectProvider({ prioritizeSpeed: true });
const provider = new AIProvider(providerName, forwardToken);
Model-Based Routing
Route to providers based on desired model:
export function getProviderForModel(modelName: string): string {
  for (const [providerKey, config] of Object.entries(PROVIDERS)) {
    if (config.models.includes(modelName)) {
      return providerKey;
    }
  }
  throw new Error(`No provider found for model: ${modelName}`);
}

// Usage
const provider = new AIProvider(
  getProviderForModel('claude-3-opus'),
  forwardToken
);
Fallback Strategies
Sequential Fallback
Try providers in priority order until one succeeds:
export async function completionWithFallback(
  request: CompletionRequest,
  forwardToken: string
): Promise<CompletionResponse> {
  // Sort providers by priority (lower number = higher priority)
  const sortedProviders = Object.entries(PROVIDERS)
    .sort(([, a], [, b]) => a.priority - b.priority)
    .map(([key]) => key);

  const errors: Array<{ provider: string; error: string }> = [];

  for (const providerName of sortedProviders) {
    try {
      console.log(`Attempting provider: ${providerName}`);
      const provider = new AIProvider(providerName, forwardToken);
      const response = await provider.createCompletion(request);
      console.log(`Success with provider: ${providerName}`);
      return response;
    } catch (error) {
      const errorMessage = error instanceof Error ? error.message : 'Unknown error';
      errors.push({ provider: providerName, error: errorMessage });
      console.warn(`Provider ${providerName} failed:`, errorMessage);
      // Continue to next provider
      continue;
    }
  }

  // All providers failed
  throw new Error(
    `All providers failed:\n${errors.map(e => `- ${e.provider}: ${e.error}`).join('\n')}`
  );
}

// Usage
try {
  const response = await completionWithFallback(
    { messages: [{ role: 'user', content: 'Hello' }] },
    forwardToken
  );
  console.log('Response:', response.content);
} catch (error) {
  console.error('All providers exhausted:', error);
}
Smart Fallback with Retry Logic
Add exponential backoff for transient errors:
async function completionWithRetry(
  request: CompletionRequest,
  forwardToken: string,
  maxRetries = 3
): Promise<CompletionResponse> {
  const sortedProviders = Object.keys(PROVIDERS)
    .sort((a, b) => PROVIDERS[a].priority - PROVIDERS[b].priority);

  for (const providerName of sortedProviders) {
    let retries = 0;

    while (retries < maxRetries) {
      try {
        const provider = new AIProvider(providerName, forwardToken);
        return await provider.createCompletion(request);
      } catch (error) {
        const errorMessage = error instanceof Error ? error.message : '';

        // Check if error is retryable (rate limit, timeout, overload)
        const isRetryable = errorMessage.includes('429') ||
          errorMessage.includes('timeout') ||
          errorMessage.includes('503');

        if (!isRetryable || retries >= maxRetries - 1) {
          // Non-retryable error or max retries reached; try the next provider
          break;
        }

        // Exponential backoff: 1s, 2s, 4s
        const delay = Math.pow(2, retries) * 1000;
        console.log(`Retrying ${providerName} in ${delay}ms...`);
        await new Promise(resolve => setTimeout(resolve, delay));
        retries++;
      }
    }
  }

  throw new Error('All providers and retries exhausted');
}
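The retry decision above keys off substrings in the error message. A slightly more explicit variant (helper names are ours, not part of the guide's code) classifies by HTTP status code and computes the backoff schedule separately, which is easier to unit-test:

```typescript
// Hypothetical helpers: retryable-status classification and backoff schedule.
// 429 (rate limit) and 5xx server errors are worth retrying; 4xx client
// errors like 400/401 are not, since the same request will fail again.
const RETRYABLE_STATUSES = new Set([429, 500, 502, 503, 504]);

function isRetryableStatus(status: number): boolean {
  return RETRYABLE_STATUSES.has(status);
}

// Exponential backoff: 1s, 2s, 4s, ... capped so a deep retry never stalls too long.
function backoffDelayMs(attempt: number, baseMs = 1000, capMs = 30_000): number {
  return Math.min(capMs, Math.pow(2, attempt) * baseMs);
}
```

To use these, surface `response.status` in the error thrown by `createCompletion` (for example, a custom error class carrying the status) instead of parsing `statusText` strings.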
Cost-Aware Fallback
Fallback to cheaper providers when possible:
// Extend the provider configs with a cost estimate before using this strategy;
// the base PROVIDERS entries do not carry this field.
interface ProviderWithCost extends ProviderConfig {
  estimatedCostPer1kTokens: number;
}

async function costAwareFallback(
  providers: Record<string, ProviderWithCost>,
  request: CompletionRequest,
  forwardToken: string,
  maxCostUsd: number
): Promise<CompletionResponse> {
  // Try cheapest providers first
  const providersByCost = Object.entries(providers)
    .sort(([, a], [, b]) => a.estimatedCostPer1kTokens - b.estimatedCostPer1kTokens);

  for (const [providerName, config] of providersByCost) {
    try {
      const provider = new AIProvider(providerName, forwardToken);
      const response = await provider.createCompletion(request);

      // Estimate cost from actual token usage
      const cost = (response.usage.totalTokens / 1000) * config.estimatedCostPer1kTokens;
      if (cost <= maxCostUsd) {
        return { ...response, cost };
      }
      console.log(`Provider ${providerName} exceeded cost limit: $${cost}`);
    } catch (error) {
      continue;
    }
  }

  throw new Error(`No provider within budget: $${maxCostUsd}`);
}
Provider-Specific Feature Handling
Feature Detection
Check provider capabilities before using features:
export class AIProvider {
  // ... existing code ...

  async createCompletionWithFeatures(
    request: CompletionRequest & {
      functions?: any[];
      imageUrls?: string[];
    }
  ): Promise<CompletionResponse> {
    // Check function calling support
    if (request.functions && !this.hasFeature('function-calling')) {
      throw new Error(
        `Provider ${this.config.name} does not support function calling`
      );
    }

    // Check vision support
    if (request.imageUrls && !this.hasFeature('vision')) {
      throw new Error(
        `Provider ${this.config.name} does not support vision`
      );
    }

    // Build request based on provider
    const body: any = {
      model: request.model || this.config.models[0],
      messages: request.messages
    };

    // Add provider-specific features
    if (request.functions && this.config.name === 'OpenAI') {
      body.tools = request.functions.map(fn => ({
        type: 'function',
        function: fn
      }));
    }

    if (request.imageUrls && this.hasFeature('vision')) {
      // Format vision request based on provider
      body.messages = this.formatVisionMessages(request.messages, request.imageUrls);
    }

    // Make request...
    const response = await fetch(/* ... */);
    return this.parseResponse(response);
  }

  private formatVisionMessages(messages: any[], imageUrls: string[]): any[] {
    // Provider-specific vision formatting
    if (this.config.name === 'OpenAI') {
      return messages.map(msg => ({
        ...msg,
        content: [
          { type: 'text', text: msg.content },
          ...imageUrls.map(url => ({
            type: 'image_url',
            image_url: { url }
          }))
        ]
      }));
    }

    // Add other provider formats...
    return messages;
  }
}
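For comparison, here is a sketch of an Anthropic-style branch for `formatVisionMessages`. Anthropic's Messages API represents images as content blocks with a `source` object rather than `image_url`; the exact shape below is our assumption from Anthropic's documented format, so verify it against the current API reference before relying on it:

```typescript
// Sketch: Anthropic-style vision formatting using content blocks.
// The block shape ({ type: 'image', source: { type: 'url', url } }) is an
// assumption based on Anthropic's documented Messages API, not the guide's code.
interface ChatMessage {
  role: string;
  content: any;
}

function formatAnthropicVisionMessages(
  messages: ChatMessage[],
  imageUrls: string[]
): ChatMessage[] {
  return messages.map(msg => ({
    ...msg,
    content: [
      { type: 'text', text: msg.content },
      ...imageUrls.map(url => ({
        type: 'image',
        source: { type: 'url', url }
      }))
    ]
  }));
}
```

Keeping one formatter per provider behind the shared `formatVisionMessages` dispatch keeps the public `createCompletionWithFeatures` signature identical across providers.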
Graceful Feature Degradation
Fallback when features aren’t supported:
async function completionWithFeatureFallback(
  request: CompletionRequest & { functions?: any[] },
  forwardToken: string
): Promise<CompletionResponse> {
  const preferredProvider = 'openai';

  try {
    const provider = new AIProvider(preferredProvider, forwardToken);

    if (request.functions && !provider.hasFeature('function-calling')) {
      console.warn('Function calling not supported, removing from request');
      delete request.functions;
    }

    return await provider.createCompletion(request);
  } catch (error) {
    // Fall back to another provider without advanced features.
    // Omit the model so the fallback uses its own default.
    const fallbackProvider = new AIProvider('groq', forwardToken);
    return await fallbackProvider.createCompletion({
      messages: request.messages
    });
  }
}
Troubleshooting
Provider abstraction breaks with new provider
Symptom: Response parsing fails for a newly added provider
Cause: The provider uses a different response format than expected
Solution: Update the extractContent() method to handle the new format:

private extractContent(data: any): string {
  // Try each known format
  if (data.choices?.[0]?.message?.content) return data.choices[0].message.content;
  if (data.content?.[0]?.text) return data.content[0].text;
  if (data.candidates?.[0]?.content?.parts?.[0]?.text) {
    return data.candidates[0].content.parts[0].text;
  }

  // Log unknown format for debugging
  console.error('Unknown response format:', JSON.stringify(data, null, 2));
  throw new Error('Unknown response format');
}
Fallback doesn't trigger when primary fails
Check:
Exception is being caught properly in try/catch
Provider priority order is correct (lower number = higher priority)
Error is thrown (not just logged) when provider fails
Debug:

for (const providerName of sortedProviders) {
  try {
    console.log('Trying provider:', providerName);
    const provider = new AIProvider(providerName, forwardToken);
    const response = await provider.createCompletion(request);
    console.log('Success!');
    return response;
  } catch (error) {
    console.error('Provider failed:', providerName, error);
    // IMPORTANT: Must continue, not return/throw here
    continue;
  }
}
Feature detection returns wrong capabilities
Issue: Provider claims to support a feature but the request fails
Reasons:
Provider configuration outdated (features changed)
Feature available but different API format required
Feature requires specific model (not all models support all features)
Solution: Add model-specific feature checks:

hasFeature(feature: string, model?: string): boolean {
  if (!this.config.features.includes(feature)) {
    return false;
  }

  // Model-specific feature support
  if (feature === 'vision' && model) {
    return model.includes('vision') || model.includes('4');
  }

  return true;
}
Cost estimates inaccurate across providers
Problem: Estimated costs don’t match actual Lava costs
Explanation:
Lava costs include provider base cost + merchant fee + service charge
Provider pricing varies by model and usage
Costs are only accurate AFTER request completes
Solution: Track usage from the response body and calculate cost based on your pricing:

const usage = data.usage?.total_tokens || 0;
const requestId = response.headers.get('x-lava-request-id');

// Calculate cost based on your configured pricing
console.log('Tokens used:', usage, 'Request ID:', requestId);
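Turning token counts into dollars is then a small multiplication. The rates below are illustrative placeholders, not Lava's or any provider's actual pricing; substitute the rates you have configured:

```typescript
// Hypothetical per-model rates in USD per 1K tokens.
// These numbers are placeholders for illustration; use your configured pricing.
const RATES: Record<string, { inputPer1k: number; outputPer1k: number }> = {
  'gpt-4': { inputPer1k: 0.03, outputPer1k: 0.06 }
};

function estimateCostUsd(model: string, inputTokens: number, outputTokens: number): number {
  const rate = RATES[model];
  if (!rate) throw new Error(`No rate configured for model: ${model}`);
  // Input and output tokens are usually priced differently, so keep them separate.
  return (inputTokens / 1000) * rate.inputPer1k + (outputTokens / 1000) * rate.outputPer1k;
}
```

Because input and output tokens are typically billed at different rates, prefer the separate `prompt_tokens` / `completion_tokens` counts from the usage object over `total_tokens` when estimating.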
What’s Next