Errors & Troubleshooting

HTTP Status Codes

400 Bad Request

Invalid request parameters or malformed JSON

Common Causes:

  • Missing required fields
  • Invalid parameter values
  • Malformed JSON body
  • Unsupported model ID

Solutions:

  • Check API reference for required parameters
  • Validate JSON syntax
  • Ensure all required fields are present

401 Unauthorized

Authentication failed: the API key is missing or invalid

Common Causes:

  • Missing Authorization header
  • Invalid API key
  • Expired API key
  • Wrong header format

Solutions:

  • Add 'Authorization: Bearer YOUR_API_KEY' header
  • Generate new API key from dashboard
  • Check for extra spaces in header
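A small helper can catch header formatting problems before a request ever leaves your machine. This is a sketch; `build_auth_header` is a hypothetical name, not part of any SDK:

```python
def build_auth_header(api_key: str) -> dict:
    """Build the Authorization header, stripping stray whitespace
    (leading/trailing spaces in the key are a common cause of 401s)."""
    key = api_key.strip()
    if not key:
        raise ValueError("API key is empty")
    return {"Authorization": f"Bearer {key}"}
```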

403 Forbidden

The API key lacks permission for this resource

Common Causes:

  • API key not authorized for model
  • Account limits exceeded
  • Restricted content detected

Solutions:

  • Upgrade your plan
  • Check model availability in dashboard
  • Review content policy

404 Not Found

The endpoint or resource does not exist

Common Causes:

  • Wrong endpoint URL
  • Typo in model name
  • Deprecated endpoint

Solutions:

  • Check API reference for correct endpoints
  • Verify model name spelling
  • Update to latest API version

429 Too Many Requests

The rate limit was exceeded; implement a backoff strategy

Common Causes:

  • Exceeded requests per minute
  • Concurrent request limits
  • Burst rate limits

Solutions:

  • Implement exponential backoff
  • Reduce request frequency
  • Upgrade to higher tier
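The backoff delay itself is usually computed as exponential growth plus random jitter. The base and cap values below are illustrative choices, not documented platform limits:

```python
import random

def backoff_delay(attempt: int, base: float = 1.0, cap: float = 60.0) -> float:
    """Exponential backoff with full jitter: attempt 1 waits up to 1s,
    attempt 2 up to 2s, attempt 3 up to 4s, ... capped at `cap` seconds."""
    return random.uniform(0, min(cap, base * 2 ** (attempt - 1)))
```

Jitter spreads retries out so that many clients rate-limited at the same moment do not all retry at the same moment.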

500 Internal Server Error

An unexpected error occurred on our servers

Common Causes:

  • Temporary server issues
  • Model unavailable
  • Infrastructure problems

Solutions:

  • Retry with exponential backoff
  • Check status page
  • Contact support if the error persists

502 Bad Gateway

An upstream service is temporarily unavailable

Common Causes:

  • Model service down
  • Network issues
  • Maintenance mode

Solutions:

  • Retry after delay
  • Check status page
  • Switch to alternative model if available

503 Service Unavailable

The service is temporarily overloaded

Common Causes:

  • High traffic
  • Scheduled maintenance
  • Capacity limits

Solutions:

  • Retry with backoff
  • Reduce request rate
  • Check maintenance schedule
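Some servers attach a Retry-After header to 429 and 503 responses. Whether this API sends one is not documented here, but if present it can be parsed like this (the header may carry either a number of seconds or an HTTP-date):

```python
import datetime
from email.utils import parsedate_to_datetime

def retry_after_seconds(value: str) -> float:
    """Parse a Retry-After header value -- either a number of seconds
    ("120") or an HTTP-date -- into a non-negative delay in seconds."""
    try:
        return max(0.0, float(value))
    except ValueError:
        dt = parsedate_to_datetime(value)
        now = datetime.datetime.now(datetime.timezone.utc)
        return max(0.0, (dt - now).total_seconds())
```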

Error Handling Implementation

Node.js with Retry Logic

async function makeRequestWithRetry(requestFn, maxRetries = 3) {
  for (let attempt = 1; attempt <= maxRetries; attempt++) {
    try {
      return await requestFn();
    } catch (error) {
      console.log(`Attempt ${attempt} failed:`, error.message);
      
      // Check if error is retryable
      if (error.status === 429 || error.status >= 500) {
        if (attempt === maxRetries) {
          throw new Error(`Max retries (${maxRetries}) exceeded: ${error.message}`);
        }
        
        // Exponential backoff: 1s, 2s, 4s, 8s...
        const delay = Math.min(1000 * Math.pow(2, attempt - 1), 10000);
        console.log(`Retrying in ${delay}ms...`);
        await new Promise(resolve => setTimeout(resolve, delay));
      } else {
        // Non-retryable error (4xx except 429)
        throw error;
      }
    }
  }
}

// Usage example
try {
  const response = await makeRequestWithRetry(async () => {
    return await openai.chat.completions.create({
      model: 'gpt-4-turbo',
      messages: [{ role: 'user', content: 'Hello' }]
    });
  });
  console.log(response.choices[0].message.content);
} catch (error) {
  console.error('Request failed permanently:', error.message);
}

Python Error Handling

import time
import random
from openai import OpenAI, APIError, RateLimitError, APIConnectionError

def make_request_with_retry(client, max_retries=3):
    for attempt in range(1, max_retries + 1):
        try:
            response = client.chat.completions.create(
                model='gpt-4-turbo',
                messages=[{'role': 'user', 'content': 'Hello'}]
            )
            return response
            
        except RateLimitError as e:
            if attempt == max_retries:
                raise
            # Rate limit hit, wait longer
            delay = min(2 ** attempt + random.uniform(0, 1), 60)
            print(f"Rate limited. Waiting {delay:.1f}s...")
            time.sleep(delay)
            
        except APIConnectionError as e:
            if attempt == max_retries:
                raise
            # Network error, retry quickly
            delay = min(2 ** (attempt - 1), 10)
            print(f"Connection error. Retrying in {delay}s...")
            time.sleep(delay)
            
        except APIError as e:
            if e.status_code >= 500:
                # Server error, retry
                if attempt == max_retries:
                    raise
                delay = min(2 ** attempt, 30)
                print(f"Server error {e.status_code}. Retrying in {delay}s...")
                time.sleep(delay)
            else:
                # Client error (4xx), don't retry
                raise

# Usage
client = OpenAI(api_key='your-key', base_url='https://api.coreapi.com/v1')
try:
    response = make_request_with_retry(client)
    print(response.choices[0].message.content)
except Exception as e:
    print(f"Request failed: {e}")

Debugging Guide

🔍 Request Debugging

  • Enable request logging to see full HTTP requests/responses
  • Verify API key format: should start with 'sk-' or 'core-'
  • Check base URL: https://api.coreapi.com/v1
  • Validate JSON payload with online JSON validators
  • Test endpoints with cURL first before SDK integration
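Validating the payload locally before sending it is a quick way to rule out 400s. The required-field list below is an assumption based on the chat endpoint shown earlier, not an exhaustive schema:

```python
import json

def validate_chat_payload(raw: str) -> dict:
    """Parse a request body and check the fields a chat request needs.
    Raises ValueError for malformed JSON or missing fields."""
    payload = json.loads(raw)  # json.JSONDecodeError is a ValueError
    for field in ("model", "messages"):
        if field not in payload:
            raise ValueError(f"missing required field: {field!r}")
    return payload
```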

⚡ Performance Issues

  • Image/video generation takes 10-60 seconds - show progress UI
  • Use streaming for chat to improve perceived performance
  • Implement request queuing to avoid rate limits
  • Cache responses when appropriate to reduce API calls
  • Use webhooks for long-running tasks when available
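Caching repeated prompts can be as simple as memoizing the request function. This sketch assumes identical prompts should return identical responses, which only holds for deterministic use cases:

```python
import functools

def with_cache(request_fn, maxsize: int = 128):
    """Wrap an API-call function so identical prompts hit an in-memory
    cache instead of the network. Only safe when a repeated answer is
    acceptable for a repeated prompt."""
    @functools.lru_cache(maxsize=maxsize)
    def cached(prompt: str):
        return request_fn(prompt)
    return cached
```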

🔐 Authentication Issues

  • API keys are case-sensitive and should not contain spaces
  • Use environment variables, never hardcode keys in source code
  • Rotate API keys regularly (monthly recommended)
  • Check key permissions in the dashboard
  • Ensure Authorization header format: 'Bearer YOUR_API_KEY'
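Loading the key from an environment variable keeps it out of source control. The variable name below is an arbitrary choice for illustration, not one the platform mandates:

```python
import os

def load_api_key(var: str = "COREAPI_API_KEY") -> str:
    """Read the API key from the environment, failing loudly if unset.
    COREAPI_API_KEY is an assumed variable name; pick your own convention."""
    key = os.environ.get(var, "").strip()
    if not key:
        raise RuntimeError(f"{var} is not set; export it before running")
    return key
```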

🚨 Common Mistakes

  • Using wrong model names (check available models in dashboard)
  • Sending too large payloads (check model-specific limits)
  • Not handling streaming responses correctly
  • Ignoring rate limit headers for optimal batching
  • Not implementing proper error handling for production
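Rate-limit headers, where provided, let a client pace itself instead of retrying blindly. The header name below is a common convention, not a confirmed name for this API; check the actual response headers in your logs:

```python
def should_throttle(headers: dict, reserve: int = 2) -> bool:
    """Return True when the remaining request budget (read from an
    assumed rate-limit response header) is at or below `reserve`."""
    remaining = headers.get("x-ratelimit-remaining-requests")
    if remaining is None:
        return False  # header absent: nothing to act on
    return int(remaining) <= reserve
```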

Still Need Help?

Mori API is an AI model aggregation platform that gives developers a single, unified API to access 50+ AI models across text, image, audio, and video — with transparent pricing, real‑time analytics, and enterprise‑grade reliability.

Copyright © 2024 CoreAPI Inc
All rights reserved