OpenAI-Compatible API

Use the Sylphx AI Gateway with any OpenAI-compatible client library.

Drop-in compatible: just change the base URL
OpenAI SDKs: Python, Node.js, curl
Streaming: server-sent events
200+ models: one API, all providers

Overview

The Sylphx AI Gateway provides OpenAI-compatible REST endpoints. You can use existing OpenAI client libraries by simply changing the base URL to https://sylphx.com/api/v1.

Base URL

https://sylphx.com/api/v1

Authentication

All endpoints require an API key from your Sylphx dashboard. Pass it as a Bearer token in the Authorization header.
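
For raw HTTP calls, set the header yourself. A minimal sketch using Python's requests library (the key below is a placeholder):

import requests

API_KEY = "your-sylphx-api-key"  # placeholder; copy your real key from the dashboard

response = requests.post(
    "https://sylphx.com/api/v1/chat/completions",
    headers={
        "Content-Type": "application/json",
        "Authorization": f"Bearer {API_KEY}",
    },
    json={
        "model": "anthropic/claude-3.5-sonnet",
        "messages": [{"role": "user", "content": "Hello!"}],
    },
)
print(response.json()["choices"][0]["message"]["content"])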

Available Endpoints

Supported OpenAI-compatible endpoints:

Endpoint              Method  Description
/v1/models            GET     List available models
/v1/chat/completions  POST    Create chat completion
/v1/embeddings        POST    Create embeddings
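
For example, assuming the /v1/models endpoint returns the standard OpenAI list format, you can enumerate models with the Python SDK:

from openai import OpenAI

client = OpenAI(
    base_url="https://sylphx.com/api/v1",
    api_key="your-sylphx-api-key",
)

# Iterate over every model the gateway exposes
for model in client.models.list():
    print(model.id)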

Using with OpenAI Python SDK

from openai import OpenAI

client = OpenAI(
    base_url="https://sylphx.com/api/v1",
    api_key="your-sylphx-api-key",
)

# Chat completion
response = client.chat.completions.create(
    model="anthropic/claude-3.5-sonnet",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Hello!"},
    ],
)

print(response.choices[0].message.content)

# Embeddings
embeddings = client.embeddings.create(
    model="openai/text-embedding-3-small",
    input="Hello, world!",
)

print(embeddings.data[0].embedding[:5])  # First 5 values
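
Streaming also works from the Python SDK. A sketch, assuming the gateway emits OpenAI-style delta chunks (see the Streaming section below):

# Streaming chat completion
stream = client.chat.completions.create(
    model="anthropic/claude-3.5-sonnet",
    messages=[{"role": "user", "content": "Tell me a story."}],
    stream=True,
)

for chunk in stream:
    # Some chunks carry no content (e.g. the final chunk), so guard against None
    delta = chunk.choices[0].delta.content
    if delta:
        print(delta, end="", flush=True)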

Using with OpenAI Node.js SDK

import OpenAI from 'openai'

const client = new OpenAI({
  baseURL: 'https://sylphx.com/api/v1',
  apiKey: 'your-sylphx-api-key',
})

// Chat completion
const response = await client.chat.completions.create({
  model: 'anthropic/claude-3.5-sonnet',
  messages: [
    { role: 'system', content: 'You are a helpful assistant.' },
    { role: 'user', content: 'Hello!' },
  ],
})

console.log(response.choices[0].message.content)

// Streaming
const stream = await client.chat.completions.create({
  model: 'anthropic/claude-3.5-sonnet',
  messages: [{ role: 'user', content: 'Tell me a story.' }],
  stream: true,
})

for await (const chunk of stream) {
  process.stdout.write(chunk.choices[0]?.delta?.content || '')
}

Using with curl

# Chat completion
curl https://sylphx.com/api/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer your-sylphx-api-key" \
  -d '{
    "model": "anthropic/claude-3.5-sonnet",
    "messages": [
      {"role": "user", "content": "Hello!"}
    ]
  }'

# List models
curl https://sylphx.com/api/v1/models \
  -H "Authorization: Bearer your-sylphx-api-key"

# Embeddings
curl https://sylphx.com/api/v1/embeddings \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer your-sylphx-api-key" \
  -d '{
    "model": "openai/text-embedding-3-small",
    "input": "Hello, world!"
  }'

Streaming

The chat completions endpoint supports streaming responses via server-sent events. Add "stream": true to your request:

const response = await fetch('https://sylphx.com/api/v1/chat/completions', {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    'Authorization': 'Bearer your-sylphx-api-key',
  },
  body: JSON.stringify({
    model: 'anthropic/claude-3.5-sonnet',
    messages: [{ role: 'user', content: 'Hello!' }],
    stream: true,
  }),
})

const reader = response.body.getReader()
const decoder = new TextDecoder()

while (true) {
  const { done, value } = await reader.read()
  if (done) break

  // Note: this simple parser assumes each read contains complete "data:" lines
  const chunk = decoder.decode(value, { stream: true })
  const lines = chunk.split('\n').filter(line => line.startsWith('data: '))

  for (const line of lines) {
    const data = line.slice(6) // Remove 'data: '
    if (data === '[DONE]') continue

    const json = JSON.parse(data)
    const content = json.choices[0]?.delta?.content
    if (content) process.stdout.write(content)
  }
}

Error Handling

Errors follow the OpenAI error format:

{
  "error": {
    "message": "Invalid API key",
    "type": "invalid_api_key",
    "code": "invalid_api_key"
  }
}

Status  Type                   Description
401     invalid_api_key        Missing or invalid API key
400     invalid_request_error  Invalid request parameters
429     rate_limit_exceeded    Rate limit reached
500     internal_error         Server error
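
With the OpenAI Python SDK, these statuses surface as the SDK's exception types. A sketch, assuming the gateway returns the status codes listed above:

from openai import OpenAI, AuthenticationError, RateLimitError, APIStatusError

client = OpenAI(
    base_url="https://sylphx.com/api/v1",
    api_key="your-sylphx-api-key",
)

try:
    response = client.chat.completions.create(
        model="anthropic/claude-3.5-sonnet",
        messages=[{"role": "user", "content": "Hello!"}],
    )
    print(response.choices[0].message.content)
except AuthenticationError:
    # 401 invalid_api_key: missing or invalid API key
    print("Check your Sylphx API key")
except RateLimitError:
    # 429 rate_limit_exceeded: back off and retry
    print("Rate limit reached, retry later")
except APIStatusError as e:
    # Other non-2xx responses (400 invalid_request_error, 500 internal_error, ...)
    print(f"API error {e.status_code}: {e.message}")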

Why Sylphx?

200+ models: one API key for all providers
Usage tracking: automatic cost analytics
Rate limiting: per-environment quotas
Zero lock-in: switch models instantly