- **Drop-in Compatible**: just change the base URL
- **OpenAI SDKs**: Python, Node.js, curl
- **Streaming**: server-sent events
- **200+ Models**: one API, all providers
Overview
The Sylphx AI Gateway provides OpenAI-compatible REST endpoints. You can use existing OpenAI client libraries by simply changing the base URL to https://sylphx.com/api/v1.
Base URL
https://sylphx.com/api/v1
Authentication
All endpoints require an API key from your Sylphx dashboard. Pass it as a Bearer token in the Authorization header.
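For example, using only Python's standard library (the key below is a placeholder; use your real key from the dashboard):

```python
import urllib.request

API_KEY = "your-sylphx-api-key"  # placeholder: copy your real key from the dashboard

# Attach the Bearer token; every endpoint expects this header.
req = urllib.request.Request(
    "https://sylphx.com/api/v1/models",
    headers={"Authorization": f"Bearer {API_KEY}"},
)

print(req.get_header("Authorization"))  # Bearer your-sylphx-api-key
```

The OpenAI SDKs shown below attach this header for you when you pass `api_key`.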
Available Endpoints
Supported OpenAI-compatible endpoints:
| Endpoint | Method | Description |
|---|---|---|
| /v1/models | GET | List available models |
| /v1/chat/completions | POST | Create a chat completion |
| /v1/embeddings | POST | Create embeddings |
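Assuming the gateway mirrors OpenAI's standard list envelope for `/v1/models` (an assumption; exact fields may vary), the response can be parsed like this:

```python
import json

# Hypothetical /v1/models response body, assuming the OpenAI list envelope.
sample = '''
{
  "object": "list",
  "data": [
    {"id": "anthropic/claude-3.5-sonnet", "object": "model"},
    {"id": "openai/text-embedding-3-small", "object": "model"}
  ]
}
'''

# Collect the model IDs you can pass as the "model" parameter.
model_ids = [m["id"] for m in json.loads(sample)["data"]]
print(model_ids)
```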
Using with OpenAI Python SDK
from openai import OpenAI

client = OpenAI(
    base_url="https://sylphx.com/api/v1",
    api_key="your-sylphx-api-key",
)

# Chat completion
response = client.chat.completions.create(
    model="anthropic/claude-3.5-sonnet",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Hello!"},
    ],
)
print(response.choices[0].message.content)

# Embeddings
embeddings = client.embeddings.create(
    model="openai/text-embedding-3-small",
    input="Hello, world!",
)
print(embeddings.data[0].embedding[:5])  # First 5 values

Using with OpenAI Node.js SDK
import OpenAI from 'openai'
const client = new OpenAI({
  baseURL: 'https://sylphx.com/api/v1',
  apiKey: 'your-sylphx-api-key',
})

// Chat completion
const response = await client.chat.completions.create({
  model: 'anthropic/claude-3.5-sonnet',
  messages: [
    { role: 'system', content: 'You are a helpful assistant.' },
    { role: 'user', content: 'Hello!' },
  ],
})
console.log(response.choices[0].message.content)

// Streaming
const stream = await client.chat.completions.create({
  model: 'anthropic/claude-3.5-sonnet',
  messages: [{ role: 'user', content: 'Tell me a story.' }],
  stream: true,
})
for await (const chunk of stream) {
  process.stdout.write(chunk.choices[0]?.delta?.content || '')
}

Using with curl
# Chat completion
curl https://sylphx.com/api/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer your-sylphx-api-key" \
  -d '{
    "model": "anthropic/claude-3.5-sonnet",
    "messages": [
      {"role": "user", "content": "Hello!"}
    ]
  }'
# List models
curl https://sylphx.com/api/v1/models \
  -H "Authorization: Bearer your-sylphx-api-key"
# Embeddings
curl https://sylphx.com/api/v1/embeddings \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer your-sylphx-api-key" \
  -d '{
    "model": "openai/text-embedding-3-small",
    "input": "Hello, world!"
  }'

Streaming
The chat completions endpoint supports streaming responses. Add "stream": true to your request:
const response = await fetch('https://sylphx.com/api/v1/chat/completions', {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    'Authorization': 'Bearer your-sylphx-api-key',
  },
  body: JSON.stringify({
    model: 'anthropic/claude-3.5-sonnet',
    messages: [{ role: 'user', content: 'Hello!' }],
    stream: true,
  }),
})

const reader = response.body.getReader()
const decoder = new TextDecoder()

while (true) {
  const { done, value } = await reader.read()
  if (done) break

  const chunk = decoder.decode(value)
  const lines = chunk.split('\n').filter(line => line.startsWith('data: '))

  for (const line of lines) {
    const data = line.slice(6) // Remove the 'data: ' prefix
    if (data === '[DONE]') continue

    const json = JSON.parse(data)
    const content = json.choices[0]?.delta?.content
    if (content) process.stdout.write(content)
  }
}

Error Handling
Errors follow the OpenAI error format:
{
  "error": {
    "message": "Invalid API key",
    "type": "invalid_api_key",
    "code": "invalid_api_key"
  }
}

| Status | Type | Description |
|---|---|---|
| 401 | invalid_api_key | Missing or invalid API key |
| 400 | invalid_request_error | Invalid request parameters |
| 429 | rate_limit_exceeded | Rate limit reached |
| 500 | internal_error | Server error |
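A minimal client-side sketch, assuming the error envelope and status codes above (the `describe_error` helper and sample body are illustrative, not part of the gateway):

```python
import json

# Hard-coded sample of the error envelope shown above.
sample_body = '{"error": {"message": "Invalid API key", "type": "invalid_api_key", "code": "invalid_api_key"}}'

def describe_error(status: int, raw_body: str) -> str:
    err = json.loads(raw_body)["error"]
    # 429 and 5xx are worth retrying with backoff; other 4xx client errors are not.
    kind = "retryable" if status == 429 or status >= 500 else "fatal"
    return f"{status} {err['code']}: {err['message']} ({kind})"

print(describe_error(401, sample_body))  # 401 invalid_api_key: Invalid API key (fatal)
```

Treating 429 and 5xx as retryable lets you wrap requests in a simple exponential-backoff loop while failing fast on bad keys or malformed requests.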
Why Sylphx?
- **200+ models**: one API key for all providers
- **Usage tracking**: automatic cost analytics
- **Rate limiting**: per-environment quotas
- **Zero lock-in**: switch models instantly