Docs

Everything you need to integrate with the raux.one API.

Quick Start

raux.one is fully OpenAI-compatible. Use any OpenAI SDK; just change the base URL and API key.

Base URL: https://api.raux.one/v1
API Key:  sk-raux-xxxxxxxx (from Dashboard → API Keys)

Python

pip install openai

from openai import OpenAI

client = OpenAI(
    base_url="https://api.raux.one/v1",
    api_key="sk-raux-your-key"
)

response = client.chat.completions.create(
    model="gemini-2.5-flash",
    messages=[{"role": "user", "content": "Hello!"}],
    stream=True
)

for chunk in response:
    print(chunk.choices[0].delta.content or "", end="")

Node.js / TypeScript

npm install openai

import OpenAI from "openai";

const client = new OpenAI({
  baseURL: "https://api.raux.one/v1",
  apiKey: "sk-raux-your-key",
});

const stream = await client.chat.completions.create({
  model: "claude-sonnet-4-6",
  messages: [{ role: "user", content: "Hello!" }],
  stream: true,
});

for await (const chunk of stream) {
  process.stdout.write(chunk.choices[0]?.delta?.content || "");
}

cURL

curl https://api.raux.one/v1/chat/completions \
  -H "Authorization: Bearer sk-raux-your-key" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "deepseek-v3",
    "messages": [{"role": "user", "content": "Hello!"}]
  }'

Endpoints

POST  /v1/chat/completions   Chat completion (streaming supported)
GET   /v1/models             List available models
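Both endpoints take the same bearer-token header. A minimal sketch of building an authenticated request with Python's standard library (no SDK required; the key is a placeholder and `build_request` is an illustrative helper, not part of the API):

```python
import urllib.request

BASE_URL = "https://api.raux.one/v1"
API_KEY = "sk-raux-your-key"  # placeholder; use your real key

def build_request(path: str) -> urllib.request.Request:
    """Build an authenticated GET request for a raux.one endpoint."""
    return urllib.request.Request(
        f"{BASE_URL}{path}",
        headers={"Authorization": f"Bearer {API_KEY}"},
    )

req = build_request("/models")
# urllib.request.urlopen(req) would then fetch the JSON model list
```

Note that the base URL already ends in /v1, so paths are given relative to it ("/models", "/chat/completions").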

Rate Limits

Rate limits depend on your tier (requests per minute):

Free         3 RPM
Pro         15 RPM
Ultra       40 RPM
Enterprise 100 RPM
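If you exceed your tier's limit, requests fail with a rate-limit error; a common client-side pattern is retry with exponential backoff. A sketch of that pattern, where RateLimitError is an illustrative stand-in for whatever 429 exception your HTTP client raises:

```python
import time

class RateLimitError(Exception):
    """Illustrative stand-in for your HTTP client's 429 error."""

def with_backoff(call, max_retries=4, base_delay=1.0):
    """Retry `call` on rate-limit errors, doubling the delay each attempt."""
    for attempt in range(max_retries):
        try:
            return call()
        except RateLimitError:
            if attempt == max_retries - 1:
                raise  # give up after the final attempt
            time.sleep(base_delay * (2 ** attempt))
```

On the Free tier (3 RPM), a base delay of 20 seconds or more is a safer starting point than 1 second.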

Error Handling

Every error response includes a raux_id you can quote when contacting support.

{
  "error": {
    "message": "Rate limit reached. Please wait.",
    "type": "rate_limit_error",
    "raux_id": "raux_a1b2c3d4e5f6g7h8"
  }
}
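
When handling a failure, log the raux_id so support can trace the request. A sketch of extracting it from an error body (the payload shape follows the example above; `parse_error` is an illustrative helper):

```python
import json

def parse_error(body: str):
    """Return (message, raux_id) from a raux.one error response body."""
    err = json.loads(body).get("error", {})
    return err.get("message", "Unknown error"), err.get("raux_id")

body = (
    '{"error": {"message": "Rate limit reached. Please wait.", '
    '"type": "rate_limit_error", "raux_id": "raux_a1b2c3d4e5f6g7h8"}}'
)
message, raux_id = parse_error(body)
# raux_id is "raux_a1b2c3d4e5f6g7h8"
```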