Images

Creating images

Generate a new image from a text prompt, or remix one or more existing images. Returns a hosted URL on qaves.me.

Endpoint

POST https://qaves.me/api/v1/images/generations

Headers

Authorization (string, required)
Bearer sk_qu_… — see Authentication.
Content-Type (string, required)
Must be application/json.

Body parameters

model (string, required)
Either gpt-image-2 (OpenAI, supports quality) or z-image-turbo (Replicate, fast preview model).
prompt (string, required)
What you want to see. Plain text, any length the model accepts. Example: "A neon orange fox in a forest, cinematic lighting".
quality (string, optional)
One of low (default), medium, high. Higher quality is slower and uses more upstream credit when you bring your own key.
image (string | string[], optional)
One or more images to remix / edit. Each entry is either:
  • An https:// URL the server can fetch.
  • A base64 data URI, e.g. data:image/png;base64,iVBORw0KGgo….
Pass a single string for one input image, or an array for multiple.
bucket (string, optional)
Label to group this image under. Auto-normalized to lowercase kebab-case ("Cats And Dogs" → "cats-and-dogs"), max 64 chars. Defaults to default. New buckets are created on first use. See Working with buckets.
optimize_prompt (boolean, optional)
When true, your prompt is first rewritten by gpt-5-nano for better visual results, then the rewritten prompt is sent to the image model. The response includes a prompt_optimization object with original and optimized values. Defaults to false.

Response

created_at (integer, optional)
Unix epoch (seconds) when the image was returned.
model (string, optional)
The model you requested.
quality (string | null, optional)
The quality used.
prompt (string, optional)
The prompt you sent (echoed back).
bucket (string, optional)
The normalized bucket name the image was stored in.
size (string, optional)
Output dimensions, currently always 1024x1024.
key_source (string, optional)
"free" if served from your free quota, "user" if it fell back to your own OpenAI key.
completion_ms (integer, optional)
Total server-side time in milliseconds.
rate_limit (object, optional)
Your current free-quota status: limit, used, remaining, and next_available_at (Unix epoch seconds), as shown in the example response below.
data (array, optional)
Array of generated images. Each entry has a url string pointing at the hosted PNG.
prompt_optimization (object, optional)
Only present when optimize_prompt: true. Contains original (your prompt), optimized (the rewritten prompt actually sent to the image model), and model (always gpt-5-nano).
{
  "created_at": 1778833800,
  "model": "gpt-image-2",
  "quality": "low",
  "prompt": "A neon orange fox in a forest",
  "bucket": "default",
  "size": "1024x1024",
  "key_source": "free",
  "completion_ms": 4821,
  "rate_limit": {
    "limit": 20,
    "used": 3,
    "remaining": 17,
    "next_available_at": 1778920200
  },
  "data": [{ "url": "https://qaves.me/api/i/<id>.png" }]
}
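In client code you will usually want just the image URL and the remaining quota. A minimal Python sketch (the field names come from the example response above; the helper name is ours, and error handling is omitted):

```python
def extract_result(body: dict) -> tuple[str, int]:
    """Return the hosted image URL and how many free generations remain.

    Falls back to 0 remaining if the rate_limit object is absent.
    """
    url = body["data"][0]["url"]
    remaining = body.get("rate_limit", {}).get("remaining", 0)
    return url, remaining

# Using the example response from above:
sample = {
    "key_source": "free",
    "rate_limit": {"limit": 20, "used": 3, "remaining": 17,
                   "next_available_at": 1778920200},
    "data": [{"url": "https://qaves.me/api/i/<id>.png"}],
}
url, remaining = extract_result(sample)  # remaining is 17 here
```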

Examples

Minimal text-to-image

curl https://qaves.me/api/v1/images/generations \
  -H "Authorization: Bearer sk_qu_YOUR_KEY" \
  -H "Content-Type: application/json" \
  -d '{ "model": "gpt-image-2", "prompt": "A neon orange fox" }'

Remix an existing image

{
  "model": "gpt-image-2",
  "prompt": "Make it Studio Ghibli style",
  "image": "https://example.com/photo.jpg"
}
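Instead of a public URL, the image parameter also accepts a base64 data URI, which is handy when the source image is a local file. A sketch of building one in Python (the helper name is ours; adjust the MIME type to match your file):

```python
import base64
from pathlib import Path

def to_data_uri(path: str, mime: str = "image/png") -> str:
    """Encode a local file as a base64 data URI for the image parameter."""
    encoded = base64.b64encode(Path(path).read_bytes()).decode("ascii")
    return f"data:{mime};base64,{encoded}"

# e.g.
# payload = {
#     "model": "gpt-image-2",
#     "prompt": "Make it Studio Ghibli style",
#     "image": to_data_uri("photo.png"),
# }
```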

Combine multiple inputs

{
  "model": "gpt-image-2",
  "prompt": "Put the character from image 1 into the scene from image 2",
  "image": [
    "https://example.com/character.png",
    "https://example.com/scene.png"
  ]
}

Save into a custom bucket

{
  "model": "gpt-image-2",
  "prompt": "A logo for a coffee shop",
  "bucket": "client-acme"
}
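The server normalizes the bucket name itself, but it can be useful to predict the stored name client-side, e.g. for display or for querying the bucket later. A rough Python approximation of the documented rules (lowercase kebab-case, capped at 64 chars); this is an assumption about the exact algorithm, not the server's code:

```python
import re

def normalize_bucket(label: str) -> str:
    """Approximate the documented normalization: lowercase kebab-case,
    capped at 64 characters. The server is authoritative; treat this
    only as a client-side prediction of the stored name."""
    kebab = re.sub(r"[^a-z0-9]+", "-", label.lower()).strip("-")
    return kebab[:64]
```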

Optimize the prompt before generating

Let gpt-5-nano rewrite a short prompt into a more descriptive one before it hits the image model. Useful when you want better-looking results from minimal user input.

{
  "model": "gpt-image-2",
  "prompt": "a fox",
  "optimize_prompt": true
}
{
  "created_at": 1778833800,
  "model": "gpt-image-2",
  "prompt": "A vibrant red fox standing in a sunlit forest clearing, ...",
  "prompt_optimization": {
    "original": "a fox",
    "optimized": "A vibrant red fox standing in a sunlit forest clearing, ...",
    "model": "gpt-5-nano"
  },
  "data": [{ "url": "https://qaves.me/api/i/<id>.png" }]
}

Heads up

Generations can take 5–30 seconds depending on model and quality. Set your HTTP client timeout to at least 120 seconds. The connection stays open until the image is ready.
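With only the Python standard library, the call sketches out as follows; the payload mirrors the minimal curl example, and urllib's timeout covers the long-lived connection (the helper name is ours):

```python
import json
import urllib.request

API_URL = "https://qaves.me/api/v1/images/generations"
TIMEOUT_S = 120  # generations take 5-30 s; leave generous headroom

def build_request(api_key: str, payload: dict) -> urllib.request.Request:
    """Assemble the POST request; pass the result to urlopen."""
    return urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

# req = build_request("sk_qu_YOUR_KEY",
#                     {"model": "gpt-image-2", "prompt": "A neon orange fox"})
# with urllib.request.urlopen(req, timeout=TIMEOUT_S) as resp:
#     print(json.load(resp)["data"][0]["url"])
```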