Using AKI.IO as an OpenAI API Replacement

For easy integration into existing AI tools and applications, AKI.IO can be used as a GDPR-compliant drop-in replacement for OpenAI API services.

To send OpenAI-compatible API requests, use the following base URL:

https://aki.io/v1

Most applications that support OpenAI integration allow you to configure a custom API base URL. Replace the OpenAI endpoint with the AKI.IO endpoint above, and enter your AKI.IO API key wherever an OpenAI API key is requested. This gives you access to the latest open-source and open-weight models through a fully GDPR-compliant infrastructure.
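If your tool does not expose such a setting, the same swap can be done directly in code. The sketch below (Python standard library only) builds an OpenAI-style JSON request against the AKI.IO base URL; the key value is a placeholder you must replace with your own:

```python
import json
import urllib.request

AKI_BASE_URL = "https://aki.io/v1"    # replaces the OpenAI base URL
AKI_API_KEY = "YOUR_AKI_API_KEY"      # placeholder; use your AKI.IO key

def openai_compatible_request(path: str, payload: dict) -> urllib.request.Request:
    """Build an OpenAI-style JSON POST request against the AKI.IO base URL."""
    return urllib.request.Request(
        AKI_BASE_URL + path,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {AKI_API_KEY}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
```

Any endpoint path described below can be passed to this helper; only the base URL and the key differ from a regular OpenAI setup.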

Supported OpenAI endpoints

/v1/models

The /v1/models endpoint returns a list of all language-model resources available to your AKI.IO API key. When called, it responds with a JSON payload containing an array of model objects.
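A minimal sketch of querying this endpoint, assuming the response follows the usual OpenAI list shape (`{"object": "list", "data": [{"id": ...}, ...]}`):

```python
import json
import urllib.request

AKI_BASE_URL = "https://aki.io/v1"

def parse_model_ids(payload: dict) -> list:
    """Extract model IDs from an OpenAI-style list payload:
    {"object": "list", "data": [{"id": "..."}, ...]}"""
    return [m["id"] for m in payload.get("data", [])]

def list_models(api_key: str) -> list:
    """GET /v1/models with Bearer auth and return the available model IDs."""
    req = urllib.request.Request(
        f"{AKI_BASE_URL}/models",
        headers={"Authorization": f"Bearer {api_key}"},
    )
    with urllib.request.urlopen(req) as resp:
        return parse_model_ids(json.load(resp))
```

The IDs returned here are the values you pass as the model parameter to the other endpoints.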

/v1/chat/completions

The /v1/chat/completions endpoint generates conversational replies; instead of OpenAI’s GPT models, any LLM available on AKI.IO can be selected as the model. Send a POST request that includes a list of message objects, each with a role (system, user, or assistant) and content, plus optional parameters such as temperature, max_tokens, and stop sequences; the API returns a JSON response containing one or more generated message choices. Response streaming is supported.
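The request described above can be sketched as follows; the model name and API key are placeholders, and the commented-out send shows where a real call would go:

```python
import json
import urllib.request

def build_chat_request(messages, model, api_key,
                       base_url="https://aki.io/v1",
                       temperature=0.7, max_tokens=256):
    """Build a POST request for /v1/chat/completions in the OpenAI format."""
    payload = {
        "model": model,            # any ID returned by /v1/models
        "messages": messages,      # [{"role": ..., "content": ...}, ...]
        "temperature": temperature,
        "max_tokens": max_tokens,
    }
    return urllib.request.Request(
        f"{base_url}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

# Sending the request (requires a valid AKI.IO API key):
# req = build_chat_request(
#     [{"role": "system", "content": "You are a helpful assistant."},
#      {"role": "user", "content": "Say hello."}],
#     model="YOUR_MODEL_ID", api_key="YOUR_AKI_API_KEY")
# with urllib.request.urlopen(req) as resp:
#     reply = json.load(resp)["choices"][0]["message"]["content"]
```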

/v1/images/generations

This endpoint creates images from textual prompts using the latest diffusion models. Send a POST request with a prompt (or an array of prompts) and optional parameters such as n (the number of images) and size (a resolution such as 256x256, 512x512, or 1024x1024). The API returns a JSON payload containing a data array in which each entry holds the generated image as a base-64-encoded string, along with metadata about the request.
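A sketch of building the request and decoding the response; the b64_json field name follows the OpenAI convention and is an assumption here, as are the placeholder key and prompt:

```python
import base64
import json
import urllib.request

def build_image_request(prompt, api_key, n=1, size="1024x1024",
                        base_url="https://aki.io/v1"):
    """Build a POST request for /v1/images/generations in the OpenAI format."""
    payload = {"prompt": prompt, "n": n, "size": size}
    return urllib.request.Request(
        f"{base_url}/images/generations",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

def save_images(response_payload, stem="image"):
    """Decode the base-64 image strings from the response's data array
    and write them to disk. Assumes the OpenAI 'b64_json' field name."""
    paths = []
    for i, entry in enumerate(response_payload["data"]):
        path = f"{stem}-{i}.png"
        with open(path, "wb") as f:
            f.write(base64.b64decode(entry["b64_json"]))
        paths.append(path)
    return paths
```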

/v1/images/edits

This endpoint modifies an existing image according to a textual instruction using the latest open-source image-editing model. The API returns the image modified to your request: you can replace colors or objects in an image, transform the image style, exchange the background or setting, or even show the scene from a different angle.
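Image edits are typically sent as multipart/form-data (the image file plus the prompt). A minimal sketch of assembling such a body by hand; the field names ("image", "prompt") follow the OpenAI convention and should be checked against the AKI.IO documentation:

```python
def build_image_edit_body(image_bytes: bytes, prompt: str):
    """Assemble a multipart/form-data body for an image-edit request.
    Field names ('image', 'prompt') follow the OpenAI convention."""
    boundary = "aki-io-form-boundary-0001"
    lines = [
        f"--{boundary}".encode(),
        b'Content-Disposition: form-data; name="prompt"',
        b"",
        prompt.encode("utf-8"),
        f"--{boundary}".encode(),
        b'Content-Disposition: form-data; name="image"; filename="input.png"',
        b"Content-Type: image/png",
        b"",
        image_bytes,
        f"--{boundary}--".encode(),
    ]
    body = b"\r\n".join(lines) + b"\r\n"
    content_type = f"multipart/form-data; boundary={boundary}"
    return body, content_type
```

The returned body and Content-Type header would then be POSTed to https://aki.io/v1/images/edits with the usual Bearer authorization.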