Javascript

All required calls to the AKI.IO API are contained in the aki_io.js Javascript file.

It provides all the methods required for authentication, making API calls and handling streaming.

For in-house or local web services, aki_io.js can be included directly in your HTML files with the following line:

<script src="/static/js/aki_io.js"></script>
Bear in mind that this should never be done on public websites, as your API key will be exposed and can easily be misused without your permission. Always keep your API key secret!

It is more useful to include the aki_io.js Javascript in your backend, which acts as a proxy for your clients. It can then be used like an npm package by including it with the following line, provided a local copy of aki_io.js exists at the specified path.

const { Aki, doAPIRequest } = require('/js/aki_io');
An official Javascript npm package is in the works and will soon be available.

Simple LLM Chat Example

A complete Node.js example that makes a blocking AKI.IO LLM request. To run the llm_simple_example.js example, execute:

node llm_simple_example.js

const { Aki, doAPIRequest } = require('../aki_io');

const ENDPOINT = 'llama3_8b_chat';
const API_KEY = 'fc3a8c50-b12b-4d6a-ba07-c9f6a6c32c37';

const chatContext = [
    {role: 'system', content: 'You are a helpful assistant named AKI.' },
    {role: 'assistant', content: 'How can I help you today?' },
    {role: 'user', content: 'Tell me a joke' }
];

const params = {
    chat_context: JSON.stringify(chatContext),
    top_k: 40,
    top_p: 0.9,
    temperature: 0.8,
    max_gen_tokens: 1000
};

doAPIRequest(
    ENDPOINT,
    API_KEY,
    params,
    (result) => {
        if (result.success) {
            console.log('\nAPI JSON response:', result);
            console.log('\nChat response:\n', result.text);
            console.log('\nGenerated Tokens:', result.num_generated_tokens);
        }
        else {
            console.error('API Error:', result.error_code, '-', result.error);
        }
    }
);
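The chat_context array carries the entire conversation, so for multi-turn chats you append the assistant's reply and the next user message to the history before the following request. A minimal sketch of that bookkeeping (the appendTurn helper is illustrative, not part of aki_io.js):

```javascript
// Conversation history in the same shape as chat_context above.
const history = [
    { role: 'system', content: 'You are a helpful assistant named AKI.' }
];

// Hypothetical helper: record the previous assistant reply (if any) and
// the next user message, then build the params for the next request.
function appendTurn(history, userMessage, assistantReply) {
    if (assistantReply !== undefined) {
        history.push({ role: 'assistant', content: assistantReply });
    }
    history.push({ role: 'user', content: userMessage });
    return { chat_context: JSON.stringify(history) };
}

// First turn: only the user message is added.
let params = appendTurn(history, 'Tell me a joke');
// ...call doAPIRequest with params and read result.text...

// Second turn: feed the model's reply back in before the new question.
params = appendTurn(history, 'Another one, please',
    'Why did the scarecrow win an award? He was outstanding in his field.');
```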

Streaming LLM Chat Example

A complete Node.js example that makes a streaming AKI.IO LLM request. To run the llm_stream_example.js example, execute:

node llm_stream_example.js

const { Aki, doAPIRequest } = require('../aki_io');

const aki = new Aki('llama3_8b_chat', 'fc3a8c50-b12b-4d6a-ba07-c9f6a6c32c37');

const chatContext = [
    {role: 'system', content: 'You are a helpful assistant named AKI.' },
    {role: 'assistant', content: 'How can I help you today?' },
    {role: 'user', content: 'Tell me a funny story with more than 100 words' }
];

const params = {
    chat_context: JSON.stringify(chatContext),
    top_k: 40,
    top_p: 0.9,
    temperature: 0.8,
    max_gen_tokens: 1000
};

// Tracks how much of the streamed text has already been printed
let output_position = 0;

console.log('\n🤖 Assistant: ');

// Make the API request
aki.doAPIRequest(
    params,
    (result) => {
        if (!result || result.success === false) {
            console.error('❌ API Error:', result.error || 'Unknown error');
            return;
        }

        process.stdout.write(result.text.slice(output_position) + '\n');

        console.log('\n📊 Stats:', {
            'Generated Tokens': result.num_generated_tokens,
            'Compute Duration': `${result.compute_duration.toFixed(2)}s`,
            'Total Duration': `${result.total_duration.toFixed(2)}s`
        });

        console.log('\n✨ Chat completed!');
    },
    (progress, progress_data) => {
        if (progress_data) {
            const text = progress_data.text;
            process.stdout.write(text.slice(output_position));
            output_position = text.length;
        }
    }
);
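The progress callback receives the full text generated so far, not just the new tokens, so the example tracks output_position and prints only the newly generated slice each time. The same pattern, simulated with canned cumulative updates:

```javascript
// Simulated progress updates: each progress_data.text contains
// everything generated so far, as delivered by the streaming API.
const updates = ['Once', 'Once upon', 'Once upon a time.'];

let outputPosition = 0;
let printed = '';

for (const text of updates) {
    const delta = text.slice(outputPosition); // only the new characters
    printed += delta;                         // stands in for process.stdout.write(delta)
    outputPosition = text.length;
}

console.log(printed); // the full streamed text, each character printed once
```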

More Javascript examples that use AKI.IO for various use cases and model types can be found here.