Actions

The ballerinax/openai package exposes the following client:

| Client | Purpose |
|--------|---------|
| Client | Comprehensive client for the full OpenAI REST API: chat, images, audio, embeddings, assistants, files, fine-tuning, and more. |

Configuration

| Field | Type | Default | Description |
|-------|------|---------|-------------|
| auth | http:BearerTokenConfig | Required | Bearer token configuration containing your OpenAI API key. |
| httpVersion | http:HttpVersion | HTTP_2_0 | HTTP protocol version. |
| timeout | decimal | 60 | Request timeout in seconds. |
| retryConfig | http:RetryConfig | () | Retry configuration for failed requests. |
| secureSocket | http:ClientSecureSocket | () | SSL/TLS configuration. |
| proxy | http:ProxyConfig | () | Proxy server configuration. |

Initializing the client

import ballerinax/openai;

configurable string token = ?;

openai:Client openaiClient = check new ({
    auth: {token}
});
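
The optional fields from the configuration table above can also be set at initialization. A minimal sketch combining a longer timeout with a retry policy (the values are illustrative; the nested field names follow Ballerina's http:RetryConfig):

```ballerina
import ballerinax/openai;

configurable string token = ?;

// Illustrative values: 120-second timeout, up to 3 retries, 2 seconds apart.
openai:Client openaiClient = check new ({
    auth: {token},
    timeout: 120,
    retryConfig: {
        count: 3,
        interval: 2
    }
});
```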

Operations

Chat completions

Create chat completion

Creates a model response for the given chat conversation. Supports text generation, function calling, and structured outputs.

Parameters:

| Name | Type | Required | Description |
|------|------|----------|-------------|
| request | CreateChatCompletionRequest | Yes | Chat completion request containing model, messages, and optional parameters. |

Returns: CreateChatCompletionResponse|error

Sample code:

openai:CreateChatCompletionResponse response = check openaiClient->/chat/completions.post({
    model: "gpt-4o",
    messages: [
        {role: "system", content: "You are a helpful assistant."},
        {role: "user", content: "What is the capital of France?"}
    ]
});

Sample response:

{
  "id": "chatcmpl-abc123",
  "object": "chat.completion",
  "created": 1710000000,
  "model": "gpt-4o-2024-08-06",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "The capital of France is Paris."
      },
      "finish_reason": "stop"
    }
  ],
  "usage": {"prompt_tokens": 25, "completion_tokens": 8, "total_tokens": 33}
}

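
To read the assistant's reply, index into choices on the returned value. A sketch, assuming the generated records mirror the REST payload (content is nilable, so a nil check is shown):

```ballerina
import ballerina/io;

// Text of the first choice; may be absent, e.g. for tool calls.
string? reply = response.choices[0].message.content;
if reply is string {
    io:println(reply);
}
```
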
List chat completions

Lists stored chat completions. Only chat completions created with the store parameter set to true are returned.

Parameters:

| Name | Type | Required | Description |
|------|------|----------|-------------|
| queries | GetChatCompletionsQueries | No | Query parameters including model, after, limit, and order. |

Returns: ChatCompletionList|error

Sample code:

openai:ChatCompletionList completions = check openaiClient->/chat/completions();

Sample response:

{
  "object": "list",
  "data": [
    {"id": "chatcmpl-abc123", "object": "chat.completion", "model": "gpt-4o", "created": 1710000000}
  ],
  "has_more": false
}

Retrieve chat completion

Retrieves a stored chat completion by its ID.

Parameters:

| Name | Type | Required | Description |
|------|------|----------|-------------|
| completionId | string | Yes | The ID of the chat completion to retrieve. |

Returns: CreateChatCompletionResponse|error

Sample code:

openai:CreateChatCompletionResponse completion = check openaiClient->/chat/completions/["chatcmpl-abc123"]();

Sample response:

{
  "id": "chatcmpl-abc123",
  "object": "chat.completion",
  "model": "gpt-4o",
  "choices": [
    {"index": 0, "message": {"role": "assistant", "content": "Hello!"}, "finish_reason": "stop"}
  ]
}

Delete chat completion

Deletes a stored chat completion. Only completions created with the store parameter set to true can be deleted.

Parameters:

| Name | Type | Required | Description |
|------|------|----------|-------------|
| completionId | string | Yes | The ID of the chat completion to delete. |

Returns: ChatCompletionDeleted|error

Sample code:

openai:ChatCompletionDeleted result = check openaiClient->/chat/completions/["chatcmpl-abc123"].delete();

Sample response:

{"id": "chatcmpl-abc123", "object": "chat.completion.deleted", "deleted": true}

Images

Create image

Creates an image given a prompt using DALL·E models.

Parameters:

| Name | Type | Required | Description |
|------|------|----------|-------------|
| request | CreateImageRequest | Yes | Image generation request containing prompt, model, size, and quality options. |

Returns: ImagesResponse|error

Sample code:

openai:ImagesResponse images = check openaiClient->/images/generations.post({
    model: "dall-e-3",
    prompt: "A serene mountain landscape at sunset",
    n: 1,
    size: "1024x1024"
});

Sample response:

{
  "created": 1710000000,
  "data": [
    {"url": "https://oaidalleapiprodscus.blob.core.windows.net/...", "revised_prompt": "A serene mountain landscape at sunset..."}
  ]
}

Create image edit

Creates an edited or extended image given an original image and a prompt.

Parameters:

| Name | Type | Required | Description |
|------|------|----------|-------------|
| request | CreateImageEditRequest | Yes | Image edit request containing the image, mask, prompt, and options. |

Returns: ImagesResponse|error

Sample code:

import ballerina/io;

openai:ImagesResponse editedImages = check openaiClient->/images/edits.post({
    image: {fileContent: check io:fileReadBytes("original.png"), fileName: "original.png"},
    prompt: "Add a rainbow in the sky",
    n: 1,
    size: "1024x1024"
});

Sample response:

{
  "created": 1710000000,
  "data": [
    {"url": "https://oaidalleapiprodscus.blob.core.windows.net/..."}
  ]
}

Audio

Create speech

Generates audio from the input text using a TTS model.

Parameters:

| Name | Type | Required | Description |
|------|------|----------|-------------|
| request | CreateSpeechRequest | Yes | Speech request containing model, input text, and voice. |

Returns: byte[]|error

Sample code:

byte[] audioBytes = check openaiClient->/audio/speech.post({
    model: "tts-1",
    input: "Hello, welcome to the Ballerina OpenAI connector!",
    voice: "alloy"
});

Create transcription

Transcribes audio into text in the input language.

Parameters:

| Name | Type | Required | Description |
|------|------|----------|-------------|
| request | CreateTranscriptionRequest | Yes | Transcription request containing the audio file and model. |

Returns: InlineResponse200|error

Sample code:

import ballerina/io;

openai:InlineResponse200 transcription = check openaiClient->/audio/transcriptions.post({
    file: {fileContent: check io:fileReadBytes("audio.mp3"), fileName: "audio.mp3"},
    model: "whisper-1"
});

Sample response:

{"text": "Hello, this is a test transcription of the audio file."}

Embeddings

Create embeddings

Creates an embedding vector representing the input text.

Parameters:

| Name | Type | Required | Description |
|------|------|----------|-------------|
| request | CreateEmbeddingRequest | Yes | Embedding request containing the input text and model. |

Returns: CreateEmbeddingResponse|error

Sample code:

openai:CreateEmbeddingResponse embeddings = check openaiClient->/embeddings.post({
    model: "text-embedding-3-small",
    input: "The quick brown fox jumps over the lazy dog"
});

Sample response:

{
  "object": "list",
  "data": [
    {"object": "embedding", "index": 0, "embedding": [0.0023, -0.0091, 0.0154, ...]}
  ],
  "model": "text-embedding-3-small",
  "usage": {"prompt_tokens": 9, "total_tokens": 9}
}
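
Embedding vectors are usually compared with cosine similarity. A self-contained sketch (not part of the connector), assuming each embedding deserializes to a float[]:

```ballerina
// Cosine similarity: dot(a, b) / (|a| * |b|).
// Assumes a and b have the same length, as same-model embeddings do.
function cosineSimilarity(float[] a, float[] b) returns float {
    float dot = 0.0;
    float normA = 0.0;
    float normB = 0.0;
    foreach int i in 0 ..< a.length() {
        dot += a[i] * b[i];
        normA += a[i] * a[i];
        normB += b[i] * b[i];
    }
    return dot / (normA.sqrt() * normB.sqrt());
}
```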

Assistants

Create assistant

Creates an assistant with a model and instructions.

Parameters:

| Name | Type | Required | Description |
|------|------|----------|-------------|
| request | CreateAssistantRequest | Yes | Assistant creation request containing model, name, instructions, and tools. |

Returns: AssistantObject|error

Sample code:

openai:AssistantObject assistant = check openaiClient->/assistants.post({
    model: "gpt-4o",
    name: "Financial Advisor",
    instructions: "You are a personal financial advisor. Provide clear, actionable advice."
});

Sample response:

{
  "id": "asst_abc123",
  "object": "assistant",
  "name": "Financial Advisor",
  "model": "gpt-4o",
  "instructions": "You are a personal financial advisor. Provide clear, actionable advice.",
  "tools": [],
  "created_at": 1710000000
}

List assistants

Returns a list of assistants.

Parameters:

| Name | Type | Required | Description |
|------|------|----------|-------------|
| queries | ListAssistantsQueries | No | Query parameters including limit, order, after, and before. |

Returns: ListAssistantsResponse|error

Sample code:

openai:ListAssistantsResponse assistants = check openaiClient->/assistants();

Sample response:

{
  "object": "list",
  "data": [
    {"id": "asst_abc123", "object": "assistant", "name": "Financial Advisor", "model": "gpt-4o"}
  ],
  "has_more": false
}

Delete assistant

Deletes an assistant.

Parameters:

| Name | Type | Required | Description |
|------|------|----------|-------------|
| assistantId | string | Yes | The ID of the assistant to delete. |

Returns: DeleteAssistantResponse|error

Sample code:

openai:DeleteAssistantResponse result = check openaiClient->/assistants/["asst_abc123"].delete();

Sample response:

{"id": "asst_abc123", "object": "assistant.deleted", "deleted": true}

Threads & messages

Create thread

Creates a thread for an assistant conversation.

Parameters:

| Name | Type | Required | Description |
|------|------|----------|-------------|
| request | CreateThreadRequest | Yes | Thread creation request, optionally containing initial messages. |

Returns: ThreadObject|error

Sample code:

openai:ThreadObject thread = check openaiClient->/threads.post({});

Sample response:

{
  "id": "thread_abc123",
  "object": "thread",
  "created_at": 1710000000,
  "metadata": {}
}

Create message

Creates a message within a thread.

Parameters:

| Name | Type | Required | Description |
|------|------|----------|-------------|
| threadId | string | Yes | The ID of the thread to create the message in. |
| request | CreateMessageRequest | Yes | Message request containing the role and content. |

Returns: MessageObject|error

Sample code:

openai:MessageObject message = check openaiClient->/threads/[thread.id]/messages.post({
    role: "user",
    content: "What are some good strategies for saving money?"
});

Sample response:

{
  "id": "msg_abc123",
  "object": "thread.message",
  "thread_id": "thread_abc123",
  "role": "user",
  "content": [{"type": "text", "text": {"value": "What are some good strategies for saving money?"}}],
  "created_at": 1710000000
}

List messages

Returns a list of messages for a given thread.

Parameters:

| Name | Type | Required | Description |
|------|------|----------|-------------|
| threadId | string | Yes | The ID of the thread. |
| queries | ListMessagesQueries | No | Query parameters including limit, order, after, and before. |

Returns: ListMessagesResponse|error

Sample code:

openai:ListMessagesResponse messages = check openaiClient->/threads/[thread.id]/messages();

Sample response:

{
  "object": "list",
  "data": [
    {"id": "msg_abc123", "object": "thread.message", "role": "assistant", "content": [{"type": "text", "text": {"value": "Here are some strategies..."}}]}
  ],
  "has_more": false
}

Runs

Create run

Creates a run for a thread with a specified assistant.

Parameters:

| Name | Type | Required | Description |
|------|------|----------|-------------|
| threadId | string | Yes | The ID of the thread to run. |
| request | CreateRunRequest | Yes | Run request containing the assistant ID and optional parameters. |

Returns: RunObject|error

Sample code:

openai:RunObject run = check openaiClient->/threads/[thread.id]/runs.post({
    assistant_id: assistant.id
});

Sample response:

{
  "id": "run_abc123",
  "object": "thread.run",
  "thread_id": "thread_abc123",
  "assistant_id": "asst_abc123",
  "status": "queued",
  "created_at": 1710000000
}

Retrieve run

Retrieves a run to check its status and results.

Parameters:

| Name | Type | Required | Description |
|------|------|----------|-------------|
| threadId | string | Yes | The ID of the thread. |
| runId | string | Yes | The ID of the run to retrieve. |

Returns: RunObject|error

Sample code:

openai:RunObject runStatus = check openaiClient->/threads/[thread.id]/runs/[run.id]();

Sample response:

{
  "id": "run_abc123",
  "object": "thread.run",
  "status": "completed",
  "assistant_id": "asst_abc123",
  "thread_id": "thread_abc123"
}

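
Runs execute asynchronously, so retrieval is typically wrapped in a polling loop until the run reaches a terminal status. A sketch (status values follow the OpenAI REST API):

```ballerina
import ballerina/lang.runtime;

// Poll once per second while the run is still pending.
openai:RunObject runStatus = check openaiClient->/threads/[thread.id]/runs/[run.id]();
while runStatus.status == "queued" || runStatus.status == "in_progress" {
    runtime:sleep(1);
    runStatus = check openaiClient->/threads/[thread.id]/runs/[run.id]();
}
// Terminal statuses include "completed", "failed", "cancelled", and "expired".
```
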
Cancel run

Cancels an in-progress run.

Parameters:

| Name | Type | Required | Description |
|------|------|----------|-------------|
| threadId | string | Yes | The ID of the thread. |
| runId | string | Yes | The ID of the run to cancel. |

Returns: RunObject|error

Sample code:

openai:RunObject cancelledRun = check openaiClient->/threads/[thread.id]/runs/[run.id]/cancel.post();

Sample response:

{
  "id": "run_abc123",
  "object": "thread.run",
  "status": "cancelling",
  "assistant_id": "asst_abc123"
}

Files

List files

Returns a list of files that belong to the user's organization.

Parameters:

| Name | Type | Required | Description |
|------|------|----------|-------------|
| queries | ListFilesQueries | No | Query parameters including purpose, limit, order, and after. |

Returns: ListFilesResponse|error

Sample code:

openai:ListFilesResponse files = check openaiClient->/files();

Sample response:

{
  "object": "list",
  "data": [
    {"id": "file-abc123", "object": "file", "bytes": 12345, "filename": "data.jsonl", "purpose": "fine-tune"}
  ],
  "has_more": false
}

Upload file

Uploads a file that can be used across various endpoints such as fine-tuning, assistants, and batches.

Parameters:

| Name | Type | Required | Description |
|------|------|----------|-------------|
| request | CreateFileRequest | Yes | File upload request containing the file and its purpose. |

Returns: OpenAIFile|error

Sample code:

import ballerina/io;

openai:OpenAIFile file = check openaiClient->/files.post({
    file: {fileContent: check io:fileReadBytes("training-data.jsonl"), fileName: "training-data.jsonl"},
    purpose: "fine-tune"
});

Sample response:

{
  "id": "file-abc123",
  "object": "file",
  "bytes": 12345,
  "filename": "training-data.jsonl",
  "purpose": "fine-tune",
  "status": "processed"
}

Delete file

Deletes a file.

Parameters:

| Name | Type | Required | Description |
|------|------|----------|-------------|
| fileId | string | Yes | The ID of the file to delete. |

Returns: DeleteFileResponse|error

Sample code:

openai:DeleteFileResponse result = check openaiClient->/files/["file-abc123"].delete();

Sample response:

{"id": "file-abc123", "object": "file", "deleted": true}

Fine-tuning

Create fine-tuning job

Creates a fine-tuning job to customize a model with your training data.

Parameters:

| Name | Type | Required | Description |
|------|------|----------|-------------|
| request | CreateFineTuningJobRequest | Yes | Fine-tuning request containing model, training file ID, and hyperparameters. |

Returns: FineTuningJob|error

Sample code:

openai:FineTuningJob job = check openaiClient->/fine_tuning/jobs.post({
    model: "gpt-4o-mini-2024-07-18",
    training_file: "file-abc123"
});

Sample response:

{
  "id": "ftjob-abc123",
  "object": "fine_tuning.job",
  "model": "gpt-4o-mini-2024-07-18",
  "training_file": "file-abc123",
  "status": "validating_files",
  "created_at": 1710000000
}

List fine-tuning jobs

Lists your organization's fine-tuning jobs.

Parameters:

| Name | Type | Required | Description |
|------|------|----------|-------------|
| queries | ListPaginatedFineTuningJobsQueries | No | Query parameters including after and limit. |

Returns: ListPaginatedFineTuningJobsResponse|error

Sample code:

openai:ListPaginatedFineTuningJobsResponse jobs = check openaiClient->/fine_tuning/jobs();

Sample response:

{
  "object": "list",
  "data": [
    {"id": "ftjob-abc123", "object": "fine_tuning.job", "model": "gpt-4o-mini-2024-07-18", "status": "succeeded"}
  ],
  "has_more": false
}

Cancel fine-tuning job

Immediately cancels a fine-tuning job.

Parameters:

| Name | Type | Required | Description |
|------|------|----------|-------------|
| fineTuningJobId | string | Yes | The ID of the fine-tuning job to cancel. |

Returns: FineTuningJob|error

Sample code:

openai:FineTuningJob cancelledJob = check openaiClient->/fine_tuning/jobs/["ftjob-abc123"]/cancel.post();

Sample response:

{
  "id": "ftjob-abc123",
  "object": "fine_tuning.job",
  "status": "cancelled"
}

Vector stores

Create vector store

Creates a vector store for use with the file search tool in assistants.

Parameters:

| Name | Type | Required | Description |
|------|------|----------|-------------|
| request | CreateVectorStoreRequest | Yes | Vector store creation request containing name and optional file IDs. |

Returns: VectorStoreObject|error

Sample code:

openai:VectorStoreObject vectorStore = check openaiClient->/vector_stores.post({
    name: "Knowledge Base"
});

Sample response:

{
  "id": "vs_abc123",
  "object": "vector_store",
  "name": "Knowledge Base",
  "status": "completed",
  "file_counts": {"in_progress": 0, "completed": 0, "failed": 0, "cancelled": 0, "total": 0},
  "created_at": 1710000000
}

List vector stores

Returns a list of vector stores.

Parameters:

| Name | Type | Required | Description |
|------|------|----------|-------------|
| queries | ListVectorStoresQueries | No | Query parameters including limit, order, after, and before. |

Returns: ListVectorStoresResponse|error

Sample code:

openai:ListVectorStoresResponse stores = check openaiClient->/vector_stores();

Sample response:

{
  "object": "list",
  "data": [
    {"id": "vs_abc123", "object": "vector_store", "name": "Knowledge Base", "status": "completed"}
  ],
  "has_more": false
}

Delete vector store

Deletes a vector store.

Parameters:

| Name | Type | Required | Description |
|------|------|----------|-------------|
| vectorStoreId | string | Yes | The ID of the vector store to delete. |

Returns: DeleteVectorStoreResponse|error

Sample code:

openai:DeleteVectorStoreResponse result = check openaiClient->/vector_stores/["vs_abc123"].delete();

Sample response:

{"id": "vs_abc123", "object": "vector_store.deleted", "deleted": true}

Models

List models

Lists the currently available models and provides basic information about each.

Returns: ListModelsResponse|error

Sample code:

openai:ListModelsResponse models = check openaiClient->/models();

Sample response:

{
  "object": "list",
  "data": [
    {"id": "gpt-4o", "object": "model", "owned_by": "openai", "created": 1710000000},
    {"id": "dall-e-3", "object": "model", "owned_by": "openai", "created": 1710000000}
  ]
}

Retrieve model

Retrieves a model instance, providing basic information about the model.

Parameters:

| Name | Type | Required | Description |
|------|------|----------|-------------|
| model | string | Yes | The ID of the model to retrieve (e.g., "gpt-4o"). |

Returns: Model|error

Sample code:

openai:Model model = check openaiClient->/models/["gpt-4o"]();

Sample response:

{"id": "gpt-4o", "object": "model", "owned_by": "openai", "created": 1710000000}

Delete fine-tuned model

Deletes a fine-tuned model. You must have the Owner role in your organization.

Parameters:

| Name | Type | Required | Description |
|------|------|----------|-------------|
| model | string | Yes | The model ID to delete (e.g., "ft:gpt-4o-mini:my-org:custom:abc123"). |

Returns: DeleteModelResponse|error

Sample code:

openai:DeleteModelResponse result = check openaiClient->/models/["ft:gpt-4o-mini:my-org:custom:abc123"].delete();

Sample response:

{"id": "ft:gpt-4o-mini:my-org:custom:abc123", "object": "model", "deleted": true}

Moderations

Create moderation

Classifies if text and/or images are potentially harmful across several categories.

Parameters:

| Name | Type | Required | Description |
|------|------|----------|-------------|
| request | CreateModerationRequest | Yes | Moderation request containing the input text or images to classify. |

Returns: CreateModerationResponse|error

Sample code:

openai:CreateModerationResponse moderation = check openaiClient->/moderations.post({
    input: "I want to learn about safe coding practices."
});

Sample response:

{
  "id": "modr-abc123",
  "model": "omni-moderation-latest",
  "results": [
    {
      "flagged": false,
      "categories": {"sexual": false, "hate": false, "violence": false, "self-harm": false},
      "category_scores": {"sexual": 0.0001, "hate": 0.0002, "violence": 0.0001, "self-harm": 0.0001}
    }
  ]
}

Batches

Create batch

Creates and executes a batch of API requests from an uploaded file of requests.

Parameters:

| Name | Type | Required | Description |
|------|------|----------|-------------|
| request | CreateBatchRequest | Yes | Batch request containing the input file ID, endpoint, and completion window. |

Returns: Batch|error

Sample code:

openai:Batch batch = check openaiClient->/batches.post({
    input_file_id: "file-abc123",
    endpoint: "/v1/chat/completions",
    completion_window: "24h"
});

Sample response:

{
  "id": "batch_abc123",
  "object": "batch",
  "endpoint": "/v1/chat/completions",
  "input_file_id": "file-abc123",
  "status": "validating",
  "completion_window": "24h",
  "created_at": 1710000000
}

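
The input_file_id references a previously uploaded JSONL file containing one request object per line. A sketch of a single line (fields per the OpenAI Batch API):

```json
{"custom_id": "req-1", "method": "POST", "url": "/v1/chat/completions", "body": {"model": "gpt-4o", "messages": [{"role": "user", "content": "Hello"}]}}
```
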
Retrieve batch

Retrieves a batch to check its status and results.

Parameters:

| Name | Type | Required | Description |
|------|------|----------|-------------|
| batchId | string | Yes | The ID of the batch to retrieve. |

Returns: Batch|error

Sample code:

openai:Batch batchStatus = check openaiClient->/batches/["batch_abc123"]();

Sample response:

{
  "id": "batch_abc123",
  "object": "batch",
  "status": "completed",
  "output_file_id": "file-xyz789",
  "request_counts": {"total": 100, "completed": 100, "failed": 0}
}

Responses

Create response

Creates a model response. Provides a unified interface supporting text generation, tool use, and structured outputs.

Parameters:

| Name | Type | Required | Description |
|------|------|----------|-------------|
| request | CreateResponseRequest | Yes | Response request containing model, input, and optional instructions. |

Returns: Response|error

Sample code:

openai:Response response = check openaiClient->/responses.post({
    model: "gpt-4o",
    input: "Explain quantum computing in simple terms."
});

Sample response:

{
  "id": "resp_abc123",
  "object": "response",
  "model": "gpt-4o",
  "output": [
    {"type": "message", "role": "assistant", "content": [{"type": "output_text", "text": "Quantum computing uses quantum bits..."}]}
  ],
  "usage": {"input_tokens": 12, "output_tokens": 45, "total_tokens": 57}
}

Delete response

Deletes a model response with the given ID.

Parameters:

| Name | Type | Required | Description |
|------|------|----------|-------------|
| responseId | string | Yes | The ID of the response to delete. |

Returns: error?

Sample code:

check openaiClient->/responses/["resp_abc123"].delete();