Swap two environment variables and change the model name. That’s it.
export OPENAI_BASE_URL='https://api.telnyx.com/v2/ai/openai'
export OPENAI_API_KEY='KEY***'  # your Telnyx API key
Then use the standard OpenAI Python client as usual:
from openai import OpenAI

client = OpenAI()  # picks up env vars
chat_completion = client.chat.completions.create(
    model="moonshotai/Kimi-K2.5",
    messages=[{"role": "user", "content": "Tell me about Telnyx"}],
    temperature=0.0,
    stream=True,
)
Or pass the API key and base URL explicitly:
import os
from openai import OpenAI

client = OpenAI(
    api_key=os.getenv("TELNYX_API_KEY"),
    base_url="https://api.telnyx.com/v2/ai/openai",
)
chat_completion = client.chat.completions.create(
    model="moonshotai/Kimi-K2.5",
    messages=[{"role": "user", "content": "Tell me about Telnyx"}],
    temperature=0.0,
    stream=True,
)
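Because stream=True is set, chat_completion is an iterator of chunks rather than a single response, and the text arrives as content deltas. A minimal sketch of collecting those deltas, assuming the endpoint streams OpenAI-style chunks (the collect_stream helper and the stand-in chunk objects below are illustrative, not part of either SDK):

```python
from types import SimpleNamespace

def collect_stream(chunks):
    """Concatenate the content deltas of an OpenAI-style chat stream."""
    parts = []
    for chunk in chunks:
        delta = chunk.choices[0].delta
        if delta.content:  # role-only or tool-call chunks carry no text
            parts.append(delta.content)
    return "".join(parts)

# Stand-in chunks shaped like the SDK's ChatCompletionChunk objects,
# used here so the sketch runs without a network call:
fake_stream = [
    SimpleNamespace(choices=[SimpleNamespace(delta=SimpleNamespace(content=c))])
    for c in ("Telnyx ", "is ", "a ", "connectivity ", "platform.")
]
print(collect_stream(fake_stream))  # Telnyx is a connectivity platform.
```

With the real client, `collect_stream(chat_completion)` would drain the stream and return the full reply as one string.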

Chat Completions Compatibility

The compatibility matrix covers the following parameters. Most are standard OpenAI chat completion fields; guided_json, guided_regex, guided_choice, min_p, use_beam_search, best_of, length_penalty, and early_stopping are guided-decoding and sampling extensions that do not appear in OpenAI's chat completions API.

- messages
- model
- stream
- max_tokens
- temperature
- top_p
- frequency_penalty
- presence_penalty
- n
- stop
- logit_bias
- logprobs
- top_logprobs
- seed
- response_format
- tool_choice
- tools
- function
- retrieval
- guided_json
- guided_regex
- guided_choice
- min_p
- use_beam_search
- best_of
- length_penalty
- early_stopping
- user
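Extension parameters such as guided_choice are not part of the OpenAI SDK's typed method signature, so one way to send them is through the client's extra_body argument, which merges extra fields into the request JSON. A hedged sketch, where build_guided_request is a hypothetical helper and the guided_choice semantics (constraining the reply to one of the listed strings) are assumed from vLLM-style guided decoding:

```python
def build_guided_request(model, messages, choices):
    """Build chat.completions.create kwargs that route a vLLM-style
    guided decoding option through the OpenAI client's extra_body."""
    return {
        "model": model,
        "messages": messages,
        "temperature": 0.0,
        # guided_choice is an extension field unknown to the OpenAI
        # SDK's typed parameters, hence it rides in extra_body:
        "extra_body": {"guided_choice": choices},
    }

kwargs = build_guided_request(
    "moonshotai/Kimi-K2.5",
    [{"role": "user", "content": "Is Telnyx a telecom company? yes or no"}],
    ["yes", "no"],
)
print(kwargs["extra_body"])  # {'guided_choice': ['yes', 'no']}
```

Calling `client.chat.completions.create(**kwargs)` would then submit the extension field alongside the standard parameters.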

Transcriptions Compatibility

The compatibility matrix covers the following parameters:

- file
- model
- response_format
- timestamp_granularities[] (segment)
- timestamp_granularities[] (word)
- language
- prompt
- temperature
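The timestamp_granularities[] parameter accepts only the two values listed above. A sketch of assembling the keyword arguments for client.audio.transcriptions.create with that constraint enforced up front; transcription_params is a hypothetical helper, and the model name is a placeholder to be replaced with whichever transcription model the endpoint serves:

```python
VALID_GRANULARITIES = {"segment", "word"}

def transcription_params(path, granularities=("segment",), language=None):
    """Assemble kwargs for client.audio.transcriptions.create,
    rejecting granularity values outside the documented set."""
    bad = set(granularities) - VALID_GRANULARITIES
    if bad:
        raise ValueError(f"unsupported timestamp granularities: {sorted(bad)}")
    params = {
        "model": "whisper-1",  # placeholder: substitute the served model
        "response_format": "verbose_json",
        "timestamp_granularities": list(granularities),
        "file": path,  # the real call expects an open binary file handle
    }
    if language:
        params["language"] = language
    return params

p = transcription_params("speech.wav", granularities=("word",), language="en")
print(p["timestamp_granularities"])  # ['word']
```

In the OpenAI Python SDK the kwarg is spelled `timestamp_granularities`; the client serializes it to the `timestamp_granularities[]` form shown in the table.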