Skip to main content

Overview

We've worked to make migration from OpenAI to Telnyx as painless as possible.

You will only need to:

  1. Set the OPENAI_BASE_URL and OPENAI_API_KEY environment variables
  2. Use an open-source model supported by Telnyx

Set the environment variables...

export OPENAI_BASE_URL='https://api.telnyx.com/v2/ai'
export OPENAI_API_KEY='KEY***'

and this code will work!

from openai import OpenAI

client = OpenAI()
chat_completion = client.chat.completions.create(
# model="gpt-3.5-turbo",
model="meta-llama/Meta-Llama-3-70B-Instruct",
messages=[
{"role": "user", "content": "Tell me about Telnyx"}
],
temperature=0.0,
stream=True,
)

If you'd like to use different environment variables, you may also pass these fields in the client constructor

from openai import OpenAI

client = OpenAI(
api_key=os.getenv("TELNYX_API_KEY"),
base_url=os.getenv("TELNYX_BASE_URL")
)
chat_completion = client.chat.completions.create(
# model="gpt-3.5-turbo",
model="meta-llama/Meta-Llama-3-70B-Instruct",
messages=[
{"role": "user", "content": "Tell me about Telnyx"}
],
temperature=0.0,
stream=True,
)

Telnyx supports the vast majority of parameters for Chat and Audio (and a few helpful ones that OpenAI does not).

See the full compatibility matrix below.

Chat Completions

ParameterDescriptionTelnyxOpenAI
messagesProvides chat context
modelAdjusts speed + quality
streamStreams response
max_tokensLimits output length
temperatureAdjusts predictability
top_pAdjusts variety
frequency_penaltyDecreases repetition
presence_penaltyDecreases repetition...
nReturns n responses
stopForces model to stop
logit_biasTweaks odds of results
logprobsReturns odds of outputs
top_logprobs-> For how many candidates?
seedReduces randomness
response_formatEnsures syntax (e.g. JSON)
guided_jsonEnsures output conforms to schema
guided_regexEnsures output conforms to regex
guided_choiceEnsures output conforms to choice
min_ptop_p alternative
use_beam_searchExplores more options
best_of-> How many options?
length_penalty-> Are long options bad?
early_stopping-> How hard should it try?
toolsHelps model respond
functions-> Outputs JSON for your code🔜
retrieval-> Uses your docs (e.g. PDFs)
tool_choiceHow does model choose?🔜
userTracks users

Transcriptions (BETA)

ParameterWhat does this do?TelnyxOpenAI
fileProvides audio data
modelAdjusts speed + quality
response_formatAdjusts output format
timestamp_granularities[]Adds timestamps
-> segment-> per audio segment
-> word-> per word
languageImproves accuracy
promptGuides style
temperatureAdjusts "creativity"