Chat with a language model. This endpoint is consistent with the OpenAI Chat Completions API and may be used with the OpenAI JS or Python SDK.

import Telnyx from 'telnyx';

const client = new Telnyx({
  apiKey: 'My API Key',
});

const response = await client.ai.chat.createCompletion({
  messages: [
    { role: 'system', content: 'You are a friendly chatbot.' },
    { role: 'user', content: 'Hello, world!' },
  ],
});

console.log(response);

Validation error response body:

{
  "detail": [
    {
      "loc": ["<string>"],
      "msg": "<string>",
      "type": "<string>"
    }
  ]
}
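Because the endpoint is OpenAI-compatible, you can also point the OpenAI JS SDK at it. A minimal sketch; the base URL and the model name shown are assumptions for illustration, so check your Telnyx dashboard for the values to use:

import OpenAI from 'openai';

// Assumed base URL for Telnyx's OpenAI-compatible endpoint.
const openaiClient = new OpenAI({
  apiKey: process.env.TELNYX_API_KEY,
  baseURL: 'https://api.telnyx.com/v2/ai',
});

const completion = await openaiClient.chat.completions.create({
  model: 'meta-llama/Meta-Llama-3.1-8B-Instruct', // illustrative model name
  messages: [
    { role: 'system', content: 'You are a friendly chatbot.' },
    { role: 'user', content: 'Hello, world!' },
  ],
});

console.log(completion.choices[0].message.content);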
Bearer authentication header of the form Bearer <token>, where <token> is your auth token.
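If you call the HTTP API directly rather than through an SDK, the header looks like this. A sketch using fetch; the endpoint URL is an assumption inferred from the SDK method above, not confirmed by this page:

// Hypothetical direct call; the URL path is assumed for illustration.
const res = await fetch('https://api.telnyx.com/v2/ai/chat/completions', {
  method: 'POST',
  headers: {
    Authorization: `Bearer ${process.env.TELNYX_API_KEY}`, // Bearer <token>
    'Content-Type': 'application/json',
  },
  body: JSON.stringify({
    messages: [{ role: 'user', content: 'Hello, world!' }],
  }),
});

console.log(await res.json());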
The language model to chat with.
If you are using an external inference provider like xAI or OpenAI, this field allows you to pass along a reference to your API key. After creating an integration secret for your API key, pass the secret's identifier in this field.
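For example, a request routed to an external provider through a stored secret might look like this sketch; the field name api_key_ref, the model string, and the secret identifier are all assumptions for illustration:

// Sketch: route to an external provider using a stored integration secret.
// `api_key_ref` and the model string are assumed names, not confirmed here.
const externalResponse = await client.ai.chat.createCompletion({
  model: 'openai/gpt-4o',
  api_key_ref: 'my_openai_secret_identifier',
  messages: [{ role: 'user', content: 'Hello, world!' }],
});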
Whether or not to stream data-only server-sent events as they become available.
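A minimal streaming sketch, assuming the SDK yields the server-sent events as an async iterable when stream is true, mirroring the OpenAI SDK's behavior; verify against the SDK version you are using:

// Assumed streaming interface: async-iterable chunks, as in the OpenAI SDK.
const stream = await client.ai.chat.createCompletion({
  messages: [{ role: 'user', content: 'Tell me a story.' }],
  stream: true,
});

for await (const chunk of stream) {
  process.stdout.write(chunk.choices?.[0]?.delta?.content ?? '');
}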
Adjusts the "creativity" of the model. Lower values make the model more deterministic and repetitive, while higher values make the model more random and creative.
Maximum number of completion tokens the model should generate.
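For instance, a low temperature combined with a token cap yields short, predictable answers. The names temperature and max_tokens follow the OpenAI convention and should be confirmed against the request schema:

// Deterministic, tightly bounded completion.
// `temperature` and `max_tokens` are OpenAI-style names, assumed here.
const boundedResponse = await client.ai.chat.createCompletion({
  messages: [{ role: 'user', content: 'Summarize HTTP in one sentence.' }],
  temperature: 0.1, // near-deterministic output
  max_tokens: 60,   // cap on generated completion tokens
});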
The function tool type follows the same schema as the OpenAI Chat Completions API. The retrieval tool type is unique to Telnyx. You may pass a list of embedded storage buckets for retrieval-augmented generation.
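A retrieval tool might be passed as in the sketch below; the exact shape of the retrieval object (shown here as a bucket_ids list) is an assumption, so check the request schema:

// Sketch of retrieval-augmented generation over embedded storage buckets.
// The shape of the `retrieval` object is assumed for illustration.
const ragResponse = await client.ai.chat.createCompletion({
  messages: [
    { role: 'user', content: 'What does our onboarding doc say about SSO?' },
  ],
  tools: [
    {
      type: 'retrieval',
      retrieval: { bucket_ids: ['my-embedded-bucket'] },
    },
  ],
});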
Available options: none, auto, required.
Must be a valid JSON schema. If specified, the output will follow the JSON schema.
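A sketch of JSON-schema-guided output; the parameter is shown as guided_json, an assumed name since this page does not display field names:

// Constrain the completion to match a JSON schema.
// The parameter name `guided_json` is assumed for illustration.
const schemaResponse = await client.ai.chat.createCompletion({
  messages: [
    { role: 'user', content: 'Extract the city and country from: "I live in Lyon, France."' },
  ],
  guided_json: {
    type: 'object',
    properties: {
      city: { type: 'string' },
      country: { type: 'string' },
    },
    required: ['city', 'country'],
  },
});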
If specified, the output will follow the regex pattern.
If specified, the output will be exactly one of the choices.
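Regex- and choice-guided outputs might look like the following; guided_regex and guided_choice are assumed field names:

// Constrain output to a pattern, e.g. a US ZIP code.
await client.ai.chat.createCompletion({
  messages: [{ role: 'user', content: 'What is the ZIP code for the White House?' }],
  guided_regex: '\\d{5}(-\\d{4})?', // assumed field name
});

// Constrain output to exactly one of the listed choices.
await client.ai.chat.createCompletion({
  messages: [{ role: 'user', content: 'Is this review positive or negative? "Great product!"' }],
  guided_choice: ['positive', 'negative'], // assumed field name
});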
This is an alternative to top_p that many prefer. Must be in [0, 1].
This will return multiple choices for you instead of a single chat completion.
Setting this to true will allow the model to explore more completion options. This is not supported by OpenAI.
This is used with use_beam_search to determine how many candidate beams to explore.
This is used with use_beam_search to prefer shorter or longer completions.
This is used with use_beam_search. If true, generation stops as soon as there are best_of complete candidates; if false, a heuristic is applied and generation stops when it is very unlikely that better candidates will be found.
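Putting the beam-search knobs together: use_beam_search and best_of are named above, while length_penalty and early_stopping are assumed names for the other two companion parameters, following the vLLM convention:

// Beam-search sketch; `length_penalty` and `early_stopping` are assumed
// names for the two companion parameters described above.
const beamResponse = await client.ai.chat.createCompletion({
  messages: [{ role: 'user', content: 'Write a haiku about the sea.' }],
  use_beam_search: true,
  best_of: 4,           // number of candidate beams to explore
  length_penalty: 1.0,  // >1 prefers longer completions, <1 shorter
  early_stopping: true, // stop once best_of candidates are complete
});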
Whether to return log probabilities of the output tokens or not. If true, returns the log probabilities of each output token returned in the content of message.
This is used with logprobs. An integer between 0 and 20 specifying the number of most likely tokens to return at each token position, each with an associated log probability.
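For example, to inspect per-token probabilities (assuming the response shape mirrors the OpenAI API):

// Request per-token log probabilities along with the top alternatives.
const logprobsResponse = await client.ai.chat.createCompletion({
  messages: [{ role: 'user', content: 'Hello, world!' }],
  logprobs: true,
  top_logprobs: 5, // top 5 alternatives per token position (0-20 allowed)
});

// Response shape assumed to follow the OpenAI Chat Completions format.
console.log(logprobsResponse.choices?.[0]?.logprobs);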
Higher values penalize the model for repeating the same output tokens.
Higher values penalize the model for repeating the same output tokens.
An alternative or complement to temperature. This adjusts how many of the top possibilities to consider.
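A combined sampling sketch covering the options above; n, top_p, and min_p are assumed field names following the OpenAI and vLLM conventions:

// Sample three alternative completions with nucleus and min-p filtering.
// `n`, `top_p`, and `min_p` are assumed names for illustration.
const sampledResponse = await client.ai.chat.createCompletion({
  messages: [{ role: 'user', content: 'Suggest a team name.' }],
  n: 3,        // return three choices instead of one
  top_p: 0.9,  // consider only the top 90% of probability mass
  min_p: 0.05, // drop tokens below 5% of the max token probability
});

for (const choice of sampledResponse.choices ?? []) {
  console.log(choice.message?.content);
}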
Successful Response