Using a Custom LLM
In addition to standard third-party LLM providers like OpenAI, Gemini, and Groq, you can also power your AI Assistant with any public OpenAI-compatible chat completions endpoint. This includes models hosted on AWS Bedrock, Azure OpenAI, or Baseten, as well as models served by open-source inference engines like vLLM and SGLang.
Deploying your public inference endpoint
Start by ensuring you have a publicly accessible OpenAI-compatible chat completions endpoint.
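A quick way to confirm an endpoint is OpenAI-compatible is to call it with the openai Python SDK using a custom base URL. This is a minimal sketch; the base URL, API key, and model name below are placeholders for your own deployment's values.

```python
from openai import OpenAI

# Placeholder values; substitute your endpoint's base URL, API key, and model name.
client = OpenAI(
    base_url="https://your-endpoint.example.com/v1",
    api_key="YOUR_API_KEY",
)

response = client.chat.completions.create(
    model="your-model-name",
    messages=[{"role": "user", "content": "Say hello in one short sentence."}],
)
print(response.choices[0].message.content)
```

If this call returns a normal chat completion, the endpoint should work as a Custom LLM.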
Azure
For this guide, we will deploy gpt-4o on Azure AI Foundry.
First, create or select a resource.

Then, drill into the resource to find the API key and Azure OpenAI endpoint.

Create or edit a Telnyx AI Assistant and, in the Agent tab, check Use Custom LLM.
Enter the Azure OpenAI endpoint as the Base URL, appending /openai/v1, and create a new Integration Secret with your API key.
The Model Name dropdown lists all possible Azure models, but only the ones you have actually deployed will validate the LLM connection.
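If the connection fails to validate, you can test the same Base URL and API key outside the portal. This is a hedged sketch using the openai Python SDK; my-resource is a placeholder for your Azure resource name, and gpt-4o must match a deployment you created.

```python
from openai import OpenAI

# Call the Azure endpoint the same way Telnyx will:
# the Azure OpenAI endpoint with /openai/v1 appended, authenticated with the resource's API key.
client = OpenAI(
    base_url="https://my-resource.openai.azure.com/openai/v1",  # placeholder resource name
    api_key="YOUR_AZURE_OPENAI_API_KEY",
)

response = client.chat.completions.create(
    model="gpt-4o",  # must be a model you have deployed on this resource
    messages=[{"role": "user", "content": "Reply with OK if you can read this."}],
)
print(response.choices[0].message.content)
```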

Once you save your assistant, you can use it immediately in the Telnyx portal via the Test Assistant dropdown.
Baseten
For this guide, we will deploy Llama 3.3 70B on Baseten.
First, click Deploy Now.

Navigate to the deployment.

Click the API Endpoint button to see the endpoint and generate an API Key. Save these details.

After about 15 minutes, the deployment should be live.
Once the deployment is live, create or edit a Telnyx AI Assistant and, in the Agent tab, check Use Custom LLM.
Enter the Baseten endpoint URL for your deployment as the Base URL and create a new Integration Secret with your Baseten API key.

If your base URL supports an OpenAI-compatible /models endpoint, the Model Name dropdown will populate automatically.
Baseten deployments do not support this endpoint, so you can enter any name for your model here. You can also validate that the connection is live before saving your assistant.
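Because the portal cannot list Baseten models automatically, you may also want to confirm the deployment responds before saving. This is a minimal sketch with the openai Python SDK; the base URL below is an illustrative placeholder and should be replaced with the endpoint shown by Baseten's API Endpoint button, and the model name is arbitrary as noted above.

```python
from openai import OpenAI

# Placeholder base URL; use the endpoint from your Baseten deployment's API Endpoint button.
client = OpenAI(
    base_url="https://model-xxxxxxxx.api.baseten.co/environments/production/sync/v1",
    api_key="YOUR_BASETEN_API_KEY",
)

response = client.chat.completions.create(
    model="llama-3.3-70b",  # illustrative; the deployment is selected by the URL, not the model name
    messages=[{"role": "user", "content": "Give me a one-line greeting."}],
)
print(response.choices[0].message.content)
```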

Once you save your assistant, you can use it immediately in the Telnyx portal via the Test Assistant dropdown, and you can review metrics in your Baseten deployment.
