📄️ Function Calling
In this tutorial, you'll learn how to connect large language models to external tools using our chat completions API.
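To illustrate what connecting a model to an external tool looks like, here is a minimal sketch of a tool definition in the OpenAI-style chat completions format. The `get_weather` function, its parameters, and the model name are hypothetical illustrations, not part of the tutorial itself:

```python
# Hypothetical tool definition in the OpenAI-compatible format:
# a name, a description, and a JSON Schema for the parameters.
get_weather_tool = {
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Return the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {
                "city": {
                    "type": "string",
                    "description": "City name, e.g. 'Chicago'",
                },
            },
            "required": ["city"],
        },
    },
}

# The tool list is passed alongside the messages in the
# chat completions request body.
request_body = {
    "model": "example-model",  # placeholder model name
    "messages": [
        {"role": "user", "content": "What's the weather in Chicago?"}
    ],
    "tools": [get_weather_tool],
}
print(request_body["tools"][0]["function"]["name"])
```

The model never runs the function itself; it returns the name and arguments of the tool it wants called, and your code executes it and sends the result back.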
📄️ Streaming and Parallel Calls
In the previous tutorial, we learned the basics of defining and executing functions using our chat completions API.
📄️ Embeddings
In this tutorial, you'll learn how to work with embeddings.
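Embeddings represent text as numeric vectors, so "similar meaning" becomes "nearby vectors." A common way to compare two embeddings is cosine similarity; here is a self-contained sketch using toy vectors (real embeddings have hundreds or thousands of dimensions):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy 3-dimensional "embeddings" for two similar sentences.
v1 = [0.1, 0.9, 0.2]
v2 = [0.1, 0.8, 0.3]
print(round(cosine_similarity(v1, v2), 3))
```

Values close to 1.0 mean the vectors point in nearly the same direction, which is the basis for semantic search and retrieval.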
📄️ Dynamic Variables
Dynamic variables let you configure a template for your agent's behavior. You can re-use the same general instructions while dynamically personalizing every conversation your agent has.
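The idea can be sketched as one instruction template rendered with per-conversation values. The `{{variable}}` placeholder syntax and the variable names below are illustrative assumptions, not the exact Telnyx template format:

```python
# Illustrative sketch only: one reusable instruction template,
# personalized per conversation. The {{key}} syntax and variable
# names are assumptions, not the exact Telnyx format.
template = (
    "You are a support agent for {{company}}. "
    "The caller's name is {{caller_name}}."
)

def render(template, variables):
    """Substitute {{key}} placeholders with per-conversation values."""
    for key, value in variables.items():
        template = template.replace("{{" + key + "}}", value)
    return template

print(render(template, {"company": "Acme Corp", "caller_name": "Dana"}))
```

The template is written once; each call or text exchange supplies its own variable values.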
📄️ Importing Assistants
If you have voice assistants with another provider, you can import them to Telnyx in the portal or via API.
📄️ Integrations
Connect your Telnyx AI assistants with enterprise platforms like Salesforce, ServiceNow, Jira, and HubSpot to automate workflows and enhance customer experiences.
📄️ Workflow
Manage AI assistant workflows with visual flowcharts. Configure tools, design conversation flows, and optimize call routing for your voice AI assistant.
📄️ Memory
Memory enables your AI assistant to recall essential details from past conversations. Instead of starting each phone call or text exchange from scratch, your AI assistant naturally continues previous discussions.
🗃️ AI Insights
4 items
📄️ Agent Handoff
Enable seamless AI-to-AI handoffs with specialized assistants working together in a single conversation, providing expert-level support across multiple domains.
📄️ Testing, Versions & Traffic Distribution
This guide walks you through testing your AI assistant before production deployment and managing live traffic distribution between different versions. You'll learn how to create tests, iterate on your assistant, and safely roll out changes using A/B testing.
📄️ Custom LLMs for Assistants
In addition to standard third-party LLM providers like OpenAI, Gemini, and Groq, you can also power your AI Assistant with any public OpenAI-compatible chat completions endpoint. This includes models hosted on AWS Bedrock, Azure OpenAI, and Baseten, or open-source inference engines like vLLM and SGLang.
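In practice, "OpenAI-compatible" means the endpoint accepts a POST to `/chat/completions` with the standard request shape. Here is a sketch of building such a request with the Python standard library; the base URL, API key, and model name are placeholders for your own deployment:

```python
import json
import urllib.request

# Placeholders: substitute your own endpoint, key, and model name.
BASE_URL = "https://example.com/v1"
API_KEY = "sk-placeholder"

body = {
    "model": "my-model",
    "messages": [{"role": "user", "content": "Hello!"}],
}

request = urllib.request.Request(
    f"{BASE_URL}/chat/completions",
    data=json.dumps(body).encode("utf-8"),
    headers={
        "Content-Type": "application/json",
        "Authorization": f"Bearer {API_KEY}",
    },
    method="POST",
)
# urllib.request.urlopen(request) would send it; skipped here
# because the endpoint above is a placeholder.
print(request.full_url)
```

Any server that accepts this request shape and returns OpenAI-style responses, whether a cloud provider or a self-hosted vLLM or SGLang instance, can back your assistant.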