Getting started with Telnyx Inference API
Introduction
Welcome to the Telnyx Inference API! This guide will walk you through the basics of chatting with open-source language models running on Telnyx GPUs.
Prerequisites
- Sign up for a free Telnyx account
- Create an API Key
- [Optional] OpenAI SDK
- Our inference API is OpenAI-compatible
- Try using one of their SDKs (pip install openai or npm install openai)
Python Example
Let's create your first chat completion. Here's a simple Python script that streams a response from a language model:
import os

from openai import OpenAI

client = OpenAI(
    api_key=os.getenv("TELNYX_API_KEY"),
    base_url="https://api.telnyx.com/v2/ai",
)

chat_completion = client.chat.completions.create(
    messages=[
        {
            "role": "user",
            "content": "Can you explain 10DLC to me?"
        }
    ],
    model="meta-llama/Meta-Llama-3-70B-Instruct",
    stream=True,
)

for chunk in chat_completion:
    if chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="", flush=True)
Note: Make sure you have set the TELNYX_API_KEY environment variable before running the script.
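The messages parameter can carry a full conversation history rather than a single prompt. Here is a minimal sketch of a multi-turn history; the system prompt and reply text are illustrative placeholders, not captured API output:

```python
# A multi-turn conversation history, as you would pass it in `messages`.
# All content strings here are illustrative placeholders.
messages = [
    # A system message, sent once at the start, steers the whole chat.
    {"role": "system", "content": "You are a concise telephony expert."},
    # A user message is the end user's input.
    {"role": "user", "content": "Can you explain 10DLC to me?"},
    # An assistant message is a prior model reply, kept so the model has context.
    {"role": "assistant", "content": "10DLC stands for 10-digit long code..."},
    # Follow-up questions simply append further user messages.
    {"role": "user", "content": "How do I register a campaign?"},
]

# Each entry needs both a role and content.
assert all({"role", "content"} <= entry.keys() for entry in messages)
```

Because the API is stateless, you resend this list (with the latest assistant reply appended) on every request.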
Core Concepts
- Messages: The full history of messages in a chat; the API is stateless, so you send this history with every request
- Roles: Every message has a role: system, user, assistant, or tool.
- System messages are usually sent once at the start of a chat, and influence the entire chat
- User messages refer to what the end user has input
- Assistant messages refer to what the model has output
- Tool messages refer to results of any tool calls
- Models: In the context of chat completions, we are talking about large language models (LLMs). Your choice of LLM will affect the quality, speed, and price of your chat completions.
- If you are optimizing for price, try meta-llama/Meta-Llama-3-8B-Instruct
- For quality, try meta-llama/Meta-Llama-3-70B-Instruct
- Or explore our full list of supported models
- Streaming: For real-time interactions, you will want the ability to stream partial responses back to a client as they are completed. To achieve this, we follow the same Server-sent events standard as OpenAI.
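Under the Server-sent events standard, each streamed chunk arrives as a `data:` line carrying a JSON payload, followed by a final `data: [DONE]` sentinel. The OpenAI SDK handles this for you, but here is a minimal sketch of parsing raw SSE lines yourself; the payloads below are simplified examples, not captured API output:

```python
import json

def parse_sse_content(lines):
    """Extract the text deltas from OpenAI-style server-sent event lines."""
    parts = []
    for line in lines:
        if not line.startswith("data: "):
            continue  # skip blank keep-alive lines and SSE comments
        payload = line[len("data: "):]
        if payload == "[DONE]":
            break  # sentinel marking the end of the stream
        chunk = json.loads(payload)
        delta = chunk["choices"][0]["delta"]
        if delta.get("content"):
            parts.append(delta["content"])
    return "".join(parts)

# Simplified example of what a streamed response body looks like on the wire.
raw = [
    'data: {"choices": [{"delta": {"role": "assistant"}}]}',
    'data: {"choices": [{"delta": {"content": "Hello"}}]}',
    'data: {"choices": [{"delta": {"content": ", world"}}]}',
    "data: [DONE]",
]

print(parse_sse_content(raw))  # Hello, world
```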
Next Steps
- Dive into our tutorials
- Explore our full API reference
- Review our OpenAI Compatibility Matrix
- Check out our pricing page
Feedback
Have questions or need help troubleshooting? Our support team is here to assist you. Join our Slack community to connect with other developers and the Telnyx team.