Getting started with Telnyx Inference API

Introduction

Welcome to the Telnyx Inference API. This guide will walk you through the basics of chatting with open-source language models running on Telnyx GPUs.

Prerequisites

Before you begin, you'll need:

  • Telnyx account: Ensure you have an active account. Sign up for one if you haven't already.
  • API key: You'll need an API key for authentication. You can obtain one from the Telnyx Mission Control Portal.
  • [Optional] OpenAI SDK: Our inference API is OpenAI-compatible. While not mandatory, using one of their SDKs (available for Python and Node) will simplify the process.

Core Concepts

  • Messages: In the context of chat completions, this refers to the history of messages in a chat.
  • Roles: Every message has a role, which can be system, user, or assistant.
    • System messages are usually sent once, at the start of a chat, and influence the model's behavior for the entire chat.
    • User messages contain what the end user has input.
    • Assistant messages contain what the model has output.
  • Models: In the context of chat completions, we are talking about large language models (LLMs). Your choice of LLM will have an impact on the quality, speed, and price of your chat completions.
  • Streaming: For real-time interactions, you will want the ability to stream partial responses back to a client as they are completed. To achieve this, we follow the same Server-sent events standard as OpenAI.
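The roles above can be illustrated with a short, hypothetical message history (the content strings are invented for illustration, not real API output):

import os

# A hypothetical chat history showing the three roles. The system message
# sets behavior for the whole chat; user and assistant messages alternate.
messages = [
    {"role": "system", "content": "You are a concise telecom assistant."},
    {"role": "user", "content": "What is SMS?"},
    {"role": "assistant", "content": "SMS (Short Message Service) is a text-messaging standard."},
    {"role": "user", "content": "How long can one message be?"},
]

# The order of roles in the history:
print([m["role"] for m in messages])

On each request, you send the full history; the model's reply becomes the next assistant message you append before the following turn.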

Example

Let's complete your first chat. Here's some simple Python to interact with a language model:

import os
from openai import OpenAI

client = OpenAI(
    api_key=os.getenv("TELNYX_API_KEY"),
    base_url="https://api.telnyx.com/v2/ai",
)

chat_completion = client.chat.completions.create(
    messages=[{
        "role": "user",
        "content": "Can you explain 10DLC to me?"
    }],
    model="mistralai/Mistral-7B-Instruct-v0.1",
    stream=True,
)

for chunk in chat_completion:
    if chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="", flush=True)

Note: Make sure you have set the TELNYX_API_KEY environment variable before running this example.
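As a minimal sketch of what the streaming loop is doing (the delta strings below are simulated, not real API output): each server-sent event carries a small "delta" of the reply, and concatenating the non-empty deltas reconstructs the full assistant message.

# Simulated delta payloads, standing in for the chunk.choices[0].delta.content
# values the SDK yields; the final None mimics the terminating chunk.
deltas = ["10DLC is ", "a system for ", "A2P messaging ", "in the US.", None]

# Skip empty/None deltas and join the rest, just as the loop above prints them.
reply = "".join(d for d in deltas if d)
print(reply)

This is why the example checks chunk.choices[0].delta.content before printing: some chunks (such as the final one) carry no content.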

Next Steps

  • Explore tutorials: Once you've completed your first chat, dive deeper into our tutorials to explore all the inference capabilities Telnyx offers.
  • API reference: For a complete list of API endpoints and their functionalities, check out our API reference.
  • OpenAI Compatibility Matrix: For a thorough breakdown of how our APIs compare to OpenAI's, review our OpenAI Compatibility Matrix.

Feedback

Have questions or need help troubleshooting? Our support team is here to assist you. Join our Slack community to connect with other developers and the Telnyx team.
