Documentation Index
Fetch the complete documentation index at: https://developers.telnyx.com/llms.txt
Use this file to discover all available pages before exploring further.
Prerequisites
A Telnyx API key (used as API_KEY in every example below). That’s it. No other setup required.
WebSocket Streaming
Real-time transcription over a persistent connection. Send audio, get partial and final transcripts back as they happen.

Install:
pip install "websockets>=14"
main.py:

import asyncio
import json
import urllib.request

import websockets

API_KEY = "YOUR_TELNYX_API_KEY"
STREAM_URL = "https://kexp-mp3-128.streamguys1.com/kexp128.mp3"

async def transcribe():
    url = (
        "wss://api.telnyx.com/v2/speech-to-text/transcription"
        "?transcription_engine=Deepgram"
        "&model=nova-3"
        "&input_format=mp3"
        "&interim_results=true"
    )
    headers = {"Authorization": f"Bearer {API_KEY}"}

    async with websockets.connect(
        url, additional_headers=headers
    ) as ws:
        # Listen for transcripts
        async def listen():
            async for message in ws:
                data = json.loads(message)
                transcript = data.get("transcript", "")
                if not transcript:
                    continue
                prefix = "FINAL" if data.get("is_final") else "partial"
                print(f"[{prefix}] {transcript}")

        listener = asyncio.create_task(listen())

        # Stream audio from KEXP Radio
        req = urllib.request.urlopen(STREAM_URL)
        try:
            while chunk := req.read(4096):
                await ws.send(chunk)
                await asyncio.sleep(0.05)
        except KeyboardInterrupt:
            pass

        await ws.send(json.dumps({"type": "CloseStream"}))
        listener.cancel()

asyncio.run(transcribe())
Run it:
python main.py

Install:
npm install ws

index.js:

const WebSocket = require("ws");
const https = require("https");

const API_KEY = "YOUR_TELNYX_API_KEY";
const STREAM_URL = "https://kexp-mp3-128.streamguys1.com/kexp128.mp3";

const url = new URL("wss://api.telnyx.com/v2/speech-to-text/transcription");
url.searchParams.set("transcription_engine", "Deepgram");
url.searchParams.set("model", "nova-3");
url.searchParams.set("input_format", "mp3");
url.searchParams.set("interim_results", "true");

const ws = new WebSocket(url.toString(), {
  headers: { Authorization: `Bearer ${API_KEY}` },
});

ws.on("open", () => {
  console.log("Connected. Streaming KEXP Radio...\n");

  https.get(STREAM_URL, (stream) => {
    stream.on("data", (chunk) => {
      if (ws.readyState === WebSocket.OPEN) {
        ws.send(chunk);
      }
    });
  });
});

ws.on("message", (data) => {
  const msg = JSON.parse(data);
  const transcript = msg.transcript || "";
  if (!transcript) return;
  const prefix = msg.is_final ? "FINAL" : "partial";
  console.log(`[${prefix}] ${transcript}`);
});

ws.on("error", (err) => console.error("Error:", err.message));
Run it:
node index.js

Example output
Connected. Streaming KEXP Radio...
[partial] the latest news from
[partial] the latest news from the BBC
[FINAL] The latest news from the KEXP Radio.
[partial] tensions continue
[partial] tensions continue to rise in the
[FINAL] Tensions continue to rise in the region as diplomatic talks stall.
File Upload
Upload an audio file and get the full transcript back. The endpoint is OpenAI SDK compatible — swap base_url and api_key and your existing code works.

Install:
pip install openai

main.py:

from openai import OpenAI
client = OpenAI(
    api_key="YOUR_TELNYX_API_KEY",
    base_url="https://api.telnyx.com/v2",
)

result = client.audio.transcriptions.create(
    model="openai/whisper-large-v3-turbo",
    file=open("audio.mp3", "rb"),
)
print(result.text)
Run it:
python main.py

Install:
npm install openai

index.js:

const OpenAI = require("openai");
const fs = require("fs");

const client = new OpenAI({
  apiKey: "YOUR_TELNYX_API_KEY",
  baseURL: "https://api.telnyx.com/v2",
});

(async () => {
  const result = await client.audio.transcriptions.create({
    model: "openai/whisper-large-v3-turbo",
    file: fs.createReadStream("audio.mp3"),
  });
  console.log(result.text);
})();
Run it:
node index.js

curl -X POST https://api.telnyx.com/v2/ai/audio/transcriptions \
-H "Authorization: Bearer YOUR_TELNYX_API_KEY" \
-F model="openai/whisper-large-v3-turbo" \
-F file=@audio.mp3
Or transcribe from a URL (no file upload needed):

curl -X POST https://api.telnyx.com/v2/ai/audio/transcriptions \
-H "Authorization: Bearer YOUR_TELNYX_API_KEY" \
-F model="openai/whisper-large-v3-turbo" \
-F file_url="https://example.com/audio.mp3"
Example response
{
  "text": "The latest news from the KEXP Radio. Tensions continue to rise in the region as diplomatic talks stall."
}
For segment- or word-level timestamps, use model="deepgram/nova-3" with response_format=verbose_json. The Whisper models (openai/whisper-large-v3-turbo, openai/whisper-tiny) return text only.
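As a sketch of what a verbose_json response gives you: in the OpenAI-compatible shape, the payload adds a segments list whose entries carry start and end offsets in seconds alongside the text. The payload below is illustrative (hand-written to mirror that shape, not a captured API response), and the exact fields Telnyx returns for deepgram/nova-3 may differ:

```python
# Illustrative verbose_json-style payload (hand-written example, not a
# captured API response); segment fields follow the OpenAI-compatible shape.
sample = {
    "text": "Tensions continue to rise in the region.",
    "segments": [
        {"start": 0.0, "end": 2.1, "text": "Tensions continue"},
        {"start": 2.1, "end": 4.8, "text": " to rise in the region."},
    ],
}

def format_segments(response: dict) -> list[str]:
    # One "start - end  text" line per segment, offsets in seconds.
    return [
        f"{seg['start']:5.2f}s - {seg['end']:5.2f}s  {seg['text'].strip()}"
        for seg in response.get("segments", [])
    ]

for line in format_segments(sample):
    print(line)
```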
What’s next