Observability
Observability gives you full visibility into your AI assistant’s behavior. By connecting your assistant to Langfuse, you can trace every LLM call, tool execution, and conversation turn — including input messages, output responses, token usage, latency, and cost.
In this tutorial, you will learn how to:
- Connect your AI assistant to Langfuse for LLM observability
- Store your Langfuse credentials securely as integration secrets
- View traces, generations, and tool calls in the Langfuse dashboard
- Understand how conversations are grouped in traces
Overview
When observability is enabled on an assistant, every interaction is automatically traced and sent to your Langfuse project. This includes:
| What is traced | Where it happens | Details captured |
|---|---|---|
| LLM generations | AI Conversations | Input messages, output response, model, token usage |
| Tool calls | AI Assistants | Tool name, input arguments, output result |
Traces are grouped by conversation using a deterministic trace ID derived from the conversation_id. This means all LLM calls and tool executions within the same conversation appear together in your Langfuse dashboard.
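To illustrate the deterministic-grouping idea, here is a minimal Python sketch. The namespace and the uuid5 scheme are assumptions for illustration only, not Telnyx's actual derivation — the point is that the same `conversation_id` always maps to the same trace ID:

```python
import uuid

# Hypothetical namespace -- the real derivation used by Telnyx is not
# documented here; this sketch only demonstrates the deterministic property.
TRACE_NAMESPACE = uuid.UUID("12345678-1234-5678-1234-567812345678")

def trace_id_for_conversation(conversation_id: str) -> str:
    """Derive a stable trace ID from a conversation_id (illustrative only)."""
    return str(uuid.uuid5(TRACE_NAMESPACE, conversation_id))
```

Because the mapping is a pure function of `conversation_id`, every LLM call and tool execution in a conversation lands under the same trace, no matter when it is reported.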
Key benefits
- Debugging: Inspect the exact messages sent to the LLM and the responses received.
- Cost tracking: Monitor token usage per conversation, assistant, or model.
- Quality evaluation: Review LLM outputs and tool call results to identify issues.
- Latency analysis: Measure response times for LLM calls and tool executions.
- Multi-tenant support: Each assistant can connect to a different Langfuse project with its own credentials.
Requirements
Before you begin, you will need:
- A Langfuse account (cloud or self-hosted)
- A Langfuse project with a public key and secret key
- A Telnyx AI Assistant
Configuration
Step 1: Create your Langfuse credentials
Log in to your Langfuse dashboard and navigate to Settings > API Keys. Create a new API key pair. You will need:
| Credential | Description | Example |
|---|---|---|
| Public Key | Identifies your Langfuse project | pk-lf-abc123... |
| Secret Key | Authenticates requests to Langfuse | sk-lf-xyz789... |
| Host | Your Langfuse instance URL | https://cloud.langfuse.com |
Step 2: Store credentials as integration secrets
Your Langfuse keys must be stored securely as Telnyx integration secrets. Navigate to the Integration Secrets tab in the portal.
Create two secrets:
- Langfuse Secret Key — store your Langfuse secret key as the secret value. Choose a memorable identifier (e.g., `langfuse-secret-key`).
- Langfuse Public Key — store your Langfuse public key as the secret value. Choose a memorable identifier (e.g., `langfuse-public-key`).
You will not be able to access the value of a secret after it is stored.
Step 3: Enable observability on your assistant
You can enable observability via the API when creating or updating an assistant:
```bash
curl --request POST \
  --url https://api.telnyx.com/v2/ai/assistants \
  --header "Authorization: Bearer $TELNYX_API_KEY" \
  --header 'Content-Type: application/json' \
  --data '{
    "name": "My Observable Assistant",
    "model": "anthropic/claude-haiku-4-5",
    "instructions": "You are a helpful assistant.",
    "observability_settings": {
      "status": "enabled",
      "secret_key_ref": "langfuse-secret-key",
      "public_key_ref": "langfuse-public-key",
      "host": "https://cloud.langfuse.com"
    }
  }'
```
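The same request can be built programmatically. The sketch below constructs the JSON body in Python; the helper name is ours, and the HTTP call itself is left to whatever client you use:

```python
import json

def build_assistant_payload(name, model, instructions,
                            secret_key_ref, public_key_ref,
                            host="https://cloud.langfuse.com"):
    """Build the JSON body for POST /v2/ai/assistants with observability enabled."""
    return {
        "name": name,
        "model": model,
        "instructions": instructions,
        "observability_settings": {
            "status": "enabled",
            "secret_key_ref": secret_key_ref,
            "public_key_ref": public_key_ref,
            "host": host,
        },
    }

payload = build_assistant_payload(
    "My Observable Assistant",
    "anthropic/claude-haiku-4-5",
    "You are a helpful assistant.",
    "langfuse-secret-key",
    "langfuse-public-key",
)
body = json.dumps(payload)  # send with your HTTP client of choice
```

Note that `secret_key_ref` and `public_key_ref` carry the identifiers of your integration secrets, never the raw Langfuse keys.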
To update an existing assistant:
```bash
curl --request POST \
  --url https://api.telnyx.com/v2/ai/assistants/{assistant_id} \
  --header "Authorization: Bearer $TELNYX_API_KEY" \
  --header 'Content-Type: application/json' \
  --data '{
    "observability_settings": {
      "status": "enabled",
      "secret_key_ref": "langfuse-secret-key",
      "public_key_ref": "langfuse-public-key",
      "host": "https://cloud.langfuse.com"
    }
  }'
```
Disabling observability
To stop tracing, update the status to `disabled`:
```bash
curl --request POST \
  --url https://api.telnyx.com/v2/ai/assistants/{assistant_id} \
  --header "Authorization: Bearer $TELNYX_API_KEY" \
  --header 'Content-Type: application/json' \
  --data '{
    "observability_settings": {
      "status": "disabled"
    }
  }'
```
Observability settings reference
| Field | Type | Required when enabled | Description |
|---|---|---|---|
| `status` | string | Yes | `enabled` or `disabled` |
| `secret_key_ref` | string | Yes | Integration secret identifier for your Langfuse secret key |
| `public_key_ref` | string | Yes | Integration secret identifier for your Langfuse public key |
| `host` | string | Yes | Your Langfuse instance URL |
When status is enabled, all three credential fields are required. The API will return an error if any are missing. The secret references are validated to ensure they exist in your integration secrets.
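The documented rules can be checked client-side before calling the API. This is a sketch of our own, not the server's actual validation logic (the server additionally verifies that the referenced secrets exist):

```python
def validate_observability_settings(settings: dict) -> list[str]:
    """Return validation errors mirroring the documented rules (illustrative)."""
    errors = []
    status = settings.get("status")
    if status not in ("enabled", "disabled"):
        errors.append("status must be 'enabled' or 'disabled'")
    if status == "enabled":
        # All three credential fields are required when enabled.
        for field in ("secret_key_ref", "public_key_ref", "host"):
            if not settings.get(field):
                errors.append(f"{field} is required when status is 'enabled'")
    return errors
```

Running this before the request surfaces missing fields locally instead of waiting for an API error.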
What you will see in Langfuse
Once observability is enabled and your assistant handles a conversation, traces will appear in your Langfuse dashboard.
Traces
Each conversation turn generates a trace. Traces from the same conversation share a deterministic ID derived from the conversation_id, so they are grouped together in the Langfuse UI.
Each trace includes:
- Name: The conversation name (if set), otherwise `chat`
- Metadata: `conversation_id` and `assistant_id`
Generations
Each LLM call appears as a generation observation within the trace. Generations include:
- Model: The LLM model used (e.g., `anthropic/claude-haiku-4-5`)
- Input: The full message array sent to the model, including system prompt and conversation history
- Output: The model’s response content
- Token usage: Prompt tokens, completion tokens, and total tokens (non-streaming only)
Events
When your assistant uses webhook tools, each tool execution appears as an event within the trace. Events include:
- Name: `tool-call-{tool_name}`
- Input: The tool call arguments
- Output: The tool response
Best practices
Security
- Never share your Langfuse keys directly — always store them as Telnyx integration secrets.
- Use separate Langfuse projects for development and production assistants.
- Rotate keys periodically — when you rotate your Langfuse API keys, update the corresponding integration secret values; the assistant configuration can keep referencing the same secret identifiers.
Performance
- Observability adds minimal overhead. Traces are sent asynchronously and do not block conversation flow.
- If you are self-hosting Langfuse, ensure your instance is reachable from Telnyx infrastructure.
Organization
- Use conversation names to make traces easier to find in the Langfuse dashboard. Conversation names are set automatically and appear as the trace name.
- Filter by metadata in Langfuse to find traces for a specific `conversation_id` or `assistant_id`.
Troubleshooting
Traces not appearing in Langfuse
- Verify status is enabled: Check that `observability_settings.status` is `"enabled"` on your assistant.
- Verify credentials: Ensure your `secret_key_ref` and `public_key_ref` point to valid integration secrets with correct Langfuse keys.
- Check the host URL: Confirm the `host` field matches your Langfuse instance (e.g., https://cloud.langfuse.com for Langfuse Cloud).
- Check Langfuse project: Verify you are looking at the correct project in the Langfuse dashboard.
Missing output or token usage
- Token usage is captured for non-streaming LLM calls. Streaming calls may not include token counts depending on the model provider.
- Output is captured after the LLM response completes. If a call fails mid-stream, the output may be empty.
Secret reference errors
If you receive an error like `secret_key_ref not found`, ensure:
- The integration secret exists in your Integration Secrets.
- The identifier in `secret_key_ref` or `public_key_ref` exactly matches the secret name you created.
- The secret belongs to the same organization as the assistant.
Observability not working after key rotation
If you rotated your Langfuse API keys:
- Update the integration secret values in the portal.
- The assistant will automatically use the new values on the next conversation — no assistant update is required.