Edge Compute enforces resource limits to ensure fair usage and platform stability.

Execution Limits

Limit                  Default      Maximum
Request timeout        30 seconds   60 seconds
CPU time per request   30 seconds   60 seconds
Memory per container   256 MB       512 MB
Request body size      10 MB        10 MB
Response body size     10 MB        10 MB

Request Timeout

Functions must return a response within the timeout period. If exceeded, the request is terminated and returns a 504 Gateway Timeout.
# Handle long operations gracefully
import signal

def _on_timeout(signum, frame):
    raise TimeoutError

# Install a SIGALRM handler — without it, the alarm would kill the
# process instead of raising TimeoutError
signal.signal(signal.SIGALRM, _on_timeout)

def handler(request):
    # Set internal timeout shorter than the platform limit
    signal.alarm(25)  # 25 seconds, leaving a buffer

    try:
        return long_running_operation()
    except TimeoutError:
        return {"error": "Operation timed out"}
    finally:
        signal.alarm(0)  # Cancel the alarm either way

Memory

Each container has a fixed memory allocation. If your function exceeds memory limits, the container is terminated and a new one starts for subsequent requests. Tips for memory efficiency:
  • Stream large files instead of loading into memory
  • Use generators for large datasets
  • Clean up resources after use
  • Avoid global caches that grow unbounded
# Bad — reads the entire file into memory at once
def bad_handler(request):
    with open("large_file.bin", "rb") as f:
        data = f.read()  # May exceed the memory limit
    return process(data)

# Good — streams the file chunk by chunk
def good_handler(request):
    with open("large_file.bin", "rb") as f:
        while chunk := f.read(64 * 1024):
            yield process_chunk(chunk)

Function Limits

Limit                                Value
Function code size                   50 MB (compressed)
Environment variables per function   64
Environment variable name size       256 bytes
Environment variable value size      5 KB
Secrets per organization             100
Secret value size                    10 KB

Code Size

The 50 MB limit includes your code and all dependencies (after compression). Most functions are well under this limit. If you’re hitting size limits:
  • Remove unused dependencies
  • Use lighter alternatives (e.g., httpx instead of requests + urllib3)
  • Exclude development dependencies from production builds
  • Consider splitting into multiple functions
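You can catch the code size limit locally rather than at upload time by approximating the compressed size of your build directory. A minimal sketch — the platform's exact compression scheme isn't specified here, so DEFLATE is an assumption:

```python
# Sketch: measure a bundle's compressed size before deploying.
# DEFLATE is an assumed stand-in for the platform's compression.
import os
import zipfile

CODE_SIZE_LIMIT = 50 * 1024 * 1024  # 50 MB, compressed

def compressed_bundle_size(bundle_dir: str, out_path: str = "bundle.zip") -> int:
    """Zip bundle_dir with DEFLATE and return the archive size in bytes."""
    with zipfile.ZipFile(out_path, "w", zipfile.ZIP_DEFLATED) as zf:
        for root, _, files in os.walk(bundle_dir):
            for name in files:
                path = os.path.join(root, name)
                zf.write(path, os.path.relpath(path, bundle_dir))
    return os.path.getsize(out_path)

def check_bundle(bundle_dir: str) -> None:
    size = compressed_bundle_size(bundle_dir)
    if size > CODE_SIZE_LIMIT:
        raise RuntimeError(
            f"bundle is {size / 1e6:.1f} MB compressed; limit is 50 MB"
        )
```

Running this as a pre-deploy step fails fast instead of waiting for the upload to be rejected.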

Network Limits

Limit                              Value
Outbound connections per request   100
Outbound request timeout           30 seconds (configurable)
DNS resolution timeout             5 seconds

Outbound Connections

Each invocation can make up to 100 outbound network connections. This includes:
  • HTTP/HTTPS requests
  • Database connections
  • TCP sockets
Connection pooling is recommended:
# Initialize connection pool globally (once per container)
import httpx

client = httpx.Client(
    timeout=10.0,
    limits=httpx.Limits(max_connections=20)
)

def handler(request):
    # Reuse the pooled connection
    response = client.get('https://api.example.com/data')
    return response.json()

Rate Limits

Limit                             Value
Deployments per hour              60
API requests per minute           1,000
Concurrent function invocations   No hard limit (auto-scales)

Deployment Limits

You can deploy up to 60 times per hour per organization. This is rarely hit in normal development.

Invocation Scaling

There’s no hard limit on concurrent invocations — the platform auto-scales to handle traffic. However, each new container incurs a cold start, so extremely spiky traffic may see increased latency.

Account Limits

Limit                        Value
Functions per organization   100
Total function invocations   Based on plan
Total CPU time               Based on plan

Need higher limits? Contact support@telnyx.com to discuss enterprise plans.

Storage Limits (Coming Soon)

These limits apply to upcoming storage features:

KV Storage

Limit                         Value
Key size                      512 bytes
Value size                    25 MB
Keys per namespace            1 billion
Namespaces per organization   100
Read operations per second    10,000
Write operations per second   1,000

SQL Database

Limit                        Value
Database size                10 GB
Databases per organization   10
Rows per table               No hard limit
Query timeout                30 seconds

Error Handling

When limits are exceeded, the platform returns specific error codes:
Error               Code   Meaning
Request Timeout     504    Function didn’t respond in time
Memory Exceeded     500    Container terminated due to memory
Payload Too Large   413    Request/response body exceeded limit
Rate Limited        429    Too many requests or deployments
Example error response:
{
  "error": {
    "code": "timeout_exceeded",
    "message": "Function execution exceeded 30 second timeout",
    "request_id": "req_abc123"
  }
}
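A caller can branch on the `code` field of this payload. A minimal sketch — only `timeout_exceeded` appears in the example above, so the other code names in the retryable set are hypothetical placeholders:

```python
# Sketch: classify a platform error payload by its `code` field.
# "rate_limited" is an assumed code name; only "timeout_exceeded"
# is shown in the documented example response.
RETRYABLE_CODES = {"timeout_exceeded", "rate_limited"}

def classify_error(payload: dict) -> tuple[str, bool]:
    """Return (error_code, is_retryable) for a platform error payload."""
    err = payload.get("error", {})
    code = err.get("code", "unknown")
    return code, code in RETRYABLE_CODES
```

For example, `classify_error({"error": {"code": "timeout_exceeded"}})` reports the error as retryable, while an empty or malformed payload falls back to `("unknown", False)`.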

Best Practices

Stay Within Limits

  1. Set conservative timeouts — Use 25 seconds internally when the platform limit is 30
  2. Monitor memory — Log memory usage during development
  3. Stream large data — Avoid buffering entire files
  4. Use connection pools — Reuse HTTP/database connections
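Tip 2 above can be done with the standard library's tracemalloc module during development. Note that tracemalloc counts Python-level allocations only, not total container memory, so treat the result as a lower bound:

```python
# Sketch: log peak Python allocations while running a function under test.
# tracemalloc tracks Python object allocations, not container RSS.
import tracemalloc

def measure_memory(func, *args, **kwargs):
    tracemalloc.start()
    try:
        result = func(*args, **kwargs)
        _, peak = tracemalloc.get_traced_memory()
    finally:
        tracemalloc.stop()
    print(f"peak allocations: {peak / (1024 * 1024):.1f} MB")
    return result
```

Wrapping your handler in `measure_memory` during local testing gives an early warning before you approach the 256 MB container limit.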

Handle Limit Errors

import httpx

def handler(request):
    try:
        # Set explicit timeout shorter than platform limit
        response = httpx.get(
            'https://api.example.com/data',
            timeout=25.0
        )
        return response.json()
    except httpx.TimeoutException:
        return {
            "error": "Upstream API timed out",
            "status": 504
        }
    except httpx.HTTPError as e:
        return {
            "error": f"Request failed: {e}",
            "status": 502
        }
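The same pattern extends to 429 responses from the platform API: back off and retry. A minimal sketch, where the `call` closure is a stand-in for whatever request you are making:

```python
# Sketch: retry a rate-limited (429) call with exponential backoff.
# `call` is a stand-in for your HTTP request; it returns (status, body).
import random
import time

def with_backoff(call, max_retries=3, base_delay=0.5):
    for attempt in range(max_retries + 1):
        status, body = call()
        if status != 429 or attempt == max_retries:
            return status, body
        # Exponential backoff with a little jitter to avoid thundering herd
        time.sleep(base_delay * (2 ** attempt) + random.uniform(0, 0.1))
```

With the defaults this waits roughly 0.5 s, 1 s, then 2 s between attempts, then gives up and returns the last 429.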

Next Steps