messages | Provides chat context | ✅ | ✅ |
model | Adjusts speed + quality | ✅ | ✅ |
stream | Streams response | ✅ | ✅ |
max_tokens | Limits output length | ✅ | ✅ |
temperature | Adjusts predictability | ✅ | ✅ |
top_p | Adjusts variety | ✅ | ✅ |
frequency_penalty | Penalizes tokens by repeat count | ✅ | ✅ |
presence_penalty | Penalizes tokens on any repeat | ✅ | ✅ |
n | Returns n responses | ✅ | ✅ |
stop | Stops output at given strings | ✅ | ✅ |
logit_bias | Tweaks token probabilities | ✅ | ✅ |
logprobs | Returns token probabilities | ✅ | ✅ |
-> top_logprobs | -> For how many candidates? | ✅ | ✅ |
seed | Makes sampling reproducible | ✅ | ✅ |
response_format | Ensures syntax (e.g. JSON) | ✅ | ✅ |
guided_json | Ensures output conforms to schema | ✅ | ❌ |
guided_regex | Ensures output conforms to regex | ✅ | ❌ |
guided_choice | Ensures output conforms to multiple choice | ✅ | ❌ |
min_p | top_p alternative | ✅ | ❌ |
use_beam_search | Explores more options | ✅ | ❌ |
-> best_of | -> How many options? | ✅ | ❌ |
-> length_penalty | -> Are long options bad? | ✅ | ❌ |
-> early_stopping | -> How hard should it try? | ✅ | ❌ |
tools | Helps model respond | ✅ | ✅ |
-> functions | -> Outputs JSON for your code | 🔜 | ✅ |
-> retrieval | -> Uses your docs (e.g. PDFs) | ✅ | ❌ |
tool_choice | How does model choose? | 🔜 | ✅ |
user | Tracks users | ❌ | ✅ |
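
The parameters above all travel together in one request body. A minimal sketch of building such a body in Python, assuming an OpenAI-compatible `/v1/chat/completions` endpoint — the model name, URL, and values here are illustrative placeholders, not recommendations:

```python
import json

# Assemble a chat-completion request body using the common sampling knobs.
payload = {
    "model": "example-model",  # placeholder model name
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Name three prime numbers."},
    ],
    "max_tokens": 64,          # cap output length
    "temperature": 0.7,        # lower = more predictable
    "top_p": 0.9,              # nucleus-sampling cutoff
    "frequency_penalty": 0.5,  # penalize tokens by how often they appeared
    "presence_penalty": 0.5,   # penalize tokens that appeared at all
    "n": 1,                    # number of completions to return
    "stop": ["\n\n"],          # halt at the first blank line
    "seed": 42,                # best-effort reproducibility
    "stream": False,           # set True to receive tokens incrementally
}

body = json.dumps(payload)
# POST `body` with header "Content-Type: application/json" to your
# endpoint (e.g. http://localhost:8000/v1/chat/completions) to get a reply.
```

Server-specific extensions such as `guided_json` or `guided_choice` would be added as extra top-level keys in the same dict, on servers that accept them.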