messages | array | Required | A list of messages comprising the conversation. Each message has a role and content; see the role and content fields below. |
→role | string | Required | The role of the message author: "system", "user", "assistant", "tool", or "developer". |
→content | string | array | Required | The content of the message. Can be a string or an array of content parts (text, image_url). |
model | string | Optional | The model to use. Defaults to "glm-4.7". Available: "glm-5", "glm-4.7", "glm-4.7-flash", "glm-4.6", "glm-4.5", "glm-4.5-air", "minimax-m2.5". |
stream | boolean | Optional | If true, returns a stream of Server-Sent Events. Default: false. |
stream_options | object | Optional | Options for streaming. Set { "include_usage": true } to receive usage stats in the final chunk. |
reasoning | object | Optional | Controls reasoning behavior for OpenRouter-style routing. Set { "enabled": false } to skip reasoning and stream user-facing content earlier. |
→reasoning.enabled | boolean | Optional | Set to false to skip reasoning output and emit content directly. |
thinking | object | Optional | Legacy compatibility field. Use { "type": "disabled" } for backward compatibility when supported. |
temperature | number | Optional | Sampling temperature between 0 and 2. Higher values make output more random. Default: 1. |
top_p | number | Optional | Nucleus sampling. The model considers only the tokens comprising the top_p cumulative probability mass. Default: 1. |
max_tokens | integer | Optional | Maximum number of tokens to generate. Model maximum: 32,768. |
seed | integer | Optional | A seed for repeatable generation. With the same seed and the same input, the model attempts to return the same output (best effort). |
stop | string | array | Optional | Up to 4 sequences where the model will stop generating. |
frequency_penalty | number | Optional | Penalizes tokens based on how often they have appeared in the text so far. Positive values reduce verbatim repetition. Range: -2 to 2. |
presence_penalty | number | Optional | Penalizes tokens based on whether they have appeared in the text so far. Positive values encourage the model to introduce new topics. Range: -2 to 2. |
tools | array | Optional | A list of tools (functions) the model may call. Each tool has a type ("function") and a function object containing a name, description, and JSON Schema parameters. |
tool_choice | string | object | Optional | Controls tool calling: "none", "auto", "required", or { "type": "function", "function": { "name": "..." } }. |
response_format | object | Optional | Set { "type": "json_object" } to force JSON output. The model will return valid JSON. |
logprobs | boolean | Optional | Whether to return log probabilities of output tokens. |
top_logprobs | integer | Optional | Number of most likely tokens to return at each position (0-20). Requires logprobs: true. |
logit_bias | object | Optional | Map of token IDs to bias values (-100 to 100). Use to increase or decrease likelihood of specific tokens. |
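A minimal non-streaming request built from the parameters above might look like the following Python sketch. The endpoint path and authorization header shown in the comment are assumptions based on common OpenAI-compatible layouts, not confirmed values for this API.

```python
import json

# Build a minimal chat-completion payload using parameters from the table above.
payload = {
    "model": "glm-4.7",          # the documented default model
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize nucleus sampling in one sentence."},
    ],
    "temperature": 0.7,          # 0-2; lower values are more deterministic
    "top_p": 1,                  # nucleus sampling cutoff
    "max_tokens": 256,           # must not exceed the 32,768 model maximum
    "seed": 42,                  # best-effort repeatability across calls
}

body = json.dumps(payload)
# A real call (hypothetical endpoint and header, shown for shape only):
# requests.post("https://api.example.com/v1/chat/completions",
#               headers={"Authorization": "Bearer <API_KEY>"}, data=body)
```

Omitting optional fields such as temperature and top_p leaves them at the defaults listed in the table.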
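When stream is true, the response arrives as Server-Sent Events. The sketch below shows a streaming payload plus a minimal parser for one SSE data line; the "[DONE]" sentinel and the exact chunk shape are assumptions carried over from common OpenAI-compatible streams.

```python
import json

# Streaming payload: usage stats arrive in the final chunk, and
# reasoning is disabled so user-facing content streams earlier.
payload = {
    "model": "glm-4.7",
    "messages": [{"role": "user", "content": "Hello"}],
    "stream": True,
    "stream_options": {"include_usage": True},
    "reasoning": {"enabled": False},
}

def parse_sse_line(line: str):
    """Parse one SSE line; return the chunk dict, or None for non-data lines."""
    if not line.startswith("data: "):
        return None                      # comments, keep-alives, blank lines
    data = line[len("data: "):]
    if data == "[DONE]":                 # assumed end-of-stream sentinel
        return None
    return json.loads(data)

# Example chunk shape (assumed): incremental text lives under delta.content.
chunk = parse_sse_line('data: {"choices": [{"delta": {"content": "Hi"}}]}')
```

In practice you would iterate over the response body line by line, concatenating each chunk's delta.content.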
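The tools and tool_choice parameters work together: tools declares what the model may call, and tool_choice can force a specific call. The get_weather function below is hypothetical, used only to illustrate the declared shape.

```python
# Declare one hypothetical tool with a JSON Schema for its arguments.
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

payload = {
    "model": "glm-4.7",
    "messages": [{"role": "user", "content": "Weather in Paris?"}],
    "tools": tools,
    # Force a call to get_weather; "auto" would let the model decide.
    "tool_choice": {"type": "function", "function": {"name": "get_weather"}},
}
```

With tool_choice set to "none" the model never calls tools; with "required" it must call at least one.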