Merged
2 changes: 1 addition & 1 deletion docs/control/reference/index.md
Original file line number Diff line number Diff line change
@@ -4,7 +4,7 @@ Complete reference documentation for Sequrity Control, including API interfaces,

## Quick Links

- **[REST API](rest_api/index.md)** - HTTP endpoints for chat completions and LangGraph execution
- **[REST API](rest_api/index.md)** - HTTP endpoints for chat completions, responses, messages, and LangGraph execution
- **[SequrityClient.control](sequrity_client/index.md)** - Python client API reference
- **[SQRT Policy Language](sqrt/index.md)** - Policy language specification and grammar

16 changes: 4 additions & 12 deletions docs/control/reference/rest_api/chat_completion.md
@@ -26,7 +26,7 @@ Where `{endpoint_type}` is `chat`, `code`, `agent`, or `lang-graph`. See [URL Pa
| `tools` | `array[Tool]` | No | Tools the model may call. See [Tools](#tools). |
| `stream` | `boolean` | No | If `true`, partial deltas are sent as server-sent events. |
| `seed` | `integer` | No | Seed for deterministic sampling (best-effort). |
| `reasoning_effort` | `string` | No | Reasoning effort for reasoning models: `"minimal"`, `"low"`, `"medium"`, `"high"`. |
| `reasoning_effort` | `string` | No | Reasoning effort for reasoning models: `"none"`, `"minimal"`, `"low"`, `"medium"`, `"high"`, `"xhigh"`. |
| `response_format` | `object` | No | Output format constraint. See [Response Format](#response-format). |
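As a sketch, a request body using the fields above can be assembled like this. The model name and the bearer-token auth scheme are assumptions for illustration, not prescribed by this reference:

```python
import json

# Build a minimal chat-completions request body from the fields documented above.
API_BASE = "https://api.sequrity.ai/control"  # base URL from this reference

payload = {
    "model": "gpt-4o",             # hypothetical model name
    "messages": [
        {"role": "user", "content": "Summarize this repository."},
    ],
    "reasoning_effort": "medium",  # one of: none, minimal, low, medium, high, xhigh
    "stream": False,
    "seed": 42,                    # best-effort determinism
}

url = f"{API_BASE}/chat/v1/chat/completions"
body = json.dumps(payload)
print(url)
# To send (auth scheme assumed):
# requests.post(url, data=body,
#               headers={"Authorization": f"Bearer {api_key}",
#                        "Content-Type": "application/json"})
```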

### Message Types
@@ -67,7 +67,6 @@ Messages are distinguished by the `role` field.
| `refusal` | `string` | No | Refusal message by the assistant. |
| `audio` | `object` | No | Reference to a previous audio response. |
| `tool_calls` | `array[ToolCall]` | No | Tool calls generated by the model. |
| `function_call` | `object` | No | **Deprecated.** Use `tool_calls`. |

#### Tool Message

@@ -77,14 +76,6 @@ Messages are distinguished by the `role` field.
| `content` | `string \| array[ContentPartText]` | Yes | The tool result. |
| `tool_call_id` | `string` | Yes | ID of the tool call this responds to. |

#### Function Message (deprecated)

| Field | Type | Required | Description |
|-------|------|----------|-------------|
| `role` | `"function"` | Yes | |
| `content` | `string \| null` | Yes | The function result. |
| `name` | `string` | Yes | The function name. |

### Content Parts

User messages support multimodal content:
@@ -124,15 +115,16 @@ User messages support multimodal content:
| `model` | `string` | The model used. |
| `choices` | `array[Choice]` | Completion choices. |
| `usage` | `CompletionUsage` | Token usage statistics. |
| `session_id` | `string \| null` | Sequrity session ID (also available via `X-Session-ID` response header). |
| `service_tier` | `string \| null` | Service tier used (e.g., `"auto"`, `"default"`, `"flex"`, `"scale"`, `"priority"`). |
| `system_fingerprint` | `string \| null` | Backend configuration fingerprint. |

### Choice

| Field | Type | Description |
|-------|------|-------------|
| `index` | `integer` | Index of this choice. |
| `message` | `ResponseMessage` | The generated message. |
| `finish_reason` | `string` | Why generation stopped: `"stop"`, `"length"`, `"tool_calls"`, `"content_filter"`, `"error"`. |
| `finish_reason` | `string` | Why generation stopped: `"stop"`, `"length"`, `"tool_calls"`, `"content_filter"`, `"function_call"`. OpenRouter may also return `"error"`. |
| `logprobs` | `object \| null` | Log probability information, if requested. |
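A sketch of dispatching on `finish_reason` from a parsed response (the response dict below is a hypothetical example shaped like the tables above):

```python
# Hypothetical parsed response, shaped per the Choice table above.
response = {
    "choices": [{
        "index": 0,
        "message": {
            "role": "assistant",
            "content": None,
            "tool_calls": [{"id": "call_1", "type": "function",
                            "function": {"name": "get_time", "arguments": "{}"}}],
        },
        "finish_reason": "tool_calls",
        "logprobs": None,
    }]
}

choice = response["choices"][0]
if choice["finish_reason"] == "tool_calls":
    # Execute each tool call, then send results back as `tool` messages.
    pending = [tc["id"] for tc in choice["message"]["tool_calls"]]
elif choice["finish_reason"] in ("stop", "length"):
    pending = []
else:  # "content_filter", "function_call", or provider-specific values like "error"
    pending = []
print(pending)  # ['call_1']
```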

### Response Message
91 changes: 56 additions & 35 deletions docs/control/reference/rest_api/headers/security_config.md
@@ -9,39 +9,40 @@ This header is **optional** and can be used in Headers-Only Mode to fine-tune se
```json
{
"fsm": {
"min_num_tools_for_filtering": 10,
"clear_session_meta": "never",
"min_num_tools_for_filtering": null,
"clear_session_meta": null,
"max_n_turns": null,
"history_mismatch_policy": null,
"clear_history_every_n_attempts": null,
"disable_rllm": true,
"disable_rllm": null,
"enable_multistep_planning": null,
"enabled_internal_tools": null,
"prune_failed_steps": false,
"force_to_cache": [],
"prune_failed_steps": null,
"force_to_cache": null,
"max_pllm_steps": null,
"max_pllm_failed_steps": null,
"max_tool_calls_per_step": null,
"reduced_grammar_for_rllm_review": true,
"retry_on_policy_violation": false,
"reduced_grammar_for_rllm_review": null,
"retry_on_policy_violation": null,
"wrap_tool_result": null,
"detect_tool_errors": null,
"detect_tool_error_regex_pattern": null,
"detect_tool_error_max_result_length": null,
"strict_tool_result_parsing": null
"strict_tool_result_parsing": null,
"tool_result_transform": null
},
"prompt": {
"pllm": {
"flavor": null,
"version": null,
"debug_info_level": "normal",
"debug_info_level": null,
"clarify_ambiguous_queries": null,
"context_var_visibility": null,
"query_inline_roles": null,
"query_role_name_overrides": null,
"query_include_tool_calls": null,
"query_include_tool_args": null,
"query_include_tool_results": null
},
"rllm": {
"flavor": null,
@@ -56,10 +57,11 @@ This header is **optional** and can be used in Headers-Only Mode to fine-tune se
}
},
"response_format": {
"strip_response_content": false,
"include_program": false,
"include_policy_check_history": false,
"include_namespace_snapshot": false
"strip_response_content": null,
"stream_thoughts": null,
"include_program": null,
"include_policy_check_history": null,
"include_namespace_snapshot": null
}
}
```
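As a sketch, the header value can be built from only the overrides you want; unset fields fall back to server defaults. The header name used below is a placeholder, not confirmed by this reference:

```python
import json

# Override only the fields you need; everything omitted stays at its server default.
overrides = {
    "fsm": {
        "disable_rllm": False,          # turn the RLLM review step back on
        "max_tool_calls_per_step": 5,
    },
    "response_format": {
        "include_program": True,
    },
}

# "X-Security-Config" is a placeholder header name -- use your deployment's.
headers = {"X-Security-Config": json.dumps(overrides)}
print(headers["X-Security-Config"])
```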
@@ -84,17 +86,17 @@ All fields are optional and have sensible defaults.

| Type | Required | Default | Constraints |
|------|----------|---------|-------------|
| `integer` or `null` | No | `10` | >= 2 |
| `integer` or `null` | No | `null` | >= 2 |

Minimum number of registered tools to enable tool-filtering LLM step. Set to `null` to disable.
Minimum number of registered tools required to enable the tool-filtering LLM step. When not set, the server default is `10`.

#### `fsm.clear_session_meta`

| Type | Required | Default |
|------|----------|---------|
| `string` | No | `"never"` |
| `string` or `null` | No | `null` |

When to clear session meta information:
When to clear session meta information. When not set, the server default is `"never"`.

- `"never"`: Never clear
- `"every_attempt"`: Clear at the beginning of each PLLM attempt
@@ -134,17 +136,17 @@ Single-step mode only. Clear all failed step history every N attempts to save to

| Type | Required | Default |
|------|----------|---------|
| `boolean` | No | `true` |
| `boolean` or `null` | No | `null` |

Whether to skip the response LLM (RLLM) review step.
Whether to skip the response LLM (RLLM) review step. When not set, the server default is `true`.

#### `fsm.enable_multistep_planning`

| Type | Required | Default |
|------|----------|---------|
| `boolean` | No | `false` |
| `boolean` or `null` | No | `null` |

When `false` (single-step), each attempt solves independently. When `true` (multi-step), each step builds on previous.
When `false` (single-step), each attempt solves the task independently. When `true` (multi-step), each step builds on the previous one. When not set, the server default is `false`.

#### `fsm.enabled_internal_tools`

@@ -158,17 +160,17 @@ List of internal tool IDs available to planning LLM. Valid values: `"parse_with_

| Type | Required | Default |
|------|----------|---------|
| `boolean` | No | `false` |
| `boolean` or `null` | No | `null` |

Multi-step mode only. Remove failed steps from history after turn completes.
Multi-step mode only. Remove failed steps from history after the turn completes. When not set, the server default is `false`.

#### `fsm.force_to_cache`

| Type | Required | Default |
|------|----------|---------|
| `array[string]` | No | `[]` |
| `array[string]` or `null` | No | `null` |

List of tool ID regex patterns to always cache their results regardless of the cache_tool_result setting.
List of tool ID regex patterns whose results are always cached, regardless of the `cache_tool_result` setting. When not set, the server default is `[]`.

#### `fsm.max_pllm_steps`

@@ -198,17 +200,17 @@ Maximum number of tool calls allowed per PLLM attempt. If `null`, no limit is en

| Type | Required | Default |
|------|----------|---------|
| `boolean` | No | `true` |
| `boolean` or `null` | No | `null` |

Whether to paraphrase RLLM output via reduced grammar before feeding back to planning LLM.
Whether to paraphrase RLLM output via a reduced grammar before feeding it back to the planning LLM. When not set, the server default is `true`.

#### `fsm.retry_on_policy_violation`

| Type | Required | Default |
|------|----------|---------|
| `boolean` | No | `false` |
| `boolean` or `null` | No | `null` |

When `true`, allow planning LLM to retry after policy violation.
When `true`, allow the planning LLM to retry after a policy violation. When not set, the server default is `false`.

#### `fsm.wrap_tool_result`

@@ -254,6 +256,17 @@ The maximum length of tool result to consider for error detection. Longer result

If `true`, only parse external tool results as JSON when the tool declares an output_schema. When `false`, always attempt `json.loads` on tool results.
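A minimal sketch of this behavior (the function name and signature are illustrative, not the server's actual implementation):

```python
import json

def parse_tool_result(raw: str, has_output_schema: bool, strict: bool):
    """Sketch of the strict_tool_result_parsing behavior described above."""
    if strict and not has_output_schema:
        return raw                 # strict: leave schema-less results as plain text
    try:
        return json.loads(raw)     # non-strict: always attempt JSON parsing
    except json.JSONDecodeError:
        return raw                 # fall back to the raw string

print(parse_tool_result('{"ok": true}', has_output_schema=False, strict=True))   # '{"ok": true}'
print(parse_tool_result('{"ok": true}', has_output_schema=False, strict=False))  # {'ok': True}
```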

#### `fsm.tool_result_transform`

| Type | Required | Default |
|------|----------|---------|
| `string` or `null` | No | `null` |

Transform applied to tool results before processing:

- `"none"`: No transformation
- `"codex"`: Apply codex-style transformation to tool results

---

## Prompt Overrides (`prompt`)
@@ -268,7 +281,7 @@ Planning LLM prompt overrides:
|-------|------|---------|-------------|
| `flavor` | `string` | `null` | Prompt template variant to use (e.g., `"universal"`). |
| `version` | `string` | `null` | Prompt template version. Combined with flavor to load template. |
| `debug_info_level` | `string` | `"normal"` | Level of detail for debug/execution information in planning LLM prompt: `"minimal"`, `"normal"`, `"extra"`. |
| `debug_info_level` | `string` | `null` | Level of detail for debug/execution information in planning LLM prompt: `"minimal"`, `"normal"`, `"extra"`. When not set, the server default is `"normal"`. |
| `clarify_ambiguous_queries` | `boolean` | `null` | Whether planning LLM is allowed to ask for clarification on ambiguous queries. |
| `context_var_visibility` | `string` | `null` | The visibility level of context variables in the PLLM prompts: `"none"`, `"basic-notext"`, `"basic-executable"`, `"all-executable"`, `"all"`. |
| `query_inline_roles` | `array[string]` | `null` | List of roles whose messages will be inlined into the user query: `"assistant"`, `"tool"`, `"developer"`, `"system"`. |
Expand Down Expand Up @@ -315,30 +328,38 @@ Tool-formulating LLM prompt overrides:

| Type | Required | Default |
|------|----------|---------|
| `boolean` | No | `false` |
| `boolean` or `null` | No | `null` |

When `true`, returns only the essential result value as plain text, stripping all metadata. When not set, the server default is `false`.

#### `response_format.stream_thoughts`

| Type | Required | Default |
|------|----------|---------|
| `boolean` or `null` | No | `null` |

Whether to stream the model's thinking process in the response.

#### `response_format.include_program`

| Type | Required | Default |
|------|----------|---------|
| `boolean` | No | `false` |
| `boolean` or `null` | No | `null` |

Whether to include the generated program in the response. When not set, the server default is `false`.

#### `response_format.include_policy_check_history`

| Type | Required | Default |
|------|----------|---------|
| `boolean` | No | `false` |
| `boolean` or `null` | No | `null` |

Whether to include policy check results even when there are no violations. When not set, the server default is `false`.

#### `response_format.include_namespace_snapshot`

| Type | Required | Default |
|------|----------|---------|
| `boolean` | No | `false` |
| `boolean` or `null` | No | `null` |

Whether to include a snapshot of all variables after program execution. When not set, the server default is `false`.
Original file line number Diff line number Diff line change
@@ -28,7 +28,7 @@ This header is **required** when using Headers-Only Mode (must be provided toget

| Type | Required | Default |
|------|----------|---------|
| `string` | Yes | - |
| `string` | No | `null` |

The agent architecture to use. Valid values:

2 changes: 1 addition & 1 deletion docs/control/reference/rest_api/headers/security_policy.md
@@ -69,7 +69,7 @@ Whether to auto-generate policies based on tool metadata and natural language de
|------|----------|---------|
| `boolean` or `null` | No | `null` |

Whether to fail fast on first hard denial during policy checks.
Whether to fail fast on the first hard denial during policy checks. When not set (i.e., `null`), the server default is `true`.

### `presets`

37 changes: 29 additions & 8 deletions docs/control/reference/rest_api/index.md
Expand Up @@ -10,9 +10,10 @@ Sequrity Control API (`https://api.sequrity.ai/control`) provides the following

| URL | Status | Description |
|-----|--------| ------------|
| `POST /chat/v1/chat/completions` | :white_check_mark: | OpenAI-compatible chat completions (default provider) |
| `POST /chat/v1/chat/completions` | :white_check_mark: | OpenAI-compatible chat completions (default: [OpenRouter](https://openrouter.ai/)) |
| `POST /chat/openai/v1/chat/completions` | :white_check_mark: | Chat completions with [OpenAI](https://openai.com/) |
| `POST /chat/openrouter/v1/chat/completions` | :white_check_mark: | Chat completions with [OpenRouter](https://openrouter.ai/) |
| `POST /chat/sequrity_azure/v1/chat/completions` | :white_check_mark: | Chat completions with Sequrity Azure |

### Anthropic Messages

@@ -25,24 +26,43 @@ Sequrity Control API (`https://api.sequrity.ai/control`) provides the following

| URL | Status | Description |
|-----|--------| ------------|
| `POST /code/v1/chat/completions` | :white_check_mark: | Code-oriented chat completions (default provider) |
| `POST /code/{service_provider}/v1/chat/completions` | :white_check_mark: | Code-oriented chat completions with specified [service provider](../../../general/rest_api/service_provider.md) |
| `POST /code/v1/chat/completions` | :white_check_mark: | Code-oriented chat completions (default: [OpenRouter](https://openrouter.ai/)) |
| `POST /code/openai/v1/chat/completions` | :white_check_mark: | Code-oriented chat completions with [OpenAI](https://openai.com/) |
| `POST /code/openrouter/v1/chat/completions` | :white_check_mark: | Code-oriented chat completions with [OpenRouter](https://openrouter.ai/) |
| `POST /code/sequrity_azure/v1/chat/completions` | :white_check_mark: | Code-oriented chat completions with Sequrity Azure |
| `POST /code/v1/messages` | :white_check_mark: | Code-oriented Anthropic Messages (default provider) |
| `POST /code/anthropic/v1/messages` | :white_check_mark: | Code-oriented Messages with Anthropic |
| `POST /code/v1/responses` | :white_check_mark: | Code-oriented Responses API (default: [OpenAI](https://openai.com/)) |
| `POST /code/openai/v1/responses` | :white_check_mark: | Code-oriented Responses with [OpenAI](https://openai.com/) |
| `POST /code/sequrity_azure/v1/responses` | :white_check_mark: | Code-oriented Responses with Sequrity Azure |

### Responses

| URL | Status | Description |
|-----|--------| ------------|
| `POST /chat/v1/responses` | :white_check_mark: | OpenAI-compatible Responses API (default provider) |
| `POST /chat/openai/v1/responses` | :white_check_mark: | Responses API with [OpenAI](https://openai.com/) |
| `POST /chat/sequrity_azure/v1/responses` | :white_check_mark: | Responses API with Sequrity Azure |

### LangGraph

| URL | Status | Description |
|-----|--------| ------------|
| `POST /lang-graph/v1/chat/completions` | :white_check_mark: | Chat completions for [LangGraphExecutor](../sequrity_client/langgraph.md) (default provider) |
| `POST /lang-graph/{service_provider}/v1/chat/completions` | :white_check_mark: | LangGraph chat completions with specified [service provider](../../../general/rest_api/service_provider.md) |
| `POST /lang-graph/v1/chat/completions` | :white_check_mark: | Chat completions for [LangGraphExecutor](../sequrity_client/langgraph.md) (default: [OpenRouter](https://openrouter.ai/)) |
| `POST /lang-graph/openai/v1/chat/completions` | :white_check_mark: | LangGraph chat completions with [OpenAI](https://openai.com/) |
| `POST /lang-graph/openrouter/v1/chat/completions` | :white_check_mark: | LangGraph chat completions with [OpenRouter](https://openrouter.ai/) |
| `POST /lang-graph/sequrity_azure/v1/chat/completions` | :white_check_mark: | LangGraph chat completions with Sequrity Azure |
| `POST /lang-graph/anthropic/v1/messages` | :white_check_mark: | LangGraph Messages with Anthropic |

### Policy Generation

| URL | Status | Description |
|-----|--------| ------------|
| `POST /policy-gen/v1/generate` | :white_check_mark: | Generate security policies from natural language descriptions |
| `POST /policy-gen/v1/generate` | :white_check_mark: | Generate security policies (default: [OpenRouter](https://openrouter.ai/)) |
| `POST /policy-gen/openai/v1/generate` | :white_check_mark: | Policy generation with [OpenAI](https://openai.com/) |
| `POST /policy-gen/openrouter/v1/generate` | :white_check_mark: | Policy generation with [OpenRouter](https://openrouter.ai/) |
| `POST /policy-gen/anthropic/v1/generate` | :white_check_mark: | Policy generation with [Anthropic](https://anthropic.com/) |
| `POST /policy-gen/sequrity_azure/v1/generate` | :white_check_mark: | Policy generation with Sequrity Azure |

### Utility

@@ -61,17 +81,18 @@ https://api.sequrity.ai/control/{endpoint_type}/{service_provider?}/{version}/{a

| Segment | Description | Examples |
|---------|-------------|---------|
| `endpoint_type` | The type of endpoint | `chat`, `code`, `lang-graph`, `policy-gen` |
| `endpoint_type` | The type of endpoint | `chat`, `code`, `agent`, `lang-graph`, `policy-gen` |
| `service_provider` | Optional LLM service provider | `openai`, `openrouter`, `anthropic`, `sequrity_azure` |
| `version` | API version | `v1` |
| `api_suffix` | API-specific suffix | `chat/completions`, `messages`, `generate` |
| `api_suffix` | API-specific suffix | `chat/completions`, `messages`, `responses`, `generate` |

When `service_provider` is omitted, the default provider is used.

## Documentation

- **[Service Providers](../../../general/rest_api/service_provider.md)** - Available LLM service providers
- **[Chat Completion](chat_completion.md)** - OpenAI-compatible Chat Completions API reference
- **[Responses](responses.md)** - OpenAI-compatible Responses API reference
- **[Messages](messages.md)** - Anthropic-compatible Messages API reference

### Custom Headers
Expand Down
1 change: 0 additions & 1 deletion docs/control/reference/rest_api/messages.md
@@ -152,7 +152,6 @@ Extended thinking configuration. Discriminated by `type`:
| `stop_reason` | `string \| null` | Why generation stopped: `"end_turn"`, `"max_tokens"`, `"stop_sequence"`, `"tool_use"`, `"pause_turn"`, `"refusal"`. |
| `stop_sequence` | `string \| null` | Which stop sequence was hit, if any. |
| `usage` | `Usage` | Token usage statistics. |
| `session_id` | `string \| null` | Sequrity session ID (also available via `X-Session-ID` response header). |

### Response Content Blocks
