Changes from all commits (26 commits)
021bd5e  refactor: remove fine_tuning API  (stainless-app[bot], Mar 27, 2026)
d6a79d0  fix: remove duplicate dataset_id parameter in append-rows endpoint  (stainless-app[bot], Mar 11, 2026)
147b88b  fix(inference): improve chat completions OpenAI conformance  (stainless-app[bot], Mar 13, 2026)
f468096  chore(internal): version bump  (stainless-app[bot], Mar 13, 2026)
b4c2f15  feat: Add stream_options parameter support  (stainless-app[bot], Mar 16, 2026)
accfb0f  codegen metadata  (stainless-app[bot], Mar 16, 2026)
f6836f9  fix(pydantic): do not pass `by_alias` unless set  (stainless-app[bot], Mar 16, 2026)
50ea4d7  fix(deps): bump minimum typing-extensions version  (stainless-app[bot], Mar 16, 2026)
1f28d73  feat!: eliminate /files/{file_id} GET differences  (stainless-app[bot], Mar 16, 2026)
1df7e26  chore(internal): tweak CI branches  (stainless-app[bot], Mar 16, 2026)
94a14da  refactor: rename rag-runtime provider to file-search  (stainless-app[bot], Mar 18, 2026)
9b288d5  fix: sanitize endpoint path params  (stainless-app[bot], Mar 28, 2026)
23d591c  refactor(tests): switch from prism to steady  (stainless-app[bot], Mar 28, 2026)
f5ad8f8  chore(tests): bump steady to v0.19.4  (stainless-app[bot], Mar 28, 2026)
55689e1  chore(tests): bump steady to v0.19.5  (stainless-app[bot], Mar 28, 2026)
c0df2dc  refactor: remove tool_groups from public API and auto-register from p…  (stainless-app[bot], Mar 28, 2026)
0e98cfd  chore(internal): update gitignore  (stainless-app[bot], Mar 28, 2026)
e364f5d  codegen metadata  (stainless-app[bot], Mar 23, 2026)
87cb87e  chore(tests): bump steady to v0.19.6  (stainless-app[bot], Mar 28, 2026)
b096c2c  chore(ci): skip lint on metadata-only changes  (stainless-app[bot], Mar 24, 2026)
10f6ed7  chore(tests): bump steady to v0.19.7  (stainless-app[bot], Mar 28, 2026)
f5c27db  refactor!: rename agents API to responses API  (stainless-app[bot], Mar 25, 2026)
dad9f54  feat!: eliminate GET /chat/completions/{completion_id} conformance is…  (stainless-app[bot], Mar 26, 2026)
6694121  feat(internal): implement indices array format for query and form ser…  (stainless-app[bot], Mar 28, 2026)
d9bc91a  feat(responses): add cancel endpoint for background responses  (stainless-app[bot], Mar 27, 2026)
00c6ed3  release: 0.7.0-alpha.1  (stainless-app[bot], Mar 28, 2026)
18 changes: 10 additions & 8 deletions .github/workflows/ci.yml
@@ -1,12 +1,14 @@
 name: CI
 on:
   push:
-    branches-ignore:
-      - 'generated'
-      - 'codegen/**'
-      - 'integrated/**'
-      - 'stl-preview-head/**'
-      - 'stl-preview-base/**'
+    branches:
+      - '**'
+      - '!integrated/**'
+      - '!stl-preview-head/**'
+      - '!stl-preview-base/**'
+      - '!generated'
+      - '!codegen/**'
+      - 'codegen/stl/**'
   pull_request:
     branches-ignore:
       - 'stl-preview-head/**'
@@ -18,7 +20,7 @@ jobs:
     timeout-minutes: 10
     name: lint
     runs-on: ${{ github.repository == 'stainless-sdks/llama-stack-client-python' && 'depot-ubuntu-24.04' || 'ubuntu-latest' }}
-    if: github.event_name == 'push' || github.event.pull_request.head.repo.fork
+    if: (github.event_name == 'push' || github.event.pull_request.head.repo.fork) && (github.event_name != 'push' || github.event.head_commit.message != 'codegen metadata')
     steps:
       - uses: actions/checkout@v6
@@ -34,7 +36,7 @@ jobs:
         run: ./scripts/lint

   build:
-    if: github.event_name == 'push' || github.event.pull_request.head.repo.fork
+    if: (github.event_name == 'push' || github.event.pull_request.head.repo.fork) && (github.event_name != 'push' || github.event.head_commit.message != 'codegen metadata')
     timeout-minutes: 10
     name: build
     permissions:
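The switch from `branches-ignore` to an ordered `branches` list relies on last-match-wins semantics, where `!` negates a previous match; that ordering is what lets `codegen/stl/**` be re-included after `!codegen/**` excludes the rest of `codegen/`. A rough sketch of that evaluation order, using Python's `fnmatch` (whose `*` also crosses `/`, so this approximates rather than reproduces Actions' glob rules):

```python
from fnmatch import fnmatch

# Branch patterns from the new ci.yml; evaluated in order,
# with '!' flipping a prior match off (last match wins).
PATTERNS = [
    "**",
    "!integrated/**",
    "!stl-preview-head/**",
    "!stl-preview-base/**",
    "!generated",
    "!codegen/**",
    "codegen/stl/**",
]

def ci_runs_for(branch: str) -> bool:
    """Approximate GitHub Actions branch-filter matching."""
    matched = False
    for pattern in PATTERNS:
        negated = pattern.startswith("!")
        body = pattern.lstrip("!")
        if fnmatch(branch, body):
            matched = not negated
    return matched
```

Under this sketch, `main` and `codegen/stl/foo` trigger CI while `generated` and other `codegen/*` branches do not.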
2 changes: 1 addition & 1 deletion .release-please-manifest.json
@@ -1,3 +1,3 @@
 {
-  ".": "0.6.1-alpha.1"
+  ".": "0.7.0-alpha.1"
 }
8 changes: 4 additions & 4 deletions .stats.yml
@@ -1,4 +1,4 @@
-configured_endpoints: 108
-openapi_spec_url: https://storage.googleapis.com/stainless-sdk-openapi-specs/llamastack%2Fllama-stack-client-1b387ba7b0e0d1aa931032ac2101e5a473b9fa42975e6575cf889feace342b80.yml
-openapi_spec_hash: a144868005520bd3f8f9dc3d8cac1c22
-config_hash: ef1f9b33e203c71cfc10d91890c1ed2d
+configured_endpoints: 94
+openapi_spec_url: https://storage.googleapis.com/stainless-sdk-openapi-specs/llamastack%2Fllama-stack-client-ce0519da94b80140cbcd7c35fa41c78daa192530a7525b3452f2a0b2e998cfc8.yml
+openapi_spec_hash: 8a9ec9c7c3a1216ec0e8580ad598a529
+config_hash: 7d5765272a641656f8231509937663a7
48 changes: 48 additions & 0 deletions CHANGELOG.md
@@ -1,5 +1,53 @@
 # Changelog

+## 0.7.0-alpha.1 (2026-03-28)
+
+Full Changelog: [v0.6.1-alpha.1...v0.7.0-alpha.1](https://github.com/llamastack/llama-stack-client-python/compare/v0.6.1-alpha.1...v0.7.0-alpha.1)
+
+### ⚠ BREAKING CHANGES
+
+* eliminate GET /chat/completions/{completion_id} conformance issues
+* rename agents API to responses API
+* eliminate /files/{file_id} GET differences
+
+### Features
+
+* Add stream_options parameter support ([b4c2f15](https://github.com/llamastack/llama-stack-client-python/commit/b4c2f15b16872730a9c254b1b2dfc02aba223a71))
+* eliminate /files/{file_id} GET differences ([1f28d73](https://github.com/llamastack/llama-stack-client-python/commit/1f28d730824b6cb721415985194c5f4567e42ea7))
+* eliminate GET /chat/completions/{completion_id} conformance issues ([dad9f54](https://github.com/llamastack/llama-stack-client-python/commit/dad9f546400133d34a0cd650a227800be78b0d1f))
+* **internal:** implement indices array format for query and form serialization ([6694121](https://github.com/llamastack/llama-stack-client-python/commit/6694121eee689fb7033704bad2b698a4640e2431))
+* **responses:** add cancel endpoint for background responses ([d9bc91a](https://github.com/llamastack/llama-stack-client-python/commit/d9bc91afecb64ec27b97d37699d5ff6c1222d369))
+
+
+### Bug Fixes
+
+* **deps:** bump minimum typing-extensions version ([50ea4d7](https://github.com/llamastack/llama-stack-client-python/commit/50ea4d7fd98a86726f6825d911507b7fc96e2e60))
+* **inference:** improve chat completions OpenAI conformance ([147b88b](https://github.com/llamastack/llama-stack-client-python/commit/147b88b44eb83bceb7cd6204cd79d8dafe8f8e7a))
+* **pydantic:** do not pass `by_alias` unless set ([f6836f9](https://github.com/llamastack/llama-stack-client-python/commit/f6836f9dacef1b9b26774fcfaf82689ae00f374a))
+* remove duplicate dataset_id parameter in append-rows endpoint ([d6a79d0](https://github.com/llamastack/llama-stack-client-python/commit/d6a79d0a830bad4e82b70d7ab9e007ebc16e0f05))
+* sanitize endpoint path params ([9b288d5](https://github.com/llamastack/llama-stack-client-python/commit/9b288d553ae83860fbe1d8ee9352532ed04ddd9b))
+
+
+### Chores
+
+* **ci:** skip lint on metadata-only changes ([b096c2c](https://github.com/llamastack/llama-stack-client-python/commit/b096c2ce513a5d2de9a17e7841609feb30d1b0b2))
+* **internal:** tweak CI branches ([1df7e26](https://github.com/llamastack/llama-stack-client-python/commit/1df7e2605e78572eccc53aa8db1e44d987106a9b))
+* **internal:** update gitignore ([0e98cfd](https://github.com/llamastack/llama-stack-client-python/commit/0e98cfdcf7779ca24ef4dbd7e9e8d9c75fa2a751))
+* **internal:** version bump ([f468096](https://github.com/llamastack/llama-stack-client-python/commit/f46809696ddf1f179cc26984facfcbb7f9264730))
+* **tests:** bump steady to v0.19.4 ([f5ad8f8](https://github.com/llamastack/llama-stack-client-python/commit/f5ad8f801078d79c03ec7723cd64b1c9895def2d))
+* **tests:** bump steady to v0.19.5 ([55689e1](https://github.com/llamastack/llama-stack-client-python/commit/55689e1ddee55d81efff681dbb3523b0ed09d658))
+* **tests:** bump steady to v0.19.6 ([87cb87e](https://github.com/llamastack/llama-stack-client-python/commit/87cb87e8ecd52d95b5a375f8b4c00f5837e4feeb))
+* **tests:** bump steady to v0.19.7 ([10f6ed7](https://github.com/llamastack/llama-stack-client-python/commit/10f6ed745b38d89be2d6a5eb007427b015e84e23))
+
+
+### Refactors
+
+* remove fine_tuning API ([021bd5e](https://github.com/llamastack/llama-stack-client-python/commit/021bd5e6138574884befe6f20ba86ceeefee1767))
+* remove tool_groups from public API and auto-register from provider specs ([c0df2dc](https://github.com/llamastack/llama-stack-client-python/commit/c0df2dcf9bb38600f73db746dc38d3277e74e7b9))
+* rename agents API to responses API ([f5c27db](https://github.com/llamastack/llama-stack-client-python/commit/f5c27db9d2716098a116d516cc5ad673ee621988))
+* rename rag-runtime provider to file-search ([94a14da](https://github.com/llamastack/llama-stack-client-python/commit/94a14dad88ed55d3f2baf1de8eb30ba529fb9818))
+* **tests:** switch from prism to steady ([23d591c](https://github.com/llamastack/llama-stack-client-python/commit/23d591c70549c7f00b7be136a19893dbdd65f43c))
+
 ## 0.6.1-alpha.1 (2026-03-13)

 Full Changelog: [v0.5.0-alpha.2...v0.6.1-alpha.1](https://github.com/llamastack/llama-stack-client-python/compare/v0.5.0-alpha.2...v0.6.1-alpha.1)
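The changelog entry "implement indices array format for query and form serialization" refers to encoding list-valued parameters as `key[0]=...&key[1]=...` instead of repeated or comma-joined keys. A minimal sketch of what that format produces (`encode_indices` is a hypothetical helper for illustration, not the SDK's actual serializer):

```python
from urllib.parse import urlencode

def encode_indices(params: dict) -> str:
    """Serialize a flat params dict, expanding lists into the
    'indices' array format, e.g. ids[0]=a&ids[1]=b."""
    pairs = []
    for key, value in params.items():
        if isinstance(value, list):
            for i, item in enumerate(value):
                pairs.append((f"{key}[{i}]", item))
        else:
            pairs.append((key, value))
    # urlencode percent-encodes the brackets ([ -> %5B, ] -> %5D)
    return urlencode(pairs)
```

For example, `encode_indices({"ids": ["a", "b"], "limit": 10})` yields `ids%5B0%5D=a&ids%5B1%5D=b&limit=10`.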
2 changes: 1 addition & 1 deletion CONTRIBUTING.md
@@ -85,7 +85,7 @@ $ pip install ./path-to-wheel-file.whl

 ## Running tests

-Most tests require you to [set up a mock server](https://github.com/stoplightio/prism) against the OpenAPI spec to run the tests.
+Most tests require you to [set up a mock server](https://github.com/dgellow/steady) against the OpenAPI spec to run the tests.

 ```sh
 $ ./scripts/mock
9 changes: 5 additions & 4 deletions README.md
@@ -253,11 +253,12 @@ from llama_stack_client import LlamaStackClient

 client = LlamaStackClient()

-client.toolgroups.register(
-    provider_id="provider_id",
-    toolgroup_id="toolgroup_id",
-    mcp_endpoint={"uri": "uri"},
+response_object = client.responses.create(
+    input="string",
+    model="model",
+    prompt={"id": "id"},
 )
+print(response_object.prompt)
 ```

 ## File uploads
88 changes: 8 additions & 80 deletions api.md
@@ -18,47 +18,6 @@ from llama_stack_client.types import (
 )
 ```

-# Toolgroups
-
-Types:
-
-```python
-from llama_stack_client.types import ListToolGroupsResponse, ToolGroup, ToolgroupListResponse
-```
-
-Methods:
-
-- <code title="get /v1/toolgroups">client.toolgroups.<a href="./src/llama_stack_client/resources/toolgroups.py">list</a>() -> <a href="./src/llama_stack_client/types/toolgroup_list_response.py">ToolgroupListResponse</a></code>
-- <code title="get /v1/toolgroups/{toolgroup_id}">client.toolgroups.<a href="./src/llama_stack_client/resources/toolgroups.py">get</a>(toolgroup_id) -> <a href="./src/llama_stack_client/types/tool_group.py">ToolGroup</a></code>
-- <code title="post /v1/toolgroups">client.toolgroups.<a href="./src/llama_stack_client/resources/toolgroups.py">register</a>(\*\*<a href="src/llama_stack_client/types/toolgroup_register_params.py">params</a>) -> None</code>
-- <code title="delete /v1/toolgroups/{toolgroup_id}">client.toolgroups.<a href="./src/llama_stack_client/resources/toolgroups.py">unregister</a>(toolgroup_id) -> None</code>
-
-# Tools
-
-Types:
-
-```python
-from llama_stack_client.types import ToolListResponse
-```
-
-Methods:
-
-- <code title="get /v1/tools">client.tools.<a href="./src/llama_stack_client/resources/tools.py">list</a>(\*\*<a href="src/llama_stack_client/types/tool_list_params.py">params</a>) -> <a href="./src/llama_stack_client/types/tool_list_response.py">ToolListResponse</a></code>
-- <code title="get /v1/tools/{tool_name}">client.tools.<a href="./src/llama_stack_client/resources/tools.py">get</a>(tool_name) -> <a href="./src/llama_stack_client/types/tool_def.py">ToolDef</a></code>
-
-# ToolRuntime
-
-Types:
-
-```python
-from llama_stack_client.types import ToolDef, ToolInvocationResult, ToolRuntimeListToolsResponse
-```
-
-Methods:
-
-- <code title="post /v1/tool-runtime/invoke">client.tool_runtime.<a href="./src/llama_stack_client/resources/tool_runtime.py">invoke_tool</a>(\*\*<a href="src/llama_stack_client/types/tool_runtime_invoke_tool_params.py">params</a>) -> <a href="./src/llama_stack_client/types/tool_invocation_result.py">ToolInvocationResult</a></code>
-- <code title="get /v1/tool-runtime/list-tools">client.tool_runtime.<a href="./src/llama_stack_client/resources/tool_runtime.py">list_tools</a>(\*\*<a href="src/llama_stack_client/types/tool_runtime_list_tools_params.py">params</a>) -> <a href="./src/llama_stack_client/types/tool_runtime_list_tools_response.py">ToolRuntimeListToolsResponse</a></code>
-
 # Responses

 Types:
@@ -409,7 +368,12 @@ Methods:
 Types:

 ```python
-from llama_stack_client.types import DeleteFileResponse, File, ListFilesResponse
+from llama_stack_client.types import (
+    DeleteFileResponse,
+    File,
+    ListFilesResponse,
+    FileContentResponse,
+)
 ```

 Methods:
@@ -418,7 +382,7 @@ Methods:
 - <code title="get /v1/files/{file_id}">client.files.<a href="./src/llama_stack_client/resources/files.py">retrieve</a>(file_id) -> <a href="./src/llama_stack_client/types/file.py">File</a></code>
 - <code title="get /v1/files">client.files.<a href="./src/llama_stack_client/resources/files.py">list</a>(\*\*<a href="src/llama_stack_client/types/file_list_params.py">params</a>) -> <a href="./src/llama_stack_client/types/file.py">SyncOpenAICursorPage[File]</a></code>
 - <code title="delete /v1/files/{file_id}">client.files.<a href="./src/llama_stack_client/resources/files.py">delete</a>(file_id) -> <a href="./src/llama_stack_client/types/delete_file_response.py">DeleteFileResponse</a></code>
-- <code title="get /v1/files/{file_id}/content">client.files.<a href="./src/llama_stack_client/resources/files.py">content</a>(file_id) -> object</code>
+- <code title="get /v1/files/{file_id}/content">client.files.<a href="./src/llama_stack_client/resources/files.py">content</a>(file_id) -> str</code>

 # Batches

@@ -442,42 +406,6 @@ Methods:

 # Alpha

-## PostTraining
-
-Types:
-
-```python
-from llama_stack_client.types.alpha import (
-    AlgorithmConfig,
-    ListPostTrainingJobsResponse,
-    PostTrainingJob,
-)
-```
-
-Methods:
-
-- <code title="post /v1alpha/post-training/preference-optimize">client.alpha.post_training.<a href="./src/llama_stack_client/resources/alpha/post_training/post_training.py">preference_optimize</a>(\*\*<a href="src/llama_stack_client/types/alpha/post_training_preference_optimize_params.py">params</a>) -> <a href="./src/llama_stack_client/types/alpha/post_training_job.py">PostTrainingJob</a></code>
-- <code title="post /v1alpha/post-training/supervised-fine-tune">client.alpha.post_training.<a href="./src/llama_stack_client/resources/alpha/post_training/post_training.py">supervised_fine_tune</a>(\*\*<a href="src/llama_stack_client/types/alpha/post_training_supervised_fine_tune_params.py">params</a>) -> <a href="./src/llama_stack_client/types/alpha/post_training_job.py">PostTrainingJob</a></code>
-
-### Job
-
-Types:
-
-```python
-from llama_stack_client.types.alpha.post_training import (
-    JobListResponse,
-    JobArtifactsResponse,
-    JobStatusResponse,
-)
-```
-
-Methods:
-
-- <code title="get /v1alpha/post-training/jobs">client.alpha.post_training.job.<a href="./src/llama_stack_client/resources/alpha/post_training/job.py">list</a>() -> <a href="./src/llama_stack_client/types/alpha/post_training/job_list_response.py">JobListResponse</a></code>
-- <code title="get /v1alpha/post-training/jobs/{job_uuid}/artifacts">client.alpha.post_training.job.<a href="./src/llama_stack_client/resources/alpha/post_training/job.py">artifacts</a>(job_uuid) -> <a href="./src/llama_stack_client/types/alpha/post_training/job_artifacts_response.py">JobArtifactsResponse</a></code>
-- <code title="post /v1alpha/post-training/jobs/{job_uuid}/cancel">client.alpha.post_training.job.<a href="./src/llama_stack_client/resources/alpha/post_training/job.py">cancel</a>(job_uuid) -> None</code>
-- <code title="get /v1alpha/post-training/jobs/{job_uuid}/status">client.alpha.post_training.job.<a href="./src/llama_stack_client/resources/alpha/post_training/job.py">status</a>(job_uuid) -> <a href="./src/llama_stack_client/types/alpha/post_training/job_status_response.py">JobStatusResponse</a></code>
-
 ## Benchmarks

 Types:
@@ -558,7 +486,7 @@ Methods:

 - <code title="get /v1beta/datasets/{dataset_id}">client.beta.datasets.<a href="./src/llama_stack_client/resources/beta/datasets.py">retrieve</a>(dataset_id) -> <a href="./src/llama_stack_client/types/beta/dataset_retrieve_response.py">DatasetRetrieveResponse</a></code>
 - <code title="get /v1beta/datasets">client.beta.datasets.<a href="./src/llama_stack_client/resources/beta/datasets.py">list</a>() -> <a href="./src/llama_stack_client/types/beta/dataset_list_response.py">DatasetListResponse</a></code>
-- <code title="post /v1beta/datasetio/append-rows/{dataset_id}">client.beta.datasets.<a href="./src/llama_stack_client/resources/beta/datasets.py">appendrows</a>(path_dataset_id, \*\*<a href="src/llama_stack_client/types/beta/dataset_appendrows_params.py">params</a>) -> None</code>
+- <code title="post /v1beta/datasetio/append-rows/{dataset_id}">client.beta.datasets.<a href="./src/llama_stack_client/resources/beta/datasets.py">appendrows</a>(dataset_id, \*\*<a href="src/llama_stack_client/types/beta/dataset_appendrows_params.py">params</a>) -> None</code>
 - <code title="get /v1beta/datasetio/iterrows/{dataset_id}">client.beta.datasets.<a href="./src/llama_stack_client/resources/beta/datasets.py">iterrows</a>(dataset_id, \*\*<a href="src/llama_stack_client/types/beta/dataset_iterrows_params.py">params</a>) -> <a href="./src/llama_stack_client/types/beta/dataset_iterrows_response.py">DatasetIterrowsResponse</a></code>
 - <code title="post /v1beta/datasets">client.beta.datasets.<a href="./src/llama_stack_client/resources/beta/datasets.py">register</a>(\*\*<a href="src/llama_stack_client/types/beta/dataset_register_params.py">params</a>) -> <a href="./src/llama_stack_client/types/beta/dataset_register_response.py">DatasetRegisterResponse</a></code>
 - <code title="delete /v1beta/datasets/{dataset_id}">client.beta.datasets.<a href="./src/llama_stack_client/resources/beta/datasets.py">unregister</a>(dataset_id) -> None</code>
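The "sanitize endpoint path params" fix in this release concerns substituting values such as `file_id` or `dataset_id` into URL templates like the ones in api.md. A hedged sketch of the general technique, where each value is percent-encoded so it cannot inject extra path segments (`build_path` is an illustrative helper, not the SDK's internal API):

```python
from urllib.parse import quote

def build_path(template: str, **params: str) -> str:
    """Fill a path template, percent-encoding each parameter so
    characters like '/' or '?' cannot alter the route."""
    # safe="" forces '/' to be encoded as %2F inside a segment
    return template.format(**{k: quote(v, safe="") for k, v in params.items()})
```

For instance, a malicious `file_id` of `abc/../xyz` becomes `/v1/files/abc%2F..%2Fxyz` rather than escaping to a different path.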
4 changes: 2 additions & 2 deletions pyproject.toml
@@ -1,6 +1,6 @@
 [project]
 name = "llama_stack_client"
-version = "0.6.1-alpha.1"
+version = "0.7.0-alpha.1"
 description = "The official Python library for the llama-stack-client API"
 dynamic = ["readme"]
 license = "MIT"
@@ -9,7 +9,7 @@ authors = [{ name = "Meta Llama", email = "llama-oss@meta.com" }]
 dependencies = [
   "httpx>=0.23.0, <1",
   "pydantic>=1.9.0, <3",
-  "typing-extensions>=4.7, <5",
+  "typing-extensions>=4.14, <5",
   "anyio>=3.5.0, <5",
   "distro>=1.7.0, <2",
   "sniffio",
25 changes: 15 additions & 10 deletions requirements-dev.lock
@@ -7,15 +7,15 @@ anyio==4.12.1
     # via
     #   httpx
     #   llama-stack-client
-black==26.1.0
+black==26.3.1
 certifi==2026.1.4
     # via
     #   httpcore
     #   httpx
     #   requests
 cfgv==3.5.0
     # via pre-commit
-charset-normalizer==3.4.4
+charset-normalizer==3.4.6
     # via requests
 click==8.3.1
     # via
@@ -33,8 +33,10 @@ distro==1.9.0
     # via llama-stack-client
 execnet==2.1.2
     # via pytest-xdist
-filelock==3.20.3
-    # via virtualenv
+filelock==3.25.2
+    # via
+    #   python-discovery
+    #   virtualenv
 fire==0.7.1
     # via llama-stack-client
 h11==0.16.0
@@ -45,7 +47,7 @@ httpx==0.28.1
     # via
     #   llama-stack-client
     #   respx
-identify==2.6.16
+identify==2.6.18
     # via pre-commit
 idna==3.11
     # via
@@ -68,21 +70,22 @@ nodeenv==1.10.0
     # via
     #   pre-commit
     #   pyright
-numpy==2.4.2
+numpy==2.4.3
     # via pandas
 packaging==25.0
     # via
     #   black
     #   pytest
-pandas==3.0.0
+pandas==3.0.1
     # via llama-stack-client
 pathspec==1.0.3
     # via
     #   black
     #   mypy
-platformdirs==4.5.1
+platformdirs==4.9.4
     # via
     #   black
+    #   python-discovery
     #   virtualenv
 pluggy==1.6.0
     # via pytest
@@ -108,13 +111,15 @@ pytest-asyncio==1.3.0
 pytest-xdist==3.8.0
 python-dateutil==2.9.0.post0
     # via pandas
+python-discovery==1.2.1
+    # via virtualenv
 pytokens==0.4.1
     # via black
 pyyaml==6.0.3
     # via
     #   pre-commit
     #   pyaml
-requests==2.32.5
+requests==2.33.0
     # via llama-stack-client
 respx==0.22.0
 rich==14.2.0
@@ -147,7 +152,7 @@ tzdata==2025.3 ; sys_platform == 'emscripten' or sys_platform == 'win32'
     # via pandas
 urllib3==2.6.3
     # via requests
-virtualenv==20.36.1
+virtualenv==21.2.0
     # via pre-commit
 wcwidth==0.6.0
     # via prompt-toolkit
4 changes: 2 additions & 2 deletions scripts/mock
@@ -26,11 +26,11 @@ echo "==> Modifying SSE schemas for the mock server"
 yq -i '(.. | select(has("text/event-stream")).["text/event-stream"].schema) = {"type": "string"}' "$SPEC_PATH"
 echo "==> Starting mock server with file ${SPEC_PATH}"

-# Run prism mock on the given spec
+# Run steady mock on the given spec
 if [ "$1" == "--daemon" ]; then
   npm exec --package=@mockoon/cli@9.3.0 -- mockoon-cli start --data "$SPEC_PATH" --port 4010 &>.mockoon.log &

-  # Wait for server to come online (max 30s)
+  # Wait for server to come online via health endpoint (max 30s)
   echo -n "Waiting for server"
   while ! grep -q "Error: \|Server started on port 4010" ".mockoon.log"; do
     echo -n "."
6 changes: 3 additions & 3 deletions scripts/test
@@ -45,14 +45,14 @@ elif ! prism_is_running; then
   echo -e "running against your OpenAPI spec."
   echo
   echo -e "To run the server, pass in the path or url of your OpenAPI"
-  echo -e "spec to the prism command:"
+  echo -e "spec to the steady command:"
   echo
-  echo -e " \$ ${YELLOW}npm exec --package=@stainless-api/prism-cli@5.15.0 -- prism mock path/to/your.openapi.yml${NC}"
+  echo -e " \$ ${YELLOW}npm exec --package=@stdy/cli@0.19.7 -- steady path/to/your.openapi.yml --host 127.0.0.1 -p 4010 --validator-query-array-format=comma --validator-form-array-format=comma --validator-query-object-format=brackets --validator-form-object-format=brackets${NC}"
   echo

   exit 1
 else
-  echo -e "${GREEN}✔ Mock prism server is running with your OpenAPI spec${NC}"
+  echo -e "${GREEN}✔ Mock steady server is running with your OpenAPI spec${NC}"
   echo
 fi