Ramalama-Stack fails to start, as the logs below show. The issue title is the root cause (suggested by ChatGPT after reading these logs). Command used:
$ podman run \
--net=host \
--env RAMALAMA_URL=http://0.0.0.0:21401 \
--env INFERENCE_MODEL=lmstudio-community/Qwen3.5-122B-A10B-GGUF \
quay.io/ramalama/llama-stack
Trying to pull quay.io/ramalama/llama-stack:latest...
Getting image source signatures
Copying blob 27d9b0b3d7c8 done |
Copying blob 2056340a78af skipped: already exists
Copying config ed4693c72c done |
Writing manifest to image destination
/.venv/lib64/python3.14/site-packages/requests/__init__.py:113: RequestsDependencyWarning: urllib3 (2.6.3) or chardet (7.2.0)/charset_normalizer (3.4.6) doesn't match a supported version!
warnings.warn(
/.venv/lib64/python3.14/site-packages/pydantic/_internal/_generate_schema.py:2249: UnsupportedFieldAttributeWarning: The 'default' attribute with value 'sqlite' was provided to the `Field()` function, which has no effect in the context it was used. 'default' is field-specific metadata, and can only be attached to a model field using `Annotated` metadata or by assignment. This may have happened because an `Annotated` type alias using the `type` statement was used, or if the `Field()` function was attached to a single member of a union type.
warnings.warn(
INFO 2026-03-29 14:53:05,184 llama_stack.cli.stack.run:126 server: Using run configuration: /etc/ramalama/ramalama-run.yaml
Using virtual environment: /.venv
Virtual environment already activated
+ '[' -n /etc/ramalama/ramalama-run.yaml ']'
+ yaml_config_arg='--config /etc/ramalama/ramalama-run.yaml'
+ python -m llama_stack.distribution.server.server --config /etc/ramalama/ramalama-run.yaml --port 8321
/.venv/lib64/python3.14/site-packages/requests/__init__.py:113: RequestsDependencyWarning: urllib3 (2.6.3) or chardet (7.2.0)/charset_normalizer (3.4.6) doesn't match a supported version!
warnings.warn(
/.venv/lib64/python3.14/site-packages/pydantic/_internal/_generate_schema.py:2249: UnsupportedFieldAttributeWarning: The 'default' attribute with value 'sqlite' was provided to the `Field()` function, which has no effect in the context it was used. 'default' is field-specific metadata, and can only be attached to a model field using `Annotated` metadata or by assignment. This may have happened because an `Annotated` type alias using the `type` statement was used, or if the `Field()` function was attached to a single member of a union type.
warnings.warn(
INFO 2026-03-29 14:53:06,284 __main__:441 server: Using config file: /etc/ramalama/ramalama-run.yaml
INFO 2026-03-29 14:53:06,285 __main__:443 server: Run configuration:
INFO 2026-03-29 14:53:06,288 __main__:445 server: apis:
- agents
- datasetio
- eval
- inference
- post_training
- safety
- scoring
- telemetry
- tool_runtime
- vector_io
benchmarks: []
container_image: null
datasets: []
external_providers_dir: !!python/object/apply:pathlib.PosixPath
- /root/.llama/providers.d
image_name: ramalama
inference_store:
db_path: /root/.llama/distributions/ramalama/inference_store.db
type: sqlite
logging: null
metadata_store:
db_path: /root/.llama/distributions/ramalama/registry.db
namespace: null
type: sqlite
models:
- metadata: {}
model_id: lmstudio-community/Qwen3.5-122B-A10B-GGUF
model_type: !!python/object/apply:llama_stack.apis.models.models.ModelType
- llm
provider_id: ramalama
provider_model_id: null
- metadata:
embedding_dimension: 384
model_id: all-MiniLM-L6-v2
model_type: !!python/object/apply:llama_stack.apis.models.models.ModelType
- embedding
provider_id: sentence-transformers
provider_model_id: null
providers:
agents:
- config:
persistence_store:
db_path: /root/.llama/distributions/ramalama/agents_store.db
namespace: null
type: sqlite
responses_store:
db_path: /root/.llama/distributions/ramalama/responses_store.db
type: sqlite
provider_id: meta-reference
provider_type: inline::meta-reference
datasetio:
- config:
kvstore:
db_path: /root/.llama/distributions/ramalama/huggingface_datasetio.db
namespace: null
type: sqlite
provider_id: huggingface
provider_type: remote::huggingface
- config:
kvstore:
db_path: /root/.llama/distributions/ramalama/localfs_datasetio.db
namespace: null
type: sqlite
provider_id: localfs
provider_type: inline::localfs
eval:
- config:
kvstore:
db_path: /root/.llama/distributions/ramalama/meta_reference_eval.db
namespace: null
type: sqlite
provider_id: meta-reference
provider_type: inline::meta-reference
inference:
- config:
url: http://0.0.0.0:21401
provider_id: ramalama
provider_type: remote::ramalama
- config: {}
provider_id: sentence-transformers
provider_type: inline::sentence-transformers
post_training:
- config:
checkpoint_format: huggingface
device: cpu
distributed_backend: null
provider_id: huggingface
provider_type: inline::huggingface
safety:
- config:
excluded_categories: []
provider_id: llama-guard
provider_type: inline::llama-guard
scoring:
- config: {}
provider_id: basic
provider_type: inline::basic
- config: {}
provider_id: llm-as-judge
provider_type: inline::llm-as-judge
- config:
openai_api_key: '********'
provider_id: braintrust
provider_type: inline::braintrust
telemetry:
- config:
service_name: llamastack
sinks: console,sqlite
sqlite_db_path: /root/.llama/distributions/ramalama/trace_store.db
provider_id: meta-reference
provider_type: inline::meta-reference
tool_runtime:
- config:
api_key: '********'
max_results: 3
provider_id: brave-search
provider_type: remote::brave-search
- config:
api_key: '********'
max_results: 3
provider_id: tavily-search
provider_type: remote::tavily-search
- config: {}
provider_id: rag-runtime
provider_type: inline::rag-runtime
- config: {}
provider_id: model-context-protocol
provider_type: remote::model-context-protocol
- config:
api_key: '********'
provider_id: wolfram-alpha
provider_type: remote::wolfram-alpha
vector_io:
- config:
db_path: /root/.llama/distributions/ramalama/milvus.db
kvstore:
db_path: /root/.llama/distributions/ramalama/milvus_registry.db
namespace: null
type: sqlite
provider_id: milvus
provider_type: inline::milvus
scoring_fns: []
server:
auth: null
host: null
port: 8321
quota: null
tls_cafile: null
tls_certfile: null
tls_keyfile: null
shields: []
tool_groups:
- args: null
mcp_endpoint: null
provider_id: tavily-search
toolgroup_id: builtin::websearch
- args: null
mcp_endpoint: null
provider_id: rag-runtime
toolgroup_id: builtin::rag
- args: null
mcp_endpoint: null
provider_id: wolfram-alpha
toolgroup_id: builtin::wolfram_alpha
vector_dbs: []
version: 2
INFO 2026-03-29 14:53:06,306 llama_stack.distribution.distribution:151 core: Loading external providers from /root/.llama/providers.d
INFO 2026-03-29 14:53:06,306 llama_stack.distribution.distribution:166 core: Loading remote provider spec from
/root/.llama/providers.d/remote/inference/ramalama.yaml
INFO 2026-03-29 14:53:06,307 llama_stack.distribution.distribution:179 core: Loaded remote provider spec for remote::ramalama from
/root/.llama/providers.d/remote/inference/ramalama.yaml
INFO 2026-03-29 14:53:06,308 llama_stack.distribution.distribution:183 core: Successfully loaded external provider remote::ramalama
INFO 2026-03-29 14:53:06,368 ramalama_stack.ramalama_adapter:65 inference: checking connectivity to Ramalama at `http://0.0.0.0:21401`...
INFO 2026-03-29 14:53:06,421 ramalama_stack.ramalama_adapter:67 inference: successfully connected to Ramalama at `http://0.0.0.0:21401`...
Traceback (most recent call last):
File "/.venv/lib64/python3.14/site-packages/pymilvus/client/connection_manager.py", line 102, in from_uri
from milvus_lite.server_manager import ( # noqa: PLC0415
server_manager_instance,
)
File "/.venv/lib64/python3.14/site-packages/milvus_lite/__init__.py", line 15, in <module>
from pkg_resources import DistributionNotFound, get_distribution
ModuleNotFoundError: No module named 'pkg_resources'
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "<frozen runpy>", line 198, in _run_module_as_main
File "<frozen runpy>", line 88, in _run_code
File "/.venv/lib64/python3.14/site-packages/llama_stack/distribution/server/server.py", line 601, in <module>
main()
~~~~^^
File "/.venv/lib64/python3.14/site-packages/llama_stack/distribution/server/server.py", line 491, in main
impls = asyncio.run(construct_stack(config))
File "/usr/lib64/python3.14/asyncio/runners.py", line 204, in run
return runner.run(main)
~~~~~~~~~~^^^^^^
File "/usr/lib64/python3.14/asyncio/runners.py", line 127, in run
return self._loop.run_until_complete(task)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^
File "/usr/lib64/python3.14/asyncio/base_events.py", line 719, in run_until_complete
return future.result()
~~~~~~~~~~~~~^^
File "/.venv/lib64/python3.14/site-packages/llama_stack/distribution/stack.py", line 283, in construct_stack
impls = await resolve_impls(
^^^^^^^^^^^^^^^^^^^^
run_config, provider_registry or get_provider_registry(run_config), dist_registry, policy
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
)
^
File "/.venv/lib64/python3.14/site-packages/llama_stack/distribution/resolver.py", line 145, in resolve_impls
return await instantiate_providers(sorted_providers, router_apis, dist_registry, run_config, policy)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/.venv/lib64/python3.14/site-packages/llama_stack/distribution/resolver.py", line 271, in instantiate_providers
impl = await instantiate_provider(provider, deps, inner_impls, dist_registry, run_config, policy)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/.venv/lib64/python3.14/site-packages/llama_stack/distribution/resolver.py", line 358, in instantiate_provider
impl = await fn(*args)
^^^^^^^^^^^^^^^
File "/.venv/lib64/python3.14/site-packages/llama_stack/providers/inline/vector_io/milvus/__init__.py", line 18, in get_provider_impl
await impl.initialize()
File "/.venv/lib64/python3.14/site-packages/llama_stack/providers/utils/telemetry/trace_protocol.py", line 103, in async_wrapper
result = await method(self, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/.venv/lib64/python3.14/site-packages/llama_stack/providers/remote/vector_io/milvus/milvus.py", line 175, in initialize
self.client = MilvusClient(uri=uri)
~~~~~~~~~~~~^^^^^^^^^
File "/.venv/lib64/python3.14/site-packages/pymilvus/milvus_client/milvus_client.py", line 81, in __init__
self._config = ConnectionConfig.from_uri(
~~~~~~~~~~~~~~~~~~~~~~~~~^
uri,
^^^^
...<2 lines>...
**kwargs,
^^^^^^^^^
)
^
File "/.venv/lib64/python3.14/site-packages/pymilvus/client/connection_manager.py", line 106, in from_uri
raise ConnectionConfigException(
...<2 lines>...
) from e
pymilvus.exceptions.ConnectionConfigException: <ConnectionConfigException: (code=1, message=milvus-lite is required for local database connections. Please install it with: pip install pymilvus[milvus_lite])>
++ error_handler 125
++ echo 'Error occurred in script at line: 125'
Error occurred in script at line: 125
++ exit 1
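The pymilvus error message suggests installing `pymilvus[milvus_lite]`, but the underlying traceback shows milvus-lite is already present and fails while importing `pkg_resources`, which only exists when setuptools is installed. The sketch below illustrates that distinction; it is an illustration of the suspected cause, not code from llama-stack, and the `has_pkg_resources` helper is hypothetical:

```python
# Hypothetical check for the suspected root cause: milvus_lite/__init__.py does
# `from pkg_resources import ...` at import time, and pkg_resources is provided
# by setuptools, which newer Python environments (the image uses Python 3.14)
# may not ship by default.
import importlib.util


def has_pkg_resources() -> bool:
    """Return True if pkg_resources (i.e. setuptools) is importable."""
    return importlib.util.find_spec("pkg_resources") is not None


if __name__ == "__main__":
    if has_pkg_resources():
        print("pkg_resources available; the milvus_lite import should succeed")
    else:
        print("pkg_resources missing; installing setuptools should restore it")
```

If this diagnosis is right, adding setuptools to the image (e.g. `pip install setuptools` in the container build) would likely avoid the crash; reinstalling `pymilvus[milvus_lite]` alone would not, since milvus-lite itself imports fine only once `pkg_resources` exists.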