
feat: add registry role for disconnected deployment#866

Open
fabiendupont wants to merge 1 commit into seapath:main from fabiendupont:feat/add-registry-role

Conversation


@fabiendupont fabiendupont commented Feb 18, 2026

The current disconnected setup embeds container images at OS build time (e.g. via build_debian_iso), which works well for initial deployment. However, day-2 operations — upgrading Ceph, rolling out new container images, or adding services — require either repackaging the ISO or manually transferring images to each node. A local registry provides a persistent, updatable image source that's independent of the installation media, and aligns with Ceph's recommended approach for isolated environments.

This commit introduces a registry role that deploys docker.io/registry:v2 and allows importing images from the internet (pull) or from an exported tarball (load). The seapath_setup_disconnected.yaml playbook installs the registry on the Ansible control node as a singleton.
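As an illustration of the two import modes, here is a hedged sketch with common container tooling. The hostname, image reference, and tarball name are hypothetical examples, not the role's actual defaults:

```shell
REGISTRY="registry.local"       # hypothetical registry hostname (port 443, so no port suffix needed)
IMAGE="quay.io/ceph/ceph:v18"   # example upstream image

# "pull" mode: copy from the internet into the local registry (needs connectivity):
#   skopeo copy docker://"$IMAGE" docker://"$REGISTRY/${IMAGE#quay.io/}"
# "load" mode: import a tarball that was exported on a connected machine:
#   podman load -i ceph-v18.tar
#   podman push "$REGISTRY/${IMAGE#quay.io/}"

# Because the registry acts as a mirror on 443, nodes keep using the original
# image name; the mirrored reference the registry serves would be:
MIRRORED="$REGISTRY/${IMAGE#quay.io/}"
echo "$MIRRORED"
```

The point of listening on 443 is visible in the last line: the mirrored reference carries no explicit port, so image names in manifests and playbooks stay unchanged.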

TLS is enabled by default: the registry auto-generates a self-signed CA and server certificate when no user-provided certs are given. The CA is distributed to all cluster nodes so they trust the registry over HTTPS. The registry listens on port 443 to avoid specifying the port in image names.
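The auto-generated PKI described above can be sketched with openssl. File names, key size, and CN values here are illustrative assumptions, not necessarily what the role uses:

```shell
# 1. Create a self-signed CA (key + certificate)
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -subj "/CN=seapath-registry-ca" \
  -keyout ca.key -out ca.crt

# 2. Create the registry server key and a certificate signing request
openssl req -newkey rsa:2048 -nodes \
  -subj "/CN=registry.local" \
  -keyout registry.key -out registry.csr

# 3. Sign the server certificate with the CA; ca.crt is what would be
#    distributed to cluster nodes so they trust the registry over HTTPS
openssl x509 -req -in registry.csr -CA ca.crt -CAkey ca.key \
  -CAcreateserial -days 365 -out registry.crt
```

A production setup would also add a subjectAltName for the registry host, since strict TLS clients reject certificates that only carry a CN.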

The *_physical_machine roles are updated to use that registry as a mirror, which doesn't require changing image names, for both Docker and Podman. They install the registry CA certificate in certs.d and set insecure = false when TLS is enabled.
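For Podman, the mirror setup described above would look roughly like the following containers-registries.conf(5) drop-in (path and hostname are hypothetical; Docker would use the equivalent "registry-mirrors" entry in daemon.json):

```toml
# /etc/containers/registries.conf.d/seapath-mirror.conf (hypothetical path)
# Pulls still reference the original name (e.g. docker.io/...), but Podman
# tries the local mirror first.
[[registry]]
prefix = "docker.io"
location = "docker.io"

[[registry.mirror]]
location = "registry.local"   # the SEAPATH registry, listening on 443
insecure = false              # TLS on; CA installed under /etc/containers/certs.d/
```

This is what keeps image names unchanged: the mirror is resolved by the container engine, not encoded in the image reference.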

The cephadm role is updated to remove image management, which is now handled by the registry role, so cephadm is focused on Ceph cluster management.

Contributes to #442


insatomcat commented Feb 21, 2026

Thanks for the PR, this is an interesting and well-structured proposal 👍

A few points I’d like to clarify and discuss.


1️⃣ Fully disconnected is already possible in the current setup

In the current implementation, it is possible to be fully disconnected, provided that images are made available at OS installation time.

For example, with build_debian_iso on Debian:

  • When the ISO is built (with internet access), required container images are loaded into the ISO.
  • During installation (without internet), those images are deployed locally.
  • No external pull is required afterward at the OS level.

I assume a similar approach is feasible for:

  • Red Hat Enterprise Linux–like distributions (at ISO/image build stage),
  • or Yocto-based images (embedding container images at image generation time).

So strictly speaking, the setup is not inherently “internet-dependent” if the images are preloaded properly.


2️⃣ The real issue: cephadm’s pull behavior

The actual difficulty is not the base OS installation, but the behavior of cephadm.

Even if images are already present locally:

  • The bootstrap command allows skipping certain pulls.
  • However, later lifecycle events (deploying osd, mon, mgr, etc.) still trigger a podman pull check from the cephadm mgr.

see https://marc.info/?l=ceph-users&m=164399318917018

To be truly disconnected, we therefore need:

  • Either a local registry on each node (current setup),
  • Or a central registry (as proposed in the PR).

Before deciding on registry topology, I would really like to confirm something:

Is there absolutely no way to completely skip the podman pull check that cephadm performs when deploying components?

If such an option exists (or could exist), we could:

  • Preload all images at OS installation time (as done with build_debian_iso),
  • Avoid any registry entirely,
  • And remain fully disconnected without additional infrastructure.

Right now, the registry requirement seems to stem from cephadm enforcing the pull validation step.

If you have more information on whether this behavior is configurable or patchable, that would be very helpful.


3️⃣ Registry location: node-local vs controller-based

Regarding the architectural choice:

  • Current approach: registry on each node.
  • PR proposal: single registry on the Ansible controller.

Both are technically valid trade-offs:

  • Node-local registry → more autonomous nodes, no central dependency.
  • Controller-based registry → simpler, more resource-efficient, centralized management.

From my perspective, either:

  • The PR supports both models and lets the user choose,
  • Or we align on a community-level decision about the preferred architecture.

But I think we should make that decision explicitly rather than implicitly switching models.


Summary

  • Fully disconnected installs are already achievable if images are embedded at OS build time.
  • The real blocker is cephadm’s pull behavior.
  • If we could completely disable pull checks, we might not need a registry at all.
  • Otherwise, we need to consciously decide between distributed vs centralized registry architecture (or support both).

Looking forward to your feedback, especially regarding cephadm’s pull enforcement.

In the current implementation, every node installs a registry locally
and pulls/pushes the cephadm image. However, this is neither truly
disconnected, since pulling requires internet access, nor
resource-efficient, since a single registry is enough.

This commit introduces a registry role that deploys
docker.io/registry:v2 and allows importing images from the internet (pull)
or from an exported tarball (load). The
seapath_setup_disconnected.yaml playbook installs the registry on the
Ansible control node as a singleton.

TLS is enabled by default: the registry auto-generates a self-signed CA
and server certificate when no user-provided certs are given. The CA is
distributed to all cluster nodes so they trust the registry over HTTPS.
The registry listens on port 443 to avoid specifying the port in image
names.

The *_physical_machine roles are updated to use that registry as a
mirror, which doesn't require changing image names, for both Docker
and Podman. They install the registry CA certificate in certs.d and set
insecure = false when TLS is enabled.

The cephadm role is updated to remove image management, which is now
handled by the registry role, so cephadm is focused on Ceph cluster
management.

Contributes to seapath#442

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Signed-off-by: Fabien Dupont <fdupont@redhat.com>
@fabiendupont
Author

Thanks for the detailed review and the questions.

On point 1 — Fully disconnected is already possible

You're right, and I should have been clearer about the motivation. The initial deployment is already covered by embedding images at OS build time (e.g. build_debian_iso). This PR offers an alternative approach and addresses day-2 operations: upgrading Ceph, rolling out new container images, or adding services currently requires either repackaging the ISO or manually transferring images to each node. A registry provides a persistent, updatable image source that's independent of the installation media.

I've updated the commit message and PR description to reflect this.

On point 2 — Cephadm's pull behavior

Good question. From what I could find, cephadm bootstrap does have a --skip-pull flag, but it only covers the bootstrap step itself — the mgr module may still attempt pulls during subsequent daemon operations. There's also mgr/cephadm/use_repo_digest (see ceph/ceph#50311) which can reduce pull attempts when images are already local.

That said, Ceph's own documentation for isolated environments points toward using a local registry as the supported path. A preload-only approach may work in practice, but a registry remains the predominant image distribution mechanism in the container ecosystem.

With this PR, we add an alternative and follow Ceph's documentation for disconnected environments.

On point 3 — Registry topology

Supporting both models makes sense. The registry role as written is already fairly decoupled — it deploys a registry wherever you point it. Making it work as either a centralized controller-based registry or a per-node local registry would mainly be a matter of inventory configuration and playbook targeting.

One argument for a centralized registry is that it doesn't become a noisy neighbor on cluster nodes, which already need to carve out resources for Ceph itself, Pacemaker, etc., reducing the resources available for vIEDs.

@fabiendupont fabiendupont force-pushed the feat/add-registry-role branch from 5799123 to 043937e on February 24, 2026 08:21
@insatomcat
Member

Thanks for the clarification and for updating the commit message — I agree that the day-2 operations aspect (Ceph upgrades, new images, additional services) is a valid motivation for introducing a registry.

That said, my concern is not only about the description, but about the scope and positioning of the PR.

With the current implementation, we are already able to support a fully disconnected deployment by embedding container images at OS build time (e.g. via build_debian_iso). The registry is therefore not a prerequisite for “disconnected deployment”, but rather an additional mechanism that improves operational flexibility for day-2.

In this PR, we are not just adding the option of running a registry on the Ansible control node — we are also:

  • Introducing a new seapath-cluster-disconnected.yaml playbook
  • Introducing a dedicated seapath-cluster-disconnected.yaml inventory
  • Adding a full "SEAPATH Disconnected Deployment Guide"

This effectively reframes the disconnected model around the registry-based approach, whereas in reality:

  • Disconnected deployment is already possible without a persistent registry.
  • The registry on the nodes during installation is temporary.
  • A persistent registry on the controller is an optional architectural choice for day-2 convenience.

I think the PR would be clearer and more aligned with the existing design if it focused strictly on:

Adding the possibility to deploy a persistent registry on the Ansible controller, and letting the user choose whether to use:

  • preloaded images only (current model), or
  • a persistent local registry for day-2 operations.

The documentation could then explain:

  • The two approaches (embedded images vs persistent registry),
  • Their respective pros and cons,
  • The lifecycle implications (installation-time vs day-2),
  • How to enable either model via the inventory.

In other words, I believe this should be presented as an optional enhancement to the existing disconnected strategy, not as a new disconnected deployment model.

Let me know what you think.
