
Add support for in-memory deployment #306

Closed
vasartori wants to merge 1 commit into spectrocloud:main from vasartori:main

Conversation

@vasartori
Contributor

This commit enables support for deploying in-memory machines.

A deployInMemory flag was added to the MaasMachine and MaasMachineTemplate resources. The default value is false.

It is now possible to deploy machines using disks or entirely in memory (without disk).

Example:

apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: MaasMachineTemplate
metadata:
  name: mt-worker-memory
  namespace: test
spec:
  template:
    spec:
      deployInMemory: true
      image: custom/your-image
      minCPU: 4
      minMemory: 16384
      resourcePool: default
      tags:
      - memory

To use this option, your MAAS version must be >= 3.5.

This PR also updates Go to version 1.24.

Fixes: #305


Copilot AI left a comment


Pull request overview

This PR adds support for deploying MAAS machines entirely in memory (without disk) by introducing a deployInMemory flag to MaasMachine and MaasMachineTemplate resources. The default value is false, maintaining backward compatibility. Additionally, the PR updates Go to version 1.24 and updates the maas-client-go dependency to v0.1.2-beta1.

Changes:

  • Added deployInMemory field to MaasMachine and MaasMachineTemplate CRDs with default value false
  • Updated machine deployment logic to set ephemeral deploy flag when deployInMemory is true
  • Updated Go version from 1.23.0 to 1.24 and toolchain to 1.24.12
  • Updated maas-client-go dependency to v0.1.2-beta1
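Based on the summary above, the new CRD property presumably looks something like the following excerpt. This is a sketch: the field name and default come from the PR description, but the description text and surrounding schema layout in the generated infrastructure.cluster.x-k8s.io_maasmachines.yaml are assumptions.

```yaml
# Hypothetical excerpt of the generated CRD schema (under the machine spec
# properties). Only `deployInMemory` and its default are taken from the PR;
# the description wording is assumed.
deployInMemory:
  default: false
  description: DeployInMemory deploys the machine entirely in memory
    (ephemeral deploy) without using local disks. Requires MAAS >= 3.5.
  type: boolean
```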

Reviewed changes

Copilot reviewed 15 out of 16 changed files in this pull request and generated 2 comments.

Show a summary per file
File Description
api/v1beta1/maasmachine_types.go Added DeployInMemory boolean field to MaasMachineSpec
api/v1beta1/types.go Added DeployedAtMemory field to Machine struct to track deployment status
pkg/maas/machine/machine.go Updated deployment logic to set ephemeral deploy flag and read deployment status
config/crd/bases/infrastructure.cluster.x-k8s.io_maasmachines.yaml Added deployInMemory field definition to CRD
config/crd/bases/infrastructure.cluster.x-k8s.io_maasmachinetemplates.yaml Added deployInMemory field definition to CRD template
controllers/maasmachine_controller.go Fixed error message formatting to use verb formatting
controllers/maascluster_controller.go Fixed error message formatting to use verb formatting
go.mod Updated Go version to 1.24 and maas-client-go dependency
Dockerfile Updated base image to golang:1.24.12
Makefile Updated image tag to v0.7.0
clusterctl-settings.json Updated nextVersion to v0.7.0
pkg/maas/machine/machine_test.go Updated test expectations for new DeployedAtMemory field and context
pkg/maas/dns/dns_test.go Changed context.Background() to context.TODO() in test mocks
pkg/maas/client/mock/clienset_mock.go Added mock methods for new client interfaces
api/v1beta1/zz_generated.deepcopy.go Removed deprecated build constraint comment



Copilot AI left a comment


Pull request overview

Copilot reviewed 15 out of 16 changed files in this pull request and generated 10 comments.



@AmitSahastra
Contributor

/test

@vasartori
Contributor Author

@AmitSahastra did the tests pass?

@moondev

moondev commented Feb 14, 2026

This is a really exciting feature now that MAAS can deploy custom machine images in-memory.

If a custom image includes kernel modules for things like GPU and container runtimes, are they automatically included on in-memory machine boot?

When deploying an upstream image in-memory, installing items such as containerd would complain of missing modules.

Another thing this feature addresses is the ability to "release and deploy" without wiping all disks. "Repaving" a machine to a clean state is also very simple: just reboot, and it gets re-bootstrapped with cloud-init. Very cool.

@AmitSahastra
Contributor

@vasartori Thanks for opening the PR.

Is there any user-facing documentation for this feature (e.g., in README)?
What MAAS versions support ephemeral deploy? Should we add a note about minimum MAAS version requirements?

@vasartori
Contributor Author

> @vasartori Thanks for opening the PR.

> Is there any user-facing documentation for this feature (e.g., in README)? What MAAS versions support ephemeral deploy? Should we add a note about minimum MAAS version requirements?

@AmitSahastra Would you prefer this to be documented in the README.md, or is there another place you’d recommend?
We could also create a dedicated documentation file for this feature if that makes more sense.

@vasartori
Contributor Author

vasartori commented Feb 18, 2026

@moondev

> This is a really exciting feature now that MAAS can deploy custom machine images in-memory.

> If a custom image includes kernel modules for things like GPU and container runtimes, are they automatically included on in-memory machine boot?

The kernel is inherited from the ephemeral system, so these modules are not automatically available.
Because of that, this approach may not work properly for kernel-dependent components.

The recommended way is to install the required components using cloud-init or Kubernetes-level mechanisms, such as DaemonSets.
For example, if you are using NVIDIA GPUs, the NVIDIA Operator handles this automatically.

> When deploying an upstream image in-memory, installing items such as containerd would complain of missing modules.

Yes, this can happen.
These dependencies should be handled via cloud-init, which can install the required packages and configurations during bootstrap.

> Another thing this feature addresses is the ability to "release and deploy" without wiping all disks. "Repaving" a machine to a clean state is also very simple: just reboot, and it gets re-bootstrapped with cloud-init. Very cool.

Yes
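The cloud-init approach described in this thread could, for example, install containerd during bootstrap. Below is a minimal user-data sketch; the package names, module list, and overall shape are illustrative assumptions, not taken from this PR.

```yaml
#cloud-config
# Sketch: bootstrap an in-memory machine with containerd via cloud-init.
# Because the machine boots from the ephemeral image every time, a reboot
# effectively "repaves" the node and re-runs this bootstrap.
packages:
  - containerd
runcmd:
  # Load kernel modules needed by container runtimes; these come from the
  # ephemeral system's kernel, not from the custom image.
  - modprobe overlay
  - modprobe br_netfilter
  - systemctl enable --now containerd
```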

@vasartori
Contributor Author

@AmitSahastra I added some docs in the README file. Please take a look and see if they are good enough.

@AmitSahastra
Contributor

> @AmitSahastra I added some docs in the README file. Please take a look and see if they are good enough.

Looks good, thanks!

@vasartori
Contributor Author

@AmitSahastra code rebased!

Contributor

@AmitSahastra left a comment


LGTM!

@AmitSahastra
Contributor

@vasartori Can you rebase this PR with main?

@vasartori
Contributor Author

@AmitSahastra I believe the PR is still up to date with main...
See:
[screenshot]

@arunkurni arunkurni removed the request for review from Kun483 March 3, 2026 04:25
@vasartori
Contributor Author

@AmitSahastra, what are the next steps to get this PR merged?

@senatorispectro

senatorispectro commented Mar 4, 2026

@vasartori It looks like at some point your fork's main branch was force-pushed (likely squashing multiple commits into one), which caused the PR's internal head SHA (7138795...) to go out of sync with your actual branch tip (a1787c0...). GitHub kept refusing to merge with the error "Head branch is out of date" even though your branch was genuinely up to date with upstream main and all checks had passed. It appears to be an internal tracking issue with GitHub's PR functionality.
I tried updating the branch, and closing and reopening the PR; every attempt hit the same stale ref issue.
Could you open a new PR from your fork's main against spectrocloud:main? The content is good to go; it's just GitHub's PR tracking that got stuck. The new PR should pick up the correct SHA and merge cleanly.
Sorry for the hassle!

@AmitSahastra
Contributor

> @AmitSahastra, what are the next steps to get this PR merged?

@vasartori can you help open the fresh PR as suggested by @senatorispectro? Would like to get this content merged. Thanks!


Development

Successfully merging this pull request may close these issues.

Add support for in-memory deployment (MAAS >= 3.5)
