
Conversation

@zorun
Contributor

zorun commented Jan 4, 2020

No description provided.

@zorun
Contributor Author

zorun commented Jan 4, 2020

An alternative would be to keep trying to set sysctl settings, but only output a warning if it fails.

But this approach wouldn't work with the transparent_hugepages systemd service.
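For the sysctl part, a minimal sketch of that approach (task names are illustrative; ignore_errors lets the play continue and the debug task only prints a warning):

- name: sysctl vm.overcommit_memory=1
  sysctl:
    name: vm.overcommit_memory
    value: "1"
    state: present
    reload: true
    sysctl_file: /etc/sysctl.conf
  register: overcommit_result
  ignore_errors: true   # fails inside containers where /proc/sys is read-only

- name: warn if vm.overcommit_memory could not be set
  debug:
    msg: "WARNING: could not set vm.overcommit_memory, continuing anyway"
  when: overcommit_result is failed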

@ReinerNippes
Owner

I was thinking about using a block so there is just one when clause. Or an include_tasks with a when statement.

And I wanted to check whether there is an Ansible fact set by gather_facts that tells us if the playbook is running inside a container.
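For illustration, a block would let several tasks share one guard (a sketch only; the condition is the open question, and the task names just follow what was mentioned earlier in this thread):

- name: host-only tuning
  when: not (is_container | default(false))   # placeholder condition
  block:
    - name: sysctl vm.overcommit_memory=1
      sysctl:
        name: vm.overcommit_memory
        value: "1"
        state: present

    - name: start and enable the transparent_hugepages service
      service:
        name: transparent_hugepages
        state: started
        enabled: true

With include_tasks, the same condition would sit on a single include_tasks line instead.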

@sgutermann

I have been using something like this in the past:

- set_fact:
    __is_container: "{{ (ansible_env | default([])).ANSIBLE_CONTAINER | default(false) }}"

Not sure if there is a more convenient way.
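That fact could then guard an include, roughly like this (host_tuning.yml is a hypothetical file name here):

- name: include host-only tuning tasks
  include_tasks: host_tuning.yml
  when: not (__is_container | bool)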

@m-klenk

m-klenk commented Feb 8, 2022

Could we also leverage the systemd-detect-virt for that?
https://www.freedesktop.org/software/systemd/man/systemd-detect-virt.html
One option would be to skip the tasks whenever it does not return "none", but then all virtual environments (VMs included) are skipped.
Alternatively, only the container values (lxc, docker, podman, ...) could be checked.
If this is feasible I would try it in another pull request.

Here is a short snippet with a local change only (it would be necessary at every affected task). If a global change is more suitable I can test this as well. This checks only for containers, not for VMs.
redis/tasks/main.yml

- name: check for container virtualization
  ansible.builtin.command: "systemd-detect-virt -c"
  register: virtualization_type
  changed_when: false
  failed_when: false   # exits non-zero when no container is detected

- name: sysctl vm.overcommit_memory=1
  sysctl:
    name: vm.overcommit_memory
    value: "1"
    state: present
    reload: true
    sysctl_file: /etc/sysctl.conf
  when: virtualization_type.stdout == "none"   # only apply outside a container

@ReinerNippes
Owner

I'm afraid not. In a container the systemd-detect-virt command might not be available. Or?

I found two facts:

  • ansible_virtualization_role: host
  • ansible_virtualization_type: kvm

I only have to check the values inside a container. And hope that won't change when the user is using podman.
Stay tuned.

@ReinerNippes
Owner

It would be: when: not "container" in ansible_virtualization_tech_guest to disable tasks inside a container.
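A sketch of that guard on one of the affected tasks (ansible_virtualization_tech_guest needs facts gathered and a reasonably recent ansible-core):

- name: sysctl vm.overcommit_memory=1
  sysctl:
    name: vm.overcommit_memory
    value: "1"
    state: present
  when: '"container" not in ansible_virtualization_tech_guest'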

But next I'm getting a

TASK [redis : start and enable redis] ************************************************************************************************************************************************
fatal: [localhost]: FAILED! => changed=false 
  msg: 'Could not find the requested service redis-server.service: host'

So it seems to make no sense to run the playbook in a container. Or?

@m-klenk

m-klenk commented Feb 10, 2022

Hi Reiner,
thanks for looking into this. I tested my initial proposal only on Debian & OpenSUSE hosts and a Debian container (lxc).
But using the ansible facts might be a much better solution. Nevertheless, this does not differentiate between containers and virtual machines, right? (Looking at the source file https://github.com/ansible/ansible/blob/devel/lib/ansible/module_utils/facts/virtual/linux.py#L55, it seems a check for docker, podman, lxc, containerd and container should be good enough?)
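One way to express that check would be to derive a single boolean from the list fact (a sketch; nc_in_container is just an illustrative variable name, and the value list mirrors the types named above):

- name: determine whether we are running inside a container
  set_fact:
    nc_in_container: "{{ ansible_virtualization_tech_guest | default([]) | intersect(['docker', 'podman', 'lxc', 'containerd', 'container']) | length > 0 }}"

The remaining tasks could then use when: not nc_in_container.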
I could modify my pull request from here:
#137

For me the start and enable task works in the container (which one are you testing? podman?)

@ReinerNippes
Owner

I'm developing and testing the "nextcloud" playbook on AWS EC2, sometimes on Hetzner VMs. I wouldn't do it in a container because of the limitations we are discussing here. And I don't have any experience with lxc, so I wasn't aware of any problems inside an lxc container.

So if your modifications are working now in lxc, it's fine. If they don't work in docker or podman, imho it doesn't matter, because the playbook isn't meant to run in a docker container. One should do this with a Dockerfile.
