Conversation

@rtollert
Contributor

It was observed that the root filesystem on my dev machine had an unusually large amount of I/O during NILRT builds, sometimes sustaining over 50 MB/s. This is not expected, because with the exception of the pyrex containers (which one would naively consider to be relatively static beasts), my NILRT build does not use the root filesystem in any way.

Some judicious use of `fatrace` indicated that the vast majority of the I/O was coming from e.g.
/var/lib/docker/overlay2/dc702bbbb061fbbf13b9d06c36d8ccee398257070e6c0455ff209b656b2db2f3/diff/tmp/ccuwDoBf.o. What's going on is that the tasks are using the container /tmp instead of the system /tmp. On many machines, including mine, /tmp is a tmpfs, and writing instead to the container /tmp causes needless wear on the root filesystem.
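The kind of triage described above can be sketched roughly as follows. The capture step is shown commented out because `fatrace` requires root; the log lines written here are fabricated stand-ins for illustration (the overlay2 hash is shortened), not real output from the build in question.

```shell
# Capture write events for 30 seconds (requires root):
#   sudo fatrace -f W -s 30 -o fatrace.log

# Fabricated sample of what such a log might look like:
cat > fatrace.log <<'EOF'
cc1(1234): W /var/lib/docker/overlay2/dc702.../diff/tmp/ccuwDoBf.o
cc1(1234): W /var/lib/docker/overlay2/dc702.../diff/tmp/ccuwDoBf.o
as(1235): W /var/lib/docker/overlay2/dc702.../diff/tmp/ccAbC123.s
EOF

# Aggregate by path to find the heaviest writers:
awk '{print $NF}' fatrace.log | sort | uniq -c | sort -rn | head
```

Sorting the per-path counts is what surfaces the `overlay2/.../diff/tmp` paths as the dominant writers.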

The fix is to simply always bind-mount the container /tmp to the system /tmp.
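In plain-Docker terms, the fix amounts to something like the following sketch (the image name and build command here are hypothetical; pyrex constructs its own `docker run` arguments internally):

```shell
# Bind-mounting the host /tmp over the container /tmp keeps compiler
# scratch files on the host tmpfs rather than the overlayfs upper
# directory on the root filesystem.
docker run --rm -v /tmp:/tmp my-pyrex-image:latest bitbake core-image-minimal
```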

This change cannot induce any additional build system instability due to tmpfs-related OOMs, because to first order, all of our upstream recipes have to deal with this anyway: presumably, not all Yocto developers use pyrex nowadays.


Signed-off-by: Rich Tollerton <rich.tollerton@ni.com>
@rtollert
Contributor Author

https://docs.yoctoproject.org/singleindex.html#speeding-up-a-build

> Using tmpfs for TMPDIR as a temporary file system: While this can help speed up the build, the benefits are limited due to the compiler using -pipe.

My experience at present is that gcc is quite unambiguously not using -pipe. Did we break that configuration somehow? Or did upstream break it recently?
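One quick way to probe this question is to compile in verbose mode and look at how the driver hands assembly to `as`: without `-pipe` the driver typically names a temporary assembly file (e.g. `ccXXXXXX.s`), while with `-pipe` it should stream through a pipe instead. This is a diagnostic sketch, not the method used in the PR; `demo.c` is a throwaway file created here for illustration.

```shell
# Throwaway translation unit:
printf 'int main(void){return 0;}\n' > demo.c

# Without -pipe, the verbose output usually mentions a temporary
# assembly file (cc*.s) passed from cc1 to as:
gcc -v -c demo.c -o demo.o 2>&1 | grep -o 'cc[[:alnum:]]*\.s' | head -1

# With -pipe, cc1 streams assembly straight to as, so no temporary
# assembly file should appear in the verbose output (count of matches):
gcc -pipe -v -c demo.c -o demo.o 2>&1 | grep -o 'cc[[:alnum:]]*\.s' | wc -l
```

If the second count is nonzero, `-pipe` is not taking effect for that invocation.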

@amstewart
Contributor

I think this change should go to upstream-pyrex to see what they say. But before you do that, does the DOCKER_TMPDIR config option get you what you want? I could see the argument that this is a machine-specific configuration that should be handled in your dockerd configuration.
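For reference, `DOCKER_TMPDIR` is an environment variable read by the Docker daemon to relocate its own temporary files. On a systemd-managed host it could be set with a drop-in along these lines (the paths here are illustrative, not a recommendation either way on the question above):

```ini
# /etc/systemd/system/docker.service.d/tmpdir.conf (hypothetical path)
[Service]
Environment="DOCKER_TMPDIR=/tmp/docker-tmp"
```

followed by `systemctl daemon-reload && systemctl restart docker`.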

@rtollert
Contributor Author

rtollert commented Sep 6, 2024

Opened upstream garmin/pyrex#95.
