I'm trying to develop a custom RPi image, and it's going to take a large number of build-test cycles to get everything working right. Building a stage 2 image from a clean state takes about 40 minutes regardless of hardware: it's the same on a GitHub Actions runner (2 cores, 7 GB RAM), an AWS t3.2xlarge (8 cores, 32 GB), and a c5a.8xlarge (32 cores, 64 GB RAM).
Watching in htop, I can see that when the build is maxing out a core, it's almost always just one, and most of the time it uses far less. What's really strange is that some steps, like "update-initramfs: Generating /boot/initrd.img-6.1.0-rpi6-rpi-v8", take a really long time yet barely touch CPU, memory, network or disk bandwidth. What is it doing for all that time?!
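In case it's useful to anyone answering, this is roughly how I've been poking at the apparently-idle step (just a sketch: you'd substitute the real process name or PID, and the strace line needs root):

```shell
# Find the slow step's PID; shown here against the current shell ($$)
# purely so the sketch runs standalone. In practice something like:
#   pid=$(pgrep -f update-initramfs)
pid=$$

# Process state: R = running, S = sleeping, D = blocked on I/O.
# A long-running step sitting in S or D is waiting, not computing.
awk '{print "state:", $3}' "/proc/$pid/stat"

# Which kernel function it's sleeping in, if any.
cat "/proc/$pid/wchan"; echo

# For a live view of what it's actually doing (needs root):
#   strace -f -p "$pid" -e trace=read,write,openat
```

If the state is mostly D or the strace shows endless tiny reads/writes, that would at least tell me whether it's I/O-bound rather than CPU-bound.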
Running a build without the CLEAN flag takes about 11 minutes, which is a big improvement. Unfortunately I've been unable to get that working in GitHub Actions, because each run starts on a fresh VM instance. I've successfully got Actions to cache the pi-gen /work folder between runs but, for some reason, pi-gen still runs a clean rebuild...
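For reference, my understanding from the pi-gen README is that stages are skipped via marker files (SKIP to skip building a stage, SKIP_IMAGES to skip exporting its image), so after restoring the cache I've been trying something like this (paths reflect my layout, and which stages to mark is my guess):

```shell
# Sketch: mark earlier pi-gen stages as already done so a restored
# work/ cache isn't rebuilt from scratch.
PI_GEN_DIR="${PI_GEN_DIR:-pi-gen}"            # hypothetical checkout path
mkdir -p "$PI_GEN_DIR/stage0" "$PI_GEN_DIR/stage1"  # only so the sketch runs standalone

for stage in stage0 stage1; do
  # SKIP: don't re-run this stage; SKIP_IMAGES: don't re-export its image.
  touch "$PI_GEN_DIR/$stage/SKIP" "$PI_GEN_DIR/$stage/SKIP_IMAGES"
done

ls "$PI_GEN_DIR"/stage*/SKIP
```

Even with these markers in place, though, the cached work folder still seems to get ignored, which is the part I don't understand.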
It would be really great to rent a ludicrously powerful cloud compute instance for a week or two while I work on my build, but at present there's no point: no matter how many resources you throw at it, pi-gen takes 40 minutes.
Is this what I should expect, or am I missing something?