ejolson
Posts: 10264
Joined: Tue Mar 18, 2014 11:47 am

Making a 2U Pi Cloud

Sun Jul 05, 2020 3:35 am

This thread is intended to discuss how to provision the 12 Pi computers stuffed into a 2U server chassis described in

viewtopic.php?f=36&t=272660

My first thought was to use OpenStack

https://docs.openstack.org/api-quick-st ... start.html

However, that appears so complicated as to defeat any pretensions of auditable security. Also, OpenStack is focused on virtual machines hosted by larger servers rather than running on the bare hardware of small single-tenant Raspberry Pis.

While the ultimate goal is to develop a web interface that allows provisioning each of the Pi computers on demand with a variety of operating system images and attached storage, my thought is that the first step should be creating a clunky API using a bunch of shell scripts.

While dreaming away, I imagined

Code: Select all

$ openstack image list
+--------------------------------------+------------------+
| ID                                   | Name             |
+--------------------------------------+------------------+
| a5604931-af06-4512-8046-d43aabf272d3 | fedora-20.x86_64 |
+--------------------------------------+------------------+
might be replaced by

Code: Select all

$ pistack image list
+--------------------------------------+------------------+
| ID                                   | Name             |
+--------------------------------------+------------------+
| a5604931-af06-4512-8046-d43aabf272d3 | PiOS-buster_32   |
| a5604931-af06-4512-8046-d43aabf272d4 | PiOS-buster_64   |
+--------------------------------------+------------------+
and that the Zero, 2GB, 4GB and 8GB Raspberry Pi computers could correspond to tiny, small, medium and large single-tenant compute instances.
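As a zeroth step, the clunky API could be little more than a dispatcher over a bunch of shell scripts. Below is a hypothetical sketch; the pistack name comes from the imagined output above, while the subcommand layout and install path are pure invention.

Code: Select all

#!/bin/sh
# hypothetical pistack dispatcher: "pistack image list" runs the
# helper pistack-image with the argument "list"; the helper scripts
# and their location are inventions for illustration only
cmd=$1
shift
exec "/usr/local/lib/pistack/pistack-$cmd" "$@"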

Before putting my face mask on for a late-night coding party during the full moon, it seems reasonable to facilitate help and advice by further describing the available hardware for this project. Basically there are a bunch of Raspberry Pi computers that can be controlled and communicated with in three different ways:
  • By toggling the run pin.
  • Through USB as a gadget.
  • Through wired Ethernet.
Note that the Zero computers lack wired Ethernet connections and none of the computers have SD cards. Boot using rpiboot has been explicitly enabled by setting BOOT_ORDER=0xf31 in the EEPROM firmware of the 4B computers. The Zeros do this by default.

When enabled using the run pin, each Pi enters device mode and waits for the Intel-compatible PC to feed it boot files through USB to get up and running. In the case of the Zero the root filesystem is then mounted over USB; in the case of the 4B it is mounted over Ethernet.

The goal is to create preconfigured initial RAM filesystems on demand that set up networking and then mount the necessary root file system from an encrypted iSCSI target. Somehow, it seems like enabling user-mode QEMU on the Intel-compatible PC so it can execute ARM binaries might help in generating the needed initial RAM filesystems. Thus, one starting point for this thread might be an attempt to follow the relevant parts of the tutorial given at

https://github.com/sakaki-/gentoo-on-rp ... infmt_misc
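For reference, the heart of that approach is a one-line registration with binfmt_misc. A minimal sketch, assuming qemu-aarch64-static is already installed in /usr/bin, might look like the following; the magic and mask are the aarch64 ELF header values that also show up in the binfmt entry displayed later in this thread.

Code: Select all

# minimal sketch of registering qemu-aarch64-static with binfmt_misc;
# mount may be skipped if binfmt_misc is already mounted
mount -t binfmt_misc binfmt_misc /proc/sys/fs/binfmt_misc
echo ':qemu-aarch64-static:M::\x7fELF\x02\x01\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02\x00\xb7:\xff\xff\xff\xff\xff\xff\xff\x00\xff\xff\xff\xff\xff\xff\xff\xff\xfe\xff\xff:/usr/bin/qemu-aarch64-static:OC' \
    > /proc/sys/fs/binfmt_misc/register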

Any advice or suggestions are welcome.
Last edited by ejolson on Tue Aug 04, 2020 3:48 pm, edited 1 time in total.

ejolson
Posts: 10264
Joined: Tue Mar 18, 2014 11:47 am

Re: Making a Pi Cloud

Tue Jul 07, 2020 8:01 pm

I now have some barely working scripts that automatically do the following:
  • Create random iSCSI authentication and dmcrypt keys.
  • Create a 32GB file and mount it encrypted over loopback.
  • Format the encrypted loopback device with an ext4 filesystem.
  • Unpack a PiOS image into the encrypted ext4 filesystem.
  • chroot to the PiOS image on the PC using QEMU user-mode.
  • Perform the modifications from the "iSCSI root like a data center" thread.
  • Add a suitable iSCSI target for the 32GB encrypted filesystem image.
  • Create a boot directory for rpiboot based on the target.
I'm particularly amused with being able to run the ARM versions of apt-get and mkinitramfs in the chroot environment under QEMU user-mode on the x86 computer. At this point it is possible to log into the Pi Zero whose GPIO is wired to the Pi 4B computers, toggle the run pin, and the corresponding Pi will boot the new image. It would be just like the EC2 cloud if there was a way to upload your ssh public key and do everything from a web interface.
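For the curious, the encrypted loopback portion of the above might look roughly like this sketch; the image name, mapper name and key handling are illustrative stand-ins rather than the actual scripts.

Code: Select all

# rough sketch of the encrypted loopback steps; IMG, MAPPER and the
# key file are illustrative names, not the real script's variables
IMG=node01.img
MAPPER=node01
truncate -s 32G $IMG                            # sparse 32GB image
dd if=/dev/urandom of=node01.key bs=32 count=1  # random dmcrypt key
LOOP=$(losetup --find --show $IMG)
cryptsetup -q luksFormat --key-file=node01.key $LOOP
cryptsetup luksOpen --key-file=node01.key $LOOP $MAPPER
mkfs.ext4 /dev/mapper/$MAPPER                   # then unpack PiOS here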

After I have time to tidy up the scripts and do more testing, I'll post details.
Last edited by ejolson on Wed Jul 08, 2020 4:55 am, edited 1 time in total.

ejolson
Posts: 10264
Joined: Tue Mar 18, 2014 11:47 am

Re: Making a Pi Cloud

Wed Jul 08, 2020 3:50 am

For some reason, when I chroot into the experimental arm64 version of Raspberry Pi OS using QEMU user-mode on the x86, I don't have networking.

Code: Select all

# chroot iroot
root@silver:/# ifconfig
: error fetching interface information: Device not found
root@silver:/# exit
#
This obviously makes apt-get install not work inside the chroot. I've mounted the sys, dev and proc filesystems using

Code: Select all

# mount --rbind /sys iroot/sys
# mount --make-rslave iroot/sys
# mount --rbind /dev iroot/dev
# mount --make-rslave iroot/dev
# mount --rbind /proc iroot/proc
# mount --make-rslave iroot/proc
and copied the x86 QEMU binaries into place with

Code: Select all

# cp /usr/bin/qemu-arm-static iroot/usr/bin
# cp /usr/bin/qemu-aarch64-static iroot/usr/bin
Inside the chroot everything else seems to work, for example,

Code: Select all

# chroot iroot
root@silver:/# ls
bin   dev  home  lost+found  mnt  proc  run   srv  tmp  var
boot  etc  lib   media       opt  root  sbin  sys  usr
root@silver:/# file /bin/ls
/bin/ls: ELF 64-bit LSB pie executable, ARM aarch64, version 1 (SYSV), dynamically linked, interpreter /lib/ld-linux-aarch64.so.1, for GNU/Linux 3.7.0, BuildID[sha1]=9ecc063cc78a0a8c15f950e5e8fc4a6954c734dc, stripped
root@silver:/# ldd /bin/ls
    libselinux.so.1 => /lib/aarch64-linux-gnu/libselinux.so.1 (0x000000550187a000)
    libc.so.6 => /lib/aarch64-linux-gnu/libc.so.6 (0x00000055018b0000)
    /lib/ld-linux-aarch64.so.1 (0x0000005500000000)
    libpcre.so.3 => /lib/aarch64-linux-gnu/libpcre.so.3 (0x0000005501a22000)
    libdl.so.2 => /lib/aarch64-linux-gnu/libdl.so.2 (0x0000005501a95000)
    libpthread.so.0 => /lib/aarch64-linux-gnu/libpthread.so.0 (0x0000005501aa9000)
root@silver:/# file /usr/bin/*static
/usr/bin/qemu-aarch64-static: ELF 64-bit LSB shared object, x86-64, version 1 (SYSV), dynamically linked, BuildID[sha1]=8fcc876ee19d03df45ca3b9633c713421425fda4, stripped
/usr/bin/qemu-arm-static:     ELF 64-bit LSB shared object, x86-64, version 1 (SYSV), dynamically linked, BuildID[sha1]=19a48d0e0df6c46aa2f7a08bff70b39781dea74f, stripped
root@silver:/proc# cd /proc/sys/fs/binfmt_misc/
root@silver:/proc/sys/fs/binfmt_misc# cat qemu-aarch64-static 
enabled
interpreter /usr/bin/qemu-aarch64-static
flags: OC
offset 0
magic 7f454c460201010000000000000000000200b7
mask ffffffffffffff00fffffffffffffffffeffff
root@silver:/proc/sys/fs/binfmt_misc# cd
root@silver:~# python
Python 2.7.16 (default, Oct 10 2019, 22:02:15) 
[GCC 8.3.0] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> 1+1
2
>>> exit()
root@silver:~# exit
exit
#
Only the network operations don't work. Does anyone have any ideas what is wrong and how to fix it?

ejolson
Posts: 10264
Joined: Tue Mar 18, 2014 11:47 am

Re: Making a Pi Cloud

Thu Jul 09, 2020 7:49 am

ejolson wrote:
Wed Jul 08, 2020 3:50 am
For some reason, when I chroot into the experimental arm64 version of Raspberry Pi OS using QEMU user-mode on the x86, I don't have networking.
After testing, the network problems seem to be a result of the QEMU emulation rather than the change root. In particular, the configuration script works fine in a chroot environment hosted by a real AARCH64 processor [Edit: this is likely wrong]. I'm going to focus on the standard 32-bit versions of Raspberry Pi OS, which work, and add other options later.

One difficulty with my present prototype hardware is that the USB buses on the x86 PC enumerate differently at each boot. By chance, three of the USB hubs are on one bus and the fourth is on another, so it's pretty easy to tell which is which just by counting the hubs. I now have a script which does exactly that and then sets up the rpiboot directories so the 2, 4 and 8GB models are properly identified after each reboot of the server.
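For anyone wanting to do something similar, the hub-counting heuristic might look like the sketch below. Note that lsusb counts the root hub on each bus too, which shifts both totals equally, and the real script's rpiboot directory setup is not shown.

Code: Select all

# hypothetical sketch of telling the USB buses apart by counting hubs,
# assuming the 3/1 split described above; the root hub inflates both
# counts by one, so a threshold of three still separates the buses
for bus in $(lsusb | awk '{print $2}' | sort -u)
do
    hubs=$(lsusb -s "$bus:" | grep -ci hub)
    if [ "$hubs" -gt 3 ]
    then
        echo "bus $bus carries the three hubs"
    else
        echo "bus $bus carries the remaining hub"
    fi
done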
Last edited by ejolson on Mon Jul 13, 2020 6:55 am, edited 6 times in total.

ejolson
Posts: 10264
Joined: Tue Mar 18, 2014 11:47 am

Re: Making a Pi Cloud

Mon Jul 13, 2020 3:43 am

ejolson wrote:
Thu Jul 09, 2020 7:49 am
ejolson wrote:
Wed Jul 08, 2020 3:50 am
For some reason, when I chroot into the experimental arm64 version of Raspberry Pi OS using QEMU user-mode on the x86, I don't have networking.
After testing, the network problems seem to be a result of the QEMU emulation rather than the change root. In particular, the configuration script works fine in a chroot environment hosted by a real AARCH64 processor. I'm going to focus on the standard 32-bit versions of Raspberry Pi OS, which work, and add other options later.
As reported in the post

viewtopic.php?p=1695281#p1695281

networking worked inside the AARCH64 chroot on a Pi Zero, but ping and ifconfig don't. What seems to be happening in this case is related to DNS services. I've copied over a reasonable resolv.conf file, and from within the chroot I can type

Code: Select all

# getent hosts raspberripi.org
199.59.242.153  raspberripi.org
# getent hosts fractal.math.unr.edu
134.197.117.192 fractal.math.unr.edu
which indicates the DNS server can clearly be reached from within the chroot. However, I then type

Code: Select all

# slogin fractal.math.unr.edu
slogin: Could not resolve hostname fractal.math.unr.edu: Temporary failure in name resolution
and find that apt-get can't resolve its hosts either.

I suspect the problem is something like this: the 64-bit Raspberry Pi OS image in the chroot is configured to talk to a systemd service to look up DNS queries; the host does not use systemd; since the systemd service is not running outside the chroot environment, the lookup reports an error and quits [Edit: this is also likely wrong]. I have no clue how to get the chroot DNS lookups to work.
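For anyone who wants to check the same hypothesis, the resolver wiring inside the chroot can at least be inspected from the host with something like

Code: Select all

# quick diagnostic for the suspicion above: a dangling symlink to
# stub-resolv.conf or a 'resolve' entry in nsswitch.conf would point
# at systemd-resolved; iroot is the chroot directory from before
ls -l iroot/etc/resolv.conf
grep '^hosts:' iroot/etc/nsswitch.conf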

This appears to be one of those problems related to the current state of Linux userland being too complex with too many gratuitous and incompatible changes. What I find strange is that the 32-bit version of Raspberry Pi OS seems to manage, but maybe the 64-bit version is much closer to the standard Debian release of Buster. Help would be very much appreciated here.

In detail, the host is running the musl variant of Void Linux, which is based on the runit init and service-management system. If I'm right about this being a problem with systemd inside chroot environments on hosts that are not running systemd, then Gentoo would likely have similar problems chrooting into the current 64-bit Raspberry Pi OS image. Are there any thoughts on this?
Last edited by ejolson on Mon Jul 13, 2020 4:34 pm, edited 1 time in total.

ejolson
Posts: 10264
Joined: Tue Mar 18, 2014 11:47 am

Re: Making a Pi Cloud

Mon Jul 13, 2020 7:09 am

It seems systemd is not the culprit after all, as the same problem happens with 64-bit ARM Gentoo running on aarch64 QEMU. When I use strace I get a bunch of "Message too long" errors. The final message "Temporary failure in name resolution" matches a similar problem from a couple of years ago with qemu-aarch64-static builds for MacOS. This points to a compatibility problem with the Musl C library similar to what happened with rpiboot. I imagine there is a lot of code that checks for Linux by checking for glibc instead. Whoever thought that was a good idea has caused a lot of trouble for nothing.

Although depending on glibc is what got us into the present mess of everything relying on its undocumented and nonstandard features, I'm tempted to remove Musl and start over with glibc. I was only using Musl on that server as part of a previous experiment anyway.

https://musl.libc.org/

ejolson
Posts: 10264
Joined: Tue Mar 18, 2014 11:47 am

Re: Making a Pi Cloud

Mon Jul 13, 2020 6:00 pm

ejolson wrote:
Mon Jul 13, 2020 7:09 am
This points to the problem being a compatibility problem with the Musl C library similar to what happened with rpiboot. I imagine there is a lot of code that checks for Linux by checking for glibc instead. Whoever thought that was a good idea has caused a lot of trouble for nothing.
As confirmation, I copied qemu-aarch64-static and qemu-arm-static from the glibc version of Void Linux into the 64-bit Raspberry Pi OS chroot directory and it just works.

Code: Select all

root@silver:/# file /bin/bash
/bin/bash: ELF 64-bit LSB pie executable, ARM aarch64, version 1 (SYSV), dynamically linked, interpreter /lib/ld-linux-aarch64.so.1, for GNU/Linux 3.7.0, BuildID[sha1]=b11533bde88bb45ef2891fbf3ad86c1869ed3a41, stripped
root@silver:/# cat /proc/cpuinfo | grep "model name"
model name  : AMD A6-5400K APU with Radeon(tm) HD Graphics
model name  : AMD A6-5400K APU with Radeon(tm) HD Graphics
root@silver:/# uname -a
Linux silver 5.4.50_1 #1 SMP PREEMPT Thu Jul 2 15:07:59 UTC 2020 aarch64 GNU/Linux
root@silver:/# ping google.com
PING google.com (172.217.14.78) 56(84) bytes of data.
64 bytes from lax17s38-in-f14.1e100.net (172.217.14.78): icmp_seq=1 ttl=115 time=38.1 ms
64 bytes from lax17s38-in-f14.1e100.net (172.217.14.78): icmp_seq=2 ttl=115 time=37.7 ms
64 bytes from lax17s38-in-f14.1e100.net (172.217.14.78): icmp_seq=3 ttl=115 time=39.3 ms
64 bytes from lax17s38-in-f14.1e100.net (172.217.14.78): icmp_seq=4 ttl=115 time=39.6 ms
^C
--- google.com ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 7ms
rtt min/avg/max/mdev = 37.659/38.692/39.627/0.840 ms
So there is a problem with qemu-aarch64-static when compiled with Musl C. That's for another forum. Although this still leaves open the question of why the version of qemu-aarch64-static that appears in the Buster version of 32-bit Raspberry Pi OS leads to segmentation faults while the one in Stretch does not, that's for a different thread.

viewtopic.php?f=63&t=279820

As happens frequently, I'm again reminded of the thread

viewtopic.php?f=62&t=225442

in which it's discussed why nothing ever works.

ejolson
Posts: 10264
Joined: Tue Mar 18, 2014 11:47 am

Re: Making a Pi Cloud

Mon Jul 13, 2020 6:22 pm

ejolson wrote:
Mon Jul 13, 2020 6:00 pm
As confirmation, I copied qemu-aarch64-static and qemu-arm-static from the glibc version of Void Linux into the 64-bit Raspberry Pi OS chroot directory and it just works.
Woohoo! That was the final piece of the puzzle preventing the 64-bit beta test for Raspberry Pi OS from spinning up on the Pi Cloud. Now the scripts can automatically configure any version of PiOS to run on a 4B and each of the 32-bit versions for the Zero. Before beginning work on a clunky web interface, I need backup scripts and ways to duplicate user-customized images. That part should be pretty easy compared to fixing bugs in QEMU.

After that, it's on to developing the web interface that allows this to be done with a mouse. Since Amazon, Google, IBM and Microsoft all present a boring black-on-white website for their clouds, I'm thinking the way to defeat the coronavirus is a retro dark theme with lots of JavaScript, animated graphics and maybe sound. It's too bad the blink tag doesn't work anymore.

ejolson
Posts: 10264
Joined: Tue Mar 18, 2014 11:47 am

Re: Making a Pi Cloud

Tue Jul 14, 2020 5:24 am

Here is a photographic depiction of the actual hardware

Image

so that the software configuration, which will be described shortly, makes sense.

As already mentioned, one distinguishing feature of this setup is the use of rpiboot to perform the initial program load for the Pi 4B computers. To do this the USB-C power ports are connected to USB hubs modified to obtain power directly from the 5V rail of the ATX supply and plugged into the headers on the x86 file server. Each hub is connected to one Zero and three Pi 4B computers and protected by an inline fast-blow automotive fuse. One of the Zeros has its GPIO wired to the run enable pins on each of the other computers and serves as the run controller. This leaves the remaining three Zeros and 12 Pi 4Bs available for use as compute instances in what I'm calling the Pi cloud.

The ATX supply has been modified for the front-to-back airflow needed in the server chassis by reversing the direction of the fan that would otherwise have tried to pull air out through the front. An additional fan has been inserted on the other side to form a wall of four fans that pull air from the left and push it out the back of the box on the right.

Blue networking cables connect each of the 4B computers to a network switch. There are also some blue cables being used to connect the hubs to the motherboard and the run pins to the run controller. The eight-port switches have been chained together. One is connected to the network card on the server and the other is wired to the socket dangling from the back end of the box. That socket is presently unused, but could extend the system area network to an additional pie-filled 2U server chassis. The switches are directly powered by the 12V rail of the ATX power supply.

The file server has a built-in network port that serves as the upstream connection to the Internet. The internal IP numbers on the system area network are translated to public IP numbers, forwarded and can be reassigned on demand. This allows shaping and metering the upstream traffic even though the switches are unmanaged.
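Although the actual interfaces and addressing are not shown here, the translation amounts to a few iptables rules along these lines; eth0 and all the addresses below are illustrative.

Code: Select all

# hypothetical sketch of the address translation described above;
# eth0 faces the Internet, 10.0.0.0/24 stands in for the system area
# network and 203.0.113.10 is a made-up public address
sysctl -w net.ipv4.ip_forward=1
iptables -t nat -A POSTROUTING -o eth0 -s 10.0.0.0/24 -j MASQUERADE
iptables -t nat -A PREROUTING -i eth0 -d 203.0.113.10 \
    -j DNAT --to-destination 10.0.0.10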

It is nice to be able to perform a hard reset of any individual Pi computer as needed by toggling the corresponding GPIO line on the Zero which serves as the run controller. A hard reset of the run controller can be achieved by power cycling the file server. This can also be done remotely. So far everything has been surprisingly reliable, likely due to the quality of the 5V power provided by that old ATX power supply and, of course, the Pis themselves.

ejolson
Posts: 10264
Joined: Tue Mar 18, 2014 11:47 am

Re: Making a Pi Cloud

Thu Jul 16, 2020 8:47 pm

Things have been going quite slowly due to many other chores, among which is preparing for the next school year. At any rate, I couldn't resist including an option to disable encryption on the iSCSI root since the Zero suffered so much. Even though the 4B is much faster, it also lacks built-in hardware encryption, so its performance benefits from an unencrypted root device as well.

Since iSCSI access is still authenticated, the main security concern would be snooping the system area network that consists of those two 8-port network switches hidden inside the 2U chassis. This might be possible with various types of MAC flooding attacks or ARP cache poisoning that could be perpetrated using one of the other Pi computers inside the box. While not impossible to pull off, especially if you further pivot root on the attacking Pi to a RAM filesystem so it doesn't crash first, such activity would likely be detected and would violate any sensible terms of service. Moreover, not everyone will be processing sensitive data subject to regulations. Consequently, as with the coronavirus, almost anything is preferable to safety.
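For what it's worth, exporting one of the image files with CHAP authentication might look like the following sketch, assuming the kernel LIO target managed through targetcli; every IQN, path and credential below is made up and the actual setup may differ.

Code: Select all

# hypothetical sketch of an authenticated iSCSI target via targetcli;
# all names, paths and credentials here are illustrative only
targetcli /backstores/fileio create node01 /srv/tank/node01.img
targetcli /iscsi create iqn.2020-07.local.picloud:node01
targetcli /iscsi/iqn.2020-07.local.picloud:node01/tpg1/luns \
    create /backstores/fileio/node01
targetcli /iscsi/iqn.2020-07.local.picloud:node01/tpg1/acls \
    create iqn.2020-07.local.picloud:init01
targetcli /iscsi/iqn.2020-07.local.picloud:node01/tpg1/acls/iqn.2020-07.local.picloud:init01 \
    set auth userid=node01 password=replace-with-random-key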

ejolson
Posts: 10264
Joined: Tue Mar 18, 2014 11:47 am

Re: Making a Pi Cloud

Mon Jul 20, 2020 5:56 am

I've been studying the Amazon cloud

Image
https://docs.aws.amazon.com/AWSEC2/late ... orage.html

and the Google cloud

https://cloud.google.com/products/storage

and noticed they have
  1. Local storage (Instance Store).
  2. Network block devices (EBS).
  3. Network file systems (EFS).
  4. Archive and object store (S3).
Although SD cards and thumb drives offer almost no performance advantage over an iSCSI network block device, one can obtain a good performance boost on a 4B using SSDs connected through SATA bridges. However, any type of local storage presents a problem when renting out non-virtualized hardware. Not only must the data on the drive be wiped, but one would need to verify the integrity of the firmware on the SSDs and SATA bridges between tenants as well.

Unfortunately, it seems people can't even tell the difference between the real and fake Cisco switches

https://labs.f-secure.com/assets/BlogFi ... -cisco.pdf

that are starting to be discovered in the supply chain. As SSDs and SATA bridges have firmware which is vulnerable to an easier spy-versus-spy style modification, none of the nodes in the Pi cloud have any form of local storage.

Upon scaling out the Pi cloud to multiple 2U server chassis, it would be possible to set up a Ceph object store distributed across the file servers inside each box. While this is not something that can be accomplished at present, there is little need for an archive since there is no local storage anyway.

This leaves the Pi cloud with
  • The iSCSI root for the OS.
  • NFSv4 with system security.
The boot files for each Pi are provided using rpiboot and contained in a subvolume which is later mounted under /boot for updates via a chroot sshfs. The size of this boot directory is limited to 256MB using BTRFS quota groups.
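The quota part, for reference, only takes a few commands. Here is a minimal sketch; the mount point and subvolume name are illustrative.

Code: Select all

# minimal sketch of capping a boot subvolume with BTRFS quota groups;
# /srv/tank and the subvolume name are made up for illustration
btrfs quota enable /srv/tank
btrfs subvolume create /srv/tank/boot-node01
btrfs qgroup limit 256M /srv/tank/boot-node01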

Quota groups are also used to determine how much space is available for the NFSv4 exports of each tenant. Right now Fido is the only tenant, so it's a little difficult to know whether things are working properly or not.

alkersan
Posts: 24
Joined: Thu Mar 26, 2020 5:13 pm

Re: Making a Pi Cloud

Mon Jul 20, 2020 8:21 am

ejolson wrote:
Sun Jul 05, 2020 3:35 am
While the ultimate goal is to develop a web interface that allows provisioning each of the Pi computers on demand with a variety of operating system images and attached storage, my thought is that the first step should be creating a clunky API using a bunch of shell scripts.
Just wondering - who is the end user? Do you intend to serve it online, or is it a hobby project for learning and fun?

ejolson
Posts: 10264
Joined: Tue Mar 18, 2014 11:47 am

Re: Making a Pi Cloud

Mon Jul 20, 2020 7:29 pm

alkersan wrote:
Mon Jul 20, 2020 8:21 am
ejolson wrote:
Sun Jul 05, 2020 3:35 am
While the ultimate goal is to develop a web interface that allows provisioning each of the Pi computers on demand with a variety of operating system images and attached storage, my thought is that the first step should be creating a clunky API using a bunch of shell scripts.
Just wondering - who is the end user? Do you intend to serve it online, or is it a hobby project for learning and fun?
Ideally the present prototype would be scaled out to a half-rack online at the Switch Citadel or a similar data center.

https://www.switch.com/the-citadel/

The original goal was to create a cloud of single-tenant compute nodes that would be more secure than the multi-tenancy model used by traditional hyperscalers. In my opinion, the main reason Meltdown and the Spectre side-channel information-leakage attacks were such big news was that they demonstrated the business plan of using virtualization to safely share a processor originally designed for personal computing between adversarial entities was mostly wishful thinking.

The fact that the Linux system call mechanism was slowed down by about 100-fold in order to partially mitigate these side channels demonstrates how much influence certain companies have over development. Since most code relies little on the efficiency of the system call, average performance only regressed by 5 to 20 percent. Even so, certain industries have chosen single-tenant hardware to increase security when dealing with sensitive information. My observation is that the compute instances offered by IBM, Google, Amazon and the others are too large for many of the smaller companies who need single-tenant hardware to comply with regulations.

One problem with a Pi cloud

viewtopic.php?p=1283072#p1283072

is that people interested in low cost aren't much concerned about security, while people interested in security tend to have plenty of money.

I have found one tenant already. After both the commercial and non-commercial failures of FidoBASIC, the former developer seems to have retrained for a career change as a dentist. Since many dentists are unable to practice their trade while the patient is wearing a face mask, this move appears further motivated by a misconception that dogs don't get the coronavirus. When I pointed out minks appear particularly susceptible and are closely related to weasels, the dog dentist became hostile and growled that the only weasel around here walks on two legs.

At any rate, although the family physician long ago joined a group medical practice to offset the regulatory cost of renting 128-core single-tenant nodes in the cloud, many dentists have not done so. As most medical groups further discriminate against dogs as members as well as patients, Fido has gladly signed up to process dog-dentistry records in the prototype single-tenant Pi cloud. Aside from this, things are still in development and I'm mostly learning stuff.
Last edited by ejolson on Sat Aug 01, 2020 4:32 pm, edited 8 times in total.

ejolson
Posts: 10264
Joined: Tue Mar 18, 2014 11:47 am

Re: Making a Pi Cloud

Wed Jul 22, 2020 7:32 pm

Woohoo! The kernel upgrade to 5.4.51 happened without difficulties. The initial RAM filesystem automatically regenerated and a simple reboot brought the non-virtualized instance in the Pi cloud back up with the new kernel.

All this happened because of the custom kernel hook mykernel-copy that I created in /etc/kernel/postinst.d, which reads

Code: Select all

#!/bin/bash
(
cd /boot

# Please change the line below to kernel.img, kernel7.img, kernel7l.img
# or kernel8.img depending on the model of Pi and whether you are
# running in 32 or 64-bit mode.
kfile=kernel.img

# Nothing to do unless the kernel changed since the last run.
if diff -b $kfile mykernel.img
then
    exit 0
fi

# Locate the gzip magic bytes 1f 8b inside the kernel image by
# hexdumping decimal offsets one byte per line.
pos=`hexdump -ve '"%_ad " 1/1 "%02x\n"' $kfile |
    grep -B1 '8b' | grep -m1 '1f' | awk '{print $1}'`
# Decompress from that offset and scrape x.y.z out of the embedded
# "Linux version x.y.z" banner.
vers=`dd if=$kfile skip=$pos iflag=skip_bytes |
    gunzip | strings | grep -iPm1 'Linux version' |
    awk '{print $3}'`

echo Updating mykernel and myinitrd to version $vers...
update-initramfs -c -k $vers
mv initrd.img-$vers myinitrd.img
cp $kfile mykernel.img
)
Every time the kernel is updated, this hook runs and checks whether mykernel.img is the same as the system kernel specified by the kfile shell variable; see the comments in the script. If these files differ, then the new kernel is copied over the old and the initial RAM filesystem is regenerated for the version of the new kernel. The tricky part was reading the needed version from the kernel binary itself, which I obtained using cut-and-paste programming.

When a Pi cloud instance is created, the kfile= line in the hook is automatically set by the clunky API I've been developing to the correct type of kernel, depending on the model of Pi (in the above example it was a Zero) and whether the Pi will be running in 32 or 64-bit mode. The lines

Code: Select all

kernel=mykernel.img
initramfs myinitrd.img
are then appended to /boot/config.txt.

At this point kernel updates regenerate the initial RAM filesystem whenever needed. This is a much better idea than simply pinning the kernel to whatever old version was used for the initial install, as I did in the super cheap cluster.

viewtopic.php?f=49&t=199994

Note that in that case, pinning the kernel was also done to avoid updating to a version that had a regression in the Ethernet gadget, which happily has long since been fixed. The next question is whether the 5.4.x kernel has any regressions that affect the Pi cloud. I'm particularly interested to know whether core 1 reliably comes online, as discussed in the thread

viewtopic.php?f=63&t=280544
Last edited by ejolson on Sat Aug 01, 2020 4:33 pm, edited 1 time in total.

ejolson
Posts: 10264
Joined: Tue Mar 18, 2014 11:47 am

Re: Making a Pi Cloud

Wed Jul 29, 2020 9:45 pm

On each Pi, from the Zero to the current 4B, there are holes for connecting the run header. When this line is pulled high the Pi runs; when pulled low it halts. It is possible to wire the GPIO of one Pi that will be used as a run controller to the run enable circuits on each of the other Pi computers and use that to perform a remote hardware reset and reboot on demand. After fighting with Inkscape for some time, here is a wiring diagram showing which GPIO wires are connected to which computers in the Pi cloud.

Image

Do not confuse the global enable on the Pi 4B header with the run pin. Global enable uses 5V logic not compatible with the GPIO logic levels. Only connect the GPIO wires to the pin labeled run.
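As a preview of the reset scripts, toggling one run pin from the run controller might look like this hypothetical sketch using the sysfs GPIO interface; the pin number is illustrative and must match the wiring diagram above.

Code: Select all

#!/bin/bash
# hypothetical hard reset of one node from the run controller via
# sysfs GPIO; PIN is illustrative and must match the wiring diagram
PIN=17
echo $PIN > /sys/class/gpio/export 2>/dev/null
echo out > /sys/class/gpio/gpio$PIN/direction
echo 0 > /sys/class/gpio/gpio$PIN/value    # pull run low to halt
sleep 1
echo 1 > /sys/class/gpio/gpio$PIN/value    # pull high to run again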

Detailed information on how to connect the wires is at

https://scribles.net/waking-up-raspberr ... reset-pin/

see also the discussion in

viewtopic.php?f=63&t=272813

and

viewtopic.php?f=29&t=243530

as well as the abbreviated schematics

https://www.raspberrypi.org/documentati ... /README.md

ejolson
Posts: 10264
Joined: Tue Mar 18, 2014 11:47 am

Re: Making a Pi Cloud

Sat Aug 01, 2020 3:59 pm

I've just read a nice post about setting up kubernetes on a cluster of Pi computers

viewtopic.php?f=63&t=281585

It seems that NFS root filesystems are no problem anymore but that the default kernel is missing one of the necessary cgroup categories. At any rate, it's good to know an NFS root works. At the moment, I plan to stick with iSCSI for performance and security reasons. Before attempting kubernetes with the Pi cloud, the next post will describe the scripts used to perform a hardware reset of nodes in the cluster using the GPIO wires.
