In my previous post, I ran through how to run a second, guest OS on your 64-bit RPi3 under KVM, with the host OS running an xfce4 desktop, and the guest in console-only mode.
But what if you wanted a GUI on your guest OS too?
Resources are super-tight on the RPi3 platform, but it is just about possible ^-^ So, in this follow-up guide I'll show you a few ways to go about it, picking up from where we left off last time. (To see a screenshot of the final result, scroll to the end of this post.)
As before, we'll be targeting:
You can of course adapt these instructions to your own requirements: most other 64-bit aarch64 OSes can be switched in as the guest, and you could use a non-Gentoo host if you wished (I have chosen that particular image because it ships with KVM support in its kernel, and I happen to maintain it ^-^).
Just before we dive in, a brief introduction to terminology may be in order. KVM (here) stands for "Kernel-based Virtual Machine": a virtualization infrastructure for the Linux kernel that turns it into a hypervisor. This technology, together with some userspace "glue" (here, QEMU), allows two (or more) distinct operating systems to efficiently, and securely, share a common SoC. Unlike emulation, both host and guest run almost all instructions natively (without translation). And unlike a chroot, this arrangement allows both host and guest to run distinct kernels (and init systems). A short intro to KVM on ARM may be found here (this is for 32-bit v7, but the 64-bit v8 code is not too different, so the concepts are still relevant).
OK, I'm going to assume you have already completed the setup from the previous post; if you haven't, begin by doing that first.
Then, if you're currently running the guest VM, shut it down (you can do so by running "sudo shutdown" as the ubuntu user). Next, restart it by issuing the following (as your regular user "demouser", working on the gentoo-on-rpi3-64bit image booted on an RPi3 B or B+) from a console in the qemu-test directory:
Hint: if this fails with an exception, simply issue "pkill qemu" from another window and try again. This happens sometimes when booting the UEFI firmware.
Code: Select all
qemu-system-aarch64 -M virt -cpu host \
-m 384M -smp 2 -nographic \
-bios QEMU_EFI.fd \
-cdrom seed-kvm-bionic-01.iso \
-drive if=none,file=bionic-image-01.img,id=hd0 -device virtio-blk-device,drive=hd0 \
-device virtio-net-device,netdev=vmnic -netdev user,id=vmnic,hostfwd=tcp::5555-:22 \
-accel kvm 2>/dev/null
This is almost the same as last time, but with two changes:
- We have allocated more memory (384MiB vs 256MiB) to the guest, a bare minimum to run a GUI; and
- We have requested QEMU forward port 5555/tcp on the host to port 22/tcp on the guest; this will allow us to connect via ssh.
As before, once this is run, the guest will start up (you may need to press Enter a few times at the GRUB boot stage). A minute or so later you should see an Ubuntu login prompt (in the same terminal as you issued the qemu-system-aarch64 command, since -nographic was specified).
Now open a new terminal window on your gentoo-on-rpi3-64bit desktop, and (as "demouser") issue:
Hint: if your browser shows "email protected", that's just the board's anti-spam system being a bit over-zealous: the login specifier is ubuntu at-sign 127.0.0.1 (in this and subsequent ssh commands - see the screenshot at the end of this post).
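Putting the hint's login specifier together with the forwarded port we requested when invoking QEMU, the ssh command is:

```shell
# connect to the guest's sshd via the host port QEMU forwards (5555 -> 22)
ssh ubuntu@127.0.0.1 -p 5555
```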
Enter the password for the "ubuntu" user (passw0rd) when prompted, and you should be in, connected independently from the QEMU console link (which is in the window where you issued qemu-system-aarch64, just a moment ago), via localhost port 5555 (which QEMU forwards to port 22 - the standard ssh port - on the guest).
Now that you're logged in as ubuntu, take the opportunity to update your system, reboot, and then install some necessary software on the guest. Working within the ssh window (as the "ubuntu" user), issue:
Code: Select all
sudo apt-get update
sudo apt-get -y upgrade
sudo reboot
Once the system comes back up again within QEMU, re-establish the ssh/ubuntu terminal connection as before, then issue (from that ssh terminal, as the "ubuntu" user):
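The package installed at this step is mousepad (the editor we'll use in the X11-forwarding test shortly); assuming the standard Ubuntu package name, the command is:

```shell
# install the mousepad editor (pulls in the X11/GTK stack as dependencies)
sudo apt-get install -y mousepad
```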
mousepad is a simple editor app which can run on X11. It will pull in a number of additional dependency libraries, as the image starts with no GUI support at all, so please be patient.
Once the install is complete, you can try out a first approach to using a GUI from your guest: X11 forwarding onto your host's X server. To do so, open a fresh terminal on your (host) desktop and issue (as "demouser"):
The -f tells ssh to background after asking for a password, and the -T disables pseudo-terminal allocation. The -Y enables (for convenience) trusted X11 forwarding, allowing the guest applications to use your host's X11 server for graphical I/O.
Code: Select all
ssh -f -T ubuntu@127.0.0.1 -p 5555 -Y mousepad /etc/os-release 2>/dev/null
Enter ubuntu's password (passw0rd) when prompted, and then, if all is well, you should find that an editor window opens on your host desktop, but the underlying mousepad application (and the os-release file it is editing, as you can see from its content) is on the guest.
This is a highly efficient way to use the guest's GUI applications, since only one X server is running (your host's). However, it has a number of drawbacks. There are security implications to opening up your host's X11 server (see these notes, for example), and while it is possible to work around these (using e.g. Xephyr inside firejail on the host), it's still not an ideal solution for a full guest desktop.
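As a quick sketch of the Xephyr idea just mentioned (display number and app purely illustrative): you can run a nested X server inside an ordinary host window, and point forwarded guest apps at it, confining them to that nested display:

```shell
# start a nested X server on display :2, rendering into a normal host window
Xephyr :2 -screen 800x600 &
# any client started with DISPLAY=:2 now draws inside the Xephyr window only
DISPLAY=:2 xterm &
```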
So, while this approach is handy to keep in mind for quick one-off access to individual apps, let's next put together a full "remote desktop" for our guest.
There are various ways to approach this issue, but since QEMU on aarch64 on the RPi3 does not currently support the QXL paravirtual graphics card (which would be the default route on x86_64), we'll instead run a virtual framebuffer (Xvfb)-backed X11 server on the guest, run a lightweight desktop (xfce4) on that, and forward the resulting (otherwise invisible) desktop to the host via VNC.
OK, to begin, install the necessary software on your guest. Running as the ubuntu user, within the ssh terminal again, issue:
I don't recommend using --no-install-recommends with xfce4; you'll end up missing things like dbus-x11, which makes the desktop essentially unusable.
Code: Select all
sudo apt-get install -y xfce4 xvfb x11vnc xfce4-taskmanager xfce4-cpugraph-plugin xfce4-terminal links2
and let this run to completion (it will take a while, downloading ~80MiB of archives which take up ~400MiB when installed - the qcow2 disk image has sufficient space though). In the above:
- xfce4 is a relatively lightweight desktop system for X11 (you could just run a window manager, like openbox, but we're trying to push the envelope here ^-^);
- xvfb is a virtual framebuffer for X11 (a pretend graphics card that renders to a memory buffer);
- x11vnc is a VNC server for X11 (we'll use this in preference to QEMU's bundled VNC server, which does not always work correctly with aarch64);
- xfce4-taskmanager is a simple process monitor app for xfce4; installing it is optional for a minimal setup, but recommended;
- xfce4-cpugraph-plugin is a panel plugin for xfce4 that displays a running CPU load graph; installing it is optional (but nice to have);
- xfce4-terminal is a nice terminal emulator for xfce4; optional (but nicer than xterm!); and
- links2 is a super-light-weight web browser that can run in text or X11 mode; installing it is optional (but having some sort of web-browsing capability is nice when configuring a system).
Once the above has completed, you can start xfce4 on your guest!
Begin by creating an X11 virtual framebuffer on display :1, and putting it into the background. We'll make this 800x600 pixels at 24-bit depth; you can vary this as desired (but don't go crazy, there isn't a lot of memory to play with here ^-^). Working as the ubuntu user in the ssh terminal, issue:
Hint - you can use nohup with these commands if desired.
Code: Select all
export DISPLAY=:1
Xvfb $DISPLAY -screen 0 800x600x24 &
Now start the xfce4 desktop itself, for the ubuntu user! Still in the same terminal, issue:
Apart from a bit of CPU activity, nothing will appear to happen, but that's because the desktop is being rendered only to our new virtual framebuffer (this is the same trick sometimes used to run GUIs on headless cloud VM images etc.).
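Assuming DISPLAY is pointing at the new framebuffer (:1), the desktop can be started with:

```shell
# launch the xfce4 session on the virtual framebuffer, in the background
DISPLAY=:1 startxfce4 &>/dev/null &
```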
Next, start up the x11vnc server, serving the same screen and display. Still as the ubuntu user, in the same console window, issue:
The -bg instructs the server to background itself after setup; the -nopw disables the "no password" warning; the -listen localhost directive instructs the server to only accept connections on (guest) 127.0.0.1; and -xkb uses the XKEYBOARD extension, hopefully avoiding most keymapping problems.
Code: Select all
x11vnc -display $DISPLAY -bg -nopw -listen localhost -xkb &>/dev/null
With that done, you now have an xfce4 desktop running over an X11 server on the guest, rendering to a virtual Xvfb framebuffer, and available for remote viewing via VNC at (guest) 127.0.0.1:5900/tcp (the port number is the VNC convention; you can specify a different one if you like).
We're almost there now, but two problems remain.
The first issue is that the gentoo-on-rpi3-64bit image does not ship with a VNC client pre-installed. Fortunately, net-misc/tigervnc is available on the binhost (as a binary package). To install it, issue (as the "demouser" user on a terminal in the host desktop):
Code: Select all
sudo emerge --verbose --noreplace net-misc/tigervnc
This shouldn't take long. The program it installs may be launched from the command line as "vncviewer" (and will also appear in the desktop Applications->Internet menu, as "TigerVNC Viewer").
The second issue is that the guest's localhost port 5900 isn't visible on the host system by default. There are various ways around this, but to avoid having to set up multiple network cards in QEMU, here we'll just take advantage of another neat / scary ssh feature: port forwarding. Using this, we can request that ssh transparently forward traffic from a given port on the host system to another on the remote side (including replies).
To do so here, working in the same host terminal, and still as demouser, issue:
The -f and -T options we've seen before; the -N option tells ssh there is no command payload. The -L component sets up the port forwarding.
Code: Select all
ssh -f -N -T -L 5910:localhost:5900 ubuntu@127.0.0.1 -p 5555
Enter ubuntu's password (passw0rd) when prompted. Nothing will appear to happen, but the tunnel for VNC traffic is now in place between host and guest.
All that remains is to open a viewer! Still working as demouser, in the same host terminal, issue:
Code: Select all
vncviewer 127.0.0.1:5910 &>/dev/null&
And with luck (and OOM-killer permitting ^-^) a window should open on the host, showing the guest desktop. Here's a screenshot of one of my gentoo-on-rpi3-64bit RPi3's, on which the above steps have been run:
Note that for this setup, I changed the icons (using the Xfce Applications->Settings->Appearance tool) after launch to the Ubuntu-Mono-Light set, downloaded (using the links browser!) an Ubuntu 18.04 desktop png for appearance's sake, and installed the cpugraph plugin. But everything else is vanilla.
If you look at the above screenshot, you'll notice that:
- There are four different connections to the guest in use: the bottom right QEMU terminal (which I have here rotated into QEMU monitor mode using Ctrl-a c; do this again and you'd get a synthetic serial console login prompt); the ssh terminal connection (top right, here already logged in as the ubuntu user); the X-forwarded mousepad editor (one from bottom on the right) and the VNC desktop itself (the large window on the left).
- The host and guest are running different kernels - this is not simply a chroot. Compare the output from "uname -a" in the Gentoo terminal (one from top on the right) and in the terminal window in the VNC guest desktop (and also in the ssh terminal).
- You can't see it here, but they're running different init systems too: OpenRC on the host, and systemd on the guest.
- System load is low (see the cpu graph plugins, in the top horizontal panel, on host and guest). KVM virtualization imposes very low overhead, as most code runs natively on both guest and host.
- The guest (as we specified when invoking qemu-system-aarch64) has only 2 cpu cores available, whereas the host has 4 (see the same graph plugins). You can do fun things with cpu affinity if you really want to minimize the latter stepping on the toes of the former, but I haven't for this simple example.
- You can launch apps etc. on the guest as you wish - it is a full xfce4 system. So in the above, I've opened the thunar file browser and an xfce4-terminal.
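As a sketch of the cpu-affinity point in the list above (core numbers purely illustrative): taskset, from util-linux, can pin all of the QEMU process's threads to two of the RPi3's four cores, leaving the other two free for the host:

```shell
# find the oldest qemu-system process, if one is running
pid=$(pgrep -o qemu-system || true)
if [ -n "$pid" ]; then
    # -a: apply to all threads; -c: take a cpu list; -p: operate on a pid
    taskset -a -c -p 2,3 "$pid"
fi
```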
Of course, this is only a proof-of-concept/demo setup. If you used such a thing in a production scenario, you'd have services to launch the various components, with restart-on-failure etc. But hopefully it shows what can be done.
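For instance, on the systemd guest, the x11vnc component could be supervised by a user unit with restart-on-failure. This is only a sketch (the paths and option choices are my own assumptions); note that -forever replaces -bg here, since systemd supervises the foreground process itself:

```shell
# create a user-level systemd unit for x11vnc (guest side)
mkdir -p ~/.config/systemd/user
cat > ~/.config/systemd/user/x11vnc.service <<'EOF'
[Unit]
Description=x11vnc server for the Xvfb desktop

[Service]
ExecStart=/usr/bin/x11vnc -display :1 -nopw -listen localhost -xkb -forever
Restart=on-failure

[Install]
WantedBy=default.target
EOF
# then activate it with:
#   systemctl --user daemon-reload && systemctl --user enable --now x11vnc
```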
That's it for this time. I may follow up with one more post about SPICE in the future, if this is proving useful to anyone ^-^
PS: there is the small question of why any sane person would want to do any of this in the first place, of course ^-^. It is perfectly possible to chroot most guest systems (even 32-bit guest on a 64-bit host), provided you are comfortable sharing a kernel, and there aren't any init system-expectation mismatches. KVM does provide pretty strong isolation I suppose, so if you had a server component you wanted to absolutely lock down (tor, for example) you could put it in a firejail chroot on a hardened guest OS, and then pipe traffic to it from the host... mostly though, on such a resource-limited system as the RPi3, it's for fun ^-^
PPS: forcing the X11 and VNC interaction over an ssh tunnel locally will involve encryption / decryption overhead. If you wanted to do the above in a production system, it'd be better to set up another virtual network card on QEMU shared by the host and guest, and vector traffic over that. Or use a paravirtualized graphics card (but unfortunately these don't seem to be available for vc4 / aarch64 at present, although I'd be happy to be proven wrong on that point).