Gentoo Linux: Building/rebuilding a kernel and Intel CPU microcode in an installation with initramfs

In a 2014 post I explained how to update the Intel CPU microcode in a Gentoo Linux installation with an initramfs (I use sys-kernel/genkernel to build the kernel in the installation on my Compal NBLB2 laptop, which is running the Testing Branch of Gentoo Linux although the branch is not important). The initscript method (Method 1 in that post) for updating the CPU microcode is no longer valid, and the behaviour of the tool sys-apps/iucode_tool for updating the CPU microcode (Method 2 in that post) has changed, hence this update.

Although not essential I normally perform the microcode upgrade procedure when I either rebuild or upgrade the Linux kernel, therefore I explain both procedures contiguously here.

These days the grub-mkconfig command edits the file /boot/grub/grub.cfg to add a line to the GRUB menu entries, to load the CPU microcode at boot, but nevertheless I prefer to follow a slightly different method that works reliably for me.

Below is the procedure I follow to build/rebuild the kernel and the Intel CPU microcode. Others may have a different approach, but this has always worked well for me, even if some of the steps are sometimes nugatory.

If they are not already installed, you need to merge a couple of packages before starting the main procedure:

root # emerge app-arch/lzma # Needed to build bzImage.
root # emerge iucode_tool

1. Mount the boot directory if it is on a separate partition

root # mount /dev/sda3 /boot

2. Check which kernel sources are installed and which of those sources is currently selected

root # eselect kernel list

3. Make a back-up configuration file for the current running kernel

root # zcat /proc/config.gz > /usr/src/config

4. Select the kernel sources I want to build

root # eselect kernel set <n>

5. Build the kernel image and the initramfs image

root # genkernel --kernel-config=/usr/src/config --clean --menuconfig --microcode=intel --no-splash --module-rebuild all

I have configured the following kernel options relating to the early loading of the Intel CPU microcode (see later):

root # grep CONFIG_BLK_DEV_INITRD /usr/src/linux/.config
CONFIG_BLK_DEV_INITRD=y
root # grep CONFIG_MICROCODE /usr/src/linux/.config
CONFIG_MICROCODE=y
CONFIG_MICROCODE_INTEL=y
# CONFIG_MICROCODE_AMD is not set
# CONFIG_MICROCODE_OLD_INTERFACE is not set
root # grep CONFIG_INITRAMFS_SOURCE /usr/src/linux/.config
CONFIG_INITRAMFS_SOURCE=""

6. Rebuild the X.Org Server and X.Org drivers

I always do this even though not always necessary. One less thing to think about (not rebuilding them has sometimes caused me problems).

root # emerge xorg-server xorg-drivers

7. Rebuild NetworkManager if it is installed

I always do this even though not always necessary. One less thing to think about (not rebuilding it has sometimes caused me problems).

root # emerge networkmanager

8. If there is a new version of the Intel CPU microcode, generate it and copy it to the boot directory

For several years updates to the package sys-firmware/intel-microcode have not resulted in a change to the version of Intel CPU microcode for the legacy Intel Core i7-720QM CPU in my Compal NBLB2 laptop, as Intel no longer supports that model of CPU. Nevertheless it does no harm to repeat the procedure.

root # emerge sys-firmware/intel-microcode
root # rm /boot/microcode.cpio
root # iucode_tool -S --write-earlyfw=/boot/microcode.cpio /lib/firmware/intel-ucode/*
root # rm /boot/intel-uc.img

(The fourth command is to stop the grub-mkconfig command (see Step 9.2) adding intel-uc.img to the initrd line in the grub.cfg file.)
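
If intel-uc.img were left in /boot, grub-mkconfig would typically prepend it to the initrd line as well, producing something similar to the following instead of the line shown in Step 9.3 (which is what I want to avoid):

initrd /intel-uc.img /microcode.cpio /initramfs-<kernel version>-gentoo-x86_64.img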

Note the USE flags that I have set and cleared for sys-firmware/intel-microcode:

root # equery uses intel-microcode
[ Legend : U - final flag setting for installation]
[        : I - package is installed with flag     ]
[ Colors : set, unset                             ]
 * Found these USE flags for sys-firmware/intel-microcode-20210608_p20210830:
 U I
 - - hostonly    : only install ucode(s) supported by currently available (=online) processor(s) 
 - - initramfs   : install a small initramfs for use with CONFIG_MICROCODE_EARLY 
 + + split-ucode : install the split binary ucode files (used by the kernel directly) 
 - - vanilla     : install only microcode updates from Intel's official microcode tarball

9. Create a new grub.cfg file

9.1 First check the contents of /etc/default/grub to make sure it will be OK for the new version of the kernel

root # nano /etc/default/grub

Modify the contents of /etc/default/grub if necessary for the kernel version that has just been built.

9.2 Generate a new grub.cfg file

root # grub-mkconfig -o /boot/grub/grub.cfg

9.3 Check the new grub.cfg file includes the loading of the CPU microcode

root # nano /boot/grub/grub.cfg

The last line for each menu entry (i.e. the line before the closing curly bracket of the menu entry) should contain:

initrd /microcode.cpio /initramfs-<kernel version>-gentoo-x86_64.img

as shown in the example file excerpt below:

[...]
### BEGIN /etc/grub.d/10_linux ###
menuentry 'Gentoo GNU/Linux' --class gentoo --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-simple-9db2f668-a682-4d6f-abc5-ed6f6c515b95' {
load_video
set gfxpayload=1024x768
insmod gzio
insmod part_msdos
insmod ext2
set root='hd0,msdos3'
if [ x$feature_platform_search_hint = xy ]; then
search --no-floppy --fs-uuid --set=root --hint-bios=hd0,msdos3 --hint-efi=hd0,msdos3 --hint-baremetal=ahci0,msdos3  597e8c88-8d50-443f-ae19-f510844f5d4e
else
search --no-floppy --fs-uuid --set=root 597e8c88-8d50-443f-ae19-f510844f5d4e
fi
echo	'Loading Linux 5.15.0-gentoo-x86_64 ...'
linux	/vmlinuz-5.15.0-gentoo-x86_64 root=/dev/sda6 ro BOOT_IMAGE=/kernel-genkernel-x86_64-5.15.0-gentoo root=/dev/ram0 ramdisk=8192 real_root=/dev/sda6 init=/linuxrc resume=swap:/dev/sda5 real_resume=/dev/sda5 intel_iommu=off net.ifnames=0 snd_hda_intel.power_save=0 radeon.modeset=1
echo	'Loading initial ramdisk ...'
initrd	/microcode.cpio /initramfs-5.15.0-gentoo-x86_64.img
}
submenu 'Advanced options for Gentoo GNU/Linux' $menuentry_id_option 'gnulinux-advanced-9db2f668-a682-4d6f-abc5-ed6f6c515b95' {
menuentry 'Gentoo GNU/Linux, with Linux 5.15.0-gentoo-x86_64' --class gentoo --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-5.15.0-gentoo-x86_64-advanced-9db2f668-a682-4d6f-abc5-ed6f6c515b95' {
load_video
set gfxpayload=1024x768
insmod gzio
insmod part_msdos
insmod ext2
set root='hd0,msdos3'
if [ x$feature_platform_search_hint = xy ]; then
search --no-floppy --fs-uuid --set=root --hint-bios=hd0,msdos3 --hint-efi=hd0,msdos3 --hint-baremetal=ahci0,msdos3  597e8c88-8d50-443f-ae19-f510844f5d4e
else
search --no-floppy --fs-uuid --set=root 597e8c88-8d50-443f-ae19-f510844f5d4e
fi
echo	'Loading Linux 5.15.0-gentoo-x86_64 ...'
linux	/vmlinuz-5.15.0-gentoo-x86_64 root=/dev/sda6 ro BOOT_IMAGE=/kernel-genkernel-x86_64-5.15.0-gentoo root=/dev/ram0 ramdisk=8192 real_root=/dev/sda6 init=/linuxrc resume=swap:/dev/sda5 real_resume=/dev/sda5 intel_iommu=off net.ifnames=0 snd_hda_intel.power_save=0 radeon.modeset=1
echo	'Loading initial ramdisk ...'
initrd	/microcode.cpio /initramfs-5.15.0-gentoo-x86_64.img
}
menuentry 'Gentoo GNU/Linux, with Linux 5.15.0-gentoo-x86_64 (recovery mode)' --class gentoo --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-5.15.0-gentoo-x86_64-recovery-9db2f668-a682-4d6f-abc5-ed6f6c515b95' {
load_video
set gfxpayload=1024x768
insmod gzio
insmod part_msdos
insmod ext2
set root='hd0,msdos3'
if [ x$feature_platform_search_hint = xy ]; then
search --no-floppy --fs-uuid --set=root --hint-bios=hd0,msdos3 --hint-efi=hd0,msdos3 --hint-baremetal=ahci0,msdos3  597e8c88-8d50-443f-ae19-f510844f5d4e
else
search --no-floppy --fs-uuid --set=root 597e8c88-8d50-443f-ae19-f510844f5d4e
fi
echo	'Loading Linux 5.15.0-gentoo-x86_64 ...'
linux	/vmlinuz-5.15.0-gentoo-x86_64 root=/dev/sda6 ro single BOOT_IMAGE=/kernel-genkernel-x86_64-5.15.0-gentoo root=/dev/ram0 ramdisk=8192 real_root=/dev/sda6 init=/linuxrc resume=swap:/dev/sda5 real_resume=/dev/sda5 intel_iommu=off net.ifnames=0 snd_hda_intel.power_save=0 radeon.modeset=1
echo	'Loading initial ramdisk ...'
initrd	/microcode.cpio /initramfs-5.15.0-gentoo-x86_64.img
}
}

### END /etc/grub.d/10_linux ###
[...]

10. Reboot

11. Rebuild VirtualBox if it is installed

root # emerge virtualbox

12. Check the current version of the Intel CPU microcode

Either:

root # dmesg | grep microcode

or:

root # grep microcode /proc/cpuinfo

For example:

root # dmesg | grep microcode
[    0.000000] microcode: microcode updated early to revision 0xa, date = 2018-05-08
[    0.127937] MDS: Vulnerable: Clear CPU buffers attempted, no microcode
[    1.558008] microcode: sig=0x106e5, pf=0x10, revision=0xa
[    1.559335] microcode: Microcode Update Driver: v2.2.
root # grep microcode /proc/cpuinfo
microcode       : 0xa
microcode       : 0xa
microcode       : 0xa
microcode       : 0xa
microcode       : 0xa
microcode       : 0xa
microcode       : 0xa
microcode       : 0xa

Note from the output of the dmesg command that this specific CPU model is susceptible to the MDS (Microarchitectural Data Sampling) vulnerability.

13. Edit /var/lib/portage/world and add (or change) the specific kernel sources package version

I do this in order to ensure the command ‘emerge --depclean‘ does not remove a specific kernel’s source code during a world update. I want Portage always to install the latest version of gentoo-sources but not to delete the version of gentoo-sources that corresponds to the kernel my installation is currently using.

For example, let’s say I have just replaced a kernel built from gentoo-sources:5.15.11 with a kernel built from gentoo-sources:5.15.12. My world file would initially contain the following:

[...]
sys-kernel/gentoo-sources
sys-kernel/gentoo-sources:5.15.11
[...]

If, following a successful reboot with kernel 5.15.12, I want to delete the files for kernel 5.15.11 in /boot/ (initramfs-5.15.11-gentoo-x86_64.img, System.map-5.15.11-gentoo-x86_64 and vmlinuz-5.15.11-gentoo-x86_64) and to edit the file /boot/grub/grub.cfg to remove the menu entries for kernel 5.15.11, I would change the world file’s contents to:

[...]
sys-kernel/gentoo-sources
sys-kernel/gentoo-sources:5.15.12
[...]

On the other hand, if, following a successful reboot, I want to keep the files for both kernel 5.15.11 and kernel 5.15.12, I would change the world file’s contents to:

[...]
sys-kernel/gentoo-sources
sys-kernel/gentoo-sources:5.15.11
sys-kernel/gentoo-sources:5.15.12
[...]
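
Before actually running ‘emerge --depclean‘ after editing the world file, I find it worth previewing what Portage would remove, to confirm that the kernel sources I want to keep are not listed for removal:

root # emerge --pretend --depclean sys-kernel/gentoo-sources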

Work-around if movie subtitles restart after the final subtitle is displayed

If I’m watching movies in a language I don’t understand, I want subtitles. On my computers this is possible as long as there is a subtitles file with the name suffix .srt and the same prefix name as the .mp4 video file in the same directory. I usually prefer to view movies on my TV with a bigger screen, so I copy the movie to a HDD that is normally connected to my TV (a FINLUX model 43-FUD-8020). However, the built-in media player in the TV does not show the subtitles in the .srt file, even when it is in the same directory as the .mp4 file. Therefore I use the MKVToolNix utility mkvmerge to put the movie and subtitles into a Matroska multimedia container (.mkv file), and the TV’s media player can play these .mkv files and does display the subtitles. In fact, so can my laptops and desktop running Linux (I have not tried on a machine running Windows 10, but I assume Windows 10 would have no trouble either).

To install in Lubuntu 20.10:

user $ sudo apt install mkvtoolnix

To install in Gentoo Linux:

root # emerge mkvtoolnix

To create a Matroska file containing the movie plus subtitles:

user $ mkvmerge -o movie_with_subtitles.mkv movie_without_subtitles.mp4 subtitles.srt

Normally the last subtitle in a movie does not occur at the very end of the movie. For example, there could be action without dialogue at the end of the movie, and/or final credits without dialogue. The media players on my laptops and desktop running Linux display the last subtitle and play the rest of the movie in the Matroska container as expected. However, the media player in my FINLUX TV displays the last subtitle and then displays the subtitles from the beginning again, at breakneck speed. Annoying to say the least. As the problem does not occur on my laptops and desktop with the same .mkv file, I assume the problem lies with the media player in the TV.

At first I suspected that the .srt file was the cause, but it correctly uses UTF-8 encoding and the syntax of the contents is correct. Anyway, just to be sure I ran it through an online cleaner for .srt files and re-generated the .mkv file, but that made no difference on the TV. Since there is no problem playing the .mkv file on my computers, I can only assume the TV’s media player is indeed at fault. I cannot do anything about the TV’s media player, so I came up with an acceptable work-around: I added a dummy subtitle at the end of the .srt file that is set to be displayed at the very end of the movie. For example, let’s say the movie duration is two hours, 12 minutes and twenty-two seconds but the last subtitle is at 01:56:38,201:

188
01:56:38,201 --> 01:56:40,286
The end justifies the means.

I edited the file and added a dummy subtitle at the end:

188
01:56:38,201 --> 01:56:40,286
The end justifies the means.

189
02:12:19,001 --> 02:12:21,999
THE END.

I then re-generated the .mkv file using the mkvmerge command and, lo and behold, after the subtitle displayed between 01:56:38,201 and 01:56:40,286 the TV no longer displays any subtitles until the very end of the movie, when it displays ‘THE END’ and the video ends. In reality the movie must be very slightly longer than 02:12:21,999 because, after displaying ‘THE END’, the first six subtitles in the subtitle file are displayed in rapid succession before the media player stops playing, but that is no big deal.
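
Incidentally, if you do not know the exact running time to use for the dummy subtitle’s timestamps, ffprobe (part of FFmpeg, assuming that is installed) will print the duration in seconds, which can then be converted to hh:mm:ss (7942 seconds being roughly 02:12:22 in this example):

user $ ffprobe -v error -show_entries format=duration -of default=noprint_wrappers=1:nokey=1 movie_with_subtitles.mkv
7942.105000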

I searched the Web quite a lot and was unable to find any mention of this particular problem, so I am posting my work-around here just in case it helps someone else experiencing the same problem.

‘IP configuration was unavailable’: a laptop cannot connect wirelessly to a router

I recently switched my ISP from BT to Virgin Media because the speed and reliability of the broadband connection were poor. A Virgin Media Hub 3 was supplied as part of the package, and the TV, laptops (Gentoo Linux, Windows 10 and macOS), desktops (Lubuntu and Windows 10), tablets and phones (Android and iOS) could connect to the Hub 3 without any trouble. A few weeks later Virgin Media offered to upgrade the hub to a Hub 4. I don’t look a gift horse in the mouth, so I accepted the offer. The Hub 4 does indeed improve on the already excellent broadband speeds I was getting with the Hub 3. On the downside the Hub 4’s configuration software has a couple of bugs, but I was able to live with them.

In addition to the above-mentioned hub configuration bugs, one of my laptops (a Compal NBLB2 with Intel Wireless WiFi Link 5300 AGN adapter) running Linux could not connect to the hub via Wi-Fi, even though it had no trouble connecting to the Hub 3. All other devices so far can connect to the Hub 4, so I was scratching my head. The laptop has no trouble connecting to the Hub 4 via Ethernet cable.

The hub’s 5G and 2.4G Wi-Fi bands originally had the same SSID (I’ll call it ‘VM1234567‘ here). I decided to rename the two bands ‘VM1234567_5G‘ and ‘VM1234567_2.4G‘ respectively, via the hub’s Settings in a Web browser. Very occasionally the laptop could connect to either SSID, but usually it could not connect and the following notification would pop up:

Wireless interface (wlan0)
IP configuration was unavailable

I did various things to try to get the laptop to connect, such as:

  • changing Wi-Fi channel selection in the hub from Auto to Manual and specifying different channels myself;
  • specifying the BSSID in the Desktop Environment’s GUI front-end to NetworkManager;
  • explicitly restricting the connection to the specific (and only) Wi-Fi interface (‘wlan0‘, in my case) in the DE’s GUI front-end to NetworkManager;
  • disabling IPv6 (Virgin Media does not use IPv6) in the DE’s GUI front-end to NetworkManager;
  • disabling the UFW firewall.

None of the above enabled the laptop to connect to the hub via Wi-Fi.

I installed the GUI Wi-Fi scanner LinSSID on my other Linux machines so I could check which 2.4G and 5G Wi-Fi channels were being used by the hub and by my neighbours’ hubs/routers. Note that LinSSID requires the utility iw to be installed and CONFIG_CFG80211_WEXT to be set in the kernel. The NetworkManager command ‘nmcli dev wifi list‘ can also be used to check which channels are being used. The channels selected automatically by the hub looked reasonable to me, and the different channels I selected manually did not improve the situation.

Now, coincidentally that laptop can dual-boot Windows 7, so I booted Windows 7 to see if it could connect to the hub via Wi-Fi. However, Windows 7 had the same Wi-Fi connectivity problem as Linux. The Network and Sharing Centre displayed the error message ‘The default gateway is not available’ and allowed me to run the so-called Troubleshooter, which fixed the problem in Windows 7. The laptop could then connect to the hub and to the Internet via the 5G Wi-Fi band (the hub’s DHCP server allocated IP address 192.168.0.145 to the laptop). So it appeared the lack of a specified default gateway was the problem in both OSs. This surprised me because I had never had to specify a default gateway on my machines, and still do not have to on the other machines. Anyway, I booted back into Linux and did the following:

STEP 1 (on the Compal laptop)

Connected to the hub via an Ethernet cable.

Opened the Hub 4 Settings page (192.168.0.1) in a Web browser.

Selected ‘Advanced settings’ > ‘DHCP’

Added the MAC address of the laptop’s Wi-Fi adapter and the IP address 192.168.0.145 to the Reserved list.

STEP 2 (on the Compal laptop)

Selected ‘System Settings’ > ‘Network’ | ‘Connections’

Selected Wi-Fi connection VM1234567_5G

Entered the following on the ‘IPv4’ tab:

Method: Manual
DNS Servers: 194.168.4.100,194.168.8.100
Search Domains: cable.virginm.net (The laptop connects without this entry, so I’m not sure if it makes any difference.)

Clicked ‘+ Add’ and added the gateway details as follows:

Address: 192.168.0.145
Netmask: 255.255.255.0
Gateway: 192.168.0.1

Ticked ‘IPv4 is required for this connection’.

Set the following on the ‘Wi-Fi’ tab (this is optional):

BSSID: <hub’s MAC address corresponding to the SSID>
Restrict to device: wlan0 (<MAC address of the laptop’s Wi-Fi adapter>)

The BSSID can be found either by using LinSSID on a machine that can access the Wi-Fi network or by using the command ‘nmcli dev wifi list‘ in a terminal window. The MAC address of the laptop’s Wi-Fi adapter can be found using the commands ‘ip link‘ or ‘ifconfig‘.

Clicked on the down arrow in the ‘Restrict to device:’ box and selected the device (wlan0, in my case).

STEP 3 (on the Compal laptop)

Selected ‘System Settings’ > ‘Network’ | ‘Connections’

Selected Wi-Fi connection VM1234567_2.4G

Performed the same configuration steps as for VM1234567_5G, except that the SSID VM1234567_2.4G has a different BSSID (found using LinSSID or nmcli) from the SSID VM1234567_5G.
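
For reference, the same static IPv4 configuration can also be applied from a terminal window using nmcli rather than the DE’s GUI front-end. A rough sketch for the 5G connection, using the addresses given above (the connection name will be whatever NetworkManager has called the connection on your machine):

user $ nmcli connection modify VM1234567_5G ipv4.method manual ipv4.addresses 192.168.0.145/24 ipv4.gateway 192.168.0.1 ipv4.dns "194.168.4.100,194.168.8.100" ipv4.dns-search cable.virginm.net
user $ nmcli connection up VM1234567_5G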

The laptop’s 5G Wi-Fi connection now works very well with the Hub 4. The 2.4G connection can be slow (even when the signal is at 100%) and sometimes stalls, so I’m not sure I have fixed that connection completely, or even if it is fixable in this case. I still do not know why the problem occurs with the Hub 4 but not the Hub 3, and why it only happens with one specific machine. Anyway, the 5G connection now works fine, so I’m happy.

Gentoo Linux: Building/rebuilding a kernel and Intel CPU microcode in an installation without initramfs

In a 2016 post I explained how to update the Intel CPU microcode in a Gentoo Linux Stable Branch installation without an initramfs (I do not use sys-kernel/genkernel to build the kernel in the installation on my Clevo W230SS laptop). The behaviour of the tool sys-apps/iucode_tool for updating the Intel CPU microcode has changed since that post, hence this update.

Although not essential I normally perform the microcode upgrade procedure when I either rebuild or upgrade the Linux kernel, therefore I explain both procedures contiguously here.

These days the grub-mkconfig command edits the file /boot/grub/grub.cfg to add a line to the GRUB menu entries, to load the CPU microcode at boot, but nevertheless I prefer to follow a slightly different method that works reliably for me.

Below is the procedure I follow to build/rebuild the kernel and the Intel CPU microcode. Others may have a different approach, but this has always worked well for me, even if some of the steps are sometimes nugatory.

1. Mount the boot directory if it is on a separate partition

root # mount /dev/sda1 /boot

2. Check which kernel sources are installed and which of those sources is currently selected

root # eselect kernel list

3. Make a back-up of the current kernel configuration file

root # cp /usr/src/linux-`uname -r`/.config /home/fitzcarraldo/kernel-config-`uname -r`

4. Select the kernel sources I want to build

root # eselect kernel set <n>

5. Change to the currently selected kernel sources directory

root # cd /usr/src/linux

6. If wanting to build a new version of the kernel, create a template configuration file

N.B. Do NOT do this if rebuilding the kernel version that is currently in use.

root # cp /usr/src/linux-`uname -r`/.config /usr/src/linux/.config

7. Remove any existing object files

Definitely needed if the ‘make‘ command (see further on) returns an error message mentioning an old version of the compiler. It does no harm to perform this step in any case, so I always do it.

root # make clean

8. If building a new version of the kernel, create a new configuration file

N.B. Do NOT do this if rebuilding the kernel version that is currently in use.

root # make olddefconfig

The command ‘make olddefconfig‘ will edit the existing /usr/src/linux/.config file, keeping all the existing options in the file and setting any new options to their recommended (i.e. default) values.
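
If I want to see exactly which options ‘make olddefconfig‘ added or changed relative to the backed-up configuration file, the kernel source tree includes a helper script for this (run from /usr/src/linux, using the back-up made in Step 3):

root # scripts/diffconfig /home/fitzcarraldo/kernel-config-`uname -r` .config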

9. Display a TUI menu of the kernel options in the .config file and make any desired changes

root # make menuconfig

I have configured the following kernel options relating to the early loading of the Intel CPU microcode (see later):

root # grep CONFIG_BLK_DEV_INITRD /usr/src/linux/.config
CONFIG_BLK_DEV_INITRD=y
root # grep CONFIG_MICROCODE /usr/src/linux/.config
CONFIG_MICROCODE=y
CONFIG_MICROCODE_INTEL=y
# CONFIG_MICROCODE_AMD is not set
# CONFIG_MICROCODE_OLD_INTERFACE is not set
root # grep CONFIG_INITRAMFS_SOURCE /usr/src/linux/.config
CONFIG_INITRAMFS_SOURCE=""

10. Build the kernel and modules

root # make && make modules_install
root # make install
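
The ‘make install‘ step copies the kernel image, System.map and config to the boot directory; on my installation the result looks something like this (the version string will obviously differ):

root # ls /boot/*5.10.61-gentoo*
/boot/System.map-5.10.61-gentoo  /boot/config-5.10.61-gentoo  /boot/vmlinuz-5.10.61-gentoo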

11. Rebuild any third-party packages containing kernel modules

These could include packages such as nvidia-drivers, for example.

root # emerge @module-rebuild

In my case, currently the @module-rebuild set only comprises the following two packages:

root # cat /var/lib/module-rebuild/moduledb
a:1:app-emulation/virtualbox-modules-6.1.24
a:1:x11-drivers/nvidia-drivers-470.63.01

12. Rebuild the X.Org Server and X.Org drivers

I always do this even though not always necessary. One less thing to think about (not rebuilding them has sometimes caused me problems).

root # emerge xorg-server xorg-drivers

13. Rebuild NetworkManager if it is installed

I always do this even though not always necessary. One less thing to think about (not rebuilding it has sometimes caused me problems).

root # emerge networkmanager

14. If there is a new version of the Intel CPU microcode, generate it and copy it to the boot directory

Updates to the package sys-firmware/intel-microcode in the last couple of years have not resulted in a change to the version of Intel CPU microcode for the fourth-generation Intel Core i7-4810MQ CPU in my Clevo W230SS laptop, so I assume Intel no longer supports that version of CPU. Nevertheless it does no harm to repeat the procedure.

root # rm /boot/microcode.cpio
root # iucode_tool -S --write-earlyfw=/boot/microcode.cpio /lib/firmware/intel-ucode/*
root # rm /boot/intel-uc.img

(The third command is to stop the grub-mkconfig command (see later) adding intel-uc.img to the initrd line in the grub.cfg file.)

15. If a different version of the kernel has just been built, or if this is the first time upgrading the CPU microcode, create a new grub.cfg file

15.1 First check the contents of /etc/default/grub to make sure it will be OK for the new version of the kernel

root # nano /etc/default/grub

Modify the contents of /etc/default/grub if necessary for the kernel that has just been built.

15.2 Generate a new grub.cfg file

root # grub-mkconfig -o /boot/grub/grub.cfg

15.3 Check the new grub.cfg file includes the loading of the CPU microcode

root # nano /boot/grub/grub.cfg

The last line for each menu entry (i.e. the line before the closing curly bracket of the menu entry) should contain only ‘initrd /microcode.cpio‘, as shown in the example file excerpt below:

[...]
### BEGIN /etc/grub.d/10_linux ###
menuentry 'Gentoo GNU/Linux' --class gentoo --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-simple-525a90f1-8ad2-44a3-ade3-20f18a0a9595' {
load_video
insmod gzio
insmod part_msdos
insmod ext2
set root='hd0,msdos1'
if [ x$feature_platform_search_hint = xy ]; then
search --no-floppy --fs-uuid --set=root --hint-bios=hd0,msdos1 --hint-efi=hd0,msdos1 --hint-baremetal=ahci0,msdos1  f6ffc085-66fe-4bbe-b080-cec355749f85
else
search --no-floppy --fs-uuid --set=root f6ffc085-66fe-4bbe-b080-cec355749f85
fi
echo	'Loading Linux 5.10.61-gentoo ...'
linux	/vmlinuz-5.10.61-gentoo root=/dev/sda5 ro  locale=en_GB i965.modeset=1 rcutree.rcu_idle_gp_delay=1 acpi_enforce_resources=lax reboot=force raid=noautodetect resume=/dev/sda2
echo	'Loading initial ramdisk ...'
initrd	/microcode.cpio
}
submenu 'Advanced options for Gentoo GNU/Linux' $menuentry_id_option 'gnulinux-advanced-525a90f1-8ad2-44a3-ade3-20f18a0a9595' {
menuentry 'Gentoo GNU/Linux, with Linux 5.10.61-gentoo' --class gentoo --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-5.10.61-gentoo-advanced-525a90f1-8ad2-44a3-ade3-20f18a0a9595' {
load_video
insmod gzio
insmod part_msdos
insmod ext2
set root='hd0,msdos1'
if [ x$feature_platform_search_hint = xy ]; then
search --no-floppy --fs-uuid --set=root --hint-bios=hd0,msdos1 --hint-efi=hd0,msdos1 --hint-baremetal=ahci0,msdos1  f6ffc085-66fe-4bbe-b080-cec355749f85
else
search --no-floppy --fs-uuid --set=root f6ffc085-66fe-4bbe-b080-cec355749f85
fi
echo	'Loading Linux 5.10.61-gentoo ...'
linux	/vmlinuz-5.10.61-gentoo root=/dev/sda5 ro  locale=en_GB i965.modeset=1 rcutree.rcu_idle_gp_delay=1 acpi_enforce_resources=lax reboot=force raid=noautodetect resume=/dev/sda2
echo	'Loading initial ramdisk ...'
initrd	/microcode.cpio
}
menuentry 'Gentoo GNU/Linux, with Linux 5.10.61-gentoo (recovery mode)' --class gentoo --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-5.10.61-gentoo-recovery-525a90f1-8ad2-44a3-ade3-20f18a0a9595' {
load_video
insmod gzio
insmod part_msdos
insmod ext2
set root='hd0,msdos1'
if [ x$feature_platform_search_hint = xy ]; then
search --no-floppy --fs-uuid --set=root --hint-bios=hd0,msdos1 --hint-efi=hd0,msdos1 --hint-baremetal=ahci0,msdos1  f6ffc085-66fe-4bbe-b080-cec355749f85
else
search --no-floppy --fs-uuid --set=root f6ffc085-66fe-4bbe-b080-cec355749f85
fi
echo	'Loading Linux 5.10.61-gentoo ...'
linux	/vmlinuz-5.10.61-gentoo root=/dev/sda5 ro single
echo	'Loading initial ramdisk ...'
initrd	/microcode.cpio
}
}

### END /etc/grub.d/10_linux ###
[...]

16. Reboot

17. Rebuild VirtualBox if it is installed

root # emerge virtualbox

18. Check the current version of the Intel CPU microcode

Either:

root # dmesg | grep microcode

or:

root # grep microcode /proc/cpuinfo

For example:

root # dmesg | grep microcode
[    0.000000] microcode: microcode updated early to revision 0x28, date = 2019-11-12
[    0.335631] microcode: sig=0x306c3, pf=0x10, revision=0x28
[    0.335730] microcode: Microcode Update Driver: v2.2.
root # grep microcode /proc/cpuinfo
microcode       : 0x28
microcode       : 0x28
microcode       : 0x28
microcode       : 0x28
microcode       : 0x28
microcode       : 0x28
microcode       : 0x28
microcode       : 0x28

19. Edit /var/lib/portage/world and add (or change) the specific kernel sources package version

I do this in order to ensure the command ‘emerge --depclean‘ does not remove a specific kernel’s source code during a world update. I want Portage always to install the latest (stable) version of gentoo-sources but not to delete the version of gentoo-sources that corresponds to the kernel my installation is currently using.

For example, let’s say I have just replaced a kernel built from gentoo-sources:4.19.57 with a kernel built from gentoo-sources:4.19.66. My world file would initially contain the following:

[...]
sys-kernel/gentoo-sources
sys-kernel/gentoo-sources:4.19.57
[...]

If, following a successful reboot with kernel 4.19.66, I want to delete the files for kernel 4.19.57 in /boot/ (System.map-4.19.57-gentoo, config-4.19.57-gentoo and vmlinuz-4.19.57-gentoo) and to edit the file /boot/grub/grub.cfg to remove the menu entries for kernel 4.19.57, I would change the world file’s contents to:

[...]
sys-kernel/gentoo-sources
sys-kernel/gentoo-sources:4.19.66
[...]

On the other hand, if, following a successful reboot, I want to keep the files for both kernel 4.19.57 and kernel 4.19.66, I would change the world file’s contents to:

[...]
sys-kernel/gentoo-sources
sys-kernel/gentoo-sources:4.19.57
sys-kernel/gentoo-sources:4.19.66
[...]

Browsing a WebDAV share in Linux and Windows 10

In this post I explain how I configured my machines running two Linux distributions (Gentoo Linux and Lubuntu 20.10) and my Windows 10 test machine to enable me to browse a shared folder on my file server (running ownCloud, in my case) that uses the WebDAV protocol. I cover two options for configuring Linux to browse WebDAV shares. Further options exist in Linux, but the two methods I give here are fine for my purposes.

I installed ownCloud on my Linux server in a slightly different way to the method in the ownCloud installation manual, and my examples in this post use the URI https://fitzcarraldo.ddns.net/owncloud/remote.php/webdav rather than the usual https://fitzcarraldo.ddns.net/remote.php/webdav for ownCloud, so replace the URI in my examples with the appropriate URI in your case. The username of the user account on each client machine is ‘fitz’, and the ownCloud username (davusername) on the server is ‘bsf’. Obviously replace those with the usernames in your case.

PART 1 – LINUX

Unless I mention the distribution explicitly, the following steps apply to both Linux distributions. As my Gentoo Linux installations use KDE, the steps for Gentoo Linux assume the file manager is Dolphin. My Lubuntu installation uses the file manager PCManFM-Qt.

1. Install davfs2 if it is not already installed

Gentoo Linux:

root # emerge davfs2

That command installs three packages:

acct-group/davfs2
acct-user/davfs2
net-fs/davfs2

Lubuntu 20.10:

user $ sudo apt install davfs2

2. Lubuntu 20.10: Allow mounting by non-root users

user $ sudo dpkg-reconfigure davfs2

   Package configuration
   
    ┌──────────────────────────────────────────┤ Configuring davfs2 ├───────────────────────────────────────────┐
    │                                                                                                           │
    │ The file /sbin/mount.davfs must have the SUID bit set if you want to allow unprivileged (non-root) users  │
    │ to mount WebDAV resources.                                                                                │
    │                                                                                                           │
    │ If you do not choose this option, only root will be allowed to mount WebDAV resources. This can later be  │
    │ changed by running 'dpkg-reconfigure davfs2'.                                                             │
    │                                                                                                           │
    │ Should unprivileged users be allowed to mount WebDAV resources?                                           │
    │                                                                                                           │
    │                               <Yes>                                  <No>                                 │
    │                                                                                                           │
    └───────────────────────────────────────────────────────────────────────────────────────────────────────────┘

(Do not do anything in Gentoo Linux; the SUID bit should be set automatically.)

3. Check the SUID bit has been set (notice the ‘s’ in the file’s permissions)

Gentoo Linux:

user $ ls -la /sbin/mount.davfs
lrwxrwxrwx 1 root root 21 Sep 25 23:03 /sbin/mount.davfs -> /usr/sbin/mount.davfs
user $ ls -la /usr/sbin/mount.davfs
-rws--x--x 1 root root 130752 Sep 25 23:03 /usr/sbin/mount.davfs

If the SUID bit has not been set automatically, you can set it manually:

user $ sudo chmod u+s /usr/sbin/mount.davfs

Lubuntu 20.10:

user $ ls -la /sbin/mount.davfs
-rwsr-xr-x 1 root root 137464 Aug  8  2020 /sbin/mount.davfs

4. Add the user to the davfs2 group

user $ sudo usermod -aG davfs2 fitz

Logout and login again and check the user is a member of the group:

user $ groups | grep -q davfs2 && echo "OK"
OK

5. Leave the lines in the following files commented out (i.e. accept the defaults)

/etc/davfs2/davfs2.conf (system-wide)

~/.davfs2/davfs2.conf (user-specific)

6. Option 1 (simplest!) – Enter the URI in the file manager and bookmark it

6.1 Gentoo Linux with KDE

Enter the following URI on the Dolphin file manager’s address line and press Enter:

webdavs://fitzcarraldo.ddns.net/owncloud/remote.php/webdav

You will be prompted to enter the username and password for the WebDAV share.

Select ‘File’ > ‘Add to Places’ in Dolphin to bookmark the share. From then on, you can browse the share by clicking on the share in the Remote section in Dolphin’s Places pane. You can rename the bookmark if you wish (right-click and select ‘Edit…’).

Another way to do this in KDE is as follows:

  1. click on ‘Network’ in the Places pane;
  2. click on ‘Add Network Folder’ next to the address bar;
  3. select ‘WebFolder (webdav)’ and click ‘Next’;
  4. enter the fields as follows:
    • Name: webdav
    • User: bsf
    • Server: fitzcarraldo.ddns.net
    • Port: 443 (I use Port 443 but you may be using a different port)
    • Folder: owncloud/remote.php/webdav
  5. select ‘Create an icon for this folder’ and ‘Use encryption’;
  6. click ‘Save & Connect’;
  7. right-click on the webdav icon in the main Dolphin pane and select ‘Add to Places’.

6.2 Lubuntu 20.10

Enter the following URI on the PCManFM-Qt file manager’s address line and press Enter:

davs://fitzcarraldo.ddns.net/owncloud/remote.php/webdav

You will be prompted to enter the username and password for the WebDAV share.

Select ‘Bookmarks’ > ‘Add to Bookmarks’ in PCManFM-Qt to bookmark the share. From then on, you can browse the share by clicking on the share in the Bookmarks section in PCManFM-Qt’s Lists pane. You can rename the bookmark if you wish (Bookmarks > Edit Bookmarks).

7. Option 2 – Assign a mountpoint at boot:

Add the following credentials line in the file ~/.davfs2/secrets:

https://fitzcarraldo.ddns.net/owncloud/remote.php/webdav <davusername> <davpassword>

and set the file permissions as follows:

user $ chmod 600 ~/.davfs2/secrets

Create a user directory onto which to mount the share:

user $ mkdir ~/webdav

Add a line in /etc/fstab to map the WebDAV share onto that directory at boot:

# <file system>                                            <mount point>       <type>  <options>        <dump>  <pass>
https://fitzcarraldo.ddns.net/owncloud/remote.php/webdav   /home/fitz/webdav   davfs   noauto,user,rw   0       0

The options ‘auto‘ and ‘_netdev‘ do not mount the WebDAV share automatically at boot in my installations; I am prompted to enter the davusername and davpassword manually early in the boot process if I include those options. To avoid being prompted I use the ‘noauto‘ option and do not bother including the ‘_netdev‘ option. There are ways to mount a WebDAV share automatically at boot whether your installation uses systemd, OpenRC or other rc systems. Nevertheless I prefer the WebDAV share not to be mounted automatically at boot, especially in the case of my laptops.

Reboot to check everything works.

Lubuntu 20.10:

The share will be listed as ‘webdav’ (unmounted) in the Devices section under Lists in PCManFM-Qt. You can click on the unmounted share to mount it, and click on the Unmount icon to unmount it. Everything works as expected.

Gentoo Linux with KDE:

The share is not listed in the Places pane in Dolphin but the share can be mounted manually from the command line as follows:

user $ mount ~/webdav
/sbin/mount.davfs: warning: the server does not support locks

(The ‘user‘ option in /etc/fstab allows the non-root user to mount the share.)

The main pane displaying the contents of ~/webdav/ will only be populated with the contents of the remote folder after the share is mounted.
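
When I have finished browsing the share, I unmount it from the command line too (the ‘user‘ option also permits the user who mounted the share to unmount it); davfs2 may take a few moments to synchronise its local cache before the unmount completes:

user $ umount ~/webdav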

The share is browsable in Dolphin. I can perform all file and folder operations in KDE apart from one thing: I cannot copy files to the server (neither from the local machine nor from the server); Dolphin displays messages such as ‘There is not enough space on the disk to write file:///home/fitz/testfile.txt’. I suspect the problem is with KDE, because I can copy files to and on the share by using the command line (for example the commands ‘cp ~/test1.txt ~/webdav/‘ and ‘cp ~/webdav/test2.txt ~/webdav/test3.txt‘ work fine). I have yet to find a solution to this issue, so I use Option 1 for Gentoo Linux running KDE, which works fine. To create a bookmark in Dolphin’s Places pane, browse the share and select ‘File’ > ‘Add to Places’.
 
PART 2 – WINDOWS 10

There is a Map Network Drive Wizard, but it is not as straightforward for WebDAV shares as it is with SMB shares. See the thread Cannot connect to webdav service for the type of behaviour I experienced, although in my case I could rarely establish a connection using either ‘Map network drive’ or ‘Add a network location’, and the mapping was always lost if I logged out or rebooted, despite selecting ‘Reconnect at sign-in’. I then discovered several invalid URIs in Registry keys. Presumably these were left in the Registry after my various unsuccessful configuration attempts using the wizard. To finally succeed in mapping the ownCloud WebDAV shared folder I had to search for the string ‘fitzcarraldo.ddns.net’ in the Registry (see Steps 1 & 2 below for how to open the Registry) and delete any existing strings similar or identical to ‘https://fitzcarraldo.ddns.net/owncloud/remote.php/webdav‘, as they seemed to interfere with successful mapping of the network directory.

After making sure the Registry no longer contained any incorrect-looking WebDAV URIs for my ownCloud server, I used the following steps:

  1. Right-click on Windows’ Start Menu icon on the left of the Task Bar and select ‘Run’.
  2. Enter ‘regedit’ in the Open box and click ‘OK’.
  3. Select Computer\HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\WebClient\Parameters
  4. If the value in BasicAuthLevel is not already 2, change it to 2.
  5. In the ‘Type here to search’ box on the Task Bar, enter ‘Services’ and press Enter.
  6. Click ‘Services App’.
  7. Scroll down to ‘WebClient’ in the Services window.
  8. Right-click ‘WebClient’ and select ‘Properties’.
  9. If ‘Startup type’ is not already set to ‘Automatic’, change it to ‘Automatic’ and click ‘Apply’.
  10. Launch File Explorer.
  11. Right-click ‘This PC’ and select ‘Map network drive…’.
  12. Select the drive letter (default is Z:).
  13. In the Folder box enter \\fitzcarraldo.ddns.net@SSL\owncloud\remote.php\webdav and make sure only ‘Reconnect at sign-in’ is ticked.
  14. Click ‘Finish’.
  15. A network icon and the label ‘webdav (\\fitzcarraldo.ddns.net@SSL\owncloud\remote.php) (Z:)’ should appear under ‘My PC’. Clicking that icon displays the contents of the shared folder of my ownCloud account on my server.
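
Incidentally, once the WebClient service is running, the same mapping can also be created from a Command Prompt with the ‘net use‘ command instead of the wizard; a rough sketch using the same drive letter and UNC path as above (add ‘/user:bsf‘ if you want to specify the WebDAV username up front):

net use Z: "\\fitzcarraldo.ddns.net@SSL\owncloud\remote.php\webdav" /persistent:yes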

The only Registry entries containing ‘fitzcarraldo.ddns.net’ found by ‘Edit’ > ‘Find…’ are now the following:

Computer\HKEY_CURRENT_USER\Network\Z
RemotePath     REG_SZ     \\fitzcarraldo.ddns.net@SSL\owncloud\remote.php\webdav

Computer\HKEY_CURRENT_USER\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\Map Network Drive MRU
a     REG_SZ     \\fitzcarraldo.ddns.net@SSL\owncloud\remote.php\webdav

Computer\HKEY_CURRENT_USER\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\MountPoints2\##fitzcarraldo.ddns.net@SSL#owncloud#remote.php#webdav
LabelFromReg     REG_SZ     webdav (\\fitzcarraldo.ddns.net@SSL\owncloud\remote.php)

Computer\HKEY_CURRENT_USER\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\PublishingWizard\AddNetworkPlace\AddNetPlace\LocationMRU
a     REG_SZ     https://fitzcarraldo.ddns.net/owncloud/remote.php/webdav

Computer\HKEY_USERS\S-1-5-21-4039722433-590489090-552845671-1001\Network\Z
RemotePath     REG_SZ     \\fitzcarraldo.ddns.net@SSL\owncloud\remote.php\webdav

Computer\HKEY_USERS\S-1-5-21-4039722433-590489090-552845671-1001\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\Map Network Drive MRU
a     REG_SZ     \\fitzcarraldo.ddns.net@SSL\owncloud\remote.php\webdav

Computer\HKEY_USERS\S-1-5-21-4039722433-590489090-552845671-1001\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\MountPoints2\##fitzcarraldo.ddns.net@SSL#owncloud#remote.php#webdav
LabelFromReg     REG_SZ     webdav (\\fitzcarraldo.ddns.net@SSL\owncloud\remote.php)

Computer\HKEY_USERS\S-1-5-21-4039722433-590489090-552845671-1001\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\PublishingWizard\AddNetworkPlace\AddNetPlace\LocationMRU
a     REG_SZ     https://fitzcarraldo.ddns.net/owncloud/remote.php/webdav

 
CONCLUSION

There you have it. I can browse my ownCloud user account folders on my server from my machines running Linux and from my test machine running Windows 10.

Installing and configuring davfs2 in Linux, and using Option 1 to browse a WebDAV share is very easy in both Gentoo Linux running KDE and in Lubuntu 20.10. Using Option 2 is also very easy in Lubuntu 20.10 but is not easy in Gentoo Linux running KDE, and I still need to find out if there is a better approach for Option 2 in Gentoo Linux running KDE.

I found Windows 10 the most problematic, despite the apparent simplicity of the ‘Map network drive’ and ‘Add a network location’ wizards. I discovered that, if I didn’t get the format of the URI correct the first time, Windows 10 would leave ‘cruft’ in the Registry that apparently prevented further mapping attempts from working properly and consistently.

Anyway, everything works the way I want and I hope this post is of some help to others wanting to browse a share using WebDAV, be that a folder in ownCloud, Nextcloud or any other network service requiring the WebDAV protocol.

Removing PipeWire in Gentoo Linux

PipeWire, all the rage these days, was originally developed for video but was later enhanced to support audio as well, and is now an alternative to PulseAudio and JACK. My laptop running Gentoo Stable (amd64) with the KDE Plasma Desktop had been working fine with PipeWire for some time. The pulseaudio and screencast USE flags were both declared in the file /etc/portage/make.conf. Both audio playback and recording worked fine until a recent upgrade of the packages in my world file, when neither worked any more. The Audio Volume loudspeaker icon (the applet kde-plasma/plasma-pa) on the KDE Plasma panel had a red line through it, and the KMix loudspeaker icon (the applet kde-apps/kmix) on the panel was greyed out. Although I cannot be sure, I suspect the problem started when the first version of PipeWire that supported audio was released. The output of the command ‘ps -ef | grep pulse‘ showed me that both PulseAudio and PipeWire were running. At the time I did not know that PulseAudio is not supposed to be running at the same time as PipeWire. Sometimes when I booted the laptop and logged in, the loudspeaker icons on the Panel would appear correctly and audio output would work properly, but usually this was not the case. This behaviour made me wonder if there was some sort of race condition between the two applications at startup.

Anyway, I stopped PulseAudio being launched automatically at startup. I did this by editing the file /etc/pulse/client.conf to add the line ‘autospawn = no‘ (a comment in the as-installed file indicates that the default value for autospawn is ‘yes‘). That did indeed stop PulseAudio from being launched automatically, and left only PipeWire running. The loudspeaker icons were then displayed correctly on the Panel when I logged in to the KDE Plasma Desktop, and audio output then worked. However, PipeWire did not detect the laptop’s built-in microphone, and no Recording channel was displayed by KMix and Audio Volume. The troubleshooting chapter of the Arch Linux Wiki article on PipeWire has a section suggesting a couple of fixes for this problem (Microphone is not detected by PipeWire) but, even so, I decided to ditch PipeWire and revert to PulseAudio. As much as I dislike PulseAudio (see some of my previous posts on the various problems I have experienced with it), these days it is more or less stable on this laptop and I do not have to mess around too much with audio settings.
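
Incidentally, a quick way to check which sound server is actually servicing PulseAudio clients is the following command (pactl is part of PulseAudio); when PipeWire’s PulseAudio replacement is handling audio, the server name is reported as something like ‘PulseAudio (on PipeWire 0.3.x)‘ instead:

user $ pactl info | grep "Server Name"
Server Name: pulseaudio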

A few KDE packages in Gentoo Linux depend on PipeWire (they require the screencast USE flag to be set). I therefore added the following two entries to a file in the directory /etc/portage/package.use/ in order to stop PipeWire being required:

>=sys-apps/xdg-desktop-portal-1.8.1 -screencast
>=kde-apps/krfb-20.12.3 -wayland

I was then able to use the usual command ‘emerge -uvDN @world‘, followed by the command ‘emerge --ask --depclean‘, to rebuild the affected packages and remove PipeWire. I also deleted the line ‘autospawn = no‘ that I had previously added to the file /etc/pulse/client.conf, so that PulseAudio would again be launched automatically at startup. Audio playback and recording are now back to normal. I will probably try PipeWire again in the future but, for the moment, I don’t need it. According to the Gentoo Linux Wiki article on PipeWire:

Warning
As of mid 2021, PipeWire is still in active development and not everything is fully integrated, tested, or implemented – though the project is moving along. While replacing existing audio solutions on Gentoo is possible, the experience is currently not guaranteed to be perfect or free of issues and bugs.

I will therefore wait until the consensus amongst Gentoo Linux users is that PipeWire is trouble-free before I try it again.

croc – another file transfer method

I have lost count of the number of times I have had to send a large file to someone at work, usually in a hurry. I’ve used Dropbox, ownCloud, Firefox Send (no longer available) etc. Transferring large files became a bit easier when e-mail service providers increased the size limit for attachments, but that is still not a solution for very large files. The xkcd cartoon FILE TRANSFER sums up the situation nicely.

I recently discovered the command line utility croc, which the author claims is a way to ‘easily and securely transfer stuff from one computer to another.’ I thought I’d give it a try, if only to have another tool to fall back on in an emergency. It does rely on both ends having croc installed, but hopefully that should not be a show-stopper as croc is available for Linux, Windows, macOS and BSD. To quote the author:

croc differs from a utility like scp because it doesn’t require any two computers to have enabled port-forwarding. Instead, croc will use a relay – a temporary server setup locally (if both computers are on lan) or publicly (default is at croc4.schollz.com). Any two computers can connect to the relay, and after securing their channel with PAKE [password authenticated key exchange], they can transfer encrypted metadata and data through the relay. The relay works by first having the computers communicate the PAKE protocol via websockets, and then exchanging encrypted metadata, and then stapling the TCP connections directly so that they can transfer directly.

So, to use croc you will be dependent on the public relay provided by the author unless you set up your own relay (instructions are provided in the author’s original 2018 blog post introducing croc – see link above – and in various third-party articles about croc, such as ‘Securely Transfer Files and Folders Between Computers Using Croc‘ and ‘Transfer Files And Folders Between Computers With Croc‘).
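
I have not set up my own relay but, going by the author’s documentation, the gist is simply to run ‘croc relay‘ on a machine that both parties can reach and to point both ends at it with the ‘--relay‘ option (the host name below is just a placeholder, and the receiver passes the same option along with the code phrase):

user $ croc relay
user $ croc --relay "myrelay.example.net:9009" send somefile.pdf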

Anyway, I installed croc in Lubuntu and Gentoo Linux from the author’s GitHub repository and indeed it is easy to use and works fine. The binary releases for the various OSs and Linux distributions can be found on the Releases page of the GitHub repository or via the OS package manager.

Lubuntu 20.10:

user $ wget https://github.com/schollz/croc/releases/download/v9.1.6/croc_9.1.6_Linux-64bit.deb
user $ sudo dpkg -i croc_9.1.6_Linux-64bit.deb

Gentoo Linux:

root # emerge net-misc/croc

(Note that croc ebuilds are not currently marked as Stable in the Gentoo Linux Portage tree, so you’ll have to unmask them by keyword if you are using the Stable branch.)

Termux:

I even installed croc in Termux on my Samsung Galaxy Note 20 Ultra 5G, and it works in Android too:

$ pkg install croc

Other OSs and other Linux distributions:

See the instructions in the README file online.

Using croc

Using croc is as simple as entering a command on one computer, informing (via e-mail, telephone, SMS, Signal or other social media) the person using the other computer of the command to use, and entering that command on the other computer. For example:

Sender

user $ croc send Documents/flight-times.ods
Sending 'flight-times.ods' (16.6 kB)
Code is: 8878-salary-courage-roger
On the other computer run

croc 8878-salary-courage-roger

Receiver

user $ croc 8878-salary-courage-roger
Accept 'flight-times.ods' (16.6 kB)? (Y/n) 

If the receiving user then enters ‘Y’, the sending user sees something similar to this:

user $ croc send Documents/flight-times.ods
Sending 'flight-times.ods' (16.6 kB)
Code is: 8878-salary-courage-roger
On the other computer run

croc 8878-salary-courage-roger

Sending (->192.168.1.74:60740)
 100% |████████████████████| (17/17 kB, 10.918 MB/s)
user $ 

and the receiving user sees something similar to this:

user $ croc 8878-salary-courage-roger
Accept 'flight-times.ods' (16.6 kB)? (Y/n) Y

Receiving (<-[::1]:39442)
 100% |████████████████████| (17/17 kB, 3.989 MB/s)
user $ 

The observant reader will notice that the above example shows a file being transferred on the same computer. When transferred between different computers the IP addresses of each computer will be displayed instead. I have used croc to transfer files between different computers on my home network (I would normally just use my NAS for this, though), between remote computers on the Internet, and between my computers and my phone via mobile broadband, and croc works in all cases.

I have not mentioned all croc’s features. I’ll leave you to read up on croc in more detail in the links I’ve given above. It looks like it might be a useful tool to have installed.

Using adb tools in Linux to remove bloatware from my Samsung Galaxy Note 20 Ultra

Samsung included a lot of bloatware on my Galaxy Note 20 Ultra 5G, and it is not possible to uninstall it using Play Store. However, it is possible to remove this stuff using adb tools. I got rid of the bloatware I don’t want very easily using the Linux version of the adb tools.

I have never had a Facebook account and never will, so I decided to remove all trace of it as follows:

1. Installed adb tools

In Lubuntu 20.10:

user $ sudo apt install android-tools-adb

In Gentoo Linux:

root # emerge dev-util/android-tools

2. Enabled ‘Developer Options’ on the phone

‘Settings’ > ‘About Phone’ > ‘Software Information’ and quickly tapped 7 times on ‘Build number’.

3. Enabled USB Debugging on the phone

‘Settings’ > ‘Developer options’, scrolled down and tapped on ‘USB debugging’.

4. Launched adb

user $ adb start-server
* daemon not running; starting now at tcp:5037
* daemon started successfully

5. Connected the phone to the computer using the USB cable

A few prompts on the phone asked whether or not I wanted to allow USB debugging. Tapped ‘Always allow from this computer’ and tapped ‘OK’.

6. Uninstalled Facebook

The packages I needed to uninstall were:

com.facebook.appmanager
com.facebook.katana
com.facebook.services
com.facebook.system

First I tried to uninstall with the ‘-k‘ option:

user $ adb uninstall -k --user 0 com.facebook.appmanager
The -k option uninstalls the application while retaining the data/cache.
At the moment, there is no way to remove the remaining data.
You will have to reinstall the application with the same signature, and fully uninstall it.
If you truly wish to continue, execute 'adb shell cmd package uninstall -k'.

See ‘Difference between pm clear and pm uninstall -k on Android’.

I have never been a member of Facebook and never will, so I dispensed with the ‘-k‘ option and entered the following commands:

user $ adb uninstall --user 0 com.facebook.appmanager
Success
user $ adb uninstall --user 0 com.facebook.katana
Success
user $ adb uninstall --user 0 com.facebook.services
Success
user $ adb uninstall --user 0 com.facebook.system
Success

I didn’t want the LinkedIn, Samsung Global Goals and Spotify apps either, so I uninstalled those too:

user $ adb uninstall --user 0 com.linkedin.android
Success
user $ adb uninstall --user 0 com.samsung.sree
Success
user $ adb uninstall --user 0 com.spotify.music
Success
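
Note that uninstalling with ‘--user 0‘ only removes the app for the current user; the APK itself remains on the system partition, so, should I change my mind later, it should be possible to restore one of these packages without Play Store (an example I have not needed to use myself):

user $ adb shell cmd package install-existing com.spotify.music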

7. Stopped the adb server on the computer

user $ adb kill-server

8. Unplugged the phone from the computer.

That’s it.

In order to uninstall apps using this method, you need to know the exact package name of the app you want to get rid of. For this, use Play Store and install App Inspector (there are several apps with this name in Play Store; I installed the app by Projectoria Ltd but the others look OK too). Launch App Inspector and you can find the package name under the name of the app. This usually starts with ‘com‘ or ‘net‘ followed by words separated by dots.

For example, App Inspector shows the package name for LinkedIn as ‘com.linkedin.android‘.

Some useful links:

To get a list of all the packages installed on my phone:

user $ adb shell pm list packages

To get a list of system apps only:

user $ adb shell pm list packages -s

To get a list of only Samsung packages:

user $ adb shell pm list packages | grep samsung

To search for e.g. facebook packages:

user $ adb shell pm list packages | grep facebook

(Returns nothing now, as I already deleted all the Facebook packages. Yay!)

To search for other packages, e.g.:

user $ adb shell pm list packages | grep kids
package:com.samsung.android.kidsinstaller
package:com.sec.android.app.kidshome

Is Gentoo Linux an anachronism?

When I started visiting the Gentoo Linux discussion forums in 2007 there were at least three pages of posts daily, if not more. These days there is usually one page. I’m sure the number of Gentoo Linux users has dropped significantly since then. Interest in the distribution has certainly decreased since its heyday: Google Trends – gentoo linux.

I don’t think the drop in interest is limited to individuals either. Articles such as ‘Flying Circus Internet Operations GmbH – Migrating a Hosting Infrastructure from Gentoo to NixOS‘ lead me to suspect that some companies have switched to other distributions over the years. NASDAQ’s use of ‘a modified version of Gentoo Linux’ was publicised in 2011 (How Linux Mastered Wall Street) but I do not know if it still uses the distribution and, in any case, that is only a single significant entity. I personally have never come across another user (corporation or individual) of Gentoo Linux, although I do know several companies and individuals using distributions such as Ubuntu and Fedora.

Gentoo Linux is certainly not for everyone. In recent years the user base seems to have settled down to a smaller number of people, primarily consisting of enthusiasts who appreciate its advanced features and are prepared to put in the extra effort and time required to create and maintain a working installation. I’m sure it also still has a place in some specialised commercial applications, but I have my doubts its deployment comes anywhere near that of the major distributions such as Ubuntu, Red Hat, Fedora, etc. If I were only interested in using an OS that enabled me to perform typical personal and professional tasks, I wouldn’t be using Gentoo Linux. Some people touted Gentoo Linux’s configurability as giving it a speed advantage over binary distributions but, having correctly installed and used Gentoo Linux and various other distributions on the same hardware, I cannot say I noticed an improvement in performance.

I think one has to choose the right tool for the job. I wouldn’t dream of installing Gentoo Linux on any of my family’s machines or on older hardware. Personal experience doing the latter has taught me it is a waste of time (both installation itself and subsequent maintenance). I installed Lubuntu on my family’s desktop machine because it is a reliable, low-maintenance OS with automatic update notifications and painless, fast updates. On the other hand I installed Gentoo Linux on my laptops because I want to tinker with the OS, configure it exactly the way I want, be able to apply patches to source code easily, install multiple versions of the same application (‘slotted applications’), learn more about how the OS works, and experiment. You can do that too with a binary distribution, but with Gentoo Linux I feel I know a lot more about the kernel, OS internals, package management and package customisation than with a pre-canned binary distribution. It really is good for learning about Linux in more depth than a binary distribution.

My old Compal NBLB2 laptop with a first-generation Intel Core i7 CPU is eleven years old and I have never touched the package installation log file /var/log/emerge.log since installation in March 2010. Ten years after I first installed Gentoo Linux on it I ran the command ‘qlop -c -H‘ out of curiosity to see how much time had been spent building packages over its lifetime. The reported statistics were as follows:

total: 492 days, 5 hours, 47 minutes, 44 seconds for 67841 merges, 4295 unmerges, 446 syncs

That’s roughly 13% of its then 124 months ‘life’ spent compiling.

It has an Intel Core i7 720QM CPU (1.6 GHz but throttled to 933 MHz by Compal due to the wattage of the PSU Compal supplied, although I bought a higher-wattage PSU a year ago and it appears to have run at 1.6 GHz since then). It has always had KDE installed, and numerous upgrades to KDE have kept it busy compiling. Each version of LibreOffice, qtwebengine, Firefox etc. has also taken a very long time to compile. Until I removed qtwebengine and the few packages dependent on it this year, even with jumbo-build enabled qtwebengine took more than a day to build. Admittedly I did have trouble some years ago with the HDD becoming almost full with temporary directories and files over a long period of time (/usr/tmp/portage/ contained a whopping 30 GB of directories and files until I cleared it out), which also slowed things down, but that is no longer the case. Unfortunately that laptop has ~amd64 (Gentoo Linux Testing) installed rather than amd64 (Gentoo Linux Stable), so it’s not possible to install the binary package of LibreOffice due to dependency conflicts. As all the big packages take so long to compile on this particular laptop I ended up merging the firefox-bin package rather than the firefox source code package, and I use Microsoft Office 2007 running under WINE rather than LibreOffice.

My Clevo W230SS laptop (fourth-generation Intel Core i7-4810MQ CPU @ 2.80GHz) running Gentoo amd64 (Gentoo Linux Stable) with a few ~amd64 (testing) packages is six years old and I have not touched /var/log/emerge.log since installation in April 2015. Five years after I first installed Gentoo Linux on it I ran the command ‘qlop -c -H‘ to see how it compared to the older Compal NBLB2 laptop running Gentoo ~amd64 mentioned above. The reported statistics were as follows:

total: 53 days, 11 hours, 3 minutes, 31 seconds for 24494 merges, 1717 unmerges, 169 syncs

That’s roughly 3% of its then 64 months ‘life’ spent compiling. Nowhere near as bad as my older laptop, but still a lot of time spent compiling. The merge time for qtwebengine 5.14.2 was 4 hours 25 minutes with that fourth-generation Intel Core i7 CPU, but later versions of qtwebengine take even longer to build.

I personally would now only consider installing Gentoo Linux on a machine with at least 16 GB RAM and a CPU with at least four cores and a speed of circa 3 GHz or more. Additionally, although I have been a user of KDE in Gentoo Linux all these years, I would probably switch from KDE to a simpler, less resource-hungry and less feature-rich (some might say less ‘bloated’!) desktop environment such as LXQt in new installations of Gentoo Linux.

One thing that has improved a lot since I started using Gentoo Linux over a decade ago is the package manager Portage, at least in terms of dependency resolution and blockage handling. I used to have to do a lot more work to resolve problems during package upgrades; ‘merging world’ (upgrading installed packages) is generally a lot less troublesome than it used to be ten years ago. Portage is a lot slower than it used to be, but that’s because it does a lot more than it used to do. I used to have to use revdep-rebuild – a utility to resolve reverse dependencies and rebuild affected packages – frequently, but not any more. Building software from source code takes time, though, so plenty of RAM and a fast CPU are important for installing packages, however good the package manager itself.

Some people maintain that the reduction in posts in the Gentoo Linux Forums could just mean users have fewer problems these days compared to earlier years. However, I doubt that would account for the much larger number of pages of ‘posts from the last 24 hours’ in earlier years, or for the big drop in Google Trends statistics since 2004. Posts from new users do appear from time to time in the forums, so I suspect there are simply not as many new users as a decade or more ago. There are also posts from long-time users when there are major changes such as an upgrade to a newer version of Python or a profile change.

Another argument against a drop in popularity is that many of the users counted as online in a 24-hour period in earlier years were spambots. I used to be a moderator of the Sabayon Linux forums, so I’m well aware of the phenomenon and I had to ban quite a few spammers & spambots in my time. But I’m not buying for one moment that the majority, or even a significant number, of the 1850 users logged in to the Gentoo Forums on 30 December 2004 were spambots. I am aware of puerile, more recent attempts by a few lone individuals to boost the distribution’s exposure, such as ‘Gentoo Linux Forums – I’ve just set up Gentoo on distrowatch as my homepage‘ but I doubt very much that has had any impact on uptake. Mind you, such antics are not confined to Gentoo Linux; I’ve seen similar posts in the forums of some other distributions.

I think former Gentoo Linux developer and Council member Donnie Berkholz got it about right in his 2013 article Ranking Linux distributions, and the decline of the traditional distros.

A discussion on Reddit in 2016 indicated that other Linux users have noticed a decline in use of the distribution: ‘Why did Gentoo peak in popularity in 2005, then fade into obscurity?‘

The decline in use of Gentoo Linux is not just due to lower uptake by new users; veteran users have also moved away due to its demands on time and effort: ‘Au Revoir, Gentoo – Sell Me A New Linux Distro‘. There are occasionally posts in the Gentoo Linux forums by previous users announcing that they have started using the distribution again, but I strongly suspect they are exceptions to the general trend.

Gentoo Linux is not as popular as it used to be, and there is no point in pretending otherwise. However, Gentoo Linux can still be worthwhile for the Linux enthusiast and ‘power user’ who enjoys tinkering and learning more about Linux internals, and who does not mind the significant additional time required to maintain it, and the time, effort and extra energy consumption required to compile packages. But I would not recommend Gentoo Linux if you just want a Linux installation in order to perform typical desktop tasks such as browsing the Web, sending e-mails, word processing, working on spreadsheets and so on.

Hardware has become much more powerful since Gentoo Linux’s heyday, and drivers have improved significantly (I shudder to think of the time I spent years ago getting Linux to work with some devices), making the optimisation and lower-level tinkering that Gentoo Linux facilitates less of a necessity. Furthermore, binary distributions have improved noticeably over the years, becoming easier to install, more user-friendly, easier to maintain, more reliable and better-looking. The improvements in binary distributions have, in my opinion, also contributed to the drift away from Gentoo Linux.

Nevertheless, I believe Gentoo Linux will not disappear; it is rather unique and there will always be people who enjoy the challenge of developing it and/or using it rather than a binary distribution. Furthermore, the additional control Gentoo Linux offers those who are prepared to put in the extra time and effort to use it, plus its high degree of ‘customisability’, make it attractive to certain users or for certain specialist applications. Then there are those who simply prefer not to follow the mainstream and want to try something different. I certainly hope Gentoo Linux continues long into the future and manages to maintain its distinctiveness, including the ease of not using systemd if the user so chooses. Using OpenRC – which has never caused me a problem in over a decade – instead of systemd has become increasingly difficult for many Gentoo Linux users because upstream software is increasingly being written specifically to use systemd and would require significant effort to patch (KDE Control Module Plasma Firewall being a recent example). Portage is an excellent, powerful package manager, as is the accompanying suite of tools, and I don’t think there is anything that can beat that (probably one of the reasons the developers of Google Chrome OS opted to use Portage). Now, if only someone could release a machine an average home user could afford that could compile source-code packages such as qtwebengine, LibreOffice and Firefox in, say, one minute, perhaps Gentoo Linux’s popularity would increase! 😉 Until Moore’s Law results in manufacturers of home computers catching up with the build requirements of Gentoo Linux, the distribution will definitely remain a niche player. Personally, that does not bother me, although I must admit I am finding the time and effort to maintain my installations rather irksome these days.

Configuration of the APC UPS Daemon on my Linux server

 

[Diagram: UPS connections in my home network]

For obvious reasons my Linux home server supplying NAS and Web services 24/7 is connected to a UPS. The UPS model (now discontinued) I use is a 700VA 230V APC Back-UPS ES-BE700G-UK. It is connected to one of the server’s USB ports via an APC-supplied cable so that the server can interrogate the UPS and so that the UPS can send unsolicited messages to the server (e.g. mains power supply interrupted, mains power supply restored, shut down the server now, and so on). The open-source APC UPS Daemon apcupsd that I installed on the server enables the server to react automatically to UPS events. apcupsd provides a shell script apccontrol and various other shell scripts to act on these events. All these scripts can be customised by the user. As users with an APC UPS that supports this functionality are likely to be interested in configuration of apcupsd, I think it might be useful for me to explain how I configured apcupsd.
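
If apcupsd is not already installed, it should be available from your distribution’s package repository. The commands below are the usual ones for a Debian-family system and for Gentoo Linux respectively (adjust as appropriate for your distribution); the apcupsd service then needs to be enabled and started in whatever way your init system expects:

root # apt-get install apcupsd
root # emerge sys-power/apcupsd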

An Ethernet switch and an external USB 6 TB HDD (connected to the server for automated daily backups) are in the same room as the server and also connected to the UPS. If my router were in the same room as the server then it would be connected to the same UPS as the server but, as it has to be in a different room next to the broadband provider’s master socket, it is instead connected to a separate mini UPS so that the server can still send e-mails after an interruption to the mains power supply.

Before getting into the configuration of apcupsd, I should mention that I have come across some home users who think the purpose of a UPS is solely to protect against loss of the mains supply from the electricity utility company. Whilst that is one of the purposes of a UPS, home users should note that household fuses can blow and RCD consumer units can trip even when there is no interruption to the mains supply to the house from the utility company. So the argument that the local utility company is extremely reliable is not a reason to dispense with a UPS for a server. Well, not unless you are prepared to accept the risk of corruption of the OS and/or users’ data.

It is possible to configure apcupsd to perform a controlled shutdown of a server if the mains power supply to a UPS has been interrupted for a user-specified amount of time or if the UPS battery’s remaining charge has dropped to a user-specified percentage of its full capacity. If desired, it would also be possible to configure apcupsd and a server’s firmware to reboot the server automatically once mains power has been restored to the UPS following an earlier controlled shutdown of the server (see ‘Arranging for Reboot on Power-Up‘ in the APCUPSD User Manual). However, as I am often away from home on work trips and cannot immediately check what has happened, I do not want the server to reboot automatically when there is power to the server, in case the mains power supply is intermittent for whatever reason. Instead, after receiving an e-mail from the server informing me it is shutting down, I would phone home and ask a family member what has happened and, if I were satisfied everything is now OK, I would then ask them to power up the server. Therefore I configured the server’s BIOS not to reboot automatically if there is power to the server after it has been shut down.

Although apcupsd offers a mechanism to tell the UPS to go into hibernation, I am not interested in trying to get the UPS to hibernate once the OS shuts down, because I do not want to risk the UPS hibernating before the OS has shut down completely and the server has powered down. Furthermore, the server is not the only device powered by the UPS. Therefore, if there were a long delay until the mains power supply to the UPS is restored, the UPS would continue to supply power until its battery is flat. However, the mains supply to the UPS is unlikely to be down for long, so the battery is unlikely to be drained completely once the server has been powered down; power to the UPS will usually be restored before the battery is flat. The power requirement of the tiny Ethernet switch is small, and the external USB HDD goes to sleep automatically after a few minutes of inactivity anyway. It is more important that the server is powered down ‘gracefully’.

The mechanism used to tell the UPS to go into hibernation is the command ‘/sbin/apcupsd --killpower‘, which apccontrol runs when it handles the killpower event. My understanding of the intended process is as follows:

  1. The mains supply to the UPS ceases.
  2. The UPS tells apcupsd that the mains supply has ceased.
  3. apcupsd uses BATTERYLEVEL, MINUTES and TIMEOUT (set in /etc/apcupsd/apcupsd.conf) to determine when to shut down the OS (the next step below).
  4. apcupsd runs /etc/apcupsd/doshutdown to initiate shutdown of the OS.
  5. After the OS shutdown has been initiated, apcupsd (via /etc/apcupsd/killpower) tells the UPS to go into hibernation. I think the message telling the UPS to hibernate is sent KILLDELAY seconds after /etc/apcupsd/doshutdown runs, where KILLDELAY is user-configurable. In the case of Gentoo Linux, the apcupsd.powerfail init script (if the user has enabled it) tries to put the UPS into hibernation when the OS is in runlevel 0 and has almost completed shutting down (the file systems have already been remounted read-only).

The message telling the UPS to hibernate can be disabled by setting KILLDELAY 0 in /etc/apcupsd/apcupsd.conf, which I have done. And, just to be sure, I also modified the script /etc/apcupsd/killpower to perform the same actions as the script /etc/apcupsd/doshutdown, and I configured the server’s BIOS not to boot automatically when power is supplied to the server.

I think my caution and disabling of killpower are justified, as the APCUPSD User Manual states:

KILLDELAY time in seconds
If KILLDELAY is set, apcupsd will continue running after a shutdown has been requested, and after the specified time in seconds, apcupsd will attempt to shut off the UPS the power. This directive should normally be disabled by setting the value to zero, but on some systems such as Win32 systems apcupsd cannot regain control after a shutdown to force the UPS to shut off the power. In this case, with proper consideration for the timing, the KILLDELAY directive can be useful. Please be aware, if you cause apcupsd to kill the power to your computer too early, the system and the disks may not have been properly prepared. In addition, apcupsd must continue running after the shutdown is requested, and on Unix systems, this is not normally the case as the system will terminate all processes during the shutdown.

The as-installed configuration file apcupsd.conf contained the following settings:

$ grep -v "^#\|^;\|^$" /etc/apcupsd/apcupsd.conf.original
UPSCABLE smart
UPSTYPE apcsmart
DEVICE /dev/ttyS0
LOCKFILE /var/lock
SCRIPTDIR /etc/apcupsd
PWRFAILDIR /etc/apcupsd
NOLOGINDIR /etc
ONBATTERYDELAY 6
BATTERYLEVEL 5
MINUTES 3
TIMEOUT 0
ANNOY 300
ANNOYDELAY 60
NOLOGON disable
KILLDELAY 0
NETSERVER on
NISIP 127.0.0.1
NISPORT 3551
EVENTSFILE /var/log/apcupsd.events
EVENTSFILEMAX 10
UPSCLASS standalone
UPSMODE disable
STATTIME 0
STATFILE /var/log/apcupsd.status
LOGSTATS off
DATATIME 0

The purposes of BATTERYLEVEL, MINUTES and TIMEOUT are explained in the configuration file’s comments:

[...]
#
# Note: BATTERYLEVEL, MINUTES, and TIMEOUT work in conjunction, so
# the first that occurs will cause the initation of a shutdown.
#

# If during a power failure, the remaining battery percentage
# (as reported by the UPS) is below or equal to BATTERYLEVEL,
# apcupsd will initiate a system shutdown.
BATTERYLEVEL 30
# Was 10 but I changed it to 30.

# If during a power failure, the remaining runtime in minutes
# (as calculated internally by the UPS) is below or equal to MINUTES,
# apcupsd, will initiate a system shutdown.
MINUTES 10
# Was 3 but I changed it to 10.

# If during a power failure, the UPS has run on batteries for TIMEOUT
# many seconds or longer, apcupsd will initiate a system shutdown.
# A value of 0 disables this timer.
#
#  Note, if you have a Smart UPS, you will most likely want to disable
#    this timer by setting it to zero. That way, you UPS will continue
#    on batteries until either the % charge remaing drops to or below BATTERYLEVEL,
#    or the remaining battery runtime drops to or below MINUTES.  Of course,
#    if you are testing, setting this to 60 causes a quick system shutdown
#    if you pull the power plug.
#  If you have an older dumb UPS, you will want to set this to less than
#    the time you know you can run on batteries.
TIMEOUT 0

[...]

 

Lead-acid batteries degrade faster if they are allowed to become flat or nearly flat, so I increased BATTERYLEVEL to 30. I also increased MINUTES (the remaining runtime as estimated by the UPS, below which a shutdown is initiated) from 3 minutes to 10 minutes. The resulting contents of apcupsd.conf are as follows:

$ grep -v "^#\|^;\|^$" /etc/apcupsd/apcupsd.conf
UPSNAME ES700
UPSCABLE usb
UPSTYPE usb
DEVICE
POLLTIME 60
LOCKFILE /var/lock
SCRIPTDIR /etc/apcupsd
PWRFAILDIR /etc/apcupsd
NOLOGINDIR /etc
ONBATTERYDELAY 6
BATTERYLEVEL 30
MINUTES 10
TIMEOUT 0
ANNOY 300
ANNOYDELAY 60
NOLOGON disable
KILLDELAY 0
NETSERVER on
NISIP 127.0.0.1
NISPORT 3551
EVENTSFILE /var/log/apcupsd.events
EVENTSFILEMAX 10
UPSCLASS standalone
UPSMODE disable
STATTIME 300
STATFILE /var/log/apcupsd.status
LOGSTATS off
DATATIME 0
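
After restarting the daemon with the new configuration, the easiest way to confirm that apcupsd is actually communicating with the UPS over the USB cable is the apcaccess utility supplied with apcupsd, which prints the daemon’s view of the UPS (its status, battery charge, estimated runtime remaining, and so on):

user $ apcaccess status

If UPSCABLE and UPSTYPE are correct and the cable is connected, the status reported should show the UPS on line rather than a communications error.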

I also edited the apccontrol script to: a) fix a typo in a message in the script; b) comment out the command to reboot the server; c) comment out the command to shut down the server (as my version of the doshutdown script performs that task):

$ diff /etc/apcupsd/apccontrol /etc/apcupsd/apccontrol.original 
90c90
<       echo "Battery power exhausted on UPS ${2}. Doing shutdown." | ${WALL}
---
>       echo "Battery power exhaused on UPS ${2}. Doing shutdown." | ${WALL}
103c103
< #     ${SHUTDOWN} -r now "apcupsd UPS ${2} initiated reboot"
---
>       ${SHUTDOWN} -r now "apcupsd UPS ${2} initiated reboot"
107c107
< #     ${SHUTDOWN} -h now "apcupsd UPS ${2} initiated shutdown"
---
>       ${SHUTDOWN} -h now "apcupsd UPS ${2} initiated shutdown"
$ cat /etc/apcupsd/apccontrol
#!/bin/sh
#
# Copyright (C) 1999-2002 Riccardo Facchetti 
#
#  for apcupsd release 3.14.10 (13 September 2011) - debian
#
# platforms/apccontrol.  Generated from apccontrol.in by configure.
#
#  Note, this is a generic file that can be used by most
#   systems. If a particular system needs to have something
#   special, start with this file, and put a copy in the
#   platform subdirectory.
#

#
# These variables are needed for set up the autoconf other variables.
#
prefix=/usr
exec_prefix=${prefix}

APCPID=/var/run/apcupsd.pid
APCUPSD=/sbin/apcupsd
SHUTDOWN=/sbin/shutdown
SCRIPTSHELL=/bin/sh
SCRIPTDIR=/etc/apcupsd
WALL=wall

#
# Concatenate all output from this script to the events file
#  Note, the following kills the script in a power fail situation
#   where the disks are mounted read-only.
# exec >>/var/log/apcupsd.events 2>&1

#
# This piece is to substitute the default behaviour with your own script,
# perl, or C program.
# You can customize every single command creating an executable file (may be a
# script or a compiled program) and calling it the same as the $1 parameter
# passed by apcupsd to this script.
#
# After executing your script, apccontrol continues with the default action.
# If you do not want apccontrol to continue, exit your script with exit 
# code 99. E.g. "exit 99".
#
# WARNING: the apccontrol file will be overwritten every time you update your
# apcupsd, doing `make install'. Your own customized scripts will _not_ be
# overwritten. If you wish to make changes to this file (discouraged), you
# should change apccontrol.sh.in and then rerun the configure process.
#
if [ -f ${SCRIPTDIR}/${1} -a -x ${SCRIPTDIR}/${1} ]
then
    ${SCRIPTDIR}/${1} ${2} ${3} ${4}
    # exit code 99 means he does not want us to do default action
    if [ $? = 99 ] ; then
        exit 0
    fi
fi

case "$1" in
    killpower)
        echo "Apccontrol doing: ${APCUPSD} --killpower on UPS ${2}" | ${WALL}
        sleep 10
        ${APCUPSD} --killpower
        echo "Apccontrol has done: ${APCUPSD} --killpower on UPS ${2}" | ${WALL}
    ;;
    commfailure)
        echo "Warning communications lost with UPS ${2}" | ${WALL}
    ;;
    commok)
        echo "Communications restored with UPS ${2}" | ${WALL}
    ;;
#
# powerout, onbattery, offbattery, mainsback events occur
#   in that order.
#
    powerout)
    ;;
    onbattery)
        echo "Power failure on UPS ${2}. Running on batteries." | ${WALL}
    ;;
    offbattery)
        echo "Power has returned on UPS ${2}..." | ${WALL}
    ;;
    mainsback)
        if [ -f /etc/apcupsd/powerfail ] ; then
           printf "Continuing with shutdown."  | ${WALL}
        fi
    ;;
    failing)
        echo "Battery power exhausted on UPS ${2}. Doing shutdown." | ${WALL}
    ;;
    timeout)
        echo "Battery time limit exceeded on UPS ${2}. Doing shutdown." | ${WALL}
    ;;
    loadlimit)
        echo "Remaining battery charge below limit on UPS ${2}. Doing shutdown." | ${WALL}
    ;;
    runlimit)
        echo "Remaining battery runtime below limit on UPS ${2}. Doing shutdown." | ${WALL}
    ;;
    doreboot)
        echo "UPS ${2} initiating Reboot Sequence" | ${WALL}
#       ${SHUTDOWN} -r now "apcupsd UPS ${2} initiated reboot"
    ;;
    doshutdown)
        echo "UPS ${2} initiated Shutdown Sequence" | ${WALL}
#       ${SHUTDOWN} -h now "apcupsd UPS ${2} initiated shutdown"
    ;;
    annoyme)
        echo "Power problems with UPS ${2}. Please logoff." | ${WALL}
    ;;
    emergency)
        echo "Emergency Shutdown. Possible battery failure on UPS ${2}." | ${WALL}
    ;;
    changeme)
        echo "Emergency! Batteries have failed on UPS ${2}. Change them NOW" | ${WALL}
    ;;
    remotedown)
        echo "Remote Shutdown. Beginning Shutdown Sequence." | ${WALL}
    ;;
    startselftest)
    ;;
    endselftest)
    ;;
    battdetach)
    ;;
    battattach)
    ;;
    *)  echo "Usage: ${0##*/} command"
        echo "       warning: this script is intended to be launched by"
        echo "       apcupsd and should never be launched by users."
        exit 1
    ;;
esac

I made sure the /etc/apcupsd/hosts.conf file specifies the daemon is monitoring the server:

$ grep -v "^#\|^;\|^$" /etc/apcupsd/hosts.conf
MONITOR 127.0.0.1 "Local Host"

I configured the scripts in /etc/apcupsd/ as shown in the listings below (I have obscured my e-mail address for security reasons). Note that the firewall for my server is a virtual machine (with hostname serverfw) running on the server, hence the additional command in several of the scripts to shut down the virtual machine too.
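
For the ‘sudo -u fitzcarraldo ssh serverfw sudo shutdown -h now‘ command in those scripts to work unattended, the user fitzcarraldo needs key-based SSH access to serverfw (a key without a passphrase) and permission to run shutdown via sudo on serverfw without a password. The listings below do not show that side of the setup; a minimal sketch of what is needed on serverfw would be something like the following (the file name and exact sudoers rule here are for illustration only):

root # cat /etc/sudoers.d/ups-shutdown
fitzcarraldo ALL=(root) NOPASSWD: /sbin/shutdown

plus fitzcarraldo’s public key appended to ~fitzcarraldo/.ssh/authorized_keys on serverfw. The scripts themselves run as root on the server, and ‘sudo -u fitzcarraldo‘ ensures that ssh uses fitzcarraldo’s key pair.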

$ cat /etc/apcupsd/annoyme 
#!/bin/sh
#
# This shell script if placed in /etc/apcupsd
# will be called by /etc/apcupsd/apccontrol when apcupsd
# starts sending out 'annoy me' messages.
#
cat /home/fitzcarraldo/apcups/ups-email-annoyme.txt | /usr/sbin/sendmail -4 -t
exit 0
$ cat ~/apcups/ups-email-annoyme.txt
To: fitzcarraldo@xxxxx.com
From: fitzcarraldo@xxxxx.com
Subject: Important message about Back-UPS ES 700

The UPS is sending 'annoy me' messages - investigate now.

 

$ cat /etc/apcupsd/changeme 
#!/bin/sh
#
# This shell script if placed in /etc/apcupsd
# will be called by /etc/apcupsd/apccontrol when apcupsd
# detects that the battery should be replaced.
#
cat /home/fitzcarraldo/apcups/ups-email-changeme.txt | /usr/sbin/sendmail -4 -t
exit 0
$ cat ~/apcups/ups-email-changeme.txt
To: fitzcarraldo@xxxxx.com
From: fitzcarraldo@xxxxx.com
Subject: Important message about Back-UPS ES 700

The UPS battery needs to be changed.

 

$ cat /etc/apcupsd/commfailure 
#!/bin/sh
#
# This shell script if placed in /etc/apcupsd
# will be called by /etc/apcupsd/apccontrol when apcupsd
# loses contact with the UPS (i.e. the serial connection is not responding).
#
cat /home/fitzcarraldo/apcups/ups-email-commfailure.txt | /usr/sbin/sendmail -4 -t
exit 0
$ cat ~/apcups/ups-email-commfailure.txt
To: fitzcarraldo@xxxxx.com
From: fitzcarraldo@xxxxx.com
Subject: Important message about Back-UPS ES 700

Host has lost communication to the UPS.

 

$ cat /etc/apcupsd/commok 
#!/bin/sh
#
# This shell script if placed in /etc/apcupsd
# will be called by /etc/apcupsd/apccontrol when apcupsd
# restores contact with the UPS (i.e. the serial connection is restored).
#
cat /home/fitzcarraldo/apcups/ups-email-commok.txt | /usr/sbin/sendmail -4 -t
exit 0
$ cat ~/apcups/ups-email-commok.txt
To: fitzcarraldo@xxxxx.com
From: fitzcarraldo@xxxxx.com
Subject: Important message about Back-UPS ES 700

Host to UPS communication has resumed.

 

$ cat /etc/apcupsd/doreboot 
#!/bin/sh
#
# This shell script if placed in /etc/apcupsd
# will be called by /etc/apcupsd/apccontrol when apcupsd
# requests a reboot. We do nothing - the APC must not request a reboot.
#
# This script should never be run, as I commented it out in apccontrol.
cat /home/fitzcarraldo/apcups/ups-email-doreboot.txt | /usr/sbin/sendmail -4 -t
exit 0
$ cat ~/apcups/ups-email-doreboot.txt
To: fitzcarraldo@xxxxx.com
From: fitzcarraldo@xxxxx.com
Subject: Important message about Back-UPS ES 700

The UPS has requested a reboot - doing nothing.

 

$ cat /etc/apcupsd/doshutdown 
#!/bin/sh
#
# This shell script if placed in /etc/apcupsd
# will be called by /etc/apcupsd/apccontrol when apcupsd
# detects that a  shutdown is needed.
#
cat /home/fitzcarraldo/apcups/ups-email-doshutdown.txt | /usr/sbin/sendmail -4 -t
sudo -u fitzcarraldo ssh serverfw sudo shutdown -h now
sleep 30
shutdown -h now
exit 0
$ cat ~/apcups/ups-email-doshutdown.txt
To: fitzcarraldo@xxxxx.com
From: fitzcarraldo@xxxxx.com
Subject: Important message about Back-UPS ES 700

UPS requested shutdown, shutting down the systems.

The server has to be powered up manually after it has powered down.
It will not boot automatically when the mains power supply is restored.

 

$ cat /etc/apcupsd/emergency 
#!/bin/sh
#
# This shell script if placed in /etc/apcupsd
# will be called by /etc/apcupsd/apccontrol when apcupsd
# detects that an emergency shutdown is needed.
#
cat /home/fitzcarraldo/apcups/ups-email-emergency.txt | /usr/sbin/sendmail -4 -t
sudo -u fitzcarraldo ssh serverfw sudo shutdown -h now
sleep 30
shutdown -h now
exit 0
$ cat ~/apcups/ups-email-emergency.txt
To: fitzcarraldo@xxxxx.com
From: fitzcarraldo@xxxxx.com
Subject: Important message about Back-UPS ES 700

UPS emergency shutdown requested, shutting down the systems.

 

$ cat /etc/apcupsd/failing 
#!/bin/sh
#
# This shell script if placed in /etc/apcupsd
# will be called by /etc/apcupsd/apccontrol when apcupsd
# detects that the battery charge is below the minimum level.
#
cat /home/fitzcarraldo/apcups/ups-email-failing.txt | /usr/sbin/sendmail -4 -t
sudo -u fitzcarraldo ssh serverfw sudo shutdown -h now
sleep 30
shutdown -h now
exit 0
$ cat ~/apcups/ups-email-failing.txt
To: fitzcarraldo@xxxxx.com
From: fitzcarraldo@xxxxx.com
Subject: Important message about Back-UPS ES 700

The UPS battery is failing, shutting down the systems.

 

$ cat /etc/apcupsd/killpower 
#!/bin/sh
#
# This shell script if placed in /etc/apcupsd
# will be called by /etc/apcupsd/apccontrol before
# apcupsd kills the power in the UPS. You probably
# need to edit this to mount read-only /usr and /var,
# otherwise apcupsd will not run.
#
cat /home/fitzcarraldo/apcups/ups-email-killpower.txt | /usr/sbin/sendmail -4 -t
sudo -u fitzcarraldo ssh serverfw sudo shutdown -h now
sleep 30
shutdown -h now
exit 0
$ cat ~/apcups/ups-email-killpower.txt
To: fitzcarraldo@xxxxx.com
From: fitzcarraldo@xxxxx.com
Subject: Important message about Back-UPS ES 700

The APC daemon is powering off the UPS - shutting down the systems.

Actually the APC daemon does not power off the UPS since I edited
/etc/apcupsd/killpower so that it only performs the same actions
as /etc/apcupsd/doshutdown, namely 'shutdown -h now'. This means
the UPS continues to supply output power until the battery has
run down completely if there is a long delay until the mains power
supply is restored. The server has to be powered up manually if
it has powered down; it will not boot automatically when the mains
power supply is restored.

 

$ cat /etc/apcupsd/loadlimit 
#!/bin/sh
#
# This shell script if placed in /etc/apcupsd
# will be called by /etc/apcupsd/apccontrol when apcupsd
# detects that the remaining battery charge is below the min threshold.
#
cat /home/fitzcarraldo/apcups/ups-email-loadlimit.txt | /usr/sbin/sendmail -4 -t
sudo -u fitzcarraldo ssh serverfw sudo shutdown -h now
sleep 30
shutdown -h now
exit 0
$ cat ~/apcups/ups-email-loadlimit.txt
To: fitzcarraldo@xxxxx.com
From: fitzcarraldo@xxxxx.com
Subject: Important message about Back-UPS ES 700

UPS battery charge below threshold, shutting down the systems.

 

$ cat /etc/apcupsd/mainsback 
#!/bin/sh
#
# This shell script if placed in /etc/apcupsd
# will be called by /etc/apcupsd/apccontrol when apcupsd
# detects that the mains has returned with /etc/apcupsd/powerfail
# file created.
#
cat /home/fitzcarraldo/apcups/ups-email-mainsback.txt | /usr/sbin/sendmail -4 -t
exit 0
$ cat ~/apcups/ups-email-mainsback.txt
To: fitzcarraldo@xxxxx.com
From: fitzcarraldo@xxxxx.com
Subject: Important message about Back-UPS ES 700

Mains back on UPS.

 

$ cat /etc/apcupsd/offbattery 
#!/bin/sh
#
# This shell script if placed in /etc/apcupsd
# will be called by /etc/apcupsd/apccontrol when the
# UPS goes back on to the mains after a power failure.
#
cat /home/fitzcarraldo/apcups/ups-email-offbattery.txt | /usr/sbin/sendmail -4 -t
exit 0
$ cat ~/apcups/ups-email-offbattery.txt
To: fitzcarraldo@xxxxx.com
From: fitzcarraldo@xxxxx.com
Subject: Important message about Back-UPS ES 700

Power resumed to UPS. No longer running on batteries.

 

$ cat /etc/apcupsd/onbattery 
#!/bin/sh
#
# This shell script if placed in /etc/apcupsd
# will be called by /etc/apcupsd/apccontrol when the UPS
# goes on batteries.
#
cat /home/fitzcarraldo/apcups/ups-email-onbattery.txt | /usr/sbin/sendmail -4 -t
exit 0
$ cat ~/apcups/ups-email-onbattery.txt
To: fitzcarraldo@xxxxx.com
From: fitzcarraldo@xxxxx.com
Subject: Important message about Back-UPS ES 700

Power failure on UPS. Running on batteries.

 

$ cat /etc/apcupsd/powerout 
#!/bin/sh
cat /home/fitzcarraldo/apcups/ups-email-powerout.txt | /usr/sbin/sendmail -4 -t
exit 0
$ cat ~/apcups/ups-email-powerout.txt
To: fitzcarraldo@xxxxx.com
From: fitzcarraldo@xxxxx.com
Subject: Important message about Back-UPS ES 700

Power out on UPS.

 

$ cat /etc/apcupsd/remoteshutdown 
#!/bin/sh
#
# This shell script if placed in /etc/apcupsd
# will be called by /etc/apcupsd/apccontrol when apcupsd
# is being shut down remotely - should never happen so do nothing.
#
cat /home/fitzcarraldo/apcups/ups-email-remoteshutdown.txt | /usr/sbin/sendmail -4 -t
exit 0
$ cat ~/apcups/ups-email-remoteshutdown.txt
To: fitzcarraldo@xxxxx.com
From: fitzcarraldo@xxxxx.com
Subject: Important message about Back-UPS ES 700

Remote UPS shutdown requested - do nothing but investigate.

 

$ cat /etc/apcupsd/runlimit 
#!/bin/sh
#
# This shell script if placed in /etc/apcupsd
# will be called by /etc/apcupsd/apccontrol when apcupsd
# detects that the remaining battery run time is below the threshold.
#
cat /home/fitzcarraldo/apcups/ups-email-runlimit.txt | /usr/sbin/sendmail -4 -t
sudo -u fitzcarraldo ssh serverfw sudo shutdown -h now
sleep 30
shutdown -h now
exit 0
$ cat ~/apcups/ups-email-runlimit.txt
To: fitzcarraldo@xxxxx.com
From: fitzcarraldo@xxxxx.com
Subject: Important message about Back-UPS ES 700

The UPS remaining run time is below limit, shutting down the systems.

 

$ cat /etc/apcupsd/timeout 
#!/bin/sh
#
# This shell script if placed in /etc/apcupsd
# will be called by /etc/apcupsd/apccontrol when apcupsd
# detects that the battery run time limit has been exceeded.
#
cat /home/fitzcarraldo/apcups/ups-email-timeout.txt | /usr/sbin/sendmail -4 -t
sudo -u fitzcarraldo ssh serverfw sudo shutdown -h now
sleep 30
shutdown -h now
exit 0
$ cat ~/apcups/ups-email-timeout.txt
To: fitzcarraldo@xxxxx.com
From: fitzcarraldo@xxxxx.com
Subject: Important message about Back-UPS ES 700

The UPS run time limit has been exceeded, shutting down the systems.

 

$ cat /etc/apcupsd/ups-monitor
#!/bin/sh
case "$1" in
        poweroff | killpower)
                if [ -f /etc/apcupsd/powerfail ]; then
                        echo ""
                        echo -n "apcupsd: Ordering UPS to kill power... "
                        /etc/apcupsd/apccontrol killpower
                        echo "done."
                        echo ""
                        echo "Please ensure the UPS has powered off before rebooting."
                        echo "Otherwise, the UPS may cut the power during the reboot!"
                        echo ""
                fi
        ;;
        *)
        ;;
esac
exit 0
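
Finally, the link between apcupsd and the UPS can be exercised with apctest, the interactive test program supplied with apcupsd. The apcupsd daemon must be stopped first, as apctest needs exclusive access to the UPS; restart the daemon afterwards. For example (the first command assumes a SysV/OpenRC-style init script; use the equivalent command for your init system):

root # /etc/init.d/apcupsd stop
root # apctest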