Work-around if movie subtitles restart after the final subtitle is displayed

If I’m watching movies in a language I don’t understand, I want subtitles. On my computers this is possible as long as there is a subtitles file with the same base name as the .mp4 video file and the suffix .srt in the same directory. I usually prefer to view movies on my TV with its bigger screen, so I copy the movie to an HDD that is normally connected to my TV (a FINLUX model 43-FUD-8020). However, the built-in media player in the TV does not show the subtitles in the .srt file, even when it is in the same directory as the .mp4 file. Therefore I use the MKVToolNix utility mkvmerge to put the movie and subtitles into a Matroska multimedia container (.mkv file), and the TV’s media player can play these .mkv files and does display the subtitles. In fact, so can my laptops and desktop running Linux (I have not tried on a machine running Windows 10, but I assume Windows 10 would have no trouble either).

To install in Lubuntu 20.10:

user $ sudo apt install mkvtoolnix

To install in Gentoo Linux:

root # emerge mkvtoolnix

To create a Matroska file containing the movie plus subtitles:

user $ mkvmerge -o movie_with_subtitles.mkv movie_without_subtitles.mp4 subtitles.srt
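
If the player needs to know the subtitle language, mkvmerge can also tag the subtitle track explicitly. A sketch, assuming English subtitles (substitute the appropriate ISO 639-2 language code):

user $ mkvmerge -o movie_with_subtitles.mkv movie_without_subtitles.mp4 --language 0:eng subtitles.srt

The tracks in the resulting file can be checked with ‘mkvmerge -i movie_with_subtitles.mkv‘.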

Normally the last subtitle in a movie does not occur at the very end of the movie. For example, there could be action without dialogue at the end of the movie, and/or final credits without dialogue. The media players on my laptops and desktop running Linux display the last subtitle and play the rest of the movie in the Matroska container as expected. However, the media player in my FINLUX TV displays the last subtitle and then displays the subtitles from the beginning again, at breakneck speed. Annoying to say the least. As the problem does not occur on my laptops and desktop with the same .mkv file, I assume the problem lies with the media player in the TV.

At first I suspected that the .srt file was the cause, but it uses the correct UTF-8 encoding and the syntax of its contents is correct. Anyway, just to be sure I ran it through an online cleaner for .srt files and re-generated the .mkv file, but that made no difference on the TV. Since there is no problem playing the .mkv file on my computers, I can only assume the TV’s media player is indeed at fault. I cannot do anything about the TV’s media player, so I came up with an acceptable work-around: I added a dummy subtitle at the end of the .srt file that is set to be displayed at the very end of the movie. For example, let’s say the movie duration is 02:12:22 but the last subtitle is displayed at 01:56:38,201:

188
01:56:38,201 --> 01:56:40,286
The end justifies the means.

I edited the file and added a dummy subtitle at the end:

188
01:56:38,201 --> 01:56:40,286
The end justifies the means.

189
02:12:19,001 --> 02:12:21,999
THE END.
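
Incidentally, when choosing the timestamps for the dummy subtitle you need the movie’s exact duration. Assuming FFmpeg is installed, one way to obtain it is with ffprobe, which prints the duration in seconds:

user $ ffprobe -v error -show_entries format=duration -of default=noprint_wrappers=1:nokey=1 movie_without_subtitles.mp4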

I then re-generated the .mkv file using the mkvmerge command and, lo and behold, after the subtitle displayed from 01:56:38,201 to 01:56:40,286 the TV no longer displays any subtitles until the very end of the movie, when it displays ‘THE END’ and the video ends. In reality the movie must be very slightly longer than 02:12:21,999 because, after displaying ‘THE END’, the first six subtitles in the subtitle file are displayed in rapid succession before the media player stops playing, but that is no big deal.

I searched the Web quite a lot and was unable to find any mention of this particular problem, so I am posting my work-around here just in case it helps someone else experiencing the same problem.

‘IP configuration was unavailable’: a laptop cannot connect wirelessly to a router

I recently switched my ISP from BT to Virgin Media because the speed and reliability of the broadband connection were poor. A Virgin Media Hub 3 was supplied as part of the package, and the TV, laptops (Gentoo Linux, Windows 10 and macOS), desktops (Lubuntu and Windows 10), tablets and phones (Android and iOS) could connect to the Hub 3 without any trouble. A few weeks later Virgin Media offered to upgrade the hub to a Hub 4. I don’t look a gift horse in the mouth, so I accepted the offer. The Hub 4 does indeed improve on the already excellent broadband speeds I was getting with the Hub 3. On the downside, the Hub 4’s configuration software has a couple of bugs, but I was able to live with them.

In addition to the above-mentioned hub configuration bugs, one of my laptops (a Compal NBLB2 with an Intel Wireless WiFi Link 5300 AGN adapter) running Linux could not connect to the hub via Wi-Fi, even though it had had no trouble connecting to the Hub 3. All my other devices can connect to the Hub 4, so I was left scratching my head, especially as the laptop has no trouble connecting to the Hub 4 via an Ethernet cable.

The hub’s 5G and 2.4G Wi-Fi bands originally had the same SSID (I’ll call it ‘VM1234567‘ here). I decided to rename the two bands ‘VM1234567_5G‘ and ‘VM1234567_2.4G‘ respectively, via the hub’s Settings in a Web browser. Very occasionally the laptop could connect to either SSID, but usually it could not connect and the following notification would pop up:

Wireless interface (wlan0)
IP configuration was unavailable

I did various things to try to get the laptop to connect, such as:

  • changing Wi-Fi channel selection in the hub from Auto to Manual and specifying different channels myself;
  • specifying the BSSID in the Desktop Environment’s GUI front-end to NetworkManager;
  • explicitly restricting the connection to the specific (and only) Wi-Fi interface (‘wlan0‘, in my case) in the DE’s GUI front-end to NetworkManager;
  • disabling IPv6 (Virgin Media does not use IPv6) in the DE’s GUI front-end to NetworkManager;
  • disabling the UFW firewall.

None of the above enabled the laptop to connect to the hub via Wi-Fi.

I installed the GUI Wi-Fi scanner LinSSID on my other Linux machines so I could check which 2.4G and 5G Wi-Fi channels were being used by the hub and by my neighbours’ hubs/routers. Note that LinSSID requires the utility iw to be installed and CONFIG_CFG80211_WEXT to be set in the kernel. The NetworkManager command ‘nmcli dev wifi list‘ can also be used to check which channels are being used. The channels selected automatically by the hub looked reasonable to me, and the different channels I selected manually did not improve the situation.
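
For example, to list just the columns of interest (the available field names may vary slightly between NetworkManager versions):

user $ nmcli -f SSID,BSSID,CHAN,FREQ,SIGNAL dev wifi list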

Now, coincidentally that laptop can dual-boot Windows 7, so I booted Windows 7 to see if it could connect to the hub via Wi-Fi. However, Windows 7 had the same Wi-Fi connectivity problem as Linux. The Network and Sharing Centre displayed the error message ‘The default gateway is not available’ and allowed me to run the so-called Troubleshooter, which fixed the problem in Windows 7. The laptop could then connect to the hub and to the Internet via the 5G Wi-Fi band (the hub’s DHCP server allocated IP address 192.168.0.145 to the laptop). So it appeared the lack of a specified default gateway was the problem in both OSs. This surprised me because I had never had to specify a default gateway on my machines, and still do not have to on the other machines. Anyway, I booted back into Linux and did the following:

STEP 1 (on the Compal laptop)

Connected to the hub via an Ethernet cable.

Opened the Hub 4 Settings page (192.168.0.1) in a Web browser.

Selected ‘Advanced settings’ > ‘DHCP’

Added the MAC address of the laptop’s Wi-Fi adapter and the IP address 192.168.0.145 to the Reserved list.

STEP 2 (on the Compal laptop)

Selected ‘System Settings’ > ‘Network’ | ‘Connections’

Selected Wi-Fi connection VM1234567_5G

Entered the following on the ‘IPv4’ tab:

Method: Manual
DNS Servers: 194.168.4.100,194.168.8.100
Search Domains: cable.virginm.net (The laptop connects without this entry, so I’m not sure if it makes any difference.)

Clicked ‘+ Add’ and added the gateway details as follows:

Address: 192.168.0.145
Netmask: 255.255.255.0
Gateway: 192.168.0.1

Ticked ‘IPv4 is required for this connection’.

Set the following on the ‘Wi-Fi’ tab (this is optional):

BSSID: <hub’s MAC address corresponding to the SSID>
Restrict to device: wlan0 (<MAC address of the laptop’s Wi-Fi adapter>)

The BSSID can be found either by using LinSSID on a machine that can access the Wi-Fi network or by using the command ‘nmcli dev wifi list‘ in a terminal window. The MAC address of the laptop’s Wi-Fi adapter can be found using the commands ‘ip link‘ or ‘ifconfig‘.

Clicked on the down arrow in the ‘Restrict to device:’ box and selected the device (wlan0, in my case).
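
Incidentally, I believe the same static configuration can be applied from the command line with nmcli instead of the DE’s GUI front-end; a sketch, assuming the connection profile is named VM1234567_5G:

user $ nmcli connection modify VM1234567_5G ipv4.method manual ipv4.addresses 192.168.0.145/24 ipv4.gateway 192.168.0.1 ipv4.dns "194.168.4.100 194.168.8.100" ipv6.method ignore

followed by ‘nmcli connection up VM1234567_5G‘ to re-activate the profile.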

STEP 3 (on the Compal laptop)

Selected ‘System Settings’ > ‘Network’ | ‘Connections’

Selected Wi-Fi connection VM1234567_2.4G

Performed the same configuration steps as for VM1234567_5G except that the SSID VM1234567_2.4G has a different BSSID (found using LinSSID or nmcli) to the SSID VM1234567_5G.

The laptop’s 5G Wi-Fi connection now works very well with the Hub 4. The 2.4G connection can be slow (even when the signal is at 100%) and sometimes stalls, so I am not sure I have fixed that connection completely, or even whether it is fixable in this case. I still do not know why the problem occurs with the Hub 4 but not the Hub 3, nor why it only happens with one specific machine. Anyway, the 5G connection now works fine, so I’m happy.

Gentoo Linux: Building/rebuilding a kernel and Intel CPU microcode in an installation without initramfs

In a 2016 post I explained how to update the Intel CPU microcode in a Gentoo Linux Stable Branch installation without an initramfs (I do not use sys-kernel/genkernel to build the kernel in the installation on my Clevo W230SS laptop). The behaviour of the tool sys-apps/iucode_tool for updating the Intel CPU microcode has changed since that post, hence this update.

Although it is not essential, I normally perform the microcode upgrade procedure when I rebuild or upgrade the Linux kernel, so I explain both procedures together here.

These days the grub-mkconfig command generates a /boot/grub/grub.cfg file whose GRUB menu entries include a line to load the CPU microcode at boot, but I nevertheless prefer to follow a slightly different method that works reliably for me.

Below is the procedure I follow to build/rebuild the kernel and the Intel CPU microcode. Others may have a different approach, but this has always worked well for me, even if some of the steps are sometimes nugatory.

1. Mount the boot directory if it is on a separate partition

root # mount /dev/sda1 /boot

2. Check which kernel sources are installed and which of those sources is currently selected

root # eselect kernel list

3. Make a back-up of the current kernel configuration file

root # cp /usr/src/linux-`uname -r`/.config /home/fitzcarraldo/kernel-config-`uname -r`

4. Select the kernel sources I want to build

root # eselect kernel set <n>

5. Change to the currently selected kernel sources directory

root # cd /usr/src/linux

6. If wanting to build a new version of the kernel, create a template configuration file

N.B. Do NOT do this if rebuilding the kernel version that is currently in use.

root # cp /usr/src/linux-`uname -r`/.config /usr/src/linux/.config

7. Remove any existing object files

Definitely needed if the ‘make‘ command (see further on) returns an error message mentioning an old version of the compiler. It does no harm to perform this step in any case, so I always do it.

root # make clean

8. If building a new version of the kernel, create a new configuration file

N.B. Do NOT do this if rebuilding the kernel version that is currently in use.

root # make olddefconfig

The command ‘make olddefconfig‘ will edit the existing /usr/src/linux/.config file, keeping all the existing options in the file and setting any new options to their recommended (i.e. default) values.
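
Before running ‘make olddefconfig‘ you can preview which options are new, without modifying the .config file, using the kernel build system’s listnewconfig target:

root # make listnewconfig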

9. Display a TUI menu of the kernel options in the .config file and make any desired changes

root # make menuconfig

I have configured the following kernel options relating to the early loading of the Intel CPU microcode (see later):

root # grep CONFIG_BLK_DEV_INITRD /usr/src/linux/.config
CONFIG_BLK_DEV_INITRD=y
root # grep CONFIG_MICROCODE /usr/src/linux/.config
CONFIG_MICROCODE=y
CONFIG_MICROCODE_INTEL=y
# CONFIG_MICROCODE_AMD is not set
# CONFIG_MICROCODE_OLD_INTERFACE is not set
root # grep CONFIG_INITRAMFS_SOURCE /usr/src/linux/.config
CONFIG_INITRAMFS_SOURCE=""

10. Build the kernel and modules

root # make && make modules_install
root # make install
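
On a multi-core machine the build step can be parallelised by passing the number of jobs to make; for example (nproc is part of GNU coreutils):

root # make -j$(nproc) && make modules_install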

11. Rebuild any third-party packages containing kernel modules

These could include packages such as nvidia-drivers, for example.

root # emerge @module-rebuild

In my case, currently the @module-rebuild set only comprises the following two packages:

root # cat /var/lib/module-rebuild/moduledb
a:1:app-emulation/virtualbox-modules-6.1.24
a:1:x11-drivers/nvidia-drivers-470.63.01

12. Rebuild the X Windows Server and X Windows drivers

I always do this even though not always necessary. One less thing to think about (not rebuilding them has sometimes caused me problems).

root # emerge xorg-server xorg-drivers

13. Rebuild NetworkManager if it is installed

I always do this even though not always necessary. One less thing to think about (not rebuilding it has sometimes caused me problems).

root # emerge networkmanager

14. If there is a new version of the Intel CPU microcode, generate it and copy it to the boot directory

Updates to the package sys-firmware/intel-microcode in the last couple of years have not resulted in a change to the version of the Intel CPU microcode for the fourth-generation Intel Core i7-4810MQ CPU in my Clevo W230SS laptop, so I assume Intel no longer supports that CPU. Nevertheless it does no harm to repeat the procedure.

root # rm /boot/microcode.cpio
root # iucode_tool -S --write-earlyfw=/boot/microcode.cpio /lib/firmware/intel-ucode/*
root # rm /boot/intel-uc.img

(The third command stops the grub-mkconfig command (see later) from adding intel-uc.img to the initrd line in the grub.cfg file.)
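
The contents of the generated archive can be listed with cpio; it should contain the early-load firmware path kernel/x86/microcode/GenuineIntel.bin:

root # cpio -itv < /boot/microcode.cpio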

15. If a different version of the kernel has just been built, or if this is the first time upgrading the CPU microcode, create a new grub.cfg file

15.1 First check the contents of /etc/default/grub to make sure it will be OK for the new version of the kernel

root # nano /etc/default/grub

Modify the contents of /etc/default/grub if necessary for the kernel that has just been built.

15.2 Generate a new grub.cfg file

root # grub-mkconfig -o /boot/grub/grub.cfg

15.3 Check the new grub.cfg file includes the loading of the CPU microcode

root # nano /boot/grub/grub.cfg

The last line for each menu entry (i.e. the line before the closing curly bracket of the menu entry) should contain only ‘initrd /microcode.cpio‘, as shown in the example file excerpt below:

[...]
### BEGIN /etc/grub.d/10_linux ###
menuentry 'Gentoo GNU/Linux' --class gentoo --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-simple-525a90f1-8ad2-44a3-ade3-20f18a0a9595' {
    load_video
    insmod gzio
    insmod part_msdos
    insmod ext2
    set root='hd0,msdos1'
    if [ x$feature_platform_search_hint = xy ]; then
        search --no-floppy --fs-uuid --set=root --hint-bios=hd0,msdos1 --hint-efi=hd0,msdos1 --hint-baremetal=ahci0,msdos1  f6ffc085-66fe-4bbe-b080-cec355749f85
    else
        search --no-floppy --fs-uuid --set=root f6ffc085-66fe-4bbe-b080-cec355749f85
    fi
    echo	'Loading Linux 5.10.61-gentoo ...'
    linux	/vmlinuz-5.10.61-gentoo root=/dev/sda5 ro  locale=en_GB i965.modeset=1 rcutree.rcu_idle_gp_delay=1 acpi_enforce_resources=lax reboot=force raid=noautodetect resume=/dev/sda2
    echo	'Loading initial ramdisk ...'
    initrd	/microcode.cpio
}
submenu 'Advanced options for Gentoo GNU/Linux' $menuentry_id_option 'gnulinux-advanced-525a90f1-8ad2-44a3-ade3-20f18a0a9595' {
    menuentry 'Gentoo GNU/Linux, with Linux 5.10.61-gentoo' --class gentoo --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-5.10.61-gentoo-advanced-525a90f1-8ad2-44a3-ade3-20f18a0a9595' {
        load_video
        insmod gzio
        insmod part_msdos
        insmod ext2
        set root='hd0,msdos1'
        if [ x$feature_platform_search_hint = xy ]; then
            search --no-floppy --fs-uuid --set=root --hint-bios=hd0,msdos1 --hint-efi=hd0,msdos1 --hint-baremetal=ahci0,msdos1  f6ffc085-66fe-4bbe-b080-cec355749f85
        else
            search --no-floppy --fs-uuid --set=root f6ffc085-66fe-4bbe-b080-cec355749f85
        fi
        echo	'Loading Linux 5.10.61-gentoo ...'
        linux	/vmlinuz-5.10.61-gentoo root=/dev/sda5 ro  locale=en_GB i965.modeset=1 rcutree.rcu_idle_gp_delay=1 acpi_enforce_resources=lax reboot=force raid=noautodetect resume=/dev/sda2
        echo	'Loading initial ramdisk ...'
        initrd	/microcode.cpio
    }
    menuentry 'Gentoo GNU/Linux, with Linux 5.10.61-gentoo (recovery mode)' --class gentoo --class gnu-linux --class gnu --class os $menuentry_id_option 'gnulinux-5.10.61-gentoo-recovery-525a90f1-8ad2-44a3-ade3-20f18a0a9595' {
        load_video
        insmod gzio
        insmod part_msdos
        insmod ext2
        set root='hd0,msdos1'
        if [ x$feature_platform_search_hint = xy ]; then
            search --no-floppy --fs-uuid --set=root --hint-bios=hd0,msdos1 --hint-efi=hd0,msdos1 --hint-baremetal=ahci0,msdos1  f6ffc085-66fe-4bbe-b080-cec355749f85
        else
            search --no-floppy --fs-uuid --set=root f6ffc085-66fe-4bbe-b080-cec355749f85
        fi
        echo	'Loading Linux 5.10.61-gentoo ...'
        linux	/vmlinuz-5.10.61-gentoo root=/dev/sda5 ro single
        echo	'Loading initial ramdisk ...'
        initrd	/microcode.cpio
    }
}

### END /etc/grub.d/10_linux ###
[...]

16. Reboot

17. Rebuild VirtualBox if it is installed

root # emerge virtualbox

18. Check the current version of the Intel CPU microcode

Either:

root # dmesg | grep microcode

or:

root # grep microcode /proc/cpuinfo

For example:

root # dmesg | grep microcode
[    0.000000] microcode: microcode updated early to revision 0x28, date = 2019-11-12
[    0.335631] microcode: sig=0x306c3, pf=0x10, revision=0x28
[    0.335730] microcode: Microcode Update Driver: v2.2.
root # grep microcode /proc/cpuinfo
microcode       : 0x28
microcode       : 0x28
microcode       : 0x28
microcode       : 0x28
microcode       : 0x28
microcode       : 0x28
microcode       : 0x28
microcode       : 0x28

19. Edit /var/lib/portage/world and add (or change) the specific kernel sources package version

I do this in order to ensure the command ‘emerge --depclean‘ does not remove a specific kernel’s source code during a world update. I want Portage always to install the latest (stable) version of gentoo-sources but not to delete the version of gentoo-sources that corresponds to the kernel my installation is currently using.

For example, let’s say I have just replaced a kernel built from gentoo-sources:4.19.57 with a kernel built from gentoo-sources:4.19.66. My world file would initially contain the following:

[...]
sys-kernel/gentoo-sources
sys-kernel/gentoo-sources:4.19.57
[...]

If, following a successful reboot with kernel 4.19.66, I want to delete the files for kernel 4.19.57 in /boot/ (System.map-4.19.57-gentoo, config-4.19.57-gentoo and vmlinuz-4.19.57-gentoo) and to edit the file /boot/grub/grub.cfg to remove the menu entries for kernel 4.19.57, I would change the world file’s contents to:

[...]
sys-kernel/gentoo-sources
sys-kernel/gentoo-sources:4.19.66
[...]

On the other hand, if, following a successful reboot, I want to keep the files for both kernel 4.19.57 and kernel 4.19.66, I would change the world file’s contents to:

[...]
sys-kernel/gentoo-sources
sys-kernel/gentoo-sources:4.19.57
sys-kernel/gentoo-sources:4.19.66
[...]
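
After editing the world file it is worth previewing what Portage would remove before actually running ‘emerge --depclean‘:

root # emerge --pretend --depclean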

Browsing a WebDAV share in Linux and Windows 10

In this post I explain how I configured my machines running two Linux distributions (Gentoo Linux and Lubuntu 20.10) and my Windows 10 test machine to enable me to browse a shared folder on my file server (running ownCloud, in my case) that uses the WebDAV protocol. I cover two options for configuring Linux to browse WebDAV shares. Further options exist in Linux, but the two methods I give here are fine for my purposes.

I installed ownCloud on my Linux server in a slightly different way to the method in the ownCloud installation manual, and my examples in this post use the URI https://fitzcarraldo.ddns.net/owncloud/remote.php/webdav rather than the usual https://fitzcarraldo.ddns.net/remote.php/webdav for ownCloud, so replace the URI in my examples with the appropriate URI in your case. The username of the user account on each client machine is ‘fitz’, and the ownCloud username (davusername) on the server is ‘bsf’. Obviously replace those with the usernames in your case.

PART 1 – LINUX

Unless I mention the distribution explicitly, the following steps apply to both Linux distributions. As my Gentoo Linux installations use KDE, the steps for Gentoo Linux assume the file manager is Dolphin. My Lubuntu installation uses the file manager PCManFM-Qt.

1. Install davfs2 if it is not already installed

Gentoo Linux:

root # emerge davfs2

That command installs three packages:

acct-group/davfs2
acct-user/davfs2
net-fs/davfs2

Lubuntu 20.10:

user $ sudo apt install davfs2

2. Lubuntu 20.10: Allow mounting by non-root users

user $ sudo dpkg-reconfigure davfs2

   Package configuration
   
    ┌──────────────────────────────────────────┤ Configuring davfs2 ├───────────────────────────────────────────┐
    │                                                                                                           │
    │ The file /sbin/mount.davfs must have the SUID bit set if you want to allow unprivileged (non-root) users  │
    │ to mount WebDAV resources.                                                                                │
    │                                                                                                           │
    │ If you do not choose this option, only root will be allowed to mount WebDAV resources. This can later be  │
    │ changed by running 'dpkg-reconfigure davfs2'.                                                             │
    │                                                                                                           │
    │ Should unprivileged users be allowed to mount WebDAV resources?                                           │
    │                                                                                                           │
    │                               <Yes>                                  <No>                                 │
    │                                                                                                           │
    └───────────────────────────────────────────────────────────────────────────────────────────────────────────┘

(Do not do anything in Gentoo Linux; the SUID bit should be set automatically.)

3. Check the SUID bit has been set (notice the ‘s’ in the file’s permissions)

Gentoo Linux:

user $ ls -la /sbin/mount.davfs
lrwxrwxrwx 1 root root 21 Sep 25 23:03 /sbin/mount.davfs -> /usr/sbin/mount.davfs
user $ ls -la /usr/sbin/mount.davfs
-rws--x--x 1 root root 130752 Sep 25 23:03 /usr/sbin/mount.davfs

If the SUID bit has not been set automatically, you can set it manually:

user $ sudo chmod u+s /usr/sbin/mount.davfs

Lubuntu 20.10:

user $ ls -la /sbin/mount.davfs
-rwsr-xr-x 1 root root 137464 Aug  8  2020 /sbin/mount.davfs

4. Add the user to the davfs2 group

user $ sudo usermod -aG davfs2 fitz

Log out and log in again, then check the user is a member of the group:

user $ groups | grep -q davfs2 && echo "OK"
OK

5. Leave the lines in the following files commented out (i.e. accept the defaults)

/etc/davfs2/davfs2.conf (system-wide)

~/.davfs2/davfs2.conf (user-specific)

6. Option 1 (simplest!) – Enter the URI in the file manager and bookmark it

6.1 Gentoo Linux with KDE

Enter the following URI on the Dolphin file manager’s address line and press Enter:

webdavs://fitzcarraldo.ddns.net/owncloud/remote.php/webdav

You will be prompted to enter the username and password for the WebDAV share.

Select ‘File’ > ‘Add to Places’ in Dolphin to bookmark the share. From then on, you can browse the share by clicking on the share in the Remote section in Dolphin’s Places pane. You can rename the bookmark if you wish (right-click and select ‘Edit…’).

Another way to do this in KDE is as follows:

  1. click on ‘Network’ in the Places pane;
  2. click on ‘Add Network Folder’ next to the address bar;
  3. select ‘WebFolder (webdav)’ and click ‘Next’;
  4. enter the fields as follows:
    • Name: webdav
    • User: bsf
    • Server: fitzcarraldo.ddns.net
    • Port: 443 (I use Port 443 but you may be using a different port)
    • Folder: owncloud/remote.php/webdav
  5. select ‘Create an icon for this folder’ and ‘Use encryption’;
  6. click ‘Save & Connect’;
  7. right-click on the webdav icon in the main Dolphin pane and select ‘Add to Places’.

6.2 Lubuntu 20.10

Enter the following URI on the PCManFM-Qt file manager’s address line and press Enter:

davs://fitzcarraldo.ddns.net/owncloud/remote.php/webdav

You will be prompted to enter the username and password for the WebDAV share.

Select ‘Bookmarks’ > ‘Add to Bookmarks’ in PCManFM-Qt to bookmark the share. From then on, you can browse the share by clicking on the share in the Bookmarks section in PCManFM-Qt’s Lists pane. You can rename the bookmark if you wish (Bookmarks > Edit Bookmarks).

7. Option 2 – Assign a mountpoint at boot

Add the following credentials line in the file ~/.davfs2/secrets:

https://fitzcarraldo.ddns.net/owncloud/remote.php/webdav <davusername> <davpassword>

and set the file permissions as follows:

user $ chmod 600 ~/.davfs2/secrets

Create a user directory onto which to mount the share:

user $ mkdir ~/webdav

Add a line in /etc/fstab to map the WebDAV share onto that directory at boot:

# <file system>                                            <mount point>       <type>  <options>        <dump>  <pass>
https://fitzcarraldo.ddns.net/owncloud/remote.php/webdav   /home/fitz/webdav   davfs   noauto,user,rw   0       0

The options ‘auto‘ and ‘_netdev‘ do not mount the WebDAV share automatically at boot in my installations; if I include those options I am prompted to enter the davusername and davpassword manually early in the boot process. To avoid that I use the ‘noauto‘ option and do not bother including the ‘_netdev‘ option. There are ways to mount a WebDAV share automatically at boot whether your installation uses systemd, OpenRC or another rc system. Nevertheless I prefer the WebDAV share not to be mounted automatically at boot, especially in the case of my laptops.

Reboot to check everything works.

Lubuntu 20.10:

The share will be listed as ‘webdav’ (unmounted) in the Devices section under Lists in PCManFM-Qt. You can click on the unmounted share to mount it, and click on the Unmount icon to unmount it. Everything works as expected.

Gentoo Linux with KDE:

The share is not listed in the Places pane in Dolphin, but it can be mounted manually from the command line as follows:

user $ mount ~/webdav
/sbin/mount.davfs: warning: the server does not support locks

(The ‘user‘ option in /etc/fstab allows the non-root user to mount the share.)

The main pane displaying the contents of ~/webdav/ will only be populated with the contents of the remote folder after the share is mounted.

The share is browsable in Dolphin. I can perform all file and folder operations in KDE apart from one thing: I cannot copy files to the share (neither from the local machine nor from one folder to another on the share); Dolphin displays messages such as ‘There is not enough space on the disk to write file:///home/fitz/testfile.txt’. I suspect the problem is with KDE, because I can copy files to and on the share by using the command line (for example the commands ‘cp ~/test1.txt ~/webdav/‘ and ‘cp ~/webdav/test2.txt ~/webdav/test3.txt‘ work fine). I have yet to find a solution to this issue, so I use Option 1 for Gentoo Linux running KDE, which works fine. To create a bookmark in Dolphin’s Places pane, browse the share and select ‘File’ > ‘Add to Places’.
 
PART 2 – WINDOWS 10

There is a Map Network Drive wizard, but it is not as straightforward for WebDAV shares as it is for SMB shares. See the thread Cannot connect to webdav service for the type of behaviour I experienced, although in my case I could rarely establish a connection using either ‘Map network drive’ or ‘Add a network location’, and the mapping was always lost if I logged out or rebooted, despite selecting ‘Reconnect at sign-in’. I then discovered several invalid URIs in Registry keys, presumably left in the Registry after my various unsuccessful configuration attempts using the wizard. To finally succeed in mapping the ownCloud WebDAV shared folder I had to search for the string ‘fitzcarraldo.ddns.net’ in the Registry (see Steps 1 & 2 below for how to open the Registry) and delete any existing strings similar or identical to ‘https://fitzcarraldo.ddns.net/owncloud/remote.php/webdav‘, as they seemed to interfere with successful mapping of the network directory.

After making sure the Registry no longer contained any incorrect-looking WebDAV URIs for my ownCloud server, I used the following steps:

  1. Right-click on Windows’ Start Menu icon on the left of the Task Bar and select ‘Run’.
  2. Enter ‘regedit’ in the Open box and click ‘OK’.
  3. Select Computer\HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\WebClient\Parameters
  4. If the value in BasicAuthLevel is not already 2, change it to 2.
  5. In the ‘Type here to search’ box on the Task Bar, enter ‘Services’ and press Enter.
  6. Click ‘Services App’.
  7. Scroll down to ‘WebClient’ in the Services window.
  8. Right-click ‘WebClient’ and select ‘Properties’.
  9. If ‘Startup type’ is not already set to ‘Automatic’, change it to ‘Automatic’ and click ‘Apply’.
  10. Launch File Explorer.
  11. Right-click ‘This PC’ and select ‘Map network drive…’.
  12. Select the drive letter (default is Z:).
  13. In the Folder box enter \\fitzcarraldo.ddns.net@SSL\owncloud\remote.php\webdav and make sure only ‘Reconnect at sign-in’ is ticked.
  14. Click ‘Finish’.
  15. A network icon and the label ‘webdav (\\fitzcarraldo.ddns.net@SSL\owncloud\remote.php) (Z:)’ should appear under ‘This PC’. Clicking that icon displays the contents of the shared folder of my ownCloud account on my server.
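
For what it is worth, I believe the same mapping can also be attempted from a Command Prompt with the net use command once the WebClient service is running (a sketch; I have not needed to use it myself):

net use Z: \\fitzcarraldo.ddns.net@SSL\owncloud\remote.php\webdav /persistent:yes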

The only Registry entries containing ‘fitzcarraldo.ddns.net’ found by ‘Edit’ > ‘Find…’ are now the following:

Computer\HKEY_CURRENT_USER\Network\Z
RemotePath     REG_SZ     \\fitzcarraldo.ddns.net@SSL\owncloud\remote.php\webdav

Computer\HKEY_CURRENT_USER\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\Map Network Drive MRU
a     REG_SZ     \\fitzcarraldo.ddns.net@SSL\owncloud\remote.php\webdav

Computer\HKEY_CURRENT_USER\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\MountPoints2\##fitzcarraldo.ddns.net@SSL#owncloud#remote.php#webdav
LabelFromReg     REG_SZ     webdav (\\fitzcarraldo.ddns.net@SSL\owncloud\remote.php)

Computer\HKEY_CURRENT_USER\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\PublishingWizard\AddNetworkPlace\AddNetPlace\LocationMRU
a     REG_SZ     https://fitzcarraldo.ddns.net/owncloud/remote.php/webdav

Computer\HKEY_USERS\S-1-5-21-4039722433-590489090-552845671-1001\Network\Z
RemotePath     REG_SZ     \\fitzcarraldo.ddns.net@SSL\owncloud\remote.php\webdav

Computer\HKEY_USERS\S-1-5-21-4039722433-590489090-552845671-1001\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\Map Network Drive MRU
a     REG_SZ     \\fitzcarraldo.ddns.net@SSL\owncloud\remote.php\webdav

Computer\HKEY_USERS\S-1-5-21-4039722433-590489090-552845671-1001\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\MountPoints2\##fitzcarraldo.ddns.net@SSL#owncloud#remote.php#webdav
LabelFromReg     REG_SZ     webdav (\\fitzcarraldo.ddns.net@SSL\owncloud\remote.php)

Computer\HKEY_USERS\S-1-5-21-4039722433-590489090-552845671-1001\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\PublishingWizard\AddNetworkPlace\AddNetPlace\LocationMRU
a     REG_SZ     https://fitzcarraldo.ddns.net/owncloud/remote.php/webdav

 
CONCLUSION

There you have it. I can browse my ownCloud user account folders on my server from my machines running Linux and from my test machine running Windows 10.

Installing and configuring davfs2 in Linux and using Option 1 to browse a WebDAV share are very easy in both Gentoo Linux running KDE and in Lubuntu 20.10. Using Option 2 is also very easy in Lubuntu 20.10 but is not easy in Gentoo Linux running KDE, and I still need to find out if there is a better approach for Option 2 in Gentoo Linux running KDE.

I found Windows 10 the most problematic, despite the apparent simplicity of the ‘Map network drive’ and ‘Add a network location’ wizards. I discovered that, if I didn’t get the format of the URI correct the first time, Windows 10 would leave ‘cruft’ in the Registry that apparently prevented further mapping attempts from working properly and consistently.

Anyway, everything works the way I want and I hope this post is of some help to others wanting to browse a share using WebDAV, be that a folder in ownCloud, Nextcloud or any other network service requiring the WebDAV protocol.

Removing PipeWire in Gentoo Linux

PipeWire, all the rage these days, was originally developed for video but was later enhanced to support audio as well, and is now an alternative to PulseAudio and JACK. My laptop running Gentoo Stable (amd64) with the KDE Plasma Desktop had been working fine with PipeWire for some time. The pulseaudio and screencast USE flags were both declared in the file /etc/portage/make.conf. Both audio playback and recording worked fine until a recent upgrade of the packages in my world file, when neither worked any more. The Audio Volume loudspeaker icon (the applet kde-plasma/plasma-pa) on the KDE Plasma panel had a red line through it, and the KMix loudspeaker icon (the applet kde-apps/kmix) on the panel was greyed out. Although I cannot be sure, I suspect the problem started when the first version of PipeWire that supported audio was released. The output of the command ‘ps -ef | grep pulse‘ showed me that both PulseAudio and PipeWire were running. At the time I did not know that PulseAudio is not supposed to be running at the same time as PipeWire. Sometimes when I booted the laptop and logged in, the loudspeaker icons on the Panel would appear correctly and audio output would work properly, but usually this was not the case. This behaviour made me wonder if there was some sort of race condition between the two applications at startup.
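
A quick way to check which of the two audio servers is running, assuming pgrep from procps is installed:

user $ pgrep -a pulseaudio
user $ pgrep -a pipewire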

Anyway, I stopped PulseAudio being launched automatically at startup. I did this by editing the file /etc/pulse/client.conf to add the line ‘autospawn = no‘ (a comment in the as-installed file indicates that the default value for autospawn is ‘yes‘). That did indeed stop PulseAudio from being launched automatically, and left only PipeWire running. The loudspeaker icons were then displayed correctly on the Panel when I logged in to the KDE Plasma Desktop, and audio output then worked. However, PipeWire did not detect the laptop’s built-in microphone, and no Recording channel was displayed by KMix and Audio Volume. The troubleshooting chapter of the Arch Linux Wiki article on PipeWire has a section suggesting a couple of fixes for this problem (Microphone is not detected by PipeWire) but, even so, I decided to ditch PipeWire and revert to PulseAudio. As much as I dislike PulseAudio (see some of my previous posts on the various problems I have experienced with it), these days it is more or less stable on this laptop and I do not have to mess around too much with audio settings.

A few KDE packages in Gentoo Linux depend on PipeWire (they require the screencast USE flag to be set). I therefore added the following two entries to a file in the directory /etc/portage/package.use/ in order to stop PipeWire being required:

>=sys-apps/xdg-desktop-portal-1.8.1 -screencast
>=kde-apps/krfb-20.12.3 -wayland

I was then able to use the usual command ‘emerge -uvDN @world‘, followed by the command ‘emerge --ask --depclean‘, to rebuild the affected packages and remove PipeWire. I also deleted the line ‘autospawn = no‘ that I had previously added to the file /etc/pulse/client.conf, so that PulseAudio would again be launched automatically at startup. Audio playback and recording are now back to normal. I will probably try PipeWire again in the future but, for the moment, I don’t need it. According to the Gentoo Linux Wiki article on PipeWire:

Warning
As of mid 2021, PipeWire is still in active development and not everything is fully integrated, tested, or implemented – though the project is moving along. While replacing existing audio solutions on Gentoo is possible, the experience is currently not guaranteed to be perfect or free of issues and bugs.

I will therefore wait until the consensus amongst Gentoo Linux users is that PipeWire is trouble-free before I try it again.

croc – another file transfer method

I have lost count of the number of times I have had to send a large file to someone at work, usually in a hurry. I’ve used Dropbox, ownCloud, Firefox Send (no longer available) etc. Transferring large files became a bit easier when e-mail service providers increased the size limit for attachments, but that is still not a solution for very large files. The xkcd cartoon FILE TRANSFER sums up the situation nicely.

I recently discovered the command line utility croc, which the author claims is a way to ‘easily and securely transfer stuff from one computer to another.’ I thought I’d give it a try, if only to have another tool to fall back on in an emergency. It does rely on both ends having croc installed, but hopefully that should not be a show-stopper as croc is available for Linux, Windows, macOS and BSD. To quote the author:

croc differs from a utility like scp because it doesn’t require any two computers to have enabled port-forwarding. Instead, croc will use a relay – a temporary server set up locally (if both computers are on lan) or publicly (default is at croc4.schollz.com). Any two computers can connect to the relay, and after securing their channel with PAKE [password authenticated key exchange], they can transfer encrypted metadata and data through the relay. The relay works by first having the computers communicate the PAKE protocol via websockets, and then exchanging encrypted metadata, and then stapling the TCP connections directly so that they can transfer directly.

So, to use croc you will be dependent on the public relay provided by the author unless you set up your own relay (instructions are provided in the author’s original 2018 blog post introducing croc – see link above – and in various third-party articles about croc, such as ‘Securely Transfer Files and Folders Between Computers Using Croc‘ and ‘Transfer Files And Folders Between Computers With Croc‘).

Anyway, I installed croc in Lubuntu and Gentoo Linux from the author’s GitHub repository and indeed it is easy to use and works fine. The binary releases for the various OSs and Linux distributions can be found on the Releases page of the GitHub repository or via the OS package manager.

Lubuntu 20.10:

user $ wget https://github.com/schollz/croc/releases/download/v9.1.6/croc_9.1.6_Linux-64bit.deb
user $ sudo dpkg -i croc_9.1.6_Linux-64bit.deb

Gentoo Linux:

root # emerge net-misc/croc

(Note that croc ebuilds are not currently marked as Stable in the Gentoo Linux Portage tree, so you’ll have to unmask them by keyword if you are using the Stable branch.)
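
For example, something like the following (assuming the amd64 architecture and that /etc/portage/package.accept_keywords/ is a directory) before merging the package:

root # echo "net-misc/croc ~amd64" >> /etc/portage/package.accept_keywords/croc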

Termux:

I even installed croc in Termux on my Samsung Galaxy Note 20 Ultra 5G, and it works in Android too:

$ pkg install croc

Other OSs and other Linux distributions:

See the instructions in the README file online.

Using croc

Using croc is as simple as entering a command on one computer, informing (via e-mail, telephone, SMS, Signal or other social media) the person using the other computer of the command to use, and entering that command on the other computer. For example:

Sender

user $ croc send Documents/flight-times.ods
Sending 'flight-times.ods' (16.6 kB)
Code is: 8878-salary-courage-roger
On the other computer run

croc 8878-salary-courage-roger

Receiver

user $ croc 8878-salary-courage-roger
Accept 'flight-times.ods' (16.6 kB)? (Y/n) 

If the receiving user then enters ‘Y’, the sending user sees something similar to this:

user $ croc send Documents/flight-times.ods
Sending 'flight-times.ods' (16.6 kB)
Code is: 8878-salary-courage-roger
On the other computer run

croc 8878-salary-courage-roger

Sending (->192.168.1.74:60740)
 100% |████████████████████| (17/17 kB, 10.918 MB/s)
user $ 

and the receiving user sees something similar to this:

user $ croc 8878-salary-courage-roger
Accept 'flight-times.ods' (16.6 kB)? (Y/n) Y

Receiving (<-[::1]:39442)
 100% |████████████████████| (17/17 kB, 3.989 MB/s)
user $ 

The observant reader will notice that the above example shows a file being transferred on the same computer. When transferred between different computers the IP addresses of each computer will be displayed instead. I have used croc to transfer files between different computers on my home network (I would normally just use my NAS for this, though), between remote computers on the Internet, and between my computers and my phone via mobile broadband, and croc works in all cases.

I have not mentioned all croc’s features. I’ll leave you to read up on croc in more detail in the links I’ve given above. It looks like it might be a useful tool to have installed.

Using adb tools in Linux to remove bloatware from my Samsung Galaxy Note 20 Ultra

Samsung included a lot of bloatware on my Galaxy Note 20 Ultra 5G, and it is not possible to uninstall it using Play Store. However, it is possible to remove this stuff using adb tools. I got rid of the bloatware I don’t want very easily using the Linux version of the adb tools.

I have never had a Facebook account and never will, so I decided to remove all trace of it as follows:

1. Installed adb tools

In Lubuntu 20.10:

user $ sudo apt install android-tools-adb

In Gentoo Linux:

root # emerge dev-util/android-tools

2. Enabled ‘Developer Options’ on the phone

‘Settings’ > ‘About Phone’ > ‘Software Information’ and quickly tapped 7 times on ‘Build number’.

3. Enabled USB Debugging on the phone

‘Settings’ > ‘Developer options’, scrolled down and tapped on ‘USB debugging’.

4. Launched adb

user $ adb start-server
* daemon not running; starting now at tcp:5037
* daemon started successfully

5. Connected the phone to the computer using the USB cable

A few prompts on the phone asked whether or not I wanted to allow USB debugging. Tapped ‘Always allow from this computer’ and tapped ‘OK’.

6. Uninstalled Facebook

The packages I needed to uninstall were:

com.facebook.appmanager
com.facebook.katana
com.facebook.services
com.facebook.system

First I tried to uninstall with the ‘-k‘ option:

user $ adb uninstall -k --user 0 com.facebook.appmanager
The -k option uninstalls the application while retaining the data/cache.
At the moment, there is no way to remove the remaining data.
You will have to reinstall the application with the same signature, and fully uninstall it.
If you truly wish to continue, execute 'adb shell cmd package uninstall -k'.

See ‘Difference between pm clear and pm uninstall -k on Android’.

I have never been a member of Facebook and never will, so I dispensed with the ‘-k‘ option and entered the following commands:

user $ adb uninstall --user 0 com.facebook.appmanager
Success
user $ adb uninstall --user 0 com.facebook.katana
Success
user $ adb uninstall --user 0 com.facebook.services
Success
user $ adb uninstall --user 0 com.facebook.system
Success

I didn’t want the LinkedIn, Samsung Global Goals and Spotify apps either, so I uninstalled those too:

user $ adb uninstall --user 0 com.linkedin.android
Success
user $ adb uninstall --user 0 com.samsung.sree
Success
user $ adb uninstall --user 0 com.spotify.music
Success
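
Note that ‘adb uninstall --user 0‘ only removes the app for the primary user. As I understand it, should you change your mind later, a package removed this way can be restored without a factory reset (com.spotify.music here is just an example):

user $ adb shell cmd package install-existing com.spotify.music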

7. Stopped the adb server on the computer

user $ adb kill-server

8. Unplugged the phone from the computer.

That’s it.

In order to uninstall apps using this method, you will need to know the exact package name of the app you want to get rid of. For this, use Play Store to install App Inspector (there are several apps with this name in Play Store; I installed the app by Projectoria Ltd but the others look OK too). Launch App Inspector and you can find the package name under the name of the app. The package name starts with ‘com‘ or ‘net‘ followed by words separated by dots.

For example, App Inspector shows the package name for LinkedIn as ‘com.linkedin.android‘.

Some useful commands:

To get a list of all the packages installed on my phone:

user $ adb shell pm list packages

To get a list of system apps only:

user $ adb shell pm list packages -s

To get a list of only Samsung packages:

user $ adb shell pm list packages | grep samsung

To search for e.g. facebook packages:

user $ adb shell pm list packages | grep facebook

(Returns nothing now, as I already deleted all the Facebook packages. Yay!)

To search for other packages, e.g.:

user $ adb shell pm list packages | grep kids
package:com.samsung.android.kidsinstaller
package:com.sec.android.app.kidshome

Is Gentoo Linux an anachronism?

When I started visiting the Gentoo Linux discussion forums in 2007 there were at least three pages of posts daily, if not more. These days there is usually one page. I’m sure the number of Gentoo Linux users has dropped significantly since then. Interest in the distribution has certainly decreased since its heyday: Google Trends – gentoo linux.

I don’t think the drop in interest is limited to individuals either. Articles such as ‘Flying Circus Internet Operations GmbH – Migrating a Hosting Infrastructure from Gentoo to NixOS‘ lead me to suspect that some companies have switched to other distributions over the years. NASDAQ’s use of ‘a modified version of Gentoo Linux’ was publicised in 2011 (How Linux Mastered Wall Street) but I do not know if it still uses the distribution and, in any case, that is only a single significant entity. I personally have never come across another user (corporation or individual) of Gentoo Linux, although I do know several companies and individuals using distributions such as Ubuntu and Fedora.

Gentoo Linux is certainly not for everyone. In recent years the user base seems to have settled down to a smaller number of people, primarily consisting of enthusiasts who appreciate its advanced features and are prepared to put in the extra effort and time required to create and maintain a working installation. I’m sure it also still has a place in some specialised commercial applications, but I have my doubts its deployment comes anywhere near that of the major distributions such as Ubuntu, Red Hat, Fedora, etc. If I were only interested in using an OS that enabled me to perform typical personal and professional tasks, I wouldn’t be using Gentoo Linux. Some people touted Gentoo Linux’s configurability as giving it a speed advantage over binary distributions but, having correctly installed and used Gentoo Linux and various other distributions on the same hardware, I cannot say I noticed an improvement in performance.

I think one has to choose the right tool for the job. I wouldn’t dream of installing Gentoo Linux on any of my family’s machines or on older hardware. Personal experience doing the latter has taught me it is a waste of time (both installation itself and subsequent maintenance). I installed Lubuntu on my family’s desktop machine because it is a reliable, low-maintenance OS with automatic update notifications and painless, fast updates. On the other hand I installed Gentoo Linux on my laptops because I want to tinker with the OS, configure it exactly the way I want, be able to apply patches to source code easily, install multiple versions of the same application (‘slotted applications’), learn more about how the OS works, and experiment. You can do that too with a binary distribution, but with Gentoo Linux I feel I know a lot more about the kernel, OS internals, package management and package customisation than with a pre-canned binary distribution. It really is good for learning about Linux in more depth than a binary distribution.

My old Compal NBLB2 laptop with a first-generation Intel Core i7 CPU is eleven years old and I have never touched the package installation log file /var/log/emerge.log since installation in March 2010. Ten years after I first installed Gentoo Linux on it I ran the command ‘qlop -c -H‘ out of curiosity to see how much time had been spent building packages over its lifetime. The reported statistics were as follows:

total: 492 days, 5 hours, 47 minutes, 44 seconds for 67841 merges, 4295 unmerges, 446 syncs

That’s roughly 13% of its then 124-month ‘life’ spent compiling.

It has an Intel Core i7 720QM CPU (1.6 GHz, but throttled to 933 MHz by Compal due to the size of the PSU Compal supplied, although I bought a higher-wattage PSU a year ago and the CPU seems to have run at 1.6 GHz since then). It has always had KDE installed, and numerous upgrades to KDE have kept it busy compiling. Each version of LibreOffice, qtwebengine, Firefox etc. has also taken a very long time to compile. Until I removed qtwebengine and the few packages dependent on it this year, even with jumbo-build enabled qtwebengine took more than a day to build. Admittedly I did have trouble some years ago with the HDD becoming almost full with temporary directories and files accumulated over a long period (/usr/tmp/portage/ contained a whopping 30 GB of directories and files until I cleared it out), which also slowed things down, but that is no longer the case. Unfortunately that laptop has ~amd64 (Gentoo Linux Testing) installed rather than amd64 (Gentoo Linux Stable), so it is not possible to install the binary package of LibreOffice due to dependency conflicts. As all the big packages take so long to compile on this particular laptop I ended up merging the firefox-bin package rather than the firefox source code package, and I use Microsoft Office 2007 running under WINE rather than LibreOffice.

My Clevo W230SS laptop (fourth-generation Intel Core i7-4810MQ CPU @ 2.80GHz) running Gentoo amd64 (Gentoo Linux Stable) with a few ~amd64 (testing) packages is six years old and I have not touched /var/log/emerge.log since installation in April 2015. Five years after I first installed Gentoo Linux on it I ran the command ‘qlop -c -H‘ to see how it compared to the older Compal NBLB2 laptop running Gentoo ~amd64 mentioned above. The reported statistics were as follows:

total: 53 days, 11 hours, 3 minutes, 31 seconds for 24494 merges, 1717 unmerges, 169 syncs

That’s roughly 3% of its then 64-month ‘life’ spent compiling. Nowhere near as bad as my older laptop, but still a lot of time spent compiling. The merge time for qtwebengine 5.14.2 was 4 hours 25 minutes with that fourth-generation Intel Core i7 CPU, and later versions of qtwebengine take even longer to build.

I personally would now only consider installing Gentoo Linux on a machine with at least 16 GB RAM and a CPU with at least four cores and a speed of circa 3 GHz or more. Additionally, although I have been a user of KDE in Gentoo Linux all these years, I would probably switch from KDE to a simpler, less resource-hungry and less feature-rich (some might say less ‘bloated’!) desktop environment such as LXQt in new installations of Gentoo Linux.

One thing that has improved a lot since I started using Gentoo Linux over a decade ago is the package manager Portage, at least in terms of dependency resolution and blockage handling. I used to have to do a lot more work to resolve problems during package upgrades; ‘merging world’ (upgrading installed packages) is generally a lot less troublesome than it used to be ten years ago. Portage is a lot slower than it used to be, but that’s because it does a lot more than it used to do. I used to have to use revdep-rebuild – a utility to resolve reverse dependencies and rebuild affected packages – frequently, but not any more. Building software from source code takes time, though, so plenty of RAM and a fast CPU are important for installing packages, however good the package manager itself.

Some people maintain that the reduction in posts in the Gentoo Linux Forums could just mean users have fewer problems these days compared to earlier years. However, I doubt that would account for the much larger number of pages of ‘posts from the last 24 hours’ in earlier years, or for the big drop in Google Trends statistics since 2004. Posts from new users do appear from time to time in the forums, so I suspect there are simply not as many new users as a decade or more ago. There are also posts from long-time users when there are major changes such as an upgrade to a newer version of Python or a profile change.

Another argument against a drop in popularity is that many of the users in the high number of users online in a 24-hour period in earlier years were spambots. I used to be a moderator of the Sabayon Linux forums, so I’m well aware of the phenomenon and I had to ban quite a few spammers & spambots in my time. But I’m not buying for one moment that the majority or even a significant number of the 1850 users logged in to the Gentoo Forums on 30 December 2004 were spambots. I am aware of puerile, more-recent attempts by a few lone individuals to boost the distribution’s exposure, such as ‘Gentoo Linux Forums – I’ve just set up Gentoo on distrowatch as my homepage‘ but I doubt very much that has had any impact on uptake. Mind you, such antics are not confined to Gentoo Linux; I’ve seen similar posts in the forums of some other distributions.

I think former Gentoo Linux developer and Council member Donnie Berkholz got it about right in his 2013 article Ranking Linux distributions, and the decline of the traditional distros.

A discussion on Reddit in 2016 indicated that other Linux users have noticed a decline in use of the distribution: Why did Gentoo peak in popularity in 2005, then fade into obscurity?.

The decline in use of Gentoo Linux is not just due to lower uptake by new users; veteran users have also moved away due to its demands on time and effort: ‘Au Revoir, Gentoo – Sell Me A New Linux Distro‘. There are occasionally posts in the Gentoo Linux forums by previous users announcing that they have started using the distribution again, but I strongly suspect they are exceptions to the general trend.

Gentoo Linux is not as popular as it used to be, and there is no way of dressing it up any other way. However, Gentoo Linux can still be worthwhile for the Linux enthusiast and ‘power user’ who enjoys tinkering and learning more about Linux internals, and who does not mind the significant additional time required to maintain it, and the time, effort and extra energy consumption required to compile packages. But I would not recommend Gentoo Linux if you just want a Linux installation in order to perform typical desktop tasks such as browsing the Web, sending e-mails, word processing, working on spreadsheets and so on.

Hardware has become much more powerful since Gentoo Linux’s heyday, and drivers have improved significantly (I shudder to think of the time I spent years ago getting Linux to work with some devices), making the optimisation and lower-level tinkering that Gentoo Linux facilitates less of a necessity. Furthermore, binary distributions have improved noticeably over the years, becoming easier to install, more user-friendly, easier to maintain, more reliable and better-looking. The improvements in binary distributions have, in my opinion, also contributed to the drift away from Gentoo Linux.

Nevertheless, I believe Gentoo Linux will not disappear; it is rather unique and there will always be people who enjoy the challenge of developing it and/or using it rather than a binary distribution. Furthermore, the additional control Gentoo Linux offers those who are prepared to put in the extra time and effort to use it, plus its high degree of ‘customisability’, make it attractive to certain users or for certain specialist applications. Then there are those who simply prefer not to follow the mainstream and want to try something different. I certainly hope Gentoo Linux continues long into the future and manages to maintain its distinctiveness, including the option of not using systemd if the user so chooses. Using OpenRC – which has never caused me a problem in over a decade – instead of systemd has become increasingly difficult for many Gentoo Linux users because upstream software is increasingly being written specifically to use systemd and would require significant effort to patch (KDE Control Module Plasma Firewall being a recent example). Portage is an excellent, powerful package manager, as is the accompanying suite of tools, and I don’t think there is anything that can beat that (probably one of the reasons the developers of Google Chrome OS opted to use Portage). Now, if only someone could release a machine an average home user could afford that could compile source-code packages such as qtwebengine, LibreOffice and Firefox in, say, one minute, perhaps Gentoo Linux’s popularity would increase! 😉 Until Moore’s Law results in manufacturers of home computers catching up with the build requirements of Gentoo Linux, the distribution will definitely remain a niche player. Personally, that does not bother me, although I must admit I am finding the time and effort to maintain my installations rather irksome these days.

Review of an MT-ViKI 2-port automatic KVM switch

Three years ago I bought a two-port KVM (keyboard, video and mouse) switch with the intention of using it to connect my keyboard and monitor to my headless server to investigate a boot-up problem. However, I found the cause of the problem quickly and never needed to use the KVM switch, which had been sitting on a shelf ever since.

Recently I bought a cheap second-hand desktop machine for another project and, rather than having a second keyboard, mouse and monitor on my desk, I decided to use the spare KVM switch.

Schematic diagram of connections to MT-261KL KVM switch.

The KVM switch was manufactured by MT-ViKI Electronic Technology Co., Ltd, a Chinese company that manufactures a range of KVM switches. The model I bought is the MT-261KL-FBA AUTO KVM USB+AUDIO. It has two DE-15 input ports for connection to two computers using the custom cables provided, a DE-15 VGA output port, an audio Line-Out port, a Microphone port and three USB 2.0 ports. Two cables with pigtails were supplied with the switch. At one end of each cable there is a DE-15 (VGA) plug, a pigtail with a USB Type-A plug, a pigtail with a 3.5 mm Line-In plug and a pigtail with a 3.5 mm Microphone plug. All these are for connection to the computer. At the other end of each cable is a DE-15 plug, which is for connection to one of the DE-15 ports labelled PC1 and PC2 on the KVM switch. Video, audio and USB signals are all transferred via this DE-15 plug at the KVM switch end. The device does not require an external power supply unit, so I assume it is powered from either of the two computers’ USB ports.

The two custom cables supplied with the MT-261KL KVM switch.

MT-261KL KVM switch with cables connected.

Left end of MT-261KL KVM switch with audio sockets.

VGA, USB, Line Out and Mic plugs of MT-261KL custom cable connected to desktop.

VGA, USB, Line Out and Mic plugs of MT-261KL custom cable connected to laptop.

My USB keyboard and USB mouse are plugged into two of the three USB ports on the KVM switch. I can switch them and the monitor between the two computers either by pressing a push-button on top of the KVM switch or by pressing specific keyboard keys in sequence within 2 seconds of each other:

  • Scroll Lock + Scroll Lock + 1 (or 2) to select PC port directly
  • Scroll Lock + Scroll Lock + Down Arrow to select Next Port
  • Scroll Lock + Scroll Lock + Up Arrow to select Previous Port
  • Scroll Lock + Scroll Lock + S to select Auto Scan
  • Scroll Lock + Scroll Lock + B to toggle Beep On/Off
  • ESC to exit Auto Scan mode

Two LEDs on the KVM switch are used to indicate which computer is currently connected to the keyboard, monitor and mouse. The loud beep that the switch emits when switching from one computer to the other can be disabled if desired.

This switch supports monitor resolutions up to 2048 x 1536, and I’m using 1920 x 1080 in both OSs. Any monitor that supports a VGA connection should work. My monitor happens to be a 23-inch ViewSonic VX2363SMHL which has both VGA and HDMI sockets and cables. Any USB keyboard and mouse should work; I’m using an HP K45 keyboard and a Logitech M90 mouse. My laptop runs Gentoo Linux and the desktop runs Windows 10, and the switch works fine with both machines.
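On the Gentoo Linux laptop, the resolution actually being sent to the monitor through the KVM switch can be confirmed with the xrandr utility (output names vary from machine to machine):

user $ xrandr --query | grep ' connected'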

Although the custom cables between the KVM switch and the computers are quite bulky and stiff, I managed to connect everything with the KVM switch in a convenient position on my desk. Selecting the computer from the keyboard is easier than reaching for the push-button on the KVM switch, though. There is a certain amount of ‘cable spaghetti’ on my desk as a result of all the cables, but I have arranged them as tidily as possible. The audio sockets on my laptop are on the opposite side of the laptop to the VGA socket, which does not help. Fortunately the audio jack plug cables that branch out of the custom cable are just long enough to reach the Headphone and Mic sockets on the laptop.

There is a third USB port on the KVM switch that I am not using. It could be used to connect another USB device (a printer, for example) that could then be switched between the two computers. As there is only one USB connection to each computer, the KVM switch must be acting as a USB hub.
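This is easy to verify from Linux: lsusb (part of the usbutils package) can print the USB device tree, in which the KVM switch should appear as a hub with the keyboard and mouse attached beneath it:

user $ lsusb -t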

I have not yet connected an external microphone to the KVM switch, but I do have my external stereo powered speakers connected to it. The audio from the speakers connected via the switch is still OK, although some noise is picked up from all the cables on and under my desk. However, I believe that has as much to do with the long, thin, unshielded audio cable from the powered speakers (Logitech X-140 multimedia speakers, which are not of high quality) as with the switch itself; I suspect a shorter, shielded cable would perform much better.

Anyway, if you ever need a KVM switch that supports a monitor with a VGA port, this model is reasonable. MT-ViKI also makes models that can switch a keyboard, monitor and mouse between more than two computers, as well as models with HDMI ports for computers that do not have VGA ports. By the way, I have no association with the company.

Digital audio fidelity

Take the following two hearing tests while wearing high-quality over-ear headphones connected to a high-quality sound card:

The first tests your ability to hear sounds of different frequencies. Older people are doing well if they can hear up to 15 kHz, and many cannot; I can hear up to 10 kHz in one ear and, I think, up to 14 kHz in the other.
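If you want a rough do-it-yourself version of the frequency test on Linux, the SoX utility can generate a sine tone at any frequency. A sketch, assuming the sox package (media-sound/sox in Gentoo) is installed; this plays a 15 kHz tone for three seconds:

user $ play -n synth 3 sine 15000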

The second tests your ability to discern audio quality (quantisation and sample frequency). My score was 33%!

Those tests are eye-openers. My family did a lot better than me, especially the younger members. One of them could hear above 20 kHz in the first test and scored 100% in the second, which is exceptional: all the others scored 50%, so even young people struggle to hear a difference.

Even with my poor hearing I can hear how bad a 128 kb/s MP3 music track sounds, but at 320 kb/s it’s a different matter. In most cases I can’t hear the difference between 320 kb/s MP3 and a 16-bit 44.1 kHz Audio CD and, as the tests linked above demonstrate, most other people struggle to tell the difference too (watch the video ‘Audiophile or Audio-Fooled? How Good Are Your Ears?’).
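That comparison is easy to reproduce if you have a track ripped to WAV: the LAME encoder can produce the same track at both bit rates for a blind listen. A sketch, assuming the lame package is installed and using track.wav as a hypothetical file name:

user $ lame -b 128 track.wav track_128.mp3
user $ lame -b 320 track.wav track_320.mp3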

Regarding sampling theory, the video ‘Digital Audio: The Line Between Audiophiles and Audiofools’ is quite good if you do not understand why 16-bit 44.1 kHz was chosen for Audio CDs. As to finer quantisation and higher sampling frequencies, see ‘The Difference Between 24-bit & 16-bit Audio is Inaudible Noise’.
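The arithmetic behind those choices is simple enough to check with bc. A 44.1 kHz sampling frequency gives a Nyquist limit of 22.05 kHz, comfortably above the roughly 20 kHz upper limit of human hearing; 16-bit samples give about 96 dB of dynamic range (20 log10 of 2^16); and the raw bit rate of stereo CD audio works out at 1,411.2 kb/s, more than four times the 320 kb/s MP3 mentioned above:

user $ echo '44100 / 2' | bc
user $ echo '20 * l(2^16) / l(10)' | bc -l
user $ echo '16 * 44100 * 2' | bc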

As to the perennial debate about CD audio versus vinyl audio, an audiophile friend of mine with a lifelong passion for hi-fi has an insanely expensive hi-fi system integrated throughout his house – including a room designed exclusively for listening to music – and controlled via iPads, with hand-built pre-amps imported from a small, specialist manufacturer. His main speakers alone cost more than most people pay for an expensive sound system. Yet he switched to dedicated music servers holding lossless (FLAC and WAV) files, either purchased directly or ripped from well-produced 16-bit 44.1 kHz Audio CDs, and got rid of his expensive top-end record deck.

As for legacy physical media, Audio CDs are more vulnerable than vinyl. Some of the Audio CDs I bought around 20 years ago have already suffered the well-known phenomenon of disc rot, despite being carefully kept and handled; optical discs are rubbish from a longevity point of view. Vinyl records, on the other hand, will last almost indefinitely if kept in a controlled environment: ‘Record collector builds world’s largest vinyl hoard – six million and counting’.

However, much as I love LP artwork, I’d rather have my friend’s digital system any day. Even with my degraded hearing, the music it produces sounds fabulous. Not to mention that the slightest click from a dust particle in an LP groove is, to me, akin to nails scraping on a blackboard. Those were the days!