Configuration of the APC UPS Daemon on my Linux server

For obvious reasons my Linux home server supplying NAS and Web services 24/7 is connected to a UPS. The UPS model (now discontinued) I use is a 700VA 230V APC Back-UPS ES-BE700G-UK. It is connected to one of the server’s USB ports via an APC-supplied cable so that the server can interrogate the UPS and so that the UPS can send unsolicited messages to the server (e.g. mains power supply interrupted, mains power supply restored, shut down the server now, and so on). The open-source APC UPS Daemon apcupsd that I installed on the server enables the server to react automatically to UPS events. apcupsd provides a shell script apccontrol and various other shell scripts to act on these events. All these scripts can be customised by the user. As users with an APC UPS that supports this functionality are likely to be interested in configuration of apcupsd, I think it might be useful for me to explain how I configured apcupsd.

An Ethernet switch and an external USB 6 TB HDD (connected to the server for automated daily backups) are in the same room as the server and also connected to the UPS. If my router were in the same room as the server then it would be connected to the same UPS as the server but, as it has to be in a different room next to the broadband provider’s master socket, it is instead connected to a separate mini UPS so that the server can still send e-mails after an interruption to the mains power supply.

Before getting into the configuration of apcupsd, I should mention that I have come across some home users who think the purpose of a UPS is solely to protect against loss of mains supply from the electricity utility company. Whilst that is one of the purposes of a UPS, home users should note that home fuses can blow and RCD consumer units can trip even when there is no interruption to the mains supply to the house from the utility company. So the argument that the local utility company is extremely reliable is not a reason to dispense with a UPS for a server. Well, not unless you are prepared to accept the risk of corruption of the OS and/or users’ data.

It is possible to configure apcupsd to perform a controlled shutdown of a server if the mains power supply to a UPS has been interrupted for a user-specified amount of time or if the UPS battery’s remaining charge has dropped to a user-specified percentage of its full capacity. If desired, it would also be possible to configure apcupsd and a server’s firmware to reboot the server automatically once mains power has been restored to the UPS following an earlier controlled shutdown of the server (see ‘Arranging for Reboot on Power-Up‘ in the APCUPSD User Manual). However, as I am often away from home on work trips and cannot immediately check what has happened, I do not want the server to reboot automatically as soon as mains power returns, in case the supply is intermittent for whatever reason. Instead, after receiving an e-mail from the server informing me it is shutting down, I would phone home and ask a family member what has happened and, if I were satisfied that everything is now OK, I would ask them to power up the server. I therefore configured the server’s BIOS not to boot automatically when power is restored after the server has been shut down.

Although apcupsd offers a mechanism to tell the UPS to go into hibernation, I am not interested in trying to get the UPS to hibernate once the OS shuts down, because I do not want to risk the UPS going into hibernation before the OS has shut down completely and the server has powered down. Furthermore, the server is not the only device powered by the UPS. The consequence is that, if there were a long delay before the mains power supply to the UPS is restored, the UPS would continue to supply power until its battery is flat. However, the mains supply to the UPS is unlikely to be down for long, so the battery is unlikely to drain completely once the server has powered down; power will usually be restored before the battery is flat. The power requirement of the tiny Ethernet switch is small, and the external USB HDD goes to sleep automatically after a few minutes of inactivity anyway. It is more important that the server is powered down ‘gracefully’.

The mechanism the OS uses to tell the UPS to go into hibernation is the command ‘/sbin/apcupsd --killpower‘, which apccontrol runs when it handles the killpower event. My understanding of the intended process is as follows:

  1. The mains supply to the UPS ceases.
  2. The UPS tells apcupsd that the mains supply has ceased.
  3. apcupsd uses the BATTERYLEVEL, MINUTES and TIMEOUT directives (set in /etc/apcupsd/apcupsd.conf) to determine when to shut down the OS (the next step below); see the apcaccess example after this list for the live values these thresholds are compared against.
  4. apcupsd runs /etc/apcupsd/doshutdown to initiate shutdown of the OS.
  5. After the OS initiates shutdown, apcupsd (which runs /etc/apcupsd/killpower) tells the UPS to go into hibernation. I think the message to tell the UPS to hibernate is sent KILLDELAY seconds after /etc/apcupsd/doshutdown runs, where KILLDELAY is user-configurable. In the case of Gentoo Linux, the apcupsd.powerfail init script (if the user has enabled it) tries to put the UPS into hibernation when the OS is in runlevel 0 and has almost completed shutting down (the file systems have already been remounted read-only).
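
apcupsd also installs the apcaccess utility, which reports the live values that these thresholds are compared against. The listing below is only an illustration with made-up values (the exact set of fields reported depends on the UPS model), but it shows the fields of interest:

$ apcaccess status
[...]
STATUS   : ONLINE
BCHARGE  : 100.0 Percent
TIMELEFT : 38.2 Minutes
TONBATT  : 0 Seconds
[...]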

The message telling the UPS to hibernate can be disabled by setting ‘KILLDELAY 0‘ in /etc/apcupsd/apcupsd.conf, which I have done. And, just to be sure, I also modified the script /etc/apcupsd/killpower to do the same thing as the script /etc/apcupsd/doshutdown, and I configured the server’s BIOS not to boot automatically when power is supplied to the server.

I think my caution and disabling of killpower are justified, as the APCUPSD User Manual states:

KILLDELAY time in seconds
If KILLDELAY is set, apcupsd will continue running after a shutdown has been requested, and after the specified time in seconds, apcupsd will attempt to shut off the UPS the power. This directive should normally be disabled by setting the value to zero, but on some systems such as Win32 systems apcupsd cannot regain control after a shutdown to force the UPS to shut off the power. In this case, with proper consideration for the timing, the KILLDELAY directive can be useful. Please be aware, if you cause apcupsd to kill the power to your computer too early, the system and the disks may not have been properly prepared. In addition, apcupsd must continue running after the shutdown is requested, and on Unix systems, this is not normally the case as the system will terminate all processes during the shutdown.

The as-installed configuration file apcupsd.conf contained the following settings:

$ grep -v "^#\|^;\|^$" /etc/apcupsd/apcupsd.conf.original
UPSCABLE smart
UPSTYPE apcsmart
DEVICE /dev/ttyS0
LOCKFILE /var/lock
SCRIPTDIR /etc/apcupsd
PWRFAILDIR /etc/apcupsd
NOLOGINDIR /etc
ONBATTERYDELAY 6
BATTERYLEVEL 5
MINUTES 3
TIMEOUT 0
ANNOY 300
ANNOYDELAY 60
NOLOGON disable
KILLDELAY 0
NETSERVER on
NISIP 127.0.0.1
NISPORT 3551
EVENTSFILE /var/log/apcupsd.events
EVENTSFILEMAX 10
UPSCLASS standalone
UPSMODE disable
STATTIME 0
STATFILE /var/log/apcupsd.status
LOGSTATS off
DATATIME 0

The purposes of BATTERYLEVEL, MINUTES and TIMEOUT are explained in the configuration file’s comments:

[...]
#
# Note: BATTERYLEVEL, MINUTES, and TIMEOUT work in conjunction, so
# the first that occurs will cause the initation of a shutdown.
#

# If during a power failure, the remaining battery percentage
# (as reported by the UPS) is below or equal to BATTERYLEVEL,
# apcupsd will initiate a system shutdown.
BATTERYLEVEL 30
# Was 5 but I changed it to 30.

# If during a power failure, the remaining runtime in minutes
# (as calculated internally by the UPS) is below or equal to MINUTES,
# apcupsd, will initiate a system shutdown.
MINUTES 10
# Was 3 but I changed it to 10.

# If during a power failure, the UPS has run on batteries for TIMEOUT
# many seconds or longer, apcupsd will initiate a system shutdown.
# A value of 0 disables this timer.
#
#  Note, if you have a Smart UPS, you will most likely want to disable
#    this timer by setting it to zero. That way, you UPS will continue
#    on batteries until either the % charge remaing drops to or below BATTERYLEVEL,
#    or the remaining battery runtime drops to or below MINUTES.  Of course,
#    if you are testing, setting this to 60 causes a quick system shutdown
#    if you pull the power plug.
#  If you have an older dumb UPS, you will want to set this to less than
#    the time you know you can run on batteries.
TIMEOUT 0

[...]

 

Lead-acid batteries degrade faster if they are allowed to become flat or nearly flat, so I raised BATTERYLEVEL from 5 to 30 per cent. I also raised MINUTES (the remaining runtime as calculated by the UPS) from 3 minutes to 10 minutes. In addition, I set UPSNAME, changed UPSCABLE and UPSTYPE from smart/apcsmart to usb to match the USB connection to the Back-UPS ES (leaving DEVICE blank so that apcupsd finds the USB device automatically), and added POLLTIME 60 and STATTIME 300. The resulting contents of apcupsd.conf are as follows:

$ grep -v "^#\|^;\|^$" /etc/apcupsd/apcupsd.conf
UPSNAME ES700
UPSCABLE usb
UPSTYPE usb
DEVICE
POLLTIME 60
LOCKFILE /var/lock
SCRIPTDIR /etc/apcupsd
PWRFAILDIR /etc/apcupsd
NOLOGINDIR /etc
ONBATTERYDELAY 6
BATTERYLEVEL 30
MINUTES 10
TIMEOUT 0
ANNOY 300
ANNOYDELAY 60
NOLOGON disable
KILLDELAY 0
NETSERVER on
NISIP 127.0.0.1
NISPORT 3551
EVENTSFILE /var/log/apcupsd.events
EVENTSFILEMAX 10
UPSCLASS standalone
UPSMODE disable
STATTIME 300
STATFILE /var/log/apcupsd.status
LOGSTATS off
DATATIME 0

I also edited the apccontrol script to: a) fix a typo in a message in the script; b) comment out the command to reboot the server; c) comment out the command to shut down the server (as my version of the doshutdown script performs that task):

$ diff /etc/apcupsd/apccontrol /etc/apcupsd/apccontrol.original 
90c90
<       echo "Battery power exhausted on UPS ${2}. Doing shutdown." | ${WALL}
---
>       echo "Battery power exhaused on UPS ${2}. Doing shutdown." | ${WALL}
103c103
< #     ${SHUTDOWN} -r now "apcupsd UPS ${2} initiated reboot"
---
>       ${SHUTDOWN} -r now "apcupsd UPS ${2} initiated reboot"
107c107
< #     ${SHUTDOWN} -h now "apcupsd UPS ${2} initiated shutdown"
---
>       ${SHUTDOWN} -h now "apcupsd UPS ${2} initiated shutdown"
$ cat /etc/apcupsd/apccontrol
#!/bin/sh
#
# Copyright (C) 1999-2002 Riccardo Facchetti 
#
#  for apcupsd release 3.14.10 (13 September 2011) - debian
#
# platforms/apccontrol.  Generated from apccontrol.in by configure.
#
#  Note, this is a generic file that can be used by most
#   systems. If a particular system needs to have something
#   special, start with this file, and put a copy in the
#   platform subdirectory.
#

#
# These variables are needed for set up the autoconf other variables.
#
prefix=/usr
exec_prefix=${prefix}

APCPID=/var/run/apcupsd.pid
APCUPSD=/sbin/apcupsd
SHUTDOWN=/sbin/shutdown
SCRIPTSHELL=/bin/sh
SCRIPTDIR=/etc/apcupsd
WALL=wall

#
# Concatenate all output from this script to the events file
#  Note, the following kills the script in a power fail situation
#   where the disks are mounted read-only.
# exec >>/var/log/apcupsd.events 2>&1

#
# This piece is to substitute the default behaviour with your own script,
# perl, or C program.
# You can customize every single command creating an executable file (may be a
# script or a compiled program) and calling it the same as the $1 parameter
# passed by apcupsd to this script.
#
# After executing your script, apccontrol continues with the default action.
# If you do not want apccontrol to continue, exit your script with exit 
# code 99. E.g. "exit 99".
#
# WARNING: the apccontrol file will be overwritten every time you update your
# apcupsd, doing `make install'. Your own customized scripts will _not_ be
# overwritten. If you wish to make changes to this file (discouraged), you
# should change apccontrol.sh.in and then rerun the configure process.
#
if [ -f ${SCRIPTDIR}/${1} -a -x ${SCRIPTDIR}/${1} ]
then
    ${SCRIPTDIR}/${1} ${2} ${3} ${4}
    # exit code 99 means he does not want us to do default action
    if [ $? = 99 ] ; then
        exit 0
    fi
fi

case "$1" in
    killpower)
        echo "Apccontrol doing: ${APCUPSD} --killpower on UPS ${2}" | ${WALL}
        sleep 10
        ${APCUPSD} --killpower
        echo "Apccontrol has done: ${APCUPSD} --killpower on UPS ${2}" | ${WALL}
    ;;
    commfailure)
        echo "Warning communications lost with UPS ${2}" | ${WALL}
    ;;
    commok)
        echo "Communications restored with UPS ${2}" | ${WALL}
    ;;
#
# powerout, onbattery, offbattery, mainsback events occur
#   in that order.
#
    powerout)
    ;;
    onbattery)
        echo "Power failure on UPS ${2}. Running on batteries." | ${WALL}
    ;;
    offbattery)
        echo "Power has returned on UPS ${2}..." | ${WALL}
    ;;
    mainsback)
        if [ -f /etc/apcupsd/powerfail ] ; then
           printf "Continuing with shutdown."  | ${WALL}
        fi
    ;;
    failing)
        echo "Battery power exhausted on UPS ${2}. Doing shutdown." | ${WALL}
    ;;
    timeout)
        echo "Battery time limit exceeded on UPS ${2}. Doing shutdown." | ${WALL}
    ;;
    loadlimit)
        echo "Remaining battery charge below limit on UPS ${2}. Doing shutdown." | ${WALL}
    ;;
    runlimit)
        echo "Remaining battery runtime below limit on UPS ${2}. Doing shutdown." | ${WALL}
    ;;
    doreboot)
        echo "UPS ${2} initiating Reboot Sequence" | ${WALL}
#       ${SHUTDOWN} -r now "apcupsd UPS ${2} initiated reboot"
    ;;
    doshutdown)
        echo "UPS ${2} initiated Shutdown Sequence" | ${WALL}
#       ${SHUTDOWN} -h now "apcupsd UPS ${2} initiated shutdown"
    ;;
    annoyme)
        echo "Power problems with UPS ${2}. Please logoff." | ${WALL}
    ;;
    emergency)
        echo "Emergency Shutdown. Possible battery failure on UPS ${2}." | ${WALL}
    ;;
    changeme)
        echo "Emergency! Batteries have failed on UPS ${2}. Change them NOW" | ${WALL}
    ;;
    remotedown)
        echo "Remote Shutdown. Beginning Shutdown Sequence." | ${WALL}
    ;;
    startselftest)
    ;;
    endselftest)
    ;;
    battdetach)
    ;;
    battattach)
    ;;
    *)  echo "Usage: ${0##*/} command"
        echo "       warning: this script is intended to be launched by"
        echo "       apcupsd and should never be launched by users."
        exit 1
    ;;
esac

I made sure the /etc/apcupsd/hosts.conf file specifies the daemon is monitoring the server:

$ grep -v "^#\|^;\|^$" hosts.conf 
MONITOR 127.0.0.1 "Local Host"

I configured the scripts in /etc/apcupsd/ as shown in the listings below (I have obscured my e-mail address for security reasons). Note that my server’s firewall is a virtual machine (with hostname serverfw) running on the server itself, hence the additional command in the shutdown-related scripts to shut down that virtual machine as well.
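
For the ‘ssh serverfw sudo shutdown -h now‘ line in the shutdown-related scripts below to work unattended, the user fitzcarraldo needs passwordless SSH key access to serverfw, and serverfw needs a sudoers rule allowing that user to run shutdown without a password. The following is only a minimal sketch of what is required (the rule wording and the path to shutdown are assumptions; adapt them to your own firewall VM):

# On the server, as user fitzcarraldo (one-off setup):
$ ssh-keygen -t ed25519
$ ssh-copy-id fitzcarraldo@serverfw
# On serverfw, add a rule such as the following via visudo:
# fitzcarraldo ALL=(root) NOPASSWD: /sbin/shutdown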

$ cat /etc/apcupsd/annoyme 
#!/bin/sh
#
# This shell script if placed in /etc/apcupsd
# will be called by /etc/apcupsd/apccontrol when apcupsd
# starts sending out 'annoy me' messages.
#
cat /home/fitzcarraldo/apcups/ups-email-annoyme.txt | /usr/sbin/sendmail -4 -t
exit 0
$ cat ~/apcups/ups-email-annoyme.txt
To: fitzcarraldo@xxxxx.com
From: fitzcarraldo@xxxxx.com
Subject: Important message about Back-UPS ES 700

The UPS is sending 'annoy me' messages - investigate now.

 

$ cat /etc/apcupsd/changeme 
#!/bin/sh
#
# This shell script if placed in /etc/apcupsd
# will be called by /etc/apcupsd/apccontrol when apcupsd
# detects that the battery should be replaced.
#
cat /home/fitzcarraldo/apcups/ups-email-changeme.txt | /usr/sbin/sendmail -4 -t
exit 0
$ cat ~/apcups/ups-email-changeme.txt
To: fitzcarraldo@xxxxx.com
From: fitzcarraldo@xxxxx.com
Subject: Important message about Back-UPS ES 700

The UPS battery needs to be changed.

 

$ cat /etc/apcupsd/commfailure 
#!/bin/sh
#
# This shell script if placed in /etc/apcupsd
# will be called by /etc/apcupsd/apccontrol when apcupsd
# loses contact with the UPS (i.e. the serial connection is not responding).
#
cat /home/fitzcarraldo/apcups/ups-email-commfailure.txt | /usr/sbin/sendmail -4 -t
exit 0
$ cat ~/apcups/ups-email-commfailure.txt
To: fitzcarraldo@xxxxx.com
From: fitzcarraldo@xxxxx.com
Subject: Important message about Back-UPS ES 700

Host has lost communication to the UPS.

 

$ cat /etc/apcupsd/commok 
#!/bin/sh
#
# This shell script if placed in /etc/apcupsd
# will be called by /etc/apcupsd/apccontrol when apcupsd
# restores contact with the UPS (i.e. the serial connection is restored).
#
cat /home/fitzcarraldo/apcups/ups-email-commok.txt | /usr/sbin/sendmail -4 -t
exit 0
$ cat ~/apcups/ups-email-commok.txt
To: fitzcarraldo@xxxxx.com
From: fitzcarraldo@xxxxx.com
Subject: Important message about Back-UPS ES 700

Host to UPS communication has resumed.

 

$ cat /etc/apcupsd/doreboot 
#!/bin/sh
#
# This shell script if placed in /etc/apcupsd
# will be called by /etc/apcupsd/apccontrol when apcupsd
# requests a reboot. We do nothing - the APC must not request a reboot.
#
# This script should never be run, as I commented it out in apccontrol.
cat /home/fitzcarraldo/apcups/ups-email-doreboot.txt | /usr/sbin/sendmail -4 -t
exit 0
$ cat ~/apcups/ups-email-doreboot.txt
To: fitzcarraldo@xxxxx.com
From: fitzcarraldo@xxxxx.com
Subject: Important message about Back-UPS ES 700

The UPS has requested a reboot - doing nothing.

 

$ cat /etc/apcupsd/doshutdown 
#!/bin/sh
#
# This shell script if placed in /etc/apcupsd
# will be called by /etc/apcupsd/apccontrol when apcupsd
# detects that a  shutdown is needed.
#
cat /home/fitzcarraldo/apcups/ups-email-doshutdown.txt | /usr/sbin/sendmail -4 -t
sudo -u fitzcarraldo ssh serverfw sudo shutdown -h now
sleep 30
shutdown -h now
exit 0
$ cat ~/apcups/ups-email-doshutdown.txt
To: fitzcarraldo@xxxxx.com
From: fitzcarraldo@xxxxx.com
Subject: Important message about Back-UPS ES 700

UPS requested shutdown, shutting down the systems.

The server has to be powered up manually after it has powered down.
It will not boot automatically when the mains power supply is restored.

 

$ cat /etc/apcupsd/emergency 
#!/bin/sh
#
# This shell script if placed in /etc/apcupsd
# will be called by /etc/apcupsd/apccontrol when apcupsd
# detects that an emergency shutdown is needed.
#
cat /home/fitzcarraldo/apcups/ups-email-emergency.txt | /usr/sbin/sendmail -4 -t
sudo -u fitzcarraldo ssh serverfw sudo shutdown -h now
sleep 30
shutdown -h now
exit 0
$ cat ~/apcups/ups-email-emergency.txt
To: fitzcarraldo@xxxxx.com
From: fitzcarraldo@xxxxx.com
Subject: Important message about Back-UPS ES 700

UPS emergency shutdown requested, shutting down the systems.

 

$ cat /etc/apcupsd/failing 
#!/bin/sh
#
# This shell script if placed in /etc/apcupsd
# will be called by /etc/apcupsd/apccontrol when apcupsd
# detects that the battery charge is below the minimum level.
#
cat /home/fitzcarraldo/apcups/ups-email-failing.txt | /usr/sbin/sendmail -4 -t
sudo -u fitzcarraldo ssh serverfw sudo shutdown -h now
sleep 30
shutdown -h now
exit 0
$ cat ~/apcups/ups-email-failing.txt
To: fitzcarraldo@xxxxx.com
From: fitzcarraldo@xxxxx.com
Subject: Important message about Back-UPS ES 700

The UPS battery is failing, shutting down the systems.

 

$ cat /etc/apcupsd/killpower 
#!/bin/sh
#
# This shell script if placed in /etc/apcupsd
# will be called by /etc/apcupsd/apccontrol before
# apcupsd kills the power in the UPS. You probably
# need to edit this to mount read-only /usr and /var,
# otherwise apcupsd will not run.
#
cat /home/fitzcarraldo/apcups/ups-email-killpower.txt | /usr/sbin/sendmail -4 -t
sudo -u fitzcarraldo ssh serverfw sudo shutdown -h now
sleep 30
shutdown -h now
exit 0
$ cat ~/apcups/ups-email-killpower.txt
To: fitzcarraldo@xxxxx.com
From: fitzcarraldo@xxxxx.com
Subject: Important message about Back-UPS ES 700

The APC daemon is powering off the UPS - shutting down the systems.

Actually the APC daemon does not power off the UPS since I edited
/etc/apcupsd/killpower so that it only performs the same actions
as /etc/apcupsd/doshutdown, namely 'shutdown -h now'. This means
the UPS continues to supply output power until the battery has
run down completely if there is a long delay until the mains power
supply is restored. The server has to be powered up manually if
it has powered down; it will not boot automatically when the mains
power supply is restored.

 

$ cat /etc/apcupsd/loadlimit 
#!/bin/sh
#
# This shell script if placed in /etc/apcupsd
# will be called by /etc/apcupsd/apccontrol when apcupsd
# detects that the remaining battery charge is below the min threshold.
#
cat /home/fitzcarraldo/apcups/ups-email-loadlimit.txt | /usr/sbin/sendmail -4 -t
sudo -u fitzcarraldo ssh serverfw sudo shutdown -h now
sleep 30
shutdown -h now
exit 0
$ cat ~/apcups/ups-email-loadlimit.txt
To: fitzcarraldo@xxxxx.com
From: fitzcarraldo@xxxxx.com
Subject: Important message about Back-UPS ES 700

UPS battery charge below threshold, shutting down the systems.

 

$ cat /etc/apcupsd/mainsback 
#!/bin/sh
#
# This shell script if placed in /etc/apcupsd
# will be called by /etc/apcupsd/apccontrol when apcupsd
# detects that the mains has returned with /etc/apcupsd/powerfail
# file created.
#
cat /home/fitzcarraldo/apcups/ups-email-mainsback.txt | /usr/sbin/sendmail -4 -t
exit 0
$ cat ~/apcups/ups-email-mainsback.txt
To: fitzcarraldo@xxxxx.com
From: fitzcarraldo@xxxxx.com
Subject: Important message about Back-UPS ES 700

Mains back on UPS.

 

$ cat /etc/apcupsd/offbattery 
#!/bin/sh
#
# This shell script if placed in /etc/apcupsd
# will be called by /etc/apcupsd/apccontrol when the
# UPS goes back on to the mains after a power failure.
#
cat /home/fitzcarraldo/apcups/ups-email-offbattery.txt | /usr/sbin/sendmail -4 -t
exit 0
$ cat ~/apcups/ups-email-offbattery.txt
To: fitzcarraldo@xxxxx.com
From: fitzcarraldo@xxxxx.com
Subject: Important message about Back-UPS ES 700

Power resumed to UPS. No longer running on batteries.

 

$ cat /etc/apcupsd/onbattery 
#!/bin/sh
#
# This shell script if placed in /etc/apcupsd
# will be called by /etc/apcupsd/apccontrol when the UPS
# goes on batteries.
#
cat /home/fitzcarraldo/apcups/ups-email-onbattery.txt | /usr/sbin/sendmail -4 -t
exit 0
$ cat ~/apcups/ups-email-onbattery.txt
To: fitzcarraldo@xxxxx.com
From: fitzcarraldo@xxxxx.com
Subject: Important message about Back-UPS ES 700

Power failure on UPS. Running on batteries.

 

$ cat /etc/apcupsd/powerout 
#!/bin/sh
cat /home/fitzcarraldo/apcups/ups-email-powerout.txt | /usr/sbin/sendmail -4 -t
exit 0
$ cat ~/apcups/ups-email-powerout.txt
To: fitzcarraldo@xxxxx.com
From: fitzcarraldo@xxxxx.com
Subject: Important message about Back-UPS ES 700

Power out on UPS.

 

$ cat /etc/apcupsd/remoteshutdown 
#!/bin/sh
#
# This shell script if placed in /etc/apcupsd
# will be called by /etc/apcupsd/apccontrol when apcupsd
# is being shut down remotely - should never happen so do nothing.
#
cat /home/fitzcarraldo/apcups/ups-email-remoteshutdown.txt | /usr/sbin/sendmail -4 -t
exit 0
$ cat ~/apcups/ups-email-remoteshutdown.txt
To: fitzcarraldo@xxxxx.com
From: fitzcarraldo@xxxxx.com
Subject: Important message about Back-UPS ES 700

Remote UPS shutdown requested - do nothing but investigate.

 

$ cat /etc/apcupsd/runlimit 
#!/bin/sh
#
# This shell script if placed in /etc/apcupsd
# will be called by /etc/apcupsd/apccontrol when apcupsd
# detects that the remaining battery run time is below the threshold.
#
cat /home/fitzcarraldo/apcups/ups-email-runlimit.txt | /usr/sbin/sendmail -4 -t
sudo -u fitzcarraldo ssh serverfw sudo shutdown -h now
sleep 30
shutdown -h now
exit 0
$ cat ~/apcups/ups-email-runlimit.txt
To: fitzcarraldo@xxxxx.com
From: fitzcarraldo@xxxxx.com
Subject: Important message about Back-UPS ES 700

The UPS remaining run time is below limit, shutting down the systems.

 

$ cat /etc/apcupsd/timeout 
#!/bin/sh
#
# This shell script if placed in /etc/apcupsd
# will be called by /etc/apcupsd/apccontrol when apcupsd
# detects that the battery run time limit has been exceeded.
#
cat /home/fitzcarraldo/apcups/ups-email-timeout.txt | /usr/sbin/sendmail -4 -t
sudo -u fitzcarraldo ssh serverfw sudo shutdown -h now
sleep 30
shutdown -h now
exit 0
$ cat ~/apcups/ups-email-timeout.txt
To: fitzcarraldo@xxxxx.com
From: fitzcarraldo@xxxxx.com
Subject: Important message about Back-UPS ES 700

The UPS run time limit is exceeded, shutting down the systems

 

$ cat /etc/apcupsd/ups-monitor
#!/bin/sh
case "$1" in
        poweroff | killpower)
                if [ -f /etc/apcupsd/powerfail ]; then
                        echo ""
                        echo -n "apcupsd: Ordering UPS to kill power... "
                        /etc/apcupsd/apccontrol killpower
                        echo "done."
                        echo ""
                        echo "Please ensure the UPS has powered off before rebooting."
                        echo "Otherwise, the UPS may cut the power during the reboot!"
                        echo ""
                fi
        ;;
        *)
        ;;
esac
exit 0
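
Incidentally, a way to exercise the e-mail notifications without waiting for a real power event is to invoke apccontrol by hand for one of the harmless events (this is just a suggestion rather than a test procedure documented by apcupsd; ES700 is the UPSNAME I set in apcupsd.conf, and obviously the shutdown-related events should not be tested this way on a live server):

$ sudo /etc/apcupsd/apccontrol commfailure ES700
$ sudo /etc/apcupsd/apccontrol commok ES700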

 

How to prevent CUPS omitting the bottom of the CUPS Printer test page

This is something that has been bugging me for years but I never bothered to look into it until now. When I set up a printer using CUPS Administration and then print a test page, for some printers the bottom of the test page image is cut off, as shown in the scanned image below. Also, the left side of the test page image is too close to the left side of the sheet of paper. This happens when I use the Gutenprint printer drivers, although I do not know if that is a coincidence. The CUPS printer test page (A4 paper) shown below is from a Canon PIXMA MP510 printer using the Gutenprint v5.3.3 driver for that model.

Printer test page printed by CUPS before modifying the Canon PIXMA MP510 PPD file

I had a look at the values of the ImageableArea for A4 paper in the printer’s PPD file, and the as-installed values were as follows:

user $ sudo grep "ImageableArea A4" /etc/cups/ppd/MP510.ppd
*ImageableArea A4/A4:   "0.000 0.000 595.000 842.000"

I then edited the PPD file and changed the x,y coordinates of the bottom left of the imageable area from 0,0 to 10,3 for A4 paper so the file now contains the following:

user $ sudo grep "ImageableArea A4" /etc/cups/ppd/MP510.ppd
*ImageableArea A4/A4:   "10.000 3.000 595.000 842.000"
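
If you prefer to make this change from the command line rather than in a text editor, something like the following would do it (a sketch; back up the PPD first and adjust the coordinates to suit your printer):

user $ sudo cp /etc/cups/ppd/MP510.ppd /etc/cups/ppd/MP510.ppd.bak
user $ sudo sed -i 's|^\*ImageableArea A4/A4:.*|*ImageableArea A4/A4:\t"10.000 3.000 595.000 842.000"|' /etc/cups/ppd/MP510.ppd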

It is necessary to restart CUPS when a change is made to the PPD file:

Gentoo Linux installation using OpenRC

user $ sudo rc-service cupsd restart

Lubuntu 20.10 installation using systemd

user $ sudo systemctl restart cups

Now the ‘Printer test page’ printed by CUPS looks like this:

Printer test page printed by CUPS after modifying the Canon PIXMA MP510 PPD file

Much better.
 
 
ADDENDUM (2 May 2021): I have discovered that the ImageableArea is not the only factor…

I also have a Canon PIXMA MP560 printer, and I printed a CUPS ‘Printer test page’ on that using the Gutenprint v5.3.3 driver for the Canon PIXMA MP560. A scan of the printed test page is shown below:

Printer test page printed by CUPS before modifying the Canon PIXMA MP560 PPD file

The as-installed values of the ImageableArea for A4 paper in the printer’s PPD file were as follows:

user $ sudo grep "ImageableArea A4" /etc/cups/ppd/Canon_MP560_Wi-Fi.ppd
*ImageableArea A4/A4:   "0.000 0.000 595.000 842.000"

Unlike the original test page for the Canon PIXMA MP510, the vertical lines on the left and right sides of the test image are more or less equidistant from the edges of the paper. However, as with the original test page for the Canon PIXMA MP510, the bottom line of the test page was missing. So I tried editing the y coordinate of the bottom left of the ImageableArea in the PPD file for the Canon PIXMA MP560. However, whatever value I used for the y coordinate of the bottom left of the test image, the bottom line was never printed.

I then looked at the contents of the file /etc/cups/printers.conf and found that the configuration for the Canon PIXMA MP510 included a line ‘Option fitplot True‘ whereas the configuration for the Canon PIXMA MP560 did not:

# Printer configuration file for CUPS v2.3.3
# Written by cupsd
# DO NOT EDIT THIS FILE WHEN CUPSD IS RUNNING
NextPrinterId 3
<Printer Canon_MP560_Wi-Fi>
PrinterId 2
UUID urn:uuid:428a074e-0e81-3ba3-7789-f8050da82c5a
Info Canon MP560 Wi-Fi
Location My office upstairs
MakeModel Canon PIXMA MP560 - CUPS+Gutenprint v5.3.3
DeviceURI lpd://192.168.1.78/lpt1
State Idle
StateTime 1619978009
ConfigTime 1619880075
Type 36892
Accepting Yes
Shared No
JobSheets none none
QuotaPeriod 0
PageLimit 0
KLimit 0
OpPolicy default
ErrorPolicy retry-job
</Printer>
<DefaultPrinter MP510>
PrinterId 1
UUID urn:uuid:0a2a12b5-ea49-33eb-572a-341c1af02f7e
Info Canon MP510
Location aspirexc600
MakeModel Canon MP510 series - CUPS+Gutenprint v5.3.3
DeviceURI usb://Canon/MP510?serial=934631&interface=1
State Idle
StateTime 1619662185
ConfigTime 1619628669
Type 36876
Accepting Yes
Shared Yes
JobSheets none none
QuotaPeriod 0
PageLimit 0
KLimit 0
OpPolicy default
ErrorPolicy retry-job
Option fitplot True
</DefaultPrinter>

 
So I stopped the CUPS service, edited the file to add the line ‘Option fitplot True‘ for the Canon PIXMA MP560, and restarted the CUPS service:

user $ sudo systemctl stop cups
user $ sudo nano /etc/cups/printers.conf
user $ sudo systemctl start cups

The file now looks like this:

# Printer configuration file for CUPS v2.3.3
# Written by cupsd
# DO NOT EDIT THIS FILE WHEN CUPSD IS RUNNING
NextPrinterId 3
<Printer Canon_MP560_Wi-Fi>
PrinterId 2
UUID urn:uuid:428a074e-0e81-3ba3-7789-f8050da82c5a
Info Canon MP560 Wi-Fi
Location My office upstairs
MakeModel Canon PIXMA MP560 - CUPS+Gutenprint v5.3.3
DeviceURI lpd://192.168.1.78/lpt1
State Idle
StateTime 1619978009
ConfigTime 1619880075
Type 36892
Accepting Yes
Shared No
JobSheets none none
QuotaPeriod 0
PageLimit 0
KLimit 0
OpPolicy default
ErrorPolicy retry-job
Option fitplot True
</Printer>
<DefaultPrinter MP510>
PrinterId 1
UUID urn:uuid:0a2a12b5-ea49-33eb-572a-341c1af02f7e
Info Canon MP510
Location aspirexc600
MakeModel Canon MP510 series - CUPS+Gutenprint v5.3.3
DeviceURI usb://Canon/MP510?serial=934631&interface=1
State Idle
StateTime 1619662185
ConfigTime 1619628669
Type 36876
Accepting Yes
Shared Yes
JobSheets none none
QuotaPeriod 0
PageLimit 0
KLimit 0
OpPolicy default
ErrorPolicy retry-job
Option fitplot True
</DefaultPrinter>

 

I have the ImageableArea for A4 paper configured as follows in the Canon PIXMA MP560 PPD file for the Gutenprint v5.3.3 driver (I had to increase the y coordinate of the bottom left of the area to 2.000 in order for the bottom line to be printed):

user $ sudo grep "ImageableArea A4" /etc/cups/ppd/Canon_MP560_Wi-Fi.ppd
*ImageableArea A4/A4:   "0.000 2.000 595.000 842.000"

After restarting the CUPS service I printed another CUPS Printer test page and the result is shown below. As you can see, the bottom line is now printed.

Printer test page printed by CUPS after modifying the Canon PIXMA MP560 PPD file

So, if the outline of the CUPS Printer test page is not centred or is missing one or more of the lines, first adjust the ImageableArea for the paper size on which the test page is being printed, and, if that does not result in success, check if ‘Option fitplot‘ exists for the printer in the file /etc/cups/printers.conf and that it is set to ‘True‘.
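
Once a change has been made and CUPS restarted, a test page can also be printed from the command line rather than via the CUPS web interface, which is handy when iterating on the PPD values (the path to the standard test page file below is the usual one on my installations; it may differ on other distributions):

user $ lp -d Canon_MP560_Wi-Fi /usr/share/cups/data/testprint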

Implementing a quick and easy way to check from the Linux Desktop Environment if the ClamAV signatures database is up-to-date

If you use ClamAV with the Freshclam daemon and your Linux installation does not hide the console output during boot, you might see a message similar to the following on the console briefly during boot if the signatures database has not been updated recently:

LibClamAV Warning: **************************************************
LibClamAV Warning: ***  The virus database is older than 7 days.  ***
LibClamAV Warning: ***        Please update it IMMEDIATELY!       ***
LibClamAV Warning: **************************************************

This can happen for a number of reasons. The Freshclam daemon may not have been enabled, for example. Or you purposely configured your installation not to use the Freshclam daemon but forgot to run Freshclam manually (either from the command line or via ClamTk) during the past seven days to update the database. Or there is a problem with the Freshclam configuration or software installation itself. And so on.

This happened to me recently simply because I had forgotten to enable the Freshclam service in one of my Linux installations but had not noticed the error message on the console at boot. Anyway, I fixed it quickly and ran Freshclam from the command line to update the database. The database was very out-of-date and I had to run Freshclam several times – do not enter the sudo freshclam command more frequently than once per hour otherwise Cisco Systems’ ClamAV server will block you for several hours due to excessive use of their bandwidth – but I got everything working in the end.
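
For reference, the manual update command mentioned above is simply:

user $ sudo freshclam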

If Freshclam is actually running, the situation with database updating can be checked by looking in the file /var/log/clamav/freshclam.log. However, as all my Linux machines use ClamAV, I decided it would be worth adding a quicker way of checking the database status that is easy to do from the Desktop. I created a Bash script which can be launched by double-clicking an icon on the Desktop; it opens a terminal window and reports the current status of the ClamAV signatures database. The status will depend on how frequently you update the database, so you can expect the database to be out of date briefly from time to time; there is nothing wrong with that. But if the script consistently reports that the database is out of date for longer than the update interval specified in freshclam.conf (don’t forget to look in the system freshclam.conf file and, if it exists, the user freshclam.conf file), then further investigation is warranted.
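
To see how often Freshclam is configured to check for updates, and what it did on its most recent runs, look at the Checks directive in freshclam.conf and at the tail of the Freshclam log (the paths below are the ones used on my installations and may differ on yours):

user $ grep -i "^Checks" /etc/clamav/freshclam.conf
user $ tail -n 20 /var/log/clamav/freshclam.log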

I created a Bash script ~/.clamav_db_up-to-date_check.sh containing the following:

#!/bin/bash
echo
echo "+--------------------------------------------------------------+"
echo "|    Check if ClamAV database is up-to-date on this machine    |"
echo "+--------------------------------------------------------------+"
# Check for an Internet connection by pinging two well-known public DNS servers
( ping -w5 -c3 8.8.8.8 || ping -w5 -c3 4.2.2.1 ) > /dev/null 2>&1 && INTERNET="y" || INTERNET="n"
if [ "$INTERNET" = "y" ]; then
  echo
  echo "       ** Internet check for latest update available **"
  echo
  echo -n "    Date update available: "
  DNSLKUP=$( host -t txt current.cvd.clamav.net )
  date -d @$( echo $DNSLKUP | awk '{ print $4 }' | awk -F ":" '{ print $4 }' )
  echo
  echo -n "    Signatures version:    "
  RMTSIGV=$( echo $DNSLKUP | awk '{ print $4 }' | awk -F ":" '{ print $3 }' )
  echo $RMTSIGV
else
  echo
  echo "** No connection to the Internet - Cannot check remote server **"
fi
echo
echo -n "    Date when checked:     "
date
echo
echo "----------------------------------------------------------------"
echo
echo "         ** Currently installed on this machine **"
echo
CLAMINST=$( clamscan --version )
echo -n "    Signatures version:    "
LCLSIGV=$( echo $CLAMINST | awk -F "/" '{ print $2 }' )
echo $LCLSIGV
echo
echo -n "    Date of signatures:    "
echo $CLAMINST | awk -F "/" '{ print $3 }'
echo
echo -n "    ClamAV version:        "
echo $CLAMINST | awk -F "/" '{ print $1 }'
echo
echo "----------------------------------------------------------------"
echo
if [ "$INTERNET" = "y" ]; then
  if [ "$LCLSIGV" = "$RMTSIGV" ]; then
    echo " Same version of signatures as the latest on the remote server"
  else
    echo " Different version of signatures to latest on the remote server"
  fi
fi
echo
read -p "Press any key to exit..." -n1 -s
exit

and made it executable:

user $ chmod +x ~/.clamav_db_up-to-date_check.sh

On a machine running Lubuntu 20.10 (LXQt Desktop Environment), I created the Desktop Configuration File ~/Desktop/ClamAV_DB_check.desktop containing the following:

[Desktop Entry]
Name=ClamAV_DB_check
GenericName=ClamAV_DB_check
Comment=Check if ClamAV database is up-to-date
Exec=qterminal -e '/home/fitzcarraldo/.clamav_db_up-to-date_check.sh'
Type=Application
Icon=/home/fitzcarraldo/Pictures/Icons/clamav-icon.png
Terminal=false

I downloaded from the Web a nice ClamAV icon and specified it in the Desktop Configuration File.

I right-clicked on the icon on the Desktop and selected ‘Trust this executable’.

In my Gentoo Linux installations that use KDE, the Desktop Configuration File looks like this:

[Desktop Entry]
Comment[en_GB]=Check if ClamAV database is up-to-date
Comment=Check if ClamAV database is up-to-date
Exec=konsole -e '/home/fitzcarraldo/.clamav_db_up-to-date_check.sh'
GenericName[en_GB]=Run ClamAV DB check in Konsole
GenericName=Run ClamAV DB check in Konsole
Icon=/home/fitzcarraldo/Pictures/Icons/clamav-icon.png
MimeType=
Name[en_GB]=ClamAV_DB_check
Name=ClamAV_DB_check
Path=
StartupNotify=true
Terminal=true
TerminalOptions=
Type=Application
X-DBUS-ServiceName=
X-DBUS-StartupType=none
X-KDE-SubstituteUID=false
X-KDE-Username=

When I checked earlier today on one of my machines, the output of the script looked like this:


+--------------------------------------------------------------+
|    Check if ClamAV database is up-to-date on this machine    |
+--------------------------------------------------------------+

       ** Internet check for latest update available **

    Date update available: Tue 27 Apr 12:29:00 BST 2021

    Signatures version:    26153

    Date when checked:     Tue 27 Apr 12:52:49 BST 2021

----------------------------------------------------------------

         ** Currently installed on this machine **

    Signatures version:    26152

    Date of signatures:    Mon Apr 26 12:04:28 2021

    ClamAV version:        ClamAV 0.103.2

----------------------------------------------------------------

 Different version of signatures to latest on the remote server

Press any key to exit...


The next time I checked, roughly 50 minutes later, the output of the script then looked like this:


+--------------------------------------------------------------+
|    Check if ClamAV database is up-to-date on this machine    |
+--------------------------------------------------------------+

       ** Internet check for latest update available **

    Date update available: Tue 27 Apr 12:29:00 BST 2021

    Signatures version:    26153

    Date when checked:     Tue 27 Apr 13:41:38 BST 2021

----------------------------------------------------------------

         ** Currently installed on this machine **

    Signatures version:    26153

    Date of signatures:    Tue Apr 27 12:09:27 2021

    ClamAV version:        ClamAV 0.103.2

----------------------------------------------------------------

 Same version of signatures as the latest on the remote server

Press any key to exit...


As you can see, the signatures database had been updated automatically by Freshclam in the intervening period.

Using open-plc-utils in Linux with Powerline (HomePlug) adapters

According to the open-plc-utils documentation, open-plc-utils supports INT6000, INT6300, INT6400, AR6410, QCA7000, AR7400 and AR7420 and later Powerline products from Qualcomm Atheros. ‘INT’ stands for ‘Intellon’, which was acquired by Atheros in 2009. ‘AR’ stands for ‘Atheros’, which was acquired by Qualcomm in 2011. ‘QCA’ stands for ‘Qualcomm Atheros’.

The open-plc-utils command int6k supports legacy chipsets INT6000, INT6300 and INT6400.

The open-plc-utils command plctool supports QCA6410, QCA7000 and QCA7420 chipsets.

The open-plc-utils command amptool supports AR7400 and QCA7450 chipsets.
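
If you are not sure which chipset a particular adapter uses, and therefore which of these commands applies, the adapter connected to your computer can be asked for its software version; try plctool first and, if it gets no response, try int6k instead (eno1 below is the name of the Ethernet interface on my machine, so substitute your own):

user $ plctool -r -i eno1
user $ int6k -r -i eno1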

I have used open-plc-utils successfully with the following Powerline products:

  • NETGEAR XAVB1301-100UKS (uses AR6405 chipset).
  • NETGEAR XAVB5221-100UKS (uses QCA7420 chipset).
  • TP-Link TL-PA4010 (uses QCA7420 chipset).
  • TP-Link TL-PA4010P (uses QCA7420 chipset).
  • TP-Link TL-PA4020P (uses QCA7420 chipset).

For example, I used open-plc-utils to update the chipset firmware in my TP-Link Powerline adapters, as explained in my earlier post ‘Updating the Powerline adapters in my home network‘.

Below I summarise how I install open-plc-utils in Linux and how I use them to interrogate the Powerline adapters in my home network.

1. Download the open-plc-utils source code

user $ cd
user $ wget https://github.com/qca/open-plc-utils/archive/refs/heads/master.zip
user $ unzip master.zip # (This creates ~/open-plc-utils-master directory.)

2. Install plc-utils

user $ cd ~/open-plc-utils-master/
user $ cat README # Tells you how to install/uninstall plc-utils.
user $ sudo make
user $ sudo make install
user $ sudo make manuals
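
As a quick sanity check (this is not part of the README’s instructions), you can confirm that the toolkit’s commands ended up on your PATH:

user $ which int6k plctool plcstat amptool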

3. Bookmark the documentation index pages in your Web browser

user $ cd ~/open-plc-utils-master/docbook

Bookmark file:///home/<username>/open-plc-utils-master/docbook/index.html

Bookmark file:///home/<username>/open-plc-utils-master/docbook/toolkit.html

4. Use open-plc-utils commands to interrogate the adapters in the network

One example of the many possible commands:

user $ plcstat -t -i eno1 # eno1 is the Ethernet interface on this computer.
 P/L NET TEI ------ MAC ------ ------ BDA ------ TX  RX  CHIPSET FIRMWARE
 LOC STA 038 11:11:11:11:11:11 88:88:88:88:88:88 n/a n/a QCA7420 MAC-QCA7420-1.5.0.26-02-20200114-CS
 REM STA 003 33:33:33:33:33:33 55:55:55:55:55:55 277 268 QCA7420 MAC-QCA7420-1.5.0.26-02-20200114-CS
 REM CCO 004 22:22:22:22:22:22 FF:FF:FF:FF:FF:FF 009 009 QCA7420 MAC-QCA7420-1.5.0.26-02-20200114-CS

(For security reasons, in the output above I have edited the MAC addresses of the three adapters, and the BDA of the two STAs. The BDA of the CCO adapter, which is automatically selected, really is displayed as FF:FF:FF:FF:FF:FF though.)

  • LOC = ‘Local’, i.e. the Powerline adapter connected to this computer.
  • REM = ‘Remote’, i.e. the other Powerline adapters in the network.
  • CCO = ‘Central Coordinator’, i.e. the automatically selected Powerline adapter acting as the coordinator of the Powerline adapters in this network.
  • STA = ‘Station’, i.e. the Powerline adapters being coordinated by the CCO.
  • MAC = The MAC address of the adapter.
  • BDA = ‘Bridged Destination Address’ (see the Powerline specifications for the meaning).
  • TX/RX = the transmission/reception rate in Mbps of the adapter.
  • CHIPSET = Qualcomm Atheros chipset type.
  • FIRMWARE = Qualcomm Atheros chipset firmware version.

For other open-plc-utils commands, consult the documentation in a Web browser.

5. Optional: Create a Bash script to interrogate Powerline adapters in your network

user $ cd
user $ nano ~/homeplug.sh
user $ chmod +x ~/homeplug.sh

homeplug.sh

#!/bin/bash
#
# This script is to interrogate a network to find the details of the Powerline
# HomePlug wall adapters in the network. It uses open-plc-utils tools:
# https://github.com/qca/open-plc-utils
# See https://github.com/qca/open-plc-utils/blob/master/README for
# instructions on how to install (and uninstall) the tools.
# Therefore this script is limited to the chipsets that open-plc-utils supports:
# https://github.com/qca/open-plc-utils/blob/master/plc/chipset.h
#
# The command int6k supports legacy chipsets INT6000, INT6300 and INT6400.
# The command plctool supports QCA6410, QCA7000 and QCA7420 devices.
# The command amptool supports chipsets AR7400 and QCA7450.
# NETGEAR XAVB1301-100UKS uses AR6405. NETGEAR XAVB5221-100UKS uses QCA7420.
# TP-Link TL-PA4010, TL-PA4010P and TL-PA4020P use QCA7420.
#
echo "================================================================================"
# Specify the interface on this PC connected to a HomePlug device:
export PLC=$( ifconfig | head -1 | cut -d ":" -f1 )
echo
echo -n "The Ethernet interface on this PC is: "
echo $PLC
echo
echo "================================================================================"
echo
#
# Step 1. Send VS_SW_VER to local device to determine its MAC address and device type.
#
MACINT6K=$( int6k -qr | awk -F ' ' '{print $2}' )
MACPLCTOOL=$( plctool -qr | awk -F ' ' '{print $2}' )
if [[ $MACINT6K != $MACPLCTOOL ]]
then
  echo "Unable to determine MAC address of local HomePlug wall adapter."
  exit
else
  MAC=$MACINT6K
fi
echo "Details for the HomePlug wall adapter connected to this computer:"
echo
if [ $( int6k -qI $MAC | wc -l ) -lt 2 ]
then
  plctool -m $MAC
  plctool -qI $MAC
  echo
  CHIPSET=$( plctool -qr $MAC | awk -F ' ' '{print $3}' )
  echo -n "Chipset: "
  echo $CHIPSET
  CHIPSETTYPE=2
else
  int6k -m $MAC
  int6k -qI $MAC
  echo
  CHIPSET=$( int6k -qr $MAC | awk -F ' ' '{print $3}' )
  echo -n "Chipset: "
  echo $CHIPSET
  CHIPSETTYPE=1
fi
echo
echo "================================================================================"
#
# Step 2. Send VS_NW_INFO (int6k -m or plctool -m, depending on device type)
# to local MAC address to find MAC addresses of the other devices.
#
if [[ $CHIPSETTYPE == 2 ]]
then
  plctool -qm $MAC | grep MAC | cut -d " " -f3 > maclist.txt
elif [[ $CHIPSETTYPE == 1 ]]
then
  int6k -qm $MAC | grep MAC | cut -d " " -f3 > maclist.txt
else
  echo "Unable to determine chipset of the local HomePlug wall adapter."
  exit
fi
#
# Step 3. Send VS_SW_VER (int6k -r or plctool -r, depending on device type) to
# each device to find the device type of each.
#
echo -n "" > chipsetlist.txt
while read -r MAC
do
  if [ $( int6k -qI $MAC | wc -l ) -lt 2 ]
  then
    CHIPSET=$( plctool -qr $MAC | awk -F ' ' '{print $3}' )
    echo $CHIPSET >> chipsetlist.txt
  else
    CHIPSET=$( int6k -qr $MAC | awk -F ' ' '{print $3}' )
    echo $CHIPSET >> chipsetlist.txt
  fi
done < maclist.txt
#
# Step 4. Send VS_NW_INFO (int6k -m or plctool -m, depending on device type) to
# each device to determine full PHY Rate.
#
echo
echo "Details for the other HomePlug wall adapters in the network"
echo "(adapters in Power Saving Mode are not shown):"
while read -r MAC && read -r CHIPSET <&3
do
  echo
  if [ $( int6k -qI $MAC | wc -l ) -lt 2 ]
  then
    plctool -m $MAC
    plctool -qI $MAC
  else
    int6k -m $MAC
    int6k -qI $MAC
  fi
  echo
  echo -n "Chipset: "
  echo $CHIPSET
  echo
  echo "--------------------------------------------------------------------------------"
done <maclist.txt 3<chipsetlist.txt
rm maclist.txt chipsetlist.txt
echo
echo "Some of the abbreviations are listed below, but refer to the open-plc-utils"
echo "documentation for more details. (Also see http://www.homeplug.org/ for"
echo "detailed HomePlug specifications)"
echo
echo "BDA   Bridged Destination Address"
echo "CCo   Central Coordinator"
echo "DAK   Device Access Key"
echo "MDU   Multiple Dwelling Unit"
echo "NID   Network Identifier"
echo "NMK   Network Membership Key"
echo "PIB   Parameter Information Block"
echo "SNID  Short Network Identifier"
echo "STA   Station"
echo "TEI   Terminal Equipment Identifier"
echo
exit

 
Run homeplug.sh to see details of Powerline adapters with Qualcomm Atheros chipsets in the network:

user $ ./homeplug.sh

N.B. Adapters in Power Saving Mode are not detected, so, if you want to see details of all Powerline adapters on the network, make sure none of the adapters are in Power Saving Mode before you run the script.

Below is the script’s output for my home network with the following three TP-Link Powerline adapters currently connected to wall power sockets:

  • TP-Link TL-PA4010P(UK) VER:5.0 (one device)
  • TP-Link TL-PA4010(UK) VER:3.0 (two devices)

I also own the following Powerline adapters, which are currently not plugged in to wall power sockets, but this script would detect them if they were plugged in (as I have seen previously):

  • TL-PA4020P(UK) VER:4.0 (one adapter)
  • NETGEAR XAVB1301-100UKS (three adapters)
  • NETGEAR XAVB5221-100UKS (two adapters)
user $ ./homeplug.sh 
================================================================================

The Ethernet interface on this PC is: eno1

================================================================================

Details for the HomePlug wall adapter connected to this computer:

eno1 11:11:11:11:11:11 Fetch Network Information
eno1 11:11:11:11:11:11 Found 1 Network(s)

source address = 11:11:11:11:11:11

        network->NID = 99:99:99:99:99:99:99
        network->SNID = 5
        network->TEI = 38
        network->ROLE = 0x00 (STA)
        network->CCO_DA = 22:22:22:22:22:22
        network->CCO_TEI = 4
        network->STATIONS = 2

                station->MAC = 33:33:33:33:33:33
                station->TEI = 3
                station->BDA = 55:55:55:55:55:55
                station->AvgPHYDR_TX = 279 mbps Primary
                station->AvgPHYDR_RX = 276 mbps Primary

                station->MAC = 22:22:22:22:22:22
                station->TEI = 4
                station->BDA = FF:FF:FF:FF:FF:FF
                station->AvgPHYDR_TX = 009 mbps Primary
                station->AvgPHYDR_RX = 009 mbps Primary

        PIB 0-0 8836 bytes
        MAC 11:11:11:11:11:11
        DAK 66:66:66:66:66:66:66:66:66:66:66:66:66:66:66:66
        NMK 77:77:77:77:77:77:77:77:77:77:77:77:77:77:77:77
        NID 99:99:99:99:99:99:99
        Security level 0
        NET Qualcomm Atheros Enabled Network
        MFG tpver_401115_191120_901
        USR tpver_401115_191120_901
        CCo Auto
        MDU N/A

Chipset: QCA7420

================================================================================

Details for the other HomePlug wall adapters in the network
(adapters in Power Saving Mode are not shown):

eno1 33:33:33:33:33:33 Fetch Network Information
eno1 33:33:33:33:33:33 Found 1 Network(s)

source address = 33:33:33:33:33:33

        network->NID = 99:99:99:99:99:99:99
        network->SNID = 5
        network->TEI = 3
        network->ROLE = 0x00 (STA)
        network->CCO_DA = 22:22:22:22:22:22
        network->CCO_TEI = 4
        network->STATIONS = 2

                station->MAC = 22:22:22:22:22:22
                station->TEI = 4
                station->BDA = FF:FF:FF:FF:FF:FF
                station->AvgPHYDR_TX = 305 mbps Primary
                station->AvgPHYDR_RX = 319 mbps Primary

                station->MAC = 11:11:11:11:11:11
                station->TEI = 38
                station->BDA = 88:88:88:88:88:88
                station->AvgPHYDR_TX = 276 mbps Primary
                station->AvgPHYDR_RX = 279 mbps Primary

        PIB 0-0 8836 bytes
        MAC 33:33:33:33:33:33
        DAK 00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00 (none/secret)
        NMK 77:77:77:77:77:77:77:77:77:77:77:77:77:77:77:77
        NID 99:99:99:99:99:99:99
        Security level 0
        NET Qualcomm Atheros Enabled Network
        MFG tpver_401013_171025_901
        USR tpver_401013_171025_901
        CCo Auto
        MDU N/A

Chipset: QCA7420

--------------------------------------------------------------------------------

eno1 22:22:22:22:22:22 Fetch Network Information
eno1 22:22:22:22:22:22 Found 1 Network(s)

source address = 22:22:22:22:22:22

        network->NID = 99:99:99:99:99:99:99
        network->SNID = 5
        network->TEI = 4
        network->ROLE = 0x02 (CCO)
        network->CCO_DA = 22:22:22:22:22:22
        network->CCO_TEI = 4
        network->STATIONS = 2

                station->MAC = 33:33:33:33:33:33
                station->TEI = 3
                station->BDA = 55:55:55:55:55:55
                station->AvgPHYDR_TX = 319 mbps Primary
                station->AvgPHYDR_RX = 305 mbps Primary

                station->MAC = 11:11:11:11:11:11
                station->TEI = 38
                station->BDA = 88:88:88:88:88:88
                station->AvgPHYDR_TX = 009 mbps Primary
                station->AvgPHYDR_RX = 009 mbps Primary

        PIB 0-0 8836 bytes
        MAC 22:22:22:22:22:22
        DAK 00:00:00:00:00:00:00:00:00:00:00:00:00:00:00:00 (none/secret)
        NMK 77:77:77:77:77:77:77:77:77:77:77:77:77:77:77:77
        NID 99:99:99:99:99:99:99
        Security level 0
        NET Qualcomm Atheros Enabled Network
        MFG tpver_401013_171025_901
        USR tpver_401013_171025_901
        CCo Auto
        MDU N/A

Chipset: QCA7420

--------------------------------------------------------------------------------

Some of the abbreviations are listed below, but refer to the open-plc-utils
documentation for more details. (Also see http://www.homeplug.org/ for
detailed HomePlug specifications)

BDA   Bridged Destination Address
CCo   Central Coordinator
DAK   Device Access Key
MDU   Multiple Dwelling Unit
NID   Network Identifier
NMK   Network Membership Key
PIB   Parameter Information Block
SNID  Short Network Identifier
STA   Station
TEI   Terminal Equipment Identifier


For security reasons, in the output above I have edited the network membership key, device access key, network identifier and adapter addresses in the above output as follows:

  • I have changed the three MAC addresses of the three adapters to be 11:11:11:11:11:11, 22:22:22:22:22:22 and 33:33:33:33:33:33.
  • I have changed the two BDAs of the two adapters that are Stations (STAs) to be 55:55:55:55:55:55 and 88:88:88:88:88:88.
  • I have changed the DAK of the adapter connected to the computer on which the script was run to be 66:66:66:66:66:66:66:66:66:66:66:66:66:66:66:66.
  • I have changed the NMK of the three adapters to be 77:77:77:77:77:77:77:77:77:77:77:77:77:77:77:77.
  • I have changed the NID of the three adapters to be 99:99:99:99:99:99:99.

Some of the information that can be gleaned from the above output of the script:

  • the adapter with MAC address 22:22:22:22:22:22 has been automatically set as the CCO (Central Coordinator) for the Powerline network, and the other two adapters (MAC addresses 11:11:11:11:11:11 and 33:33:33:33:33:33) are STAs (Stations);
  • the only DAK that can be read is for the adapter connected to the computer;
  • the BDA of the CCO is reported as FF:FF:FF:FF:FF:FF;
  • all three Powerline adapters use the QCA7420 chipset;
  • the two Powerline stations are different models of TP-Link adapter (TP-Link versions ending in ‘401115_191120_901’ and ‘401013_171025_901’); the central coordinator is the same TP-Link model as one of the stations (TP-Link version ending in ‘401013_171025_901’).

Indeed, a TP-Link TL-PA4010P(UK) VER:5.0 adapter is connected to this computer, and the two remote adapters are TP-Link TL-PA4010(UK) VER:3.0, one of which is currently acting as the CCO. Last year I updated the Qualcomm Atheros firmware in all of them (see my 2020 post ‘Updating the Powerline adapters in my home network‘).

Resurrecting my Iomega Zip 100 parallel-port drive – Linux comes to the rescue

Top view of Z100P2 drive with 100 MB Zip disk in front.

Z100P2 drive with disk inserted.

Rear sockets of Z100P2 drive.

Back in 1998 I purchased what was then a state-of-the-art storage medium: an external Iomega Zip 100 drive, which used removable 100 MB ‘SuperFloppy’ disks. Until 2002 I backed up my important files on removable Zip 100 MB disks. Over several years in the 1990s Iomega released various models of the Zip 100 MB drive: internal SCSI; internal IDE; internal ATAPI; external DB-25 IEEE 1284 parallel port; external USB 1.1. I bought the external DB-25 IEEE 1284 parallel port model Z100P2. When affordable CD drives and external hard disk drives started to appear I began using those for backups instead, and the Zip drive and a box full of Zip 100 MB disks had been gathering dust on a shelf at home since I stopped using them in 2002.

Now, I was fairly sure I had copied all the files off those Zip disks all those years ago, but recently I wanted to check the contents and then wipe the disks prior to disposing of them and the drive. The trouble was, I have not owned a computer with a legacy parallel port for many years. This is the story of how I managed to use the Zip 100 drive again after a hiatus of some nineteen years.

Notice that the drive has a second DB-25 port with the icon of a printer above it. That socket is to allow a legacy parallel port printer to be connected (‘daisy chained’) to the computer at the same time as the Zip 100 drive. I have not owned a parallel port printer for many years, so that port is of no interest to me.

By the way, the Iomega Zip 100 drive gained rather a bad reputation because of the so-called click of death, although Iomega stated that it affected less than 0.5 percent of all Jaz and Zip drives. I never experienced this problem with my Zip 100 drive and it is still working.

PART 1 – HARDWARE

Power supply for Z100P2

When I purchased it in 1998, the Zip 100 drive was supplied with a chunky and rather heavy 240 VAC to 5 VDC PSU. However, I gave that away several years ago with an old 250 MB external USB HDD that required a 5 VDC power supply. So my first job was to get a 5 VDC supply for the Zip 100 drive. I decided to buy a USB-to-barrel-plug cable to power the Zip drive from a USB port on a computer. So I purchased a ‘USB to 5V DC power cable compatible with the Iomega Z100P2 ZIP drive’ from Amazon. The LEDs on the drive lit up and the drive briefly made the expected noise when I connected the drive to a computer using this power cable, so I was making progress. If a computer happens to have USB Type-A ports, this turns out to be a much neater approach than having to use a 5 VDC PSU.

5 Volts DC power socket on Z100P2 and barrel connector of the cable that is connected to the computer via USB Type-A at the other end.

 
Failed first attempt: USB to legacy parallel port printer adapters do NOT work with parallel Zip drives!

None of my laptops and desktop machines have the legacy DB-25 parallel port that the Z100P2 drive requires. No problem, I thought to myself, I’ll just buy a ‘USB to Printer DB25 25-Pin Parallel Port Cable Adapter’ – there are umpteen of these adapters available on eBay and Amazon. It wasn’t expensive, but I found out the hard way that these cable adapters usually work with parallel printers but definitely do not work with Iomega Zip 100 drives. So I needed to do one of the following:

  • get a parallel printer interface card for a PCIe slot in my modern desktop machines – and hope it would work with a Z100P2 drive;
  • get a legacy computer with a bidirectional parallel port with a DB-25 socket;
  • get a legacy computer with a PCI slot into which I could insert a legacy parallel printer PCI interface card (assuming I could get hold of one).

Computer with legacy parallel port

I searched eBay and found a second-hand Dell OptiPlex 780 SFF (Small Form Factor) with a legacy DB-25 parallel port (connected to the motherboard rather than to a card in one of its PCI slots), an Intel Pentium E5800 CPU (3.20 GHz, 800 MHz FSB), 4 GB of PC3-10600U (1333 MHz) DDR3 DIMM memory and Windows 10 Pro installed with a valid licence. It also has plenty of USB 2.0 Type-A ports, convenient for the USB-to-barrel-plug cable I bought to power the Z100P2 drive. The vendor assured me that Windows 10 detected the parallel port and reported no errors, but had no legacy devices (e.g. a parallel port printer) with which to actually test the port. The price was very reasonable indeed, so I took a gamble and purchased it in the hope that the port would be usable, even though my research on the Web had already indicated that Windows 10 does not support parallel port Iomega Zip drives. I was thinking I could either try using a virtual machine or just wipe Windows 10 and install Linux on the machine.

The FSB speed of the legacy CPU actually limits the memory speed to 800 MHz, but performance is not too bad. I actually replaced the 4 GB of PC3-10600U memory with 8 GB of PC3-12800U (1600 MHz) memory (Crucial CT51264BD160B.C16FED2) which I purchased for a very good price on eBay, although upgrading to 8 GB of memory was not necessary for the purpose of getting the Zip 100 drive working. I decided to increase the memory because the machine is in a nice condition so I will keep it for future projects, which might need more memory.

By the way, the Dell documentation for the OptiPlex 780 SFF that I downloaded from Dell’s Web site states that the machine can only use 1066 MHz memory modules or 1333 MHz memory modules, and the 1333 MHz memory modules would only be able to have a speed of 1066 MHz. What is not obvious is that the documentation assumes that one of the E6xxx series or E7xxx series Wolfdale-3M CPUs (45 nm) is installed, as the speed of the FSB (Front Side Bus) of those CPUs is 1066 MHz. The earlier Wolfdale-3M CPUs which are installed in some OptiPlex 780 SFF machines have a FSB speed of 800 MHz, so even 1066 MHz memory modules are only going to have a speed of 800 MHz in those machines. The Wolfdale-3M CPU in my Dell machine is an E5800, which has a FSB speed of 800 MHz, so the memory speed is limited to 800 MHz (as confirmed on the BIOS System Setup screen, by the CPU-Z utility program running in Windows 10 (2 x 399.0 MHz), and by the Linux commands ‘sudo dmidecode --type 17‘ and ‘sudo lshw -short -C memory‘). The Crucial CT51264BD160B.C16FED2 PC3-12800 modules work fine in the machine, albeit limited to 800 MHz due to the CPU bus speed. On another note, if you happen to be looking for memory for a Dell OptiPlex 780 SFF, do NOT buy CT51264BD160BJ modules: the ‘J’ stands for ‘high-density’, and high-density modules do not work in this model.
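
For anyone wanting to perform the same check on their own machine, the two Linux commands mentioned above can be filtered down to the relevant lines (a minimal example; the exact field names printed by dmidecode vary slightly between versions):

user $ sudo dmidecode --type 17 | grep -i speed   # rated and configured speed of each DIMM
user $ sudo lshw -short -C memory                 # lists the DIMMs and other memory devices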

Parallel port settings in the PC BIOS

The refurbished Dell OptiPlex 780 SFF has the following user-selectable options:

  1. Disable = Port is disabled
  2. AT = Port is configured for IBM AT compatibility
  3. PS/2 = Port is configured for IBM PS/2 compatibility
  4. EPP = Enhanced Parallel Port protocol
  5. ECP No DMA = Extended Capability Port protocol with no DMA
  6. ECP DMA 1 = Extended Capability Port protocol with DMA 1
  7. ECP DMA 3 = Extended Capability Port protocol with DMA 3

The BIOS had the option ‘PS/2’ selected when I received the machine; I eventually changed it to ‘ECP No DMA’, but I think that was unnecessary.

The BIOS also had the Parallel Port Address set to 378h when I received it, and I left it as that.

Data connection

Fortunately I still had the original parallel cable to connect the Zip drive to a DB-25 parallel port on a computer.

Z100P2 end of cable connected to computer parallel port.

Rear of legacy Dell PC with Z100P2 cable connected to the parallel port, and USB-to-barrel-plug power cable connected to a USB port.

PART 2 – SOFTWARE

First attempt – Failure: Windows XP in a VirtualBox virtual machine

My original intention was to wipe Windows 10 from the Dell machine and install Linux to see if I could get Linux to access the Zip drive. But, on second thoughts, I decided I might have a better chance in Windows because my research on the Web had already indicated that several people had successfully used Iomega Zip 100 parallel-port drives with Windows XP running in a virtual machine under Windows 10. I carefully followed a detailed article on how to do this using VirtualBox (How to use iomega zip 100 with parallel port on a windows 10 computer (so long as you have a free PCI slot)), but the Zip drive would not work with the Dell machine. I tried every BIOS option for the parallel port; I tried allowing Windows XP to install the driver; I installed the last official Iomega issue of the driver for Windows XP. Nothing worked.
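
For reference, the VirtualBox side of that article boils down to passing the host parallel port through to the guest with VBoxManage, along the following lines (a sketch from memory for VirtualBox 6.x; ‘WinXP’ is just a placeholder VM name, and the I/O address matches the 378h set in the BIOS):

user $ VBoxManage modifyvm "WinXP" --lptmode1 /dev/parport0   # host parallel port device to pass through
user $ VBoxManage modifyvm "WinXP" --lpt1 0x378 7             # I/O base address and IRQ presented to the guest

Even with the port passed through in this way, the Zip drive could not be made to work in the Windows XP guest on this machine.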

Second attempt – Failure: Lubuntu 20.10 in a VirtualBox virtual machine

Then I decided to try installing Linux in a VirtualBox virtual machine under Windows 10. I chose Lubuntu 20.10 because it already has the necessary kernel modules built: ppa (for older parallel-port Zip drives like mine) and imm (for the later parallel-port Zip 100 models), either of which can simply be loaded from the command line. But that approach could not access the drive either. Again, I tried every BIOS option for the parallel port, without success.

Third attempt – Success: Live Lubuntu 20.10 on a USB pendrive

I was resigned to wiping Windows 10 and installing a Linux distribution when I had a brainwave: Why not try a Live Linux distribution? I used the mkusb utility to create a persistent installation of Live Lubuntu 20.10 on a USB pendrive (it had to use PC BIOS, as the legacy Dell machine does not support UEFI), booted it and used the command modprobe ppa to load the ppa parallel port driver. Shazam! The drive became device /dev/sdc4 and was auto-mounted as ‘ZIP-100’ in the LXQt file manager window. I can browse all the files on the 100 MB ZIP disks. It’s fast, too. I wish I’d thought of trying that first. I could have reformatted the disks with a Linux filesystem (ext4 or whatever) if I wanted to do that.
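
For anyone trying the same thing, the terminal side of it boils down to something like this (the device letter depends on what other drives are present; /dev/sdc is simply what I got):

lubuntu@lubuntu:~$ sudo modprobe ppa    # driver for older parallel-port Zip drives (use imm for the later models)
lubuntu@lubuntu:~$ lsmod | grep ppa     # confirm the module is loaded
lubuntu@lubuntu:~$ sudo dmesg | tail    # the drive should appear as a new SCSI disk, e.g. sdc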

I then downloaded from a Debian amd64 repository the binary package for a Linux GUI utility named ‘jaZip‘, which Jarrod Smith (thank you!) wrote in 1996 for Iomega Jaz and Zip drives, and I installed it easily in the Live Lubuntu 20.10 environment. It works perfectly, allowing me to mount, unmount, lock, unlock and eject Zip 100 MB disks. Linux came to the rescue again. I’m chuffed. Below are details of the steps I took to create a persistent Live USB pendrive of Lubuntu 20.10 with the ability to use my Iomega Z100P2 drive connected to the Dell OptiPlex 780 SFF PC.

By the way, a persistent Live Linux USB pendrive is not essential; it just means you don’t have to load the ppa module manually, re-install jaZip and configure it every time you boot the Live Linux environment.

1. Download the ISO of Lubuntu 20.10 from the official Lubuntu Web site.

2. Use the procedure in the following ‘How To’ article to create a persistent Live pendrive of Lubuntu 20.10 by using the utility mkusb:

Create a persistent Ubuntu USB which boots to RAM

The mkusb windows in that 2016 article are a bit different to those in the version of mkusb (12.3.9) that was installed by following the procedure, but it is fairly obvious what to do. Select the old user interface (Option e: Old User Interface). There is no need to perform the steps in ‘Extra: Boot the Live USB to RAM’ because it is now done automatically for you and added to the GRUB boot menu as an additional option.
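
If I remember correctly, mkusb itself is installed from its Ubuntu PPA before following that procedure, roughly as follows (check the linked article for the definitive steps; the PPA name is from memory):

user $ sudo add-apt-repository ppa:mkusb/ppa
user $ sudo apt update
user $ sudo apt install mkusb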

3. Once I had created the persistent Live pendrive, I booted it and performed the installation procedure for jaZip, and configured the persistent Live installation. The console output for all these steps is shown below:

lubuntu@lubuntu:~$ sudo apt install libforms2
Reading package lists... Done
Building dependency tree       
Reading state information... Done
The following NEW packages will be installed:
  libforms2
0 upgraded, 1 newly installed, 0 to remove and 0 not upgraded.
Need to get 327 kB of archives.
After this operation, 975 kB of additional disk space will be used.
Get:1 http://archive.ubuntu.com/ubuntu groovy/universe amd64 libforms2 amd64 1.2.3-1.4 [327 kB]
Fetched 327 kB in 0s (807 kB/s)  
Selecting previously unselected package libforms2.
(Reading database ... 240052 files and directories currently installed.)
Preparing to unpack .../libforms2_1.2.3-1.4_amd64.deb ...
Unpacking libforms2 (1.2.3-1.4) ...
Setting up libforms2 (1.2.3-1.4) ...
Processing triggers for libc-bin (2.32-0ubuntu3) ...
lubuntu@lubuntu:~$ cd ~/Downloads
lubuntu@lubuntu:~/Downloads$ wget http://ftp.uk.debian.org/debian/pool/main/j/jazip/jazip_0.34-15.1+b2_amd64.deb
--2021-04-14 15:09:15--  http://ftp.uk.debian.org/debian/pool/main/j/jazip/jazip_0.34-15.1+b2_amd64.deb
Resolving ftp.uk.debian.org (ftp.uk.debian.org)... 2001:1b40:5600:ff80:f8ee::1, 78.129.164.123
Connecting to ftp.uk.debian.org (ftp.uk.debian.org)|2001:1b40:5600:ff80:f8ee::1|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 81280 (79K) [application/octet-stream]
Saving to: ‘jazip_0.34-15.1+b2_amd64.deb’

jazip_0.34-15.1+b2_amd64.de 100%[===========================================>]  79.38K  --.-KB/s    in 0.08s

2021-04-14 15:09:15 (941 KB/s) - ‘jazip_0.34-15.1+b2_amd64.deb’ saved [81280/81280]

lubuntu@lubuntu:~/Downloads$ sudo dpkg -i jazip_0.34-15.1+b2_amd64.deb
Selecting previously unselected package jazip.
(Reading database ... 240059 files and directories currently installed.)
Preparing to unpack jazip_0.34-15.1+b2_amd64.deb ...
Unpacking jazip (0.34-15.1+b2) ...
Setting up jazip (0.34-15.1+b2) ...
Processing triggers for man-db (2.9.3-2) ...
lubuntu@lubuntu:~/Downloads$ sudo adduser lubuntu floppy
Adding user `lubuntu' to group `floppy' ...
Adding user lubuntu to group floppy
Done.
lubuntu@lubuntu:~/Downloads$ sudo modprobe ppa # Load the parallel port driver for the Zip drive.
lubuntu@lubuntu:~/Downloads$ sudo blkid # Check if the Zip drive has now been detected.
/dev/sda1: LABEL="system" BLOCK_SIZE="512" UUID="BCF27E52F27E10BE" TYPE="ntfs" PARTUUID="6da119a3-01"
/dev/sda2: LABEL="windows" BLOCK_SIZE="512" UUID="527280DF7280C8E5" TYPE="ntfs" PARTUUID="6da119a3-02"
/dev/sdb1: LABEL="usbdata" BLOCK_SIZE="512" UUID="347345C33A9B90D1" TYPE="ntfs" PARTUUID="793c91c2-01"
/dev/sdb3: LABEL_FATBOOT="lub201064" LABEL="lub201064" UUID="7EAA-D59C" BLOCK_SIZE="512" TYPE="vfat" PARTUUID="793c91c2-03"
/dev/sdb4: BLOCK_SIZE="2048" UUID="2020-10-22-14-26-38-00" LABEL="Lubuntu 20.10 amd64" TYPE="iso9660" PTUUID="509643ab-f22d-4d70-8a47-8708c562cbfe" PTTYPE="gpt" PARTUUID="793c91c2-04"
/dev/loop0: TYPE="squashfs"
/dev/sdb5: LABEL="casper-rw" UUID="55459d4d-48f3-4b50-bd9b-3fd71e552bb2" BLOCK_SIZE="4096" TYPE="ext4" PARTUUID="793c91c2-05"
/dev/zram0: UUID="073aa55f-241b-4deb-b6a0-907676dfff65" TYPE="swap"
/dev/zram1: UUID="692d4cc6-21fa-48b8-8ef7-948dc13dec53" TYPE="swap"
/dev/sdc4: SEC_TYPE="msdos" LABEL_FATBOOT="ZIP-100" LABEL="ZIP-100" UUID="15F9-2C71" BLOCK_SIZE="512" TYPE="vfat" PARTUUID="726a014e-04"
lubuntu@lubuntu:~/Downloads$ sudo mkdir -p /media/lubuntu/ZIP-100
lubuntu@lubuntu:~/Downloads$ sudo /usr/sbin/jazipconfig
There are currently no entries in /etc/jazip.conf.

Zip devices detected on the system:

  1:  Device /dev/sdc

There are no Jaz devices detected on the system.

Available commands:
 (a)dd an entry listed from detected devices.
 (c)reate an entry from scratch.
 (q)uit without saving.
 (e)xit and save changes.
                           ? a

What mount point? (e.g. /zip) /media/lubuntu/ZIP-100
--------------------------------------------
These are the entries currently selected for /etc/jazip.conf:

  1:   Device /dev/sdc   Mount point /media/lubuntu/ZIP-100

There are no other Zip devices detected on the system.

There are no Jaz devices detected on the system.

Available commands:
 (d)elete an entry from /etc/jazip.conf
 (c)reate an entry from scratch.
 (q)uit without saving.
 (e)xit and save changes.
                           ? e
Creating /etc/jazip.conf
lubuntu@lubuntu:~/Downloads$ cat /etc/jazip.conf
# Configuration file for jaZip
#
# Raw Device         Mount Point                  Read but ignored
  /dev/sdc              /media/lubuntu/ZIP-100                      auto    auto        0 0
lubuntu@lubuntu:~/Downloads$ sudo jazip # Launch jaZip.
ERROR! Couldn't write entry to /etc/mtab.
lubuntu@lubuntu:~/Downloads$ sudo jazip # Launch jaZip.
lubuntu@lubuntu:~/Downloads$ sudo nano /etc/modules # Add ppa so it gets loaded automatically.
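
The last command above simply opens /etc/modules in an editor. To make the ppa module load automatically at every boot of the persistent Live environment, the file just needs to contain the module name on a line of its own, so /etc/modules ends up containing something like the following (the header comment is the stock Ubuntu one):

# /etc/modules: kernel modules to load at boot time.
ppa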

 
4. Add a jaZip icon on the Linux Desktop so that you can launch jaZip easily:

4.1 Create the file /home/lubuntu/Desktop/jazip.desktop containing:

[Desktop Entry]
Name=jazip
GenericName=Manage Iomega Jaz and Zip drives
Comment=
Exec=/home/lubuntu/.launch_jazip.sh
Type=Application
Icon=/usr/share/doc/jazip/icons/jazip1.gif
Terminal=false

4.2 Right-click on the icon on the Desktop and tick ‘Trust this executable’.

4.3 Create the file /home/lubuntu/.launch_jazip.sh containing:

#!/bin/bash
lxqt-sudo nohup jazip &

4.4 Make it executable:

lubuntu@lubuntu:~/Downloads$ chmod +x ~/.launch_jazip.sh

jaZip window open on the Lubuntu 20.10 Desktop.

What a pleasure to find that the ppa module, which has been part of the kernel distribution since sometime in the 1.3.x series, is still available and working in today’s Linux kernels, and that jaZip, a utility program for Linux originally released in 1996 and last updated (as far as I can tell) in the year 2001, still works in today’s Linux to manage hardware that has been obsolete for almost as long.

Using jaZip to mount a Zip disk will mount the disk with ownership root:root. Therefore, if I want to copy files to a Zip disk, instead of using jaZip to mount and unmount the disk I click on the device ‘101 MB Volume’ that appears in the Lists pane of the PCManFM-Qt file manager window after a Zip disk is inserted in the drive. I just use jaZip to eject the Zip disk from the drive after unmounting it by clicking on the Unmount icon in the Lists pane of PCManFM-Qt.
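
As an aside, a Zip disk can also be unmounted and ejected from the command line instead of via jaZip, assuming the drive is still /dev/sdc (adjust the device name to suit):

lubuntu@lubuntu:~$ sudo umount /dev/sdc4
lubuntu@lubuntu:~$ sudo eject /dev/sdc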

Notes on keyboard configuration in X Windows: Keyboard layout, Modifier Key and Compose Key

Before I dive into X Windows, I need to mention Miguel Farah’s excellent and comprehensive Web pages on keyboard layouts and standards:

http://www.farah.cl/Keyboardery/

There are umpteen articles, blog and forum posts available on the Web covering keyboard configuration for X Windows, but my notes below may be of help to someone. I briefly cover keyboard layout configuration (non-persistent) from the command line in a pseudo terminal in an X Windows session, and also how to make the configuration persist. I also cover how to configure a ‘Modifier Key‘ and a ‘Compose Key‘, two different things.

1. Changing the layout

Look in the file /usr/share/X11/xkb/rules/xorg.lst to find out what settings are available in X Windows. The file is divided into four sections listing the different keyboard models, layouts, variants and options that X Windows allows:

user $ grep "^! " /usr/share/X11/xkb/rules/xorg.lst
! model
! layout
! variant
! option

For example, the following X Windows German-language keyboard layouts are available in the Linux installation I am using now:

user $ awk '/\!\ layout/{flag=1;next}/\!\ variant/{flag=0}flag' /usr/share/X11/xkb/rules/xorg.lst | grep German
  at              German (Austria)
  de              German
  ch              German (Switzerland)

And the following variants to those three keyboard layouts are available:

user $ awk '/\!\ variant/{flag=1;next}/\!\ option/{flag=0}flag' /usr/share/X11/xkb/rules/xorg.lst | grep "at: German"
  nodeadkeys      at: German (Austria, no dead keys)
  sundeadkeys     at: German (Austria, with Sun dead keys)
  mac             at: German (Austria, Macintosh)
user $ awk '/\!\ variant/{flag=1;next}/\!\ option/{flag=0}flag' /usr/share/X11/xkb/rules/xorg.lst | grep "de: German"
  deadacute       de: German (dead acute)
  deadgraveacute  de: German (dead grave acute)
  nodeadkeys      de: German (no dead keys)
  T3              de: German (T3)
  dvorak          de: German (Dvorak)
  sundeadkeys     de: German (with Sun dead keys)
  neo             de: German (Neo 2)
  mac             de: German (Macintosh)
  mac_nodeadkeys  de: German (Macintosh, no dead keys)
  qwerty          de: German (QWERTY)
  deadtilde       de: German (dead tilde)
user $ awk '/\!\ variant/{flag=1;next}/\!\ option/{flag=0}flag' /usr/share/X11/xkb/rules/xorg.lst | grep "ch: German"
  legacy          ch: German (Switzerland, legacy)
  de_nodeadkeys   ch: German (Switzerland, no dead keys)
  de_sundeadkeys  ch: German (Switzerland, with Sun dead keys)
  de_mac          ch: German (Switzerland, Macintosh)

Let’s say I had a desktop machine with a 104-key Swiss German keyboard. By looking through the list of keyboard models in the models section of the file /usr/share/X11/xkb/rules/xorg.lst, I think the following model best describes the keyboard:

user $ awk '/\!\ model/{flag=1;next}/\!\ layout/{flag=0}flag' /usr/share/X11/xkb/rules/xorg.lst | grep 104
  pc104           Generic 104-key PC

To inform X Windows of the keyboard’s characteristics I could, for example, enter the following command in an X Windows terminal window, which would apply for that session only:

user $ setxkbmap -model pc104 -layout ch -variant legacy

and/or I could configure X Windows permanently by creating/editing a file /etc/X11/xorg.conf.d/00-keyboard.conf containing the following:

Section "InputClass"
Identifier "system-keyboard"
MatchIsKeyboard "on"
Option "XkbModel" "pc104"
Option "XkbLayout" "ch"
Option "XkbVariant" "legacy"
EndSection
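
Whichever method is used, the settings currently in effect can be checked with setxkbmap; for the Swiss German example above the output would be along the following lines (indicative only):

user $ setxkbmap -query
rules:      evdev
model:      pc104
layout:     ch
variant:    legacy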

My laptop has a UK keyboard but, depending where I am, I sometimes connect an external US, Brazilian or Spanish keyboard to it.

Left side of HP UK keyboard

Left side of HP US keyboard

Left side of HP Brazilian keyboard

Left side of HP Iberian Spanish keyboard

To be able to switch the layout to match whichever keyboard I am currently using, either of the following two methods achieves the desired effect in X Windows:

Current session only

user $ setxkbmap -layout gb,us,br,es -model pc105 -option grp:alt_shift_toggle

Persistent

The file /etc/X11/xorg.conf.d/00-keyboard.conf contains:

Section "InputClass"
Identifier "system-keyboard"
MatchIsKeyboard "on"
Option "XkbLayout" "gb,us,br,es"
Option "XkbModel" "pc105"
Option "XkbOptions" "grp:alt_shift_toggle"
EndSection

Either of the above methods will enable me to toggle between UK, US, Brazilian and Iberian Spanish keyboard layouts in X Windows by pressing Alt+Shft. If the laptop had, say, a Brazilian keyboard instead of a UK keyboard then I could change the order of the layouts to ‘br,gb,us,es‘ or whatever order I prefer.

In fact, even when an external keyboard is not connected to my laptop I select the layout using Alt+Shft if I want to type in English, Portuguese or Spanish. For example, to type ‘ã‘ (the letter ‘a‘ with a tilde accent) I press Alt+Shft to switch to the Brazilian Portuguese layout then press the ' (apostrophe) key followed by the A key on the laptop’s UK keyboard. Transparent key-cap stickers can be purchased for various language layouts so that users can see which keys on the keyboard correspond to keys in another layout. However I don’t bother with key-cap stickers because I can remember the layouts for the few languages I use.
 
2. Using a Modifier Key and/or a Compose Key

If you do not connect external keyboards with different layouts, or you want to be able to type letters with accents – or type different symbols – that are not on the keyboard, a Modifier Key and/or a Compose Key can be used. These are two different things. You might use a Modifier Key to add an accent to a letter, for example. If you were to configure, say, AltGr as the Modifier Key, pressing AltGr and the ` (grave accent) key simultaneously then releasing them and pressing the A key could – depending on which keyboard layout you are using – result in à (‘a‘ with the grave accent) being displayed. The ` (grave accent) key is a ‘dead key’ in this case because it is not displayed by itself when pressed in conjunction with the AltGr key; it is only displayed when the next key is pressed, i.e. à, not `a, is displayed on the screen.

You might use a Compose Key to display a symbol that is not on the keyboard. If you were to configure, say, the Pause key as the Compose Key, pressing and releasing the Pause key, then the O key and then the C key could – depending on which keyboard layout you have specified – result in the © (copyright) symbol being displayed.

Let’s say that you want a US keyboard layout with AltGr dead keys, and the Windows key as the Compose key. The setxkbmap command would be:

user $ setxkbmap -layout us -variant altgr-intl -option compose:lwin

Alternatively, the file /etc/X11/xorg.conf.d/00-keyboard.conf to make that configuration permanent would contain:

Section "InputClass"
Identifier "keyboard"
MatchIsKeyboard "yes"
Option "XkbModel" "pc105"
Option "XkbLayout" "us"
Option "XkbVariant" "altgr-intl"
Option "XkbOptions" "compose:lwin"
EndSection

However, the problem with specifying the Windows key as the Compose Key is that the Windows key is usually the key that makes a desktop environment display the applications menu, so an alternative Compose Key needs to be chosen.

You can play around with the XkbModel, XkbLayout, XkbVariant and XkbOptions options to see what works. Look in the file /usr/share/X11/xkb/rules/xorg.lst to find out which values are permissible/available.

Using the example of a generic US International keyboard layout with AltGr dead keys, let’s check what options for the model, layout, variant, option and Compose Key are available:

model

user $ awk '/\!\ model/{flag=1;next}/\!\ layout/{flag=0}flag' /usr/share/X11/xkb/rules/xorg.lst | grep Generic
  pc101           Generic 101-key PC
  pc102           Generic 102-key PC
  pc104           Generic 104-key PC
  pc104alt        Generic 104-key PC with L-shaped Enter key
  pc105           Generic 105-key PC

layout

user $ awk '/\!\ layout/{flag=1;next}/\!\ variant/{flag=0}flag' /usr/share/X11/xkb/rules/xorg.lst | grep "US"
  us              English (US)

variant

user $ awk '/\!\ variant/{flag=1;next}/\!\ option/{flag=0}flag' /usr/share/X11/xkb/rules/xorg.lst | grep dead | grep "us:"
  intl            us: English (US, intl., with dead keys)
  dvorak-intl     us: English (Dvorak, intl., with dead keys)
  altgr-intl      us: English (intl., with AltGr dead keys)
  workman-intl    us: English (Workman, intl., with dead keys)

option

user $ tac /usr/share/X11/xkb/rules/xorg.lst | awk '/\!\ option/ {exit} 1' | tac | grep ralt
  lv3:ralt_switch      Right Alt
  lv3:ralt_switch_multikey Right Alt; Shift+Right Alt as Compose
  lv3:ralt_alt         Right Alt never chooses 3rd level
  ctrl:rctrl_ralt      Right Ctrl as Right Alt
  compose:ralt         Right Alt
  lv5:ralt_switch      Right Alt chooses 5th level
  lv5:ralt_switch_lock Right Alt chooses 5th level and acts as a one-time lock if pressed with another 5th level chooser
  lv5:ralt_switch      Right Alt chooses 5th level
  lv5:ralt_switch_lock Right Alt chooses 5th level and acts as a one-time lock if pressed with another 5th level chooser
  korean:ralt_hangul   Make right Alt a Hangul key
  korean:ralt_hanja    Make right Alt a Hanja key

Compose Key

user $ grep "compose:" /usr/share/X11/xkb/rules/base.lst
  compose:ralt         Right Alt
  compose:lwin         Left Win
  compose:lwin-altgr   3rd level of Left Win
  compose:rwin         Right Win
  compose:rwin-altgr   3rd level of Right Win
  compose:menu         Menu
  compose:menu-altgr   3rd level of Menu
  compose:lctrl        Left Ctrl
  compose:lctrl-altgr  3rd level of Left Ctrl
  compose:rctrl        Right Ctrl
  compose:rctrl-altgr  3rd level of Right Ctrl
  compose:caps         Caps Lock
  compose:caps-altgr   3rd level of Caps Lock
  compose:102          The "<Less/Greater>" key
  compose:102-altgr    3rd level of "<Less/Greater>" key
  compose:paus         Pause
  compose:prsc         PrtSc
  compose:sclk         Scroll Lock

(Not all keyboard layouts have a ‘<Less/Greater>’ key, a single key with both < and > symbols on it.)

The following works for me in LXQt with a US keyboard layout:

user $ setxkbmap -layout us -variant altgr-intl -option compose:paus

With the above configuration, I press:

AltGr+a to get á
AltGr+` then a to get à
AltGr+~ then a to get ã
AltGr+e to get é
AltGr+` then e to get è
AltGr+^ then e to get ê
AltGr+~ then e to get ẽ
AltGr+o to get ó
AltGr+n to get ñ
AltGr+c to get ©
AltGr+< to get ç
AltGr+s to get ß
AltGr+? to get ¿

and so on, and I press:

Pause then o then o to get °
Pause then o then c to get ©
Pause then ~ then a to get ã
Pause then ~ then e to get ẽ
Pause then ^ then 2 to get ²
Pause then _ then 2 to get ₂
Pause then 8 then 8 to get ∞
Pause then E then = to get €
Pause then . then . to get …
Pause then then > to get
Pause then < then to get
Pause then < then 3 to get ♥
Pause then CCCP to get ☭

and so on. Notice that some characters are available using either method (©, ã and ẽ are three examples shown above). A full list of Compose Key characters can be found in the file /usr/share/X11/locale/<locale>/Compose in your installation. For the US layout keyboard the list is in the file /usr/share/X11/locale/en_US.UTF-8/Compose. Various lists of Compose Key sequences and the resulting symbols can also be found on the Web.
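
The Compose file can also be searched directly for a particular character, either by its Unicode code point or by the description in the comment at the end of each line. For example, to find the sequences for the ellipsis and copyright characters in the US locale file mentioned above:

user $ grep U2026 /usr/share/X11/locale/en_US.UTF-8/Compose
user $ grep -i copyright /usr/share/X11/locale/en_US.UTF-8/Compose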

To make the configuration in the aforementioned setxkbmap command permanent I would edit the file /etc/X11/xorg.conf.d/00-keyboard.conf to contain the following:

Section "InputClass"
Identifier "keyboard"
MatchIsKeyboard "yes"
Option "XkbModel" "pc105"
Option "XkbLayout" "us"
Option "XkbVariant" "altgr-intl"
Option "XkbOptions" "compose:paus"
EndSection

Let’s say I want to be able to switch between British (gb), US (us), Brazilian (br) and Iberian Spanish (es) keyboard layouts by using Alt+Shft on my laptop with a UK keyboard. I could use the command:

user $ setxkbmap -model pc105 -layout gb,us,br,es -variant ,altgr-intl,, -option grp:alt_shift_toggle,compose:paus

The commas in the -variant option mean that the ‘altgr-intl‘ variant applies solely to the us layout. The Compose Key option in the -option list applies to all the layouts.

I could make that configuration permanent in /etc/X11/xorg.conf.d/00-keyboard.conf:

Section "InputClass"
Identifier "keyboard"
MatchIsKeyboard "yes"
Option "XkbModel" "pc105"
Option "XkbLayout" "gb,us,br,es"
Option "XkbVariant" ",altgr-intl,,"
Option "XkbOptions" "grp:alt_shift_toggle,compose:paus"
EndSection

Note that I would not be able to specify ‘altgr-intl‘ as a variant for the gb, br and es layouts I use because the variant ‘altgr-intl‘ is not available in those layouts:

user $ awk '/\!\ variant/{flag=1;next}/\!\ option/{flag=0}flag' /usr/share/X11/xkb/rules/xorg.lst | grep dead | grep "gb:"
  intl            gb: English (UK, intl., with dead keys)
user $ awk '/\!\ variant/{flag=1;next}/\!\ option/{flag=0}flag' /usr/share/X11/xkb/rules/xorg.lst | grep dead | grep "br:"
  nodeadkeys      br: Portuguese (Brazil, no dead keys)
user $ awk '/\!\ variant/{flag=1;next}/\!\ option/{flag=0}flag' /usr/share/X11/xkb/rules/xorg.lst | grep dead | grep "es:"
  nodeadkeys      es: Spanish (no dead keys)
  deadtilde       es: Spanish (dead tilde)
  sundeadkeys     es: Spanish (with Sun dead keys)

 
3. Virtual Terminal (TTY console) keyboard configuration

Although this post is about keyboard configuration for X Windows, I should briefly mention that configurations for X Windows do not apply to virtual terminals (TTY consoles).

If you’re using a Linux distribution running OpenRC, you specify the persistent console keymap in the file /etc/conf.d/keymaps. You can find out which console keymaps are available by examining the directories under /usr/share/keymaps/. For example, the following console keymaps are available for US keyboards in Gentoo Linux:

user $ ls /usr/share/keymaps/i386/qwerty/us*
/usr/share/keymaps/i386/qwerty/us-acentos.map.gz
/usr/share/keymaps/i386/qwerty/us.map.gz
/usr/share/keymaps/i386/qwerty/us1.map.gz

so you would be able to specify one of the following in /etc/conf.d/keymaps:

keymap="us-acentos"

keymap="us"

keymap="us1"

It is also possible to change the console keymap (non-persistent) from the command line. For example, to switch to a UK keyboard layout for a TTY console:

root # loadkeys uk

(notice it is not ‘gb‘ in the case of TTY consoles), or to switch to an Italian Apple Macintosh keyboard layout for a TTY console:

root # loadkeys mac-it

and so on.

If you’re using a Linux distribution running systemd, see my 2020 blog post ‘Reconfiguring the time zone, locales and keymaps in Sabayon Linux‘ for the commands to list and configure TTY console keymaps. The persistent TTY console keymap is specified in the file /etc/vconsole.conf, which can be edited directly and is also edited by the ‘localectl set-keymap‘ command mentioned in that post. The loadkeys command can also be used as described above to change (non-persistent) the keyboard layout for the TTY console.
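
For example, on an installation running systemd the TTY console keymap can be listed and set with commands along these lines (just a reminder; the full explanation is in that post):

root # localectl list-keymaps | grep ^uk
root # localectl set-keymap uk
root # localectl status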

Installing Linux on an old Motorola Xoom tablet

Motorola Xoom MZ604 tablet

Back in March 2012 I bought a Motorola Xoom Android tablet (Model MZ604 UK), when tablets were going to be the next big thing. It was available in two versions: 3G and Wi-Fi, and it was the latter version I purchased. When it was released in early 2011 the Xoom was state-of-the-art with its NVIDIA Tegra 2 chip, 1 GB RAM, 32 GB internal storage memory, microSD Card slot (up to 32 GB), Wi-Fi, Bluetooth, GPS, gyroscope, magnetometer, accelerometer, barometer and Android 3.0, trumping the first Apple iPad and Samsung’s Galaxy Tab. It has a 2 MP front-facing camera and 5 MP rear-facing camera that records 720p video, supports 720p video playback, has a 10.1-inch display (1280×800 pixels) and 3D graphics acceleration, and a micro HDMI port.

Apple launched the iPad 2 almost immediately after Motorola launched the Xoom, and the Xoom looked outclassed. By the time I bought my Xoom in March 2012 Motorola was already discounting it. Motorola issued a couple of Android updates for the UK Xoom before the company stopped supporting it, although I think mine lost its second update (Android 4.1.1, if I recall correctly) after I factory-reset it several years later when it became very sluggish. Anyway, ‘Settings’ > ‘About tablet’ tells me it currently has Android 4.0.4 installed.

It had been gathering dust on a shelf for several years until I decided to dust it off yesterday to see if there was anything useful I could still do with it (the answer is: not much). None of the apps on it can be upgraded. The version of the Play Store app can no longer access the Google app store. Even if it could, most of the apps in the app store cannot run in Android 4.0.4. The YouTube app cannot access YouTube. The Web browser cannot browse many modern Web sites and can no longer download files either, displaying a message that the browser is no longer supported and must be updated — except that it cannot be. The Google Talk app no longer works since Google pulled the plug on its Talk service (not that I ever used Google Talk anyway). The Gmail app still works, but I don’t use Gmail either. The Maps app still works, as do the Music and Gallery apps.

I connected the Xoom to my desktop machine using a USB cable (Type-A to Micro-USB) and was able to copy files quickly and easily to and from the Xoom. I then systematically set about finding versions of Android APK files on the Web that the Xoom would be able to install. APKPure is one of several Web sites where older versions of Android APK files can be found. The latest versions I found that the Xoom could install are as follows:

Google Chrome browser

com.android.chrome-42.0.2311.111-2311111-minAPI14.apk

This old version of the Google Chrome browser works better than the browser supplied with Android 4.0.4 on the Xoom but is still not much use, as it cannot browse many sites and cannot download files either. It can access YouTube and play some of the videos, which is some consolation given that neither the browser nor the YouTube apps supplied with Android 4.0.4 can access YouTube any more.

File Manager + (an excellent Android app, by the way)

File Manager_v2.6.0_apkpure.com.apk

This older version of File Manager + works well in Android 4.0.4 on the Xoom, and even enables me to browse files on my Cloud server via WebDAV, although the Xoom cannot open hi-res photos (4032×3024 etc.) via WebDAV. This version of File Manager + supports SMBv1 but not later versions of the protocol, so I cannot browse SMB shares on my home network, as all my machines use either SMBv2 or SMBv3. Pity.

Total Commander

Total Commander file manager_v3.20_apkpure.com.apk
WebDAV plugin Total Commander_v3.01_apkpure.com.apk
LAN plugin for Total Commander_v3.20_apkpure.com.apk

Although I find Total Commander’s UI rather old-fashioned, with the WebDAV and LAN plugins installed I can browse files on my Cloud server via WebDAV, and browse files on my NAS via SMBv2/v3. So Total Commander works well, and the Xoom can open hi-res photos (4032×3024 etc.) via either protocol.

NewPipe legacy (forked by sh000gun to work with Android 4.0+)

NewpipeLegacy-armeabi-v7a-API-14.apk

This open-source YouTube app works in Android 4.0.4 on the Xoom and allows me to view some YouTube videos, although the app tends to crash quite often. Still, it is better than the YouTube app supplied with Android 4.0.4 on the Xoom, as that does not work at all and cannot be upgraded.

Linux

The following Android apps enabled me to root the Xoom and install and run an old version of Linux in a chroot:

BusyBox_v64_apkpure.com.apk

Linux Deploy_v2.5.0_apkpure.com.apk

VNC Viewer Remote Desktop_v2.1.1.019679_apkpure.com.apk

Those were the most recent versions of the BusyBox, Linux Deploy and VNC Viewer apps for Android that the Xoom could manage to install.

Motorola Xoom MZ604 tablet

I downloaded the tarball LAIOT.tar.gz from the following Web page and extracted the file TiamatCWM.img from it:

https://sourceforge.net/projects/laiot/files/LAIOT.tar.gz/download?use_mirror=phoenixnap&r=&use_mirror=master

Note: Do NOT try to run the shell scripts in LAIOT, because they are out of date and will mess up the ADB and Fastboot tools in Linux on the desktop machine.

To be able to install Linux it was first necessary to root Android 4.0.4 on the Xoom. I used a modified version of the procedure given in the 2014 blog post Motorola Xoom Root on Linux:

• I installed ADB and Fastboot on a desktop machine running Lubuntu 20.10:

user $ sudo apt install adb
user $ sudo apt install fastboot

• I enabled the USB Debugging mode on the Xoom (‘Settings’ > ‘Developer options’).
• I downloaded the file Xoom-Universal-Root.zip from XDA Developers Forums thread [Root] Universal Xoom Root – ANY XOOM ANY UPDATE. The main link in that thread no longer works but a link in Post #411 in the thread still downloads the file.
• I inserted a 32 GB microSD card in the Xoom microSD Card slot.
• I connected the Xoom to the desktop machine via a USB cable.
• I copied the file Xoom-Universal-Root.zip to the microSD card.
• I checked connectivity:

user $ adb devices
* daemon not running. starting it now on port 5037 *
* daemon started successfully *
List of devices attached 
0299918743aad023        device

• I rebooted the Xoom:

user $ adb reboot bootloader

• ‘Starting Fastboot protocol support’ was displayed on the Xoom’s boot screen. I typed the following command on the desktop machine:

user $ fastboot oem unlock

• In response to a question on the text screen on the Xoom I pressed Volume Down (accept) then Volume Up (confirm).
• I repeated the process to confirm, i.e. I pressed Volume Down (accept) then Volume Up (confirm).
• ‘Device unlock operation in progress’ appeared on the Xoom screen and the Xoom rebooted.
• The bootloader was now unlocked.
• I typed the following commands on the desktop machine:

user $ adb reboot bootloader
user $ fastboot flash recovery TiamatCWM.img

• When flashing was complete I rebooted the Xoom by pressing Volume Up + the ON/OFF button.
• Upon booting, when the Motorola logo appeared I pressed Volume Down.
• ‘Android Recovery’ appeared in the top left corner of the screen.
• I pressed Volume Up to enter recovery mode.
• This mode is called ‘ClockworkMod recovery’. I selected ‘Install zip from sdcard’ > ‘Choose zip from sdcard’, then selected the zip file I had downloaded earlier to the microSD card (Use Volume Up/Down to navigate and ON/OFF to select).
• I rebooted, and root access was enabled. I verified this by downloading the Android app ‘Root Checker_v6.5.0_apkpure.com.apk’, copying it to the Xoom via USB, installing it and launching the app.
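
A further quick check can be made from the desktop machine via ADB, assuming the root package installed a su binary (the Superuser app may pop up an approval prompt on the tablet the first time):

user $ adb shell su -c id   # should report uid=0(root) if rooting succeeded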

Now that the Xoom had been rooted, I could proceed with installing Linux in a chroot. To do this I followed the procedure given in the 2017 Android Authority article How to install a Linux desktop on your Android device. I installed the three apps BusyBox, Linux Deploy and VNC Viewer, launched the BusyBox app and tapped ‘Install’. Then I launched Linux Deploy, tapped the configuration icon next to the STOP button in the top right of the screen, selected ‘Ubuntu’ as the distribution, ‘Precise [Pangolin]’ as the distribution suite and LXDE as the desktop environment, and configured Linux Deploy as follows:

BOOTSTRAP

	Distribution
	Ubuntu

	Architecture
	armhf

	Distribution suite
	precise

	Source path
	http://ports.ubuntu.com/

	Installation type
	File

	Installation path
	${EXTERNAL_STORAGE}/linux.img

	Image size (MB)
	Automatic calculation

	File system
	ext4

	User name
	root

	User password
	android

	Privileged users
	root

	Localization
	C

	DNS
	Automatic detection

	Network trigger

	Power trigger

INIT

	Enable
	Allow to use a initialization system  <--- NOT TICKED

	Init system
	run-parts

	Init settings
	Change settings for the initialization system

MOUNTS

	Enable
	Allow to mount the Android resources  <--- NOT TICKED

	Mount points
	Edit the mount points list

SSH

	Enable
	Allow to use a SSH server  <--- NOT TICKED

	SSH settings
	Change settings for SSH server

PULSEAUDIO

	Enable
	Allow to use an audio output  <--- NOT TICKED

GUI

	Enable
	Allow to use a graphical environment  <--- TICKED

	Graphics subsystem
	VNC

	GUI settings
	Change settings for the graphics subsystem

	Desktop environment
	LXDE

Then I tapped the three-dot icon in the top right of the screen, tapped ‘Install’ then ‘OK’. Once the messages on the screen stopped scrolling and a final message ‘<<< deploy’ was displayed, I tapped the START arrow and ‘OK’.

Linux Deploy running on the Motorola Xoom MZ604 tablet

I launched VNC Viewer, tapped the ‘+’ icon to add a new connection, entered ‘localhost:5900’ for the address and ‘Linux’ for the name, tapped ‘CREATE’ then ‘CONNECT’. From there I was prompted to enter the password I had specified previously under ‘User password’ (see above), and the LXDE Desktop was displayed.

Motorola Xoom MZ604 tablet running Ubuntu Precise Pangolin with LXDE in a chroot

Having followed the procedure in the above-mentioned article to configure and install the Linux image, I now use the following steps to start and stop Linux on the Xoom:

To start Linux on the Xoom, use Linux Deploy.
Press the ‘START’ arrow at the top right of the Linux Deploy screen.
Then open VNC and press ‘Connect’.

To exit Linux on the Xoom, use Linux Deploy.
Tap the square ‘STOP’ button at the top right of the Linux Deploy screen.
Tap ‘OK’ to ‘Stop services & unmount the container’.
Then tap the menu button (three horizontal bars) at the top left of the Linux Deploy screen.
Tap ‘Exit’.

To exit VNC Viewer:
Press the ‘Recent Apps’ icon (two overlapping rectangles) at the bottom left of the Xoom’s Android screen.
Swipe to the left to close the app.

How to patch kde-plasma/plasma-firewall-5.21.2 for UFW in Gentoo Linux with OpenRC

Unfortunately plasma-firewall-5.21.2, a new Plasma frontend for firewalld and UFW, has been written only for Linux installations with systemd. However, I use OpenRC and syslog-ng in Gentoo Linux and wanted to try to get plasma-firewall to work on my laptop which uses UFW. I therefore set about patching plasma-firewall-5.21.2. I did not touch the firewalld part of plasma-firewall, as I do not use firewalld (and the plasma-firewall code for firewalld is more complicated). Below is what I did.

root # wget https://invent.kde.org/plasma/plasma-firewall/-/archive/Plasma/5.21/plasma-firewall-Plasma-5.21.tar.gz
root # tar -xzf plasma-firewall-Plasma-5.21.tar.gz
root # cp -pr plasma-firewall-Plasma-5.21 a
root # cp -pr plasma-firewall-Plasma-5.21 b
root # nano b/kcm/backends/ufw/ufwclient.cpp # Apply changes shown in Part 1 below.
root # nano b/kcm/backends/ufw/helper/helper.cpp # Apply changes shown in Part 2 below.
root # nano /usr/bin/print_ufw_messages # Create Bash script shown in Part 2 below.
root # chmod 755 /usr/bin/print_ufw_messages
root # nano b/kcm/backends/ufw/ufwlogmodel.cpp # Apply changes shown in Part 3 below.
root # diff -ruN a b > plasma-firewall-5.21.2-ufw.patch
root # mkdir -p /etc/portage/patches/kde-plasma/plasma-firewall-5.21.2
root # cp plasma-firewall-5.21.2-ufw.patch /etc/portage/patches/kde-plasma/plasma-firewall-5.21.2/
root # emerge -1v plasma-firewall
root # nano /etc/syslog-ng/syslog-ng.conf # Apply changes shown in Part 4 below.

You should now be able to use plasma-firewall for UFW in KDE Plasma’s ‘System Settings’ > ‘Firewall’ in the Network section, although I have not tried all the functions. Additionally, I believe there may be some outstanding bugs in the original 5.21.2 version of the Plasma module when using it with systemd.

Part 1

In /kcm/backends/ufw/ufwclient.cpp change:

bool UfwClient::isCurrentlyLoaded() const
{
    QProcess process;
    const QString name = "systemctl";
    const QStringList args = {"status", "ufw"};

    process.start(name, args);
    process.waitForFinished();

    // systemctl returns 0 for status if the app is loaded, and 3 otherwise.
    qDebug() << "Ufw is loaded?" << (process.exitCode() == EXIT_SUCCESS);

    return process.exitCode() == EXIT_SUCCESS;
}

to:

bool UfwClient::isCurrentlyLoaded() const
{
    QProcess process;
    const QString name = "rc-service";
    const QStringList args = {"--exists", "ufw"};

    process.start(name, args);
    process.waitForFinished();

    // "rc-service --exists" returns 0 if the app is loaded, and -1 otherwise.
    qDebug() << "Ufw is loaded?" << (process.exitCode() == EXIT_SUCCESS);

    return process.exitCode() == EXIT_SUCCESS;
}
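
For anyone wanting to confirm the behaviour of rc-service on an OpenRC installation before applying the patch, a quick check from a shell looks like this (the -1 mentioned in the comment above shows up as an exit status of 255 in a shell):

user $ rc-service --exists ufw ; echo $?              # prints 0 if the ufw init script exists
user $ rc-service --exists no-such-service ; echo $?  # prints a non-zero value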

Part 2

In /kcm/backends/ufw/helper/helper.cpp change:

QStringList getLogFromSystemd(const QString &lastLine)
{
    QString program = "journalctl";
    QStringList arguments {"-xb", "-n", "100", "-g", "UFW"};

    QProcess myProcess;
    myProcess.start(program, arguments);
    myProcess.waitForFinished();

    auto resultString = QString(myProcess.readAllStandardOutput());
    auto resultList = resultString.split("\n");

    // Example Line from Systemd:
    // Dec 06 17:42:45 tomatoland kernel: [UFW BLOCK] IN=wlan0 OUT= MAC= SRC=192.168.50.181 DST=224.0.0.252 LEN=56 TOS=0x00
    //     PREC=0x00 TTL=255 ID=52151 PROTO=UDP SPT=5355 DPT=5355 LEN=36
    // We need to remove everything up to the space after ']'.

    QStringList result;
    for (const QString &line : resultList) {
        if (!lastLine.isEmpty() && line == lastLine) {
            result.clear();
            continue;
        }
        result.append(line);
    }
    return result;
}

to:

QStringList getLogFromSystemd(const QString &lastLine)
{
    QString program = "print_ufw_messages";
    QStringList arguments {"UFW", "100"};

    QProcess myProcess;
    myProcess.start(program, arguments);
    myProcess.waitForFinished();

    auto resultString = QString(myProcess.readAllStandardOutput());
    auto resultList = resultString.split("\n");

    // Example line from /var/log/messages populated by syslog-ng:
    // Mar  6 00:10:19 localhost kernel: [UFW BLOCK] IN=wlan0 OUT= MAC=00:12:5b:8a:83:6d:b7:2a:da:59:d4:10:09:00 SRC=192.168.1.27
    //      DST=192.168.1.139 LEN=52 TOS=0x00 PREC=0x00 TTL=64 ID=41659 DF PROTO=TCP SPT=445 DPT=52140 WINDOW=260 RES=0x00 ACK URGP=0
    // We need to remove everything up to the space after ']'.

    QStringList result;
    for (const QString &line : resultList) {
        if (!lastLine.isEmpty() && line == lastLine) {
            result.clear();
            continue;
        }
        result.append(line);
    }
    return result;
}

where the program print_ufw_messages is a user-created Bash script /usr/bin/print_ufw_messages (-rwxr-xr-x root.root) containing:

#!/bin/bash
awk '{if (/localhost syslog-ng/ && /syslog-ng starting up/ && !/COMMAND/) {chunk=""} else {chunk=chunk $0 RS}} END {printf "%s", chunk}' /var/log/messages | grep "$1" | head -n "$2" | grep -v print_ufw_messages
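
As can be seen from the arguments {"UFW", "100"} in the modified helper.cpp above, the helper invokes this script as ‘print_ufw_messages UFW 100‘ (the grep pattern and the maximum number of lines), so the script can be tested by hand before emerging the patched package:

root # print_ufw_messages UFW 100 | head -n 5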

Part 3

During my investigations into how to modify the plasma-firewall-5.21.2 source code, I discovered a bug in the source code. In /kcm/backends/ufw/ufwlogmodel.cpp change:

for (const QString& key : {"IN", "SRC", "DST", "PROTO", "STP", "DPT"}) {

to:

for (const QString& key : {"IN", "SRC", "DST", "PROTO", "SPT", "DPT"}) {

i.e. “STP” needs to be changed to “SPT“.

Part 4

I am not sure if this makes a difference to plasma-firewall (which was coded assuming systemd-journald is installed), but the default date format for messages in /var/log/messages printed by syslog-ng has only one digit in the day of the month when it is less than the 10th day of the month. For example:

Mar  9 03:09:39 clevow230ss syslog-ng[23735]:  syslog-ng starting up; version='3.30.1'

However, journalctl always outputs two-digit days of the month, and I think (but am not certain) the following date format might be needed in order for the existing code in /kcm/backends/ufw/ufwlogmodel.cpp to parse the syslog-ng output correctly:

Mar 09 03:09:39 clevow230ss syslog-ng[23735]:  syslog-ng starting up; version='3.30.1'

Therefore edit /etc/syslog-ng/syslog-ng.conf and add a template:

template template_date_format {
template("${MONTH_ABBREV} ${DAY} ${HOUR}:${MIN}:${SEC} ${HOST} ${MSGHDR}${MSG}\n");
template_escape(no);
};

and change the line:

destination messages { file("/var/log/messages"); };

to:

destination messages { file("/var/log/messages" template(template_date_format)); };

Then restart syslog-ng:

root # rc-service syslog-ng restart

From now on the day of the month is always two digits (01, 02,…31) in /var/log/messages.

Recreating missing WINE menu entries and Desktop Configuration Files in Lubuntu 20.10

I use a few Windows applications I installed via WINE in my user account on my family’s desktop machine running Lubuntu 20.10 (LXQt Desktop Environment). A few days ago I logged in and found that the icons for the Windows applications had disappeared from my Desktop, and the ‘Wine’ entry in the LXQt applications menu had also disappeared. This was rather bizarre and I still have no idea why it happened. However, the directories for each WINEPREFIX were still present so I set about recreating the missing menu entries and Desktop Configuration Files. I reinstalled one of the Windows applications, and its icon reappeared on my Desktop but the ‘Wine’ entry in the LXQt applications menu did not reappear. I had to delve into WINE menu structures to fix everything.

Three key directories are involved in defining the ‘Wine’ menu entries:

~/.config/menus/applications-merged/

~/.local/share/applications/wine/Programs/

~/.local/share/desktop-directories/

The role and contents of these directories are best explained by studying an example of an application in the ‘Wine’ menu. One of the Windows applications I had installed previously via WINE is Visio Professional 5, and I will use it as an example to illustrate how I got everything working again. I had installed the application using a WINEPREFIX of ~/.wine-visio, and the missing icon on my Desktop had been labelled ‘Visio Professional’.

1. I recreated the directory ~/.local/share/applications/wine/Programs/Visio Professional/:

user $ mkdir -p ~/.local/share/applications/wine/Programs/Visio\ Professional

2. I recreated the file ~/.config/menus/applications-merged/wine-Programs-Visio Professional-Visio Professional.menu (chmod 664) containing the following:

<!DOCTYPE Menu PUBLIC "-//freedesktop//DTD Menu 1.0//EN"
"http://www.freedesktop.org/standards/menu-spec/menu-1.0.dtd">
<Menu>
  <Name>Applications</Name>
  <Menu>
    <Name>wine-wine</Name>
    <Directory>wine-wine.directory</Directory>
  <Menu>
    <Name>wine-Programs</Name>
    <Directory>wine-Programs.directory</Directory>
  <Menu>
    <Name>wine-Programs-Visio Professional</Name>
    <Directory>wine-Programs-Visio Professional.directory</Directory>
    <Include>
      <Filename>wine-Programs-Visio Professional-Visio Professional.desktop</Filename>
    </Include>
  </Menu>
  </Menu>
  </Menu>
</Menu>

‘wine-wine‘ corresponds to the ‘Wine’ entry in the top-level LXQt applications menu.

‘wine-Programs‘ corresponds to the second-level menu entry ‘Programs’ (i.e. ‘Wine’ > ‘Programs’).

‘wine-Programs-Visio Professional‘ corresponds to the third-level menu entry ‘Visio Professional’ (i.e. ‘Wine’ > ‘Programs’ > ‘Visio Professional’).

‘wine-Programs-Visio Professional-Visio Professional‘ corresponds to the fourth-level menu entry ‘Visio Professional’ for the application itself (i.e. ‘Wine’ > ‘Programs’ > ‘Visio Professional’ > ‘Visio Professional’).

3. Notice in the above file the syntax for menu directory files corresponding to menu entries. I had to recreate the directory files as follows:

~/.local/share/desktop-directories/wine-wine.directory (chmod 664) containing:

[Desktop Entry]
Type=Directory
Name=Wine
Icon=wine

~/.local/share/desktop-directories/wine-Programs.directory (chmod 664) containing:

[Desktop Entry]
Type=Directory
Name=Programs
Icon=folder

~/.local/share/desktop-directories/wine-Programs-Visio Professional.directory (chmod 664) containing:

[Desktop Entry]
Type=Directory
Name=Visio Professional
Icon=folder

4. I recreated the file ~/.local/share/applications/wine/Programs/Visio Professional/Visio Professional.desktop (chmod 664) containing:

[Desktop Entry]
Name=Visio Professional
Exec=env WINEPREFIX="/home/fitzcarraldo/.wine-visio" wine-stable /home/fitzcarraldo/.wine-visio/drive_c/Program\ Files/Visio/Visio32.EXE
Type=Application
StartupNotify=true
Path=/home/fitzcarraldo/.wine-visio/dosdevices/c:/Program Files/Visio
Comment=Visio Professional
Icon=AAE3_Visio32.0
StartupWMClass=visio32.exe

and I copied the file to ~/Desktop/Visio Professional.desktop (chmod 755). I right-clicked on ~/Desktop/Visio Professional.desktop and ticked ‘Trust this executable’. It is not necessary to do that for .desktop files in ~/.local/share/applications/wine/Programs/ and its sub-directories.

I used the command ‘locate -i visio | grep -i png‘ to find the name of the existing icon file (AAE3_Visio32.0.png) that WINE had created when I originally installed the application. The StartupWMClass variable seems to be the same as the application’s executable file name but all in lower case. I found the Exec and Path entries by examining the existing sub-directories and files in ~/.wine-visio/drive_c/.
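
If in doubt about the correct value for StartupWMClass, the window class of a running application can be checked with xprop: enter the following command in a terminal window, then click on the application’s window and look at the two strings reported (for applications running in WINE, one of them is usually the executable name in lower case, as noted above):

user $ xprop WM_CLASS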

The ‘Wine’ menu entry and sub-entries all reappeared correctly after I logged out and back in, and I could again launch the application either by selecting the application from the LXQt application menu or by double-clicking on the application’s icon on my Desktop.

Resulting application menu entry for Windows application Visio Professional 5

The Windows applications are now all usable again, although I wish I knew what caused the problem in the first place.

Anyway the exercise was not a waste of time because I now know how to modify WINE menus. Some Windows application installation programs in WINE result in a menu entry ‘Wine’ > ‘Programs’ > ‘<application>’ > ‘<application>’ whereas others result in a menu entry ‘Wine’ > ‘Programs’ > ‘<application>’, and I now know how to change the menu hierarchy if I want to. For example, I have just now installed the Windows application SumatraPDF to read e-books. The SumatraPDF installation program launched using WINE resulted in a menu entry ‘Wine’ > ‘Programs’ > ‘SumatraPDF’. The resulting file ~/.config/menus/applications-merged/wine-Programs-SumatraPDF.menu contained the following:

<!DOCTYPE Menu PUBLIC "-//freedesktop//DTD Menu 1.0//EN"
"http://www.freedesktop.org/standards/menu-spec/menu-1.0.dtd">
<Menu>
  <Name>Applications</Name>
  <Menu>
    <Name>wine-wine</Name>
    <Directory>wine-wine.directory</Directory>
  <Menu>
    <Name>wine-Programs</Name>
    <Directory>wine-Programs.directory</Directory>
    <Include>
      <Filename>wine-Programs-SumatraPDF.desktop</Filename>
    </Include>
  </Menu>
  </Menu>
</Menu>

Original application menu entry for Windows application SumatraPDF installed via WINE

There was no .directory file for SumatraPDF in ~/.local/share/desktop-directories/ because the menu entry to launch SumatraPDF is under ‘Wine’ > ‘Programs’. If I wanted to change the menu entry to be under ‘Wine’ > ‘Programs’ > ‘SumatraPDF’ I could modify the contents of the file ~/.config/menus/applications-merged/wine-Programs-SumatraPDF.menu, create the file ~/.local/share/desktop-directories/wine-Programs-SumatraPDF.directory, create the directory ~/.local/share/applications/wine/Programs/SumatraPDF/ and move the file ~/.local/share/applications/wine/Programs/SumatraPDF.desktop to ~/.local/share/applications/wine/Programs/SumatraPDF/SumatraPDF.desktop. I decided to do this as an exercise:

user $ mkdir -p ~/.local/share/applications/wine/Programs/SumatraPDF/
user $ mv ~/.local/share/applications/wine/Programs/SumatraPDF.desktop ~/.local/share/applications/wine/Programs/SumatraPDF/SumatraPDF.desktop

I edited the file ~/.config/menus/applications-merged/wine-Programs-SumatraPDF.menu so it now contains the following:

<!DOCTYPE Menu PUBLIC "-//freedesktop//DTD Menu 1.0//EN"
"http://www.freedesktop.org/standards/menu-spec/menu-1.0.dtd">
<Menu>
  <Name>Applications</Name>
  <Menu>
    <Name>wine-wine</Name>
    <Directory>wine-wine.directory</Directory>
  <Menu>
    <Name>wine-Programs</Name>
    <Directory>wine-Programs.directory</Directory>
  <Menu>
    <Name>wine-Programs-SumatraPDF</Name>
    <Directory>wine-Programs-SumatraPDF.directory</Directory>
    <Include>
      <Filename>wine-Programs-SumatraPDF-SumatraPDF.desktop</Filename>
    </Include>
  </Menu>
  </Menu>
  </Menu>
</Menu>

I created the file ~/.local/share/desktop-directories/wine-Programs-SumatraPDF.directory containing the following:

[Desktop Entry]
Type=Directory
Name=SumatraPDF
Icon=folder

I logged out and back in, and the application menu entry for SumatraPDF had changed from:

‘Wine’ > ‘Programs’ > ‘SumatraPDF’

where the second-level entry in the ‘Wine’ menu has a folder icon,

to:

‘Wine’ > ‘Programs’ > ‘SumatraPDF’ > ‘SumatraPDF’

where the second-level and third-level entries in the ‘Wine’ menu have folder icons. The other Windows applications in my user account are at the fourth level of the ‘Wine’ menu, so the menu hierarchy for SumatraPDF is now consistent with them.

Modified application menu entry for Windows application SumatraPDF installed via WINE

By the way, the Desktop Configuration File ~/Desktop/SumatraPDF.desktop created by WINE contains the following:

[Desktop Entry]
Name=SumatraPDF
Exec=env WINEPREFIX="/home/fitzcarraldo/.wine-sumatra" wine-stable C:\\\\users\\\\fitzcarraldo\\\\Local\\ Settings\\\\Application\\ Data\\\\SumatraPDF\\\\SumatraPDF.exe 
Type=Application
StartupNotify=true
Path=/home/fitzcarraldo/.wine-sumatra/dosdevices/c:/users/fitzcarraldo/Local Settings/Application Data/SumatraPDF
Icon=3EBA_SumatraPDF.0
StartupWMClass=sumatrapdf.exe

and the Desktop Configuration File ~/.local/share/applications/wine/Programs/SumatraPDF.desktop created by WINE contains the following:

[Desktop Entry]
Name=SumatraPDF
Exec=env WINEPREFIX="/home/fitzcarraldo/.wine-sumatra" wine-stable C:\\\\windows\\\\command\\\\start.exe /Unix /home/fitzcarraldo/.wine-sumatra/dosdevices/c:/users/fitzcarraldo/Start\\ Menu/Programs/SumatraPDF.lnk
Type=Application
StartupNotify=true
Path=/home/fitzcarraldo/.wine-sumatra/dosdevices/c:/users/fitzcarraldo/Local Settings/Application Data/SumatraPDF
Icon=3EBA_SumatraPDF.0
StartupWMClass=sumatrapdf.exe

I am not sure why there is a difference in the Exec command in the two files, but that is an investigation for another day.

Addendum (13 March 2021): KDE in Gentoo Linux on my laptops has essentially the same menu structure and files for Windows applications installed via WINE. However, unlike LXQt in Lubuntu 20.10, in addition to the individual .menu file per Windows application KDE has a file (~/.config/menus/applications-kmenuedit.menu) that defines the entire KDE applications menu, not just the Windows applications under ‘Wine’ in the applications menu. To make changes to the menu structure of Windows applications in KDE I therefore have to perform a further step; I have to edit the file ~/.config/menus/applications-kmenuedit.menu, which I have found to be a hassle. The file seems to collect cruft every time a menu entry is created, moved, changed, or deleted. Over time the file can become very large and confusing to read, and it can still contain entries for applications removed years ago. Also, some of the edits I make in the file are not accepted and KDE either reverts the contents or alters the contents in a way I do not want. Therefore I make a copy of the file before editing it, just in case I make a mistake and have to put things back to the way they were.
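
The exact command I use to make that copy is not shown above, but it amounts to something like the following before each editing session (the name of the backup file is just my choice):

user $ cp ~/.config/menus/applications-kmenuedit.menu ~/.config/menus/applications-kmenuedit.menu.bak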

Removing qtwebengine from a Gentoo Linux installation

At the beginning of March I updated the world set in Gentoo Testing (~amd64) running the KDE suite (Plasma, Frameworks and Applications) on my secondary laptop, an eleven-year-old Compal NBLB2. It has a first-generation Core i7 CPU and the maximum amount of RAM that can be installed in that model (8 GB).

root # uname -a
Linux meshedgedx 5.0.11-gentoo #1 SMP Fri Jun 7 15:33:06 BST 2019 x86_64 Intel(R) Core(TM) i7 CPU Q 720 @ 1.60GHz GenuineIntel GNU/Linux

Gentoo Linux being a source-based distribution, updates to the largest packages take hours to build on older machines. Actually, some packages can take hours to build on newer machines too. On this older laptop I therefore merge the www-client/firefox-bin binary package instead of the www-client/firefox source-code package, and have installed Microsoft Office 2007 running in WINE instead of trying to merge the app-office/libreoffice source-code package (app-office/libreoffice-bin cannot be merged in this Testing installation because of incompatibility with the versions of installed dependencies, so it would only be a viable alternative binary package in a Stable installation).

Possibly the worst source-code package to build is dev-qt/qtwebengine. Nowadays it takes a ridiculous amount of time to build on this laptop, even with the jumbo-build USE flag set and MAKEOPTS="-j4" or even MAKEOPTS="-j1". The latest merge on the laptop took more than 14 hours:

root # genlop -t qtwebengine | tail -n 3
     Fri Mar  5 02:02:07 2021 >>> dev-qt/qtwebengine-5.15.2_p20210224
       merge time: 14 hours, 14 minutes and 7 seconds.


That is actually quite fast for that laptop; qtwebengine has sometimes taken two days to merge in the past.
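
For reference, those settings live in the usual Portage configuration files. The following is only a sketch, as my actual files are not reproduced here and jumbo-build may already be enabled by default depending on the ebuild version:

# Excerpt from /etc/portage/make.conf
MAKEOPTS="-j4"

# Excerpt from /etc/portage/package.use/package.use
dev-qt/qtwebengine jumbo-build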

What a waste of time and electricity, not to mention the unnecessary wear on the laptop (fan bearing; prolonged heat on components; etc.).

This one package is such a hassle to merge that it had me wondering if I should switch from Gentoo Linux to a binary distribution. Even on my six-year-old Compal W230SS laptop with a fourth-generation Core i7 CPU and 16 GB of RAM, qtwebengine takes circa five hours to merge. After several years putting up with this scourge of source-based Linux distributions on my secondary laptop, I had finally had enough and decided to excise the package, which did not look like an easy task with the full KDE suite installed. This is how I did it…

1. First I made sure the installation was up-to-date (see my earlier post ‘My system upgrade procedure for Gentoo Linux‘ for the steps I normally use to update all packages to their latest versions).

2. I ascertained which packages depended on qtwebengine:

root # equery depends qtwebengine
 * These packages depend on qtwebengine:
kde-apps/kaccounts-providers-20.12.2 (>=dev-qt/qtwebengine-5.15.2:5)
kde-apps/kalgebra-20.12.2 (>=dev-qt/qtwebengine-5.15.2:5[widgets])
kde-apps/kdenlive-20.12.2 (webengine ? >=dev-qt/qtwebengine-5.15.2:5)
kde-apps/kimagemapeditor-20.12.2 (>=dev-qt/qtwebengine-5.15.2:5[widgets])
kde-apps/ktp-text-ui-20.12.2 (>=dev-qt/qtwebengine-5.15.2:5[widgets])
kde-apps/marble-20.12.2 (webengine ? >=dev-qt/qtwebengine-5.15.2:5[widgets])
kde-apps/parley-20.12.2 (>=dev-qt/qtwebengine-5.15.2:5[widgets])
kde-plasma/kdeplasma-addons-5.21.1 (webengine ? >=dev-qt/qtwebengine-5.15.2:5)
kde-plasma/libksysguard-5.21.1 (webengine ? >=dev-qt/qtwebengine-5.15.2:5)
net-libs/signon-ui-0.15_p20171022-r1 (dev-qt/qtwebengine:5)
net-p2p/ktorrent-20.12.2 (rss ? >=dev-qt/qtwebengine-5.15.2:5)
                         (webengine ? >=dev-qt/qtwebengine-5.15.2:5)
www-client/falkon-3.1.0-r1 (>=dev-qt/qtwebengine-5.12.3:5[widgets])

3. I disabled the USE flag ‘webengine‘ globally:

root # nano /etc/portage/make.conf # Add -webengine to the list of USE flags
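
My full make.conf is not reproduced here, but the change simply amounts to appending -webengine to the existing USE variable, along these lines (the ellipsis stands for the flags already present):

# Excerpt from /etc/portage/make.conf
USE="... -webengine"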

4. I merged the world set in order to incorporate the USE flag change:

root # emerge -uvDN @world

These are the packages that would be merged, in order:

Calculating dependencies... done!
[ebuild   R    ] kde-apps/marble-20.12.2:5/20.12::gentoo  USE="dbus geolocation kde nls pbf phonon -aprs -debug -designer -gps -handbook -shapefile -test -webengine*" 0 KiB
[ebuild   R    ] kde-apps/kdeedu-meta-20.12.2:5::gentoo  USE="-webengine*" 0 KiB
[ebuild   R    ] kde-apps/kdecore-meta-20.12.2:5::gentoo  USE="share thumbnail -handbook -webengine*" 0 KiB
[ebuild   R    ] net-p2p/ktorrent-20.12.2:5::gentoo  USE="bwscheduler downloadorder infowidget ipfilter kross logviewer magnetgenerator mediaplayer rss scanfolder shutdown stats upnp zeroconf -debug -handbook -test -webengine*" 0 KiB
[ebuild   R    ] kde-apps/kdenetwork-meta-20.12.2:5::gentoo  USE="bittorrent -dropbox -webengine*" 0 KiB
[ebuild   R    ] kde-apps/kdeutils-meta-20.12.2:5::gentoo  USE="cups rar -7zip -floppy -gpg -lrz -webengine*" 0 KiB

Total: 6 packages (6 reinstalls), Size of downloads: 0 KiB

>>> Verifying ebuild manifests
>>> Emerging (1 of 6) kde-apps/marble-20.12.2::gentoo
>>> Emerging (2 of 6) kde-apps/kdecore-meta-20.12.2::gentoo
>>> Emerging (3 of 6) net-p2p/ktorrent-20.12.2::gentoo
>>> Emerging (4 of 6) kde-apps/kdeutils-meta-20.12.2::gentoo
>>> Installing (2 of 6) kde-apps/kdecore-meta-20.12.2::gentoo
>>> Installing (4 of 6) kde-apps/kdeutils-meta-20.12.2::gentoo
>>> Installing (3 of 6) net-p2p/ktorrent-20.12.2::gentoo
>>> Emerging (5 of 6) kde-apps/kdenetwork-meta-20.12.2::gentoo
>>> Installing (5 of 6) kde-apps/kdenetwork-meta-20.12.2::gentoo
>>> Installing (1 of 6) kde-apps/marble-20.12.2::gentoo
>>> Emerging (6 of 6) kde-apps/kdeedu-meta-20.12.2::gentoo
>>> Installing (6 of 6) kde-apps/kdeedu-meta-20.12.2::gentoo
>>> Jobs: 6 of 6 complete                           Load avg: 1.93, 3.62, 3.86
>>> Auto-cleaning packages...

>>> No outdated packages were found on your system.

 * GNU info directory index is up-to-date.
 * After world updates, it is important to remove obsolete packages with
 * emerge --depclean. Refer to `man emerge` for more information.

5. I uninstalled packages that were no longer required by any other packages and also not required by me (I do not use the Falkon browser, Telepathy and KAlgebra, to give a few examples, and so did not mind various specific packages being removed):

root # emerge --ask --depclean

 * Always study the list of packages to be cleaned for any obvious
 * mistakes. Packages that are part of the world set will always
 * be kept.  They can be manually added to this set with
 * `emerge --noreplace <atom>`.  Packages that are listed in
 * package.provided (see portage(5)) will be removed by
 * depclean, even if they are part of the world set.
 * 
 * As a safety measure, depclean will not remove any packages
 * unless *all* required dependencies have been resolved.  As a
 * consequence of this, it often becomes necessary to run 
 * `emerge --update --newuse --deep @world` prior to depclean.

Calculating dependencies... done!
>>> Calculating removal order...

>>> These are the packages that would be unmerged:                                                                                                                                                                                                

 kde-apps/parley
    selected: 20.12.2 
   protected: none 
     omitted: none 

 www-client/falkon
    selected: 3.1.0-r1 
   protected: none 
     omitted: none 

 kde-apps/kimagemapeditor
    selected: 20.12.2 
   protected: none 
     omitted: none 

 kde-apps/plasma-telepathy-meta
    selected: 20.12.2 
   protected: none 
     omitted: none 

 kde-apps/kalgebra
    selected: 20.12.2 
   protected: none 
     omitted: none 

 kde-apps/ktp-kded-module
    selected: 20.12.2 
   protected: none 
     omitted: none 

 kde-apps/ktp-desktop-applets
    selected: 20.12.2 
   protected: none 
     omitted: none 

 kde-apps/ktp-accounts-kcm
    selected: 20.12.2 
   protected: none 
     omitted: none 

 kde-apps/ktp-send-file
    selected: 20.12.2 
   protected: none 
     omitted: none 

 kde-apps/ktp-approver
    selected: 20.12.2 
   protected: none 
     omitted: none 

 kde-apps/ktp-auth-handler
    selected: 20.12.2 
   protected: none 
     omitted: none 

 kde-apps/ktp-contact-runner
    selected: 20.12.2 
   protected: none 
     omitted: none 

 kde-apps/ktp-text-ui
    selected: 20.12.2 
   protected: none 
     omitted: none 

 kde-apps/signon-kwallet-extension
    selected: 20.12.2 
   protected: none 
     omitted: none 

 net-im/telepathy-connection-managers
    selected: 2-r2 
   protected: none 
     omitted: none 

 kde-apps/ktp-filetransfer-handler
    selected: 20.12.2 
   protected: none 
     omitted: none 

 kde-apps/ktp-contact-list
    selected: 20.12.2 
   protected: none 
     omitted: none 

 net-irc/telepathy-idle
    selected: 0.2.0-r3 
   protected: none 
     omitted: none 

 net-voip/telepathy-salut
    selected: 0.8.1-r3 
   protected: none 
     omitted: none 

 net-voip/telepathy-gabble
    selected: 0.18.4-r2 
   protected: none 
     omitted: none 

 kde-apps/ktp-common-internals
    selected: 20.12.2 
   protected: none 
     omitted: none 

 net-libs/telepathy-accounts-signon
    selected: 2.1 
   protected: none 
     omitted: none 

 net-libs/libnice
    selected: 0.1.15 
   protected: none 
     omitted: none 

 net-libs/telepathy-logger-qt
    selected: 17.09.0 
   protected: none 
     omitted: none 

 net-im/telepathy-logger
    selected: 0.8.2-r1 
   protected: none 
     omitted: none 

 net-libs/gupnp-igd
    selected: 0.2.5-r10 
   protected: none 
     omitted: none 

 net-libs/libsignon-glib
    selected: 2.1 
   protected: none 
     omitted: none 

 net-libs/telepathy-qt
    selected: 0.9.8 
   protected: none 
     omitted: none 

 net-libs/gupnp
    selected: 1.2.4 
   protected: none 
     omitted: none 

 net-libs/gssdp
    selected: 1.2.3 
   protected: none 
     omitted: none 

 net-libs/libsoup
    selected: 2.70.0 
   protected: none 
     omitted: none 

 net-libs/libpsl
    selected: 0.21.1 
   protected: none 
     omitted: none 

 net-libs/glib-networking
    selected: 2.66.0 
   protected: none 
     omitted: none 

 net-im/telepathy-mission-control
    selected: 5.16.5 
   protected: none 
     omitted: none 

 net-libs/telepathy-glib
    selected: 0.24.1-r1 
   protected: none 
     omitted: none 

All selected packages: =kde-apps/ktp-desktop-applets-20.12.2 =kde-apps/ktp-contact-runner-20.12.2 =kde-apps/ktp-contact-list-20.12.2 =net-libs/telepathy-accounts-signon-2.1 =net-libs/telepathy-glib-0.24.1-r1 =net-voip/telepathy-salut-0.8.1-r3 =kde-apps/ktp-text-ui-20.12.2 =net-libs/libsignon-glib-2.1 =net-im/telepathy-connection-managers-2-r2 =kde-apps/ktp-accounts-kcm-20.12.2 =kde-apps/kimagemapeditor-20.12.2 =kde-apps/ktp-common-internals-20.12.2 =kde-apps/parley-20.12.2 =net-libs/libnice-0.1.15 =net-libs/libsoup-2.70.0 =kde-apps/ktp-auth-handler-20.12.2 =net-libs/gssdp-1.2.3 =net-irc/telepathy-idle-0.2.0-r3 =net-libs/libpsl-0.21.1 =kde-apps/kalgebra-20.12.2 =net-libs/gupnp-igd-0.2.5-r10 =kde-apps/ktp-filetransfer-handler-20.12.2 =kde-apps/ktp-send-file-20.12.2 =net-libs/gupnp-1.2.4 =kde-apps/ktp-kded-module-20.12.2 =net-im/telepathy-mission-control-5.16.5 =kde-apps/plasma-telepathy-meta-20.12.2 =net-voip/telepathy-gabble-0.18.4-r2 =net-im/telepathy-logger-0.8.2-r1 =kde-apps/signon-kwallet-extension-20.12.2 =net-libs/telepathy-logger-qt-17.09.0 =net-libs/telepathy-qt-0.9.8 =net-libs/glib-networking-2.66.0 =kde-apps/ktp-approver-20.12.2 =www-client/falkon-3.1.0-r1

>>> 'Selected' packages are slated for removal.
>>> 'Protected' and 'omitted' packages will not be removed.

Would you like to unmerge these packages? [Yes/No] Yes 
>>> Waiting 5 seconds before starting...
>>> (Control-C to abort)...
>>> Unmerging in: 5 4 3 2 1
>>> Unmerging (1 of 35) kde-apps/parley-20.12.2...
>>> Unmerging (2 of 35) www-client/falkon-3.1.0-r1...
>>> Unmerging (3 of 35) kde-apps/kimagemapeditor-20.12.2...
>>> Unmerging (4 of 35) kde-apps/plasma-telepathy-meta-20.12.2...
>>> Unmerging (5 of 35) kde-apps/kalgebra-20.12.2...
>>> Unmerging (6 of 35) kde-apps/ktp-kded-module-20.12.2...
>>> Unmerging (7 of 35) kde-apps/ktp-desktop-applets-20.12.2...
>>> Unmerging (8 of 35) kde-apps/ktp-accounts-kcm-20.12.2...
>>> Unmerging (9 of 35) kde-apps/ktp-send-file-20.12.2...
>>> Unmerging (10 of 35) kde-apps/ktp-approver-20.12.2...
>>> Unmerging (11 of 35) kde-apps/ktp-auth-handler-20.12.2...
>>> Unmerging (12 of 35) kde-apps/ktp-contact-runner-20.12.2...
>>> Unmerging (13 of 35) kde-apps/ktp-text-ui-20.12.2...
>>> Unmerging (14 of 35) kde-apps/signon-kwallet-extension-20.12.2...
>>> Unmerging (15 of 35) net-im/telepathy-connection-managers-2-r2...
>>> Unmerging (16 of 35) kde-apps/ktp-filetransfer-handler-20.12.2...
>>> Unmerging (17 of 35) kde-apps/ktp-contact-list-20.12.2...
>>> Unmerging (18 of 35) net-irc/telepathy-idle-0.2.0-r3...
>>> Unmerging (19 of 35) net-voip/telepathy-salut-0.8.1-r3...
>>> Unmerging (20 of 35) net-voip/telepathy-gabble-0.18.4-r2...
>>> Unmerging (21 of 35) kde-apps/ktp-common-internals-20.12.2...
>>> Unmerging (22 of 35) net-libs/telepathy-accounts-signon-2.1...
>>> Unmerging (23 of 35) net-libs/libnice-0.1.15...
>>> Unmerging (24 of 35) net-libs/telepathy-logger-qt-17.09.0...
>>> Unmerging (25 of 35) net-im/telepathy-logger-0.8.2-r1...
>>> Unmerging (26 of 35) net-libs/gupnp-igd-0.2.5-r10...
>>> Unmerging (27 of 35) net-libs/libsignon-glib-2.1...
>>> Unmerging (28 of 35) net-libs/telepathy-qt-0.9.8...
>>> Unmerging (29 of 35) net-libs/gupnp-1.2.4...
>>> Unmerging (30 of 35) net-libs/gssdp-1.2.3...
>>> Unmerging (31 of 35) net-libs/libsoup-2.70.0...
>>> Unmerging (32 of 35) net-libs/libpsl-0.21.1...
>>> Unmerging (33 of 35) net-libs/glib-networking-2.66.0...
>>> Unmerging (34 of 35) net-im/telepathy-mission-control-5.16.5...
>>> Unmerging (35 of 35) net-libs/telepathy-glib-0.24.1-r1...
Packages installed:   1651
Packages in world:    329
Packages in system:   43
Required packages:    1651
Number removed:       35

 * GNU info directory index is up-to-date.

Notice that the package qtwebengine had not been removed, so something still depended on it.
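
(I did not record the output, but a quick way to confirm that qtwebengine was indeed still installed would be something like the following.)

root # eix -I dev-qt/qtwebengine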

6. I checked if there were any packages still installed with a dependency on qtwebengine:

root # equery depends qtwebengine
 * These packages depend on qtwebengine:
kde-apps/kaccounts-providers-20.12.2 (>=dev-qt/qtwebengine-5.15.2:5)
kde-apps/kdenlive-20.12.2 (webengine ? >=dev-qt/qtwebengine-5.15.2:5)
kde-apps/marble-20.12.2 (webengine ? >=dev-qt/qtwebengine-5.15.2:5[widgets])
kde-plasma/kdeplasma-addons-5.21.1 (webengine ? >=dev-qt/qtwebengine-5.15.2:5)
kde-plasma/libksysguard-5.21.1 (webengine ? >=dev-qt/qtwebengine-5.15.2:5)
net-libs/signon-ui-0.15_p20171022-r1 (dev-qt/qtwebengine:5)
net-p2p/ktorrent-20.12.2 (rss ? >=dev-qt/qtwebengine-5.15.2:5)
                         (webengine ? >=dev-qt/qtwebengine-5.15.2:5)

As can be seen from the above output, the only remaining installed packages that ‘hard-depended’ on qtwebengine (i.e. depended on it unconditionally, rather than only when the ‘webengine‘ USE flag is set) were kde-apps/kaccounts-providers-20.12.2 and net-libs/signon-ui-0.15_p20171022-r1.

Additionally, the package net-p2p/ktorrent-20.12.2 still depended on qtwebengine because the rss USE flag was enabled. So I added the line ‘net-p2p/ktorrent -rss‘ to the file /etc/portage/package.use/package.use and re-merged net-p2p/ktorrent. Actually, I re-merged the following packages just in case they needed to be rebuilt, although in retrospect I believe that was unnecessary:

     Fri Mar  5 05:37:26 2021 >>> kde-apps/kdecore-meta-20.12.2
     Fri Mar  5 05:37:55 2021 >>> kde-apps/kdeutils-meta-20.12.2
     Fri Mar  5 05:45:49 2021 >>> net-p2p/ktorrent-20.12.2
     Fri Mar  5 05:46:49 2021 >>> kde-apps/kdenetwork-meta-20.12.2
     Fri Mar  5 05:57:41 2021 >>> kde-apps/marble-20.12.2
     Fri Mar  5 05:58:15 2021 >>> kde-apps/kdeedu-meta-20.12.2
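
The package.use change and the re-merge in this step amount to something like the following (a sketch; the exact emerge invocation I used is not recorded above):

root # echo "net-p2p/ktorrent -rss" >> /etc/portage/package.use/package.use
root # emerge -1v net-p2p/ktorrent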

7. By now another day had dawned, so I checked if new versions of the ebuilds for any KDE packages had been uploaded to the Portage repositories:

root # emaint sync -a
root # eix-update && updatedb

8. I rebooted the laptop and checked which packages still depended on qtwebengine. It turned out that only the two packages with a hard-dependency on qtwebengine were still preventing me from removing it:

root # equery depends qtwebengine
 * These packages depend on qtwebengine:
kde-apps/kaccounts-providers-20.12.2 (>=dev-qt/qtwebengine-5.15.2:5)
net-libs/signon-ui-0.15_p20171022-r1 (dev-qt/qtwebengine:5)

9. I checked if any packages depended on those two packages:

root # equery depends kaccounts-providers
 * These packages depend on kaccounts-providers:
kde-misc/kio-gdrive-20.12.2 (>=kde-apps/kaccounts-providers-20.12.2:5)
root # equery depends kio-gdrive
 * These packages depend on kio-gdrive:
kde-apps/kdenetwork-meta-20.12.2 (>=kde-misc/kio-gdrive-20.12.2:5)
root # equery depends signon-ui
 * These packages depend on signon-ui:
kde-apps/kaccounts-providers-20.12.2 (net-libs/signon-ui)

So kdenetwork-meta hard-depends on kio-gdrive, which does not make much sense, really, given that not all KDE users have a Google Drive account and those users therefore do not need the kio-gdrive package to be installed.

10. The contents of the kdenetwork-meta-20.12.3 ebuild look like this:

root # cat /usr/portage/kde-apps/kdenetwork-meta/kdenetwork-meta-20.12.3.ebuild
# Copyright 1999-2021 Gentoo Authors
# Distributed under the terms of the GNU General Public License v2

EAPI=7

DESCRIPTION="kdenetwork - merge this to pull in all kdenetwork-derived packages"
HOMEPAGE="https://kde.org/"

LICENSE="metapackage"
SLOT="5"
KEYWORDS="~amd64 ~arm64 ~ppc64 ~x86"
IUSE="+bittorrent dropbox +webengine"

RDEPEND="
        >=kde-apps/kdenetwork-filesharing-${PV}:${SLOT}
        >=kde-apps/kget-${PV}:${SLOT}
        >=kde-apps/kopete-${PV}:${SLOT}
        >=kde-apps/krdc-${PV}:${SLOT}
        >=kde-apps/krfb-${PV}:${SLOT}
        >=kde-apps/zeroconf-ioslave-${PV}:${SLOT}
        >=kde-misc/kdeconnect-${PV}:${SLOT}
        >=kde-misc/kio-gdrive-${PV}:${SLOT}
        >=net-irc/konversation-${PV}:${SLOT}
        bittorrent? (
                >=net-libs/libktorrent-${PV}:${SLOT}
                >=net-p2p/ktorrent-${PV}:${SLOT}
        )
        dropbox? ( >=kde-apps/dolphin-plugins-dropbox-${PV}:${SLOT} )
"

so I created an ebuild for kdenetwork-meta-20.12.3 in my local overlay with the dependency on kio-gdrive removed:

root # mkdir -p /usr/local/portage/kde-apps/kdenetwork-meta
root # cd /usr/local/portage/kde-apps/kdenetwork-meta
root # cp /usr/portage/kde-apps/kdenetwork-meta/kdenetwork-meta-20.12.3.ebuild .
root # nano kdenetwork-meta-20.12.3.ebuild # Delete the line containing ">=kde-misc/kio-gdrive-${PV}:${SLOT}"
root # ebuild kdenetwork-meta-20.12.3.ebuild manifest
>>> Creating Manifest for /usr/local/portage/kde-apps/kdenetwork-meta
root # eix-update && updatedb
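
(My local overlay already existed and was already known to Portage. For anyone who does not yet have a local overlay, it would also need to be declared, roughly as follows; this is a sketch of the standard layout, using the repository name local_overlay that appears in the eix output further below.)

root # mkdir -p /usr/local/portage/{metadata,profiles}
root # echo 'local_overlay' > /usr/local/portage/profiles/repo_name
root # echo 'masters = gentoo' > /usr/local/portage/metadata/layout.conf
root # cat /etc/portage/repos.conf/local_overlay.conf
[local_overlay]
location = /usr/local/portage
masters = gentoo
auto-sync = no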

11. I re-merged the world set in order to update all KDE packages that now had a newer ebuild version:

root # emerge -uvDN @world

12. I rechecked the reverse dependencies of the three packages in the chain that had been pulling in qtwebengine (signon-ui, kaccounts-providers and kio-gdrive):

root # equery depends signon-ui
 * These packages depend on signon-ui:
kde-apps/kaccounts-providers-20.12.3 (net-libs/signon-ui)
root # equery depends kaccounts-providers
 * These packages depend on kaccounts-providers:
kde-misc/kio-gdrive-20.12.3 (kaccounts ? >=kde-apps/kaccounts-providers-20.08.3:5)
root # equery depends kio-gdrive
 * These packages depend on kio-gdrive:
root #

As can be seen above, my modified ebuild for kdenetwork-meta-20.12.3 had indeed removed the impediment to uninstalling kio-gdrive and therefore the impediment to uninstalling kaccounts-providers and signon-ui.

13. I merged my modified version of kdenetwork-meta-20.12.3:

Up to this point kde-apps/kdenetwork-meta-20.12.3 had been merged from the main Portage tree:

root # eix -I kde-apps/kdenetwork-meta
[I] kde-apps/kdenetwork-meta
     Available versions:  (5) 20.08.3-r1 (~)20.12.3 (~)20.12.3[1]
       {+bittorrent dropbox +webengine}
     Installed versions:  20.12.3(5)(15:23:08 05/03/21)(bittorrent -dropbox -webengine)
     Homepage:            https://kde.org/
     Description:         kdenetwork - merge this to pull in all kdenetwork-derived packages

[1] "local_overlay" /usr/local/portage

I then merged the version from my local overlay:

root # emerge -1v kdenetwork-meta::local_overlay

These are the packages that would be merged, in order:

Calculating dependencies... done!
[ebuild   R    ] kde-apps/kdenetwork-meta-20.12.3:5::local_overlay [20.12.3:5::gentoo] USE="bittorrent -dropbox -webengine" 0 KiB

Total: 1 package (1 reinstall), Size of downloads: 0 KiB

>>> Verifying ebuild manifests
>>> Emerging (1 of 1) kde-apps/kdenetwork-meta-20.12.3::local_overlay
>>> Installing (1 of 1) kde-apps/kdenetwork-meta-20.12.3::local_overlay
>>> Jobs: 1 of 1 complete                           Load avg: 1.76, 0.88, 0.61
>>> Auto-cleaning packages...

>>> No outdated packages were found on your system.

 * GNU info directory index is up-to-date.
root # eix -I kde-apps/kdenetwork-meta
[I] kde-apps/kdenetwork-meta
     Available versions:  (5) 20.08.3-r1 (~)20.12.3 (~)20.12.3[1]
       {+bittorrent dropbox +webengine}
     Installed versions:  20.12.3(5)[1](16:40:43 05/03/21)(bittorrent -dropbox -webengine)
     Homepage:            https://kde.org/
     Description:         kdenetwork - merge this to pull in all kdenetwork-derived packages

[1] "local_overlay" /usr/local/portage

14. I checked which packages still depended on qtwebengine:

root # equery depends qtwebengine
 * These packages depend on qtwebengine:
kde-apps/kaccounts-providers-20.12.3 (>=dev-qt/qtwebengine-5.15.2:5)
kde-apps/kdenlive-20.12.3 (webengine ? >=dev-qt/qtwebengine-5.15.2:5)
kde-apps/marble-20.12.3 (webengine ? >=dev-qt/qtwebengine-5.15.2:5[widgets])
kde-plasma/kdeplasma-addons-5.21.2 (webengine ? >=dev-qt/qtwebengine-5.15.2:5)
kde-plasma/libksysguard-5.21.2 (webengine ? >=dev-qt/qtwebengine-5.15.2:5)
net-libs/signon-ui-0.15_p20171022-r1 (dev-qt/qtwebengine:5)
net-p2p/ktorrent-20.12.3 (rss ? >=dev-qt/qtwebengine-5.15.2:5)
                         (webengine ? >=dev-qt/qtwebengine-5.15.2:5)

Eureka! kdenetwork-meta no longer pulls in qtwebengine indirectly (via kio-gdrive and kaccounts-providers).

15. I was then able to remove qtwebengine and the remaining packages that hard-depend on it:

root # emerge --ask --depclean qtwebengine kaccounts-providers signon-ui kio-gdrive

Calculating dependencies... done!
>>> Calculating removal order...

>>> These are the packages that would be unmerged:                                                                                                                                                                                                

 kde-misc/kio-gdrive
    selected: 20.12.3 
   protected: none 
     omitted: none 

 kde-apps/kaccounts-providers
    selected: 20.12.3 
   protected: none 
     omitted: none 

 net-libs/signon-ui
    selected: 0.15_p20171022-r1 
   protected: none 
     omitted: none 

 dev-qt/qtwebengine
    selected: 5.15.2_p20210224 
   protected: none 
     omitted: none 

All selected packages: =dev-qt/qtwebengine-5.15.2_p20210224 =kde-apps/kaccounts-providers-20.12.3 =kde-misc/kio-gdrive-20.12.3 =net-libs/signon-ui-0.15_p20171022-r1

>>> 'Selected' packages are slated for removal.
>>> 'Protected' and 'omitted' packages will not be removed.

Would you like to unmerge these packages? [Yes/No] Yes
>>> Waiting 5 seconds before starting...
>>> (Control-C to abort)...
>>> Unmerging in: 5 4 3 2 1
>>> Unmerging (1 of 4) kde-misc/kio-gdrive-20.12.3...
>>> Unmerging (2 of 4) kde-apps/kaccounts-providers-20.12.3...
>>> Unmerging (3 of 4) net-libs/signon-ui-0.15_p20171022-r1...
>>> Unmerging (4 of 4) dev-qt/qtwebengine-5.15.2_p20210224...
Packages installed:   1648
Packages in world:    329
Packages in system:   43
Required packages:    1648
Number removed:       4

 * GNU info directory index is up-to-date.

\o/ \o/ \o/ \o/ No more qtwebengine in Gentoo Linux Testing (~amd64) running KDE.

Of course this was only possible because I do not need the specific packages that had been uninstalled during this entire procedure. Other people may not be in the same position.

16. I added the following lines to the file /etc/portage/package.mask/package.mask so that the packages are not pulled in automatically when merging the world set in future:

dev-qt/qtwebengine
kde-apps/kdenetwork-meta::gentoo
kde-misc/kio-gdrive
kde-apps/kaccounts-providers
net-libs/signon-ui
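
After adding those masks, a pretended world update can be used to check that nothing would pull the masked packages back in, for example:

root # emerge -puvDN @world | grep -E 'qtwebengine|kio-gdrive|kaccounts-providers|signon-ui'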

17. In future I will have to modify new versions of the kdenetwork-meta ebuild and add them to my local overlay. Furthermore, if other packages become dependent on qtwebengine in future and I do not require them, I will have to repeat the above steps in order to remove them (if viable). I just hope I can keep the qtwebengine package from ever being installed again.