WebRTC – A viable alternative to Skype

Skype for Linux 4.3 and upwards requires the use of PulseAudio, which has caused discontent amongst those Linux users who do not use PulseAudio. Although I do use PulseAudio, I recently found out about WebRTC, an API (application programming interface) for browser-based communication offering most of the functions provided by Skype, namely: voice calling, video chat, text chat, file sharing and screen sharing. The official WebRTC site states:

WebRTC is a free, open project that enables web browsers with Real-Time Communications (RTC) capabilities via simple JavaScript APIs. The WebRTC components have been optimized to best serve this purpose.

Our mission: To enable rich, high quality, RTC applications to be developed in the browser via simple JavaScript APIs and HTML5.

WebRTC was originally released by Google but is now a draft standard of the World Wide Web Consortium, and is supported by the Chrome, Firefox and Opera browsers. Several commercial Web sites offer WebRTC-based communications to fee-paying customers, but I thought I would try WebRTC by using one of the so-called ‘demo’ WebRTC pages. AppRTC is a WebRTC demo page which can be reached from a link on the official WebRTC site, but I prefer the Multi-Party WebRTC Demo by TokBox, which offers a more polished experience with better features. Both are free to use and are viable Skype substitutes for video chatting (one-to-one or conference).

So, how do you actually use WebRTC-based sites? Below is a quick guide to get you going.

Text and video chatting

Open the following URL in Chrome or Firefox:

https://opentokrtc.com/

Enter a Room Name that is likely to be unique. I used ‘fitzchat’ (without the quotes), but you can use any name you want.

The other party or parties can do the same thing, i.e. they enter the same Room Name as you, and you will all become connected.

Alternatively, to send an e-mail invitation to someone, click on the partially visible pane at the right-hand side of the browser window so that it slides into full view, then click on the URL at the top of the pane (which is Invite: https://opentokrtc.com/fitzchat in this example, as I chose to name the Room ‘fitzchat’).

That’s all there is to it. You should see a video window showing each party, and they should see the same. Each party should also be able to hear the other parties. In the top right-hand corner of each video window is an icon (microphone for you; speaker for each of the other parties) which you can click on to mute/un-mute that party.

Click on the partially visible pane at the right-hand side of the browser window. Notice the ‘chat bar’ at the bottom where you enter commands and chat text. Read the grey instructions listed near the top of the pane:

Welcome to OpenTokRTC by TokBox
Type /nick your_name to change your name
Type /list to see list of users in the room
Type /help to see a list of commands
Type /hide to hide chat bar
Type /focus to lead the group
Type /unfocus to put everybody on equal standing

For example, to give myself a meaningful name instead of the default username Guest-0120e48c which was given to me automatically, I entered the following:

/nick Fitz

Screen sharing

I found that screen sharing already works well in Chrome 36.0.1985.125 but is not yet supported in Firefox 31.0. It will apparently be supported in Firefox 32 or 33, or you can use Firefox Nightly already, provided you add the appropriate preferences via about:config.

To be able to share screens in Chrome, I had to perform two steps: enable a Chrome flag and install a Chrome extension. The two steps, which do not need to be repeated, are given below (see Ref. 1).

To enable screen sharing in Chrome, do the following:

  1. Open a new tab or window in Chrome.
  2. Copy the following link: chrome://flags/#enable-usermedia-screen-capture and paste it in the location bar.
  3. Click on the ‘Enable’ link below ‘Enable screen capture support in getUserMedia().’ at the very top of the screen.
  4. Click on the ‘Relaunch Now’ button at the bottom of the page to restart Chrome.

To install the screen sharing extension in Chrome, do the following:

  1. Launch Chrome and click on the Menu icon.
  2. Click on ‘Settings’.
  3. Click on ‘Extensions’.
  4. Click on ‘Get more extensions’ and search for ‘webrtc’.
  5. Download ‘WebRTC Desktop Sharing’.
  6. This places an icon to the right of the URL bar in Chrome.

To share your screen or just a window, do the following in Chrome:

  1. Click on the ‘Share Desktop’ icon to the right of the URL bar and select either ‘Screen’ or the window you wish to share.
  2. Click ‘Share’.
  3. When sharing has started in a new Chrome window, select the URL of the relevant tab in that window and send it to the other parties via the chat pane on the right-hand side of the first browser window.

To stop sharing, click on ‘Stop sharing’, then click on the icon to the right of the URL bar so that it changes from the || (Pause) icon back to the ‘Share Desktop’ icon.

File sharing

I did not bother to try file sharing using WebRTC, but there are various Web sites you can use to do that. One such is ShareDrop, and googling will find others.

Caveats

Chrome 36.0.1985.125 and Firefox 31.0 were used in this trial (I did not try Opera). I found that video chat worked faultlessly when both parties were using Chrome, and when both parties were using Firefox. However, when one party was using Firefox and the other was using Chrome, I could not see myself in one of the video boxes, although I could still see the other party in the other video box. Furthermore, there was a grey bar across the middle of the video images in the AppRTC demo, whereas the Multi-Party WebRTC Demo video images were normal. Other than those two issues, the experience was smooth and straightforward. My recommendation would therefore be to use the Multi-Party WebRTC Demo and for all the parties to use the same browser, be it Chrome or Firefox. If you want to share your screen or a window, the logical choice at the moment would be Chrome.

References

1 LiveMinutes Blog – Beta Testers: How To Activate Screen Sharing!

Converting ape music files to mp3 in Linux

I had a file in the lossless ape (Monkey’s Audio) format and wanted to convert it to a .mp3 file so that I could play it on my portable mp3 player. As is usual in Linux, several alternative solutions exist, and I thought I’d try three of them for fun: shntool, ffmpeg and KDE’s Konvertible (a GUI front-end for ffmpeg).

I already had ffmpeg and Konvertible installed, but not shntool. So first I installed shntool and the Monkey’s Audio codecs it uses:

# emerge media-sound/mac
# emerge media-sound/shntool

Here are the details of these two installed packages:

# eix -I shntool
[I] media-sound/shntool
Available versions: 3.0.10-r1 {alac flac mac shorten sox wavpack}
Installed versions: 3.0.10-r1(08:11:30 19/12/12)(flac -alac -mac -shorten -sox -wavpack)
Homepage: http://www.etree.org/shnutils/shntool/
Description: A multi-purpose WAVE data processing and reporting utility

# eix -I media-sound/mac
[I] media-sound/mac
Available versions: 3.99.4.5.7-r1^m {mmx static-libs}
Installed versions: 3.99.4.5.7-r1^m(07:52:12 19/12/12)(mmx -static-libs)
Homepage: http://etree.org/shnutils/shntool/
Description: Monkey's Audio Codecs

Then I used the following command to convert the file My Band 1971 CoolSounds.ape to mp3:

$ shntool conv -i ape -o 'cust ext=mp3 lame - %f' My\ Band\ 1971\ CoolSounds.ape
Converting [My Band 1971 CoolSounds.ape] (59:15.39) --> [My Band 1971 CoolSounds.mp3] : 100% OK
$

The KDE utility Konvertible was also able to convert it. I double-clicked on the file My Band 1971 CoolSounds.ape in Dolphin to launch Konvertible, selected libmp3lame in the ‘Codec:’ drop-down picklist, 192.00 kbits/s in the ‘Bitrate:’ drop-down picklist, clicked on the folder icon and selected /home/fitzcarraldo as the destination directory, and finally clicked ‘Convert’.

The mp3 files created by shntool and Konvertible were of different sizes:

File created by Konvertible:

$ file My\ Band\ 1971\ CoolSounds.mp3
My Band 1971 CoolSounds.mp3: Audio file with ID3 version 2.4.0, contains: MPEG ADTS, layer III, v1, 192 kbps, 44.1 kHz, Stereo
$ ls -la My\ Band\ 1971\ CoolSounds.mp3
-rw-r--r-- 1 fitzcarraldo users 85334024 Dec 19 08:11 My Band 1971 CoolSounds.mp3
$

File created by shntool:

$ file My\ Band\ 1971\ CoolSounds.mp3
My Band 1971 CoolSounds.mp3: MPEG ADTS, layer III, v1, 128 kbps, 44.1 kHz, JntStereo
$ ls -la My\ Band\ 1971\ CoolSounds.mp3
-rw-r--r-- 1 fitzcarraldo users 56889259 Dec 19 08:29 My Band 1971 CoolSounds.mp3
$

So I added the bitrate to the shntool command:

$ shntool conv -i ape -o 'cust ext=mp3 lame -b 192 - %f' My\ Band\ 1971\ CoolSounds.ape
Converting [My Band 1971 CoolSounds.ape] (59:15.39) --> [My Band 1971 CoolSounds.mp3] : 100% OK
$

and this time the mp3 file created by shntool is comparable to the mp3 file created by Konvertible:

$ file My\ Band\ 1971\ CoolSounds.mp3
My Band 1971 CoolSounds.mp3: MPEG ADTS, layer III, v1, 192 kbps, 44.1 kHz, JntStereo
$ ls -la My\ Band\ 1971\ CoolSounds.mp3
-rw-r--r-- 1 fitzcarraldo users 85333889 Dec 19 08:56 My Band 1971 CoolSounds.mp3
$
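
If you have a whole directory of .ape files to convert, the same shntool command can be wrapped in a simple Bash loop (just a sketch, assuming all the files are in the current directory):

$ for f in *.ape; do shntool conv -i ape -o 'cust ext=mp3 lame -b 192 - %f' "$f"; done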

The ffmpeg command to do the same thing is:

$ ffmpeg -i My\ Band\ 1971\ CoolSounds.ape -ar 44100 -ab 192000 out.mp3
ffmpeg version 0.10.6 Copyright (c) 2000-2012 the FFmpeg developers
built on Nov 26 2012 07:06:40 with gcc 4.6.3
configuration: --prefix=/usr --libdir=/usr/lib64 --shlibdir=/usr/lib64 --mandir=/usr/share/man --enable-shared --cc=x86_64-pc-linux-gnu-gcc --cxx=x86_64-pc-linux-gnu-g++ --ar=x86_64-pc-linux-gnu-ar --optflags='-O2 -march=native -pipe' --extra-cflags='-O2 -march=native -pipe' --extra-cxxflags='-O2 -march=native -pipe' --disable-static --enable-gpl --enable-version3 --enable-postproc --enable-avfilter --disable-stripping --disable-debug --disable-doc --disable-vaapi --disable-vdpau --enable-runtime-cpudetect --enable-gnutls --enable-libmp3lame --enable-libvo-aacenc --enable-libtheora --enable-libvorbis --enable-libx264 --enable-libxvid --enable-libfaac --enable-nonfree --enable-libdc1394 --enable-openal --disable-indev=v4l --disable-indev=oss --enable-x11grab --enable-libpulse --disable-outdev=oss --enable-libfreetype --enable-pthreads --enable-libgsm --enable-libspeex --disable-amd3dnow --disable-amd3dnowext --disable-altivec --disable-avx --disable-mmx2 --disable-ssse3 --disable-vis --disable-neon --cpu=ho
libavutil 51. 35.100 / 51. 35.100
libavcodec 53. 61.100 / 53. 61.100
libavformat 53. 32.100 / 53. 32.100
libavdevice 53. 4.100 / 53. 4.100
libavfilter 2. 61.100 / 2. 61.100
libswscale 2. 1.100 / 2. 1.100
libswresample 0. 6.100 / 0. 6.100
libpostproc 52. 0.100 / 52. 0.100
Input #0, ape, from 'My Band 1971 CoolSounds.ape':
Metadata:
Album : CoolSounds
Title : C:\1\My Band 1971 CoolSounds
Comment : Exact Audio Copy
Duration: 00:59:15.47, start: 0.000000, bitrate: 829 kb/s
Stream #0:0: Audio: ape (APE / 0x20455041), 44100 Hz, stereo, s16
Output #0, mp3, to 'out.mp3':
Metadata:
TALB : CoolSounds
TIT2 : C:\1\My Band 1971 CoolSounds
Comment : Exact Audio Copy
TSSE : Lavf53.32.100
Stream #0:0: Audio: mp3, 44100 Hz, stereo, s16, 192 kb/s
Stream mapping:
Stream #0:0 -> #0:0 (ape -> libmp3lame)
Press [q] to stop, [?] for help
size= 83334kB time=00:59:15.55 bitrate= 192.0kbits/s
video:0kB audio:83333kB global headers:0kB muxing overhead 0.000892%
$

and, as you can see below, the resulting mp3 file is the same size as the mp3 file created using Konvertible (not surprising, since Konvertible is a GUI front-end for ffmpeg) and virtually the same as the mp3 file created by shntool.

$ file out.mp3
out.mp3: Audio file with ID3 version 2.4.0, contains: MPEG ADTS, layer III, v1, 192 kbps, 44.1 kHz, Stereo
$ ls -la out.mp3
-rw-r--r-- 1 fitzcarraldo users 85334024 Dec 20 18:14 out.mp3
$
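
Incidentally, more recent versions of ffmpeg deprecate -ab in favour of -b:a for specifying the audio bitrate, so with a newer ffmpeg the equivalent command would be something like:

$ ffmpeg -i My\ Band\ 1971\ CoolSounds.ape -ar 44100 -b:a 192k out.mp3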

So, there you have it: GUI or command line; take your pick!

Setting up a talking clock easily in Linux

There are several ways to set up a talking clock in Linux. One simple way, if you’re a KDE user, is to use the Analogue Clock widget. Once you have placed the Analogue Clock widget on your Desktop, click on the widget’s spanner icon and select the ‘General’ tab, which contains a ‘Text to Speech’ section with a ‘Speak time’ box where you can select how often you want the talking clock to speak the time. When you click ‘Apply’, an icon appears in the System Tray on the Panel: Jovie, the KDE Text-to-speech Manager. You can right-click on the Jovie icon and then click on ‘Configure’ to change the language, voice and so on.

Another alternative is to install the eSpeak text-to-speech synthesizer and use the GUI KAlarm utility to run the following command at any interval you like (every hour, every half hour, every 15 minutes or whatever you want):

date +%I:%M%p | espeak

When the command above is executed on the hour, the voice speaks the hour followed by “zero zero AM/PM”. For example, it says “seven zero zero PM” rather than “seven o’clock PM”. If you prefer the latter, you can modify the one-line command as follows:

if [ $(date +%M) != "00" ]; then date +%-H:%M%p%Z; else echo -n $(date +%-H); echo -n "oh clock "; date +%p; date +%Z; fi | espeak -ven+f6

Use the command date --help to find out the different parameters available for the date command. You can also play around with the last two characters in the above command (‘f6’) to get different voices, for example ‘m1’, ‘f4’ and so on.
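
To see which voices and variants are installed on your system, eSpeak can list them for you (the second form restricts the list to English voices):

$ espeak --voices
$ espeak --voices=en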

Using KAlarm’s GUI is less daunting for many people than setting up a cron job to run the command, although a cron job is yet another way of doing it. Also, by using KAlarm it is quick and easy to enable and disable the talking clock.
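
If you do fancy the cron route, an entry along the following lines in your crontab (crontab -e) would speak the time on the hour and half-hour. This is only a sketch: % has a special meaning in crontab entries and must be escaped, and a cron job may need extra environment variables set before it can reach your desktop session’s audio (e.g. when PulseAudio is in use).

# Speak the time on the hour and half-hour (% must be escaped in crontab entries)
0,30 * * * * date +\%I:\%M\%p | espeak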

An alternative to the above command would be to run one of the many Bash scripts found on the Web. One such is saytime, which uses the Festival text-to-speech engine, an alternative to eSpeak that you would also need to install. The guts of saytime is simply the command:

echo "Today is `date +%d` `date +%B` `date +%Y` and now the time is `date +%k` and `date +%M` minutes" | festival --tts

so you could use that command with KAlarm or a cronjob if you wanted. You can play around with the commands to get the time spoken the way you want.

eSpeak is also configurable; check out the Web site eSpeak text to speech. For example, the following is the time spoken in Portuguese instead of English:

date +%I:%M%p | espeak -vpt

or in English with a Scottish accent:

date +%I:%M%p | espeak -ven-sc

or in English with a Brummie accent:

date +%I:%M%p | espeak -ven-wm

or in Latin with a female voice:

date +%I:%M%p | espeak -vla+f4

Three guesses what this one does:

date +%I:%M%p | espeak -ven+whisper

You can have some fun exploring the options.

One small step for [a] man… revisited using Audacity

Earth view from Columbia

Audacity audio editor and recorder

On the 42nd anniversary of the Apollo 11 Moon landing, I look at a couple of ways that the FOSS application Audacity has been used to study that amazing event, and marvel at the sheer audacity (pardon the pun) of the Apollo programme.

Some of you may remember the 2006 audio analysis of Neil Armstrong’s famous words as he stepped onto the Lunar surface for the first time on 21 July 1969, 42 years ago tomorrow (the Lunar Module landed on 20 July). The analysis, which supposedly proved that Armstrong did say “That’s one small step for a man, one giant leap for mankind”, was performed using the Windows application GoldWave. You can read the following BBC article about the analysis: Armstrong ‘got Moon quote right’.

But Linux users can analyse the recording for themselves using, for example, Audacity. You might want to do it to celebrate the 42nd anniversary of that momentous occasion. You can download an MP3 file (a11a1091545-1101226.mp3) of the recording from the following NASA Web page: One Small Step.

If you haven’t already got Audacity installed, you can install it using your Linux distribution’s package manager.

In KDE, an Audacity icon subtitled Sound Editor will be installed under Kickoff > Applications > Multimedia. So launch Audacity, click on File > Open and open the MP3 file you downloaded from the NASA Web site. It’s quite a large file, so it will take a little while to load into Audacity. You can click on the Play button to listen to the whole file — which I recommend you do as it’s simply awe inspiring — but then you can zoom in on those famous words (notice the Zoom In and Zoom Out buttons in the top right corner, and the scroll left and right buttons at the bottom of the Audacity window?). If you want to select only the relevant section, then you can enter the Selection Start as 00 h 08 m 38.000 s and the Selection End as 00 h 08 m 46.500 s. Then when you click on the Play and Stop buttons Audacity will play only that section. Or perhaps you prefer to hear just “That’s one small step for (a) man”, in which case set the Selection End as 00 h 08 m 41.100 s. Notice the smaller Play-at-speed button and speed slider about mid-way across the top of the Audacity control panel? You can even slow down the playback speed if you want. Try it. Now zoom in to the range 08 m 39.750 s to 08 m 40.000 s.

Well, the NASA Web page I referred to above states:

At the time of the mission, the world heard Neil say “That’s one small step for man; one giant leap for mankind”. As Andrew Chaikin details in A Man on the Moon, after the mission, Neil said that he had intended to say ‘one small step for a man’ and believed that he had done so. However, he also agreed that the ‘a’ didn’t seem to be audible in the recordings. The important point is that the world had no problem understanding his meaning. However, over the decades, people interested in details of the mission – including your editor – have listened repeatedly to the recordings, without hearing any convincing evidence of the ‘a’. In 2006, with a great deal of attendant media attention, journalist/entrepreneur Peter Shann Ford claimed to have located the ‘a’ in the waveform of Neil’s transmission. Subsequently, more rigorous analyses of the transmission were undertaken by a number of people, including some with professional experience with audio waveforms and, most importantly, audio spectrograms. As of October 2006, none of these analyses support Ford’s conclusion. The transcription used above honors Neil’s intent.

What do you think? I’m not convinced he said the “a”.

Another twist to the tale is a dispute about the originator of the famous line itself: Apollo 11 Moon Landing: British scientist claims to have coined Neil Armstrong’s ‘one small step’ line.

While we’re at it, newspaper reports for the 21 and 22 July 1969 make fascinating reading. For example you can read on-line the UK Daily Telegraph pages about the Apollo 11 Moon landing here: Moon landings: How the Daily Telegraph reported on Apollo 11.

Also, I was fascinated to read about the Italian high school class that used Audacity to analyse the time delay between Mission Control’s and Armstrong’s replies — you can hear the delays in the MP3 file — and calculated accurately the distance between the Earth and the Moon: Echoes from the Moon. Now that is one science class those school students won’t forget. What a fantastic idea by the school teacher.

A wonderful demonstration of the laws of physics, albeit not on the Apollo 11 mission, was performed on the Moon by the Apollo 15 astronaut David Scott: he dropped a hammer and a feather simultaneously. For those of you who aren’t engineers or scientists, or who don’t remember your school physics classes, take a look at practice proving theory correct in a fun way: The Apollo 15 Hammer-Feather Drop.

Did you know that more than 300,000 people worked on the Apollo programme, and it cost between 20 and 25 billion US dollars (1969 US dollars, which would be much more today taking into account inflation between 1969 and 2011)? It also cost several lives.

As I look up at the Moon in awe, and recall watching on a black-and-white TV set in 1969 as Armstrong climbed down the ladder of the Eagle, I think the Apollo programme was one of Mankind’s most amazing technological achievements, and perhaps the most amazing of them all. To think that the Lunar Module was controlled by a computer with far less processing power and memory than the smartphone that I hold in my hand today is astounding. No wonder the Apollo astronauts came back to Earth changed men. After their mission, everything else must have paled into insignificance.

This article is a refreshed version of a post I made in 2009 in the Sabayon Linux Forums on the 40th anniversary of the Apollo 11 Moon landing. I used Audacity again recently, this time to reduce the loudness of an event sound for Mozilla Thunderbird, and I thought it would be nice to celebrate again both the Apollo 11 landing and the usefulness of Audacity, as well as the fun that can be had with it.

Nostalgia for those ALSA mixer channels that KMix and GNOME Volume Control used to have?

These days the GUI mixers KMix and GNOME Sound Preferences display PulseAudio devices and streams rather than ALSA mixer channels. For example, prior to its integration with PulseAudio, KMix typically displayed a mixer window that looked like the one below.

KMix with ALSA channels

whereas, today, a KMix window typically looks like the following:

KMix with PulseAudio channels

KMix 3.8 in KDE 4.6.1 does not provide separate speaker and headphone channels. You can alter the headphone and speaker volume by using PulseAudio Volume Control instead (see the picture below), but people are not as familiar with the PulseAudio GUI, and it is yet another step to perform.

PulseAudio Volume Control showing selection of Headphones channel

If you are like me, you probably end up using KMix (or GNOME Sound Preferences) but also launch ALSA Mixer in a Konsole/Terminal for fine-grained control of the underlying ALSA channels:

ALSA Mixer running in Konsole

This is more hassle, because you have to launch Konsole/Terminal, enter the command alsamixer and press F6 to select your sound card (alternatively, use the command alsamixer -c 0 if your sound card is Card 0). If you do not specify your sound card when you launch ALSA Mixer, the PulseAudio channels are displayed by default.

EDIT (January 28, 2012): With recent versions of ALSA Mixer I have found that I must specify the card in the alsamixer command (e.g. alsamixer -c 0) because the command alsamixer alone results in a Segmentation fault message.
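
If you are not sure which card number to use with the -c option, the following command lists the cards ALSA knows about, together with their index numbers:

$ cat /proc/asound/cards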

It would be handy to have an icon on the Panel or on the Desktop that you could use to launch ALSA Mixer. Well, you can. In fact, as there is also a GUI version of ALSA Mixer (albeit with slightly fewer features than its console equivalent), you can use that instead if you prefer. Below I explain a few of the possible ways you can display ALSA Mixer easily from within a desktop environment.

Change KMix from a PulseAudio mixer to an ALSA mixer

By default KMix displays PulseAudio channels instead of ALSA channels. However, if you want it to display the ALSA channels (as shown in the first picture above), quit KMix and enter the following command in a Konsole window or in KRunner:

export KMIX_PULSEAUDIO_DISABLE=1 && kmix

If you want to make this permanent, add KMIX_PULSEAUDIO_DISABLE=1 to the file /etc/conf.d/alsasound.
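
Appending that line as root is a one-liner (the file path is the one mentioned above; on other distributions you would need to find the equivalent configuration file, or export the variable from a start-up script of your own):

# echo 'KMIX_PULSEAUDIO_DISABLE=1' >> /etc/conf.d/alsasound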

Personally, though, I prefer not to do this as I want to control the PulseAudio channels via the KMix mixer. Try running two or more audio/video apps simultaneously and you’ll see what I mean – it’s useful! For example, I can control the volume of various applications separately (handy when you want to check something or are using Skype), as illustrated by the picture below:

KMix showing PulseAudio playback streams tab

and I run ALSA Mixer separately to tweak the underlying ALSA channels. Using Yakuake (or Guake in GNOME) is quite a good way to run ALSA Mixer in a console: it is quick and easy to pop up a window in which to launch ALSA Mixer, and the mixer is displayed in colour at nearly the full width of the desktop.

Launch ALSA Mixer GUI from an icon on the Panel

First, use your package manager to install the package alsamixergui. It’s a GUI equivalent of the console ALSA Mixer, but with slightly fewer options.

Once you install it, you should find ALSA Mixer GUI in your desktop environment menu (e.g. Kickoff > Applications > Multimedia > ALSA Mixer GUI). By default this will show the PulseAudio channels, so use the menu editor (e.g. right-click on Kickoff and select Menu Editor) to change the command to the following if your sound card is Card 0:

alsamixergui -c 0

Once you have done this, save the new menu entry, log out and log in again, and when you launch ALSA Mixer GUI from the menu a window similar to the following will pop up:

ALSA Mixer GUI

To make it even easier to launch ALSA Mixer GUI, just drag its icon from the menu to the Panel and it will be copied there. Simple as that.

Launch ALSA Mixer in a Konsole docked in the System Tray

You can do this using KDocker, which works in KDE, GNOME, Xfce and other desktop environments.

For KDE, create the following Desktop Configuration File Konsole-alsamixer.desktop (or whatever name you want) and put it in the directory ~/.kde4/Autostart/

[Desktop Entry]
Comment[en_GB]=Console (docked) running ALSA Mixer
Comment=Console (docked) running ALSA Mixer
Exec=kdocker konsole -e alsamixer -c 0
GenericName[en_GB]=Dock Konsole running ALSA Mixer in the System Tray
GenericName=Dock Konsole running ALSA Mixer in the System Tray
Icon=kmix
MimeType=
Name[en_GB]=Konsole (Docked)
Name=Konsole (Docked)
Path=
StartupNotify=true
Terminal=false
TerminalOptions=
Type=Application
X-DBUS-ServiceName=
X-DBUS-StartupType=
X-KDE-SubstituteUID=false
X-KDE-Username=

KDE System Tray showing Konsole docked using KDocker

Clicking on the docked Konsole icon in the System Tray will pop up a Konsole window with the familiar ALSA Mixer running in it, as shown in the fourth picture above. Clicking on the icon again will minimise the Konsole to the System Tray.

Launch ALSA Mixer in a Konsole from an icon on the Desktop

For KDE, create the following Desktop Configuration File Konsole-alsamixer.desktop (or whatever name you want) and put it in the directory ~/Desktop/

[Desktop Entry]
Comment[en_GB]=Console running ALSA Mixer
Comment=Console running ALSA Mixer
Exec=konsole -e alsamixer -c 0
GenericName[en_GB]=Konsole running ALSA Mixer
GenericName=Konsole running ALSA Mixer
Icon=kmix
MimeType=
Name[en_GB]=Konsole
Name=Konsole
Path=
StartupNotify=true
Terminal=false
TerminalOptions=
Type=Application
X-DBUS-ServiceName=
X-DBUS-StartupType=
X-KDE-SubstituteUID=false
X-KDE-Username=

You can change the icon displayed on the Desktop either by right-clicking on the icon on the Desktop and selecting Properties or by editing the file directly. For example, I specified Icon=/usr/share/icons/mono/scalable/apps/kmix.svgz which looks rather retro and I think suits the unsophisticated looks of ALSA Mixer.

Summary

I have not covered all the options for making it easy to display ALSA channels as well as PulseAudio channels, but hopefully one of the above methods will suit your needs or will spur you to experiment.
