Using the Tascam US-144MKII with Linux

Today I got a Tascam US-144MKII from a colleague because he couldn’t use it anymore with Mac OS X. Apparently this USB2.0 audio interface stopped working on El Capitan. Tascam claims they’re working on a driver, but so far that announcement seems to have generated nothing but bad publicity. So he gave it to me; maybe it would work on Linux.

Tascam US-144MKII

The first thing I did was plug it in. The snd_usb_122l module got loaded but that was about it. So much for plug and play. There are reports though that this interface should work, so when I got home I started digging a bit deeper. Apparently you have to disable the ehci_hcd driver, which is the USB2.0 controller driver, and force the US-144MKII to fall back to the uhci_hcd USB1.1 driver so that it operates in USB1.1 mode. This limits the capabilities of the device, but my goal for today was to get sound out of this interface, not to get the most out of it.

I quickly found out that forcing USB1.1 was probably not going to work on my trusty XPS13 because it only has USB3.0 ports. I can disable the ehci_hcd driver, but then the xhci_hcd USB3.0 driver takes over, and disabling that driver effectively disables all USB ports. So I grabbed an older notebook with USB2.0 ports and disabled the ehci_hcd driver by unbinding it, since it’s not compiled as a module. Unbinding a driver is done by writing the system ID of a device to the so-called unbind file of the driver that is bound to this device. In this case we’re interested in the system IDs of the devices that use the ehci_hcd driver, which can be found in /sys/bus/pci/drivers/ehci-pci/:

# ls /sys/bus/pci/drivers/ehci-pci/
0000:00:1a.7  bind  new_id  remove_id  uevent  unbind
# echo -n "0000:00:1a.7" > /sys/bus/pci/drivers/ehci-pci/unbind

This will unbind the ehci_hcd driver from the device with system ID 0000:00:1a.7, which in this case is a USB2.0 controller. When plugging in the USB interface it now got properly picked up by the system and I was greeted with an active green USB LED on the interface as proof.

$ cat /proc/asound/cards
 0 [Intel          ]: HDA-Intel - HDA Intel
                      HDA Intel at 0xf4800000 irq 46
 1 [US122L         ]: USB US-122L - TASCAM US-122L
                      TASCAM US-122L (644:8020 if 0 at 006/002
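Bear in mind that the unbind isn’t persistent: after a reboot the ehci_hcd driver will claim the controller again. Should you want to re-enable USB2.0 without rebooting, writing the same system ID to the driver’s bind file should do the trick:

# echo -n "0000:00:1a.7" > /sys/bus/pci/drivers/ehci-pci/bind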

So ALSA picked it up as a device but it doesn’t show up in the list of sound cards when issuing aplay -l. This is because you have to tell ALSA to talk to the device in a different way than to a normal audio interface. Normally an audio interface can be addressed with the hw plugin, the most low-level ALSA plugin, which does nothing more than talking to the driver, and this is what most applications use, including JACK. The US-144MKII works differently though: its snd_usb_122l driver has to be accessed through the usb_stream plugin, which is part of the libasound2-plugins package and allows you to set up a PCM device name that can be used with JACK, for instance. This can be done with the following .asoundrc file that you have to create in the root of your home directory:

pcm.us-144mkii {
        type usb_stream
        card "US122L"
}

ctl.us-144mkii {
        type hw
        card "US122L"
}

What we do here is create a PCM device called us-144mkii and couple it to the card name we got from cat /proc/asound/cards, which is US122L. Of course you can name the PCM device anything you want. Almost all other examples name it usb_stream, but that’s a bit confusing because that is the name of the plugin, and you’d rather have a name that bears some relation to the device you’re using. Also, practically all examples use card numbers. But who says the USB audio interface will always be card 0 or 1? It could just as well be 2, or 10 if you have 9 other audio interfaces. Other examples work around this by fixing the order in which numbers get assigned to the available audio interfaces through the index parameter of the snd_usb_122l driver. But why do that when ALSA also accepts the name of the card? This also makes things a lot easier to read: it’s now clear that we are coupling the PCM name us-144mkii to the card named US122L. And we avoid having to edit system-wide settings. The ctl stanza is not strictly necessary, but it prevents the following warning when starting JACK:

ALSA lib control.c:953:(snd_ctl_open_noupdate) Invalid CTL us-144mkii
control open "us-144mkii" (No such file or directory)
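Before firing up JACK you can check that the new PCM name actually resolves, for instance with speaker-test (just an example; any ALSA-capable player pointed at the us-144mkii device should do):

$ speaker-test -D us-144mkii -r 48000 -c 2 -t sine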

So with the .asoundrc in place you can try starting JACK:

$ jackd -P85 -t2000 -dalsa -r48000 -p512 -n2 -Cus-144mkii -Pus-144mkii
jackd 0.124.2
Copyright 2001-2009 Paul Davis, Stephane Letz, Jack O'Quinn, Torben Hohn and others.
jackd comes with ABSOLUTELY NO WARRANTY
This is free software, and you are welcome to redistribute it
under certain conditions; see the file COPYING for details

no message buffer overruns
JACK compiled with System V SHM support.
loading driver ..
apparent rate = 48000
creating alsa driver ... us-144mkii|us-144mkii|512|2|48000|0|0|nomon|swmeter|-|32bit
configuring for 48000Hz, period = 512 frames (10.7 ms), buffer = 2 periods
ALSA: final selected sample format for capture: 24bit little-endian in 3bytes format
ALSA: use 2 periods for capture
ALSA: final selected sample format for playback: 24bit little-endian in 3bytes format
ALSA: use 2 periods for playback

This translates to the following settings in QjackCtl:

QjackCtl Settings – Parameters
QjackCtl Settings – Advanced

Don’t expect miracles from this setup. You won’t be able to achieve super low latencies, but at least you can still use your Tascam US-144MKII instead of having to give it away to a colleague.


Using a Qtractor MIDI track for both MIDI and audio

Basically Qtractor only does either MIDI or audio: the MIDI tracks are for processing MIDI and the audio tracks for processing audio. But a MIDI track in Qtractor can also post-process the audio coming out of a synth plug-in with FX plug-ins, so it’s a bit more than just a MIDI track.

But what about plug-ins that do both audio and MIDI, like the LV2 version of the autotuner zita-at1? If you put it in an audio track it will happily autotune all the audio, but it won’t accept any incoming MIDI, so it can’t pitch only to the MIDI notes it is being fed. And there’s no way to get MIDI into a Qtractor audio track: there’s no MIDI insert plug-in, nor any possibility to somehow expose the MIDI IN ports of a plug-in in an audio track to JACK MIDI or ALSA.

But Qtractor does have a built-in Insert plug-in that can be fed audio from an audio bus, and since a Qtractor MIDI track does know how to handle audio, would it also know how to deal with such an insert? Well, yes it does, which allows you to use a plug-in like the LV2 version of zita-at1 inside a MIDI track.

Setting up buses and tracks

You will need at least one bus and two tracks (of course you can use different bus and track names):

  • AutoTuneMix bus, input only and 2 channels
  • AutoTune MIDI track with dedicated audio outputs (this will create an audio bus called AutoTune)
  • AutoTuneMix audio track with the AutoTuneMix as input bus

Alternatively you could skip the dedicated audio outputs and have the MIDI track output to the Master bus. This way you avoid the risk of introducing extra latency and the need to set up extra connections. You do lose the flexibility to do basic things with the outgoing audio, like panning or adjusting the gain, but you can work around that by using additional panning and/or gain plug-ins.

Once you’ve created the bus and the tracks insert the following plug-ins into the AutoTune MIDI track:

  • Insert
  • Any pre-processing effects plug-ins (like a compressor) – optional
  • LV2 version of zita-at1 autotuner
  • Any post-processing effects plug-ins (like a reverb) – optional

Insert them in this specific order; it is also possible to do the post-processing in the AutoTuneMix audio track instead. Now open the Properties window of the Insert plug-in and then open the Returns window. Connect the mic input of your audio device to the Insert/in ports as shown below.

Qtractor AutoTune Insert

Connect the AutoTune bus outputs to the AutoTuneMix inputs:

Qtractor Connections

Create a MIDI clip with notes to autotune

Create a MIDI clip with the notes you would like to get autotuned in the AutoTune MIDI track, put the play-head at the right position and press play. Incoming audio from the mic input of your audio device should now get autotuned to the MIDI notes you entered in the MIDI clip:

Qtractor Mixer with LV2 version of zita-at1 autotuner

As you can see both MIDI and audio go through the AT1 autotuner plug-in and the resulting audio is fed into the AutoTuneMix track, where you can do the rest of your post-processing if you wish.


MOD sysadmin

A sysadmin is also jumping in to help with the infrastructure – mailing lists and wiki – so we can make sure there are usable tools available to us, after all, a brand new community is to be born!!!

From the MOD Kickstarter page, update #29.

So yes, I’m an official member of the MOD team now! I’m going to help out maintaining the current infrastructure, and with the upcoming release of the MOD Duo some parts will have to be rebuilt or built up from scratch.

I will also be beta testing the MOD Duo; I’m eagerly awaiting its arrival in the mail. The MOD team has made some really good progress in getting the most out of the Allwinner A20 board they’re using. And the number of plugins that will be available for it is staggering. This won’t be just an ordinary modelling FX unit but a complete all-in-one musical box that can also be used as a synthesizer or a loop station. And it’s going to be a rock-solid unit: the team making this possible contains some big names from the Linux audio community. The MOD Duo beta testing unit I will be receiving has been with Fons before, for example; from what I understood he has done an in-depth analysis of the audio codec used on the MOD Duo board.

MOD Duo final revision

The Zynthian project

Recently I found out that I was not the only one trying to build a synth module out of a Raspberry Pi with ZynAddSubFX. The Zynthian project is trying to achieve the exact same goal and so far it looks very promising. I contacted the project owner to ask if he would be interested in collaborating, got a prompt reply, and we both agreed it would be a good idea to join forces. The Zynthian project already has in place all the things I still had to set up, but I think I can still help out. The Zynthian set-up might benefit from some optimizations, like a real-time kernel, and things like boot time can be improved. Also I could help out with testing, and maybe do some packaging. And if there’s a need for things like a repository, a web server or other hosting related stuff, I could provide those.

Zynthian prototype

I’m very happy with these developments of our projects converging. Check out the Zynthian blog for more information on the current state of the project.


Switched to HTTPS

The title says it all. I bought an SSL certificate at Xolphin, installed it and then had to hunt down all the things that didn’t work anymore. I even managed to render the HTTP version of autostatic.com unreachable for about a day. But now everything seems to be working, so I activated a rewrite rule that redirects all HTTP traffic to HTTPS. There are some bits that might not work as expected (embedded videos not displaying, warnings about insecure content) but most of my blog should now be accessible in a secure way. If you encounter any issues, please let me know.


The certificate I’m using is a Comodo Positive SSL certificate with Domain Validation. So I’m getting a padlock but not the green address bar; for a green bar you need at least an Extended Validation certificate and that gets a bit expensive. I generated an Apache SSL configuration with the Mozilla SSL Configuration Generator but left out the OCSP stuff. I did add the HSTS (HTTP Strict Transport Security) header because it really helped with getting that desired A+ rating in the Qualys SSL Labs SSL Server Test.
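For reference, the relevant Apache bits boil down to something like the following sketch. It’s not my literal config: it assumes mod_rewrite and mod_headers are enabled, and the certificate paths are made up, so adjust to taste:

<VirtualHost *:80>
    ServerName autostatic.com
    # Redirect all HTTP traffic to HTTPS
    RewriteEngine On
    RewriteRule ^/?(.*) https://%{SERVER_NAME}/$1 [R=301,L]
</VirtualHost>

<VirtualHost *:443>
    ServerName autostatic.com
    SSLEngine on
    SSLCertificateFile /etc/ssl/certs/autostatic.com.crt
    SSLCertificateKeyFile /etc/ssl/private/autostatic.com.key
    # HSTS: tell browsers to use HTTPS for the next six months
    Header always set Strict-Transport-Security "max-age=15768000"
</VirtualHost>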


The hardest part

At least, for me. Now I have to code something that enables me to select banks and instruments on my synth module. I’ve settled on pyliblo to talk to ZynAddSubFX, but I’m just no coder. My elbow now rests on “Learning Python” from O’Reilly; I started reading it about a year ago but never got past chapter 3 or so. Time to persist, I’ve been wanting to be able to code a little bit for years now.

As a note to self, and maybe it’s helpful for others too, what follows are some of the relevant OSC messages to change banks and instruments in ZynAddSubFX.

Changing banks can be done with /loadbank. pyliblo comes with an example script send_osc.py and loading a bank with send_osc.py works as follows:

send_osc.py 7777 /loadbank ,i 3

7777 is the port ZynAddSubFX runs on. The /loadbank message wants an integer (,i) and ,i 3 loads the fourth bank (Choir and Voice), as ZynAddSubFX starts counting from 0. Bear in mind that this will only load the bank; it won’t change the instrument that is loaded. To load an instrument from a loaded bank the following send_osc.py incantation does the trick:

send_osc.py 7777 /setprogram ,c $'\x03'

So /setprogram loads an instrument from the active bank. It takes a character (,c) as an argument because the program numbers are in hex. But hex values consist of multiple characters, so you have to use an escape sequence to make it work (and I lost it there, so I could be completely wrong). The above command should load the fourth instrument from the Choir and Voice bank (Voice OOH), as that bank should have been loaded by the previous /loadbank message.

It is also possible to load instrument (.xiz) files. This can be done with the /load_xiz message:

send_osc.py 7777 /load_xiz 0 "/usr/share/zynaddsubfx/banks/Bass/0001-Bass 1.xiz"

This will load the Bass 1 instrument from the Bass bank into part 1 (remember that ZynAddSubFX starts counting from 0). So to load the Bass 2 instrument into part 2 you’d do the following:

send_osc.py 7777 /load_xiz 1 "/usr/share/zynaddsubfx/banks/Bass/0002-Bass 2.xiz"
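Since the eventual code will use pyliblo directly instead of shelling out to send_osc.py, the Python equivalents would look something like this (a sketch; I haven’t verified the ,c handling from Python):

#!/usr/bin/python
import liblo

# ZynAddSubFX listens for OSC messages on UDP port 7777
target = liblo.Address(7777)

# Load the fourth bank (Choir and Voice)
liblo.send(target, '/loadbank', 3)

# Load the fourth instrument from the active bank; /setprogram
# wants a character so we pass an explicit ('c', ...) pair
liblo.send(target, '/setprogram', ('c', chr(3)))

# Or load an instrument file directly into part 1
liblo.send(target, '/load_xiz', 0,
           '/usr/share/zynaddsubfx/banks/Bass/0001-Bass 1.xiz')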

So now I have to incorporate this stuff into Python code that gets called when I press the buttons on my LCD display. These are the mappings I’d like to accomplish (a rough sketch follows the list):

  • Up: toggle next bank and display first instrument from that bank
  • Down: toggle previous bank and display first instrument from that bank
  • Left: toggle previous instrument and display instrument
  • Right: toggle next instrument and display instrument
  • Select: select displayed instrument
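Something like the untested sketch below, assuming the Adafruit_CharLCD library my LCD board uses and the pyliblo calls from above. The bank/instrument bookkeeping is simplified and displaying the actual instrument names is exactly the part that still needs solving:

#!/usr/bin/python
import time

import liblo
import Adafruit_CharLCD as LCD

target = liblo.Address(7777)  # ZynAddSubFX OSC port
lcd = LCD.Adafruit_CharLCDPlate()

bank = 0
instrument = 0

while True:
    changed = True
    if lcd.is_pressed(LCD.UP):
        bank, instrument = bank + 1, 0
        liblo.send(target, '/loadbank', bank)
    elif lcd.is_pressed(LCD.DOWN):
        bank, instrument = max(0, bank - 1), 0
        liblo.send(target, '/loadbank', bank)
    elif lcd.is_pressed(LCD.RIGHT):
        instrument += 1
    elif lcd.is_pressed(LCD.LEFT):
        instrument = max(0, instrument - 1)
    elif lcd.is_pressed(LCD.SELECT):
        # /setprogram takes a character, see above
        liblo.send(target, '/setprogram', ('c', chr(instrument)))
    else:
        changed = False
    if changed:
        lcd.clear()
        lcd.message('Bank %d\nInstrument %d' % (bank, instrument))
        time.sleep(0.2)  # crude debounce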

That shouldn’t be too hard, right? Well, first hurdle: pyliblo can only send or dump OSC messages, it doesn’t seem to be able to handle return messages from the OSC server (ZynAddSubFX in this case). To be continued…

Edit: of course just sending messages with pyliblo won’t handle any return messages, you need a receiving part as well. But maybe I should take a look at using MIDI instead, with mididings for instance. Thanks Georg Mill for the tip!


Two steps further

For my little synth module project I created the following systemd unit file /etc/systemd/system/zynaddsubfx.service that starts a ZynAddSubFX process at boot time:

[Unit]
Description=ZynAddSubFX

[Service]
Type=forking
User=zynaddsubfx
Group=zynaddsubfx
ExecStart=/usr/local/bin/zynaddsubfx-mpk
ExecStop=/usr/bin/killall zynaddsubfx

[Install]
WantedBy=local-fs.target
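After a systemctl daemon-reload the service can be enabled so it comes up at boot:

# systemctl daemon-reload
# systemctl enable zynaddsubfx.service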

/usr/local/bin/zynaddsubfx-mpk is a simple script that starts ZynAddSubFX and connects my Akai MPK:

#!/bin/bash

zynaddsubfx -r 48000 -b 64 -I alsa -O alsa -P 7777 -L /usr/share/zynaddsubfx/banks/SynthPiano/0040-BinaryPiano2.xiz &

# Wait for the MPK mini MIDI port to show up, then connect it to ZynAddSubFX
while ! aconnect 'MPK mini' 'ZynAddSubFX'
do
  sleep 0.1
done

/usr/local/bin/zynpi.py

/usr/local/bin/zynpi.py in its turn is a small Python script that shows a message and a red LED on a 16×2 LCD display so that I know the synth module is ready to use:

#!/usr/bin/python
# Example using a character LCD plate.
import math
import time

import Adafruit_CharLCD as LCD

# Initialize the LCD using the pins
lcd = LCD.Adafruit_CharLCDPlate()

# Show some basic colors.
lcd.set_color(1.0, 0.0, 0.0)
lcd.clear()
lcd.message('Raspberry Pi 2\nZynAddSubFX')

The LCD is not an Adafruit one but a cheaper version I found on DealExtreme. It works fine with the Adafruit LCD Python library though. Next step is to figure out if I can use the buttons on the LCD board to change banks and presets.

Raspberry Pi synth module with 16×2 LCD display
The synth module test environment

Working on a stable setup

Next step for the synth module project was to get the Raspberry Pi 2 to run in a stable manner. It seems like I’m getting close, or that I’m already there. First I built a new RT kernel based on the 4.1.7 release of the RPi kernel. For this I had to check out an older git commit because the RPi kernel is already at 4.1.8. The 4.1.7-rt8 patch set applied cleanly and the kernel booted right away:

pi@rpi-jessie:~$ uname -a
 Linux rpi-jessie 4.1.7-rt8-v7 #1 SMP PREEMPT RT Sun Sep 27 19:41:20 CEST 2015 armv7l GNU/Linux
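For reference, building it roughly came down to the steps below. This is a sketch, building natively on the Pi itself (cross-compiling needs the usual ARCH and CROSS_COMPILE additions), and the 4.1.7 commit hash is a placeholder that has to be dug up from the rpi-4.1.y branch history:

$ git clone --branch rpi-4.1.y https://github.com/raspberrypi/linux.git
$ cd linux
$ git log --oneline Makefile      # find the last commit still at 4.1.7
$ git checkout <4.1.7-commit>
$ wget https://www.kernel.org/pub/linux/kernel/projects/rt/4.1/patch-4.1.7-rt8.patch.gz
$ zcat patch-4.1.7-rt8.patch.gz | patch -p1
$ make bcm2709_defconfig
$ make menuconfig                 # enable CONFIG_PREEMPT_RT_FULL
$ make -j4 zImage modules dtbs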

After cleaning up my cmdline.txt it seems to run fine without any hiccups so far. My cmdline.txt now looks like this:

dwc_otg.lpm_enable=0 dwc_otg.speed=1 console=ttyAMA0,115200 console=tty1 root=/dev/mmcblk0p2 rootfstype=ext4 rootflags=data=writeback elevator=deadline rootwait

Setting the USB speed to Full Speed (so USB1.1) with dwc_otg.speed=1 is necessary, otherwise the audio coming out of my USB DAC sounds distorted.

I’m starting ZynAddSubFX as follows:

zynaddsubfx -r 48000 -b 64 -I alsa -O alsa -P 7777 -L /usr/share/zynaddsubfx/banks/SynthPiano/0040-BinaryPiano2.xiz

With a buffer of 64 frames latency is very low and so far I haven’t run into instruments that cause a lot of xruns with this buffer size, not even the multi-layered ones from Will Godfrey.

So I guess it’s time for the next step: creating a systemd unit so that ZynAddSubFX starts at boot. It would also be nice if USB MIDI devices got connected automatically, and if you could somehow see which instrument is loaded; an LCD display would be great for that. I’d also like to have the state of the synth saved, maybe by saving an .xmz file whenever there’s a state change or at regular intervals. And the synth module will need a housing or casing. Well, let’s get the software stuff down first.


Building a synth module using a Raspberry Pi

Ever since I did an acid set with my brother-in-law at the now closed bar De Vinger I’ve been playing with the idea of creating some kind of synth module out of a Raspberry Pi. The Raspberry Pi 2 should be powerful enough to run a complex synth like ZynAddSubFX. When version 2.5.1 of that synth got released the idea resurfaced, since that version allows you to remote control a running headless instance of ZynAddSubFX via OSC, running on, for instance, a Raspberry Pi. I had looked at this functionality a few months before, but the developer was just starting to implement it so it wasn’t very usable yet.

zynaddsubfx-ext-gui

But with the release of ZynAddSubFX 2.5.1 the stability of the zynaddsubfx-ext-gui utility has improved to such an extent that it’s a very usable tool. In the above screenshot you can see zynaddsubfx-ext-gui running on my notebook with Ubuntu 14.04, controlling a remote instance of ZynAddSubFX running on a Raspberry Pi.

So basically all the necessary building blocks for a synth module are there. Coupled with my battered Akai MPK Mini and a cheap PCM2704 USB DAC I started setting up a test setup.

For the OS on the Raspberry Pi 2 I chose Debian Jessie, as I feel Raspbian doesn’t get the most out of your Pi. It’s running a 4.1.6 kernel with the 4.1.5-rt5 RT patch set, which applied cleanly and has run fine so far:

pi@rpi-jessie:~$ uname -a
Linux rpi-jessie 4.1.6-rt0-v7 #1 SMP PREEMPT RT Sun Sep 13 21:01:19 CEST 2015 armv7l GNU/Linux

This isn’t a very clean solution of course, so let’s hope a real 4.1.6 RT patch set will happen, or maybe I could give the 4.1.6 PREEMPT kernel that rpi-update installed a try. I packaged a headless ZynAddSubFX for the RPi on my notebook using pbuilder with a Jessie armhf root, and installed the package for Ubuntu 14.04 from the KXStudio repos. I slightly overclocked the RPi to 1000MHz and set the CPU scaling governor to performance. The filesystem is Ext4, mounted with noatime,nobarrier,data=writeback.
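For the record, the governor and mount settings come down to something like this (a sketch; it assumes the root filesystem lives on /dev/mmcblk0p2, the usual setup for a Raspberry Pi SD card image):

# Set the scaling governor to performance on all four cores
for cpu in /sys/devices/system/cpu/cpu[0-3]/cpufreq/scaling_governor
do
  echo performance > $cpu
done

And the corresponding /etc/fstab entry for the root filesystem:

/dev/mmcblk0p2  /  ext4  noatime,nobarrier,data=writeback  0  1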

To get the USB audio interface and the USB MIDI keyboard into line I had to add the following line to my /etc/modprobe.d/alsa.conf file:

options snd-usb-audio index=0,1 vid=0x08bb,0x09e8 pid=0x2704,0x007c

This makes sure the DAC gets loaded as the first audio interface, so with index 0. Before adding this line the Akai would claim index 0 and since I’m using ZynAddSubFX with ALSA it couldn’t find an audio interface. But all is fine now:

pi@rpi-jessie:~$ cat /proc/asound/cards
 0 [DAC            ]: USB-Audio - USB Audio DAC
                      Burr-Brown from TI USB Audio DAC at usb-bcm2708_usb-1.3, full speed
 1 [mini           ]: USB-Audio - MPK mini
                      AKAI PROFESSIONAL,LP MPK mini at usb-bcm2708_usb-1.5, full speed

So no JACK as the audio back-end, the output goes directly to ALSA. I’ve decided to do it this way because I will only be running a single application that uses the audio interface, so basically I don’t need JACK. And JACK tends to add a bit of overhead; you barely notice this on a PC system, but on small systems like the Raspberry Pi JACK can consume a noticeable amount of resources. To make ZynAddSubFX use ALSA as the back-end I’m starting it with the -O alsa option:

zynaddsubfx -r 48000 -b 256 -I alsa -O alsa -P 7777

The -r option sets the sample rate, the -b option sets the buffer size, -I is for the MIDI input and the -P option sets the UDP port on which ZynAddSubFX listens for OSC messages. And now for the cool part: if you start zynaddsubfx-ext-gui on another machine on the network and tell it to connect to this port, it starts only the GUI and sends all changes made in the GUI as OSC messages to the headless instance it is connected to:

zynaddsubfx-ext-gui osc.udp://10.42.0.83:7777

Next up is stabilizing this setup and testing with other kernels or kernel configs, as the kernel I’ve cooked up now isn’t a viable long-term solution. I’d also like to add a physical MIDI in and maybe a display like the one described on the Samplerbox site. And the project needs a casing of course.


Media units

The MK808 Android TV stick with a PCM2704 USB audio interface runs Debian Jessie with MPD and serves as our media player for audio files. It draws its power from the USB port of our cable modem so it’s always on. Most of the time Indie Pop Rocks is playing. It’s hooked up to the network via WiFi and we use MPDroid to control it.

The Raspberry Pi runs OpenELEC with Kodi. We use this for watching all kinds of video files that we stream from our NAS (an aging WD My Book Live that runs Debian Lenny) via an NFS share. It is connected to the network via ethernet.

The Chromecast is for watching Netflix. When we just got it we had some issues with connecting it to the network but after replacing our old router with an ASUS RT-AC68U it worked flawlessly.

The Technics SL-1210MK2 with Ortofon headshell and cartridge is for listening to music on vinyl, you know, those round black plastic units from the past with grooves in them. It doesn’t have any network connections and doesn’t run an OS. It does send electrical current to a NAD C 325BEE Stereo Integrated Amplifier with Dali Concept 2 speakers. Yeah, I’m a 2.0 guy.

The TV is an old pre-Smart TV Samsung, but as it still works we probably won’t replace it for the time being. It does have CEC so we can control the TV, RPi and Chromecast with a single remote.
