
The JAX Audio Server TX/RX 8888



This video demonstrates the operation of the JAX AudioServer TX and RX audio units.

The new JAX AudioServer Suite introduces a client-server transmission service based on TCP or UDP networking for transferring audio and MIDI data between audio units across a local network, in realtime and sample accurately. (In theory it is also possible to communicate the same way over the internet, with certain latency and security restrictions across operating systems.)


The AudioUnits can run inside any supporting audio unit host application such as GarageBand or Logic Pro. The server technology is built directly into our RX and TX units and establishes a permanent bidirectional communication between them, independently of any MIDI or audio drivers provided by the host.

The transmission protocols use sample-exact offsets into the current audio processing buffers, so there is virtually no timing jitter in the communication. The only pitfall is the speed and stability of the active network connection. The audio server has low bandwidth consumption, especially with MIDI.
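
As a rough illustration of what "sample-exact" means here, the following Swift sketch (my own illustration, not code from the actual units) maps an event's absolute sample time to an offset inside the current render buffer:

    // Hypothetical helper: place an incoming event at its exact frame inside
    // the current processing buffer, based on the sender's sample position.
    func offsetInCurrentBuffer(eventSampleTime: Int64,
                               bufferStartSampleTime: Int64,
                               frameCount: Int) -> Int? {
        let offset = eventSampleTime - bufferStartSampleTime
        guard offset >= 0, offset < Int64(frameCount) else { return nil }  // not in this block
        return Int(offset)
    }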

Sample rate conversions are applied on the fly, so it is possible to run the host applications at different sample rates, although this is generally not recommended for performance reasons. Audio buffers are currently transmitted in the native processing format, uncompressed as 32-bit floating point data for best audio quality.
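
For a sense of scale, here is a back-of-the-envelope estimate (my own calculation, ignoring packet headers and MIDI traffic) for one uncompressed 32-bit float stereo stream:

    // Rough bandwidth estimate for uncompressed 32-bit float stereo audio.
    // Assumption: excludes protocol overhead and MIDI data.
    let sampleRate = 48_000.0
    let channels = 2.0
    let bytesPerSample = 4.0                                    // 32-bit float
    let bytesPerSecond = sampleRate * channels * bytesPerSample // 384,000 B/s
    let megabitsPerSecond = bytesPerSecond * 8 / 1_000_000      // ≈ 3.07 Mbit/s
    print("≈ \(megabitsPerSecond) Mbit/s per stereo stream")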

The fundamental difference from common networking protocols is that the audio units communicate the exact sample rates of their operating environments, their buffer sizes and concrete sample position offsets with their transmission events. Audio and MIDI data therefore arrive with very accurate timing and quite low latency.
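
A hypothetical sketch of the timing metadata such a transmission event could carry (the field names are illustrative only, not the actual JAX wire format):

    // Illustrative header: each transmission event carries the sender's timing
    // context so the receiver can schedule the payload sample-accurately.
    struct TransmissionHeader {
        var sampleRate: Double    // sender's operating sample rate
        var bufferSize: UInt32    // sender's current processing block size
        var sampleOffset: UInt64  // absolute sample position of the payload start
        var payloadBytes: UInt32  // size of the following audio or MIDI payload
    }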

The audio transfer, for instance, is realised with a circular audio buffer scheme for safe read-write access with small latency on both sides. The MIDI transmission uses an offset of exactly one processing buffer between the TX and RX units, so that the (lost) transmission time can be compensated efficiently on the receiver's side. Please note that processing block sizes may even vary while the host operates.
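
A minimal sketch of such a circular audio buffer, assuming one network-side writer and one render-side reader (the real units presumably use a lock-free variant; a lock is used here for brevity):

    import Foundation

    // Simplified circular buffer: the network thread writes incoming samples,
    // the render thread reads them, with zero-padding on underrun.
    final class CircularAudioBuffer {
        private var storage: [Float]
        private var readIndex = 0
        private var writeIndex = 0
        private var count = 0
        private let lock = NSLock()

        init(capacity: Int) {
            storage = [Float](repeating: 0, count: capacity)
        }

        /// Returns the number of samples actually written (drops the rest when full).
        func write(_ samples: [Float]) -> Int {
            lock.lock(); defer { lock.unlock() }
            var written = 0
            for sample in samples where count < storage.count {
                storage[writeIndex] = sample
                writeIndex = (writeIndex + 1) % storage.count
                count += 1
                written += 1
            }
            return written
        }

        /// Fills `output` with available samples, zero-padding when nothing is buffered.
        func read(into output: inout [Float]) {
            lock.lock(); defer { lock.unlock() }
            for i in 0..<output.count {
                if count > 0 {
                    output[i] = storage[readIndex]
                    readIndex = (readIndex + 1) % storage.count
                    count -= 1
                } else {
                    output[i] = 0
                }
            }
        }
    }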


The TX module acts as a server that waits for inquiries from a connected client (the RX module). The RX can operate in a host on a completely different computer or iOS device, including the Apple TV box. The communication runs via specified port numbers (e.g. localhost, port 8888, TCP).
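
A hedged sketch of how the server side could listen on TCP port 8888 with Apple's Network framework (an assumption for illustration, not the actual JAX implementation):

    import Network

    // Keep a strong reference to the listener for as long as the service runs.
    let listener = try! NWListener(using: .tcp, on: 8888)
    listener.newConnectionHandler = { connection in
        connection.start(queue: .global(qos: .userInteractive))
        // Receive a client inquiry, then answer with the events for the current block.
        connection.receive(minimumIncompleteLength: 1, maximumLength: 65_536) { data, _, _, error in
            if let data = data, error == nil {
                // ... decode the inquiry and respond with audio/MIDI payloads ...
                connection.send(content: data, completion: .contentProcessed { _ in })
            }
        }
    }
    listener.start(queue: .global(qos: .userInteractive))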

The client (RX) regularly asks for blocks of audio and/or MIDI data, and the server responds with its available events for the current time frame, usually a buffer of between 128 and 1024 samples. These events can be small audio buffers or MIDI and SysEx data of varying size. The receiver (client) can cancel the inquiries at any time, and so can the server.
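
The responded events could be modeled roughly like this (the type and its case names are purely illustrative):

    // Illustrative payload shapes for a server response: small audio buffers
    // or MIDI/SysEx packets of varying size.
    enum TransmissionEvent {
        case audio(frames: [Float], channelCount: Int, startSampleOffset: UInt64)
        case midi(bytes: [UInt8], sampleOffsetInBlock: UInt32)
        case sysEx(bytes: [UInt8])
    }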

The difficulty here is the synchronisation between the audio units, as these may operate inside host applications with very different sample rate and processing buffer size configurations. Best results are achieved when both host setups match in sample rate and processing buffer size, as this avoids additional sample rate converters. The minimum latency of the audio server is 1024 samples for stable transmission quality, which is usually small enough for realtime usage.
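
Converted to time, the 1024-sample minimum latency amounts to roughly:

    // 1024 samples expressed in milliseconds at common host sample rates.
    let minimumLatencySamples = 1024.0
    print(minimumLatencySamples / 48_000 * 1_000)   // ≈ 21.3 ms at 48 kHz
    print(minimumLatencySamples / 44_100 * 1_000)   // ≈ 23.2 ms at 44.1 kHz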


The video here uses a TX unit in a sandboxed iOS application to transmit MIDI data in realtime to a client (RX) unit on a macOS computer. The TX module features a sample-accurate MIDI sequencer performing a MIDI file, and the receiver (RX) hosts a multi-timbral synthesizer on the other end, which takes the events and produces audio output (playback of the MIDI events transmitted in realtime). Direct playing of MIDI events is of course possible this way too. Please note that the connection between the two devices uses a human-readable Apple Bonjour name for the local IP address, so usually no cryptic IP numbers have to be entered during setup.
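
A hedged sketch of how a client could discover the TX side by its Bonjour name using NWBrowser (the service type "_jaxaudio._tcp" is made up for illustration and is not the actual service name):

    import Network

    var connections: [NWConnection] = []   // keep discovered connections alive

    let browser = NWBrowser(for: .bonjour(type: "_jaxaudio._tcp", domain: nil), using: .tcp)
    browser.browseResultsChangedHandler = { results, _ in
        for result in results {
            // Each discovered endpoint (the Bonjour name) can be connected to
            // directly, without typing an IP address.
            let connection = NWConnection(to: result.endpoint, using: .tcp)
            connection.start(queue: .main)
            connections.append(connection)
        }
    }
    browser.start(queue: .main)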

The low and stable network bandwidth can be observed in the debugging environment of Apple's Xcode.


The (sending) sequencer in the TX unit operates with an audio clock rate of 48,000 Hz, while the (receiving) synthesizer in the RX module plays back at a host sample rate of 44,100 Hz, or vice versa. No timing jitter is audible. As the log view shows, the sample offsets are perfectly in time and cause no timing issues. The processor usage with all visualisations and logging is also quite reasonable, and no audio dropouts are audible.

...