BubbleTracks
What is BubbleTracks? It is a graphical tactile music interface for iOS, intended for performance and leisure. BubbleTracks uses circles called "bubbles" that can be triggered, untriggered, or resized to control the rendering of audio. These bubbles can be linked to one another, and each link affects the control or rendering of the audio. For instance, a link between an audio source and a digital audio effect corresponds to a route from that source to that effect. The position of a bubble may also affect the audio rendering: parameters of an audio effect are modified by moving its bubble around the iOS device's screen. There are several types of bubbles: track bubbles that play back audio files, effect bubbles that process them, and planned types such as Hit Bubbles and Control Bubbles.
BubbleTracks is a fun and innovative way to perform using pre-recorded samples. Uploading your own samples to the iOS device and using them in the application is not implemented yet, for file-format support reasons; existing samples are provided to demonstrate the capabilities of the app. Have fun using BubbleTracks!
Download the app:
You can download a demo of the source code of the app here. Since the app is not yet on the App Store, you will have to compile the source with Xcode. The package provided already contains a version of MoMu. You need access to the Xcode standard libraries in order to compile the app, but the app should not require any other dependencies. You can download the source here. If you have trouble installing the app, please email me at jules.testard@mail.mcgill.ca.
How to use BubbleTracks:
At first, the user is presented with a black screen. If the user taps (long press) somewhere on the screen, the menu view appears with a choice of audio files or effects that the user can create. When an audio file or effect is selected, a corresponding bubble view is added on the screen at the location where the user last tapped. If it is an audio file, the corresponding audio data is loaded into memory at this point. When a new audio file is loaded, an audio file "slot" in the Audio Mixer class is assigned to it, and the internal pointer starts ticking and reading the file (which loops by default). When an effect is selected, an effect "slot" in the Audio Mixer class is assigned. There are in total 8 audio file slots and 8 effect slots available. If a user tries to create more bubbles than the limit, an error message specifying that there is no more space is returned.
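The slot assignment described above can be sketched as a small allocator. This is an illustrative C++ sketch, not the app's actual code; names such as SlotBank and allocate are assumptions.

```cpp
#include <array>

// Hypothetical sketch of the Audio Mixer's slot system. The constant
// mirrors the default of 8 audio file (or effect) slots.
constexpr int kNumTrackSlots = 8;

struct Slot {
    bool used = false;  // a bubble has claimed this slot
    bool open = false;  // the slot is ticking in the render callback
};

struct SlotBank {
    std::array<Slot, kNumTrackSlots> slots;

    // Returns the assigned ID (audioID or effectID), or -1 when every
    // slot is taken, mirroring the app's "no more space" error.
    int allocate() {
        for (int id = 0; id < kNumTrackSlots; ++id) {
            if (!slots[id].used) {
                slots[id].used = true;
                return id;
            }
        }
        return -1;
    }
};
```

With the default of 8 slots, the ninth allocation attempt fails, which is when the user would see the error message.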
Initially, tracks are muted when loaded. If the user taps a BubbleTrack view, this will unmute it if it is muted, or mute it if it isn't. Likewise, effects are initially inactive. To activate an effect, all the user has to do is tap it, and tap it again to deactivate it.
Different bubbles of different types can be linked to one another. To link one bubble to another, the user has to first long press on a first bubble, then long press on another one and this will effectively create a link. This tells the Audio Mixer that a track is dynamically routed to an effect, and the audio stream coming out of that track will be processed through the effect if that effect is active. This link is visible to the user as a white line connecting two bubbles. To unlink two bubbles, all the player has to do is draw a line crossing the link and this will effectively remove the link.
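The link bookkeeping described above amounts to appending an effect ID to a track's route, and unlinking to removing it. A minimal sketch, assuming hypothetical names (Router, link, unlink; the app's actual structures may differ):

```cpp
#include <list>
#include <map>

// Illustrative routing table: each audioID owns an ordered list of the
// fxIDs it is routed through, in the order the links were created.
struct Router {
    std::map<int, std::list<int>> routes;  // audioID -> ordered fxIDs

    // Long-pressing a track bubble and then an effect bubble creates a
    // link, appending the effect to the end of the track's chain.
    void link(int audioID, int fxID) { routes[audioID].push_back(fxID); }

    // Drawing a line across the link removes it from the chain.
    void unlink(int audioID, int fxID) { routes[audioID].remove(fxID); }
};
```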
As of right now, the routing is done on a first-in, first-processed basis. This means the effects form a serial processing chain: the first effect linked to the track is the first one processed, the effect corresponding to the second link added is processed after the first, and so on.
Implementation
The application is implemented in the native languages of iOS, Objective-C and C++, for performance and scalability. The underlying structure is fairly simple, with two communicating layers: the user interface layer and the audio layer.

User Interface implementation
The user interface implementation is rather typical of native iOS apps. A navigation controller allows the user to switch between the Menu view and the Main view. The Main View Controller contains gesture recognizers that allow the user to move the bubbles around, tap them, create new ones, link them, and unlink them. Cells for the menu view and the information they contain are implemented in classes called AudioWrapper and FXWrapper.
Unlinking uses the panning gesture recognizer: the segment between the last touch and the current touch is tested for intersection against the line of the link view. If an intersection exists, the link is cut.
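The intersection test above is plain 2D geometry. A sketch using the standard cross-product orientation test (the function names are illustrative, not the app's):

```cpp
struct Pt { double x, y; };

// Signed area of the triangle (o, a, b): its sign tells which side of
// the directed line o->a the point b lies on.
static double cross(Pt o, Pt a, Pt b) {
    return (a.x - o.x) * (b.y - o.y) - (a.y - o.y) * (b.x - o.x);
}

// True when segment p1-p2 (the pan stroke from last touch to current
// touch) properly crosses segment q1-q2 (the link view's line).
bool segmentsIntersect(Pt p1, Pt p2, Pt q1, Pt q2) {
    double d1 = cross(q1, q2, p1), d2 = cross(q1, q2, p2);
    double d3 = cross(p1, p2, q1), d4 = cross(p1, p2, q2);
    // Each segment's endpoints must straddle the other segment's line.
    return ((d1 > 0) != (d2 > 0)) && ((d3 > 0) != (d4 > 0));
}
```

When segmentsIntersect returns true for a link, that link is removed from the routing table and its white line disappears.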
The Audio Mixer class has a render callback that is called for every frame at a given sample rate. It is in this callback that the logic for the app is implemented. The Mixer uses a "slot" system to determine how many effects and tracks are present at once. By default, there are 8 track slots and 8 effect slots, but this can be changed by editing a preprocessor definition. Each time a new track or effect is loaded, it is assigned an audioID or an effectID, and its slot is set to "open" in the callback. This ID is used to make tracks or effects active or inactive, and for the routing strategy.
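The per-frame work of the callback can be sketched as a walk over the track slots: closed or muted slots are skipped, and the rest contribute to the output. This is a simplified illustration, with assumed names (Track, renderFrame) and a single float standing in for the file read:

```cpp
#include <array>

constexpr int kNumTracks = 8;  // matches the default track slot count

struct Track {
    bool open = false;   // slot has a file loaded and ticking
    bool muted = true;   // tracks start muted when loaded
    float sample = 0.f;  // stand-in for the next sample read from the file
};

// One frame of the render callback: sum every open, unmuted track.
float renderFrame(const std::array<Track, kNumTracks>& tracks) {
    float out = 0.f;
    for (const Track& t : tracks)
        if (t.open && !t.muted) out += t.sample;
    return out;
}
```

In the real callback this runs once per frame at the audio sample rate, so the body must stay cheap and allocation-free.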
The routing strategy is implemented for each audioID using a linked list of fxIDs. Once a tick has occurred in the callback, the program iterates in order over the fxIDs in the list and processes each effect that is active.
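The serial chain can be sketched as follows, with each effect reduced to a gain multiplication as a stand-in for real DSP (Fx and processChain are illustrative names, not the app's):

```cpp
#include <list>
#include <vector>

struct Fx {
    bool active = false;  // effects start inactive until tapped
    float gain = 1.f;     // stand-in parameter for the effect's DSP
};

// Walk the track's fxID list in insertion order; each active effect
// processes the output of the previous one (first-in, first-processed).
float processChain(float sample, const std::list<int>& fxIDs,
                   const std::vector<Fx>& fx) {
    for (int id : fxIDs)
        if (fx[id].active) sample *= fx[id].gain;
    return sample;
}
```

Because the list preserves insertion order, the first link created is always the first effect applied, exactly as described above.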
Synchronization of track playback is handled while a track is loading. Before labeling a slot as open (which starts the ticking of the sound file), a thread waits until playback reaches a beat before opening the slot. This simple scheme guarantees that different sounds are synchronized in tempo from the start. The only caveat is that it presupposes the audio files themselves share the same tempo; further discussion of this topic follows below.
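The wait-for-a-beat arithmetic is straightforward: at the global tempo, a beat lasts sampleRate * 60 / bpm samples, so the loading thread only needs to know how far the global tick is from the next multiple of that period. A sketch (the function name is an assumption):

```cpp
// How many samples remain until the next beat boundary, given the
// global sample counter, the sample rate, and the tempo in BPM.
long samplesUntilNextBeat(long currentTick, double sampleRate, double bpm) {
    long samplesPerBeat = static_cast<long>(sampleRate * 60.0 / bpm);
    long offset = currentTick % samplesPerBeat;
    return offset == 0 ? 0 : samplesPerBeat - offset;
}
```

At 44100 Hz and the default 120 BPM, a beat is 22050 samples, so a track finishing its load mid-beat waits at most half a second before its slot opens.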
Further improvements
One of the main issues with the application right now is its static handling of tempo. If an audio file whose inherent tempo or duration differs from the overall assigned tempo (which defaults to 120 BPM) is added, the current ticking system will not synchronize it with the other tracks. A solution would be to detect the tempo while loading the audio file and, if it differs from the overall tempo, time-stretch the file accordingly. This would add a lot of flexibility to the choice of audio files that can be added to the software.
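The proposed fix reduces to a single ratio: a file detected at fileBpm must have its length scaled by fileBpm / targetBpm to land on the mixer's global tempo (the actual time-stretching DSP is a separate problem; the names here are illustrative):

```cpp
#include <cmath>

// Length scaling factor for time-stretching a file from its detected
// tempo to the mixer's global tempo. A ratio below 1 means the file
// must be shortened (sped up), above 1 lengthened (slowed down).
double stretchRatio(double fileBpm, double targetBpm) {
    return fileBpm / targetBpm;
}

// Resulting frame count after stretching, rounded to the nearest frame.
long stretchedLength(long numFrames, double fileBpm, double targetBpm) {
    return std::lround(numFrames * stretchRatio(fileBpm, targetBpm));
}
```

For example, a 100 BPM loop stretched to the default 120 BPM tempo keeps 100/120 of its original length, so its beats line up with the global tick.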
Right now, there is also a question of performance value. One could ask: is creating bubbles that play back audio files really a performance? It is easy to see that the user currently has rather limited control over what is playing. This is why additional bubble types, such as the Hit Bubbles or Control Bubbles mentioned in the introduction, would be needed to enhance the control the player has over the sound produced.
It would be interesting to investigate more flexible strategies for the routing hierarchy, especially if one wishes to change it on the fly (for instance, placing a low-pass filter before a reverb in the processing chain when the reverb initially precedes the low-pass filter). Also, in cases where a serial processing chain is not desirable, it would be useful to have the option of parallel processing instead.
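A hypothetical parallel alternative to the serial chain could feed every active effect the dry signal independently and mix the results, instead of cascading them. A sketch under that assumption (again with gains standing in for real DSP, and all names invented for illustration):

```cpp
#include <list>
#include <vector>

struct ParFx {
    bool active = false;
    float gain = 1.f;  // stand-in for the effect's DSP
};

// Each active effect processes the dry sample on its own; the wet
// outputs are averaged so the level stays comparable to the dry signal.
float processParallel(float dry, const std::list<int>& fxIDs,
                      const std::vector<ParFx>& fx) {
    float sum = 0.f;
    int active = 0;
    for (int id : fxIDs) {
        if (!fx[id].active) continue;
        sum += dry * fx[id].gain;
        ++active;
    }
    return active ? sum / active : dry;  // pass dry through when no fx
}
```

Unlike the serial chain, the result here is independent of link creation order, which sidesteps the reordering problem at the cost of losing effect-on-effect interactions such as filtering a reverb tail.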
Audio file support for the app is currently limited to 16-bit or 32-bit uncompressed audio files. The application should support more formats to give the user more choice about which kinds of audio files to add to the app.