This is the web page for my MUMT 306 (Music & Audio Computing I) final project. It presents my two pieces of software, More than Noise and Text Composer, both built in Max/MSP. I hope you have fun playing around with them. :)
[ More than Noise ]
- A volume controller -
(implemented in Max/MSP with a few simple objects)
To make a volume controller that automatically adjusts the user's laptop volume based on the noise level of the surroundings.
Imagine you are sitting in a coffee shop listening through your headphones when a group of people comes in and the environment gets a bit noisier. Before you even reach for the volume control, clever More than Noise has already adjusted it for you!
Audio input from the laptop's built-in microphone.
An object to detect the noise level.
An object to adjust the audio output to the headphones.
An object to smooth the input noise.
To make it more fun, an animation was added: users can choose any of the 3D shapes offered, and the shape changes color each time a peak is detected in the surrounding noise.
My first thought was to write it as a C++ program, but there is no built-in library that can capture and analyze input audio and then modify the output.
The third-party libraries I found (SFML, RtAudio, etc.) were hard to set up and not very easy to use; after trying several of them, I still could not get it to work properly.
The final version was therefore made in Max/MSP, but it turned out too simple to be a final project.
Workflow and detailed design explanation
2 main sub-patchers
Volume: Takes audio input from the built-in microphone and processes the output accordingly.
Animation: Creates a 3D shape; whenever there is a peak in the audio input, it changes the color of the 3D object.
abs~: Takes the absolute value of the audio signal.
rampsmooth~: Smooths incoming signals over the specified number of samples; the larger the number, the smoother the result. This ensures the volume does not jump suddenly but instead responds to changes in the surroundings gradually.
snapshot~: Outputs a value from the most recent input signal vector.
The base volume entered by the user is added to the level detected from the surroundings to get the final volume.
The final volume is applied to the system using the aka.systemsound object from an external library.
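The signal chain above can be sketched outside Max as plain code. This is a rough illustration only: the function names, the smoothing constant, and the 0-1 clamping are my own choices, not part of the patch.

```python
def smooth(samples, ramp=4):
    """Linear-ramp smoothing, loosely mimicking rampsmooth~:
    each output moves toward the input by a fraction of the remaining gap."""
    out, cur = [], 0.0
    for s in samples:
        cur += (s - cur) / ramp
        out.append(cur)
    return out

def final_volume(mic_samples, base_volume, ramp=4):
    """abs~ -> rampsmooth~ -> snapshot~ -> add the base volume, clamped to 0..1."""
    rectified = [abs(s) for s in mic_samples]   # abs~: rectify the signal
    smoothed = smooth(rectified, ramp)          # rampsmooth~: avoid sudden jumps
    level = smoothed[-1]                        # snapshot~: most recent value
    return min(1.0, base_volume + level)        # base + detected level, clamped
```

With a silent input the result stays at the base volume; with a sustained loud input it ramps up gradually toward the maximum.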
Implemented using Jitter in Max.
jit.gl.gridshape: Creates the selected shape.
jit.window: Opens a window that displays the 3D animation.
peakamp~: Reports the largest amplitude value within the specified time interval.
A random number is generated to select a corresponding color.
hsl message: Converts the number to RGBA for display.
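The random-number-to-color step can be sketched in Python, whose standard library `colorsys` module does the same hue-to-RGB conversion as the hsl message (note its H, L, S argument order). The fixed saturation, luminance, and alpha values here are my own illustrative choices:

```python
import colorsys
import random

def random_rgba(seed=None):
    """Pick a random hue and convert it to an RGBA tuple, roughly what the
    random number + hsl message does in the patch. Full saturation, mid
    luminance, and opaque alpha are assumptions, not values from the patch."""
    rng = random.Random(seed)
    h = rng.random()                            # random hue in [0, 1)
    r, g, b = colorsys.hls_to_rgb(h, 0.5, 1.0)  # note the H, L, S order
    return (r, g, b, 1.0)
```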
Although More than Noise is a practical piece of software, it is too simple to be a final project. That brings me to my second, and official, final project: Text Composer.
[ Text Composer ]
- MIDI Sonification of text message -
(implemented in Max/MSP)
A piece of software that takes text input from the user and sonifies it by converting each character to its ASCII number and playing the result.
It also offers extra functions that users can play around with.
Convert the text input to ASCII numbers.
Each number must stay within the range of MIDI note numbers, i.e., 0-127.
Other functions included:
Accompaniment with 3 built-in scales and 1 extra custom scale.
Drum Beat with 3 built-in beat styles.
Velocity adjustment in melody and beat.
Program changes in melody, accompaniment and beat.
Visualization of sound waves.
It took a long time to arrive at this idea after More than Noise.
I got stuck on processing a message full of numbers so that they are sent to noteout one by one. (Otherwise all the numbers are sent at almost the same time, and users hear only one sound no matter how many words they have entered.)
I first tried to "hard code" the accompaniment scales, which made the patcher far too complex.
Workflow and detailed design explanation
3 main sub-patchers
Transfer: Takes the text input and converts it to ASCII numbers within the range 0-127. Saves all the numbers into a message and then sends it.
Play: Receives the transferred numbers and plays them. Contains a sub-patcher Scale that takes the current note number and plays the chosen accompaniment based on it.
Drumbeat: Implements the 3 built-in beat styles; it is independent of the text message.
There are 2 extra patchers used to define parameters:
defscale: Loads the data for the 3 built-in accompaniment scales.
appendmsg: Appends entries to the beat program number selection menu.
route: The textedit object prepends the word "text" to whatever the user types. route is used to strip this "text" prefix so it is not sonified.
atoi: Converts characters to ASCII numbers.
select 32: Filters out spaces (ASCII 32).
If a number is out of range, 127 is subtracted from it.
send: Collects the numbers into a message and sends it.
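The Transfer steps above can be sketched in Python (the function name is mine, and subtracting 127 once mirrors the patch's rule, which assumes extended characters do not go far beyond 127):

```python
def text_to_notes(raw):
    """Mimic the Transfer sub-patcher:
    - route:     drop the leading "text" token that textedit prepends
    - atoi:      convert each character to its code point
    - select 32: filter out spaces
    - wrap:      if a code is above 127, subtract 127 to stay in MIDI range
    """
    if raw.startswith("text "):
        raw = raw[len("text "):]
    notes = []
    for ch in raw:
        n = ord(ch)
        if n == 32:        # select 32 filters spaces
            continue
        if n > 127:        # keep within the MIDI note range 0-127
            n -= 127
        notes.append(n)
    return notes
```

For example, the input message "text Hi" becomes the note list [72, 105].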
zl slice 1: Slices the note list into single numbers, one by one.
delay: Sends each sliced number after the specified delay time; the speed at which notes are output depends on the delay time.
If a pitch shift pushes a note above the range, 127 is output instead.
Passes the current note to the patcher Scale (where the linear accompaniment is generated).
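The slicing, delayed output, and pitch-shift clamping can be sketched as follows; both function names are mine, and returning (onset, note) pairs stands in for the timed messages that reach noteout in the patch:

```python
def schedule_notes(notes, delay_ms):
    """Mimic zl slice 1 + delay: the list is sliced into single notes and
    each note is emitted delay_ms after the previous one. A shorter delay
    means faster playback."""
    return [(i * delay_ms, n) for i, n in enumerate(notes)]

def shift_pitch(note, shift):
    """Apply a pitch shift with the patch's rule:
    anything pushed above 127 outputs 127 instead."""
    return min(127, note + shift)
```

For instance, `schedule_notes([60, 64, 67], 250)` spaces the three notes 250 ms apart.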
metro: Send bang regularly.
gate: Opens/closes the selected beat style.
Receives the corresponding beat program number to play.
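The metro/gate combination can be sketched as a small step sequencer: metro supplies regular bangs, and gate routes them to whichever beat style is open. The three patterns below are purely illustrative stand-ins for the built-in styles:

```python
# Hypothetical 8-step patterns standing in for the 3 built-in beat styles
# (1 = hit, 0 = rest); the real patterns live inside the Drumbeat sub-patcher.
BEAT_STYLES = {
    1: [1, 0, 0, 0, 1, 0, 0, 0],   # sparse
    2: [1, 0, 1, 0, 1, 0, 1, 0],   # steady
    3: [1, 1, 0, 1, 0, 1, 1, 0],   # busy
}

def run_beat(style, n_bangs):
    """metro sends n_bangs bangs; gate passes them only to the selected
    style, which cycles through its pattern one step per bang."""
    pattern = BEAT_STYLES[style]
    return [pattern[i % len(pattern)] for i in range(n_bangs)]
```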
Link the drum beat to the text message, i.e., make it depend on the melody played, so that it can automatically choose a beat style and velocity that are harmonic with the melody.
Keyword system:
Weather: keywords like "windy" or "wind" will trigger a wind sound, while "rain" or "rainy" will trigger a rain sound.
Emotion: emotional words like "happy" will trigger a more dynamic beat style and a correspondingly suitable velocity, while "sad" will trigger a heavier sound.
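Since the keyword system is still an idea for future work, one simple way to realize it would be a lookup table scanned over the input words. Everything here is illustrative: the word lists, sound names, beat-style labels, and velocity values are assumptions, not design decisions from the project.

```python
# Illustrative keyword tables; the actual word lists and trigger names
# would be chosen when the feature is built.
WEATHER_SOUNDS = {"wind": "wind", "windy": "wind", "rain": "rain", "rainy": "rain"}
EMOTION_STYLES = {"happy": ("dynamic", 110), "sad": ("heavy", 70)}

def scan_text(text):
    """Scan the input text for weather and emotion keywords.
    Returns (weather_sound, (beat_style, velocity)); either may be None."""
    weather, emotion = None, None
    for word in text.lower().split():
        word = word.strip(".,!?")
        weather = WEATHER_SOUNDS.get(word, weather)
        emotion = EMOTION_STYLES.get(word, emotion)
    return weather, emotion
```

For example, "It is windy and I am happy!" would trigger the wind sound together with the more dynamic beat style.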