Software Instrument Build using Max 7

I have built a Max patch that functions as a software instrument and can be used with MIDI input and various controls 🙂 This blog outlines the stages of my build. Feel free to ask any questions in the comments!

I initially created a simple sequencer, to be integrated later into two parts of the software instrument: as a drum machine, and as an arpeggiator tool for the main synth presets. To do this, I used the tempo object, with parameters set so that it outputs a certain number of divisions of a beat at a specified rate. The select object was then used to output 16 successive bangs.

I also used the groove~ object and its required inputs, with an if statement to bang a 0 and begin playback in the groove~ object.
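Conceptually (a Python sketch, not Max itself), the tempo object spaces 16 step-bangs evenly across the bar, and select routes each step index to its own outlet. The function name and defaults here are my own illustration:

```python
# Sketch of the tempo -> select chain: a clock emits step indices
# 0-15 at even intervals, and each index fires that step's bang.
def step_times(bpm, divisions=16, beats_per_bar=4):
    """Return the onset time (seconds) of each of the 16 steps,
    the way tempo's divisions-of-a-beat output is spaced."""
    beat = 60.0 / bpm                        # seconds per quarter note
    step = beat * beats_per_bar / divisions  # here: one sixteenth note
    return [i * step for i in range(divisions)]

times = step_times(120)  # at 120 bpm each sixteenth is 0.125 s
```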

To allow a user to turn audio samples on and off within the sequencer, 16 toggles are connected to the first inlets of 16 gswitches, and each button is connected to the last inlet of each switch. This way, a bang triggers playback of the audio file only when the toggle is switched on.
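The toggle-and-gswitch logic amounts to a simple gate per step. A minimal sketch (names are mine, not from the patch):

```python
# Sketch of the toggle + gswitch idea: a step's bang reaches the
# sample player only when that step's toggle is on.
def gated_bangs(pattern, toggles):
    """pattern: step indices fired in order by the clock;
    toggles: 16 booleans (the user's on/off grid).
    Returns the steps that actually trigger playback."""
    return [step for step in pattern if toggles[step]]

toggles = [False] * 16
toggles[0] = toggles[4] = toggles[8] = toggles[12] = True  # four-on-the-floor
fired = gated_bangs(list(range(16)), toggles)
```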

I then began creating some instrument presets using additive synthesis. In the picture below, a number for a fundamental frequency (f0) is entered manually using the number object (later via MIDI input and mtof). The number is then multiplied by each overtone number (the frequency of each is displayed using number boxes). Each number is sent to a cycle~ object to create a sine wave. Finally, sliders set the amplitude of each partial; the slider range is 0 to 1 (allowing float outputs), and these amplitudes scale each signal via the *~ object. The relative amplitudes differ across presets (this method sets the overtone gains for the second two sounds; the first two are controlled using envelopes). The output signal is divided by 10 to avoid clipping, and the spectroscope~ object is used to visualise it.

For the first two presets, I used a separate envelope on each partial. I designed the envelopes using the function object along with the line~ object (as in the Graphic Envelope help patch). These are also multiplied by each signal (shown below), and the envelopes differ across presets. I added a notein object to receive messages from a controller, a kslider to visualise the input on screen (and to allow manual input), and an mtof object to convert the MIDI messages to frequencies. I then connected the mtof object to the number box for the fundamental frequency, and connected the kslider's outlet to a button and then a send object, so that a bang is sent to my envelopes every time a note is played. The function objects receive the bang using a receive object with the same name as the send, which for my first preset is "bangEnvelopes". Each envelope for this first preset is 5000 ms long (edited in the inspector for each function object).
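The additive chain (f0 × multiplier → sine → slider amplitude → sum ÷ 10) can be sketched numerically. This is an illustrative reconstruction, not the patch itself; the function and the example multipliers/amplitudes are mine:

```python
import math

def additive_sample(t, f0, partials, amps):
    """One output sample of the additive design: each partial is
    f0 * multiplier through its own sine (cycle~), scaled by its
    slider amplitude, then the sum is divided by 10 as in the patch."""
    s = sum(a * math.sin(2 * math.pi * f0 * m * t)
            for m, a in zip(partials, amps))
    return s / 10.0

# e.g. fundamental plus overtones 2..5 with falling amplitudes
y = additive_sample(0.001, 220.0, [1, 2, 3, 4, 5],
                    [1.0, 0.5, 0.33, 0.25, 0.2])
```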
The next screenshot shows my first instrument preset (the gain sliders are to be removed in this one, as envelopes control the overtones; they are only left in so they can be copied to presets 3 and 4).

Next, I copied this design, changing the overtone values and amplitudes/envelopes to create a new sound for a separate preset. The second preset has additional harmonics (f0*11, f0*12 and f0*13), which share envelopes with f0, f1 and f2. The third preset uses only odd harmonics. For the last preset, I altered the frequency multiplier values to generate dissonant overtones. The same MIDI controls are connected to all four sounds, and a selector is used to choose between them and send only one signal at a time to the ezdac~. This next picture shows the four additive presets together.

I then linked a section of the sequencer with an additive synthesiser in order to use it as an arpeggiator. The next picture shows my arpeggiator; much of the previously built sequencer has been removed, as it was not needed for a simple arpeggiator:

The tempo and selector objects here create a sequence of notes that depends on the selected note on the keyboard. The selector sends a set of timed bangs to each of its outlets, to which I have connected separate buttons. Each button then triggers a separate number to be added to the input note; different numbers create different intervals, and the tempo can be adjusted using the dial above. The output of this arpeggiator is connected to the mtof object, which feeds the fundamental frequency in each additive preset. Next, I used the preset object to store the on-screen values for my current presets (I initially created this to store the fader values for presets 3 and 4, as shown in the screenshot). This object can also be used to let a user store their own settings when using the instrument.
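The add-an-interval-then-mtof step is easy to verify numerically. mtof's formula is the standard MIDI-to-frequency conversion; the interval list below is just an example shape, not the patch's actual values:

```python
def mtof(note):
    """MIDI note number -> frequency in Hz, as the mtof object does."""
    return 440.0 * 2 ** ((note - 69) / 12)

def arpeggio(root, intervals):
    """The selector's timed bangs each add a fixed interval to the
    held note; the resulting notes feed mtof -> fundamental frequency."""
    return [mtof(root + i) for i in intervals]

# example 8-step ascend-and-repeat shape on middle C
freqs = arpeggio(60, [0, 4, 7, 12, 0, 4, 7, 12])
```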

Next, I made a synthesiser that integrates subtractive and AM synthesis techniques for further presets. Rather than using multiple reson~ objects to combine single-band filters, I used the filtercoeff~ and biquad~ objects to allow various filter types and various input signals. To change the type of filter generated, messages are sent to the filtercoeff~ object. To let the user choose efficiently between messages, I used the umenu object, entering umenu items in the inspector that correspond to the messages filtercoeff~ accepts (these are available in the filtercoeff~ help file).

The biquad~ object, which creates the filter from the coefficients provided by filtercoeff~, accepts multiple input signals. Here I have used multiple signal generators so a user can choose between them, or layer them to create different textures:
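For reference, here is a sketch of what the filtercoeff~ → biquad~ pair computes, using the widely used RBJ audio-EQ-cookbook lowpass formulas (an assumption about the exact recipe; Max's help files document the same coefficient form):

```python
import math

def lowpass_coeffs(fc, q, sr=44100.0):
    """Lowpass coefficients in the b0 b1 b2 a1 a2 form that
    filtercoeff~ hands to biquad~ (RBJ cookbook formulas)."""
    w0 = 2 * math.pi * fc / sr
    alpha = math.sin(w0) / (2 * q)
    b0 = (1 - math.cos(w0)) / 2
    b1 = 1 - math.cos(w0)
    b2 = b0
    a0 = 1 + alpha
    a1 = -2 * math.cos(w0)
    a2 = 1 - alpha
    # biquad~ expects the coefficients normalised by a0
    return [c / a0 for c in (b0, b1, b2, a1, a2)]

def biquad(x, coeffs):
    """Direct-form-I filter: one pass over a list of samples."""
    b0, b1, b2, a1, a2 = coeffs
    x1 = x2 = y1 = y2 = 0.0
    out = []
    for s in x:
        y = b0 * s + b1 * x1 + b2 * x2 - a1 * y1 - a2 * y2
        x2, x1 = x1, s
        y2, y1 = y1, y
        out.append(y)
    return out
```

A lowpass should pass DC unchanged, which is a quick sanity check on the coefficients.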

(Note: the numbers that I have input for the rect~ object arguments are frequency and pulse width).

I used the adsr~ object to let users shape the amplitude envelope using dials, as shown in the picture below. The output from the kslider is scaled so that it is taken as 0 to 1 rather than 0 to 127 (the scale object's arguments are: input min, input max, output min, output max).

Some additional features were then integrated into this subtractive/hybrid patch to create a synthesizer that can store different multi-textured presets. I added the input signals together, and used an LFO to integrate AM synthesis as a tremolo effect. I then added this subtractive patch to the additive presets so that it can be selected as a separate preset (or to choose between multiple subtractive presets). Below is a screenshot of the subtractive subpatch at this stage:

Combined with the additive presets and arpeggiator in the main patch, this is what it looked like:

I then created subpatches to encapsulate presets and keep the main patch tidy. To reduce CPU usage, I used the mute~ object to disable signal processing in all subpatches except the one that is selected and sent to the output; to do this, I used if statements.

I then had to add a second preset object in the subtractive subpatch to store different settings for 4 new sounds. These can be triggered by recalling the numbers in the main patch, which are also set to trigger the 5th subpatch. I have been using a subpatch to store the presets (screenshots below), which I also connected to the subtractive patch in order to transfer the storage location. The new preset object in subpatch 5 can then copy over settings from the main patch.
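The effect of those if statements is simply "mute everything except the selected subpatch". A sketch (the function and subpatch count of 5 mirror the description above):

```python
def mute_states(selected, n_subpatches=5):
    """The if statements' effect: mute~ receives 1 (muted) for every
    subpatch except the selected one, cutting their DSP cost."""
    return [0 if i == selected else 1 for i in range(1, n_subpatches + 1)]

states = mute_states(3)   # only subpatch 3 keeps processing audio
```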

Access to these settings from the main patch:

Troubleshooting: some errors I encountered at this stage took time to resolve, such as hanging notes (ineffective envelopes) and a routing problem in the subtractive synthesiser which rendered my ADSR and main filter inactive. To resolve these issues, I re-routed the subtractive patch. The following screenshot shows my signal flow.

Next, I used Jitter in the main patch to display a visual for each preset. Visuals contained the preset name and information on the synthesis technique used, along with a graphic that I created using Processing (Java based) and iMovie. To do this, I connected each preset number (used to select the sounds in presentation mode) to if statements which output either 0 or 1 depending on which preset is selected. These 0s and 1s control which video is played using the jit.xfade object, as shown in the screenshot below; jit.xfade allows me to organise the presets in pairs. A loadbang is used to ensure the files begin to play once the patch is open.

I then created some filepaths that are triggered once the patch is open (using loadbang) in order to load each preset's visual display automatically. As these file paths are specific to my personal computer, the patch will need to be rendered as an application to present its full functionality.

Upon loading 10 different video files, I began to encounter glitches with Jitter as my patch became more CPU intensive. To reduce CPU usage, I used if statements to stop playback of all video files apart from the one corresponding to the synth preset currently in use. I also encountered issues when loading 10 different video files into a single jit.pwindow object, with videos flashing in and out regardless of whether they were set to play or stop. To prevent this, I used 5 jit.pwindow objects together (one for each jit.xfade object, since each jit.xfade handles only a pair of sources).
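The preset-to-video routing reduces to simple arithmetic: a preset number picks one of the five xfade pairs and one of the two videos in that pair. This sketch assumes the pairing is 1&2 on the first xfade, 3&4 on the second, and so on, which matches the description but not necessarily the patch's exact wiring:

```python
def xfade_routing(preset):
    """Reduce a preset number (1-10) to (which jit.xfade pair, which
    side of the crossfade): 0 -> first video, 1 -> second."""
    pair = (preset - 1) // 2        # which jit.xfade (0..4)
    position = (preset - 1) % 2     # the 0-or-1 the if statements send
    return pair, position
```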




To use the 5 windows together in presentation mode, I layered them on top of each other and scripted them to show only when their specific preset was selected. To do this, I first added scripting names to each pwindow (accessed via the inspector), and used the messages "script show objectname" and "script hide objectname" connected to a thispatcher object, which sends messages to the main patch. These scripting messages were then connected to the preset selection numbers.

At this stage, I began to create my user interface by adding objects to presentation mode. In order to allow adjustments to presets 5, 6, 7, 8, 9, and 10, I needed to add controls that live in subpatch 5. To do this, I copied the controls to the main patch and created new inlets to link them (de-encapsulating the subpatch didn't work because of preset save locations):

I also added a new 16-step arpeggiator, so that a user can choose between an 8-step arpeggiator that ascends and repeats, or a 16-step one that ascends and descends:
So far, I had displayed the two arpeggiators; main preset selection controls; jitter pwindows with linked preset videos; subpatch 5 synth controls (ADSR, LFO and input signals); and main output controls in presentation mode:
For synthesizer presets that are encapsulated in subpatches, controller output is mapped to copied items in the main patch using a prepend set object. Copies of dials and faders need to be in the main patch in order to be added to presentation mode:

These are linked to the copied objects outside of the patch using the new outlets created:
In some cases, I encountered difficulty with objects which could not output patterned line values (such as the graphic envelopes below). For this reason, I decapsulated subpatches 1 and 2 in order to include these controls in the user interface:
The following screenshot shows the error with the graphic envelope objects as mentioned above:
I then needed to show/hide controls depending on the preset selected in the UI. I did this by sending scripting messages to a thispatcher object, hiding all controls whenever any preset other than the one they belong to is selected. To send scripting messages, each object needed a scripting name, which can be entered in the inspector once an item is selected. The screenshots below show my scripting process (note that the black live.buttons are the preset selectors in the UI; these send a bang to all of the scripting messages to show/hide objects in presentation mode):

In the case of preset 3, no prepend set objects were used. To output the stored fader values (determining harmonic volume), a bang is sent to flonum objects (which take the fader values), and these are sent to the patch outlets. The outlets are then connected to the matching faders in the main patcher. Flonum objects were inserted in the signal chain to check that the correct numbers were taken:

I sent a final message to thispatcher to set the zoom level immediately when a user opens the patch; the UI needs to be at 75% for all of the objects to be viewable. (Note that a delay of 50 ms is used to prevent a crash when Max is opened: many loadbangs fire at once, and the delay allows the computer to process filepaths etc. before adjusting the zoom. It happens too fast to see, but 50 ms genuinely makes a difference on my computer.)

The following images show my GUI at this stage, as different presets were selected. The blank space on the left is reserved for a sequencer and recorder.

Preset 1 with interactive envelopes and AM speed:
Preset 2 with interactive envelopes and overtone values (in Hz):
Preset 3, which allows the user to change the overtone volumes (preset 4 will be the same):

Preset 4, which provides controls for the filter input signals, AM, FM and ADSR. Presets 5, 6, 7, 8, 9, and 10 use the same controls, as they are generated by the same synthesiser (originally subpatch 5, which integrates subtractive, AM, and FM synthesis). Control values are restored when the presets are selected, showing the user the interactive components:

Features added next: sequencer, recorder, graphic background with midi controls for brightness, contrast, saturation.

I integrated my sequencer (the first thing I built, at the top of this blog) as a 16-step drum machine which the user can switch on, adjust the tempo of, and play along to. Four drum sounds are used (exported from Logic X Drummer), and the user can input any 16-step pattern across the 4 sounds. These four audio files are read from my assignment folder, similarly to the .mov files imported into Jitter. For this reason it will be easier to read them if the patch is rendered as an app for use on another computer, so that filepaths do not need to be changed; this will encapsulate all of the files. The first screenshot below shows the sequencer in edit mode, and the second shows it as it appears in the bottom left corner of presentation view. A user can draw patterns in using the toggles to create a 16-step drum beat.
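With four sounds, the sequencer's toggle grid becomes a 4 × 16 matrix: on each step, every row whose toggle is on fires its sample. A sketch (grid contents are an example pattern, not mine from the patch):

```python
def drum_hits(grid, step):
    """grid: 4 rows (drum sounds) x 16 toggles; returns which of the
    four samples fire on a given step of the 16-step loop."""
    return [row_i for row_i, row in enumerate(grid) if row[step]]

# example: kick on every fourth step, snare on the backbeats
grid = [[s % 4 == 0 for s in range(16)],
        [s % 8 == 4 for s in range(16)],
        [False] * 16,
        [False] * 16]
hits = drum_hits(grid, 4)   # kick and snare coincide on step 4
```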
A recorder was added which allows the user to record the synth sounds, the arpeggiator, the drum sequencer, and audio from the computer's built-in microphone. I built this using the sfrecord~ object and routing all audio signals to it. I added a button sending a bang to an "open" message, which lets the user create a file and select its format, name and storage location, then a toggle which triggers sfrecord~ to start recording. In order to isolate the ezadc~ object, I had to use a gate and a toggle to open its outlet; the reason is that once the ezdac~ is turned on, Max 7 turns on audio globally, and it can't be isolated from the ezadc~. Finally, I added a number~ object to show the elapsed recording time (taken from the first outlet of sfrecord~). The first screenshot below shows the recorder in edit view (showing the build), and the second shows it in presentation mode, with instructions for the user:

I added brightness, contrast and saturation controls for a Jitter object (controllable using MIDI or dials on the GUI). These form a "graphic background" feature, where a sixth jit.pwindow object is placed behind all other objects in presentation view. I scripted a white panel to appear in front of the video when the user switches off the graphic background setting. I did this by connecting the buttons to "1" messages, and then adding two if statements: if a 1 is sent from the "on" button, a bang is sent to "script hide backgroundpanel" and the panel disappears; if a 1 is received from the "stop" button, a bang is sent to "script show backgroundpanel". The on and off buttons are connected to "on" and "off" messages which bang the jit.qt.movie object (these cannot be sent directly to jit.brcosa or jit.pwindow). The first screenshot below shows this in edit view, and the second shows my GUI with the graphic background switched on:

A couple of final additions and bugfixes:

  • Bug fix: If fader values for overtones in presets 3 and 4 were altered by the user, they snapped back to the stored preset whenever mtof was triggered. To fix this, I removed a button which was used to send the controller values from the subpatch to the main patch, and sent a bang from the preset object instead. This way, the faders only snap back when a preset is loaded, and the user can freely play notes and alter overtone volumes.
  • To store values in the main patch, outlets needed to be set up in all three subpatches, along with prepend set objects.
  • Bug fix: A loadbang is sent (with a 1000 ms delay) to a stop message which stops the background video immediately once the patch is opened, to reduce CPU usage; it also triggers a "script show" message to a panel scripted to appear in front of the background video (preventing the background from turning black in presentation mode). A loadbang is also sent to preset 1, which reduces CPU usage by triggering the if statements that stop all video playback except preset 1's. Previously, the patch would immediately try to read and play 11 different video files when opened, which caused Max to crash.
  • Bug fix: Some scripting messages had changed to "bang" rather than showing/hiding objects. This became identifiable once patch cords were routed in edit view: if a bang is sent to the right inlet of a message rather than the left, it changes the message content rather than executing it. This was fixed by reconnecting to the left inlets of the scripting messages.



  • Pan control for main output:
  • A metro and a random object were attached to the mtof object in the main patch, and a toggle to trigger this was added to presentation view.
  • A background audio file was added (to allow the user to change the rate of playback):
  • The screenshot below shows my finished patch in presentation mode:
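For the pan control listed above, one common recipe is an equal-power crossfade between the two channels. This is an assumption about the approach, not the patch's confirmed curve; the function is my own illustration:

```python
import math

def pan(sample, pos):
    """Equal-power pan: pos 0.0 = hard left, 1.0 = hard right.
    Keeps perceived loudness roughly constant across the sweep."""
    left = math.cos(pos * math.pi / 2)
    right = math.sin(pos * math.pi / 2)
    return sample * left, sample * right

l, r = pan(1.0, 0.5)   # centre: both channels at ~0.707
```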


