Video Post-Production Work Across 2017/2018

Introduction

My name is Rokaia, and I am currently an undergraduate student of Music Production at Limerick Institute of Technology (L.I.T.). In the final year of the Honours B.Sc., I have chosen to study Video Post-Production, where I have had the opportunity to apply both creative and technical skills in order to produce audio-visual content. For the past four years, I have focused primarily on music and audio production, but I have always had a great interest in film and visual media. Over the past months of this module, I have developed creative and technical skills that I think will be applicable to a variety of future personal projects. This blog post outlines some interesting editing and post-production techniques I have learned in a module led by Muireann DeBarra of L.I.T., where I have worked creatively on audio-visual production projects. As documentation forms an important part of my creative process, this blog publication also doubles as partial fulfilment of my Video Post-Production continuous assessment.

Who I am, and Why I chose Video Post-Production

I am a human with very large hair, who grew up moving back and forth between Co. Clare, Ireland and Agadir, Morocco. In the process, I feel that I have learned to adapt to two very different cultures. There’s a lovely picture of me just below with my kitchen spices. Today, I like to think of myself as a Progressive Secular Relativist, Equalitarian, Environmentalist, and Sassy-Feminista (although some do argue that I’m just a moody audio nerd). The main focus of my current undergraduate studies is research in psychoacoustics, where I am investigating auditory perception and the relationships between memory processes and the hearing system. This research is set to form a basis for my post-graduate studies, which will focus on creating and publishing a perceptual model, and applying my findings to multi-modal interface design. It is quite possible that I will be in academia for the remainder of my life. HOWEVER, I’m very much into audio and video, and have been a technology nerd since I was old enough to witness the release of Windows XP.

I chose to study Video Post-Production mainly because I felt that the skills I would learn would assist me in my personal creative projects. In the process, the module has also allowed me to develop multimedia skills that will be useful in various respects, including my future academic work. Upon completion of the module, I feel confident that I could create demonstrative videos to help with future research dissemination at audio, interactive systems and psychoacoustics conferences. The module has also inspired me to work on video alongside my experience in Max 7 (see my previous blog post). I now feel confident that I can combine my skills to create an audio-visual interactive installation for future art exhibitions. As a hobby to complement my academic work in Music Technology & Production, I enjoy producing and performing my own electronic music. Some of my greatest musical influences include Bon Iver, Imogen Heap, Four Tet, Francis and the Lights, and James Blake.

I have some experience in creating interactive audio-visual installations, both through assignments in my academic programme and in a professional capacity. Although this mainly involves significant technical knowledge (the Max/MSP, Arduino and Ableton Live software environments), I feel that the Video Post-Production module has greatly complemented the visual aspect of my creative work, and I plan to integrate this knowledge into the creation of further automated audio-visual performances and exhibitions. I collaborated with Limerick School of Art and Design (L.S.A.D.) student Phoebe McDonogh and exhibited an audio-visual piece as part of “Sound and Vision 1”, which took place in L.S.A.D. in early 2017. More recently, I have collaborated with Dr. John Greenwood (L.I.T.) in creating a performance-based AV installation involving performers Angie Smalis and Mark Carberry. This project was exhibited as part of Limerick Fringe Festival 2017.

Although my interest in video has mainly stemmed from abstract and more creative editing styles, such as music videos and art exhibitions, I do enjoy a variety of films, documentaries and some television shows. A film-maker who inspires me is Alex Garland, because of his individual and creative approach to content production. Garland’s latest work, “Annihilation”, exhibits an example of this, in the form of a psychological science-fiction thriller based on the novel by Jeff VanderMeer. The film follows a group of military scientists who enter a quarantined area to investigate its properties. It plays on the warping of time, and the mutation of landscapes and creatures caused by the environment alone. Another work that has inspired me (more relevant to the music scene) is the music video created for Sia’s “The Greatest”, which was written and directed by Sia and Daniel Askill, and choreographed by Ryan Heffington. The director of photography was Mathieu Plainfosse. The music video is experimental and open to interpretation, beginning with complete silence and showing grey imagery of a jail cell, a dilapidated hallway and motionless bodies. Throughout the piece of music, the visuals slowly migrate to a more colourful and life-filled theme. I feel that the concepts embedded in the musical composition are very well represented in the visual sequences.

Editing Styles and Practice

This year, I have learned to create visual content using two classic, industry-standardised editing styles: Continuity Editing and Montage Editing. The next sections describe each of these editing techniques and present examples of my undergraduate work applying the theory in practice. I will demonstrate my use of Avid Media Composer, an industry-standard video editing programme amongst high-end producers, and outline my use of various motion and image effects within the software. Examples of my work are scattered throughout this blog post, so feel free to scroll over the writing and simply play the videos if that is what you came here for!

Classic Continuity Editing

Although media trends change constantly, video content made for commercial cinema has traditionally been in the style of “Classic Continuity Editing”, which consists of sequences that create seemingly logical, continuous narratives, allowing viewers to suspend disbelief easily and comfortably. In modern film studies, continuity editing is considered the most effective means of creating smooth and seamless narrative experiences for the audience, and accordingly, it is sometimes referred to as “invisible editing”.

This module allowed me to focus on a late stage of the film production process, known as the post-production stage. In post-production, continuity editing is considered the predominant editing style for narratives, feature films and television programmes. The goal when editing in this style is to “smooth out” the natural discontinuity that exists across shots when sequence editing, in order to establish coherence across visual shots.

Sequences created in this style generally combine related shots, or different components cut from a single shot, often with similar lighting, characters, props, audio and setting. In continuity editing, it is important to retain the viewer’s attention and prevent them from being distracted by inconsistencies within the narrative. This is particularly important when relaying a time-scale as well as a physical location. In feature films, television programmes and other narratives, continuity editing provides a sense of familiarity and comfort to the audience. A lot of practice, and quite an elaborate skill-set, is required to maintain quality, hold the viewer’s attention, and allow the narrative to complement character development in continuity editing. The style contrasts greatly with other approaches, such as “Montage Editing” (on which I will provide further detail in a later section).

During my studies in Video Post-Production, I created two pieces that used Continuity Editing. The first was a 90-second promotional video. In this section, I will outline my use of Avid Media Composer throughout this project. But first, here is my final video:

Following is a description of my work throughout the making of this video.

File management and organisation is extremely important when working with audio and video, and particularly when using Avid Media Composer. This image shows the process of importing the footage into Avid Media Composer using the source browser, accessible from the File > Input menu while the project window is selected. File management was a vital early stage of my learning in this module, and the organisational practice has positively contributed to the quality of my work, as well as my time management, throughout this academic year. In the set-up used in the editing suites in L.I.T., all of the files need to be saved to a personal Z drive for each student on our lab server:

ImportingFootagetoProject.PNG

 

Folders-schoolproject.PNG

 

To keep an efficient and organised workflow, I created separate bins for each file type being used in my project.

The following bins were created in my project window: Sequences, Clips, Music, Titles

 

The footage provided with this assignment’s brief was one long clip, shot all at once as the camera operator moved to different locations in the classroom. Once the footage was imported as one long video file, I needed to create separate cuts and save them to a clip bin with descriptions (detailing the action, and possibly whether a shot is a close-up (CU), a mid shot (MS) or a wide shot (WS)), to make them easily accessible in my clip list. The following images show the process of cutting the master footage into sub-clips and saving them, with file names, to a clip bin.

Subclips-Done-Rokaia.PNG

I then created separate sequences in the Sequences bin, one to work on for each separate stage of the project. Progress was copied over and re-cut throughout the project.

Before beginning to edit my rough assembly, we spent a lab learning to use the different effects tools in Avid Media Composer (this was the purpose of my “FX Tool Sequence” shown in the image above). The effects tools are available in the window shown below, accessed using the key command Ctrl+3 (or Cmd+3 on a Mac). The tools can then be copied over to empty slots in my sequence editor for easy access. In my project, this process was particularly useful for the keyframe tool, which I used regularly to fade separate audio files in and out in the final cut.
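
As an aside for anyone unfamiliar with gain keyframes: they simply describe a gain curve that is multiplied into the audio over time. Here is a minimal Python sketch of that idea, assuming linear ramps; it illustrates the concept only, and is not Avid’s actual processing.

```python
# Minimal sketch of keyframed gain automation: a linear fade-in and
# fade-out applied to an audio clip (illustrative only, not Avid's code).
import numpy as np

def apply_fades(samples, sample_rate, fade_in_s=1.0, fade_out_s=1.0):
    """Ramp gain 0 -> 1 over the first fade_in_s seconds, and 1 -> 0 over the last fade_out_s."""
    gain = np.ones(len(samples))
    n_in = int(fade_in_s * sample_rate)
    n_out = int(fade_out_s * sample_rate)
    gain[:n_in] = np.linspace(0.0, 1.0, n_in)                    # fade-in ramp
    gain[len(samples) - n_out:] = np.linspace(1.0, 0.0, n_out)   # fade-out ramp
    return samples * gain

# a one-second noise burst as a stand-in clip
clip = np.random.uniform(-1, 1, 48000)
faded = apply_fades(clip, 48000, fade_in_s=0.25, fade_out_s=0.25)
```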

The motion effect was also very useful for playing clips in slow motion. This image shows the motion effect settings used in the final clip of my project.
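
The arithmetic behind slow motion is worth spelling out: playing a clip at a fraction of its original speed stretches its duration by the reciprocal of that fraction, and the extra timeline frames have to be generated by duplicating or blending existing ones. A small sketch, using made-up example numbers rather than my actual settings:

```python
# Sketch of slow-motion arithmetic: at playback speed s (1.0 = 100%),
# duration scales by 1/s and the timeline needs correspondingly more frames.
def slow_motion(frame_count, fps, speed):
    """Return (new_frame_count, new_duration_s) for a motion effect at `speed`."""
    new_duration = (frame_count / fps) / speed
    return round(new_duration * fps), new_duration

# e.g. a 100-frame shot at 25 fps played at 50% speed:
print(slow_motion(100, 25, 0.5))  # (200, 8.0) - twice the frames, twice the length
```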

I then used the title tool to create text screens, which I used to convey information in my promotional video (explaining the idea, and displaying a relevant quote from influential figure Malala Yousafzai).

TitlesBin.PNG

The following image shows my Rough Assembly sequence, which contains rough visual cuts only (the audio shown is from the original footage, not yet arranged here).

roughsnipvideo2.PNG

This rough assembly sequence eventually evolved into a second edit, called the First Pass Sequence, containing my visuals edited more finely, working towards a coherent 90-second idea.

After this, I moved on to create a second pass sequence, containing visuals and titles, along with some effects and transitions:

SecondPass.PNG

And finally, I created the final cut, containing final visuals with audio, transitions, titles and final effects:

Screen Shot 2018-04-23 at 10.30.51.png

The following screenshot demonstrates the process of importing external audio. I composed an original audio track for this project, which I recorded and mixed in Logic Pro X. The wav file was then imported into Media Composer as shown below:

Screen Shot 2018-04-23 at 10.31.27.png

Following is a screenshot of the Logic session file where I recorded this music:

Screen Shot 2018-04-23 at 10.31.50.png

Further details: The following screenshot shows my general layout for efficient workflow in Avid Media Composer, as instructed in class. Here I have prioritised screen space for my source and programme monitors (top right) and my sequence timeline (bottom right). The space on the left of the screen is then kept for separate windows, including the project window (usually on the top left). I keep my Sequences bin open and placed on the right of the project window, so that I can switch between sequences quickly whilst editing. The space below these two windows is then used for any additional bins that are currently in use. In this screenshot I was using the Clips bin, for example, to open each clip separately in the source monitor:

workflowSchoolProject-rokaia.PNG

The following images demonstrate the exporting process in Avid Media Composer. The export window is accessible from File > Output > Export to file, and options are then selectable for the Quicktime (.mov) and audio formats. Before exporting, however, some preparatory steps were completed within the project, such as:

  • Selecting the correct video quality in the timeline.
  • Mixing down all separate video files to one single video track.
  • Mixing down separate audio files to one single audio track.

A final mixdown sequence was created in the sequences bin in order to store audio and video mixdown files without altering the final cut timeline (first screenshot below). The files were then exported as Quicktime movies and submitted for grading.

Final mixdown sequence:

Screen Shot 2018-04-23 at 10.32.57.png

Export:

Screen Shot 2018-04-23 at 10.33.04.png

The second project that I created using the Continuity Editing style involved choosing a track and creating a music video, using footage provided in class from a short film named “New Boy”, created by Steph Green. The original film captures the experience of a nine-year-old African boy, Joseph, who had to move away from his home after losing his father. The footage clearly demonstrates his troubles settling in as a new student in Ireland. Here is my video, followed by a description of my work within Avid Media Composer throughout this project:

The following screenshot demonstrates the process of importing the master footage provided for this project. Visuals from the short film “New Boy” (written and directed by Steph Green) were used. The full short film was imported as a single video from the shared Portfolio Footage folder, as shown in the screenshot:

Screen Shot 2018-04-23 at 12.11.21.png

 

Once the video was imported, the project format had to be adjusted to suit the footage, with its rendered letterbox frame. The video needed to be 25 frames per second, with an aspect ratio of 4:3. These settings are accessible under the format tab in the project window as shown in the screenshot. I then created the following bins and became familiar with the footage in order to cut the master footage into sub-clips: Clips, Music, Sequences, Titles, Master Footage.
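
To make the letterbox arithmetic mentioned above concrete: a picture wider than the frame fills the frame’s width, and the black bars simply absorb the leftover height. The 1.85:1 inner ratio and PAL frame size below are hypothetical examples, not measurements of the actual footage.

```python
# Sketch of letterbox arithmetic: a wide picture inside a narrower frame
# leaves equal black bars above and below.
def letterbox_bar_height(frame_w, frame_h, picture_aspect):
    """Height (px) of each black bar for a picture of aspect w/h filling the frame width."""
    picture_h = frame_w / picture_aspect
    return (frame_h - picture_h) / 2

# a 720x576 PAL frame holding a hypothetical 1.85:1 picture
print(round(letterbox_bar_height(720, 576, 1.85)))  # ~93 px top and bottom
```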

 

Screen Shot 2018-04-23 at 12.11.57.png

I then cut the video into clips, giving them names relevant to the action on screen. To do this, I had the master footage in the control window, each time selecting in and out points (using the keyboard shortcuts i and o), then holding the alt key and clicking and dragging to copy the cut section into my newly created Clips bin. Before beginning to edit the footage, I spent some time experimenting with the different tools available in the FX palette in Avid Media Composer (to do this, I created a practice sequence).

Screen Shot 2018-04-23 at 12.13.10.png

Once I had decided on a theme for the music video, I decided to import some stock footage (Creative Commons). The footage included some military and war shots. I also created a new bin in my project to store this imported stock footage:

I then began to create my rough assembly sequence. The rough assembly contained mainly footage from the short film, as I experimented with some of the stock footage. I focused on the visual assembly here, and did not insert the track at this early stage of the project:

Screen Shot 2018-04-23 at 12.15.15.png

Screen Shot 2018-04-23 at 12.15.22.png

To suit the theme of the video, I decided to use my original track “L’ess”, which is mellow, with a folk/ballad-style structure and lyrics, crossed with some electronic processing.

I then copied the Rough Assembly to create a new sequence, which would be my Rough Cut – First Pass, where I made some finer edits and added more stock footage. I worked on the first pass to almost relay a narrative, keeping the chosen music in mind; however, I was still working only on visuals, with no audio at this stage:

Screen Shot 2018-04-23 at 12.23.44.png

Screen Shot 2018-04-23 at 12.23.48.png

In the second pass, I inserted the audio track. The audio was cut by about one minute to suit the project brief. To ensure smooth transitions, I used keyframes to fade the song in and out at the beginning and end, avoiding sharp cuts and artefacts. Some finer edits were made in the second pass in order to match some of the movement and action to the audio, allowing the song to lead the visual assembly. At this stage, I experimented with some effects; however, I felt that dramatic effects and transitions did not suit the project, taking away from the narrative. On two shots, I used Timewarp to slow down some of the video, which I felt added to the emotional effect of the footage.

Screen Shot 2018-04-23 at 12.23.53.png

Once the motion effects had been applied, I also used a letterbox effect on the imported footage for consistency in all the visuals (as seen across the source and programme monitors below). This screenshot shows my Final Cut, created originally from a copy of the second pass, with final edits, transitions, and titles to close the video:

Screen Shot 2018-04-23 at 12.27.52.png

Screen Shot 2018-04-23 at 12.27.58.png


This image shows some of the titles inserted at this stage of the project:


Screen Shot 2018-04-23 at 12.30.26.png

Finally, I created audio and video mixdowns, preparing my project for export as a Quicktime movie. These rendered all cuts as one final video, with one stereo audio track, which I inserted into a new sequence (Mixdown Sequence):

 

Choice of Music:

As mentioned above, I chose to use my original track “L’ess”, as I thought that it suited the theme and mood of the video. “L’ess” is, ironically, a composition that stemmed from ideas on the topic of composition itself. Though the lyrics are quite cryptic in this sense, it is a piece that has evolved over time and become more relevant to its purpose as it developed. I originally composed this song as a single vocal line accompanied by one acoustic guitar track: a basic chord progression and the typical structure of a modern folk song. My main inspiration for this song was Elliott Smith, an American singer-songwriter and multi-instrumentalist. Because of the nature of the track, the cryptic lyrics easily fit to tell a story, with its mellow theme, timbres and discords fitting the mood of the footage and adding to the emotional quality.

Conclusion and Learning Outcomes:

Overall, this project provided me with a lot of knowledge and experience, particularly regarding the use of Avid Media Composer, the video editing environment and all of the steps required. Prior to this project, I had never been faced with the challenge of creating a music video arrangement, and the predefined story and imagery provided a new creative challenge. I feel that this project greatly reinforced some of the topics discussed earlier this year on the different types and styles of video editing, as well as experimenting with and applying effects. I also gained some practice in building and enhancing a sequence by performing small edits and tweaks along the way, in order to avoid changes that might otherwise seem quite dramatic at the Final Cut stage. If I were to repeat this project, I would allow more time to expand and diversify my abilities regarding editing styles, beginning to explore the area of performance, dramatic effects and more artistic forms of video editing.

“L’ess” can be described as a mellow song, of alternative singer-songwriter style, with values influenced by various elements of different styles of music, such as: a classical approach; choral harmonies transformed using electronic processing; or the drum beat, inspired by hip-hop and electronic artists such as Purity Ring; with the general underlying structure of a traditional folk song.

 

Important Note: All Rights Reserved on the copyrighted footage used in this project. It is used here for educational purposes only, within the confines of the Limerick Institute of Technology labs. I did not create this visual content. Much of this content is strictly the intellectual property of Steph Green and the other film-makers involved in the production of New Boy. The original video can be found HERE.

Audio by Rokaia Jedir, All Rights Reserved – Contactable through blog contact page (click HERE).

Montage Editing

Screen Shot 2018-04-23 at 13.45.23.png

During the second semester of my Level 8 programme in Limerick Institute of Technology, I was required to use contrasting editing approaches such as Montage-Style Editing, a style generally used when an editor aims to generate, in the mind of the viewer, new associations among shots that may be of entirely different subjects, or at least of subjects less closely related than the continuity approach would require.

Advertisements and movie trailers very often use montage-style editing, in order to convey as much information as possible within a short space of time.

Movement and action are vital when putting together a montage-style sequence. It is important to pay attention to direction, and it helps to capture the action flowing from one shot to another. Learning to select the right cuts, when limited footage is provided, in order to keep the theme and create a fast-moving video was a challenge, and prepared me for my montage assignment.

Montage-style editing is a technique in which visuals are cut and sequenced in a fast-paced manner in order to convey that the timeline is compressed, and to relay a lot of information over short periods of time. Montage editing was primarily established by Sergei Eisenstein in the 1920s, who advocated various developing types of montage at the time (Lefebvre and Furstenau, 2016). As one of the earliest to experiment with the juxtaposition of cuts, Eisenstein eventually revealed montage to be one of the basic principles of film composition in general. As “Soviet Montage Theory” developed at this time, it became widely understood and accepted that creating cinema was heavily reliant on editing. The word “montage”, accordingly, originates from the French for “editing”. Amongst modern filmmakers, the principles of montage-style editing are often implemented in promotional videos for events, product advertisements, and film trailers. They can also be seen in feature film sections where a lot of information needs to be conveyed hastily, for example, to introduce a new character in an opening scene.

In preparation for this project, labs were spent creating various 30-second montage sequences in Avid Media Composer using various feature-film clips. The aim of these tasks was to familiarise ourselves with fast-moving video, continuous action and referring to a common theme throughout the video. In these labs, themes were pre-determined in the film clips provided. Contrastingly, our promotional video assignment would be edited to an original theme detailed by the Creative Directors, and discussed and detailed in a brief for the CBFP students. Concept Development and Research: Two modern montage-style promotional videos were studied and discussed in detail as a class: a promotional video for “The Limerick Spring, Festival of Politics and Ideas”, and “Limerick 2020 – Multiplicity”, a video representing Limerick as a culture-filled city, and the Limerick 2020 project as a whole. Both of these examples use fast-paced editing styles, with coherent and consistent action, movement, colouring and titles. These were useful and relevant examples, and allowed us to consider implementing similar techniques and cuts in our own montages.

During my studies in Video Post-Production, I created a promotional video using the Montage-Style Editing approach. In this next section, I will outline my use of Avid Media Composer throughout this project. Here is my final video:

Following is a description of my work throughout the making of this video.

This was a creative audio-visual production project to deliver a promotional montage-style video for LIT Music Festival 2018, submitted in partial fulfilment of my Video Post-Production portfolio (semester 2) assessment at Limerick Institute of Technology. The project is based on my collaboration with peers in Music Production, and with Creative Broadcast & Film Production (CBFP: Year 2) students. Throughout this project, the Creative Broadcast students acted as a production team, shooting and sourcing visual content, while my classmate (Anthony Byrne) and I worked together as Creative Directors and took on the editing and post-production tasks. Montage-style editing was studied and implemented in the production of this video, and post-production work was completed using Avid Media Composer.

Initial milestones included decision-making in relation to the main video theme, and building content ideas around that theme. We decided to centre the video around unity and inclusivity within the music scene in Limerick City and across generations of musicians. An early creative brief was drafted to build around this concept, in preparation for presenting our footage requirements to the CBFP students. The brief included concept and structure notes, and a detailed shot list with descriptions, locations and the props required. Upon meeting with the production team, and after careful consideration with regard to feasibility, some changes were made to the creative brief across three labs. To our advantage, the production team were knowledgeable and experienced in film production and video editing, and were able to provide us with some strong advice relating to our ideas of the end result. After our first meeting, some shots were removed from the list and replaced with more effective content. For example, in our earliest draft, Anthony and I had requested many still shots at various locations in Limerick, in order to represent a journey, and to use visuals to emphasise the Limerick-related concepts. At our first meeting, the production team advised us that still shots may not be ideal for montage-style editing due to the lack of motion. They advised us that lively, fast-moving and colourful shots should be used, and that still shots can turn out to resemble photographs rather than video, which would negatively affect our end product. A total of three drafts of the creative brief were made in this assignment.

Through this video, I wished to relate to the topic of unity and inclusivity within the music scene in Limerick City and across generations of musicians. I wanted to create a fast-paced, time-lapse inspired walkthrough of the Limerick music scene. The dominant features of Limerick City would be represented, with the main colour scheme working around red brick – hot colours which also match our Music Festival branding. Colourful, action-filled shots and long walking shots would need to be sped up in post-production, as well as many still shots of landmarks, talent and traffic in the city.

Once the above creative brief had been finalised, we liaised with the production team consistently across the two following weeks, and on some occasions we also worked together to gather footage (the branding reveal and the Millennium performance shots). While they were busy collecting the rest of the footage, we spent our lab time learning about montage editing, creating practice sequences, and experimenting with the 3D Warp tool and other effects available in Avid Media Composer. Some time was spent preparing for these labs specifically, examining and experimenting with the footage as outlined above.

Workflow and Project Management

Once all of the footage had been gathered from the CBFP students, I examined and experimented with it using Apple iMovie prior to creating a project on the Avid Z drive, in order to ensure that the important shots outlined in our creative brief had been received, and that all of the necessary footage to realise our concept had been provided. A trial montage was created using iMovie in order to familiarise myself with the footage and facilitate a more efficient workflow in the Avid labs:

Screen Shot 2018-04-23 at 13.57.45.png

Once this was complete, the footage was copied to the Z drive in the Avid lab, and I proceeded to create a new project in Avid Media Composer. The project was formatted so that the resolution was 1920×1080 pixels, at 25 frames per second, with an aspect ratio of 16:9. Once these preferences were set, I began to import the footage using the source browser, naming and organising the clips appropriately. Initially, some time was spent solving technical issues encountered whilst importing the footage. Here, I learned about the different methods that can be used to import footage, about the different formats, frame rates and resolution settings available, and how they might affect import. Project bins and various clip bins were created at this early stage. In previous montage-practice labs, master footage was cut into sub-clips upon importing it into Media Composer. In this case, however, the majority of shots provided were only seconds long, so rather than cutting a lot of sub-clips, I simply cut out blurred and shaky sections, and renamed all of the clips according to the action in each shot. I then organised the clips in separate bins depending on the action or where they were shot (Performance, Outdoors, Props):

Screen Shot 2018-04-23 at 13.58.45.png

Once the project was set up and organised, the Rough Cut: First Pass sequence was created, and I began to apply the techniques learned in the montage practice labs. As I became familiar with the footage through making a first pass sequence, I made some further creative decisions with an end vision in mind. I decided here that I would use large titles and transitions overlaying the visuals in order to convey a message, in a similar style to that of the “Limerick Spring” promotional video mentioned earlier. To begin the first pass sequence, the branding reveal, which was intended to be spliced into sections of the video, was first placed on a video track, and a motion effect was added in order to time-lapse it until it was 30 seconds long. As other clips were added to new video tracks, the branding reveal shot could then be used as a guide for the full length of the montage.
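
The speed-up needed to land the reveal on exactly 30 seconds follows directly from the source length: the motion-effect rate is the ratio of original to target duration. A quick sketch, using a made-up source length rather than the real clip’s:

```python
# Sketch of time-lapse arithmetic: the rate needed to squeeze a clip into a target length.
def timelapse_rate(original_s, target_s):
    """Playback rate (e.g. 8.0 = 800%) that fits original_s seconds into target_s."""
    return original_s / target_s

# e.g. a hypothetical 4-minute branding reveal squeezed into 30 seconds:
print(timelapse_rate(4 * 60, 30))  # 8.0, i.e. 800% speed
```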

The following images demonstrate the creation of a rough-cut: first pass sequence:

Screen Shot 2018-04-23 at 13.59.47.png

The second pass sequence was heavily reliant on titles, transitions and effects; however, some of these were disregarded in the final cut. Using the music festival branding provided, a three-second opening scene was created to introduce the montage. The Blur Effect (effect palette > images > blur effect) was added to the branding while the titles were sequentially superimposed, and finally, the following image was revealed:

Screen Shot 2018-04-23 at 14.00.30.png

Much more footage was added in the second pass, including prop shots taken in music shops, and some performance shots from the Millennium Theatre (taken during one of our House Band rehearsals). In the second pass, I added titles to display the following message throughout the video, referring to our main theme: “March 2018 marks the return of LIT Music Festival to unite the Limerick music scene.” Although the opening animation in this second pass was later disregarded, it allowed me to practise working with titles over a background shot, which was extremely useful for the final cut. As this project progressed, titles became a more prominent element, and I needed to put more thought into the message to be conveyed and the fluidity of the text on screen. As I was working in Media Composer, I created a Word document to refer back to as I created the titles and transitions. Upon receiving feedback on my second pass sequence in class, I used the document to help me make decisions on the message, by arranging the different phrases and active verbs that could be used over the visuals. In the final cut, I decided to go with the following phrase: “Perform, learn, move, to unite Limerick in music”. The words were matched carefully to the visuals, and subtle colour changes and effects were added to each title.

Screen Shot 2018-04-23 at 14.01.36.png

The 3D Warp tool was also used to animate and emphasise the word “Limerick”, referring again to our main theme:

Screen Shot 2018-04-23 at 14.02.08.png

Once all of the shots, titles and effects were placed and edited appropriately, a 30-second snippet of audio was created using Ableton Live, where I recorded and manipulated various vocal samples and backed them with percussive sounds in a drum sequencer. The following image demonstrates my work on various scenes in Ableton Live. I used a lot of bright and “major” sounds to accompany this fast-moving visual sequence:

Screen Shot 2018-04-23 at 14.02.42.png

Finally, the following image shows my final cut, including audio. The imagery was altered slightly in order to cut effectively to the music:

Screen Shot 2018-04-23 at 14.03.01.png

In recent years, video advertising has become increasingly popular in various respects. Throughout this project, I have learned that video advertising can create a relatable and stimulating environment for consumers by meeting their expectations for content. It allows brands to inform quickly and entertain visually, which generates a powerful platform for conversion when accurately targeted across user behaviour patterns. This project enabled me to learn many useful skills that are relevant to industry-related work. I have learned a lot about montage-style editing, and why it may be used in video marketing and event promotion. I feel that my editing practice throughout the past weeks has allowed me to develop my skills, both with event promotion and with video editing in Avid Media Composer. Although it was a major challenge to begin with, I am now confident in selecting important snippets from large sets of footage, and I can shape 30 seconds of visual content into an effective promotional montage. Additionally, the editing practice throughout this assignment has enabled me to work efficiently in Avid Media Composer, giving me more time and improving my overall quality of work. I feel that the smaller deadlines throughout this project (including lab work) helped me with time management, which is often a challenge when working outside of class. In conclusion, I feel that I have fulfilled the project brief requirements and created a montage-style video that is usable to promote LIT Music Festival on social media platforms.

If I were to repeat this assignment, I would spend more time on file and project management in Avid Media Composer, and import all footage into a new project in Media Composer prior to the labs. This was a very time-consuming task during the lab, and solving technical issues was a major challenge in beginning this assignment. I now understand that it is important to provide enough storage space to allow Media Composer to store all of the footage required. Another major learning outcome is that movement and action are vital when putting together a montage-style sequence. It is important to pay attention to direction and colouring, and it helps to capture the action flowing from one shot to another.

A couple of further examples of my work in video labs:

 

 


Software Instrument Build using Max 7

I have built a Max patch that functions as a software instrument and can be used with MIDI input & various controls 🙂 This blog post outlines the stages of my build. Feel free to ask any questions in the comments!

I initially created a simple sequencer, to later integrate in two parts of the software instrument: as a drum machine, and as an arpeggiator tool for the main synth presets. To do this, I used the tempo object. This way, I could input parameters so that the tempo object would output a certain number of divisions of a beat at a specified rate. The select object was then used to output 16 successive bangs. I also used the groove~ object and its required inputs, using an if statement here in order to bang a 0 to begin playback in the groove~ object.
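
For anyone who hasn’t used the tempo object: the point is just to emit evenly spaced “bangs”, one per step. A rough Python analogue of the timing, assuming 120 bpm and a 16-step bar (the real patch does this with tempo and select rather than a loop):

```python
# Rough Python analogue of the tempo + select combination: emit 16 evenly
# spaced step triggers per bar at a given BPM.
import time

def run_sequencer(bpm=120, steps=16, beats_per_bar=4):
    step_interval = (60.0 / bpm) * beats_per_bar / steps  # seconds per step (a 16th note here)
    for step in range(steps):
        print(f"bang -> step {step}")  # in Max, select routes each bang to one of 16 outlets
        time.sleep(step_interval)

run_sequencer()
```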

To allow a user to turn on and off audio samples within the sequencer, 16 toggles are connected to the first inlets of 16 gswitches, and each button is connected to the last inlet of each switch. This way, a bang is sent to trigger playback of the audio file only when the toggle is switched on.

I then began creating some instrument presets using additive synthesis. In the picture below, a number for a fundamental frequency (f0) is entered manually using the number object (later driven by MIDI input and mtof). The number is then multiplied by each overtone number (the frequency of each is displayed using number boxes). Each number is sent to a cycle~ object to create a sine wave. Finally, sliders are used to set the amplitude for each partial, with the slider range set as 0 to 1 (allowing float outputs). These amplitudes affect each signal using the *~ object. The relative amplitudes will be different across presets (this method is used to set overtone gains for the second two sounds; the first two are controlled using envelopes). The output signal is divided by 10 to avoid clipping, and the spectroscope~ object is used to visualise the output signal.

For the first two presets, I used a separate envelope on each partial. I designed the envelopes using the function object along with the line~ object (as in the Graphic Envelope help patch). These are also multiplied by each signal (shown below), and will also be different across presets. I added a notein object to receive messages from a controller, a kslider to visualise the input on the screen (and also to input notes manually), and an mtof object to convert the MIDI messages to frequencies. I then connected the mtof object to the number box for the fundamental frequency. I connected the kslider’s outlet to a button and then a send object, in order to send a bang to my envelopes every time a note is played. The function objects receive the bang using the receive object and the same name as the send, which for my first preset is “bangEnvelopes”. Each envelope for this first preset is 5000ms long (this is edited in the inspector for each function object). The next screenshot shows my first instrument preset (the gain sliders are to be removed in this one, as envelopes affect the overtones – they are only left in to be copied to presets 3 and 4).

Next, I copied this design, changing the overtone values and amplitudes/envelopes to create a new sound (for a separate preset). The second preset has additional harmonics (f0*11, f0*12 and f0*13), which share envelopes with f0, f1 and f2. The third preset uses only odd harmonics. For the last preset, I altered the frequency multiplier values to generate dissonant overtones. The same MIDI controls are connected to all four sounds, and a selector~ is used in order to choose between them and send only one signal at a time to the ezdac~. This next picture shows the four additive presets together.
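
The additive design above boils down to a short formula: the output is a sum of sine partials at multiples of f0, each with its own gain. A minimal numpy sketch of that idea, with arbitrary example gains (the mtof conversion is the standard MIDI-to-frequency formula):

```python
# Minimal sketch of the additive presets: one sine per partial, per-partial
# gains, summed and divided by 10 to avoid clipping (as in the patch).
import numpy as np

def mtof(m):
    """MIDI note number to frequency, as Max's mtof does: 440 * 2^((m - 69) / 12)."""
    return 440.0 * 2 ** ((m - 69) / 12)

def additive(f0, overtones, gains, dur=1.0, sr=44100):
    t = np.arange(int(dur * sr)) / sr
    out = sum(g * np.sin(2 * np.pi * f0 * n * t) for n, g in zip(overtones, gains))
    return out / 10.0

# middle C with five harmonics at example gains
sig = additive(mtof(60), overtones=[1, 2, 3, 4, 5], gains=[1.0, 0.5, 0.33, 0.25, 0.2])
```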

I then linked a section of the sequencer with an additive synthesiser in order to use it as an arpeggiator. The next picture shows my arpeggiator; much of the previously built sequencer has been removed, as it was not needed for a simple arpeggiator:

The tempo and select objects here create a sequence of notes, which depends on the selected note on the keyboard. The select object sends a set of timed bangs to each of its outlets, to which I have connected separate buttons. Each button then triggers a separate number to be added to the input note. Different numbers create different intervals, and the tempo can be adjusted using the dial above. The output of this arpeggiator is then connected to the mtof object, which is connected to the fundamental frequency in each additive preset. Next, I used the preset object in order to store the values on screen for my current presets (I initially created this to store the fader values for presets 3 and 4, as shown in the screenshot). This object can also be used to allow a user to store their own settings when using the instrument.
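
In other words, each arpeggiator step adds a fixed interval to the held note before the mtof conversion. A small sketch of that logic; the semitone offsets below are an example pattern, not necessarily the numbers wired into my patch:

```python
# Sketch of the arpeggiator logic: per-step semitone offsets added to the
# held MIDI note, then converted to Hz as mtof does.
def mtof(m):
    return 440.0 * 2 ** ((m - 69) / 12)

def arpeggiate(base_note, offsets=(0, 4, 7, 12)):
    """Yield (step, frequency) for one pass through the pattern."""
    for step, offset in enumerate(offsets):
        yield step, mtof(base_note + offset)

for step, freq in arpeggiate(60):  # holding middle C
    print(f"step {step}: {freq:.1f} Hz")
```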

Next, I made a synthesiser that integrates subtractive and AM synthesis techniques for further presets. Rather than using multiple reson~ objects to combine single-band filters, I used the filtercoeff~ and biquad~ objects to allow various filter types and various input signals. In order to change the type of filter generated, messages are sent to the filtercoeff~ object. To allow the user to choose efficiently between messages, I used the umenu object, entering umenu items in the inspector corresponding to messages that the filtercoeff~ object accepts (these are available in the filtercoeff~ help file).

The biquad~ object, which creates the filter based on coefficients provided by the filtercoeff~ object, accepts multiple input signals. Here I have used multiple signal generators, in order to allow a user to choose between them or layer them to create different textures:

(Note: the numbers that I have input for the rect~ object arguments are frequency and pulse width).
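
To give a sense of what filtercoeff~ computes for biquad~: for a “lowpass” message it produces the five coefficients of a second-order filter from the cutoff and Q. The sketch below uses the standard audio-EQ-cookbook low-pass formulas, which is the conventional way such coefficients are derived; I am not claiming these are filtercoeff~’s exact internals.

```python
# Cookbook-style low-pass biquad coefficients, roughly what a "lowpass"
# message asks filtercoeff~ to supply to biquad~, which then applies
# y[n] = b0*x[n] + b1*x[n-1] + b2*x[n-2] - a1*y[n-1] - a2*y[n-2].
import math

def lowpass_coeffs(fc, q, sr=44100):
    w = 2 * math.pi * fc / sr
    alpha = math.sin(w) / (2 * q)
    cosw = math.cos(w)
    b0, b1, b2 = (1 - cosw) / 2, 1 - cosw, (1 - cosw) / 2
    a0, a1, a2 = 1 + alpha, -2 * cosw, 1 - alpha
    return [c / a0 for c in (b0, b1, b2, a1, a2)]  # normalised by a0

print(lowpass_coeffs(1000, 0.707))  # 1 kHz low-pass at a Butterworth-like Q
```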

I used the adsr~ object to allow users to shape the amplitude envelope using dials, as shown in the picture below. The output from the kslider is scaled so that it is taken as 0 to 1 rather than 0 to 127 (the scale object’s arguments are: input min, input max, output min, output max).

Some additional features were then integrated in this subtractive/hybrid patch to create a synthesiser that will allow me to store different multi-textured presets. I added the input signals together, and used an LFO to integrate AM synthesis as a tremolo effect. I then added this subtractive patch to the additive presets so that it can be selected as a separate preset (or to choose between multiple subtractive presets). Below is a screenshot of the subtractive subpatch at this stage:
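
As an aside, the tremolo itself is just amplitude modulation: the carrier signal is multiplied by a slow LFO so its gain wobbles. A minimal sketch, with an assumed 5 Hz rate and 50% depth rather than the values in my patch:

```python
# Sketch of LFO-driven tremolo (AM): the carrier's gain swings between
# 1 - depth and 1 at the LFO rate.
import numpy as np

def tremolo(signal, sr, lfo_hz=5.0, depth=0.5):
    t = np.arange(len(signal)) / sr
    gain = 1.0 - depth * (0.5 + 0.5 * np.sin(2 * np.pi * lfo_hz * t))
    return signal * gain

sr = 44100
carrier = np.sin(2 * np.pi * 220 * np.arange(sr) / sr)  # one second of 220 Hz
wobbled = tremolo(carrier, sr)
```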

Combined with the additive presets and arpeggiator in the main patch, this is what it looked like:

I then created subpatches to encapsulate the presets and keep the main patch tidy. In order to reduce CPU usage, I used the mute~ object to disable signal processing in all subpatches excluding the one that is selected and sent to the output. To do this, I used if statements.

I then had to use a second, new preset object in the subtractive subpatch, in order to store different settings for four new sounds. These can be triggered by recalling the numbers in the main patch, which are also set to trigger the fifth subpatch. I have been using a subpatch to store the presets (screenshots below), which I also connected to the subtractive patch in order to transfer the storage location. The new preset object in subpatch 5 can then copy over settings from the main patch.

Access to these settings from the main patch:

Troubleshooting: Some errors I encountered at this stage took some time to resolve, such as hanging notes (ineffective envelopes) and a routing problem in the subtractive synthesiser which rendered my ADSR and main filter inactive. To resolve these issues, I worked on re-routing the subtractive patch. The following screenshot shows my signal flow:

Next, I used Jitter in the main patch in order to display a visual for each preset. The visuals contain the preset name and information on the synthesis technique used, along with a graphic that I created using Processing (Java-based) and iMovie. To do this, I connected each preset number (used to select the sounds in presentation mode) to if statements, which output either 0 or 1 depending on which preset is selected. These 0s and 1s control which video is played using the jit.xfade object, as shown in the screenshot below. The jit.xfade object allows me to organise the presets in pairs. A loadbang is used to ensure the files begin to play once the patch is open.

I then created some filepaths that are triggered once the patch is open (using loadbang), in order to load in each preset’s visual display automatically. As these filepaths are specific to my personal computer, the patch will need to be rendered as an application to present its full functionality.

Upon loading in 10 different video files, I began to encounter some glitches with Jitter as my patch became more CPU-intensive. In order to reduce CPU usage, I used if statements to stop playback of all video files apart from the one corresponding to the synth preset currently in use. I also encountered some issues when loading 10 different video files into one single jit.pwindow object, with videos flashing in and out regardless of whether they were set to play or stop. To prevent this, I used 5 jit.pwindow objects together (one for each jit.xfade object, since each jit.xfade can only blend between two sources).
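
The routing logic behind the pairs is compact enough to sketch: a preset number picks which jit.xfade pair to use and which side of the crossfade to show, while every other movie is stopped. This is an illustration of the logic in Python, not the Max patch itself:

```python
# Sketch of the preset-to-video routing: ten videos in five jit.xfade pairs.
def route_preset(preset):  # preset in 1..10
    pair = (preset - 1) // 2                       # which jit.xfade / jit.pwindow pair
    xfade = float((preset - 1) % 2)                # 0.0 shows the pair's first movie, 1.0 the second
    playing = [p == preset for p in range(1, 11)]  # stop all but the active movie
    return pair, xfade, playing

pair, xfade, playing = route_preset(7)
print(pair, xfade, playing.index(True) + 1)  # -> 3 0.0 7
```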

 

 

 

To use the 5 windows together in presentation mode, I layered them on top of each other and scripted them to show only when their specific preset was selected. To do this, I first added scripting names to each pwindow (accessed via the inspector), and used the messages “script show objectname” and “script hide objectname” connected to a thispatcher object, which sends messages to the main patch. These scripting messages were then connected to the preset selection numbers.

At this stage, I began to create my user interface by adding objects to presentation mode. In order to allow adjustments to be made to presets 5, 6, 7, 8, 9 and 10, I needed to add controls which live in subpatch 5. To do this, I copied the controls to the main patch and created new inlets to link them (de-encapsulating the subpatch didn’t work, because of the preset save locations):

I also added a new 16-step arpeggiator, so that a user can choose between an 8-step arpeggiator that ascends and repeats, or a 16-step one that ascends and descends:

So far, I had displayed the two arpeggiators; the main preset selection controls; the Jitter pwindows with linked preset videos; the subpatch 5 synth controls (ADSR, LFO and input signals); and the main output controls in presentation mode:

For synthesiser presets that are encapsulated in subpatches, controller output is mapped to copied items in the main patch using a prepend set object. Copies of dials and faders need to be in the main patch in order to be added to presentation mode:

These are linked to the copied objects outside of the patch using the new outlets created:

In some cases, I encountered difficulty with objects which could not output patterned line values (such as the graphic envelopes below). For this reason, I de-encapsulated subpatches 1 and 2 in order to include these controls in the user interface:

The following screenshot shows the error with the graphic envelope objects as mentioned above:

I then needed to show/hide controls depending on the preset selected in the UI. I did this by sending scripting messages to a thispatcher object, which hide all controls if any preset other than the one that the controls belong to is selected. In order to send scripting messages, each object needed a scripting name, which can be input using the inspector once an item is selected. The screenshots below show my scripting process (note that the black live.buttons are the preset selectors in the UI; these send a bang to all of the scripting messages to show/hide objects in presentation mode):

In the case of preset 3, no prepend set objects were used. To output the stored fader values (determining harmonic volume), a bang is sent to flonum objects, which take the fader values and send them to the patch outlets. These outlets are then connected to the matching faders in the main patcher. Flonum objects were inserted in the signal chain in order to check that the correct numbers were taken:

I sent a final message to thispatcher in order to set the zoom level immediately once a user opens the patch. The UI needs to be at 75% for all of the objects to be viewable. (Note that a delay of 50ms is used to prevent a crash when Max is opened: many loadbangs are being used at once, and the delay allows the computer to process filepaths etc. before adjusting the zoom. This happens too fast to see, but 50ms actually does make a difference on my computer.):

The following images show my GUI at this stage, as different presets were selected. The blank space on the left is left for a sequencer and recorder.

Preset 1, with interactive envelopes and AM speed:

Preset 2, with interactive envelopes and overtone values (in Hz):

Preset 3, which allows the user to change the overtone volumes (preset 4 will be the same):

Preset 4, which allows controls for the filter input signals, AM, FM and ADSR. Presets 5, 6, 7, 8, 9 and 10 use the same controls, as they are generated by the same synthesiser (originally subpatch 5, which integrates subtractive, AM and FM synthesis). Control values are restored when the presets are selected, showing the user the interactive components:

Features added next: sequencer, recorder, graphic background with midi controls for brightness, contrast, saturation.

I integrated my sequencer (the first thing I built ^ top of blog) as a 16-step drum machine which the user can switch on, adjust the tempo of, and play along to. Four drum sounds have been used (exported from Logic Pro X Drummer), and the user can input any 16-step pattern across the 4 sounds. These four audio files are read from my assignment folder, similarly to the .mov files imported into Jitter. For this reason, it will be easier to read them if the patch is rendered as an app for use on another computer, so that filepaths do not need to be changed; this will encapsulate all of the files. The first screenshot below shows the sequencer in edit mode, and the second shows the sequencer as it appears in the bottom left corner of presentation view. A user can draw sequencer patterns in using the toggles to create a 16-step drum beat.

A recorder was added which allows the user to record the synth sounds, the arpeggiator, the drum sequencer, and audio from the computer’s built-in microphone. I built this using the sfrecord~ object and routing all audio signals to it. I added a button (sending a bang to an “open” message) which allows the user to create a file and select the format, name and storage location on their device, and then a toggle which triggers sfrecord~ to start recording. In order to isolate the ezadc~ object, I had to use a gate and a toggle to open its outlet. The reason for this is that once the ezdac~ is turned on, Max 7 turns on audio globally, and the ezadc~ can’t be isolated. Finally, I added a number~ object to show the elapsed recording time (taken from the first outlet of sfrecord~). The first screenshot below shows the recorder in edit view (showing the build), and the second shows it in presentation mode, with instructions for the user:

Brightness, contrast and saturation controls were added for a Jitter object (controllable using MIDI or the dials on the GUI). These were added as a “graphic background” feature, where a sixth jit.pwindow object is placed behind all other objects in presentation view. I scripted a white panel to appear in front of the video when the user switches off the graphic background setting. I did this by connecting the buttons to “1” messages, and then adding two if statements: if a 1 is received from the “on” button, the panel disappears (a bang is sent to “script hide backgroundpanel”), and if a 1 is received from the “stop” button, a bang is sent to “script show backgroundpanel”. The on and off buttons are connected to “on” and “off” messages which bang the jit.qt.movie object; these cannot be sent directly to jit.brcosa or jit.pwindow. The first screenshot below shows this in edit view, and the second shows my GUI with the “graphic background” switched on:

A couple of final additions and bugfixes:

  • Bug fix: If the fader values for overtones in presets 3 and 4 were altered by the user, they snapped back to the stored preset whenever mtof was triggered. To fix this bug, I removed a button which was used to send the controller values from the subpatch to the main patch, and sent a bang from the preset object instead. This way, the faders only snap back when a preset is loaded, and the user can freely play notes and alter overtone volumes.
  • To store values in the main patch, outlets needed to be set up in all three subpatches, along with prepend set objects.
  • Bug fix: A loadbang is sent (with a 1000ms delay) to a stop message which stops the background video immediately once the patch is opened. This reduces CPU usage, and it also triggers a “script show” message to a panel which is scripted to show in front of the background video (to prevent the background from turning black in presentation mode). A loadbang is also sent to preset one, which reduces CPU usage by triggering the if statements that stop all other video playback. Previously, the patch would immediately try to read and play 11 different video files when opened, which caused Max to crash.
  • Bug fix: Some scripting messages had changed to “bang” rather than showing/hiding objects. This was identifiable once the patch cords were traced in edit view – if a bang is sent to the right inlet of a message rather than the left, it will change the message content rather than executing it. This was fixed by re-routing the patch cords to the left inlets of the scripting messages.

 

 

  • Pan control for main output:
  • A metro and a random object were attached to the mtof object in the main patch, and a toggle to trigger this was added to presentation view.
  • A background audio file was added (to allow the user to change the rate of playback):
  • The screenshot below shows my finished patch in presentation mode:

 

Brand new set from Rokaia at The Global Green, Electric Picnic 2017

Just announced: Rokaia will be showcasing some brand new music on The Village Hall stage at this year’s Electric Picnic festival. The Village Hall is run by the Global Green team, in the area known as the “conscious heartbeat of the Electric Picnic; a cutting-edge nexus of green ideas and connections.”

Rokaia, chatting to Valerie Wheeler at SPIN South West this week, mentioned a huge thanks to Mark Colbert, Phillippa Robinson and the rest of the Global Green team for this amazing opportunity.

The Electric Picnic website describes Rokaia’s unique production style: “…blending singer-songwriter compositions with experimental soundscapes, mellow electronica, and trip hop beats.”

Screen Shot 2017-08-23 at 12.04.29

For all the latest updates from Rokaia, you can find her on Twitter and Instagram.

Rokaia ~ See | Dwell

A huge thanks to Richard Allen at A Closer Listen for these lovely words on See | Dwell xx


We have a very good feeling about Limerick, Ireland’s Rokaia, whose debut release seems a harbinger of things to come. See | Dwell may be a short beginning, but it’s a strong one. Like Ian William Craig, Holly Herndon and Katie Gately, Rokaia operates in the realm of textural, melodic voice, a sub-genre within the larger realm of experimental voice. The outer edge of experimentalism tests the boundaries of listenability through scream and guttural snarl, but artists such as these win us over with sheer beauty and grace.

It’s easy to put See | Dwell on repeat, as it comes across as a series of waves that never crash. Layer upon layer of Rokaia’s voice slide gently over their predecessors, while manipulations in the lower register provide the base. Using electronics to chop, stutter and loop her voice, the artist provides an impression of obsessive composition and precise control…
