Choosing Music that Enhances your Film or Commercial Video

Music offers an experience all on its own, but it is also a powerful tool that can enhance other experiences. A song or score is an easy cheat for enhancing emotional storytelling because we have such a strong reaction to it. In my opinion, music could often be used more effectively in branding, storytelling, and most things, for that matter.

I am a storyteller who directs and edits films as well as a business owner who hosts events. In all of these activities, I use music to enhance the experience of what I am creating.


How to Get Professional Level Sound From a Budget Microphone

By Andrea Green

Mic by Michael Rehfeldt licensed under CC BY 2.0.


Podcasting and DIY recording have been on the rise thanks to newer, more affordable technology making it possible to record your own podcast or music project. However, achieving a professional-level sound from home requires some strategy.

Whether you’re working on a podcast or dabbling in DIY recording for your band, music, or film project, or even going a step further to start your own record label, the quality of your equipment won’t always be up to par with a professional recording studio’s. In fact, recording equipment is often the biggest contributor to the costs of a new record label, and renting out a sound studio can cost upwards of hundreds of dollars per hour. The good news is there are ways to improve the overall quality of budget-friendly recording equipment. Here are some tips that you can apply to your future recordings.

Preparing Your Space for Recording

Preparing your space can drastically boost quality. There are a variety of ways you can go about this.

Soundproofing your room can reduce unwanted noise during recording sessions. A great DIY way to do this is using tape or weather stripping to seal off windows and doors.

Another option is using a microphone isolation box. Isolating your mic will go a long way when it comes to capturing crisp and clean vocals without going through the hassle of soundproofing an entire room. And while you can purchase microphone isolation boxes, opting to make your own may be worth it if you have the time to spare as it only requires a few easily obtainable materials.

What to Look for in a Budget Microphone?

It goes without saying that budget microphones aren’t created equal, but some mics give you more value for your money. The amount you’ll want to spend may vary, but somewhere in the range of $80-$500 will work for high-quality home recordings.

You’ll want something durable, affordable, and reliable. Several mics fit this description. The Audio-Technica AT2020 is a great option due to its affordability and pristine sound quality. If you’re looking for something a little more user-friendly, the Blue Yeti Nano USB microphone is also a good choice. It’s a solid plug-and-play microphone that offers less versatility than the AT2020 but may be easier for people without any previous recording experience.

Fourwind Films can also vouch for the Sennheiser ME 66 shotgun mic, the Zoom L/R mic, and the Sennheiser G3 wireless lavalier mic, which they’ve used for their podcast, Feature & a short, and for documentary projects.

Things to Note When Mixing

There’s a chance that the recording quality from a budget microphone may still be lacking even after you apply the previous tips. Luckily, you can also improve the quality of the recording while sound mixing by using compressors and limiters.

Adding reverb and echo is another great way to improve the overall quality of vocal recordings. Reverb gives the recording a fuller and more natural sound. However, it’s important to note that overusing reverb can muddy the recording. It’s best to play around with reverb until you get the optimal result, as there’s no one-size-fits-all solution.

If there are other questions you’d like answered in a blog post, let us know at info@fourwindfilms.com or visit our website at www.fourwindfilms.com. Also, we work with a large, diverse community of crew and artists working in most aspects of the filmmaking process and are always happy to help make connections. And we are always building our community! Send us your work for review or feedback.

Sound Mixing 101: Compressors and Limiters

By Justin Joseph Hall


Compressors and Limiters are audio effects that control the volume, or amplitude, of sound. They are used to even out what you hear. They are often used in the mixing and mastering stages of music. They are also used when mixing dialogue for film, video, and radio.

Let’s start with the compressor. The compressor is designed to reduce the dynamic range of a recorded sound. For example, if you are recording guitar input and someone accidentally bumps the pickup (which is like the microphone for a guitar that “picks up” the sounds of the strings to amplify them), there will be a spike in the waveform that is much louder than the rest of the recording. The compressor dampens a spike in volume so it doesn’t stick out as much. It diminishes the amplitude of the sound wave by “compressing” any sound that registers above a certain threshold at the compression ratio you set.

When using a compressor there are a few key terms.  

The threshold refers to the amplitude at which the compressor kicks in. So if you set the threshold at -16 dB, then anything that goes over -16 decibels will be affected by the compressor.

The ratio refers to the amount of compression. With a 3:1 ratio, the part of the signal above the threshold is reduced to one-third: for every 3 dB the input rises above the threshold, the output rises only 1 dB.

The attack or attack time is how fast the compressor kicks in after the amplitude gets over the threshold. Too fast an attack time may compress small peaks that barely get over the threshold, which can sound odd unless the peak is sustained for at least a short period of time. Attack time is usually measured in milliseconds. This is an adjustment you can alter to make sure the compressor isn’t turned off and on too often, which can be noticeably irritating.

The release or release time is the delay before the compressor shuts off once the amplitude goes back under the threshold. This prevents the compressor from turning off and on when the amplitude wavers around the threshold and sometimes drops below it. The release and attack times are adjusted to make the transition into the effect smoother and less noticeable. Often the factory presets work well, but play around with the settings to see if you can make it sound smoother, especially if the effect sounds choppy.

The knee is a gradual curve in how the compressor affects the amplitude. So if your compressor is set at -16 decibels with a 3:1 ratio, the knee may prevent the compressor from applying the full 3:1 ratio right at -16 dB. Instead, the compression is applied on a gradient that is adjusted by the knee. It makes the compressor effect more gradual and less noticeable.
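
To make those terms concrete, here is a minimal sketch in Python (with NumPy) of the static gain curve a compressor applies. The function name, the soft-knee formula, and the example values are illustrative assumptions rather than the internals of any particular plug-in; attack and release would additionally smooth these gain changes over time.

```python
import numpy as np

def compress_db(level_db, threshold_db=-16.0, ratio=3.0, knee_db=6.0):
    """Static compression curve: input level in dB -> output level in dB.

    Below the threshold the signal passes through unchanged. Above it,
    every `ratio` dB of input above the threshold yields only 1 dB of
    output. A soft knee blends the two regions around the threshold.
    """
    level_db = np.asarray(level_db, dtype=float)
    over = level_db - threshold_db
    # Hard-knee response: unchanged below the threshold, reduced slope above it.
    out = np.where(over <= 0, level_db, threshold_db + over / ratio)
    if knee_db > 0:
        # Soft knee: quadratic blend within +/- knee_db/2 of the threshold.
        in_knee = np.abs(over) <= knee_db / 2
        knee_out = level_db + (1 / ratio - 1) * (over + knee_db / 2) ** 2 / (2 * knee_db)
        out = np.where(in_knee, knee_out, out)
    return out

# A -4 dB peak with a -16 dB threshold and a 3:1 ratio comes out at -12 dB:
print(compress_db(-4.0))                       # 12 dB over -> 4 dB over
print(compress_db([-30.0, -16.0, -4.0, 0.0]))  # quiet sounds are left alone
```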

Compressors limit the range of the amplitude of a sound.  This is key for making recorded sound easier to listen to on speakers.  There are as many different types of speakers as there are flavors of ice cream. Compressing sound to a smaller range makes it so listeners won’t hurt their ears if they turn up the volume during a quiet moment in a film that’s followed by a loud explosion scene. Compressing the sound should make it so that people don’t have to turn the volume up or down at all. 

Often, the more a sound is compressed the more pleasant the listening experience.  Many podcasts, audiobooks, and radio shows use compressors on the vocals so that voices sound similar throughout the program and listeners don’t have to fiddle with their knobs.

In music, pop songs are highly compressed. This is pleasant and gives all songs a similar dynamic range. Some of the least compressed music is classical recording, where the range of the instruments cannot be heard without a large dynamic range. In a recording of an orchestral concert, you want to hear the contrast between a piccolo solo, the string section, and the full orchestra playing, without sacrificing the uniqueness of each sound.

Photo of author.


Limiters

Compressors are often used in conjunction with Limiters. The two audio effects work very well in tandem. As the Compressor makes the dynamic range smaller, it softens the loudest sounds. Audio starts from silence and increases in amplitude like a bar graph: if a sound is -10 decibels, the peak of that soundwave is at -10 decibels, and when we compress it, that mountain moves to a lower peak.

A Limiter is often applied after a compressor. Once you have the desired ratio for your sound, all peaks are compressed to a certain amplitude. A limiter then raises or lowers those peaks equally across your sound without letting them go over a set ceiling.

The Limiter earned its name because even though it may increase the amplitude, it limits the amplitude to the Limiter’s threshold. Like the Compressor’s threshold, a Limiter’s threshold is a cutoff point that says no sound peaks will go over this amount. This is to prevent peaking, which is when a recorded sound is too loud for the medium and distorts. For digital audio this ceiling is 0 decibels. For analog audio it can vary depending on what you’re working with, but it is often somewhere from +6 to +12 decibels.

Limiters also have release and attack inputs which work the same as a compressor’s. They are measured in milliseconds and are when the Limiter kicks in (attack) and when the effect drops out (release).
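
As a rough sketch of how a limiter treats peaks, here is a minimal example in Python with NumPy, assuming the audio is already a float array scaled so that 1.0 equals 0 dBFS. The function name and ceiling value are my own illustrative choices, and a real limiter uses look-ahead gain reduction with attack and release times rather than the hard clip shown here.

```python
import numpy as np

def limit(samples, ceiling_db=-1.0):
    """Raise the overall level so the loudest peak sits at the ceiling,
    then brick-wall anything that would still go over it.

    `samples` is a float array scaled so that 1.0 corresponds to 0 dBFS.
    """
    ceiling = 10 ** (ceiling_db / 20)       # -1 dBFS is roughly 0.891 in linear terms
    out = np.asarray(samples, dtype=float)
    peak = np.max(np.abs(out))
    if peak > 0:
        out = out * (ceiling / peak)        # bring the highest peak up (or down) to the ceiling
    return np.clip(out, -ceiling, ceiling)  # nothing is allowed past the ceiling

# A quiet, already-compressed mix gets raised so its loudest peak hits -1 dBFS.
quiet_mix = 0.2 * np.sin(np.linspace(0, 40 * np.pi, 48000))
print(np.max(np.abs(limit(quiet_mix))))     # ~0.891
```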

Why use a Limiter? 

So that the sound you are creating is loud enough to hear after a compressor is applied. It would be very annoying to listen to a song that was compressed down to be very quiet and then a song that is really loud right afterward, because you’d have to keep adjusting the volume.

For example, if you listen to a classical piece that barely uses a compressor and has a huge dynamic range, from -60 decibels all the way up to -3 decibels, and follow it with a pop song, you want the loudest part of each song to hit the exact same peak. That way, if you set the music at a house party at a certain volume, that volume is never exceeded and you don’t suddenly scare your neighbors with O Fortuna blasting right after a compressed pop song. Most music players have a setting that levels the peaks of songs, so you may already be familiar with this automated process.


Pop songs actually use Limiters a lot. This style of mixing became popular with the rise of Rock N’ Roll and its appetite for loud music, and the Metal of the 1970s pushed the trend even further. Sound mixers kept making mixes louder by compressing them to a small dynamic range and then raising the peaks to the maximum volume. This puts songs near the top of the possible amplitude without going over 0 decibels and distorting. Eventually, radio ads started competing with the music for the listener’s attention and were mixed even louder than the songs. This became known as the “Loudness Wars,” which really peaked (pardon the pun) in the 1990s and 2000s. You can really hear the dynamic range shrinking, especially in pop music from that time.

Compression and limiting are powerful tools that help you create the listening experience you want, whether it’s for a song, a radio show, or a movie mix. Compressors and Limiters are used in virtually every professional sound mix of any sort. Learn the ins and outs and what sounds good to your own ear, practice a lot, and you’ll have mastered one of the basics of sound mixing.

— — — — — — — — — — — — — — — — — — —

If there are other questions you’d like answered in a blog post, let us know at info@fourwindfilms.com. In addition, we work with a large, diverse community of crew and artists working in most aspects of the filmmaking process and are always happy to help make connections. And of course, we are always building our community! Send us your work for review or feedback.

I’m a Film Composer, and I Want Every Emerging Director to Read This

By Totemworlds


Every serious filmmaker knows that in movies, what we hear is just as important as what we see. Without music, our favorite films would lose their charm and emotional weight…even our characters would lose a bit of their essence. Thankfully, film composers exist, even though they’re always hiding in their studios, and their job is to bring out all the feelings in your film, documentary, you name it!

As a filmmaker, you need to find a film composer who won’t just fill in the silence, but will actually support your story in a meaningful way. I work as a film composer, and I wrote this short guide for filmmakers and enthusiasts with key tips on how to conceptualize and articulate musical ideas to their composers. Clear and effective communication between director and composer will ensure your film is everything you want it to be, so let’s get started!

Choosing the right composer

All musicians have unique backgrounds; it’s what defines their style and how they sound. Composers are no different, so keep your options open and take your time to listen to their previous work. Find a match for the sound you think would be best for your film. What style of music would be best? Does your budget allow for an orchestral sound, a small ensemble, or piano only? If you’re on a budget, composers can create what are called orchestral mock-ups, a significantly cheaper alternative to recording a real orchestra that can still sound convincing. And does your film need music that will stand out, or music that plays more of a supportive role? Find a composer who is versatile enough to do both.

Communicating With Your Composer

Spotting is when a composer and director decide where to put music in the film (and where not to). You’ll probably want to share your ideas right away, but I highly recommend you let the composer speak first. Their expertise, and the fact that they’re seeing your film for the first time, could lead them to bring new ideas to the table. But if they ask for temp tracks, then definitely provide them.

Temp tracks are songs that directors place temporarily in their film to give composers an idea of how they want the music to sound. Most composers love them, but some don’t. Just ask.

Don’t use musical terms to describe what you want

A composer’s job is to translate emotional terms into music, so don’t throw musical terms at them; instead, speak to them in emotional terms. Talk in terms of intensity, and your composer will adjust the instrumentation, mixing, and dynamics to match what you want. Talk in terms of movement, and your composer can use musical techniques to keep the momentum going or make space for quiet time. You should also be clear about whether you want the music to sit up front or play more of a supportive role.

Why (and when) to add a musical cue

Consider the following reasons to add music at any given moment of your film:

  • As a narrative tool: take music into consideration right from the start, as you write the script; this opens up new and exciting possibilities. Some of the most memorable moments in cinema rely on music to work. Examples include the 2-note motif that foreshadows the shark’s arrival in Jaws; the unmistakable tune that plays in Kill Bill every time the protagonist sees her enemies; and who can forget the string players performing their final pieces near the end of Titanic, filling our hearts with empathy for everyone on that sinking ship.

    A motif is a recurring musical phrase.

Excerpt from Titanic - James Cameron

  • To set the mood: music is just as important as color-grading and lighting for setting specific moods. And just like a colorist or lighting expert, a film composer will use every bit of information you give them to craft the right sound for a scene. Be clear about the mood you want to convey and include all the little details that make your scene unique. Music is so versatile that composers have a HUGE array of musical devices to work with, so having a pool of information to derive ideas from would help them focus their creativity.

Here’s a very entertaining video showing the power of music over film.

  • To accompany our characters: in fiction, it is common practice for composers to assign a musical idea to a character; this is called a leitmotif. Think of Darth Vader’s iconic theme in Star Wars and how menacing it makes him appear. It works for non-fiction too. A news anchor, for example, benefits greatly from the show’s epic, suspenseful opening theme, which legitimizes not just the show but also its host.

  • To simulate the passing of time: music can be used to keep the momentum going, making sure there are no stagnant moments in your film. Any scene can be made suspenseful using music alone, and more excitement is always welcome. You can also use music as a way to transition to a new scene.

In conclusion

When incorporating a composer give them space to share their ideas, but be clear about how you want the audience to experience specific characters and moments. Work with the composer to figure out how best to communicate what you want, whether that’s by sharing examples of other films or music, details about your story and characters, or describing a feeling.

Follow Totemworlds’ work on YouTube and Facebook.

— — — — — — — — — — — — — — — — — — —

If there are other questions you’d like answered in a blog post, let us know at info@fourwindfilms.com. In addition, we work with a large, diverse community of crew and artists working in most aspects of the filmmaking process and are always happy to help make connections. And of course, we are always building our community! Send us your work for review or feedback.

What is the difference between Reverb and Echo effects?

By Justin Joseph Hall 

Behind the scenes of “Abuela’s Luck” by Ricky Rosario. Photo by Daria Huxley.


Echo and reverb are almost the same audio effect except for one variable, and that’s time. Reverb and echo are both reflections of sound in a space. Echo is the more common word: we know it as a reflection of sound that returns to the ear quieter and later than what was said. Famously, on television people shout into a canyon and hear what they said shortly after, in fading repeats equally spaced apart in time.

Reverb is the same concept as an echo but with a smaller reflection time, often returning within a second and blending into the sound that hasn’t finished yet. For example, if I were to say, “I would like to hear my echo,” and applied an echo effect through some software, I might say the entire sentence and then hear the entire sentence back. However, if I said the same thing and applied a reverb effect, you could start hearing the effect before I got to the second word of the sentence. This replicates what it sounds like to hear reflections of sound in rooms with hard walls.

In real life, you often hear reverb and echo together, when short sound reflections (reverb) and longer sound reflections (echo) reach your ear in combination. For instance, in a racquetball court you are likely to hear the reflection from a nearby wall quickly, while the far wall takes a bit longer to reach your ear. This kind of room creates a fun interplay of reflections. Many rock songs from the 1980s famously use these kinds of combinations to create a feeling of epic vastness. A great example is Phil Collins’ In The Air Tonight when the drums kick in.
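
To make the timing difference concrete, here is a minimal sketch in Python with NumPy of a single delayed reflection. The function name, delay times, and gain are illustrative assumptions; real reverb plug-ins sum many such reflections of varying delay and level rather than just one.

```python
import numpy as np

def add_reflection(dry, sample_rate, delay_s, gain=0.5):
    """Mix a single delayed, quieter copy of `dry` back into the signal.

    A long delay (say 0.5 s) is heard as a distinct echo; a very short
    delay (tens of milliseconds) blurs into the original sound, which is
    the building block of reverb.
    """
    delay_samples = int(delay_s * sample_rate)
    wet = np.zeros(len(dry) + delay_samples)
    wet[:len(dry)] += dry
    wet[delay_samples:delay_samples + len(dry)] += gain * dry
    return wet

sr = 48000
voice = np.random.randn(sr)                            # stand-in for one second of speech
echo = add_reflection(voice, sr, delay_s=0.5)          # a clearly separate repeat
reverb_ish = add_reflection(voice, sr, delay_s=0.03)   # smears into the original sound
```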

Reverb and echo are not always necessary in film and music, but you should always consider what kind of space the sound seems to be in when applying these effects. A longer echo or reverb sounds like a bigger space, such as a great hall or canyon, while a shorter, tighter echo or reverb can sound like a cramped space, like a small apartment bathroom.

The sound mixer would need to take these very different spaces into consideration when applying echo and/or reverb. Gif courtesy of HBO.


Creating a space with these two effects is one way of making different recordings sound unified. It’s often part of any type of mixing in film or music. For instance, if you’re recording music and the drums, amp, and vocals are all recorded at different times with different mics and mic placement, adding a room sound via reverb makes it sound like they may all have been playing at the same time. It is often used during the mastering process to unify the final sounds.

When filming a movie, you may record on location and then find in post-production that your project needs Automated Dialogue Replacement (ADR). ADR is a re-recording of lines in the studio to replace the dialogue captured on set. By creating a space with reverb and echo, you can help unify the different mic setups within a scene, such as location sound mixed with ADR. This is especially important if the two types of recordings sit right next to one another.

Justin Joseph Hall is a video director, editor, and post-producer who used to mix audio for film, music, and podcasts, and who mastered songs for Bootsy Collins and others. For more info or questions about sound mixing and/or mastering, write to Fourwind Films at info@fourwindfilms.com. Also sign up for our newsletter and podcast, Feature & a short, where Brian Trahan, our sound mixer, adds reverb.

Sound Design vs. Sound Mixing: A Beginning Filmmaker’s Guide

By Justin Joseph Hall

One of the first things they teach you in an Intro to Production class is that bad sound is the fastest way a professional filmmaker can spot an amateur-made video. If you’re new to filmmaking, it’s important to know the difference between sound design and sound mixing. This is a first step toward understanding how to create good sound for your video.

Sound Design

Sound Design is the ambiance of the auditory space. Let’s do an exercise together to help us learn. Look around the room you’re in right now. What do you see? Say those things out loud. After that, close your eyes for one minute. Listen to everything in the room. What do you hear? Say it out loud. Be specific. Do you hear a computer fan? Birds out the window, friends chatting in the other room? Is a train rolling by in the distance? Write it all down.

A sound editor and foley artist create the feeling of the room. One way they do this is by recording each of the sounds you wrote down in the exercise we just did.  Other common sound effects include footsteps, clothes rustling, or even the sound of a refrigerator, radiator, or crickets chirping at night.

When we get to sound mixing, we want to have a recording for each individual sound so that the loudness of each sound can be adjusted separately in the sound mix, which we’ll talk about in more detail later.

Sound design is an amazing tool that many commercial entities and independent filmmakers don’t think about or utilize. In a commercial setting, you may think sound design is an unnecessary excess. However, half a day’s work from a sound designer can raise the production value of a video tenfold.

One specific place where it really helps is animation, because purely graphic animations don’t come with audio the way interviews or captured video do. Yet animation is common in commercial video products, and it changes completely when sound is added.

One example that is quick and easy to show is a logo animation, like this one my company created for PerformLine. Watch it with sound, and then mute the video and watch again. The sound adds energy to the logo and branding.

We created this animation and background for PerformLine.

Sound design encompasses a wide array of sound effects. Sound Designers adjust their effects to fit the aesthetic and world of the film. For example, in David Lynch's Eraserhead, sounds like water running in a bathtub, or the clanking of an old heater, are more menacing and noticeable than they are in everyday life. Anyone who’s seen The Matrix may remember the whooshing noise accompanying Neo’s slow-motion bullet-dodging. 

Famous scene from The Matrix (1999)

Both of these examples are louder than one would expect to hear in the real world (or see, in the case of The Matrix, but I digress), and that has to do with how the Sound Mixer worked with the sound design. So now that you are familiar with sound design, let’s learn about the next and final step in audio post-production: sound mixing.

Sound Mixing

A sound mix is the last step in finishing audio for a film.  Simply put, the sound mixer adjusts how loud or quiet each individual sound is to maximize the impact of the message of the final video. The three main categories are dialogue, sound effects, and music.  Also, it can be confusing, but Sound Mixers may also have the title Re-Recording Mixer.

In commercial videos, mixing interviews can make voices more pronounced, clear, and pleasant to listen to.  Colloquially, this process is also called “sound sweetening.”  This step is important for sound clarity as well as creating the ambiance for the film.  For example, it is very annoying to hear a video where the dialogue of an interviewee or a central character in a scene is overpowered by loud music or background characters. Don’t let your audience’s focus be pulled away from a story by bad sound mixing. 

It is also important to remember that a sound mixer can only do so much; some sound problems cannot be fixed after recording. If you’re recording an interview with a rock fan at a concert while the band is playing loudly, it is often impossible to separate the person speaking from the loud background music. An accomplished Sound Mixer can adjust audio to improve it, but it’s important to record clear, high-quality audio to obtain optimal results.

If you have any questions or would like more information go to our website www.fourwindfilms.com, or write to me directly at justin.joseph.hall@fourwindfilms.com.

Six Steps To Finish a Video in Post-Production

By Justin Joseph Hall

I have been working as a professional editor for ten years on commercial, documentary, and narrative films, and this guide is for producers deciding what to include in video post-production. Many companies skip about three of the steps listed below, so I will explain what each step is and why completing each one will make your video more professional.

  1. Editing

Editing is sequencing the clips you are given to increase the effectiveness of the video. Commercial videos pull a viewer toward an action or a specific emotional response to a product. A great editor knows this and uses timing and visuals to capture the viewer’s emotional and mental attention.

Author’s editing station. Photo by Author.


Modern editing tends toward getting the highest emotional response in the shortest amount of time. Ads and commercials are often limited to an exact number of seconds. An editor will use the given assets to pace the film so it climaxes emotionally toward the end of the ad, where there’s usually a reveal of a brand, product, or call to action. This leaves the viewer with the greatest emotional impact at the end of the piece. If the climax happens before this, the edit still needs work.

An editor uses every edit either to seamlessly hide the cut so viewers can concentrate on the story or visuals, or to call attention to the cut and make the viewer notice a certain moment. Editors think about when not to cut just as much as when to cut to the next shot. Too many cuts can distract from the story, but in the right hands they can help tell the story more effectively to the target audience. MTV-style editing from the 2000s is an example of a cut-heavy style that worked well for its generally younger audience.

Editing is about controlling the emotional response with the tools given and knowing how each clip relates to one another. Think of an edited video as an emotional roller coaster for the audience.

2. Placing Music

Placing music is one of the most difficult things to do in post-production because everyone has an opinion about it, and people can have a wide range of emotional reactions to the same song. One of the goals in creating a video in post-production is making sure it will affect the majority of the target audience. Think about who you’re creating the video for, and choose music that will give the emotional reaction you want from them.

For example, if you’re creating an ad for a Spanish-speaking audience, playing Bachata music (or music that sounds like that genre) may remind listeners of the times and places they listened to that music. It’s your job to understand what types of feelings or memories this music will evoke for the majority of your audience. For me, when I hear music with an AC/DC-like guitar sound, it reminds me of listening to AC/DC on the radio in my friend’s garage as a teenager, which evokes a certain nostalgia, and it might do the same for many people my age who also grew up in the Midwest. Sometimes there are specific associations with instrument choices. For example, if an ad uses ukulele music, you may be reminded of this Apple iPad Christmas ad, which could add to or detract from your storytelling.

If you do choose a song with lyrics, make sure they don’t conflict with dialogue, voiceovers, or any audio in the video. Also, only choose music in a foreign language if you know what is being said.

Music is not always needed, but when you match the right music to the right scene, it will enhance whatever emotional impact you want to leave with your audience. It can help with the difficult job of keeping the audience’s attention and making them feel something at the same time.

3. Sound Design

Stop reading for a moment and just sit. Listen to all the sounds you hear in the room: most likely a heater or air conditioner, noise from people around you, maybe a computer fan. We are rarely listening to silence, even if the room is quiet. So in order to make a video a full experience, we need to re-create the world we are watching. We do this by placing sound effects to replicate our experience of the real world. This is sound design.


Sound equipment. Photo by Author.

When humans see something move, there is almost always a sound involved. Move around and your clothes rustle. Scratching your skin makes a sound. Lightning is always followed by thunder. When a plane flies, we hear the propulsion. So when we see something move in motion graphics or animation, we expect to hear something.

If you don’t hear something when it moves, it can feel creepy or otherworldly. Horror films use this for ghosts. Things are eerily silent and then they scare you by breaking that silence or isolating sounds. If that’s not an effect you are going for, then not hearing sound can seem cheap and inorganic. Putting in sound effects like footsteps, clothing moving, and doors closing is a way to make your video come alive.

Sound Design can be a fairly cheap process as long as you have access to a Sound Effects library. Editors or Sound Mixers can usually do it. Keep in mind that you want to have isolated sound effects to make it easier for the Sound Mix. What is a Sound Mix? Well, let’s move on.

4. Sound Mixing

Much of the audio placed in a video has been recorded in many different places, and it is brought together with material that may include motion graphics, multiple days of shooting, or any number of other things. In order to unify the sound, we hire a sound mixer.

In addition to unifying the sound, Sound Mixers also prepare the sound to perform best on the platform where you are showing it. A theater often has great speakers and a high dynamic range. That is a very different environment from viewing a seven-second Facebook ad on a phone.

Sound Mixers may also be called Sound Re-Recordists. These days we “Bounce” the file in a computer, or “Export” the final delivery, but the old wording sticks around. Some effects Sound Mixers can add include EQ (Equalization), Compression, Limiting, and Reverb or Echo.

An editor can mix sound, but being a great editor and a great sound mixer are two very different skills. Clients often don’t want to work with multiple people, but if your editor doesn’t have the technical expertise needed to mix sound, hiring a separate sound mixer may help you get the best final product. As an editor I prefer to have professional Sound Mixers finish the audio. They are more efficient with the tools needed to mix audio properly, as it’s often done with separate software from editing software. Sound Mixers often charge the same rate I do, but can improve the audio more in less time.

Sound Mixing can make a huge impact on your end product. The beloved resonance of a voice-over actor’s or podcast host’s voice is often aided by the sound mixer bringing out its best qualities.

5. Color Correction

You may think, “My video looks fine, why would I need to color correct it? Isn’t correction only for when there are problems?”

Yes, originally color correction was done when there were problems with film and editors had to match shots that looked a bit different. That still exists, but color correction now also adds the details that make your image pop and enhance the feeling and goal of the video. It’s the final touch on the visual side of a movie. Along with an unmixed soundtrack, uncorrected color is the first indication that a video was not finished professionally. Here are some examples of video before and after it is colored.

Often you’ll hear an editor respond to a request for color correction with “I’ll throw a LUT on it.” What is a LUT? LUT stands for “Look Up Table,” which is a preset, like an Instagram filter, applied to video images. This is a cheap way to complete the process, but it can be problematic if the video contains clips from various parts of the day. Color correction evenly matches the images to the time of day as well as to the style and mood of the video. This step fully immerses the viewer in the world of the video while also enhancing its visual aspects.
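
Under the hood, a LUT is just a table that maps each input color value to an output value. Here is a minimal sketch in Python with NumPy of a 1D LUT, using a made-up shadow-lifting gamma curve applied to a placeholder frame; real grading LUTs are usually 3D tables that remap red, green, and blue together.

```python
import numpy as np

# Build a 256-entry 1D LUT: a simple gamma curve that lifts the shadows a bit.
values = np.arange(256) / 255.0
lut = np.clip((values ** 0.8) * 255.0, 0, 255).astype(np.uint8)

def apply_lut(frame, lut):
    """Remap every 8-bit pixel value through the look-up table."""
    return lut[frame]

# A placeholder 1080p frame of random 8-bit RGB pixels, "graded" in one indexed lookup.
frame = np.random.randint(0, 256, (1080, 1920, 3), dtype=np.uint8)
graded = apply_lut(frame, lut)
```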

Color correction in process for Silent Forests, a documentary by Mariah Wilson. Photo by Author.


You can do many fun things in color correction like enhance sunsets, make people’s eyes crisper, and blur out backgrounds. Colorists know tricks to make sure the viewer is focused on what the colorist wants them to be focused on in every frame.


Color Correction can be expensive because it is a highly technical art form. However, it doesn’t take long for a Colorist to color a short video. A professional colorist can do up to fifteen minutes of video in one day. Color Correction is a great thing to be able to order in bulk, so finishing more videos at once can help with these costs. A half day (four hours) can be charged for something quick like a three-minute video, and usually that’s the minimum rate a professional will settle on to take a project.

6. Deliverables

Finally, your video is completed. Your colored video is matched with your mixed sound. How should you receive your video?

Many clients only want a video ready for YouTube or Facebook. However, those are very low-quality files, and it’s important to obtain the highest quality possible. If you’re not sure what that is, consult your editor, but a safe bet is getting a ProRes HQ file at the highest resolution. The main reason you want a high-quality file is that if you ever have to deliver the video somewhere else or make changes, it saves you or the editor the headache of finding all the old files. For example, in the future you might want a longer ad cut down for a different platform, like YouTube. An editor can shorten the video easily from a high-quality file.

One other option is to request the project files used to create the video. Project files are very small and most editors will deliver these, but it’s great to request them ahead of time as some editors feel their editing work is proprietary and may be wary about another editor re-editing the video in the future.

In addition to the high quality video file and project files, you should request high quality audio with “splits.” “Splits” or “stems” are different audio files the length of your video that contain:

  1. Dialogue

  2. Sound Effects (Most of Sound Design)

  3. Ambient Sounds (part of sound design sometimes)

  4. Music

This way, if your video is used for a T.V. spot in the future but needs new music, you can replace it without losing your wonderful sound mix. It gives your video a long shelf life. You invested time and money; it’s important to keep good records.
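
As a rough illustration of why splits matter, here is how swapping in new music for a re-mix might look in Python with NumPy, assuming each stem has already been loaded as an equal-length array of samples. The variable names and zero-filled placeholder arrays are purely illustrative.

```python
import numpy as np

def mix_stems(*stems):
    """Sum equal-length stems into one track, pulling the level down if it would clip."""
    mix = np.sum(stems, axis=0)
    peak = np.max(np.abs(mix))
    return mix / peak if peak > 1.0 else mix

# Placeholder stems, 30 seconds at 48 kHz (in practice, load the delivered split files).
length = 48000 * 30
dialogue = np.zeros(length)
effects = np.zeros(length)
ambience = np.zeros(length)
old_music = np.zeros(length)
new_music = np.zeros(length)   # the replacement cue for the TV spot

original_mix = mix_stems(dialogue, effects, ambience, old_music)
tv_spot_mix = mix_stems(dialogue, effects, ambience, new_music)
```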

If you do these six steps properly, video professionals and clients will experience the difference, and you’ll see the effectiveness of your video increase.

If you have any more questions about specifics or other options when creating a video, please contact Fourwind Films at info@fourwindfilms.com.