Create a Mood With Your Playlist By Treating It Like a Menu

A sensory approach to making a killer playlist or soundtrack.

By Jane E. Werle


Create or Share a Mood

Have you ever had a very specific craving? Pork rinds, peanut butter pretzels, pimento cheese, pizza bagels, parmesan crisps, potato chips — whatever it is, that’s the only thing that will do. Because fulfilling what you are in the mood for can be greatly satisfying.

Setting the right mood for your event, film, or video is critical to its success. So when creating your playlist or soundtrack, consider whether you are trying to express a mood you are experiencing or to create a mood you wish others to experience. Then think about food. Then think beyond snack food. Imagine your playlist is a dinner party menu.

Structure Your Playlist like a Menu

Your menu consists of the main course (pulled pork, mango-chickpea curry), an appetizer (Caprese bites, vegan seven-layer dip), side dishes (grilled asparagus, Chex mix, fresh bread), dessert (chocolate avocado raspberry pie, coconut chips), and a surprise or treat (sparkling wine, bacon-wrapped dates).

Your main course is your theme. It might not be your favorite food, but it appeals to (or can be eaten by) most of the folks in your dinner party, and it is both delicious and kitchen-tested.

Your appetizer is the first thing people get to taste, and plays well with your main dish, but has a different feel and level of accessibility. Whether you are facilitating an evening of good food your guests didn’t have to cook, or an experience they couldn’t have on their own, you want them to participate and to enjoy doing so. The appetizer is your opening play and sets you all out on a particular path. (Note that I am focusing on providing a desirable experience, but this idea could be extended in other emotional directions such as fear [SCOBY finger food] or self-examination [plain-flavored popsicles]).

Your side dishes either do something your main dish can’t, or expand on its effects in order to provide a more well-rounded menu. Dessert is for scratching that itch that nothing else yet has, and the surprise is for fun.

Pick Your Tracks

Enter into your planning with a sense of zest. There are so many possibilities, and you get to curate them for your pleasure and that of your friends/audience. What do you look forward to hearing? What songs are new to your “liked” list? Do you have a go-to artist or track? What track has a section that stirs you? What else sounds like that?

Once you have something in mind, you can think about what purpose it serves, or what vibe it encourages. Is this a main dish (thematically strong enough to carry the mood)? A juicy surprise (no one but you would have picked that song next!)? The way you want to start the evening (an appetizer that is a crowd hit)? The way you want to end it (a dessert to linger on the palate)?

Then consider the roles you have yet to fill, and how they relate to each other. When you think of (or hear) one element, what feels like it should come next? If someone were making or playing this for you, what would you expect to eat or hear after it?

If your mind remains blank and your planning is going nowhere, think about your friends (or audience) and what they like. If you were their short-order cook (or deejay), what would they request?

Perhaps you have the opposite problem, and your list of possibilities is too long. Try going back to where you started, that first thing you felt sure about, and examine your list for what fits the very best.

Enjoy!

The very best fit could be what is most exciting, or most challenging, or most familiar. Try things on: listen to them together. Alternatively, you can look at what you’ve assembled and divide it into sections (part one is uptempo, part two is downtempo) of an ongoing series of amazement, designed by you. You decide: it’s your party.

You may find Jane E. Werle stomping in a rainstorm or starting a dance party, if she’s not writing and editing for nonprofits or advocating for kids. Colorado-based, Jane prefers naps to marathons but is happy to go backpacking or sit in a creek. Jane received her MFA from Naropa University and is a frequent contributor to Presenting Denver, a comprehensive resource for dance in Denver and along the Front Range.

If there are questions you want to be answered in a blog post, let us know at info@fourwindfilms.com, or visit our website at www.fourwindfilms.com. Also, we work with a large, diverse community of crew and artists working in most aspects of the filmmaking process and are always happy to help make connections. And we are always building our community! Send us your work for review or feedback.

Sound Mixing 101: Compressors and Limiters

By Justin Joseph Hall


Compressors and Limiters are audio effects that control volume or amplitude of sound.  They are used to create equilibrium in what you hear.  They are often used in the mixing and mastering stages of music.  They are also used when mixing dialogue for film, video, and radio.

Let’s start with the compressor.  The compressor is designed to reduce the dynamic range of a recorded sound.  For example, if you are recording guitar input and someone accidentally bumps the pickup (which is like the microphone for a guitar that “picks up” the sounds of the strings to amplify them), there will be a spike in the waveform that is much louder than the rest of the recording.  The compressor dampens a spike in volume so it doesn’t stick out as much.  It diminishes the amplitude of the sound wave by “compressing” any sound that registers above a certain threshold at the compression ratio you set.

When using a compressor there are a few key terms.  

The threshold refers to the amplitude at which the compressor kicks in.  So if you set the threshold at -16 dB, then anything that goes over -16 decibels will be affected by the compressor.

The ratio refers to the amount of compression.  If your compressor is set to a 3:1 ratio, then for every 3 dB the signal rises above the threshold, the output rises only 1 dB.
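The threshold-and-ratio arithmetic can be sketched in a few lines of Python. This is an illustrative toy working purely in dB values (the function name is mine, not part of any real compressor plugin):

```python
def compress_db(level_db, threshold_db, ratio):
    """Output level (in dB) of a simple hard-knee compressor.

    Below the threshold the signal passes through unchanged; above it,
    every `ratio` dB of input over the threshold yields only 1 dB of output.
    """
    if level_db <= threshold_db:
        return level_db
    over = level_db - threshold_db
    return threshold_db + over / ratio

# With a -16 dB threshold and a 3:1 ratio, a -10 dB peak (6 dB over)
# comes out at -14 dB (only 2 dB over the threshold).
print(compress_db(-10.0, -16.0, 3.0))  # -14.0
```

A peak below the threshold is returned untouched, which is exactly the "kicks in only above the threshold" behavior described above.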

The attack, or attack time, is how fast the compressor kicks in after the amplitude crosses the threshold.  Too fast an attack may compress small peaks that barely cross the threshold, which can sound odd unless the peak is sustained for at least a short period.  Attack time is usually measured in milliseconds.  Adjust it to make sure the compressor isn’t turning off and on too often, which can be noticeably irritating.

The release, or release time, is how long the compressor waits to shut off after the amplitude drops below the threshold.  This prevents the compressor from turning off and on when the amplitude wavers around the threshold and occasionally dips below it.  The release and attack times are adjusted to make the transition into the effect smoother and less noticeable.  The factory presets often work well, but play around with the settings to see if you can make it sound smoother, especially if the effect sounds choppy.
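As a rough sketch of how attack and release behave, here is a hypothetical one-pole gain smoother (a common building block, though not the algorithm of any particular plugin): the compressor's gain chases a target value, falling at the attack speed when compression engages and recovering at the slower release speed afterward.

```python
import math

def smooth_gain(target_gains, attack_ms, release_ms, sample_rate=48000):
    """Smooth per-sample target gains so the compressor doesn't snap
    on and off.  Attack sets how fast the gain falls (compression
    engaging); release sets how fast it recovers afterward."""
    attack = math.exp(-1.0 / (sample_rate * attack_ms / 1000.0))
    release = math.exp(-1.0 / (sample_rate * release_ms / 1000.0))
    gain = 1.0
    smoothed = []
    for target in target_gains:
        # falling gain uses the attack coefficient, rising uses release
        coeff = attack if target < gain else release
        gain = coeff * gain + (1.0 - coeff) * target  # one-pole filter
        smoothed.append(gain)
    return smoothed

# A sudden demand for half gain is approached gradually, not instantly.
gains = smooth_gain([0.5] * 100, attack_ms=5, release_ms=50)
```

Each output gain slides toward the target rather than jumping, which is why a well-set attack and release make the effect less noticeable.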

The knee is a gradual curve in how the compressor affects the amplitude.  So if your compressor is set at a -16 dB threshold with a 3:1 ratio, the knee may prevent the full 3:1 ratio from being applied right at -16 dB.  Instead, compression is applied on a gradient shaped by the knee.  It makes the compressor effect more gradual and less noticeable.
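One common textbook soft-knee formulation (there are several; the variable names here are mine) blends smoothly between "no compression" and "full ratio" across a knee-width window centered on the threshold:

```python
def compress_soft_knee(level_db, threshold_db, ratio, knee_db):
    """Compressor output level (dB) with a quadratic soft knee.

    Below the knee region the signal is untouched, above it the full
    ratio applies, and inside the knee the transition is a smooth blend.
    """
    half = knee_db / 2.0
    if level_db < threshold_db - half:
        return level_db                                          # no compression
    if level_db > threshold_db + half:
        return threshold_db + (level_db - threshold_db) / ratio  # full ratio
    # inside the knee: quadratic interpolation between the two segments
    over = level_db - threshold_db + half
    return level_db + (1.0 / ratio - 1.0) * over ** 2 / (2.0 * knee_db)
```

With a 4 dB knee, a signal sitting exactly at the -16 dB threshold is nudged down only about a third of a dB instead of taking the full ratio, which is the gentler onset the knee is for.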

Compressors limit the range of the amplitude of a sound.  This is key for making recorded sound easier to listen to on speakers.  There are as many different types of speakers as there are flavors of ice cream. Compressing sound to a smaller range makes it so listeners won’t hurt their ears if they turn up the volume during a quiet moment in a film that’s followed by a loud explosion scene. Compressing the sound should make it so that people don’t have to turn the volume up or down at all. 

Often, the more a sound is compressed the more pleasant the listening experience.  Many podcasts, audiobooks, and radio shows use compressors on the vocals so that voices sound similar throughout the program and listeners don’t have to fiddle with their knobs.

In music, pop songs are highly compressed.  This is pleasant and gives all songs a similar dynamic range.  Classical recordings are among the least compressed, because the full range of the instruments cannot be heard without a large dynamic range.  In a recording of an orchestral concert, you want to hear the contrast between a piccolo solo, the string section, and the full orchestra playing, without sacrificing the uniqueness of each sound.

Photo of author.

Limiters

Compressors are often used in conjunction with Limiters.  The two audio effects work very well in tandem.  As the Compressor makes the dynamic range smaller, it softens the loudest sounds.  Picture the waveform as a mountain range rising up from silence: if a sound peaks at -10 decibels, the peak of that soundwave is at -10 decibels, and when we compress it, the mountain moves to a lower peak.

A Limiter is often applied after a compressor.  Once you have the desired ratio, all peaks are compressed below a certain amplitude.  A limiter then raises or lowers those peaks equally across your sound’s amplitude without letting them go over a set ceiling.

The Limiter earned its name because even though it may increase the amplitude, it limits the amplitude to the Limiter’s threshold.  Like the Compressor’s threshold, a Limiter’s threshold is a cutoff point that says no sound peaks will go over this amount.  This prevents clipping, which is when a sound is too loud for the medium and distorts.  For digital audio this ceiling is 0 decibels.  For analog audio it varies depending on what you’re working with, but it is often from +6 to +12 decibels.

Limiters also have release and attack inputs which work the same as a compressor’s. They are measured in milliseconds and are when the Limiter kicks in (attack) and when the effect drops out (release).
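In the spirit of the description above, here is a minimal sketch: scale the whole clip so its loudest peak lands exactly at the ceiling, never over it. Real limiters apply time-varying gain with attack, release, and often lookahead; this toy (the function name is mine) captures only the core idea.

```python
def limit_peaks(samples, ceiling=1.0):
    """Scale a clip so its loudest peak sits exactly at `ceiling`
    (0 dBFS is 1.0 here) -- all peaks are moved up or down equally
    and can never exceed the ceiling."""
    peak = max(abs(s) for s in samples)
    if peak == 0:
        return list(samples)  # silence: nothing to scale
    gain = ceiling / peak
    return [s * gain for s in samples]

# A quiet clip is brought up so its biggest peak hits the ceiling.
print(limit_peaks([0.1, -0.5, 0.25]))  # [0.2, -1.0, 0.5]
```

Note the whole clip is scaled by one gain factor, so the dynamic range shaped by the compressor is preserved while the peaks are pushed up to (but never past) the limit.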

Why use a Limiter? 

So that the sound you are creating is loud enough to hear after a compressor is applied.  It would be very annoying to compress a song so it plays back quietly and then follow it with a song that is really loud, because you’d have to keep adjusting the volume.

For example, if you listen to a classical song that barely uses a compressor and has a huge dynamic range from -60 decibels all the way to -3 decibels, and follow it with a pop song, you want the loudest part of each song to hit the exact same peak.  That way, if you set the music at a house party at a certain volume, that volume is never exceeded and you don’t suddenly scare your neighbors with O Fortuna blasting right after a compressed pop song.  Most music players have a setting that levels the peaks of songs, so you may already be familiar with this automated process.


Pop songs actually use Limiters a lot.  This mixing trend grew with Rock N’ Roll’s appetite for loud music, and the Metal of the 1970s pushed it even further.  Sound mixers kept making mixes louder by compressing them to a small dynamic range and then raising the peaks to the maximum volume.  This puts songs near the top of the possible amplitude without going over 0 decibels and distorting.  Eventually, radio ads started competing with the music for the listener’s attention and were mixed even louder than the music.  This became known as the “Loudness Wars,” which really peaked (pardon the pun) in the 1990s and 2000s.  You can really hear the dynamic range shrinking, especially in the pop music of that time.

Compression and limiting are powerful tools that help you create the listening experience you want, whether it’s for a song, radio show, or even a movie mix.  Compressors and Limiters are used in virtually every professional sound mix.  Learn the ins and outs and what sounds good to your own ear, practice a lot, and you’ll have mastered one of the basics of sound mixing.


What is the difference between Reverb and Echo effects?

By Justin Joseph Hall 

Behind the scenes of “Abuela’s Luck” by Ricky Rosario. Photo by Daria Huxley.

Echo and reverb are almost the same audio effect; the one variable that separates them is time.  Both are reflections of sound in a space.  Echo is the more common word, and we know it as hearing a reflection of sound return to one’s ear quieter and later than what was said.  Famously, on television people shout into a canyon and hear what they said shortly after in fading repeats, equally spaced in time.

Reverb is the same concept as an echo but with a smaller reflection time, often returning within a second and overlapping with the sound that hasn’t finished yet.  For example, if I were to say, “I would like to hear my echo,” and applied an echo effect through some software, I might say the entire sentence and then hear the entire sentence back.  However, if I said the same thing and applied a reverb effect, you could start hearing the effect before I got to the second word of the sentence.  This replicates what it sounds like to hear reflections of sound in rooms with hard walls.
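Both effects can be approximated with the same feedback delay line; only the delay time differs (the function name and numbers below are illustrative). At 48 kHz, a delay of a few hundred milliseconds reads as a distinct echo, while delays of a few tens of milliseconds or less blur into reverb-like room sound:

```python
def add_reflection(samples, delay_samples, decay):
    """Feedback delay line: each output sample also carries a fainter
    copy of the output from `delay_samples` earlier, so reflections
    repeat and fade -- the canyon effect when the delay is long."""
    out = list(samples)
    for i in range(delay_samples, len(out)):
        out[i] += decay * out[i - delay_samples]
    return out

# A single click produces fading repeats, equally spaced in time.
print(add_reflection([1.0, 0.0, 0.0, 0.0, 0.0], delay_samples=2, decay=0.5))
# [1.0, 0.0, 0.5, 0.0, 0.25]
```

Because the loop reads from the already-processed output, each repeat spawns the next, fainter one, just like a shout bouncing back and forth between canyon walls.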

In some cases in real life, you may hear reverb and echo when short sound reflections (reverb) and longer sound reflections (echo) hit your ear simultaneously.  For instance, when you’re in a racquetball court, you are likely to hear the reflection from a nearby wall quickly, but the far wall may take a bit longer to reach your ear.  This kind of room creates a fun interplay of reflections.  Many rock songs from the 1980s famously use these kinds of combinations to create a feeling of epic vastness.  A great example is Phil Collins’ In The Air Tonight when the drums kick in.

Reverb and echo are not always necessary in film and music, but you should always consider what kind of space the sound seems to be in when applying these effects.  Longer echo or reverb sounds like bigger spaces, great halls, or canyons, while shorter, tighter echo or reverb can sound like a cramped space, like a small apartment bathroom.

The sound mixer would need to take these very different spaces into consideration when applying echo and/or reverb. Gif courtesy of HBO.

Creating a space with these two effects is one way of making different recordings sound unified.  It’s often part of any type of mixing in film or music.  For instance, if you’re recording music and the drums, amp, and vocals are all recorded at different times with different mics and mic placement, adding a room sound via reverb makes it sound like they may have all been playing at the same time.  It is often used during the mastering process to unify final sounds.

When filming a movie, you may record on location and then find in post-production that your project needs Automated Dialogue Replacement (ADR).  ADR is a re-recording of lines in the studio to replace dialogue recorded on set.  By creating a space with reverb and echo, you can help unify the different recordings within a scene, such as location sound mixed with ADR.  This is especially important if the two types of recordings play near one another.

Justin Joseph Hall is a video director, editor, and post-producer who used to mix audio for film, music, podcasts, and mastered songs for Bootsy Collins and others. For any more info or questions about sound mixing and/or mastering, write to Fourwind Films at info@fourwindfilms.com. Also sign up for our newsletter and podcast, Feature & a short where Brian Trahan, our sound mixer, adds reverb.

Sound Design vs. Sound Mixing: A Beginning Filmmaker’s Guide

By Justin Joseph Hall

One of the first things they teach you in an Intro to Production class is that bad sound is the fastest way a professional filmmaker can spot an amateur-made video.  If you’re new to filmmaking, it’s important to know the difference between sound design and sound mixing.  This is a first step toward understanding how to create good sound for your video.

Sound Design

Sound Design is the ambiance of the auditory space.  Let’s do an exercise together to help us learn.  Look around the room you're in right now.  What do you see?  Say those things out loud.  After that, close your eyes for one minute.  Listen to everything in the room.  What do you hear?  Say it out loud.  Be specific.  Do you hear a computer fan?  Birds out the window, friends chatting in the other room?  Is a train rolling by in the distance?  Write it all down.

A sound editor and foley artist create the feeling of the room. One way they do this is by recording each of the sounds you wrote down in the exercise we just did.  Other common sound effects include footsteps, clothes rustling, or even the sound of a refrigerator, radiator, or crickets chirping at night.

When we get to sound mixing we want to have a recording for each individual sound so you can adjust the loudness of each sound separately in the sound mix, which we’ll talk about in more detail later. 

Sound design is an amazing tool that many commercial entities and independent filmmakers don’t think about or utilize.  In a commercial setting, you may think sound design is an unnecessary excess.  However, half a day’s work from a sound designer can raise the production value of a video tenfold.

One specific place where it really helps is animation, because purely graphic animations don’t come with audio the way interviews or captured video do.  Yet animation is common in commercial video products and changes completely when sound is added.

One example that is quick and easy to show is logo animations, like this one my company created for PerformLine.  Watch it with sound, and then mute the video and watch again.  The sound adds energy to the logo and branding.

We created this animation and background for PerformLine.

Sound design encompasses a wide array of sound effects. Sound Designers adjust their effects to fit the aesthetic and world of the film. For example, in David Lynch's Eraserhead, sounds like water running in a bathtub, or the clanking of an old heater, are more menacing and noticeable than they are in everyday life. Anyone who’s seen The Matrix may remember the whooshing noise accompanying Neo’s slow-motion bullet-dodging. 

Famous scene from The Matrix (1999)

Both of these examples are louder than one would expect to hear in the real world (or see, in the case of The Matrix, but I digress), and that has to do with how the Sound Mixer worked with the sound design. So now that you are familiar with sound design, let’s learn about the next and final step in audio post-production: sound mixing.

Sound Mixing

A sound mix is the last step in finishing audio for a film.  Simply put, the sound mixer adjusts how loud or quiet each individual sound is to maximize the impact of the final video’s message.  The three main categories are dialogue, sound effects, and music.  It can be confusing, but Sound Mixers may also go by the title Re-Recording Mixer.
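At its core, the arithmetic of a mix is a weighted sum of the separate stems. A toy sketch (the function and track layout are mine, not any editing suite's API):

```python
def mix_tracks(tracks, gains):
    """Sum separately recorded stems (e.g. dialogue, effects, music),
    each scaled by its own gain -- turning one stem up or down changes
    its place in the blend without touching the others."""
    length = max(len(track) for track in tracks)
    mix = [0.0] * length
    for track, gain in zip(tracks, gains):
        for i, sample in enumerate(track):
            mix[i] += gain * sample
    return mix

# Dialogue at full level, music pulled down so it doesn't overpower.
balanced = mix_tracks([[0.4, 0.4], [0.6, 0.2]], [1.0, 0.25])
```

Every fader move in a mixing session ultimately changes one of those per-stem gains before the tracks are summed.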

In commercial videos, mixing interviews can make voices more pronounced, clear, and pleasant to listen to.  Colloquially, this process is also called “sound sweetening.”  This step is important for sound clarity as well as creating the ambiance for the film.  For example, it is very annoying to hear a video where the dialogue of an interviewee or a central character in a scene is overpowered by loud music or background characters. Don’t let your audience’s focus be pulled away from a story by bad sound mixing. 

It is also important to remember that a sound mixer can only do so much.  Some sound problems cannot be fixed after recording.  If you’re recording an interview with a rock fan at a concert while the band is playing loudly, it is often impossible to separate the person speaking from the loud background music.  An accomplished Sound Mixer can adjust audio to improve it, but it’s important to record clear, high-quality audio to obtain optimal results.

If you have any questions or would like more information go to our website www.fourwindfilms.com, or write to me directly at justin.joseph.hall@fourwindfilms.com.