I’ve got to say, the new features in Reason 7 are killer!  I’m having a whole lot of fun working with everything from the new Spectrum EQ Window, to the buses and parallel channels on the Main Mixer.  And of course all the rocking Rack Extension devices, especially the Korg Polysix and Propellerhead PX7 instruments, the Peff Buffre and FXpansion Etch Red Filter effects, and iZotope Ozone for mastering.  Way too much fun and not enough time in a day to play with all these new shiny toys!

And did I mention that you can turn audio loops into REX loops directly in Reason now?  No more need for the ReCycle program.  This is so cool!

About the new bus paths in Reason’s Main Mixer.

And it’s so easy to create parallel channels now.

Remix Song Structure

Jul 20 2013

Do remixes have a defined song structure?  Yes they do, and for good reason. The object of a dance music remix is to get the song that’s been remixed played in the clubs. To do this, having a DJ-friendly song structure that makes it easy for a DJ to play the remix in their set is mission critical. Also, having a song structure that breathes energy into the dance floor through dramatic rises, breaks, and drops will make your remix popular with dance music patrons and DJs alike. To this end, a well-defined remix song structure is of paramount importance. Here’s my take on understanding and analyzing good remix song structure.

I’ve worked in a lot of DAW programs: Digital Performer, Cubase, Pro Tools, Logic, Live, and Reason.  I keep coming back to Reason for its amazing sonic palette (which has grown immensely with the release of Rack Extensions) and its inspiring interface for developing custom sounds.  I’m a big believer in taking the time to design sounds that fit each production, and then recycling these sounds for future productions.  Over time, I believe this helps you to develop a voice as a producer.  Your productions will be recognizable not only from your arrangements and writing, but also from your personal bank of synth and sampler patches.

Reason makes developing such sounds as easy as plug and play, like simple object-oriented programming.  You don’t need to be a technically minded sound-design whiz to cook up great-sounding patches using just a Combinator and some simple layering techniques.  This is how I make a lot of the sounds you can hear in my productions.  In this video, Easy Sound Design with Reason (or, Building a Cool Electro Bass), I demonstrate how you can easily and quickly build your own custom sounds without needing to understand anything too technical.

AES Report 2012

Nov 07 2012

The Audio Engineering Society (AES) held its annual convention in the beautiful city of San Francisco a couple of weekends ago.  It was the 133rd convention!  That’s a lot of shows.  I had the opportunity to visit the showroom floor where manufacturers were hawking their wares.  It’s always a fun atmosphere in which to see, touch, and learn about the newest and coolest music production gear.

As a rule, when I hit the showroom floor I keep my eyes open for specific products, as well as anything groundbreaking that might make my job easier and inspire my music production work.  This year I was looking out for studio monitor control devices, mid-sized studio monitors with great bass response, MIDI controllers with finger pads, and innovative work surfaces.

The monitor controller that caught my eye was the Oculus by Shadow Hills.  Its fat, ergonomic level knob felt great, and its toggle switches for selecting input and monitor sources were a pleasant change from the usual push buttons.  Most impressive of all, it was wireless!  The company’s demo guy handed me the Oculus controller, sat me in front of an array of Barefoot Sound monitors, and asked me what I wanted to hear.  Of course I asked, “Got some dance music with good bass?”  He happily obliged my musical preference and the next thing you know I’m banging an EDM track while fluidly switching between three sets of speakers.  Selecting speakers with the Oculus was a real pleasure, and the Barefoot Monitors sounded amazing.  I was especially impressed by their smooth, consistent mid and high frequency response across three different sized speakers: MicroMain35, MicroMain27, and MiniMain12.  The large MiniMain12 and mid-sized MicroMain27 speakers both had excellent, tight bass response.  I was impressed.

Oculus by Shadow Hills

Barefoot Monitors

While fantasizing that I could somehow fit the MiniMain12 speakers into my home studio (never mind afford them: $19,950 a pair, ouch!), I heard two people behind me say, “We’ve got these speakers in a room at Berklee.”  I turned around to see Mark Wessel and Leanne Ungar, both Berklee College of Music Associate Professors in the Music Production and Engineering department.  Pretty cool!  It’s always a lot of fun to meet people at these shows, especially fellow Berklee folk.

Akai also had a booth at which the new Akai MPC Renaissance and its little brother, the Akai MPC Studio, were on display.  Of course I had to try out some finger drumming on the pads to see if they felt at all similar to the classic MPC pads.  I was not disappointed.  The MPC Renaissance felt especially good, with a solid build, responsive pads that are velocity- and aftertouch-sensitive, and a bank of sixteen very grab-and-turn-friendly rotary knobs.  The Renaissance is a surefire hit for folks wanting that classic MPC feel in a fully integrated MIDI controller and beat-making platform.  The MPC Studio’s pads felt identical to the Renaissance’s, but its dials felt decidedly inferior to the Renaissance’s rotary knobs.  I found myself wondering: why would you design a controller with dials that feel like mini plastic plates rather than knobs you can grab between your thumb and forefinger?  Maybe they’re for spinning rather than turning and I’m missing the point?  In any case, some knobs on the Studio would be nice.

Akai MPC Renaissance


Raven Multitouch Audio Production Console by Slate Pro Audio

One booth that always had a crowd was Slate Pro Audio, where they were demonstrating the Raven Multitouch Audio Production Console.  It appears to be a truly innovative work surface: a giant touch screen from which to control your DAW program.  I’m always wanting a bigger screen, and this definitely fits the bill!  But wait, didn’t I just say I like knobs to turn?  This is more like a really giant iPad.  In any case, it’s an intriguing concept and it will be interesting to see how the platform develops.  Really amazing technology.

It doesn’t matter if you’ve got the best gear money can buy if your studio isn’t properly set up.  I can’t tell you how many home studios I’ve seen with improperly positioned monitors, uncomfortable workstations, and poorly tuned rooms.  What you end up with is a sound that might be fine in your home studio but doesn’t translate at all to the outside world.  And you’re left scratching your head, wondering why the best gear you could afford isn’t sounding right.  Well, you’ve got to set it up correctly in order to truly hear what you’re doing.  This doesn’t mean you have to build your own room from scratch or spend a ton on acoustic material; you just need to understand basic acoustic principles and apply some common sense.

When I did consulting, I used to go into home studios and help clients set up their gear for the best results.  When I saw Grammy-winning audio engineer Francis Buckley’s Studio Rescue series (sponsored by Rode Microphones) on YouTube, I said to myself, “Wow, that’s exactly what I would have recommended. I’ve got to tell my students about these videos.”  They’re really excellent.  Buckley knows what he’s talking about and offers practical advice on working with the space you have and on tuning it using furniture placement and a few strategically placed Vicoustic foam panels.  Watch this video series if you’re not sure how to position all the gear in your home studio.  I guarantee you’ll learn a ton.

There are twelve episodes posted so far.  Here are a few direct links:

Studio Rescue – Episode 1

http://youtu.be/02qpJt0hsL0

Studio Rescue – Episode 9

http://youtu.be/pf_7sC9wV8Q

Studio Rescue – Episode 12

http://youtu.be/K0iuj56c_eg

Headed to SXSW

Mar 06 2012

I just want to let everybody know that I’ll be in Austin for the SXSW Conference next week.  I will be presenting a workshop on producing music in Reason in the Artist Central area on Thursday, March 15th, from 4 to 5 PM.  If you’re around, definitely drop in and say “Hi!”  (Or is it “Howdy!” in Texas?)

If you would like to hear a couple of the EDM tracks I’ve been producing in Reason lately, check out my new singles, “Energy” and “Hiccup”.  “Energy” is out now (available everywhere, including iTunes, and soon Beatport), and “Hiccup” will be released next month on my Synchronized Music label.

“Energy” on Juno Download

“Hiccup” on Soundcloud

Hiccup (Original Mix) by Erik Hawk Music

I recently had the opportunity to remix a previously unreleased Scatman Crothers song, “Scoot On Over To Scat’s” (produced by Andrew Melzer in 1979).  It was a lot of fun to work on a track from such an icon of the 70s.  It was also a serious challenge, because all I had to work with was an unmastered stereo mix.  The multitrack tapes had been lost long ago.  But, as the saying goes, “What doesn’t kill you makes you stronger.”  Or, at the very least, it gave me a serious workout with Pro Tools’ Elastic Audio warp markers and with writing my own music on top of a preexisting disco groove. Whew!

Here’s a video tour of my Pro Tools session explaining how I pulled off this remixing magic.

You can hear the original “Scoot On Over To Scat’s” here: http://youtu.be/jsXxRFxATaU.  And this is my remix. Enjoy!

Scatman’s Background

Benjamin Sherman Crothers, born May 23rd, 1910 in Terre Haute, Indiana (passed away November 22nd, 1986 in Van Nuys, California), started performing on the speakeasy circuit of Chicago in the latter part of the 20s.  In 1931, he got his own radio show on WFMK Dayton, Ohio, billing himself as “Scat Man”. In 1935, he made his first appearance in a film, a short called “Symphony In Black” with Duke Ellington and Billie Holiday. He would go on to act in 45 more motion pictures including “The Shining”, “One Flew Over The Cuckoo’s Nest”, “Bronco Billy”, “The Aristocats”, “The Shootist”, “Silver Streak”, “Lady Sings The Blues”, “Scavenger Hunt”, “Twilight Zone: The Movie”, and “The Transformers: The Movie”.

In 1943, Scatman moved to Hollywood, California and hired an agent. In 1948 he was one of the first African-Americans to land a recurring role on a network TV show, “Dixie Showboat”. Over the next three decades, Scatman appeared in hundreds of TV programs including 65 episodes of NBC’s sitcom “Chico and the Man” as Louis the garbage-man, 18 guest appearances on Johnny Carson’s “Tonight Show”, and “Colgate Comedy Hour”, “The Jack Benny Show”, “Nat King Cole Show”, “The Steve Allen Show”, “Casablanca”, “Hong Kong Phooey”, “Roots”, “The Super Globetrotters”, and “Sanford and Son”. Scatman Crothers received a star on the Hollywood Walk of Fame, in front of the Egyptian Theatre.

After returning from summer vacation in Hawaii I needed a little remixing exercise to get me back into a music production mood. Kerli’s “Army of Love” remix opportunity on www.indabamusic.com looked like just the ticket. I signed up and downloaded the vocal stems. Technically speaking, they weren’t the best vocal stems I’ve ever heard. They downloaded as WAV files but sounded like they had been converted from MP3 files. And there was some type of parallel effect or headphone bleed mixed in with the backing vocals; you could hear the original music in the backing vocals stem. But this is what everybody had to work with, so I got busy.

I time-aligned the vocals in my Pro Tools session, made a slight tempo change (a few beats per minute faster), and found one of my tracks that sort of matched Kerli’s performance. I didn’t want to spend a lot of time on this, so using one of my already produced tracks made the most sense. With a key and tempo change, a few new chords, and a couple of structural modifications to the drums I was able to get a tight-sounding mashup.

Then, I spent a lot of time making the vocal stems pop in the mix. The magic bullet for this job was iZotope’s Nectar, the “complete vocal suite” plug-in, and the always fun Stutter Edit plug-in (which I’ve blogged about previously, Stutter Edit by BT). I also did a lot of automation on the vocals to remove breaths, background noise, and bleed from the original tracks.

Whew, a lot of work, but I think it came out sounding pretty cool. It was definitely a good warm up before getting back into my busy composing and production schedule. Plus, I got to play with Nectar and Stutter Edit. All the synth and drum sounds are being produced by Reason 5, rewired into Pro Tools 9. Here’s the video tour of my remix session.

I also threw together a cool video montage to go with the remix. You can find it on my Facebook page, www.facebook.com/erikhawkmusic. (Like me on Facebook please.) And, if you feel inspired to do so, you can listen to and vote for my remix on the indaba music site.  Thanks!

Here’s a common mistake I see over and over: producing and mixing a track with a gain maximizer on your mixer’s main output. Examples of gain maximizers are the Waves L2 Ultramaximizer, Avid Maxim, or, if you’re working in Reason or Record, the MClass Maximizer. These devices are designed to limit a signal’s peaks and then automatically optimize the output (called automatic gain makeup), in relation to a given threshold, to a specific level you’ve set (such as 0 dB or –0.2 dB). Or, to put it in simpler terms, to make your audio sound as loud as possible.

If you don’t realize that you’re working through a gain maximizer, you’re probably thinking that your production sounds wonderfully loud and full. Indeed, some software programs actually feature default templates with a maximizer on the mixer’s main output (such as Reason). This is a great sales pitch, since it makes your track sound totally bombastic, but it’s not reality. The reality is that if you bypass the maximizer you’ll most likely discover that you’re clipping (exceeding 0 dB) your main output, badly. It’s only because of the maximizer that you can’t see or hear your woefully out of balance gain structure. Instead, the clipping is being rounded out by the maximizer’s brick-wall limiter algorithm in order to sound more palatable to your ear. But, the clipping is still there, your overdriven mixer channels are still there, your poor gain structure is still there.
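To make the masking concrete, here’s a toy brick-wall limiter with automatic gain makeup, sketched in Python. The numbers and function names are my own simplification, not any actual plug-in’s algorithm (real maximizers use look-ahead buffers and smooth release curves). A mix that peaks a full 6 dB over full scale comes out of the maximizer looking perfectly “clean”:

```python
import math

def brickwall_maximize(signal, threshold_db=-6.0, ceiling_db=-0.2):
    """Toy gain maximizer: brick-wall limit peaks above the threshold,
    then apply automatic gain makeup so the loudest peak hits the ceiling."""
    threshold = 10 ** (threshold_db / 20)   # ~0.50 in linear amplitude
    ceiling = 10 ** (ceiling_db / 20)       # ~0.977 in linear amplitude
    limited = [max(-threshold, min(threshold, s)) for s in signal]
    peak = max(abs(s) for s in limited)
    return [s * (ceiling / peak) for s in limited]  # automatic gain makeup

# A "mix" that clips badly: a sine wave peaking 6 dB over full scale.
hot_mix = [2.0 * math.sin(2 * math.pi * n / 100) for n in range(1000)]
print(max(abs(s) for s in hot_mix))   # 2.0 -- the raw mix blows past 0 dBFS
out = brickwall_maximize(hot_mix)
print(max(abs(s) for s in out))       # ~0.977 -- meters look fine now
```

The output never exceeds the ceiling, so no clip indicator will light up, but the flattened, distorted peaks are still baked into the waveform. That’s exactly what you’re hearing (or rather, not hearing) when you mix through a maximizer.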

As a dramatic example, I often demonstrate this mistake in Reason. With the default Mastering Combinator inserted on the mixer’s main output I create a Dr. Octo REX, press Play, and then turn up all the levels to the max: Dr. Octo’s Master Level, the mixer channel’s level, and the mixer’s Master Fader. It sounds great and there’s no clipping indicated on the Audio Output Clipping Indicator on the Transport Bar. But, then, I Bypass the Mastering Combinator and the Clip Indicator immediately illuminates, and stays on nonstop. In fact, in this extreme example you can actually hear the clipping, and this is difficult to do in Reason because its main outputs seem to be pretty forgiving even when you’re seeing the Clip indicator.

So, now, the obvious question: if everything sounds fine with a maximizer on my main output, why should I care? There are a few reasons why it’s not a good idea to produce and mix with a gain maximizer on your main output:

Just because you can’t see or hear the clipping doesn’t mean it’s not there. And, if it’s there, then when you master your mix all you’re doing is trying to smooth out the clips. You’re gain maximizing your entire mix, clips included, and this can only lead to an inferior sounding master.

If the maximizer is trying to turn your signals up, or down, for optimum loudness, then whenever you adjust a level or EQ a signal in your mix, the maximizer is automatically countering your move. Consequently, you won’t have the full dynamic range to work in; you’ll be limited to the dynamic range that the maximizer is setting for you. With the maximizer countering every move you make in the mix, you aren’t really hearing your work. Talk about counterproductive.
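You can see the countering effect in a toy model (again, my own simplified sketch, not any real plug-in’s algorithm): push the whole mix 6 dB hotter and the output peak doesn’t budge, because the automatic gain makeup rescales everything right back.

```python
import math

def maximize(signal, ceiling=0.977):
    """Toy automatic gain makeup: rescale so the loudest peak hits the ceiling."""
    peak = max(abs(s) for s in signal)
    return [s * ceiling / peak for s in signal]

mix = [math.sin(2 * math.pi * n / 64) for n in range(256)]
louder = [2.0 * s for s in mix]               # you push the fader up 6 dB...
print(max(abs(s) for s in maximize(mix)))     # 0.977
print(max(abs(s) for s in maximize(louder)))  # 0.977 -- the maximizer undid your move
```

Same output peak either way: the fader move you thought you made never reaches your ears.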

As a rule, maximizers are serious processor hogs. Think about it: they have to look ahead at the digital signal and adjust every upcoming peak according to their Threshold and Output Gain settings. So, with a maximizer inserted on your main output, can you imagine how much latency you’re introducing? The answer is a lot: thousands of samples. Just try monitoring a live signal (such as a vocal or guitar) through a maximizer inserted on your main output and you’ll immediately hear what I’m talking about. It’s a disturbing amount of latency, and there’s no reason to be fighting for processor resources when all you have to do is delete the maximizer from your signal path. (When you’re mastering, this sort of latency on your main output isn’t an issue.)
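The latency math is simple: a limiter that looks ahead by some number of milliseconds has to delay the signal by that many milliseconds’ worth of samples. The 50 ms figure below is just a hypothetical example, not a spec from any particular plug-in:

```python
def lookahead_latency_samples(lookahead_ms, sample_rate=44100):
    """Samples of delay a limiter's look-ahead buffer adds to the signal path."""
    return round(lookahead_ms / 1000 * sample_rate)

# A hypothetical 50 ms look-ahead at a 44.1 kHz session rate:
print(lookahead_latency_samples(50))          # 2205 samples
# The same look-ahead at 48 kHz:
print(lookahead_latency_samples(50, 48000))   # 2400 samples
```

A couple thousand samples of delay is instantly audible when you’re monitoring a live vocal or guitar, which is exactly why it feels so disturbing.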

If you’re already slamming your mix through a maximizer, you’ve pegged 0 dB, and everything is as loud as it can possibly be with hardly any dynamics left in your mix, what’s left for a mastering engineer to do? The answer is, not much. If you’re serious about releasing your music, leave some dynamics in your mix for a mastering engineer to work with.

Having said all this, I think it’s a great idea to fine-tune your mix through a maximizer when you’re mastering directly in your multitrack mix session. I do this all the time after my mix is complete, especially for reference mixes (tracks that need to impress clients) and background music for film and TV. However, if it’s for an album cut that I plan to send out for mastering, the maximizer effect (some maximizers, such as the L2 and Maxim, have dithering and bit reduction that can be used independently of their gain maximizer functions) is out of the signal path altogether.

I’ve received many requests for tutorials on writing/producing a hip-hop or dance beat. In theory, this is a nice idea. In reality, there’s just no way you can encapsulate all of the creative and technical know-how that goes into writing and producing a great sounding beat in a single tutorial. Fortunately, that hasn’t stopped me from trying, because even if I can’t pack all of the relevant information into one tutorial, it’s still worth doing for the information that I can share in about a ten-minute video.

So, I threw on some clothes, my Remix Miami T-shirt, didn’t bother to shave, set up the camera (top view down so you could see my hands on the control surfaces), and wrote a hip-hop style beat off the top of my head. It took me around 40 minutes, but I edited the whole process down to about a 12-minute video. Obviously, there are some parts missing, such as playing with MPC backdrops for Kong, or running the hi-hats through a compressor. But, if you watch carefully, it’s all there, because in addition to the techniques I describe as I’m working, you can also see all the device settings and the connections when I flip Reason’s rack over. The video is in HD so you can totally see all the details. I used Reason 5, Kong for all the drum sounds, and Thor for the bass line. Enjoy!