One of my favorite things about Reason is that you never need to worry about updating your DAW’s third-party plug-ins, because all the virtual instruments and effects you need are built right into Reason.  In other words, you can open a song from two years ago and it will open perfectly, every time, without any missing plug-ins.  Too cool!

However, an obvious complaint about this system was that there weren’t enough virtual instruments and effects to choose from.  End users couldn’t pick up a Korg virtual synth or a McDSP compressor plug-in and add these software devices to their music production toolkit.  Fortunately, this has all changed with the brilliant implementation of Rack Extensions (REs): essentially a plug-in system for Reason, but without giving up the original idea that you should never find yourself in the frustrating position of opening an old song file only to find missing and out-of-date plug-ins where there used to be sound.  Trust me, not a fun situation.

Today, in the Propellerhead RE store there’s a cornucopia of fantastic devices available for your Reason Rack. This includes synths by Korg, Rob Papen, and Synapse, and effects by McDSP, Softube, and u-he, just to mention a few.  Indeed, there are so many awesome REs now that it’s getting hard to choose.  In this video I share a few of my favorites, broken into three categories: synths, effects, and utilities.  And don’t forget, if you’re a Berklee Online student you get 30% off a select number of REs until the end of this month.

I’ve got to say, the new features in Reason 7 are killer!  I’m having a whole lot of fun working with everything from the new Spectrum EQ Window to the buses and parallel channels on the Main Mixer.  And, of course, all the rocking Rack Extension devices, especially the Korg Polysix and Propellerhead PX7 instruments, the Peff Buffre and FXpansion Etch Red filter effects, and iZotope Ozone for mastering.  Way too much fun and not enough time in a day to play with all these new shiny toys!

And did I mention that you can turn audio loops into REX loops directly in Reason now?  No more need for the ReCycle program.  This is so cool!

About the new bus paths in Reason’s Main Mixer.

And it’s so easy to create parallel channels now.

My Berklee Online Courses:

I’ve worked in a lot of DAW programs: Digital Performer, Cubase, Pro Tools, Logic, Live, and Reason.  I keep coming back to Reason for its amazing sonic palette (which has grown immensely with the release of Rack Extensions) and its inspiring interface for developing custom sounds.  I’m a big believer in taking the time to design sounds that fit each production, and then recycling these sounds for future productions.  Over time, I believe this helps you to develop a voice as a producer.  Your productions will be recognizable not only from your arrangements and writing, but also from your personal bank of synth and sampler patches.

Reason makes developing such sounds as easy as plug and play, a bit like simple object-oriented programming.  You don’t need to be a technically minded sound-design whiz to cook up great-sounding patches using just a Combinator and some simple layering techniques.  This is how I make a lot of the sounds you can hear in my productions.  In this video, Easy Sound Design with Reason (or, Building a Cool Electro Bass), I demonstrate how you can quickly and easily build your own custom sounds without needing to understand anything too technical.

It doesn’t matter if you’ve got the best gear money can buy if your studio isn’t properly set up.  I can’t tell you how many home studios I’ve seen with improperly positioned monitors, uncomfortable workstations, and a poorly tuned room.  What you end up with is a sound that might be fine in your home studio but doesn’t translate at all to the outside world.  And you’re left scratching your head, wondering why the best gear you could afford doesn’t sound right.  Well, you’ve got to set it up correctly in order to truly hear what you’re doing.  This doesn’t mean you have to build your own room from scratch or spend a ton on acoustic material; you just need to understand basic acoustic principles and apply some common sense.

When I did consulting I used to go into home studios and help clients set up their gear for the best results.  When I saw Grammy Award-winning audio engineer Francis Buckley’s Studio Rescue series (sponsored by Rode Microphones) on YouTube, I said to myself, “Wow, that’s exactly what I would have recommended. I’ve got to tell my students about these YouTube videos.”  They’re really excellent.  Buckley knows what he’s talking about and offers practical advice on working with the space you have and tuning it using furniture placement and a few strategically placed Vicoustic foam panels.  Watch this video series if you’re not sure how to position all the gear in your home studio.  I guarantee you’ll learn a ton.

There are twelve episodes posted so far.  Here are a few direct links:

Studio Rescue – Episode 1

http://youtu.be/02qpJt0hsL0

Studio Rescue – Episode 9

http://youtu.be/pf_7sC9wV8Q

Studio Rescue – Episode 12

http://youtu.be/K0iuj56c_eg

Headed to SXSW

Mar 06 2012

I just want to let everybody know that I’ll be in Austin for the SXSW Conference next week.  I will be presenting a workshop on producing music in Reason in the Artist Central area on Thursday, March 15th, from 4 to 5 PM.  If you’re around, definitely drop in and say “Hi!”  (Or is it “Howdy!” in Texas?)

If you would like to hear a couple of the EDM tracks I’ve been producing in Reason lately, check out my new singles, “Energy” and “Hiccup”.  “Energy” is out now (available everywhere, including iTunes, and soon Beatport), and “Hiccup” will be released next month on my Synchronized Music label.

“Energy” on Juno Download

“Hiccup” on Soundcloud

 
Hiccup (Original Mix) by Erik Hawk Music

It’s super easy to set up sidechain compression in Reason.  And this is the key to producing that classic, pulsing synth pad sound you hear in dance music: you know, the synth pad that throbs in time with the kick drum.  Here’s a video on how to set this type of sound up in Reason.  Plus, I show you how to keep it going even when your song’s main kick drum drops out, so you can produce inspirational breaks in your arrangement without ever losing the pulse of the kick.

Here’s the completed combinator patch that I demonstrate in the video so you can explore how it’s put together right in your own Reason Rack.

Combinator Patch [COMING SOON]
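If you’d like to see the ducking idea spelled out in code rather than cables, here’s a rough offline sketch in Python.  To be clear, this is just a minimal illustration of sidechain pumping, not Reason’s actual compressor, and every parameter value (attack, release, depth, tempo) is an assumption for the demo.  The useful takeaway is that the signal driving the gain reduction doesn’t have to be the kick you actually hear in the mix, which is exactly how you keep the pump going through a break.

```python
# Minimal offline sketch of sidechain "pumping": the kick's envelope ducks a pad.
# Not Reason's compressor; all parameter values are illustrative assumptions.
import numpy as np

SR = 44100  # sample rate in Hz

def envelope_follower(x, attack_ms=5.0, release_ms=120.0, sr=SR):
    """One-pole envelope follower on the absolute value of the signal."""
    att = np.exp(-1.0 / (sr * attack_ms / 1000.0))
    rel = np.exp(-1.0 / (sr * release_ms / 1000.0))
    env = np.zeros_like(x)
    prev = 0.0
    for i, sample in enumerate(np.abs(x)):
        coeff = att if sample > prev else rel
        prev = coeff * prev + (1.0 - coeff) * sample
        env[i] = prev
    return env

def sidechain_duck(pad, key, depth=0.8):
    """Duck `pad` by up to `depth` (0..1) whenever the `key` signal is loud."""
    env = envelope_follower(key)
    env = env / (env.max() + 1e-12)   # normalize the control signal
    gain = 1.0 - depth * env          # loud kick -> low gain on the pad
    return pad * gain

# Quick demo: a sustained "pad" tone ducked by a four-on-the-floor kick at 120 BPM.
t = np.arange(SR * 2) / SR                       # two seconds
pad = 0.5 * np.sin(2 * np.pi * 110.0 * t)        # sustained A2 tone
kick = np.zeros_like(t)
for beat in np.arange(0, 2.0, 0.5):              # a kick hit every half second
    idx = int(beat * SR)
    n = int(0.1 * SR)
    kick[idx:idx + n] = np.sin(2 * np.pi * 55.0 * t[:n]) * np.linspace(1, 0, n)

pumping_pad = sidechain_duck(pad, kick, depth=0.9)
```

Note that the `kick` array here is only the key signal; whether or not it also plays in the mix is up to you, which is the whole trick behind keeping the pulse alive during a break.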

 

I recently had the opportunity to remix a previously unreleased Scatman Crothers song, “Scoot On Over To Scat’s” (produced by Andrew Melzer in 1979).  It was a lot of fun to work on a track from such an icon of the ’70s.  It was also a serious challenge, because all I had to work with was an unmastered stereo mix; the multitrack tapes had been lost long ago.  But, as the saying goes, “What doesn’t kill you makes you stronger.”  Or, at the very least, it gave me a serious workout using Pro Tools Elastic Audio’s warp markers and writing my own music on top of a preexisting disco groove. Whew!

Here’s a video tour of my Pro Tools session explaining how I pulled off this remixing magic.

You can hear the original “Scoot On Over To Scat’s” song here, http://youtu.be/jsXxRFxATaU.  And this is my remix. Enjoy!

Scatman’s Background

Benjamin Sherman Crothers, born May 23rd 1910 in Terre Haute, Indiana (passed away November 22nd, 1986 in Van Nuys, California), started performing in the speak-easy circuit of Chicago in the latter part of the 20s.  In 1931, he got his own radio show on WFMK Dayton, Ohio, billing himself as “Scat Man”. In 1935, he made his first appearance in a film, a short called “Symphony In Black” with Duke Ellington and Billie Holiday. He would go on to act in 45 more motion pictures including “The Shining”, “One Flew Over The Cuckoo’s Nest”, “Bronco Billy”, “Aristocats”, “The Shootist”, “Silver Streak”, “The Lady Sings The Blues”, “Scavenger Hunt”, “Twilight Zone: The Movie”, and “Transformers: The Movie”.

In 1943, Scatman moved to Hollywood, California and hired an agent. In 1948 he was one of the first African-Americans to land a recurring role on a network TV show, “Dixie Showboat”. Over the next three decades, Scatman appeared in hundreds of TV programs including 65 episodes of NBC’s sitcom “Chico and the Man” as Louis the garbage-man, 18 guest appearances on Johnny Carson’s “Tonight Show”, and “Colgate Comedy Hour”, “The Jack Benny Show”, “Nat King Cole Show”, “The Steve Allen Show”, “Casablanca”, “Hong Kong Phooey”, “Roots”, “The Super Globetrotters”, and “Sanford and Son”. Scatman Crothers received a star on the Hollywood Walk of Fame, in front of the Egyptian Theatre.

Here’s a common mistake I see over and over: producing and mixing a track with a gain maximizer on your mixer’s main output. Examples of gain maximizers are the Waves L2 Ultramaximizer, Avid Maxim, or, if you’re working in Reason or Record, the MClass Maximizer. These devices are designed to limit a signal’s peaks and then automatically optimize the output (called automatic gain makeup), in relation to a given threshold, to a specific level you’ve set (such as 0 dB, or -0.2 dB). Or, to put it in simpler terms, to make your audio sound as loud as possible.
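To make the “automatic gain makeup” part concrete, here’s a crude sketch in Python of the concept: raise the whole signal by the difference between the threshold and the output ceiling, then flatten whatever pokes above the ceiling. This is emphatically not the actual algorithm inside the L2, Maxim, or MClass Maximizer (real maximizers use look-ahead and smoothed gain envelopes rather than a hard clamp), and the threshold and ceiling values are just illustrative.

```python
# Crude conceptual sketch of a gain maximizer: threshold + automatic gain makeup
# to an output ceiling. Real maximizers are far more sophisticated; this hard
# clamp only shows why the loudness comes "for free" while clipping stays hidden.
import numpy as np

def db_to_lin(db):
    return 10.0 ** (db / 20.0)

def crude_maximizer(x, threshold_db=-6.0, ceiling_db=-0.2):
    """Boost everything by (ceiling - threshold) dB, then clamp peaks at the ceiling."""
    makeup = db_to_lin(ceiling_db - threshold_db)   # automatic gain makeup
    ceiling = db_to_lin(ceiling_db)
    y = x * makeup                                  # the whole mix gets louder
    return np.clip(y, -ceiling, ceiling)            # overs are flattened, not fixed

# Example: a mix that already clips (peaks above 1.0) still "looks fine" afterward.
mix = np.sin(np.linspace(0, 200 * np.pi, 44100)) * 1.4   # a deliberately hot signal
maximized = crude_maximizer(mix, threshold_db=-6.0, ceiling_db=-0.2)
print(np.max(np.abs(mix)), np.max(np.abs(maximized)))    # ~1.4 vs ~0.977
```

The point of the example is that a mix that was already clipping still comes out neatly at the ceiling; the overs aren’t repaired, they’re just hidden.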

If you don’t realize that you’re working through a gain maximizer, you’re probably thinking that your production sounds wonderfully loud and full. Indeed, some programs (Reason among them) actually ship default templates with a maximizer on the mixer’s main output. This is a great sales pitch, since it makes your track sound totally bombastic, but it’s not reality. The reality is that if you bypass the maximizer you’ll most likely discover that you’re clipping (exceeding 0 dB) your main output, badly. It’s only because of the maximizer that you can’t see or hear your woefully out-of-balance gain structure. Instead, the clipping is being rounded out by the maximizer’s brick-wall limiter algorithm in order to sound more palatable to your ear. But the clipping is still there, your overdriven mixer channels are still there, your poor gain structure is still there.

As a dramatic example, I often demonstrate this mistake in Reason. With the default Mastering Combinator inserted on the mixer’s main output I create a Dr. Octo REX, press Play, and then turn up all the levels to the max: Dr. Octo’s Master Level, the mixer channel’s level, and the mixer’s Master Fader. It sounds great and there’s no clipping indicated on the Audio Output Clipping Indicator on the Transport Bar. But, then, I Bypass the Mastering Combinator and the Clip Indicator immediately illuminates, and stays on nonstop. In fact, in this extreme example you can actually hear the clipping, and this is difficult to do in Reason because its main outputs seem to be pretty forgiving even when you’re seeing the Clip indicator.

So, now, the obvious question is: if everything sounds fine with a maximizer on my main output, why should I care? There are a few reasons why it’s not a good idea to produce and mix with a gain maximizer on your main output:

Just because you can’t see or hear the clipping doesn’t mean it’s not there. And if it is there, then when you master your mix all you’re doing is trying to smooth out the clips. You’re gain maximizing your entire mix, clips included, and this can only lead to an inferior-sounding master.

If the maximizer is trying to turn up, or turn down, your signals for optimum loudness, then whenever you adjust a level or EQ a signal in your mix the maximizer is automatically countering your move. Consequently, you won’t have the full dynamic range to work in; you’ll be limited to the dynamic range the maximizer is setting for you. With the maximizer countering every move you make in the mix, you aren’t really hearing your work. Talk about counterproductive.

As a rule, maximizers are serious processor hogs. Think about it: they have to look ahead at the digital signal and adjust every upcoming peak according to their Threshold and Output Gain settings. So, with a maximizer inserted on your main output, can you imagine how much latency you’re introducing? The answer is a lot: thousands of samples. Just try monitoring a live signal (such as a vocal or guitar) through a maximizer inserted on your main output and you’ll immediately hear what I’m talking about. It’s a disturbing amount of latency, and there’s no reason to be fighting for processor resources when all you have to do is delete the maximizer from your signal path. (When you’re mastering, this sort of latency on your main output isn’t an issue.)
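To put some rough numbers on that, here’s the back-of-the-envelope math. The look-ahead figures below are my own illustrative assumptions, not published specs for any particular maximizer.

```python
# Back-of-the-envelope look-ahead latency (sample counts are illustrative
# assumptions, not published figures for any particular plug-in).
sample_rate = 44100  # Hz

for lookahead_samples in (64, 1024, 4096):
    latency_ms = lookahead_samples / sample_rate * 1000.0
    print(f"{lookahead_samples:>5} samples of look-ahead ~ {latency_ms:.1f} ms")

# 64 samples   ~  1.5 ms  (you'd never notice)
# 1024 samples ~ 23.2 ms  (noticeable on a live vocal)
# 4096 samples ~ 92.9 ms  (disturbing to monitor through)
```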

If you’re already slamming your mix through a maximizer, you’ve pegged 0 dB, and everything is as loud as it can possibly be with hardly any dynamics left in your mix, what’s left for a mastering engineer to do? The answer is, not much. If you’re serious about releasing your music, leave some dynamics in your mix for a mastering engineer to work with.

Having said all this, I think it’s a great idea to fine-tune your mix through a maximizer when you’re mastering directly in your multitrack mix session. I do this all the time, after my mix is complete, especially for reference mixes (tracks that need to impress clients) and background music for film and TV. However, if it’s for an album cut that I plan to send out for mastering, the maximizer effect is out of the signal path altogether (some maximizers, such as the L2 and Maxim, have dithering and bit-reduction features that can be used independently of their gain maximizing functions).

I’ve received many requests for tutorials on writing/producing a hip-hop or dance beat. In theory, this is a nice idea. In reality, there’s just no way you can encapsulate all of the creative and technical know-how that goes into writing and producing a great sounding beat in a single tutorial. Fortunately, that hasn’t stopped me from trying, because even if I can’t pack all of the relevant information into one tutorial, it’s still worth doing for the information that I can share in about a ten-minute video.

So, I threw on some clothes and my Remix Miami T-shirt, didn’t bother to shave, set up the camera (pointed straight down so you could see my hands on the control surfaces), and wrote a hip-hop style beat off the top of my head. It took me around 40 minutes, but I edited the whole process down to about a 12-minute video. Obviously, there are some parts missing, such as playing with MPC backdrops for Kong, or running the hi-hats through a compressor. But, if you watch carefully, it’s all there, because in addition to the techniques I describe as I’m working, you can also see all the device settings and the connections when I flip Reason’s rack over. The video is in HD so you can totally see all the details. I used Reason 5, Kong for all the drum sounds, and Thor for the bass line. Enjoy!

Stutter Edit by BT

Jan 23 2011

The new Stutter Edit plug-in, conceived and developed over the past fifteen years by pioneering electronic music artist and composer, BT, is pretty amazing. Upon installing this plug-in on my system I feel like I’ve got BT in the studio with me helping to produce stutter edits and breaks in my song. Really, it’s like I hired him as a technical consultant just for his stutter edit production techniques. It used to take me hours, even days to cook up these sound effects, through intricate slicing and dicing of waveforms and automating stacks of effects. Now, I can simply play a key on my keyboard and get the same, if not better, results! I can’t restrain myself from exclaiming, “It’s BT in a plug-in!”

How It W-w-w-works

Here’s how it works: simply insert Stutter Edit on the audio track that you want to stutter. Then, set up a MIDI track to send MIDI note and controller data to the Stutter Edit plug-in. Now, play your song and, whenever you want to hear a stutter effect, press a note on your keyboard to trigger one of the preset stutter effects. It’s that simple, and the presets sound great! Plus, to add more dynamics and enhance your ability to really play the effects, Pitch Bend is assigned to the plug-in’s global resonant filter effect, and the Mod Wheel lets you control different real-time dimensions of a preset. For example, moving the Mod Wheel could alter the speed of a preset’s stutters. You can record your MIDI performance and automate Stutter Edit directly from the MIDI track.

Stutter Edit comes with a ton of ready-made stutter effects spread out across the entire keyboard right when you open it, so you can get to stuttering immediately. It also includes banks of stutter effect presets from BT himself, and from other electronic music luminaries such as Richard Devine. If you’re not into the presets, you can certainly program your own stutter effects, from a simple eighth-note stutter to crazy lo-fi distortion with delays and noise sweeps. Its many controls (Quantize, Delay, Gate, Filters, Buffer Position, Bit Reduction, Pan, Lo-Fi, Stutter Matrix, and Arpeggiator), combined with its Generator noise synthesis section, give you the ability to cook up just about any cutting-edge stutter effect you can dream of. Way too much fun!
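If you’re curious what’s going on under all those controls, here’s a bare-bones sketch in Python of the core buffer-repeat idea: capture a short, tempo-synced slice of the incoming audio and repeat it. This is not iZotope’s algorithm, and every parameter value here is an assumption for the demo; it’s just the bare skeleton that a real stutter engine builds on.

```python
# Bare-bones stutter sketch: grab a short slice of the audio and repeat it in
# time with the tempo. Not iZotope's algorithm; parameter values are illustrative.
import numpy as np

def stutter(audio, sr=44100, start_s=1.0, slice_beats=0.5, bpm=128, repeats=8):
    """Replace a region of `audio` with `repeats` copies of a short slice of itself."""
    sec_per_beat = 60.0 / bpm
    slice_len = int(sr * sec_per_beat * slice_beats)   # e.g. an eighth note = 0.5 beats
    start = int(start_s * sr)
    piece = audio[start:start + slice_len]             # the captured buffer
    out = audio.copy()
    for i in range(repeats):
        lo = start + i * len(piece)
        hi = min(lo + len(piece), len(out))
        out[lo:hi] = piece[:hi - lo]
    return out

# Demo on a two-second rising tone so the repeats are easy to hear.
sr = 44100
t = np.arange(sr * 2) / sr
audio = 0.5 * np.sin(2 * np.pi * (220.0 + 220.0 * t) * t)   # simple upward sweep
stuttered = stutter(audio, sr=sr, start_s=0.5, slice_beats=0.25, repeats=8)
```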

Imagine the Possibilities

Stutter Edit is incredibly useful in the studio, but what I’m equally impressed by is its live performance potential: for example, stutter-effecting loops in Ableton Live in real time, right from your MIDI keyboard. Obviously, BT is deep into such things. He didn’t just dream up this plug-in in the studio; he wanted to take his stutter effects to the stage for live performances. And, clearly, he’s done exactly that, giving Stutter Edit plenty of beta testing during his Laptop Symphony shows. So, even though this is just version 1.0, it’s reassuring to know that it’s been out on the road and thoroughly tested by a pro in high-profile, real-life gigs. We know it works for live shows, not just in how it’s designed but in how stable it is. How many software companies can say this about their newest software?

I’m already seeing and hearing grumblings on discussion threads saying, “I’ll never use Stutter Edit. I take pride in programming my own stutter effects one edit at a time.” Well, fine, I’ll have an entire track of stutter effects produced in the time it took you to do just one. And, besides, given a little time and patience—I know stutter edit producers have plenty of this—you can program your own unique stutter effects in this plug-in, assign them to keys on your keyboard, and save them in your own bank of presets. You don’t have to sound like the factory presets, you can develop your own unique stutter effect sound. Then, you can perform your stutter edits live, whether in the studio or on stage. This most certainly isn’t something you can do with that one stutter edit you just spent all day programming in your DAW. OK, enough said.

In the coming years, I predict that Stutter Edit will be massively overused, not unlike the Auto-Tune vocal sound (you know, Cher and T-Pain). Hopefully, the effect will be used tastefully, artfully, and without going completely overboard. Though, admittedly, I’ve probably already failed in this department; it’s just too much fun to play with. In fact, after about six instances of Stutter Edit in my Pro Tools session I managed to crash hard, several times, eventually completely freezing my Mac. Fortunately, after a quick reboot I was back in business and everything was running smoothly again. I also had problems controlling clipping at the plug-in’s output, because some of the effects pumped out serious amplitude spikes. A soft-clip limiter section in the next build of Stutter Edit would be greatly appreciated.

Stutter Edit is distributed and supported by iZotope. There’s a lot of wonderful information about Stutter Edit on their Web site,
http://www.izotope.com/products/audio/stutteredit/index.asp
But the best way to really appreciate Stutter Edit is to download the trial version and take it for a test drive yourself. Also, check out this video tour of how I used Stutter Edit in a remix of my song “Delicious People”, for which the remix stems are available on my CD, Erik Hawk & The 12-Bit Justice League.
http://www.cdbaby.com/cd/erikhawk