I’ve had my eye on Reaper from a distance for some time now, cautiously considering diving in as I keep hearing it mentioned in conjunction with sound design. Today I was looking through the GDC vault when I found the PDF for this talk about VR audio and the use of Reaper in a workflow. I typically live in Pro Tools (as many of us do, probably?), but this talk’s mention of Reaper’s Region Render Matrix, and how it fits into the overall project workflow, definitely intrigued me.
Those of you who use Reaper – what benefits have you found with using the program, and how deeply have you embedded yourself into it? Do you still use Pro Tools then export to Reaper before a middleware program (like this GDC talk mentions)? Or do you do all of your sound design within Reaper now? Are there any specific features (such as the Render Matrix) that jump out as unique and specific to Reaper?
I’m super curious about people’s experiences with this program, as well as the learning curve associated with it coming from another DAW.
I’m not (yet) a Reaper user, but it really does seem to be building momentum that should concern the likes of Pro Tools. The biggest obstacle to it gaining ground in Post at the moment is the lack of OMF/AAF support.
But to see the likes of David Farmer utilising it in his workflow for sound design really does put it on the map as a future contender! See this video: https://youtu.be/43_JHOCJvsA
I use Reaper exclusively for sound design and mixing and…well pretty much everything except opening .raw files, for which I use Sound Forge.
If working with Reaper, be sure to download the SWS extension, which provides invaluable functionality, like the ability to create highly configurable snapshots. SWS also allows you to relate projects to one another. Say you have a scratch project full of samples and sounds you think you might need for a number of other projects: you can make it a related project of all the others and then open it more quickly and easily. You can also create project lists.
There are also some very good JS effects (JS being the plug-in scripting format developed for Reaper): an FFT peak filter, a lattice filter, a spectral-hold plug-in (which I love), a frequency shifter, and a growl plug-in that can produce some interesting results if used sparingly.
There are also band-splitters and joiners so that you can split your sound into bands and process them independently.
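To illustrate the idea behind a band splitter, here is a minimal Python sketch of a complementary one-pole split. This is just to show the concept, not Reaper's actual JS code, and the `alpha` smoothing parameter is my own illustrative choice:

```python
# Minimal complementary band split: a one-pole low-pass plus the
# residual high band. By construction the two bands sum back to the
# original signal exactly, so processing them independently and
# re-mixing is lossless when the processing is bypassed.

def split_bands(samples, alpha=0.1):
    """Split a mono signal into (low, high) bands.
    alpha sets the low-pass smoothing (0 < alpha <= 1)."""
    low, high = [], []
    state = 0.0
    for x in samples:
        state += alpha * (x - state)   # one-pole low-pass
        low.append(state)
        high.append(x - state)         # complementary high band
    return low, high
```

Real crossovers (Linkwitz-Riley and friends) use steeper filters, but the split-process-sum structure is the same.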
I also like the parameter modulation interface for effects and their latch preview automation mode. Here’s an article about the former.
I like sub-projects. They are an invaluable part of my work-flow.
I also find ReaVerb, Reaper’s stock convolution plug-in, very useful.
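For anyone unfamiliar with what a convolution plug-in like ReaVerb actually computes, here is a plain-Python sketch of direct convolution. ReaVerb itself uses a much faster FFT-based engine; this is only meant to show the idea:

```python
def convolve(dry, impulse_response):
    """Direct convolution: each impulse-response sample becomes a
    scaled, delayed copy of the dry signal, so a room's recorded
    echo pattern gets stamped onto the input."""
    out = [0.0] * (len(dry) + len(impulse_response) - 1)
    for i, x in enumerate(dry):
        for j, h in enumerate(impulse_response):
            out[i + j] += x * h
    return out
```

Convolving with a single-sample impulse returns the dry signal unchanged, which is a handy sanity check when loading your own impulse responses.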
Here’s what David Farmer says about Reaper for sound design.
Oh wow. I only just noticed that that article came from this website.
Hope this helps. I’m sure I’ve missed awesome features, but then I’ve never used Pro Tools to compare it to.
Some long and rambling thoughts….
I’m a begrudging Reaper user, not an entire convert. I’m a long-time Pro Tools user and know and love it as an editing platform for its raw speed and simplicity. I have an inherent distaste for platforms such as Reaper and Nuendo that offer vast flexibility and power at the cost of ease and simplicity. What I love about Pro Tools is that it works like a word processor… you can probably figure out how to use Pro Tools fairly quickly if you’ve ever spent any time in MS Word.
Reaper is the opposite of that. The menus in the stock setup are a confusing mishmash. They adopt weird and often nonsensical lingo (I want to zoom in on the waveforms… why is it called “amplify peaks”? I don’t want to amplify anything! In sound language “amplify” means to make louder, and I don’t want to do that!) that can be confusing to those who have spent years honing their skills in other sound applications. There are frustrating eccentricities that can lead to horrible accidental outcomes (why, I ask, WHY does my 32-channel ambisonic audio track still have an L-R pan slider and a width slider?).
These issues aside… I don’t have Pro Tools HD and I refuse to bring Nuendo home from work (Reaper is more competition for Nuendo in my use than PT). I often have personal projects I need to edit, mix, and monitor in surround. Reaper’s the best option I have and so I use it.
I’ve learned to really appreciate the flexibility in routing capabilities that Reaper has, although its multiple-of-two track bussing can be troublesome. For instance, I’ll go to render out a 20-minute 9-channel 2nd-order ambisonic stem only to remember that Reaper’s multiple-of-2 requirement means the resulting file will be 10 channels, and I’ll need to open the file in another app to delete the blank 10th channel.
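If you end up scripting that cleanup step instead of reaching for another app, a standard-library-only Python sketch could strip the padded last channel from a render. The file names are made up, and the sketch assumes 16-bit PCM (a high-res ambisonic render may well be 24- or 32-bit, which `wave`/`struct` handle differently):

```python
# Drop the last (blank) channel from an interleaved 16-bit WAV --
# e.g. turning a padded 10-channel ambisonic render back into 9
# channels. Standard library only; assumes 16-bit PCM samples.
import struct
import wave

def drop_last_channel(src_path, dst_path):
    with wave.open(src_path, "rb") as src:
        n_ch = src.getnchannels()
        assert src.getsampwidth() == 2, "sketch assumes 16-bit PCM"
        framerate = src.getframerate()
        frames = src.readframes(src.getnframes())
    # Interleaved samples: frame 0 ch 0..n-1, frame 1 ch 0..n-1, ...
    samples = struct.unpack("<%dh" % (len(frames) // 2), frames)
    # Keep every sample except each frame's final channel.
    kept = [s for i, s in enumerate(samples) if i % n_ch != n_ch - 1]
    with wave.open(dst_path, "wb") as dst:
        dst.setnchannels(n_ch - 1)
        dst.setsampwidth(2)
        dst.setframerate(framerate)
        dst.writeframes(struct.pack("<%dh" % len(kept), *kept))
```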
The region-render matrix is a bit of genius, I’ll admit, and that’s the thing that finally sold me on trying Reaper out.
For editing, I’m thankful that Reaper is more like PT than Nuendo. Nuendo still hasn’t figured out how to auto-contextually change tools without me doing it manually. Reaper has it available to a limited degree, but every day I mourn not having PT’s multi-tool available.
I’ve found sound design to be a challenge with Reaper, not from general usability but because of its internal engine. Despite claims to being sample-rate agnostic, Reaper appears to derive its DSP sample rate from the host computer’s sample rate, not from the audio hardware or the project’s assumed rate.

To explain what I mean: I’ve run into situations where the host computer was set to 48k, the project to 192k, and the audio hardware to 96k. I played back a 192k sound file (with content well above 60k… it was bats) and used a noise reduction plugin (RX). After rendering the file to 192k, I opened it in RX standalone to discover the file was essentially low-passed at 24k, the host computer’s Nyquist frequency. I repeated the process, but changed the host computer to 96k and the hardware rate to 192k. This time the render was low-passed at 48k. I repeated the same process in Pro Tools with the project and hardware at 192k (required by PT) and the host rate at 48k. As expected, the render from Pro Tools went right up to the Nyquist frequency of 96k.

So the realization I was left with was that anything done at high sample rates and requiring content above the host computer’s Nyquist frequency had to be done in Pro Tools. I also realized that checking the host computer’s rate while in Reaper was going to be a necessary ongoing thing, as some applications can hijack and change the host rate independently and Reaper won’t change it back. It’ll keep chugging along, happily processing plugins at the new lower (or higher) rate. There are similar situations I’ve come across where Reaper says it is doing one thing but really does another. It’s led me to distrust my ears while using Reaper and distrust what’s happening in the DAW. I don’t like not being able to trust my tools.
That said, I’m trying to get behind Reaper when I can. I just made my first JS plug-in… a simple double mid-side matrix that I cobbled together after checking out the built-in stereo mid-side plug-in. That you can open up and live-edit any JS plug-in is brilliant, and I really look forward to playing with that more in the future. I also make frequent use of the arbitrary track-channel sizes in Reaper. At work we have a high-order ambisonic mic that outputs 32 channels and is controlled and matrixed via a 32-channel VST and Audio Unit. There’s no way Nuendo or Pro Tools is going to let me do that.
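For readers unfamiliar with the matrix such a plug-in is built on, basic mid-side encoding and decoding are just sums and differences. This is a plain-Python sketch of the standard M-S math, not the poster's actual JS code (a double M-S rig applies the same idea to front and rear capsule pairs):

```python
# Basic mid-side matrix: each direction is a 2x2 sum/difference,
# and decoding exactly inverts encoding.

def ms_encode(left, right):
    """Stereo L/R -> mid (sum) and side (difference) signals."""
    mid = [(l + r) * 0.5 for l, r in zip(left, right)]
    side = [(l - r) * 0.5 for l, r in zip(left, right)]
    return mid, side

def ms_decode(mid, side):
    """Mid/side -> stereo L/R: L = M + S, R = M - S."""
    left = [m + s for m, s in zip(mid, side)]
    right = [m - s for m, s in zip(mid, side)]
    return left, right
```

The useful part in practice is that anything applied to `side` alone (gain, EQ) changes stereo width without touching the center image.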
I use Reaper for anything it lets me, which is most things.
Audio to picture
Video Editing (in the most primitive of ways)
SFX Library editing
I have a blast with the customizability, and have built up my own preferences and key shortcuts over a long period of time. I strongly recommend figuring out what you want from Reaper and learning how to make that happen through customization, rather than making do with the stock setup.