started this recently..
Just curious, but when you're building your instruments and stacking them and your effects on the racks, do you leave the master out gain below 0 dB (at, say, -3 dB), or do you lower the gain for each instrument to that point instead? Is this supposed to be done only during the mastering phase?
To give some insight into how I do things: I haven't done any compressing yet and don't have any compressors/limiters on my racks. I've only started equalizing some sounds. Because my system isn't exactly a beast, I'm forced to do a lot of mixdowns and cutting with custom-made samples I record in other sessions, so I tend to equalize in "steps" (after figuring out roughly where I want the instrument to sit in the frequency range, padding room aside), i.e. do a loop, mix it down, import it, cut it, throw some effects on it, equalize, and repeat (maybe). I'm not bothering with compressing and limiting until most of the track is set in place. In case you're wondering, I always keep the volume on each instrument's rack at 0 dB.
But back to my original question: given the way I have to do things, when I do push for the final equalizing and compressing/balancing etc., with my final samples all set, do I set their rack gains individually to around -3 dB or so, or do I just do that on the master rack and leave the instrument racks at 0 dB?
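For what it's worth, the arithmetic is the same either way up to the point where the master chain kicks in. A rough sketch in plain Python (the track count and gain figures are made up for illustration) of how dB gains translate to linear amplitude when tracks sum:

```python
import math

def db_to_linear(db):
    # Amplitude ratio for a gain in decibels: 20 * log10(a) = db
    return 10 ** (db / 20.0)

def linear_to_db(a):
    return 20.0 * math.log10(a)

# Worst case: 8 tracks, each peaking at 0 dBFS (linear 1.0), summing in phase.
tracks = 8
summed_peak = tracks * db_to_linear(0.0)
print(linear_to_db(summed_peak))   # ~ +18 dB over full scale

# Pulling every channel down to -3 dB only buys 3 dB of headroom...
summed_at_minus3 = tracks * db_to_linear(-3.0)

# ...and a single -3 dB trim on the master gives the identical signal:
master_trim = db_to_linear(-3.0) * summed_peak
```

The takeaway of the sketch: trimming every instrument rack by -3 dB and trimming the master bus by -3 dB produce the same output level; the choice only starts to matter once level-dependent plugins (compressors, saturators) sit on the master and react to how hot the summed signal hits them.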
Perhaps part of what you need for laptop music to evolve into an appreciated live performance art medium is simply time.
Finnish artist Sasu Ripatti is a good candidate for mastery of the form. Honing his production and performance skills since the late 90s, he’s become a maestro of digital music. Moments in his music stretch out into shadowy industrial landscapes, as if painting the mysterious worlds that lie between the beats. Others crank the machinery of the dance floor back into mystical frenzy.
Now, I believe the best way to experience a live performance is in the same room as the artist – whether they're armed with a laptop or a mandolin. But the next best thing is proper documentation, and surely, as scholars of music practice, we should sometimes review the tape. In this nearly one-hour HD capture, you can see him tease out a recent live show, armed with mixer and Faderfox controller. This is waveforms and mix as instrument, stuttering journeys through architectural realms of sound. There's no flashy virtuoso performance to watch, necessarily, but in some sense I think you get an impression of him feeling his way through the music, and you travel along that walk with him.
Watch, and see what you come away with:
URSSS.com has done a series of these live performances — too many to mention. Enter only at the risk of getting nothing else done for a bit. I love their brilliant moniker: “mistake television.” Hey, that’s why it makes sense to record live shows.
There’s more news from the artist’s hideaway in the north, too.
He’s in the studio now, with releases promised this summer. (Yes, if you visit his site, you know this, too, but it’s good news worth mentioning.)
And specifically, he’s teaming up with another high priest of archaic sound arts, the terrific Mark Fell.
And, nicely enough, there’s a preview. This is what happens when the dance floor glitches. I dearly want to see people dancing to this / want to get to dance to this myself:
I don’t know why they’re bundling a pencil with the limited release, but they are. (Crayon would have been my choice, but then, okay, the sound design here is a great deal more precise. But, still, crayons are cool. Sharpie?)
For something completely different, this is what a “Wedding Mixtape” sounds like from Sasu and AGF:
Great stuff also happens when Sasu teams up with Moritz von Oswald and Max Loderbauer in the Moritz von Oswald Trio:
And I love that you can find a tightly-curated selection of music that directly supports the artist at his Bandcamp store:
It seems worth suspending your iTunes spending and putting the money there instead, for things that really matter.
We’ll be watching for more.
Image courtesy the artist.
just wondering if there has been a similar thread or if we should have one? Facebooks, SoundClouds, etc.?
I use FL Studio 10.
Listen to these DnB tracks I made some time ago and you'll know what I mean. Please give me some advice and tricks to make them sound better. Cheers!
I’ve been through countless different synths, using every waveform I can think of trying to achieve a certain type of sound and I’m still stuck.
The sound in question is used all throughout Madeon's track "Icarus" and at the very beginning of Au5's "Crystal Mathematics".
If anyone knows how to make it or at least knows what I’m talking about and can offer me a little help it would be greatly appreciated.
Anyone got the stems? I’ll love you forever if you do…
I've been serious about production for a little more than a year, yet my knowledge of mastering and even mixing is still very primitive. I'm working on a track titled "Glacier" (Melodic DnB) which I am very proud of and hope to get signed to a major label such as Monstercat or Adapted. I've been stuck on this track for about two weeks and haven't been able to do anything new with it due to school and things, and I need a bit of help with it. I've been trying to improve my listening skills to find weak areas in tracks where I can improve the mix, and I was hoping you more experienced producers could find some as well.
Things I have already been able to hear before making a new draft:
1) I may have too many elements going on at once
2) The beginning piano/bell is WAY too resonant around 500–700 Hz
3) I really need to adjust the reverb settings on the intro square pluck.
(For anyone interested in the track: it's a seven-minute-long melodic/hard-hitting, cut-time DnB tune influenced by the likes of Madeon, Au5, I.Y.F.F.E, and Deadmau5. There are some slight variations to the bassline I plan to make, but other than that I'd like to get a decent mixdown going.)
Any tips on some tricks using it are welcome!
So, I normally do a home master on my tracks anyway if they aren't getting sent off somewhere, but I feel kind of nervous this time, as this is a really big deal, and I don't think I have the solid capabilities to do a pro-sounding master. Can any of you point me in some kind of direction? Whether it be some mastering pro tips, or someone from a studio that'll do a freebie? Or a discount, or anything, really?
Thought this was pretty interesting.
Haven't tried it yet, though.
I have also seen a lot of arguments as to why you should or should not do certain things to your mix from the get go.
What I want to know is the benefit of each approach: adaptive-limiting your master channel from the start of writing your track, brickwalling the master from the start, or trying to mix everything without peaking first, then bouncing to WAV and making a new project where you throw Ozone or something on the whole original bounce.
What I have been doing is the latter (sort of)… I've been writing a track with a dry master channel and trying to push the most out of it without anything peaking, then putting an adaptive limiter on the master and bouncing the track.
Then, in a new project, I drop the audio file in, use a minimal amount of distortion in CamelPhat, which also brickwalls the signal (something audio explains in his tutorial), and throw Ozone after that and limit the hell out of it. This doesn't seem to be causing me much distortion and my tracks are loud when compared with others; I'm simply asking if I'm going the completely wrong way about it and whether there is a much easier way to go.
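Since the thread keeps contrasting "brickwalling" with limiting, here's a toy illustration in plain Python of the basic difference between hard clipping and peak scaling. This is nothing like what Ozone or CamelPhat do internally (real limiters use lookahead and gain envelopes); it just shows why one distorts and the other doesn't:

```python
def brickwall_clip(samples, ceiling=1.0):
    # Hard clip: anything over the ceiling is flattened, adding distortion.
    return [max(-ceiling, min(ceiling, s)) for s in samples]

def simple_limiter(samples, ceiling=1.0):
    # Naive peak limiter: scale the whole signal so its loudest peak
    # just touches the ceiling. No added distortion, but quieter overall.
    peak = max(abs(s) for s in samples)
    if peak <= ceiling:
        return list(samples)
    g = ceiling / peak
    return [s * g for s in samples]

hot_mix = [0.2, 0.9, 1.4, -1.2, 0.5]       # peaks over full scale
print(brickwall_clip(hot_mix))              # [0.2, 0.9, 1.0, -1.0, 0.5]
print(simple_limiter(hot_mix))
```

The clip flattens only the overshooting samples (changing the waveform shape, which is where the extra loudness and the distortion both come from), while the scaling version preserves the shape at the cost of overall level. Real-world limiting sits between the two.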
Thanks in advance for any responses! :rslayer:
I downloaded a few .nmsv files for NI Massive, but I don't know how to load them into Massive.
It would be nice if someone could tell me how to do this. Thanks!
I'm really envious of what I've seen Seamless and others do with Harmor and resampling to make sick neuro basses, but since I produce on a Mac that isn't currently an option.
I mostly use Zebra2 in Ableton with Uhbik and FabFilter Volcano and Saturn on top of stock Ableton effects these days, if that matters in any way.
I released a free abstract percussion sample pack in the last few days, with 25 different percussion sounds and 3 loops, here: http://www.ghosthack.de/remository/d…t-Percussions/
Hope some of you can use them!
Or can both DAWs be ReWired?
We've got a lot of readers who are young producers looking for (or who might have) an older mentor also in the production game, so when producer BT went on a long tirade about his experience mentoring and helping Porter Robinson in his rise to…
When it comes to producing, recording and mixing insanely fast metal music, timing is everything. It sounds like an oxymoron, but metal needs to be clean. Not in tone, but in production.
It needs precision, clarity and tightness in every instrument, so that the aggressiveness and tight rhythm punches through your speakers. If not, you'll hear an indecipherable mess of overly distorted guitars, drunkenly played bass lines, and drums that are completely washed out by the sound of the cymbals.
Let's dive into some easy-to-use advice on how to create a better metal production.
Metal music usually revolves around the powerful riffs of the electric guitars. For incredibly fast riffs you'll need some incredibly good players.
If you have two guitar players, their riffs need to align almost exactly. If not, you'll end up with a riff that simply sounds sloppy. There will always be a pseudo chorus effect when you have two different players, especially if you make them double-track their parts as well. However, if you don't make sure those guitar riffs line up exactly in time, you'll lose the power of the mix immediately. Never mind the great vocal performance or the awesome drum sound you got. If the riff sounds sloppy, that's all anybody's going to hear.
Now, the easiest way to do this is to simply get amazing players that can play really tight. The alternative is the painstaking editing process of syncing up all the guitars with everything else. I do not recommend this unless you only have a few spots here and there that you need fixed. Usually, if the guitarist is that good, it'll take less time to move the 2-3 parts around than to re-record the whole performance. If he isn't, send him home until he is.
The same goes for the bass guitar. It has to be locked into the guitar riff as much as possible. Any variation can cause the riff to sound sloppy.
This sort of attention to detail is what differentiates a metal production from a folk or rock song. You can get by with the occasional loose playing when you're strumming an acoustic guitar, or playing some indie guitar riff. But the key to making a metal riff work is a locked in riff from all the instruments.
I forget which band did this, but I thought it was an interesting approach to recording both guitars and bass. After this metal band had recorded the drums, they started with the guitars before they recorded bass. They probably had a guide bass to record the drums to, but the reason behind it was that the bass would take up too much space if it were recorded beforehand.
Their philosophy was to create the tightest, thickest guitar sound possible, because their songs mostly revolved around the guitar riffs anyway. By recording their guitars first they had the opportunity to thicken them up, because they wouldn't clutter up the mix when combined with the bass. Then, when the guitars were done, the bass player and the producer found a sound that complemented the electric guitars they'd already recorded.
It seems like a counterintuitive way of recording bass but this sort of fill-in-the-gap bass recording sounded good to them.
In the same vein as before, taking a bass-minded approach to EQing the guitars can also help. Since guitars aren't as bass-heavy as the bass guitar, boosting the lows in the guitars can produce a smoother effect than boosting the bass guitar.
A boost at 100 Hz in the bass guitar might cause undesired boom or mud, while boosting the guitars there could create a thicker and tighter sound. That way you could actually reduce mud in the bass while increasing tightness and thickness in the overall riff production.
On the higher end of things, low-pass filters are your best friends to get rid of the hiss you get from distortion. Slapping a high-cut filter down to 12 kHz or so can clean up the unnecessary noise you get from very distorted or overdriven guitar amps or cheap stomp boxes.
The same goes for high-pass filters. The lowest rumble of the bass guitar (around 40-50 Hz) can easily be cut out without compromising the thickness of the bass.
Additionally, an overabundance of high-mids in the 4 kHz area can also cause a fatiguing guitar tone. Smoothing out your guitar by subtly cutting that area will reduce the harshness of your guitar while still keeping the aggressiveness of the guitar tone. A rounder tone with the same attack.
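If you want to experiment with these moves outside a DAW, a cut like the 4 kHz one above can be sketched with the standard Bristow-Johnson ("Audio EQ Cookbook") peaking biquad. Plain Python, with a 48 kHz sample rate and a Q of 1.0 assumed for illustration:

```python
import math

def peaking_eq_coeffs(fs, f0, gain_db, q=1.0):
    # RBJ audio-EQ-cookbook peaking filter; negative gain_db = cut.
    a = 10 ** (gain_db / 40.0)
    w0 = 2.0 * math.pi * f0 / fs
    alpha = math.sin(w0) / (2.0 * q)
    b0, b1, b2 = 1 + alpha * a, -2 * math.cos(w0), 1 - alpha * a
    a0, a1, a2 = 1 + alpha / a, -2 * math.cos(w0), 1 - alpha / a
    return [c / a0 for c in (b0, b1, b2, a1, a2)]

def biquad(samples, coeffs):
    # Direct-form-I biquad: y[n] = b0*x[n] + b1*x[n-1] + b2*x[n-2]
    #                              - a1*y[n-1] - a2*y[n-2]
    b0, b1, b2, a1, a2 = coeffs
    x1 = x2 = y1 = y2 = 0.0
    out = []
    for x in samples:
        y = b0 * x + b1 * x1 + b2 * x2 - a1 * y1 - a2 * y2
        x2, x1, y2, y1 = x1, x, y1, y
        out.append(y)
    return out

# A 4 kHz sine through a -4 dB cut centred at 4 kHz:
fs = 48000
tone = [math.sin(2 * math.pi * 4000 * n / fs) for n in range(fs // 10)]
cut = biquad(tone, peaking_eq_coeffs(fs, 4000, -4.0))
```

After the filter settles, the tone comes out roughly 4 dB quieter (an amplitude factor of about 0.63), which is exactly the "subtle cut" the paragraph above describes. The same coefficient recipe covers the high-pass and shelving moves too.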
Lastly, you might be tempted to use reverb to create a sense of space. While you definitely should use reverb in your mixes, metal music requires particular attention to it.
A little bit too much reverb on the guitars and you'll go back to the sloppy mess of sound that you've worked so hard to stay away from. If anything, short delays to create additional thickness will work better because they not only add a sense of depth to your guitar production but they also add another layer of guitars to your riffs.
As I've said before, attention to detail is crucial and adding too much space to really fast guitar playing will inevitably muddy up your production.
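That short-delay thickening trick can be sketched as a single-tap delay in plain Python. The 15 ms time and 50% mix below are made-up starting points, not recommended settings:

```python
def short_delay_thicken(samples, fs=48000, delay_ms=15.0, mix=0.5):
    # Mix one short delayed copy ("slapback" doubling) into the dry signal.
    # Short enough to read as thickness, not as a distinct echo.
    d = int(fs * delay_ms / 1000.0)
    out = []
    for n, s in enumerate(samples):
        delayed = samples[n - d] if n >= d else 0.0
        out.append(s + mix * delayed)
    return out
```

Panning the delayed copy opposite the dry guitar is a common variation; the point either way is that a single discrete tap adds density without the long, smearing tail a reverb would.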
Producing the rhythm section of metal music comes down to a certain mentality. You can't slack off and make do with things that aren't 100% perfect. A small, uneven section in the rhythm section creates immediate sloppiness for the whole mix, resulting in an amateurish production just because there wasn't enough attention to detail.
Make sure your players are great, make your guitars and bass fit together and don't add too much space and you'll end up with a stellar metal production.
I was working on a track of mine for the last few days and becoming more and more satisfied with it… until I listened to some tracks by other producers for inspiration, and… well… it was kind of totally demotivating. I was immediately disappointed in my own track.
For example, I listened to: Mefjus & InsideInfo "Mythos" (out today!) and HYQXYZ & Ghostnotes "Facebreaker".
The first thing I noticed is the overall loudness of the tracks. They are so much louder when played at the same volume level as my own track. Of course, louder doesn't mean better, but still. I was wondering how they managed this?
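A big part of how tracks end up louder at the same playback level is that limiting/clipping raises the average (RMS) level while the peaks stay put. A toy demonstration in plain Python with a made-up test signal:

```python
import math

def rms(samples):
    # Root-mean-square: a rough proxy for perceived loudness.
    return math.sqrt(sum(s * s for s in samples) / len(samples))

n = 1000
sine = [math.sin(2 * math.pi * i / 100) for i in range(n)]

# Drive the sine 6 dB hotter, then hard-clip back to the same 1.0 peak:
driven = [max(-1.0, min(1.0, 2.0 * s)) for s in sine]

print(rms(sine))    # ~0.707
print(rms(driven))  # noticeably higher, at the same peak level
```

Both signals peak at 1.0, but the driven-and-clipped one carries much more average energy, so it sounds louder. (A proper loudness comparison would use something like the BS.1770/LUFS measurement rather than raw RMS, but the principle is the same.)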
Anyway, this is my track I am working on:
I use FL Studio 10, Superior Drummer 2.0, Massive, FabFilter, iZotope Ozone 5, some SPL plugins…
I am very open to all kinds of feedback! Mostly production-wise.
It also has a Camo & Krooked remix, and my question now is: should I not care and just leave it as my own track, or should I name it a bootleg remix of that tune?
I’m really proud of what I have done with it so this is kind of a decision, what are your thoughts?
But I’m very curious how other people go about this .
But the actual issue is that my snare in the project has disappeared. I bounced the tune late last night, only to notice on the way to work that it wasn't there! The snare itself has 3 different layers; each one is on my hard drive, as well as the full snare with all layers combined. Even when asked to locate the snare – full (file name), it isn't there.
Does anyone have any clue what I'm rambling on about?? As it says in the thread title, I'm using Logic.
Edit: the Snare – full file is on my hard drive, it just doesn't play back in Logic anymore.
Constructive feedback is welcome!
Any essentials to pick up?
I’ve heard good things about WOW.
Even if there's nothing going on in the arrange window at all, the sound becomes kind of crackly, almost like it's clipping in a weird way (it isn't).
This is happening when I’m just messing around on the keyboard (but only if I’ve hit play), recording, and if I’ve drawn the notes in.
I’ve not had any issues with any other synths, nor have I had a problem like this with Massive before. Anyone got any ideas?
They are all 175 bpm and the name of the sample contains the root key note.
Let me know your thoughts.
Thanks and enjoy,
How would you go about finding the right chords around an acapella?
How does a vocalist who gets featured on a track get paid? Do they sign a contract with the label, or do I have to pay them up front?
Also, can this be done remotely? In the sense that the vocals could be recorded in, say, London while I'm in Malta at the same time?
Templates – do any of you use them?
Housekeeping – how do you guys sort your projects out? E.g. do you mix into a limiter and master compressor, or does that come in at the end?
one quick question…
What filter plugin do you use to filter your basslines to bring out the nasty wobbles?
I achieved some good results with IL Love Philter, but this plugin somehow crashes my Cubase…
Any alternatives?
I know many producers are planning on doing live sets right now, and I’m keen to help and give insight. I can’t be arsed to write a whole post about it so if you guys have any questions, fire away and I’ll do my best to answer!
Other live players too, come aboard and discuss!
In fact, I have a question too for other live brethren: how many tracks/scenes do you split a single tune into?
I've been getting into darker DnB lately and I found a duo from my hometown who are making these nice growly basses. I'm talking about the main bass ( http://www.youtube.com/watch?v=TTvLmO_vL38 ). You can also hear it as the first bass sound in Program by Noisia ( http://www.youtube.com/watch?v=vyrwMmsufJc ). I'm not asking for the exact sound; I would just like to know what the basis for this sound is and how sounds like this can generally be achieved.
I’m a hardware-dummy, so I really know nothing about the possibilities there, but I’m eager to learn.