Unison horns

edited November 2017 in General Questions
I am using Sax Bros., Trumpet 3, and Trombones. I use this software as VST plugins in Pro Tools 12. All of the instruments sound really good either as solo instruments or in ensemble arrangements. However, one not-so-good result is playing parts in unison. For example, if two trumpets are playing unison parts, the effect doesn't sound like two trumpets; only one is discernible, irrespective of mix variables. If a trumpet is played with the baritone sax in unison, the sound is most unpleasant and raspy.

How do I correct this?

Mickey Rouse

Comments

  • edited 6:20AM
    Can you explain a bit more about which instruments you are loading and how you are controlling them? For instance, do you only mean musically "in unison" where the instruments happen to be playing the same pitches or more of a "MIDI unison" which is two instruments in Kontakt responding to the exact same controller and channel?



    If you want MIDI unison where a single controller is sending the same input to multiple horns at once, then it's best to use the specific Unison Multi patches. For trumpets as an example, this will load "Trumpet 1 v3", "Trumpet 2 v3", and "Trumpet 3 v3" all in the same instance of Kontakt and set their MIDI channel to Omni. Then as you play you will hear all three horns together. They are manipulated in a way that allows them to sound like 3 horns instead of one. If you were to instead simply load "Trumpet 1" 3 times, it would produce unnatural artifacts that sound like a single bad trumpet.



    Even better is if you mean musical unison, where you load Trumpet 1, Trumpet 2, and Trumpet 3, set their MIDI channels to be unique, and then individually record them in 3 passes so that each receives unique timing and dynamics. In that case, the playback should sound fantastic.



    I don't own the Sax Bros, so I can't comment on whether they have unique challenges, but I expect you need to treat them similarly.
  • edited 6:20AM
    In Trumpets, for example, I load three instances, one track in Pro Tools each: Trumpet 1 on Track 1, Trumpet 2 on Track 2, Trumpet 3 on Track 3. (I discovered early on that two parts written on the same track cancel each other out, or worse.) Parts written in harmony this way sound as they should, quite good. In unison, it sounds like only one horn. I have tried changing things that affect texture, such as expression, on different tracks, but it makes little difference. I have toyed with tuning, but it seems the variability is too great.

    There is an indicator on the plugin screen that notes ensemble unison, but I can’t tell what to do with that, and the manual doesn’t say.
  • edited 6:20AM
    The manual for Trumpets talks about the Unison Ensemble usage on page 33 of the most recent PDF. You can either load all three trumpets in one Kontakt instance using the Multi directly, or, if your Pro Tools tracks are each opening a separate Kontakt instance, you will want to manually load the associated ensemble patch. In the Kontakt browser they are under the "Instrument" pane, then "For Ensemble Unison"; load 1, 2, or 3 as appropriate.



    I had tried to share an MP3 showing the difference, but the forum isn't allowing attachments today for some reason.
  • edited 6:20AM
    I know this problem all too well. The good news is: you can get rid of it. The bad news is: it's CPU-heavy.



    Basically, your problem is that even with different MIDI recordings, the instruments are too mathematically close, and your brain instantly picks it up. The only solution is to use a good reverb to sculpt their sound in the room and create close but not mathematically similar sounds for each one of your instruments. Which means: no bus treatments. Which means: a crying CPU.
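    The "mathematically close" point can be illustrated with a toy sketch (plain Python; the sample rate, frequency, and delay are made up for illustration and have nothing to do with the plugins discussed): summing two identical signals just gives the same signal louder, and a fixed small delay on the copy only produces a static comb filter, never a second player.

```python
import math

# Toy illustration: why two identical signals never sound like two players.
sr = 1000  # toy sample rate
signal = [math.sin(2 * math.pi * 50 * n / sr) for n in range(sr)]

# "Two trumpets" from the exact same samples: the mix is just the same
# trumpet again, only louder -- still one discernible source.
mixed = [a + b for a, b in zip(signal, signal)]

# A fixed small delay on the copy gives a static comb filter, not a
# second player: frequencies where the delay spans half a period cancel.
delay = 3
delayed_mix = [s + (signal[n - delay] if n >= delay else 0.0)
               for n, s in enumerate(signal)]
```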



    As for myself, I tried tons of reverbs (Lexicon's, Vienna MIR Pro, EAReverb, ... the only one that might work that I didn't try is SPAT), and found that only ONE does the job: Seventh Heaven Professional from LiquidSonics. The ability to choose between 32 different early reflections and the special Fusion-IR algorithm is paramount here.



    Basically what you want to do is have 2 reverbs per instrument (CPU crying again).

    The first must be from the ambient family (special, very blendable reverbs used especially to push a sound into a space), and will go from mostly wet to fully wet (killing the dry sound), depending on the instrument and what you want to achieve. You have to dive in and understand what each parameter is for to get the sound you want, but that's kind of mandatory with an anechoic sound anyway...

    The second reverb will be a hall or a room, or anything really. It has the more traditional role of a reverb. This one can be on the bus, but some extra depth and realism will be achieved by having it on each instrument.

    These steps are essential to kill the mathematical redundancies between the instruments, and will drastically improve your sound. In fact, it's the only way I've found to make truly believable ensembles.



    Don't forget to also EQ out some mids on each instrument before feeding it to the reverb. Their default frequency balance is that of a solo instrument, not an ensemble one. Doing the EQ to sculpt your instrument after the reverb, while making more sense CPU-wise, must be avoided. It really weakens the reverb and makes it sound fake.
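    For readers wiring up that mid cut themselves, here is a sketch of a standard peaking-EQ biquad (the well-known RBJ cookbook formulas; the 48 kHz / 1 kHz / -3 dB values below are only an illustration, not a setting recommended in this thread):

```python
import math

def peaking_eq_coeffs(sr, f0, gain_db, q=1.0):
    """RBJ cookbook peaking-EQ biquad coefficients, normalized so a0 = 1.
    Returns (b, a) feedforward/feedback coefficient lists."""
    A = 10 ** (gain_db / 40.0)
    w0 = 2 * math.pi * f0 / sr
    alpha = math.sin(w0) / (2 * q)
    b0 = 1 + alpha * A
    b1 = -2 * math.cos(w0)
    b2 = 1 - alpha * A
    a0 = 1 + alpha / A
    a1 = -2 * math.cos(w0)
    a2 = 1 - alpha / A
    return [c / a0 for c in (b0, b1, b2)], [1.0, a1 / a0, a2 / a0]

# e.g. a broad -3 dB cut around 1 kHz before the reverb send
b, a = peaking_eq_coeffs(48000, 1000.0, -3.0, q=0.8)
```

    A quick sanity check on these formulas: at 0 dB gain the filter collapses to an identity (b equals a), which is easy to assert in a test.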
  • edited 6:20AM
    To modernbard: my manual stops at page 29. I’ll download the latest.

    Re reverb treatment: I do a lot of R&B in the style of Muscle Shoals-Stax-New Orleans, which is usually reverb-lite to non-existent. I want to try to stay in that vein.
  • edited 6:20AM
    I doubt any style of music uses any kind of anechoic instruments whatsoever :p I think you're confusing the reverb as a whole, which can be used for tons and tons of things that we wouldn't call reverb, with the reverb tail. I mainly do super-dry, ambient, short reverbs with RT60s below 0.7 seconds, which aren't processed by the brain as reverbs at all, but as spatial and tonal information.

    Non-existent reverb is something totally unnatural, which exposes instruments to mathematical artifacts like the ones you're having. I can assure you that you can use the tools and methods I've talked about to make super-dry yet correctly placed and wide instruments.
  • edited 6:20AM
    Mickeyrouse wrote: I have toyed with tuning, but it seems the variability is too great.
    I don't suppose that playing "out of tune" is a good idea. Adding some randomness to the tuning (via pitch bend) might be more promising.

    Trumpet 3 features "CC28: random detune" and "CC32: pitch fluctuation" seemingly exactly for this purpose. (-> User guide p 15.)
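    As a rough sketch of the random-detune idea (plain Python; the ±2-semitone bend range and ±8-cent spread are assumptions for illustration, not values from the user guide), a small random detune per note can be mapped to a 14-bit pitch-bend value like this:

```python
import random

def cents_to_pitch_bend(cents, bend_range_semitones=2.0):
    """Map a detune in cents to a 14-bit MIDI pitch-bend value (0..16383),
    assuming the synth's bend range is +/- bend_range_semitones.
    8192 is the center (no bend)."""
    semitones = cents / 100.0
    value = int(round(8192 + semitones / bend_range_semitones * 8192))
    return max(0, min(16383, value))

random.seed(1)
# One small random detune per note, here within +/- 8 cents.
detunes = [random.uniform(-8.0, 8.0) for _ in range(4)]
bends = [cents_to_pitch_bend(c) for c in detunes]
```

    Each instrument in the unison would get its own independent sequence of detunes, which is exactly the decorrelation the thread is after.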



    Moreover, a random delay imposed on the note-on and note-off events might help. I don't know if the Kontakt library supports this by itself, but (e.g. with Reaper) you could easily find or create a plugin that does this.
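    A random note-timing jitter of that kind is easy to prototype outside the DAW. This is a toy sketch (the tuple format and the 12 ms default are invented for illustration, not any Kontakt or Reaper API):

```python
import random

def humanize(notes, max_jitter_ms=12.0, seed=None):
    """Shift each note's start and end by a small random amount (ms).
    `notes` is a list of (start_ms, end_ms, pitch) tuples -- a toy
    stand-in for real MIDI events. Each note keeps its pitch; only
    its timing moves, so two copies of a line never align exactly."""
    rng = random.Random(seed)
    out = []
    for start, end, pitch in notes:
        jitter = rng.uniform(-max_jitter_ms, max_jitter_ms)
        out.append((max(0.0, start + jitter), max(0.0, end + jitter), pitch))
    return out

line = [(0.0, 500.0, 60), (500.0, 1000.0, 62), (1000.0, 1500.0, 64)]
player_a = humanize(line, seed=1)
player_b = humanize(line, seed=2)  # a second "player" with different timing
```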



    -Michael
  • edited 6:20AM
    Plougot wrote: As for myself, I tried tons of reverb (lexicon's, vienna mir pro, EAReverb, ...
    With Reaper you have ReaVerb (a pure convolution tool that is to be loaded with impulse-response files). ReaVerb does not eat that much CPU. You can even set it to a "No Latency" mode that eats slightly more CPU but does not introduce any latency. I happily use two instances of it for playing pipe-organ sounds live. I am sure there are free convolution VSTs usable with other DAWs.



    Into it you can load some of the great impulses provided by "Samplicity M7". All of this is free and of great quality.



    For positioning a sound source in a room you can use the "Space 360" VST, which is also free of charge.



    -Michael
  • edited 6:20AM
    Yeah, I've also tried those and I didn't find them very convincing. The main advantage of Seventh Heaven Pro over any other plugin is that it's much more than a convolution, and it's actually quite obvious to the ear: the IRs are intermodulated in a way that gives them this "alive" feel that you don't get with traditional convolution, basically because traditional convolution doesn't correctly capture a sonic space in its entirety.

    It's more of a good approximation, which falls short when used on ultra-dry or anechoic sounds.



    It's interesting that you mention the Samplicity IRs; Seventh Heaven Pro is actually also an M7 emulation. But it's much closer to the original, thanks to the modulated IRs I was talking about.

    There are some comparisons around the internet. I know that on Audiofanzine, Seventh Heaven Pro is the only plugin in the history of the site to have a 10/10, and the comparisons are heavily in favour of LiquidSonics's plugin.



    For Space 360, I tried it a while back, but I found it to be barely usable on Samplemodeling's instruments. It's not bad, but it's not precise enough; it has this weird metallic quality of algorithmic reverbs. It becomes especially obvious when you try to build an ensemble sound out of it. At least that's the conclusion I remember reaching... I must admit I don't remember the sound of it directly ^^. It's a really nice plugin though, for the price, can't complain :D
  • edited November 2017
    Great that you did all these tests !



    But technically "IR" means "impulse response", and this by definition is what pure mathematical convolution does. And it is exactly what a "natural" static room does to a signal sent out from a single static point in space and recorded at some other static point in space. This is called "linear" signal processing.



    This is perfectly natural, but that does not mean that it is appropriate for the task in question, because with merely linear processing there is (by definition) no difference between processing two signals and mixing the results, and mixing the signals first and then processing the sum.
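    That linearity is easy to verify numerically. The sketch below (plain Python, with a toy 3-tap impulse response) checks that convolving the mix equals mixing the convolutions:

```python
def convolve(x, h):
    """Plain linear convolution: y[n] = sum over k of x[k] * h[n - k]."""
    y = [0.0] * (len(x) + len(h) - 1)
    for n, xn in enumerate(x):
        for k, hk in enumerate(h):
            y[n + k] += xn * hk
    return y

ir = [1.0, 0.5, 0.25]            # toy impulse response ("room")
a = [1.0, 0.0, -1.0, 0.5]        # signal 1
b = [0.3, 0.7, 0.2, -0.4]        # signal 2

# Mix first, then process -- versus process first, then mix.
mix_then_conv = convolve([x + y for x, y in zip(a, b)], ir)
conv_then_mix = [x + y for x, y in zip(convolve(a, ir), convolve(b, ir))]
```

    The two results agree (up to floating-point rounding), which is exactly why a shared static convolution reverb on a bus cannot give two identical signals different "personalities".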



    "Advanced" (non-linear) reverb engines might either work without recorded "natural" impulse data or add modulation and/or distortion (e.g. volume-dependent filtering, overdrive, ...). While this is not "natural", it might be pleasant and interesting, and in this special case add a different "personality" to otherwise similar signals.



    -Michael
  • edited 6:20AM
    We're on the edge of my knowledge here, but I'm pretty sure Seventh Heaven Pro is doing more than just something "pleasing". It's briefly explained here: https://www.liquidsonics.com/fusion-ir/about/

    and here:

    http://www.musicradar.com/news/tech/meet-the-programmers-liquidsonics-634888



    Unless I'm mistaken, it seems to be more than a psychoacoustic phenomenon. Something feels like it's "breathing", whereas a normal convolution reverb seems "static". I'm no engineer, though, so I can't go further. Anyway, have you tried it? There's a demo on the site, but beware: trying it is adopting it :p
  • edited 6:20AM
    Good ideas, all. Thanks.
  • edited 6:20AM
    Plougot wrote:
    http://www.musicradar.com/news/tech/meet-the-programmers-liquidsonics-634888 ...

    -> "The inputs to the convolution streams are modulated".



    This is exactly what I meant to explain, what you call "breathing", and what might be very helpful in the situation at hand.



    -> " a great reverb algorithm will never produce the same response twice. "



    That might be great, helpful, pleasant, and appropriate, but it is not what a natural room does, as a room obviously is "static".



    I wonder if this could be done by using normal (technically correct) convolution and feeding it with multiple amplitude-modulated versions of the signal. No cost, but a lot of work to set up :)
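    A minimal sketch of that idea (plain Python; the toy signal, impulse response, and LFO rate/depth are arbitrary assumptions): amplitude-modulate each send with a different LFO phase before a plain convolution, so identical dry parts still produce different wet signals.

```python
import math

def convolve(x, h):
    """Plain linear convolution."""
    y = [0.0] * (len(x) + len(h) - 1)
    for n, xn in enumerate(x):
        for k, hk in enumerate(h):
            y[n + k] += xn * hk
    return y

def modulated_send(x, sr, rate_hz=0.3, depth=0.1, phase=0.0):
    """Slow amplitude modulation applied to the signal before it reaches
    the convolution, so two sends with different LFO phases feed the
    "room" differently even when the dry signals are identical."""
    return [s * (1.0 + depth * math.sin(2 * math.pi * rate_hz * n / sr + phase))
            for n, s in enumerate(x)]

sr = 1000
dry = [math.sin(2 * math.pi * 50 * n / sr) for n in range(sr)]
ir = [1.0, 0.4, 0.2]  # toy impulse response

wet_a = convolve(modulated_send(dry, sr, phase=0.0), ir)
wet_b = convolve(modulated_send(dry, sr, phase=math.pi / 2), ir)
```

    The convolution itself stays linear and static; only the pre-modulation differs, which is the cheap approximation of a "breathing" reverb being discussed.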



    Both variants of course will eat a lot of CPU (as you already mentioned).



    -Michael
  • edited 6:20AM
    I think it may be like the difference between math and physics. Yes, in theory, static is true to the real world. But that's based on the assumption that nothing is moving, even on a molecular level. That the bouncing sound isn't ever so slightly warming up the air and the walls, isn't making them react differently. That the player and the instrument are not moving, not even just vibrating.



    To our ears, of course, it's all the same but it varies significantly in a mathematical sense.

    Sort of a butterfly effect. Too many variables are changing all the time for a theoretical static approach to be enough.



    In that sense, I'd say what he's doing isn't something on top of reality, as your purely mathematical approach would suggest; it's actually a better approximation of what reality does. And it's different from, for example, saturation or distortion, which are psychoacoustic phenomena.
  • edited 6:20AM
    Here's what I was going to add a couple days back to demonstrate the differences I mentioned, but attachments weren't working previously.



    The attachment shows a simple horn line I put together that hopefully shows the difference. All MIDI input was via EWI-USB controller.



    Take 1: A solo Trumpet 1 v3

    Take 2: Identical Trumpet 1 v3 loaded 3 times, all controlled by the exact same MIDI input as take 1. Notice the odd phasing sounds. This is definitely unpleasant, and doesn't represent how an actual ensemble would play (exact same instrument with exact same input).

    Take 3: Trumpet Unison Multi (Trumpets 1, 2, 3) controlled by the exact same MIDI input as take 2. This one shows that the unison patches are quite useful for getting closer to a unison-line sound from a single MIDI line.

    Take 4: Trumpet Unison Multi (Trumpets 1, 2, 3) all uniquely recorded to play the same line. This shows the benefit of crafting the input for every player. Takes more work for sure, but now it starts to sound legit.

    Take 5: Contextual listen: Trumpet Unison Multi (Trumpets 1, 2, 3), all uniquely recorded, with one section in unison.



    The "space" in all of these is directly from the Trumpet instrument settings. Early reflections from the virtual sound stage tab (Multi's defaults) and the reverb is the Trumpet-supplied IR loaded by the Multi as well.
  • edited 6:20AM
    Plougot wrote: That the sound bouncing isn't ever so slightly warming up the air and the walls, isn't making them react differently. That the player, the instrument, are not moving, not even just vibrating.
    Of course you are right. The question is which of these effects are not so tiny that they can't be heard.



    -Michael
  • edited 6:20AM
    Bard,

    Thanks a lot for the examples !!!

    -Michael
  • edited 6:20AM
    Of course you are right. The question is which of these effects are not so tiny that they can't be heard.



    -Michael

    Actually, my point is that the effects are quite obvious on our "macro level", like dominoes never falling exactly the same way, or dice rolling. None of the causes will ever be apparent, yet the result will always be unique.



    ModernBard, if you're interested, post your test MIDI files here and I will run them through the setup I was talking about. This would give everyone a chance to hear what I'm talking about, or to hear that I'm full of sh*t if you disagree with the result ^^.
  • edited November 2017
    Plougot wrote: Actually, my point is that the effects are quite obvious on our "macro level", like dominoes never falling exactly the same way, or dice rolling.

    Yep, mathematically this is called "chaos theory": tiny differences at one point result in large differences at a later point. This of course is a ubiquitous effect in physics, and obviously worth considering.



    But I don't assume that a natural room will "breathe" (i.e. show a slow periodic modulation). Nonetheless this might be a better approximation than completely static behavior.



    -Michael
  • edited 6:20AM
    Plougot wrote:
    ModernBard, if you're interested, put your test midi files here and I will run them through the setup I was talking about. This would give everyone a chance to hear what I'm talking about, or to hear that I'm full of sh*t if you disagree with the result ^^.


    Sure, I'll try to get it exported and shared within a few days when I get a chance. I'd love to hear an example of the effect you mentioned.
