When Steinberg released Halion 6 in February of this year, including Halion Scripting (based on Lua) and a UI builder, I was very excited to see what was cooking under the hood.
I expected there to be a learning curve, but the reality was much more daunting. Not being a programmer, I reached a point where I thought I would never fully grasp all the concepts and syntax, leaving me with limited use of the Halion scripting language. The online Programming in Lua reference largely uses single-letter variables in its explanations, with very little real-world language in the examples. This made following the trail of a script twice as hard.
On the other hand, the Halion 6 scripting reference was more like a dictionary, and the supplied examples served more to explain themselves than to show a practical use. The scripts included in instruments were also hard to follow, as their exact "global" purpose wasn't immediately clear, even after reading the added comments.
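To illustrate what I mean (my own example, not one from the book): here is a typical reference-style snippet next to the same logic written with real-world names.

```lua
-- Reference style: correct, but opaque to a beginner.
local function f(t, k)
  local s = 0
  for _, v in ipairs(t) do s = s + v ^ k end
  return s
end

-- The same function with descriptive names: much easier to
-- follow when you are tracing your way through a longer script.
local function sumOfPowers(values, exponent)
  local total = 0
  for _, value in ipairs(values) do
    total = total + value ^ exponent
  end
  return total
end

print(f({1, 2, 3}, 2))            -- both compute 1 + 4 + 9
print(sumOfPowers({1, 2, 3}, 2))
```

Both functions do exactly the same thing, but only one of them tells you what that thing is.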
Learn Lua – Brian Burton
This YouTube playlist is a very good place to begin.
The videos are fairly short and to the point. The use of real-world language and practical examples helps make you more comfortable with the Lua language. Working through these a few times will lay the groundwork for continuing on to Halion Scripting.
There are a few Lua and general programming concepts which are not covered, but the next playlist will take care of that.
Lua Tutorials – KarmaKilledtheCat
This playlist is a bit more lengthy than the previous one, but obviously takes a closer look at Lua and programming in general.
Lua mimicking object-oriented programming and topics such as metatables are covered in greater depth. Not everything covered in these videos applies to Halion Scripting, but a more detailed knowledge of what the previous playlist covered is essential in the long run.
Again, the use of regular language and relatable examples makes the information more digestible. This playlist covers most of the "Programming in Lua" reference and is very thorough.
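As a small taste of the kind of thing the playlist covers, here is a minimal sketch of Lua mimicking OOP with tables and metatables (the Synth example is mine, not from the videos):

```lua
-- A simple "class" built from a table and a metatable.
local Synth = {}
Synth.__index = Synth

-- Constructor: the new table's metatable is Synth, so any
-- method lookup that fails on the object falls through to Synth.
function Synth.new(name, volume)
  local self = setmetatable({}, Synth)
  self.name = name
  self.volume = volume or 0.8
  return self
end

-- A method; the colon syntax passes the object in as "self".
function Synth:describe()
  return self.name .. " at volume " .. self.volume
end

local pad = Synth.new("WarmPad", 0.5)
print(pad:describe())
```

Once this pattern clicks, a lot of seemingly mysterious Lua code turns out to be nothing more than tables pointing at other tables.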
Steinberg developers have also been on the Halion forum and made some additions to the online script reference based on common user questions. To a certain extent I think the HALion Script Home might progress even more over time as users give feedback on the topic.
As these playlists make you more familiar with Lua syntax, the Halion reference and scripts become much less confusing. Once you get to the point where you can identify functions as Lua functions, Halion functions or user-declared ones, the scripts quickly start making sense.
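For instance, in a short MIDI script like the one below, each function falls into one of those three categories. The names onNote and playNote come from the HALion script reference; check it for exact behaviour, as the rest of this sketch is my own:

```lua
-- User-declared: plain Lua that could appear in any script.
local transposeAmount = 12

-- Lua standard library: math.min is part of Lua itself.
local function transpose(note)
  return math.min(note + transposeAmount, 127)
end

-- HALion-defined: onNote is a callback HALion calls for each
-- incoming note; playNote is a HALion function that triggers one.
function onNote(event)
  playNote(transpose(event.note), event.velocity)
end
```

Being able to sort a script into these three buckets at a glance is, in my experience, the moment it stops reading like gibberish.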
It's easy to dismiss the traditional mono-compatible mix by citing that "almost everything is stereo nowadays", but you would be missing out on some tremendous benefits.
I messed around with the Control Room section of Cubase's mixer to get more familiar with its functions and settings, and noticed something I hadn't before. When I pushed the Stereo/Mono switch, my mix's level balance was completely off. Certain elements were way too loud, the bass was less defined and the overall mix EQ was less than satisfactory.
I was curious whether this could be the basis of a technique to improve mixes in general, and was quite surprised to find that it is an established procedure. The main premise is that if you use panning to achieve definition in your mix, you're not fixing the problem, you're avoiding it. It's a bit like trying to get into trousers that don't fit by only putting in one leg. Most mixes are heard from a distance or in any number of less-than-ideal scenarios. Fixes done with panning thus become obsolete and flaws creep back in.
Lately I’ve been moving outside my comfort zone as far as genre is concerned. For the most part this was not a problem during production, but lead and melody elements never sat well in the final mix. This was perhaps the most apparent and immediately useful revelation of summing to mono. I was truly surprised at how obvious the imbalances had become and even more surprised at how quickly this could be solved while in mono.
Occupying the Same Space
While elements are panned, it is not always easy to identify those that occupy similar frequency bands, or at least have significant overlaps. Summing forces elements into a space where proper EQing is the only way to regain definition. You could argue that changing levels will also improve mix definition. While true, it might come at the cost of a mix element you don't want to lose.
Phase Issues
No need to say much here, as summing to mono making phase issues more pronounced is fairly predictable. This helped a lot with fixing erratic bass levels and muddiness, as the low end is where phase is most likely to cause problems due to longer wavelengths. As a result, high-pass filtering and low-shelf EQing suddenly became more accurate and quicker to set up without losing beef in the bottom end.
This overlaps with my previous point about “same band occupation” and goes a long way to clear up the mix while maintaining cohesiveness.
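To see why low frequencies suffer most, here is a small numeric sketch in plain Lua (not a HALion or Cubase script; the sample rate, frequency and delay are arbitrary values of mine):

```lua
-- Two copies of a 50 Hz sine: the right channel is delayed by
-- half a cycle (10 ms), putting it fully out of phase with the left.
local rate  = 48000                 -- sample rate in Hz
local freq  = 50                    -- bass frequency in Hz
local delay = rate / (2 * freq)     -- half-cycle delay in samples

local peakLeft, peakMono = 0, 0
for n = 0, rate - 1 do
  local left  = math.sin(2 * math.pi * freq * n / rate)
  local right = math.sin(2 * math.pi * freq * (n - delay) / rate)
  local mono  = (left + right) / 2  -- simple mono downmix
  peakLeft = math.max(peakLeft, math.abs(left))
  peakMono = math.max(peakMono, math.abs(mono))
end

-- In stereo each channel still peaks near full scale; summed to
-- mono, the half-cycle offset cancels the bass almost completely.
print(string.format("left peak: %.3f  mono peak: %.3f", peakLeft, peakMono))
```

In a stereo playback you would hear that bass on both speakers; in the mono downmix it all but vanishes, which is exactly the kind of problem the Stereo/Mono switch exposes.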
EQ and Automation
While taking care of the above-mentioned issues, you might start noticing level automation and tone problems. Although this is related to phase and frequency band occupation, on a conceptual level we often think of it in a different way and notice it at different times. Also, clearing up mix definition will gradually bring these flaws to the front, as conflicting elements start finding their place.
With mono summing, level automation smoothing and EQ choices are more easily reached. As you are removing one critical listening barrier by bringing mix elements into a single panoramic plane, level fluctuation, masking and tonal conflicts become more evident.
In principle, this entire process can be clarified with a parallel to visual dimensions. It's very hard to gauge the comparative size of two objects at different distances from the viewer. Placing them in the same depth plane removes a judgement obstacle, in turn making comparison easier and more accurate.
If you've never accessed the Control Room mixer before, it's quite straightforward.
In Cubase Pro, after clicking the “Window Layout” icon in the mixer window, you can then activate the “Control Room / Meter” option. If the control room is not turned on, click the enable button.
Two things to do if the control room has not been set up before:
- Go to VST Connections => Studio => Add Channel => Add Monitor – Connect this to the output that feeds your studio monitors
- Go to VST Connections => Outputs – Make sure there are no other connections to your monitors here – This is important. You could otherwise end up with a duplicate output which pushes up your monitor level and plays a stereo signal over the mono signal when changing the monitor downmix.
Having the control room active gives you a quick way to turn on a mono downmix on the output of your choice. If you have Cubase Artist, put a plugin such as StereoEnhancer on your master bus and turn the stereo spread to zero. You can then activate the plugin as needed.
I hope this benefits others as it did me. It’s a simple but very handy tool in your mixing arsenal, which could lead to great results.
Why Batch Bouncing
This is a handy technique I discovered out of necessity when dealing with a potentially laborious situation in a recent project.
A while back I was commissioned to do a mastering job. The client, however, could only supply me with a single wave file containing all 30 tracks. I wasn't in the mood to bounce each clip individually after slicing, and thought it worthwhile to find a simpler approach with future projects in mind.
Cubase is more than capable in dealing with sliced audio, clips and audio events, but there are pitfalls I wanted to avoid. When processing audio events referring to a single wave file, your options for dealing with shared events can lead to problems.
If, for instance, you want to normalize a shared event, you will be given the option of applying the process directly to the wave file, or creating a new version with your process applied. This still leaves you having to repeat the same process 30 times over. In this particular case I'm also not a fan of processing the wave file directly via audio events. On the off chance that your project file becomes corrupt, or saving was unsuccessful, you are left having to do some pretty fidgety editing to reconstruct which processes were applied to which split events.
Should you try to apply an audio process to all shared events at once, your options change to “skip doubles” and “new version”. With “skip doubles”, only the first selected event is processed. “New version” might seem like an option, but there is a downside. Initially the new version still refers to the original wave file, but once you decide to commit to your processes by freezing the edits, it makes a full duplicate of the original file with processing only applied to the event region.
30 × 700 MB is not a cost-effective use of hard drive space for normalizing and DC offset correction.
How it's Done
1. Select all shared events you want to bounce individually.
2. On the Audio menu, click Advanced => Event or Range as Region.
- Trimming for Sampler: Even though plugins like Halion and Groove Agent have functions for slicing and trimming when saving or exporting programs, I sometimes prefer trimmed samples before going into patch building. Especially when slicing drum beats, you can end up with very short samples. Having the full audio file overhang outside the sample regions affects zoom performance and region marker drag accuracy.
- Stripping Silence: When dealing with recordings containing large silent spaces, or using the “Detect Silence” function, this can also be a way to make the resulting clips more manageable. This, again, saves disk space and affords bulk processing of the resulting clips without fuss.
- Isolate VariAudio: When treating a vocal or soloist recording, you sometimes only want to apply VariAudio to specific parts. When applying VariAudio to an audio event, analysis occurs throughout the whole file. I don’t like any more processing done than needed in a project so this comes in handy.
What it Doesn’t Do
This process does not apply any event based fades, envelopes or non-destructive volume/gain changes to the resulting clip.
I hope this will be of benefit to some. It has saved me valuable time by reducing repetitive tasks down to a few clicks.
Nowadays, doing a YouTube search for an audio tutorial on your chosen subject can lead to much confusion and wasted time. Many results parrot the same drivel, tell you what settings to use rather than explaining the underlying concepts, or give you sub-par instruction based on a poor understanding of how audio works.
To help make sense of the noise, and because I got tired of seeing "no posts" on my main page, I thought I'd quickly share my two favourite YouTube resources.

Dave Pensado's channel probably needs little introduction. As of this writing he has 160k subscribers, has won a Grammy and has mixed for some of the biggest names in the business. So there you go! He's doing well.
His channel also has great guests and interviews, giving fresh perspective and bringing new approaches to recording and mixing techniques.

Mixbus TV has a very solid following, and with good reason. Whether you need to brush up on skills or dig much deeper into the workings of audio processing, this is the place to be. His ultimate compression playlist should convince you of this very quickly.
Compression is a misleadingly simple process and every now and then I start doubting my approach. This series set me straight very quickly.
With an extensive collection of real world mixing tutorials, debunking of audio myths and solid workflow advice, this channel has become a regular “go to” for me.
This site will mainly be a repository for my work, but I will also be posting some articles on sound design, synth programming and interesting web finds. I'm thinking of writing a few posts about Cubase, Halion and Dune 2.5, among others. So if anyone ever reads this, it might be worth checking back every once in a while.