Win Iris 2 by iZotope

Merry Christmas to everyone! Even if Santa already finished his world tour, he still had some nice gifts for a couple of Designing Sound Exchange members here, starting with Iris 2, the powerful synthesizer by iZotope.

As you might already know by now, if you’re interested in Iris 2, please take a break from your chocolate-eating holidays and head over to DSX and start debating with fellow sound enthusiasts. Talk and listen, ask and answer, and you might end up having an extra Christmas gift this year!

More details about our give-away can be found here. Thanks a lot to Santa iZotope for providing us with a copy of Iris 2 to give away, and good luck to y’all!

The VR Soundscape of Secret Sorcery’s Tethered


Guest Contribution by Kenny Young

Tethered is an immersive VR strategy game in which you take on the role of a god-like spirit tasked with looking after your flock of ‘Peeps’, the cute but simple-minded inhabitants of the beautiful, floating archipelagos that you gaze down upon from your lofty seat amongst the clouds. This disembodied third person perspective sets the experience apart from the embodied, or implied-embodied, first person perspective employed by most other VR games and had a significant impact upon various aspects of its audio presentation.[1]

My intention here is to highlight a different kind of VR audio experience with the hope that this helps to further our collective understanding of what is common to all VR audio experiences and perhaps also what aspects are more readily applicable to the growing variety of sub-genres and experiences that VR supports.


The VR Illusion

A player’s reaction in VR is much more likely to be instinctual or subconscious – the layer of abstraction that is normally provided by the TV or computer screen, speakers and input device is dramatically reduced in VR. For audio this means that the familiarity and comfortable habituation that players have accumulated from all the audio experiences they’ve ever had are no longer their only point of reference; they are also referencing their perception of how sound behaves in the real world.

It’s unsurprising, then, that the simplistic ways that we generally rely upon to represent sound in our virtual worlds often don’t translate all that well to VR and can fall short of players’ expectations.

However, not all VR experiences are the same or have the same challenges and requirements. This is easy to overlook when so much insistence is placed on the apparent necessity of the various different audio technologies which have found their true calling in VR. Don’t get me wrong, I love tech, and attempting to replicate the complex ways that sound can interact with its environment and our ears is a super interesting problem, and it’s important to understand those interactions and consider whether they are relevant to your project. But audio people, particularly interactive audio people, tend to skew technical in their abilities and interests, which makes it all too easy to indulge that side of ourselves and get bogged down in the tar pit of technology. Technology exists to serve the player experience and all of its ludic, artistic, conceptual, dramatic, emotional and metaphysical pillars and pores. As always, you can’t really go wrong if you are evaluating tech through the lens of what the player experience truly needs.

Once I got stuck in to it, it was clear that Tethered didn’t need to invest in complex reverb models, obstruction, occlusion or propagation. It didn’t even really need a 3D audio solution. This didn’t particularly match my expectations of working on a VR game, but the project was nonetheless a great opportunity to try out and learn new things.

3D Audio

Everything is more or less in front of the player in Tethered, so 3D audio is a relatively subtle/nice addition to this VR experience rather than a significant deal breaker. There’s a bit of non-critical height differentiation between events that happen above you (such as the materialisation of clouds and falling eggs) and events that happen below you (which is pretty much everything else) and that’s about it.

The only thing that is guaranteed to be behind the player at some point is a distant flock of circling birds – normally, I probably wouldn’t have bothered scoring this kind of peripheral event, but these small details seem much more significant in VR and I found myself just gazing at the birds and enjoying them. So, I felt compelled to give them some sound support, but this also created the problem of them being rather distracting, especially when positioned behind the player. Placing them on the cusp of consciousness in the mix wasn’t quite enough, so I ended up emulating the way these kinds of distant sounds can ebb and flow on the wind to make sure they weren’t an omnipresent menace.

In many respects I think this is representative of my first forays into creating a VR audio experience: try something; it invariably doesn’t quite work; iterate. Whilst this is a familiar way of working for any experienced games maker, the applicability of this across the entirety of the audio experience, coupled with an increased level of attention to detail, meant that there was significantly more to do before reaching a sense of satisfaction and completion with the work.

One of the primary knock-ons from using 3D audio tech is that it requires all of your spatialised sounds to be mono point sources (or composed from mono point sources). This is fine for the most part, but sometimes such overt directionality isn’t desirable. One event in Tethered which didn’t work so well when collapsed to mono is the lightning cloud strike, which you can use to zap marauding creatures or destroy obstacles on the islands. It’s a powerful event and one of the loudest and most visually striking in the game, which is also why it being so directional doesn’t quite work – the lack of any enveloping environmental reflections that you strongly associate with thunder and lightning just isn’t credible and feels off. Similarly, using a 2D stereo sound (i.e. going straight to the headphones) also feels strange because when you move your head around it doesn’t track. So, for this event I decided to try triggering two sounds at the same time.

To start with I had the blast of the initial lightning bolt as positional and the following rumble of the thunder as stereo, but that sounded more or less the same – the blast didn’t really register as positional, perhaps because it was too short to track, but maybe also because my ear immediately caught hold of the stereo tail. But, after some more experimentation, what I settled on was having a stereo blast with an additional (and unique) positional element layered on top, which then gives way to the thunder which is initially stereo but smoothly resolves to being positional. The stereo elements really sell the power and size of the event, especially when juxtaposed against all the other mono-positional sounds in the game, and, whilst it makes little sense on paper, having the thunder resolve to being positional really helps with localisation. There’s clearly plenty of room for experimentation here and it’s going to be cool to see these kinds of techniques progress and mature.
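For readers who like to see the idea in code, here’s a minimal sketch of the “resolves to positional” trick, assuming a simple equal-power crossfade between a head-locked stereo layer and a world-positional layer. The function name and per-sample blending model are my own illustration, not the game’s actual implementation:

```python
import math

# Illustrative sketch only: blend a head-locked stereo layer into a
# world-positional layer with an equal-power crossfade, so perceived
# loudness stays roughly constant as the thunder "resolves" to positional.
def resolve_to_positional(stereo_sample, positional_sample, t):
    """t runs from 0.0 (fully stereo) to 1.0 (fully positional)."""
    a = math.cos(t * math.pi / 2)  # stereo layer gain
    b = math.sin(t * math.pi / 2)  # positional layer gain
    return a * stereo_sample + b * positional_sample
```

In a real engine you would drive `t` from a curve over the thunder’s duration rather than blending per sample in script; the point is just that the two layers trade gain smoothly instead of switching abruptly.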

Sense of Place

Reinforcing the sense of place is central to backing up the reality presented to players in VR. Because a strategy game essentially boils down to ‘lots of things happening in the world as a result of the player’s actions’, one of the primary ways of reinforcing the sense of place in Tethered was by paying close attention to the presentation of sounds that take place within it.

Background ambience is clearly an important consideration for all VR projects. However, the world surrounding the player in Tethered is of much less importance than the world they are observing in front of them. As a result, using surround ambiences of some ilk (e.g. first order ambisonics) wasn’t a good fit for this project. So, the backgrounds in Tethered are a mixture of plain vanilla stereo beds (without too much detail in them – it’s primarily air or wind with a wash of birdsong or crickets, depending upon time of day) with some random ambient spots to help create variation (e.g. sporadic birdsong and other wildlife, gusts of wind), and then there are also positional spots placed in the world to accentuate the features of a given island (e.g. a waterfall, an ominous cave, the wind in the trees).

The broadband noise of the waterfalls on some of the islands in Tethered was a bit of a nightmare – they’re huge and impressive which meant they needed a sound to match, but they’re also omnipresent, sucking up room in the mix. I addressed this by having them be loudest at the start of a level, when the player is most likely to be observing them as they survey the landscape, and then reduced them gradually over 45 seconds to move them out of the way. Another solution I considered was to subtly and smoothly ride the volume based on the player’s gaze, but I was put off that idea when it became clear from some other gaze-related issues in the game that just because the player is facing directly towards something doesn’t mean they are necessarily looking at that thing – in fact, it’s entirely possible to be looking at nothing in the world whilst having an absent-minded think! But there is something in this idea even if I couldn’t find a use for it here, and I’m absolutely convinced that once eye tracking tech comes online there is a whole world of intent-related mixing and dynamic audio which is super exciting to think about.
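The waterfall duck is simple enough to sketch in a few lines. Only the 45-second ramp comes from the text above; the target depth and the linear shape here are invented for illustration:

```python
# Sketch of the waterfall duck described above: loudest at level start,
# then eased out of the way. Only the 45-second duration is from the
# article; the -12 dB target and linear ramp are illustrative assumptions.
def waterfall_gain_db(seconds_since_level_start, duck_db=-12.0, ramp_s=45.0):
    """Gain applied to the waterfall loop, starting at 0 dB."""
    t = min(seconds_since_level_start / ramp_s, 1.0)  # clamp at full duck
    return t * duck_db
```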

Alan McDermott, Secret Sorcery’s Creative Director, had the nice idea of adding unique ambient sounds to each of the buildings that the player can construct – these are designed to be relatively unobtrusive additions to the overall soundscape, but if the player leans in and gets close enough they ramp up and the player can hear what’s going on inside. It’s subtle stuff, but all of these nice little touches add up to an engaging experience that enhances the sense of place, draws the player in and (hopefully) meets their expectations.


Some of the most important sounds in Tethered are those made by the Peeps. On one level, the various different sounds of them going about their daily business – gathering resources, calling out to each other or humming to themselves as they move between tasks – are simply nice touches that fill the air and help make the world feel vibrant and inhabited. On another level, there’s some crucial gameplay information to be gleaned about what is going on in the world, and where. And on yet another level these sounds complement the animation and AI code to bring these little creatures to life in a way that is greater than the sum of its parts, and really helps to cement your emotional bond with them. It’s understandable, then, that these sounds received a lot of attention and iteration. One of my fundamental epiphanies was realising that all these sounds really needed to sound like they were taking place in the world, and that this was a great way of backing up the sense of place. I paid more attention to the impact that size, space and distance had on the sounds in Tethered than I have in any other game I’ve worked on.

Tethered always presents an exterior location to the player, and from a similar mid-distance, so this allowed me to bake a nice exterior reverb and EQ in to all of the samples that represent events emanating from the world. This is something I revised throughout development as I got a better sense of what worked, which allowed me to home in on a sound that I felt was credible and supportive of the world we were presenting. This approach also permitted content-specific tweaks (e.g. different reverbs and settings for different sounds and categories of sound), something that is currently a bit more of an ask when using real-time effects.

Distance attenuation also received quite a lot of experimentation and tweakage over the course of development, starting out with an overtly realistic approach on everything – because that’s what VR requires or the player dies from lies, just like in The Matrix – but ending up with something relatively subtle that merely alluded to distance. Having committed to baking reverb in to samples, dynamically adjusting it in real-time wasn’t an option, but I never got to a point where I felt like volume attenuation and low pass filtering wasn’t enough to appropriately sell the effect – again, the lack of any real close proximity to events certainly helped here because it meant my baked reverb only had to contend with representing mid to far distances.

Similar to my content-specific approach to reverb, most sounds were split in to two main attenuation categories – important gameplay events that must be heard by the player (with a max attenuation of -3dB and LPF at 12.5kHz over 20k distance units), and sounds that are more just part of the soundscape and exist to bring the world to life and could therefore afford to have a more significant roll-off (max attenuation of -48dB and LPF at 16kHz over 60k distance units).[2] It’s interesting that for key events the allusion to distance comes primarily through frequency roll-off – I wanted them to sound affected without being easy to miss in the mix – but for all other sounds the opposite is true – I didn’t mind them being a lot quieter but I didn’t want them to disappear completely either so I had to keep more high frequencies in there to catch your ear. Not letting the low pass filters go crazy is also beneficial for the HRTFs used by 3D audio tech (or, at least, it gives your ears more information to work with once an HRTF has been applied to a sound), but that was an ancillary benefit to be honest as I was primarily mixing for what felt and sounded right.
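Expressed as code, the two categories above might look something like this. The dB and kHz figures are from the text; the linear interpolation from no effect at zero distance to full effect at max distance is my assumption (a real implementation would use the engine’s own attenuation curves):

```python
# Illustrative sketch, not the game's code: interpolate gain and low-pass
# cutoff over distance for the two attenuation categories described above.
def attenuation(distance, max_distance, max_atten_db, min_cutoff_hz,
                base_cutoff_hz=20000.0):
    """Return (gain_db, lpf_cutoff_hz) for a sound at `distance` units."""
    t = min(max(distance / max_distance, 0.0), 1.0)   # clamp to [0, 1]
    gain_db = t * max_atten_db                        # 0 dB near, max far
    cutoff = base_cutoff_hz + t * (min_cutoff_hz - base_cutoff_hz)
    return gain_db, cutoff

# Important gameplay events: at most -3 dB and a 12.5 kHz LPF over 20k units.
gameplay = attenuation(10_000, 20_000, -3.0, 12_500.0)
# Soundscape detail: up to -48 dB and a 16 kHz LPF over 60k units.
ambience = attenuation(30_000, 60_000, -48.0, 16_000.0)
```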

Significant visual occlusion exists in some of Tethered’s levels (e.g. an island’s topography might peak in the middle, hiding what’s on the other side and requiring the player to change position to get a better view) but backing this up in an overtly realistic manner with an audio obstruction effect would have had the same detrimental and unfair impact on the player experience as enforcing a realistic distance attenuation effect. I’m sure a subtle nod towards this might have been nice, and the facility was there if I wanted to try using it, but the lack of this never felt problematic and I didn’t want to eat more CPU just to tick this box. Perhaps the coincidence of obstructed sounds generally being more distant and therefore sounding more obfuscated due to the attenuation effects kicking in helped to tick this box…


The music in Tethered has much in common with many other VR experiences in that it is quite chilled and offers players some calming reassurance in an effort to counterbalance the fact that being transported to another place is, in and of itself, quite an intense experience. So, the musical palette is generally one of lush string quartet intermingled with analogue synths, warm wind band-like brass and sparse solo instrumentation.

Where the music experience differs is that Alan had tasked me with looking in to using music, or musical sound, to communicate key information to the player in order to reduce the amount of visual UI clutter displayed in the world – Secret Sorcery had identified that this worked against maintaining the sense of presence in VR. So, on top of the ambient bed tracks sits a suite of easy to remember, symbolic musical motifs that the player comes to learn and associate strongly with some of Tethered’s most important gameplay events. When the player hears one of these iconographic music stingers they are informed as to what is happening, the positional audio and head tracking helps them to tune in to where it is happening and this in turn helps them to locate it visually and respond as they see fit.

So, on paper, this sounds like a great idea and, indeed, the end result works well for Tethered, but there was an awful lot of effort required to pull this off. The fundamental issue is that naively playing musical motifs at the same time as an underlying music bed track is a recipe for disaster – you need temporal and harmonic alignment for this to be a pleasant experience. So, I spent a couple of weeks prototyping an interactive music system in Unreal Engine 4 that proved this approach was at least technically viable and then handed it over to Scott Kirkland, Secret Sorcery’s Managing Director and one of their l33t coders, who spent a couple of days making it more robust and scalable for use in the game proper. Once it was up and running, there was then the issue of authoring all the stinger content, which was no small feat given that the stingers were memory-resident samples that had to be recorded from scratch (you can’t use virtual instruments or samples as samples – it’s against their licensing terms) and were segmented in to beat boundaries that could update at any point in their duration to match changes in the underlying harmonic progression of the bed track.
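To make the shape of the problem concrete, here’s a toy sketch of the two scheduling decisions described above: quantising a stinger trigger to the next beat of the bed track, and picking the segment authored for the bed’s current chord. The chord names, file names and whole-beat quantisation are invented for illustration; the real system also swapped segments at beat boundaries throughout the stinger’s duration:

```python
import math

# Toy illustration, not Secret Sorcery's system: the two decisions that
# let a stinger sit musically on top of an underlying bed track.

def next_beat_boundary(playhead_beats):
    """Quantise a trigger request up to the next whole beat of the bed."""
    return math.ceil(playhead_beats)

def pick_segment(segments_by_chord, current_chord):
    """Choose the stinger segment pre-recorded for the bed's current chord."""
    return segments_by_chord.get(current_chord, segments_by_chord["default"])

# Hypothetical authored content: one recording per supported chord.
segments = {"Cmaj": "stinger_C.wav", "Amin": "stinger_Am.wav",
            "default": "stinger_neutral.wav"}
start_beat = next_beat_boundary(14.3)  # trigger lands on beat 15
clip = pick_segment(segments, "Amin")  # picks the A-minor recording
```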

Doing this topic justice would be a whole other article in and of itself, so I won’t go in to more detail here because this piece is already a bit sprawly, and I’m at risk of cannibalising the audience for a GDC talk on this subject. But here’s a video that helps to illustrate the technique:

The main bit of takeaway here is that it’s pragmatic to spend your time and resources in the places that will return the biggest bang for buck on your project, and this might not relate to any of the technologies or approaches that are flavour of the month or that you are keen to utilise or learn more about. Prioritising amongst all the different technologies you could throw at a VR project is crucial, particularly for an independently developed and published title like Tethered.


I think it’s important to mention that the audio experience in Tethered was implemented using stock Unreal Engine 4 without any middleware and I found this to be a good experience! This isn’t an anti-middleware point – I love using middleware on projects where it enables a whole other level of finesse that otherwise would not be possible (Unity, I’m looking at you) – but Unreal’s own audio tools were a great match for this project’s needs despite any apprehensions I had due to the prevailing received wisdom. The most refreshing thing for me was having a totally seamless development environment with minimised abstraction between the audio implementation tools, the game engine and previewing the game. The fact that I was able to prototype and experiment with new functionality myself using the Blueprints system, despite never having used the Unreal Engine before, was of massive benefit to this project.


Working remotely as a freelancer and being an integrated part of the team was made possible by the brilliant support I received from Secret Sorcery. Alan McDermott isn’t just their creative director, he was previously the audio director at Sony’s Evolution Studios and it’s this level of AAA audio insight from someone in a senior creative position at the company that enabled the audio experience in Tethered to be a central part of the player experience – Alan was sowing audio seeds long before I was brought on to the project to help realise them. Similarly, having the managing director of a company as your audio coder is a real perk. This wasn’t the tragically familiar case of Scott “doing audio” because it fell in to his lap – he’s a massive fan and champion of audio, appreciates its contribution towards the player experience, and was brilliantly responsive to all of the harebrained things that Alan and I cooked up.

There is no substitute for teamwork and close collaborations – this is where the good stuff comes from and it deserves as much of your attention as any other aspect of a project, be that your own chops in your given craft or the technology you choose to use in it or develop for it. If you are working with reasonable people, there is a strong relationship between being seen to do the right thing for a project and being given the trust, freedom, encouragement and support to take that to the next level. Sound is viewed primarily as a utilitarian and technical discipline by most non-audio professionals – merely a box that needs to be ticked. If you feel alienated, uninvolved or unsupported, consider whether this is because your words and actions are confirming this false suspicion.

Final Thoughts

Creating the audio experience for Tethered wasn’t so much about slavishly recreating reality, it was more about alluding to it with a focus on backing up the world presented to players, being hyper-aware of the importance of this and, as a result, paying attention to the subtle details that helped to fulfil this function. Additionally, when there were problems that didn’t have easy solutions, but were clearly central to making the experience work, spending time on these paid dividends and helped to create a unique experience. I think all of that is equally applicable to any game and, in that regard, VR audio is nothing new – the overall approach hasn’t changed, the player experience has and this is where the knock-ons stem from.

I think it’s quite easy to dismiss a lot of the solutions I’ve discussed here because they quite clearly could have been used in a non-VR game. This is both true and totally misses the main point I’ve been trying (and possibly failing) to get across – none of the technologies or approaches in VR are new or unique; they have and are being used in non-VR games too. What’s new is that VR puts a pressure on your approach which pushes you in directions that you are unlikely to have considered, unlikely to have been able to justify, and unlikely to have had the determination to persevere with were it not for the amazing sense of presence and the requirement to back that up. If VR game audio is anything, it is this.

Another way of looking at it is that it’s important to differentiate between the requirement to present the virtual world in a way that meets the player’s expectations versus how you go about achieving that. The former is VR 101 and common to all VR experiences, the latter is project-specific (possibly sub-genre-specific) and no amount of reading up on “VR audio” is going to tell you precisely what the right thing to do is on a given project, it’s simply going to make you aware of what other people have done on their projects and possibly inform aspects of the approach you choose to take.

There are no rules.

Go nuts 🙂

Kenny is a freelance audio director, composer and sound designer best known as the brains behind the celebrated audio experiences in Media Molecule’s LittleBigPlanet and Tearaway franchises. You can follow him at @kcmyoung and find out more about his work at


1 – I don’t think that the terminology of “first person” and “third person” makes a great deal of sense in VR – arguably, all VR experiences are intrinsically first person irrespective of ‘camera placement’ because the player is always directly experiencing the world through their own eyes and ears and very much in control of where they are looking. So, perhaps it makes more sense to consider whether the player is embodied (i.e. looks down and sees their body) or disembodied (looks down and sees no body). That’s not a fool-proof distinction either because there are VR experiences where you have no visible body but you do have disembodied floating hands which make it feel like you have a body present in the world – “implied embodiment”. And then there are first person experiences where the player isn’t playing ‘themselves’, but is role-playing as someone else. This can work brilliantly if the player is invited to participate, but if handled clumsily it can easily bump the player out of the experience (e.g. you are given hands that you can’t move, or your character speaks on your behalf, or someone speaks to your character in such a way that it doesn’t feel like they are talking to you) – “implied disembodiment”. What I find interesting is that these are all issues in non-VR games too, but the lack of abstraction in VR puts a magnifying glass on them – the player’s sense of personal identity is stronger when they are in the world rather than merely peering in to it. Indeed, I think this puts a welcome pressure on the ability for players to communicate directly and naturally with NPCs. I’ll wrap up this rambling brain-fart of a footnote by asking you to consider the possibility of the player being an acousmêtre that can observe and haunt other inhabitants in the world as a disembodied voice. I think that’s pretty rad 🙂

2 – I know these distance units are essentially meaningless, but consider them relative to each other. FWIW, 20k units equates to something like the distance from your viewing position to the middle of an island (“just in front of you”), the result being that important events are perceived to be either “near you” or “not near you”, which is a useful cue for the player. 60k units is a little less than the length of the largest islands in the game, so a sound which is waaaaaaaaaaaay over there is going to be on the threshold of hearing – audible if you choose to tune in to it in a quiet moment, but otherwise out of the way.

Copyright © 2016 Kenneth C M Young

Sunday Sound Thought 52 – Reflection

I started this series a year ago…well, a year ago next week…with the goal of making myself continue thinking about sound in new ways. When I was regularly writing longer pieces for this site, that was easy to do. As I stepped back into more of a background role though, I found myself missing the constant motivation to evaluate the way I approach sound. So this was all about getting myself into the habit of thinking more regularly on a smaller scale, in the hopes that I would find ways beyond my daily work to continue developing my skills and aesthetics. It started with just exploring some ideas, and eventually turned into a weekly post-mortem of the work I had done or little inklings of ideas I had because of work, media viewing or even common everyday experiences. It’s made me more aware of myself, the way I work, and the ways I want to develop further. The weekly posts here were an accountability system to keep me on track. So I think I was successful in what I wanted to achieve. I honestly won’t know until I take away that accountability system and see if I truly have turned this into a habit. It’s helped me though, so I encourage others to try it.

Now I think you’ve all indulged me in my ramblings long enough. I set the goal of one year, and this is week number 52. Last I checked, that’s how many there are in one year. So I’m going to turn this idea over to the rest of the Designing Sound team who are going to run with it how they see fit. Thanks for helping me change!

Round 3 Winners and Next Week’s Prizes


So how did we pick this week’s winners? We took the two users who asked questions that generated the most discussion, and the two users who answered the most questions (with overall reputation as a tie breaker), then randomly drew names out of a hat to assign our prizes. So here they are…

  • BernC wins a copy of Krotos Dehumaniser II for answering 5 questions this week
  • JustinMacleod wins a copy of Zynaptiq Adaptiverb for starting the “Ethical sound design” discussion
  • AzimuthAudio wins a vehicle library from Airborne Sound for daring people to “Scare me into being better at archiving”
  • Avi Ziv wins the Urban Ambisonics Bundle from Pro Sound Effects for answering 5 questions this week
  • stevefanagan is this week’s randomly selected winner of an annual subscription to

Congrats to this week’s winners, and thank you, Krotos, Zynaptiq, Airborne Sound and Pro Sound Effects!

Just a quick note to our winners…I’m traveling today, but I’ll be reaching out as soon as I can this weekend.

We had some overlap between categories (Justin Macleod answered 23 questions this week!), and we had to disqualify a few others who were extremely active this week…because they’d already won something. It’s great to see this level of interaction! Keep it up, and remember that even those who have already won are still in the running to win the Grand Prize. We want those who’ve been rewarded for their participation to keep participating…and that’s our way of doing it. If you haven’t been participating there’s still a chance to win some awesome free stuff. Head over to DSX, check out the rules, and get involved!

Here’s what we’re giving away in our final week…

Sound Particles

iZotope Iris 2

Shotguns HD Pro

Any single library from

A 1 year subscription to

….oh and we’ll be announcing the winner of the Hybrid Library from Pro Sound Effects on January 1st!

Be excellent to each other, have fun conversations and good luck!

Jon Langford, Jean Cook and Walter Salas-Humara: December 17, 2016 – Private Residence in Brooklyn

neild reports:
In a bookend to his February 2016 acoustic gig/art show at a private apartment in Brooklyn, Jon Langford closed out the year with another performance at the same venue. This time he brought along not just his guitar and violinist/vocalist extraordinaire Jean Cook (plus lots of his inimitable paintings and prints, which did a brisk business), but also Silos cofounder Walter Salas-Humara. The pair of country-punk pioneers swapped songs while adding guitar and improvised percussion to each other’s material — on one track Langford started out drumming with brushes on his guitar back, before breaking for a drum solo played on his own butt — perfectly melding Langford’s sardonic tales of blasé drone operators and Irish fishing villages still reliving the decades-old filming of Moby Dick with Salas-Humara’s ballads of pot smuggling and doomed romance. It was as stripped down as punk rock, or music in general, can be, and it further cemented Langford’s role as one of the world’s most seriously funny, and comically serious, artists, while providing a welcome reminder that all that’s really needed for great music is a couple of instruments and an appreciative crowd willing to join in on the choruses.

As with the February show, there was no amplification, so this was recorded with the same CA-14 cardioid mics (mounted on a stand this time instead of sitting on the floor, as the space was slightly less cheek to jowl); when it sounds like the performers have wandered off into another part of the room at one point, that’s exactly what’s happened. Thanks to Jon and Walter and Jean, and to Jon Raaen for hosting us all again at his apartment. Next time, with less snow on the ground for once!

Download the Complete Show in MP3 or FLAC format at [HERE]

Stream the Complete Show (via

Jon Langford and Jean Cook
Private Residence
Brooklyn, NY

CA-14 cardioid mics > Church Audio ugly battery box > Sony PCM-M10 > WAV (24/48) > Sound Studio (light dynamic compression to reduce loud claps and percussion) > FLAC (16/44.1) > Tag > FLAC

Recorded and mastered by neil d

Early Set:
01 Over the Cliff
02 Millionaire
03 The Country Is Young
04 Tubby Brothers
05 Counting on You
06 The Sunshine and the Moon
07 Shitload of Cash
08 The Sounds Next Door
09 Sentimental Marching Song
10 Hank Williams Must Die
11 Youghal
12 Streets of Your Town (Go-Betweens cover)

Late Set:
13 second set
14 Summer Stars
15 Drone Operator
16 Walking on Hell’s Roof
17 Satellite
18 Diner by the Train
19 Nashville Radio
20 Homburg
21 Commodore Peter

Check out more on Langford’s many forms of art at the terrific site maintained by Nobby Knape, or just wait around long enough, and he’ll probably show up in some guise or another.

Bitwig Studio 2 lets you modulate and control like a bandit

Bitwig gets its first blockbuster upgrade since launch, in beta now. And the first look at this software suggests it’s continuing to deliver what an enthusiast audience wants – even if some of the revolutionary promise of the tool remains over the horizon.

So, first, what it isn’t: it isn’t a complete modular environment. Underneath all the goodies Bitwig offers is a set of modules that provide its functionality. Bitwig’s developers have said they’ll eventually open that up to users, not just keep it for their own development. And that would be exciting indeed.

But forget about big ambitions for a moment. The step that we get here looks really useful.

In fact, it might be friendlier to everyday users than the grand-modular-everything scheme.


What’s cool about Bitwig is its consistency. I think Ableton has actually suffered as its included devices have fragmented. There are third-party tools that never get updated. There are truly native tools like Simpler – and those are great. Then there are features relegated to second-class citizenship as Max for Live devices, which sometimes causes them to behave differently or load more slowly. There are different sets of tools for monitoring signal or looking at frequencies, and they aren’t available everywhere. Lots of functions aren’t modular. MIDI assignment is clunky. I could go on. Adding Max for Live seems to have become an excuse for not fundamentally improving any of this – at least through what’s now several years of updates. And, apologies, Ableton, but I think in this case you deserve the comparison.

Bitwig’s first versions laid a foundation for something more consistent and integrated. But we had to wait for them to deliver a product that built from that foundation past the competition.

And modulators really look like they could be it. Every internal device, and every plug-in, now has an unlimited number of modulator slots.

So add an LFO if you want. Add some math or randomization. There are envelopes and step sequencers and keytrackers and nifty X/Y controllers. Plug those in, change whatever you want. Do it anywhere.
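If you haven’t worked with modulator slots before, the underlying idea is simple: a control signal such as an LFO is evaluated over time and offsets a parameter around its base value. Here’s a bare-bones sketch of the concept in Python (the function names are mine, not Bitwig’s):

```python
import math

def lfo(rate_hz, t, depth=1.0):
    """A sine LFO: returns a value in [-depth, depth] at time t (seconds)."""
    return depth * math.sin(2 * math.pi * rate_hz * t)

def modulated_cutoff(base_hz, t, lfo_rate=0.5, depth_hz=400.0):
    """Filter cutoff = base value plus the LFO's contribution,
    the way a modulator slot offsets whatever parameter it targets."""
    return base_hz + lfo(lfo_rate, t, depth_hz)

# At t=0 the LFO is at zero, so the cutoff sits at its base value;
# a quarter cycle later (t=0.5 s at 0.5 Hz) it peaks 400 Hz higher.
print(modulated_cutoff(1000.0, 0.0))  # 1000.0
print(modulated_cutoff(1000.0, 0.5))  # 1400.0
```

In Bitwig, any number of such sources can target any parameter of any device or plug-in; the math above is just the one-modulator, one-target case.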

These are also all polyphonic. Combined with the expressive control provided by devices like ROLI’s, I think that could open up a new approach to sound design.

I won’t mince words: you can stop reading here, because I think modulators are a reason to give Bitwig a go.


Much of the time, this semi-modular capability is probably more appealing for quickly coming up with ideas than a full-modular environment would be. On the other hand, if this works, it can and should increase the appetite for more modular tools – if I could just change that step sequencer a little…

But I really think this illustrates the limitations of Max for Live, or running other environments as plug-ins. Being able to modulate in devices while you arrange, inside a DAW, natively, is a whole other experience. I can’t wait to try it, and I’ll be writing once I get some time with the beta.

Check them out here.

Hardware integration is the other functionality I think is really important, and really in tune with how many people want to work now. Again, it’s nice to see Bitwig add these features natively.

For MIDI, you get devices for both hardware and plug-ins:
Control Change (CC)
Program Change

And hardware devices:
Clock Out
MIDI timecode (MTC)

Plus, there are Control Voltage devices, for gate, continuous control, and simple direct signals:
CV Instrument
CV Out

You also get a bunch of MIDI/pattern devices – nothing so radical to users of other DAWs, like Cubase, but I think doubly welcome in the context of the other hardware features and rich modulation:

Multi-note (think chords)
Note harmonizer
Note length
Note echo
Note latch
Note velocity

Add those together with modulation, and many of you probably don’t need a full modular tool.

Remote Controls for any device take the best feature of Live’s Racks – macro mapping – and appear to make it more coherent. Whereas Live’s macros are device-specific and require setting up a rack, Bitwig’s Remote Controls can be saved with presets and are available everywhere. They also go well with the hardware integration features above.

The other reason I’m going to give this a second go is, frankly, fades/crossfades – which look elegant and work nicely not only in the arrangement view but in clips and the audio editor, too.


Like any maturing DAW, the rest of this is a sort of grab bag of lots of improvements to workflow – the various refinements that occur in parallel to multiple elements of the tool.

So you get a spectrum analyzer, and spectral tools through the internal toolset. There’s an expanded Polysynth, with expanded timbral tools like oscillator mix and filter waveshaping modes – and it combines with those new modulators. There’s VST3 support – a rarity outside Cubase.

If that didn’t excite you, zoom in on this shot of the Polysynth. The new visual language, the friendliness of the UI, the richness of modulation – this looks like promising stuff for synth lovers.


They’ve also significantly streamlined editing workflows and how tools, menus, and windows are integrated.

I expect some people will be disappointed that the revolution hasn’t arrived. And it means there’s a battle for Bitwig. The DAW market is crowded. Just being good – sometimes even being better – has never been enough.

But I think we may finally get a chance to really take advantage of the modular engine beneath Bitwig. And since a lot of us have tracks we want to make, the availability of modulators and the nice suite of arrangement and control tools here mean something you can use right now, today.

We’ll have more to say once we do our review. Happy modulating.

The post Bitwig Studio 2 lets you modulate and control like a bandit appeared first on CDM Create Digital Music.

Jeff Rosenstock: November 18, 2016 Villain BK

It is an objective fact that Jeff Rosenstock released one of the best albums of 2016, if not the best record of the year. WORRY is a work that exhibits all of the positive qualities of contemporary punk by one of the genre’s most consistently excellent and prolific performers over the last two decades. During that time, Rosenstock has been a virtually unlimited fount of quality music, having participated as band member, contributor or producer in something like thirty albums in the last fifteen years. But Rosenstock’s recent solo albums may be the highest quality output of his career. WORRY builds on the overwhelming success of his excellent 2015 album We Cool?, the fan response to which crashed the SideOneDummy Records website on the day of its release.

WORRY is an album that walks the listener through a set of modern problems in a series of songs that touch on multiple genres, all anchored by the indefatigable Jeff Rosenstock. Early in his career Rosenstock fronted a ska-punk band and then an anarchist-punk band, but in his late thirties he’s not beholden to any particular genre. The album touches on themes of loneliness, oppression, politics and music — all of which are pointless without our emotional investment in other people. Indeed, “…While You’re Alive,” the penultimate track on the record, concludes by declaring that ultimately “Love is Worry”. We may never be able to achieve a perfect existence (“Perfect Sound Whatever”), but if we appreciate the small events, we’ll be truly alive. It’s a simple but effective message from a performer who has spent his career confronting negativity and apathy with a singular work ethic and drive.

The Jeff Rosenstock Fall Tour came to a close on a cold November night in a converted warehouse in Brooklyn, a mere ten days after perhaps the most devastating political moment of our lives. The final night of the tour was filled with much love and appreciation for the bands and fans who had supported him throughout, but the night was also fueled by purposeful rage and resistance. Rosenstock dealt with these issues throughout the set but also focused on the healing power of music. The set drew primarily from We Cool? and WORRY material, including an epic segue of the last eleven songs from WORRY to close the set. The band encored with “Police on My Back”, which Rosenstock unfortunately introduced as a Clash song. Perhaps the only down moment of the night was that he failed to call attention to the composers of that song, The Equals — a 1960s racially integrated London band who experienced actual police oppression of immigrant culture. It’s a theme sadly still relevant nearly fifty years after the release of that song. Otherwise, the night was an inspiration, and Jeff Rosenstock continues to cement his status as perhaps the most important punk artist composing today. Moving forward through 2017 and beyond, we’re going to need artists like Rosenstock, whose life’s work is hell-bent on resisting the wave of hate and intolerance bubbling up from the worst aspects of our society.

Jeff Rosenstock is back on tour next month, including two local shows. The just-announced Antarctigo Vespucci (Rosenstock and Chris Farren) show at Trans-Pecos is February 11, and a Jeff Rosenstock date opening for Menzingers at Irving Plaza is on March 30. All dates HERE.

I recorded this set with the Neumann hypers mounted at the soundboard on the far right and mixed it with a board feed. The balance and mix took a while to get right, but I believe this is now an excellent recording that definitely captures the energy of that great night. Enjoy!

Download the Complete Show [MP3] / [FLAC]

Stream the Complete Show:

Jeff Rosenstock
Brooklyn NY

Digital Master Recording
Soundboard + Audience Matrix

Soundboard [Engineer: Ryan] + Neumann KM-150 Hypers > Sound Devices 744t > 2 x 24bit 48kHz wav files > Soundforge (post-production) > CDWave 1.95 (tracking) > TLH > flac (320 MP3 and tagging via Foobar)
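For anyone wondering what a soundboard + audience matrix involves: at its core it’s a time-aligned, weighted sum of the two sources. A rough numpy sketch of the idea (the delay and gain values here are placeholders, not the ones used on this recording):

```python
import numpy as np

def matrix_mix(board, audience, audience_delay, board_gain=0.6, aud_gain=0.4):
    """Mix a soundboard feed with audience mics.

    The audience source is shifted earlier by `audience_delay` samples
    to compensate for mic distance, then the two are summed with gains.
    """
    aligned = audience[audience_delay:]      # drop the leading delay
    n = min(len(board), len(aligned))        # mix only over the overlap
    mix = board_gain * board[:n] + aud_gain * aligned[:n]
    return np.clip(mix, -1.0, 1.0)           # guard against clipping

# Two fake mono sources, with the audience feed arriving 100 samples late.
board = np.sin(np.linspace(0, 2 * np.pi, 48000))
audience = np.concatenate([np.zeros(100), board])[:48000]
mix = matrix_mix(board, audience, audience_delay=100)
print(len(mix))  # 47900
```

In practice the alignment, gains, and EQ are dialed in by ear over many passes, which is exactly the “took a while to get right” part.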

Recorded and Produced by nyctaper

[Total Time 1:08:27]
01 Wave Goodnight To Me
02 I’m Serious I’m Sorry
03 Hey Allison
04 Pash Rash
05 [banter – on tour]
06 To Be a Ghost
07 Festival Song
08 [banter – donations]
09 Beers Again Alone
10 Nausea
11 [banter – Worry]
12 Pietro 60 Years Old
13 I Did Something Weird Last Night
14 Blast Damage Days
15 Bang on the Door
16 Rainbow
17 Planet Luxury
19 June 21st
20 The Fuzz
21 While You’re Alive
22 Perfect Sound Whatever
23 You In Weird Cities
24 [encore break]
25 Teenager
26 Police on My Back [The Equals]

SUPPORT Jeff Rosenstock: Bandcamp | Tumblr | Buy WORRY at SideOneDummy Records

Lake Mary: October 21, 2016 The Schoolhouse


Lake Mary, aka Chaz Prymek, of Denver, hasn’t been as much of a regular in our personal orbit as his East Coast-based peers, but this relatively rare treat of a show made up for it. The Schoolhouse, a low-key, living-room-style DIY venue in Bushwick, provided the perfect intimate setting for Prymek’s talents. The fingerstyle guitarist played a total of four compositions, including an as-yet-unnamed new number. The opener, “Everything the Light Touched,” with Prymek accompanied on accordion, proved the perfect, ethereal kickoff to a night that also included performances by Dylan Golden Aycock and Bridget St. John (those recordings coming soon). Prymek’s style has a welcoming, unforced quality that suggests he is more interested in communicating the emotions in his music than in showing off his ability to play guitar. It’s the type of experience that sticks with you, even if it’s your first time seeing him; he leaves a warmth behind. So if you find Lake Mary in your town, make sure to make the trip to see him.

I recorded this set with Schoeps MK4V microphones at the stage lip and a soundboard feed. The sound quality is excellent. Enjoy!

Thanks to Jeff Conklin of Avant Ghetto Presents for sponsoring and inviting us to the show. Check out Jeff’s WFMU radio show here.

Stream and download the complete show from our bandcamp page:

Lake Mary
The Schoolhouse
Brooklyn, NY

Soundboard + Schoeps MK4V (onstage)>KCY>Z-PFA>>Zoom F8>2x24bit/48kHz WAV>Adobe Audition CS 5.5 (fades, align, compression)>Izotope Ozone 5 (EQ, effects)>Audacity 2.0.3 (track, amplify, downsample, dither)>FLAC ( level 8 )

01 Everything the Light Touched>[new]
02 [banter]
03 Blue Spruce
04 [banter2]
05 Porch Music

Support Lake Mary at his Bandcamp page

Steve Coleman – Myths, Modes and Means

One of a trio of RCA CDs taken from altoist Steve Coleman’s week at Paris’s Hot Brass Club, three of the six performances on this set ("Finger of God," "Song of the Beginnings" and "Transits") are over 19 minutes long, and, with the exception of a rap on "The Initiate" and one during the last part of "Song of the Beginnings," this is a jazz set. On "Finger of God," Yassir Chadly’s spiritual chant is followed by some intense Coleman alto over a drone by the keyboards. Other selections feature passionate free funk with plenty of Coleman alto and Ralph Alessi trumpet solos; Vijay Iyer’s spooky keyboards and Miya Masaoka’s koto are assets in the ensembles. – by Scott Yanow, AMG

Artist: Steve Coleman and the Mystic Rhythm Society
Album: Myths, Modes And Means
Year: 1995
Label: RCA Records
Runtime: 75:40

1.  Mystic Dub 2:06
2.  Finger of God 24:37
3.  The Initiate 3:50
4.  Madras 3:43
5.  Song of the Beginnings 19:16
6.  Numerology 1:42
7.  Transits 20:23
All compositions by Steve Coleman

Steve Coleman (Alto Saxophone)
Ralph Alessi (Trumpet)
Andy Milne (Piano and Keyboards)
Reggie Washington (Bass)
Gene Lake (Drums)
Vijay Iyer (Keyboards)
Miya Masaoka (Koto)
Josh Jones (Percussion)
Ramesh Shotham (Percussion)
Yassir Chadly (Vocals and Percussion)