This is the first full length collaboration between Porya Hatami and Darren McClure. Both artists have shared a mutual respect for each other’s work over the years, and in the Spring of 2014 they began a project together that slowly and organically unfolded between Iran and Japan resulting in this album.
Rough edges collide with smooth surfaces creating tactile layers of ambience, through which sparse piano and synths rise to the fore. Field recordings and drones have been woven with melodies and harmonic tones. From these elements, five tracks explore the intersection between opposing textures, the interplay between different shades and the hidden moments of in-between spaces.
Let’s face it, modular is very popular and it has been for quite a while now. In the mobile world we’ve had a few modular synths and some really spectacular ones in that group. Now we add a new app to the list. S-Modular is a semi-modular synthesizer for your iPad. I’m not sure that I’ve seen an app describe itself quite that way before. S-Modular comes from the developer behind PRFORM, WubSynth and Synth Automata, although they have many more in their portfolio.
According to the developer:
S-Modular has been designed to have everything on one screen to make patching quick and easy. Drag from one jack port to another to make a connection, tap a plugged port to change the wire’s color instantly.
S-Modular has a unique and vintage sound quality, reminiscent of synthesizers from the 70s, warm and rich in character.
Here’s a quick view of the app’s features:
IAA (Inter-App Audio)
24dB/oct ladder filter
Resonant hi-pass and low-pass filter
4-step CV sequencer
4-track mixer
Audio and CV splitter
2x AHR envelope generators
I have to say that it looks really interesting and I’m quite tempted to go take a look and see what it’s like.
We’re gathering with top digital media artists this week – and you can tune in. Here’s a preview of their work, on the eve of Lunchmeat Festival, Prague.
Transmedia work and live visual performance exist at sometimes awkward intersections, caught between economies of the art world and music industry, between academia and festivals. They mix techniques and histories that aren’t always entirely compatible – or at least that can be demanding in combination. But the fields of media art and live visuals also represent areas of tremendous potential for innovation – where artists can explore immersive media, saturate senses, and apply buzzword-friendly technologies from AI to VR in experimental, surprising ways.
Our goal: bring together some artists for some deep discussion. And we have a great venue in which to do it. Prague’s Lunchmeat Festival has exploded on the international scene. Even sandwiched against Unsound Festival in Krakow and ADE in Amsterdam, it’s started to earn attention and big lineups, thanks to the intrepid work of an underground Czech collective. (The rest of the year, the Lunchmeat crew can usually be found doing installations and live visual club work of their own.)
Heck, even the fact that I’m stumbling over how to word this says something about the hybrid forms we’re describing, from live cinema to machine learning-infused art.
Since most of you won’t be in Prague this week, we’ll livestream and archive those conversations for the whole world.
To whet your appetite (hopefully), here’s a look at the cast of characters involved:
Katerina Blahutova [DVDJ NNS]
Let’s start for a change with the home Prague team. Katerina is a great example of a new generation of artists coming from outside conventional pathways as far as discipline. She graduated in architecture and urbanism, then shifted that interest (consciously or otherwise) to transforming whole club and performance environments. She’s been a VJ and curator with Lunchmeat, designed releases and videos for Genot Centre (as well as graphic design for bands), then went on to co-found LOLLAB collective and tour with MIDI LIDI.
Don’t miss her poppy, saturated, post-Internet surrealism – hyperreality with concoctions of slime and object, opaque luminosities and lushly-colored, fragmented textures. (I can rip off this bit of the program; I wrote it originally!)
Oh yeah, and she made this nice teaser loop for this week’s festivities:
Ignazio Mortellaro [Stroboscopic Artefacts, Roots in Heaven]
Turn that saturation knob all the way down again, and step into the world of Stroboscopic Artefacts. Ignazio is the visual imagination behind all of that label’s distinctive look, from album design (as beautifully exhibited) to videos. He’ll be talking to us about that ongoing collaboration.
In addition, Ignazio is doing live visuals for a fresh project. Allow me to quote myself:
Roots in Heaven, a label owner and accomplished solo artist hidden behind a mesh mask and feathers, joins visualist Ignazio Mortellaro to present a new live audiovisual work. This comes on the heels of this year’s Roots in Heaven debut record “Petites Madeleines” (a Proust reference), out on K7! offshoot Zehnin. The result is a journey into “concentrated sensory impression” in sound, light, and sensation.
Gregory Eden [Clark]
One of the goals Lunchmeat’s curators and I discussed was elevating the visibility of people working on visual materials. But unlike the ‘front man’/’front woman’ role of a lot of the music artists, the position some of these people fill goes beyond just sole artist to broader management and production. Maybe that’s even more reason to pay attention to who they are and how they work.
Greg Eden, who’s at Lunchmeat with Clark, is a great example. With a university physics degree, he went on to Warp, where he developed Clark and Boards of Canada. He’s now full-time managing Clark, and in addition to that … uh, full time job … manages Nathan Fake (with visuals by Flat-e) and Gajek and Finn McNicholas.
Visuals are often synonymous with just “something on a projector,” live cinema-style. But Clark’s show is full-on stage show. For the stage adaptation of Death Peak, the artist works with choreographer Melanie Lane, dancers Kiani Del Valle and Sophia Ndaba, and lights from London’s Flat-E. Think of it as rave theater. That makes Greg’s role doubly interesting, as someone has to pull all of this together:
Novi_sad [with Ryoichi Kurokawa, SIRENS]
The collaboration between Novi_sad and Ryoichi Kurokawa is one of the more important ones of the moment, its nervous, quivering economic data visualization a fitting expression of our anxious zeitgeist. Here’s a glimpse of that work:
Ryoichi Kurokawa and Novi_sad have worked together to produce an audiovisual show in five etudes that produces a dramaturgy of data, weaving the numbers of the economic downturn into poignant, emotional narrative. Data and sound quiver and dematerialize in eerie, mournful tableaus, re-imagining the sound works of Richard Chartier, CM von Hausswolff, Jacob Kirkegaard, Helge Sten, and Rebecca Foon. Novi_sad is self-taught composer Thanasis Kaproulias, himself coming not only from the nation that has borne the brunt of Europe’s crisis, but holding a degree in economics. As a perfect foil to his sonic landscapes, Japan’s Ryoichi Kurokawa has made a name in expressive, exposed digital minimalism.
Ben Frost is already interesting from a collaborative standpoint, having worked with media like dance (Chunky Move, Wayne McGregor). The collaboration with MFO brings him together with one of Europe’s leading visual practitioners; Marcel will join us to talk about that but hopefully about his work for the likes of Berlin Atonal Festival, as well.
MFO has also designed the visuals for the sensational Jlin, but Theresa Baumgartner is touring with it – as well as working on production for Boiler Room. So, we have Theresa joining us from something of the in-the-trenches production perspective, as well.
VJing and live cinema are rooted in conventional compositing and processing. Even when they’re digital, we’re talking techniques mostly developed decades ago.
For something further afield, Gene Kogan will take us on a journey into deep generative work, machine learning and the new aesthetics that become possible with it. As AI begins to infuse itself with digital media, artists are indeed grappling with its potential. Gene is offering talks and workshops both here at Lunchmeat and at Ableton Loop next month, so now is a great time to check in with him. A bit about him:
Gene Kogan is an artist and a programmer who is interested in generative systems, artificial intelligence, and software for creativity and self-expression. He is a collaborator within numerous open-source software projects, and leads workshops and demonstrations on topics at the intersection of code and art. Gene initiated and contributes to ml4a, a free book about machine learning for artists, activists, and citizen scientists. He regularly publishes video lectures, writings, and tutorials to facilitate a greater public understanding of the topic.
I’ll be reviewing the resources he has for artists soon, too, so do stay tuned.
Also coming from Prague, Gabriela has been guiding the INPUT program for Lunchmeat this fall, as well as being one of my collaborators (our installation is part of the exhibition this week). Its contents are mysterious so far, but a live AV work with Gabriela and Dné is also on tap.
It’s always good to welcome a new app to the mobile music community, especially one that claims to be exactly for making ‘short and simple’ pieces of music. sequencism calls itself a music sketchbook tool, which is a nice idea. According to its creator, the app is designed to provide a traditional user interface of track and piano roll editors, while taking advantage of the touch-screen capabilities of the iPad. It also includes other helper tools, such as chord helper tracks, which is a nice feature.
It’s also worth noting that, since the main goal of this app is to work as a sketchbook, it’s not recommended to use sequencism to produce complex songs or to play songs live on stage. Which is a fair warning, I guess.
• Simple track mixer and visual mixer (volume, pan)
• Multitrack editor: MIDI instrument tracks, chord helper tracks
• Support for SF2* and AUv3 instruments (*lightweight SF2 files only)
• AUv3 Effects
• Piano roll editor: add, move, copy notes within blocks
• MIDI keyboard, supports multiple scales, including user-defined scales
• Support for diatonic and chromatic chords
• Automatic transposing when changing chords or scales
• Supports Audiobus 3 and Ableton Link
• Support for MIDI keyboards, including Bluetooth keyboards
• Export to MIDI
I’m not sure that I’ve actually managed to talk about AC Sabre since I’ve been at CDM, so now is a good time to introduce it ahead of talking about what’s new in the app. AC Sabre is essentially a full performance instrument in your iPhone. You can use it to control almost anything. The app is a wireless MIDI instrument and motion controller for anyone who’s interested in electronic music production.
AC Sabre reads your movements with the built in gyroscope and accelerometer and translates them into musical actions. It lets you pluck invisible strings in the air while controlling up to 7 additional parameters, intuitively with your movements, via MIDI CC messages.
The latest version adds support for Audiobus 3, together with automatic Bluetooth advertising and zero-tap auto-connection to an iPad via the new free “AC Central” app; Audiobus state saving, both locally and to the iPad via AC Central; and a hands-free mode that now works when AC Sabre is in the background.
So all in all a fairly substantial update. In there is mention of AC Central, the iPad companion app that’s free on the app store. AC Central is a free auto-connection hub for the AC Sabre Motion MIDI Instrument. AC Central takes the hassle out of getting connected to your favourite synthesizers, samples and other sound machines. Plus, combined with Audiobus 3 (separate purchase, well worth it!), you can save and recall your complex multi-app setups and MIDI mappings with the touch of a button…
AC Sabre on the app store:
AC Central on the app store:
Finally, here’s a little video showing the AC Sabre working with the Moog Model 15 app.
Widowspeak just released their fourth album, Expect the Best. It gave me a little jolt of nostalgia to see what’s now by any definition a veteran band taking the stage in Brooklyn again, six years after I caught one of their earliest shows at Glasslands. It also gave me hope: Here’s a band that has thrived from day one on the very simple proposition of writing good songs and performing them well, and continues to be well-received.
This show at Rough Trade marked the end of the band’s recent tour in support of Expect the Best, and it had that triumphal homecoming feel about it, from the crowd of friends and showgoing regulars to the career-spanning setlist. After all, it’s not like Widowspeak need to hunt around Brooklyn for converts. Expect the Best feels like a darker, slightly less pop-driven record than 2015’s All Yours, as befits the times and singer Molly Hamilton’s recent decamping to her home in the Pacific Northwest. Likewise, the new record also has more of a “live” sound to it, a bit looser than the band’s recent efforts, and that also works to the advantage of these songs, particularly the set opener, “Right On,” which reminded us that despite the association of this band with more of a hazy, laid-back sound, they are more than capable of rocking out. As noted, the set also reached back to the band’s first album for three of their best-loved numbers, the unforgettable “In the Pines,” “Gun Shy” and Hamilton’s Pacific Northwest in-joke “Harsh Realm.” Hearing those songs played by a tightly-knit four-piece — a much fuller sound than the trio had back in 2011 — only reminded me again that some things do improve with time. Widowspeak are one of those bands that has not only stayed consistent, but keeps putting out great new music. That album title they chose this time says it all.
I recorded this set with a soundboard feed from house engineer Jeremy combined with Schoeps MK5 microphones. The sound quality is excellent. Enjoy!
Tracks [Total Time 1:13:35]
01 Right On
04 In the Pines
06 All Yours
08 Ballad of the Golden Hour
09 The Dream
10 Expect the Best
11 Gun Shy
14 The Swamps
15 Harsh Realm
16 Fly On the Wall
18 Coke Bottle Green
The original Telharmonium, a 200-ton, building-sized machine, produced some of the first electronic music. But now it’s a compact modern synth module, too.
The Make Noise/Tom Erbe Telharmonic is emblematic perhaps of how synthesizer history now folds in on itself. The module combines analog and digital control and synthesis, and pairs a well-known modular creator with one of recent years’ best known engineers and teachers of digital synthesis. Put those elements together, and you recreate… a giant electro-mechanical instrument patented in 1897, but in a form that has never existed before. That old progression from past to present to future seems so boring now. Instead, we have a wormhole of simultaneous possibilities. You know, in a good way.
But if turn-of-the-last-century pioneering instruments are being made into compact modules, we also need a different kind of history.
Kyiv, Ukraine-based composer/artist Oleg Shpudeiko – aka Heinali – recently wove together a history of the original Telharmonium and the new Telharmonic module. It’s such a lovely read that I felt it shouldn’t live only on The FaceBook. So here it is, preserved for posterity (and, if you like, further comments and thoughts).
Thanks to Oleg for this. -Ed.
Make Noise Telharmonic and electronic music history.
I’ve been considering writing about Make Noise/Tom Erbe Telharmonic for some time now. There’s an abundance of videos covering this module, of course. But regrettably, I couldn’t find any that go beyond technical demonstrations, in order to cover the module’s historical and ideological contexts (except for the original Make Noise demo videos, to a certain extent). In my opinion, those are the very things (apart from the hardware’s great sound) that make it a truly exceptional work of tech art. My text is by no means comprehensive, but I hope to accentuate some of my points of interest.
Telharmonic is a Eurorack synthesizer module, a product of collaboration between Make Noise and Tom Erbe. Make Noise is a modular synth company from the US founded by self-taught electronic musical instrument designer Tony Rolando. Tom Erbe is a computer music professor at the University of California San Diego (UCSD), and author of the famous Soundhack sound processing software for Mac and PC.
The module is described as a ‘Multi-Voice, Multi-Algorithm synthesizer module named for the music hall considered by some to be the location of the first electronic music concerts.’ So let’s start with the name, because it’s neither accidental nor just a simple homage.
Thaddeus Cahill’s Telharmonium, also known as the Dynamophone, could be described as the first synthesizer, or at least the first electronic musical instrument of major significance. Patented in 1897, the instrument was installed in Telharmonic Hall in New York in 1906. The hall was a special concert space with an auditorium on the first floor and a basement fully occupied by the instrument’s machinery. (The Mark I weighed 7 tons; the Mark II and III weighed 200 tons.)
Two of the tone rotors of the MkII Telharmonium in the basement of Telharmonic Hall circa 1906. Image from McClure’s Magazine, 1906.
Performances took place in the hall, with a performer sitting behind an organ-like keyboard manual. Music emanated from loudspeakers and was simultaneously transmitted via telephone wires to subscribers in the city.
Telharmonic Hall, New York City, circa 1906.
At its core, the Telharmonium employs additive synthesis, by means of dynamo-powered tone wheels — rotors with variably shaped alternators spun in a magnetic field, producing a set of sine waves. (The mechanism later became the basis of Hammond electric organs.)
One of the massive rotors that produced tones via electromagnetic field.
The bottom rotor would produce a fundamental frequency and each other rotor above it would produce a partial.
The Make Noise reinterpretation of this design subtly alludes to the tonewheel, as can be seen in Cahill’s original patents.
The Telharmonium’s original additive synthesis, with a sine wave fundamental and partials, is implemented in the Make Noise module’s H-voice. As in Cahill’s tonewheels, it’s possible to shape the tone by choosing sine wave partials. However, unlike the Telharmonium’s original 8 alternators, the digital H-voice features 24 partials for each of its three voices. Each partial can be brought forward by moving the Centroid knob and then locked in place (so it will continue to sound louder) by pressing the H-lock button.
In the original Telharmonium, partials were controlled by organ-like stops near the performer’s keyboard.
The Telharmonium’s organ-style keyboard manual and stops.
Three H-voices can be arranged in major, minor, or diminished chords, with inversions, a fifth, unison, or octave, and microtonal combinations in between. Another parameter that develops the idea further is the Flux knob. In its fully clockwise position, it focuses on a particular partial chosen by the Centroid knob. Moving counterclockwise, it brings forward more of the neighboring partials, until all of them are present in the fully counterclockwise position.
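The Centroid/Flux behavior described above can be sketched in a few lines of Python. This is only a toy model of centroid-weighted additive synthesis, not Make Noise’s actual DSP; the Gaussian weighting window and its scaling are my own assumptions.

```python
import numpy as np

SR = 48000  # sample rate in Hz

def additive_voice(f0, centroid, flux, n_partials=24, dur=1.0):
    """Toy additive voice: sum sine partials of f0, weighting the
    partials near 'centroid' (1..n_partials). 'flux' widens the
    weighting window: 0 isolates the centroid partial, 1 lets
    roughly all partials through."""
    t = np.arange(int(SR * dur)) / SR
    out = np.zeros_like(t)
    # narrow bell when flux is small, wide bell when it is large
    width = 0.25 + flux * n_partials
    for k in range(1, n_partials + 1):
        amp = np.exp(-((k - centroid) ** 2) / (2 * width ** 2))
        out += amp * np.sin(2 * np.pi * f0 * k * t)
    return out / np.max(np.abs(out))  # normalize to full scale
```

Turning the hypothetical `flux` argument down to zero leaves essentially a single sine partial, mirroring the module’s fully clockwise Flux position.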
Unfortunately, there are no recordings of the original performances, and I wonder how similar or different they may have sounded from the modern module. The Telharmonium’s tones were described as ‘clear and pure.’ One of the visitors noted the instrument’s ability to synthesize different timbres of musical instruments:
The first impression the music makes upon the listener is its singular difference from any music ever heard before: in the fullness, roundness, completeness, of its tones. And truly it is different and more perfect: but strangely enough, while it possesses ranges of tones all its own, it can be made to imitate closely other musical instruments: the flute, oboe, bugle, French horn and ‘cello best of all, the piano and violin not as yet so perfectly. Ask the players for fife music and they play Dixie for you with the squealing of the pipes deceptively perfect. Indeed, the performer upon this marvelous machine, as I shall explain later, can “build up” any sort of tone he wishes : he can produce the perfect note of the flute or the imperfect note of the piano — though the present machine is not adapted to the production of all sorts of music, as future and more extensive machines may be.
Let’s now move 55 years into the future. It’s 1961, and a young composer named James Tenney has produced his first computer music piece, ‘Analog #1 (Noise Study)’, at Bell Labs, using Max Mathews’ Music III sound synthesis software.
The composition was recorded on tape, but the sounds for it were produced on the computer. Noise Study is considered the first recorded ‘serious’ computer music, written by a classically trained composer. In a way, the composition shows John Cage’s influence, in its meditation on listening. Here’s what Tenney wrote about the experience:
My first composition using computer-generated sounds was the piece called Analog #1: Noise Study, completed in December, 1961. The idea for the Noise Study developed in the following way: For several months I had been driving to New York City in the evening, returning to the Labs the next morning by way of the heavily traveled Route 22 and the Holland Tunnel. This circuit was made as often as three times every week, and the drive was always an exhausting, nerve-wracking experience, fast, furious, and “noisy.” The sounds of the traffic — especially in the tunnel — were usually so loud and continuous that, for example, it was impossible to maintain a conversation with a companion. It is an experience that is familiar to many people, of course. But then something else happened, which is perhaps not so familiar to others.
One day I found myself listening to these sounds, instead of trying to ignore them as usual. The activity of listening, attentively, to “non-musical,” environmental sounds was not new to me — my esthetic attitude for several years had been that these were potential musical material — but in this particular context I had not yet done this. When I did, finally, begin to listen, the sounds of the traffic became so interesting that the trip was no longer a thing to be dreaded and gotten through as quickly as possible. From then on, I actually looked forward to it as a source of new perceptual insights.
Gradually, I learned to hear these sounds more acutely, to follow the evolution of single elements within the total sonorous “mass,” to feel, kinesthetically, the characteristic rhythmic articulations of the various elements in combination, etc. Then I began to try to analyze the sounds, aurally, to estimate what their physical properties might be — drawing upon what I already knew of acoustics and the correlation of the physical and the subjective attributes of sound. From this image, then, of traffic noises — and especially those heard in the tunnel, where the overall sonority is richer, denser, and the changes are mostly very gradual — I began to conceive a musical composition that not only used sound elements similar to these, but manifested similarly gradual changes in sonority. I thought also of the sound of the ocean surf — in many ways like tunnel traffic sounds — and some of the qualities of this did ultimately manifest themselves in the Noise Study. I did not want the quasi-periodic nature of the sea sounds in the piece however, and this was carefully avoided in the composition process. Instead, I wanted the aperiodic, “asymmetrical” kind of rhythmic flow that was characteristic of the traffic sounds.
The instrument he designed for the realisation of his composition could produce noise bands with a certain degree of control over their parameters, like, for example, increasing and decreasing their bandwidth. (If you’re interested in the process, you can read about it in detail.)
The Telharmonic N-voice works in a very similar way, employing two band-limited noise sidebands around a central frequency set by the Tonic and Degree knobs, with the Flux knob controlling the width of the sidebands, resulting in a fluttering, almost sine-like sound in the full clockwise position, and pure white noise in the fully counterclockwise position.
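A minimal sketch of the band-limited noise idea, assuming a simple FFT brick-wall filter (the module’s actual filtering is certainly different, and the `center`/`width` parameters here only stand in loosely for the Tonic/Degree and Flux controls):

```python
import numpy as np

SR = 48000  # sample rate in Hz

def noise_band(center, width, dur=1.0):
    """Toy band-limited noise voice: white noise filtered in the
    frequency domain to a band 'width' Hz wide around 'center'.
    A narrow band flutters almost like a sine; a very wide band
    approaches plain white noise."""
    n = int(SR * dur)
    rng = np.random.default_rng(0)  # seeded for reproducibility
    spectrum = np.fft.rfft(rng.standard_normal(n))
    freqs = np.fft.rfftfreq(n, 1.0 / SR)
    # keep only bins within width/2 of the center frequency
    mask = np.abs(freqs - center) <= width / 2
    out = np.fft.irfft(spectrum * mask, n)
    return out / np.max(np.abs(out))  # normalize
```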
Let’s now skip 23 years further, to the first commercially available phase modulation digital synthesizers. Basically, the phase distortion technique appeared as Casio’s way to circumvent Yamaha’s patented FM (frequency modulation) synthesis. Ed.: Think the Casio CZ series. Good stuff. FM, developed by John Chowning, was capable of extraordinary timbres, but phase distortion was controllable in a unique way by contrast, and produced its own signature sounds. For added confusion, you can technically consider FM ‘phase modulation.’ -PK
To simplify, phase distortion is very similar to FM, though instead of frequency, the phase of the signal is modulated.
The Telharmonic P-voice features 3 phase-locked sine-wave oscillators: two of them are modulators, one is a carrier. By moving the Centroid knob, the frequency ratio is changed. The Flux knob controls the depth of the modulation.
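As a rough illustration of the P-voice idea, here is a toy phase-modulation voice in Python. The two-modulator routing, the fixed 2:1 relationship between the modulators, and the `ratio`/`depth` parameters are my own simplifications standing in for the Centroid and Flux controls, not the module’s actual algorithm:

```python
import numpy as np

SR = 48000  # sample rate in Hz

def pm_voice(f0, ratio, depth, dur=1.0):
    """Toy phase-modulation voice: a sine carrier at f0 whose
    phase is offset by two phase-locked sine modulators running
    at multiples of f0. Unlike FM, the modulators add directly
    to the carrier's phase rather than to its frequency."""
    t = np.arange(int(SR * dur)) / SR
    mod1 = np.sin(2 * np.pi * f0 * ratio * t)
    mod2 = np.sin(2 * np.pi * f0 * ratio * 2 * t)
    # carrier phase = base phase + scaled sum of modulators
    return np.sin(2 * np.pi * f0 * t + depth * (mod1 + 0.5 * mod2))
```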
All three Telharmonic voices — H, P and N — can be used simultaneously in any combination, with Centroid and Flux controls affecting the spectral content of the voices, while Degree and Tonic controls affect the voice’s intervals and pitch.
Apart from the main mode of operation described above, Telharmonic has two hidden modes, switched by holding the H-lock button for several seconds.
The first one is the ASR emulation. ASR stands for analogue shift register, which is basically a more complex sample and hold circuit, or, in classical musical terms, a canon generator.
For example, a three-voice ASR has two inputs and three outputs. The first input takes the signal to be sampled; the second takes a clock pulse. On the first pulse, the input voltage is sampled, ‘memorized,’ and sent to the first output. On the second pulse, a new voltage is sampled and sent to the first output, while the previously held voltage moves to the second output. On the third pulse, the first voltage reaches the third output, the second voltage moves to the second output, and a new (third) voltage is sampled and sent through the first output, and so on. In this way, the process generates a simple canon, like ‘Row, Row, Row Your Boat.’
A simple canon, in score form.
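The clocked shift behavior described above is easy to model in code. Here is a minimal Python sketch of a three-stage ASR (the class and its interface are my own illustration, not any module’s firmware):

```python
class AnalogShiftRegister:
    """Three-stage analog shift register (ASR): each clock pulse
    samples the input and shifts the previously held values one
    stage down the line, so the outputs play the same sequence
    as a canon, each delayed by one clock."""

    def __init__(self, stages=3):
        self.values = [0.0] * stages

    def clock(self, sample):
        # newest voltage enters stage 1; older voltages shift on
        self.values = [sample] + self.values[:-1]
        return list(self.values)

asr = AnalogShiftRegister()
for v in [1.0, 2.0, 3.0, 4.0]:
    print(asr.clock(v))
# the input sequence reappears at output 2 one clock late,
# and at output 3 two clocks late: a canonic structure
```

Feeding a melody in as the `sample` values and quantizing each output to a scale reproduces the canon-generator behavior Serge’s catalog describes.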
While the exact origins of the first ASR are debatable, the first mass-produced, commercially available ASR module was designed by Serge Tcherepnin, creator of Serge synthesizers in the 70s. Here’s the description of ASR module from Serge’s catalog:
The ANALOG SHIFT REGISTER is a sequential sample and hold module for producing arabesque-like forms in musical space. Whenever pulsed, the previously held voltage is sent down the line to three consecutive outputs to produce the electrical equivalent of a canonic musical structure.
The Telharmonic digital ASR module features three channels, with P, H and N voices available simultaneously, as well as six quantization modes, selectable by Interval knob: suspended chord, major triad, minor triad, octaves and fifths, chromatic, octaves only.
The second Telharmonic hidden mode is the Spiratone, a Shepard tone generator. The Shepard tone, named after cognitive scientist Roger Shepard, is an auditory illusion of a tone that continually ascends or descends in pitch, yet never moves away or resolves. The mode was inspired by two particular compositions: Jean-Claude Risset’s “Computer Suite from Little Boy: Fall” of 1968, and “For Ann (rising)” of 1969 by the aforementioned James Tenney.
Pretty much every experience with Telharmonic could become an interaction with some of the most interesting moments and ideas of electronic music history. Cahill’s Telharmonium and additive synthesis, half-forgotten phase modulation synthesis of the 80s, Tenney’s first computer music, Serge’s ASR, Shepard’s tones … all of these are interconnected, all housed in a small, 14hp, 30mm module.
If you have any corrections or additions for this piece, please feel free to contact me.
Ed., indeed, we just delved into rich territory both for this module and sound design generally. We’ll of course revise here and do more on any of these topics, if desired. (I counted at least half a dozen new stories we could write just based on some of the subplots here!) -PK
Free apps can be great. Let me start with that, they can be great. Not all free apps are great, and sometimes they can hide expensive IAPs that you need to get even the most basic functionality. So I’m always slightly sceptical about a new app that I know very little about. So when I decided to download expressionPad I didn’t have high expectations at all. However, I was pleasantly surprised when I got the app open. To begin with, the app’s interface seemed way too small to work on an iPhone 6s in portrait mode, but when you switch to landscape it becomes much easier to use. As such, I’ll be giving the app a bit more room on my iPhone and a bit more time to explore it.
So you might be interested to know just what this app is all about. Here’s the app’s description:
expressionPad is a new kind of musical instrument. Continuous multi-touch support means you can control pitch bend, dynamics, and modulation with each touch, even as you change notes.
We’ve created a flexible and intuitive interface so you can focus on music. Watch our video and see for yourself!
Tune your expressionPad in fourths, fifths, or in guitar tunings such as standard or Open C.
expressionPad features a built-in polyphonic synthesizer/sampler so you can start making music right away. Experience the flexibility of polyphonic portamento — an electronic music first!
Connect to your music studio via Core MIDI, Apple’s inter-app music protocol. With expressionPad, you can play Reason, Ableton Live, Logic, and GarageBand more expressively than ever!
A clean interface, a flexible synthesizer/sampler, and lightning-fast MIDI response. Push your musical ideas to new limits with expressionPad.
It’s an article of faith that artists who are friends of Three Lobed Recordings always bring their best for the label’s annual Hopscotch day show showcase, co-sponsored with Durham’s WXDU. Nathan Bowles, a multi-year veteran of the event, outdid himself this year, choosing to use this platform to debut his new trio with Casey Toll on double bass and Rex McMurry on drums. Although this was the trio’s very first show, you wouldn’t necessarily have known it, as Bowles and the new band added new heft to what were originally his solo songs (“Blank Range”) as well as coming out strong with new ones (one untitled, one known as “Freshfaced”). Bowles’ current music has already redefined and expanded the notion of modern banjo music, but the trio promises to push things even further, offering the opportunity to head in directions he hasn’t yet contemplated in other groups in which he participates, such as the Black Twig Pickers. For one, these songs feel a bit more like rock songs, with McMurry’s drumming pushing a faster tempo and an overall heavier direction, particularly on the new song, “Freshfaced.” Seeing Nathan Bowles is always a treat — and it’s something we do as often as we can — and this trio only adds to the anticipation of what he’s up to next. Bowles has a few shows around the southern U.S. this month, so go have a listen if you can.
I recorded this set with Evan Lamb’s house mix, plus an additional soundboard feed of the banjo, together with Schoeps MK4V microphones onstage, and AKG 460 cardioid mics hung over the audience. The sound quality is excellent. Enjoy!
Nathan Bowles Trio
Three Lobed / WXDU Hopscotch Afternoon Jamboree
Raleigh, NC USA
Recorded and produced by acidjack
Hosted at nyctaper.com