Let’s face it, modular is very popular, and it has been for quite a while now. In the mobile world we’ve had a few modular synths, including some really spectacular ones. Now we add a new app to the list. S-Modular is a semi-modular synthesizer for your iPad. I’m not sure I’ve seen an app describe itself quite that way before. S-Modular comes from the developer behind PRFORM, WubSynth and Synth Automata, though their portfolio includes many more.
According to the developer:
S-Modular has been designed to have everything on one screen to make patching quick and easy. Drag from one jack port to another to make a connection, tap a plugged port to change the wires color instantly.
S-Modular has a unique and vintage sound quality, reminiscent of synthesizers from the 70s, warm and rich in character.
Here’s a quick view of the app’s features:
IAA (Inter-App Audio)
24dB/oct ladder filter
Resonant hi-pass and low-pass filter
4-step CV sequencer
4-track mixer
Audio and CV splitter
2x AHR envelope generators
I have to say that it looks really interesting and I’m quite tempted to go take a look and see what it’s like.
We’re gathering with top digital media artists this week – and you can tune in. Here’s a preview of their work, on the eve of Lunchmeat Festival, Prague.
Transmedia work and live visual performance exist at sometimes awkward intersections, caught between economies of the art world and music industry, between academia and festivals. They mix techniques and histories that aren’t always entirely compatible – or at least that can be demanding in combination. But the fields of media art and live visuals also represent areas of tremendous potential for innovation – where artists can explore immersive media, saturate senses, and apply buzzword-friendly technologies from AI to VR in experimental, surprising ways.
Our goal: bring together some artists for some deep discussion. And we have a great venue in which to do it. Prague’s Lunchmeat Festival has exploded on the international scene. Even sandwiched against Unsound Festival in Krakow and ADE in Amsterdam, it’s started to earn attention and big lineups, thanks to the intrepid work of an underground Czech collective. (The rest of the year, the Lunchmeat crew can usually be found doing installations and live visual club work of their own.)
Heck, even the fact that I’m stumbling over how to word this says something about the hybrid forms we’re describing, from live cinema to machine learning-infused art.
Since most of you won’t be in Prague this week, we’ll livestream and archive those conversations for the whole world.
To whet your appetite (hopefully), here’s a look at the cast of characters involved:
Katerina Blahutova [DVDJ NNS]
Let’s start for a change with the home Prague team. Katerina is a great example of a new generation of artists coming from outside conventional pathways as far as discipline. She graduated in architecture and urbanism, then shifted that interest (consciously or otherwise) to transforming whole club and performance environments. She’s been a VJ and curator with Lunchmeat, designed releases and videos for Genot Centre (as well as graphic design for bands), then went on to co-found LOLLAB collective and tour with MIDI LIDI.
Don’t miss her poppy, saturated, post-Internet surrealism – hyperreality with concoctions of slime and object, opaque luminosities and lushly-colored, fragmented textures. (I can rip off this bit of the program; I wrote it originally!)
Oh yeah, and she made this nice teaser loop for this week’s festivities:
Ignazio Mortellaro [Stroboscopic Artefacts, Roots in Heaven]
Turn that saturation knob all the way down again, and step into the world of Stroboscopic Artefacts. Ignazio is the visual imagination behind all of that label’s distinctive look, from album design (as beautifully exhibited) to videos. He’ll be talking to us about that ongoing collaboration.
In addition, Ignazio is doing live visuals for a fresh project. Allow me to quote myself:
Roots in Heaven, a label owner and accomplished solo artist hidden behind a mesh mask and feathers, joins visualist Ignazio Mortellaro to present a new live audiovisual work. This comes on the heels of this year’s Roots in Heaven debut record “Petites Madeleines” (a Proust reference), out on K7! offshoot Zehnin. The result is a journey into “concentrated sensory impression” in sound, light, and sensation.
Gregory Eden [Clark]
One of the goals Lunchmeat’s curators and I discussed was elevating the visibility of people working on visual materials. But unlike the ‘front man’/’front woman’ role of a lot of the music artists, the position some of these people fill goes beyond just sole artist to broader management and production. Maybe that’s even more reason to pay attention to who they are and how they work.
Greg Eden, who’s at Lunchmeat with Clark, is a great example. With a university physics degree, he went on to Warp, where he developed Clark and Boards of Canada. He’s now full-time managing Clark, and in addition to that … uh, full time job … manages Nathan Fake (with visuals by Flat-e) and Gajek and Finn McNicholas.
Visuals are often synonymous with just “something on a projector,” live cinema-style. But Clark’s show is full-on stage show. For the stage adaptation of Death Peak, the artist works with choreographer Melanie Lane, dancers Kiani Del Valle and Sophia Ndaba, and lights from London’s Flat-E. Think of it as rave theater. That makes Greg’s role doubly interesting, as someone has to pull all of this together:
Novi_sad [with Ryoichi Kurokawa, SIRENS]
The collaboration between Novi_sad and Ryoichi Kurokawa is one of the more important ones of the moment, its nervous, quivering economic data visualization a fitting expression of our anxious zeitgeist. Here’s a glimpse of that work:
Ryoichi Kurokawa and Novi_sad have worked together to produce an audiovisual show in five etudes that produces a dramaturgy of data, weaving the numbers of the economic downturn into poignant, emotional narrative. Data and sound quiver and dematerialize in eerie, mournful tableaus, re-imagining the sound works of Richard Chartier, CM von Hausswolff, Jacob Kirkegaard, Helge Sten, and Rebecca Foon. Novi_sad is self-taught composer Thanasis Kaproulias, himself coming not only from the nation that has borne the brunt of Europe’s crisis, but holding a degree in economics. As a perfect foil to his sonic landscapes, Japan’s Ryoichi Kurokawa has made a name in expressive, exposed digital minimalism.
Ben Frost is already interesting from a collaborative standpoint, having worked with media like dance (Chunky Move, Wayne McGregor). The collaboration with MFO brings him together with one of Europe’s leading visual practitioners; Marcel will join us to talk about that but hopefully about his work for the likes of Berlin Atonal Festival, as well.
MFO has also designed the visuals for the sensational Jlin, but Theresa Baumgartner is touring with it – as well as working on production for Boiler Room. So, we have Theresa joining us from something of the in-the-trenches production perspective, as well.
VJing and live cinema are rooted in conventional compositing and processing. Even when they’re digital, we’re talking techniques mostly developed decades ago.
For something further afield, Gene Kogan will take us on a journey into deep generative work, machine learning and the new aesthetics that become possible with it. As AI begins to infuse itself with digital media, artists are indeed grappling with its potential. Gene is offering talks and workshops both here at Lunchmeat and at Ableton Loop next month, so now is a great time to check in with him. A bit about him:
Gene Kogan is an artist and a programmer who is interested in generative systems, artificial intelligence, and software for creativity and self-expression. He is a collaborator within numerous open-source software projects, and leads workshops and demonstrations on topics at the intersection of code and art. Gene initiated and contributes to ml4a, a free book about machine learning for artists, activists, and citizen scientists. He regularly publishes video lectures, writings, and tutorials to facilitate a greater public understanding of the topic.
I’ll be reviewing the resources he has for artists soon, too, so do stay tuned.
Also coming from Prague, Gabriela has been guiding the INPUT program for Lunchmeat this fall, as well as being one of my collaborators (our installation is part of the exhibition this week). Its contents are mysterious so far, but a live AV work with Gabriela and Dné is also on tap.
It’s always good to welcome a new app to the mobile music community, especially one that claims to be exactly for making ‘short and simple’ pieces of music. sequencism calls itself a music sketchbook tool, which is a nice idea. According to its creator, the app is designed to provide a traditional user interface of track and piano roll editors, while taking advantage of the touch-screen capabilities of the iPad. It also includes other helper tools, such as chord helper tracks, which is a nice feature.
It’s also worth noting that apparently the main goal of this app is to work as a sketchbook; it is not recommended to use sequencism to produce complex songs or to play songs live on stage. Which is fair warning, I guess.
• Simple track mixer and visual mixer (volume, pan)
• Multitrack editor: MIDI instrument tracks, chord helper tracks
• Support for SF2* and AUv3 instruments (*lightweight SF2 files only)
• AUv3 Effects
• Piano roll editor: add, move, copy notes within blocks
• MIDI keyboard, supports multiple scales, including user-defined scales
• Support for diatonic and chromatic chords
• Automatic transposing when changing chords or scales
• Supports Audiobus 3 and Ableton Link
• Support for MIDI keyboards, including Bluetooth keyboards
• Export to MIDI
I’m not sure that I’ve actually managed to talk about AC Sabre since I’ve been at CDM, so now is a good time to introduce it ahead of talking about what’s new in the app. AC Sabre is essentially a full performance instrument in your iPhone. You can use it to control almost anything. The app is a wireless MIDI instrument and motion controller for anyone who’s interested in electronic music production.
AC Sabre reads your movements with the built-in gyroscope and accelerometer and translates them into musical actions. It lets you pluck invisible strings in the air while intuitively controlling up to 7 additional parameters with your movements, via MIDI CC messages.
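To make the idea concrete, here's a tiny hypothetical sketch of how a tilt reading might become a MIDI Control Change message. Both the function and the scaling are my own illustration, not AC Sabre's actual implementation:

```python
def motion_to_cc(tilt, cc_number=1):
    """Map a tilt reading (-1.0 .. 1.0, as a gyro/accelerometer might
    report) to a 3-byte MIDI Control Change message on channel 1.
    A hypothetical illustration of motion-to-CC mapping."""
    value = int(round((tilt + 1.0) / 2.0 * 127))
    value = max(0, min(127, value))           # clamp to MIDI range
    return bytes([0xB0, cc_number, value])    # status, controller, value
```

Something like this, run continuously against the sensor stream, is all it takes to turn motion into a stream of CC messages any synth can respond to.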
The latest version adds support for Audiobus 3, along with automatic Bluetooth advertising and zero-tap auto-connection to iPad via the new free “AC Central” app. There’s Audiobus state saving both locally and to iPad via AC Central, and hands-free mode now works when AC Sabre is in the background.
So all in all a fairly substantial update. In there is mention of AC Central, the iPad companion app that’s free on the app store. AC Central is a free auto-connection hub for the AC Sabre Motion MIDI Instrument. AC Central takes the hassle out of getting connected to your favourite synthesizers, samples and other sound machines. Plus, combined with Audiobus 3 (separate purchase, well worth it!), you can save and recall your complex multi-app setups and MIDI mappings with the touch of a button…
AC Sabre on the app store:
AC Central on the app store:
Finally, here’s a little video showing the AC Sabre working with the Moog Model 15 app.
The 200-ton, building-sized original Telharmonium produced some of the first electronic music. But now it’s a compact modern synth module, too.
The Make Noise/Tom Erbe Telharmonic is emblematic perhaps of how synthesizer history now folds in on itself. The module combines analog and digital control and synthesis, and pairs a well-known modular creator with one of recent years’ best known engineers and teachers of digital synthesis. Put those elements together, and you recreate… a giant electro-mechanical instrument patented in 1897, but in a form that has never existed before. That old progression from past to present to future seems so boring now. Instead, we have a wormhole of simultaneous possibilities. You know, in a good way.
But if turn-of-the-last-century pioneering instruments are being made into compact modules, we also need a different kind of history.
Kyiv, Ukraine-based composer/artist Oleg Shpudeiko – aka Heinali – recently wove together a history of the original Telharmonium and the new Telharmonic module. It’s such a lovely read that I felt it shouldn’t live only on The FaceBook. So here it is, preserved for posterity (and, if you like, further comments and thoughts).
Thanks to Oleg for this. -Ed.
Make Noise Telharmonic and electronic music history.
I’ve been considering writing about Make Noise/Tom Erbe Telharmonic for some time now. There’s an abundance of videos covering this module, of course. But regrettably, I couldn’t find any that go beyond technical demonstrations, in order to cover the module’s historical and ideological contexts (except for the original Make Noise demo videos, to a certain extent). In my opinion, those are the very things (apart from the hardware’s great sound) that make it a truly exceptional work of tech art. My text is by no means comprehensive, but I hope to accentuate some of my points of interest.
Telharmonic is a Eurorack synthesizer module, a product of collaboration between Make Noise and Tom Erbe. Make Noise is the modular synth company from the US founded by self-taught electronic musical instrument designer Tony Rolando. Tom Erbe is a University of California San Diego (UCSD) computer music professor, and author of the famous SoundHack sound processing software for Mac and PC.
The module is described as a ‘Multi-Voice, Multi-Algorithm synthesizer module named for the music hall considered by some to be the location of the first electronic music concerts.’ So let’s start with the name, because it’s neither accidental nor just a simple homage.
Thaddeus Cahill’s Telharmonium, also known as the Dynamophone, could be described as the first synthesizer, or at least the first electronic music instrument of real significance. Patented in 1897, the instrument was installed in Telharmonic Hall in New York in 1906. The hall was a special concert space with an auditorium on the first floor and a basement fully occupied by the instrument’s machinery. (The Mark I weighed 7 tons; the Mark II and III weighed 200 tons.)
Two of the tone rotors of the MkII Telharmonium in the basement of Telharmonic Hall circa 1906. Image from McClure’s Magazine, 1906.
Performances took place in the hall, with a performer sitting behind an organ-like keyboard manual. Music emanated from loudspeakers and was simultaneously transmitted via telephone wires to subscribers in the city.
Telharmonic Hall, New York City, circa 1906.
At its core, the Telharmonium employs additive synthesis, by means of dynamo-powered tone wheels — rotors with variably shaped alternators spun in a magnetic field, producing a set of sine waves. (The mechanism later became the basis of Hammond electric organs.)
One of the massive rotors that produced tones via electromagnetic field.
The bottom rotor would produce a fundamental frequency and each other rotor above it would produce a partial.
The Make Noise reinterpretation of this design subtly alludes to the tonewheel, as can be seen in Cahill’s original patents.
The Telharmonium’s original additive synthesis, with sine wave fundamental and partials, is implemented in the Make Noise module’s H-voice. As in Cahill’s tonewheels, it’s possible to shape the tone by choosing sine wave partials. However, unlike the Telharmonium’s original 8 alternators, the digital H-voice features 24 partials for each of its three voices. Each partial can be brought forward by moving the Centroid knob and then locked in place (so it will continue to sound louder) by pressing the H-lock button.
In the original Telharmonium, partials were controlled by organ-like stops near the performer’s keyboard.
The Telharmonium’s organ-style keyboard manual and stops.
Three H-voices can be arranged in major, minor, or diminished chords, with inversions, a fifth, unison, or octave, and microtonal combinations in between. Another parameter that develops the idea further is the Flux knob. In its fully clockwise position, it focuses on the particular partial chosen by the Centroid knob. Moving counterclockwise brings in more of the neighboring partials, until all of them are present in the fully counterclockwise position.
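As a rough illustration of the concept (and only that, since this is not Make Noise's DSP), here's how a 24-partial additive voice with a centroid-style emphasis and a flux-style spread might be sketched in Python. The function name and the exact weighting curve are my own assumptions:

```python
import numpy as np

SR = 48000  # sample rate

def h_voice(f0, centroid, flux, n_partials=24, dur=1.0):
    """Toy additive voice: 24 sine partials whose amplitudes are
    emphasized around a 'centroid' partial, with a 'flux'-controlled
    spread. Illustrative sketch only, not the module's algorithm."""
    t = np.arange(int(SR * dur)) / SR
    out = np.zeros_like(t)
    for k in range(1, n_partials + 1):
        # Gaussian emphasis around the chosen partial: narrow when
        # flux is high (focused), wide when flux is low (all partials).
        width = 0.5 + (1.0 - flux) * n_partials
        amp = np.exp(-((k - centroid) ** 2) / (2 * width ** 2)) / k
        out += amp * np.sin(2 * np.pi * f0 * k * t)
    return out / np.max(np.abs(out))
```

Turning `flux` up narrows the emphasis to the centroid partial; turning it down flattens the weighting so every partial contributes, which mirrors the knob behavior described above.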
Unfortunately, there are no recordings of the original performances, and I wonder how similar or different they may have sounded from the modern module. The Telharmonium’s tones were described as ‘clear and pure.’ One of the visitors noted the instrument’s ability to synthesize different timbres of musical instruments:
The first impression the music makes upon the listener is its singular difference from any music ever heard before: in the fullness, roundness, completeness, of its tones. And truly it is different and more perfect: but strangely enough, while it possesses ranges of tones all its own, it can be made to imitate closely other musical instruments: the flute, oboe, bugle, French horn and ‘cello best of all, the piano and violin not as yet so perfectly. Ask the players for fife music and they play Dixie for you with the squealing of the pipes deceptively perfect. Indeed, the performer upon this marvelous machine, as I shall explain later, can “build up” any sort of tone he wishes : he can produce the perfect note of the flute or the imperfect note of the piano — though the present machine is not adapted to the production of all sorts of music, as future and more extensive machines may be.
Let’s now move 55 years into the future. It’s 1961, and a young composer named James Tenney produced his first computer music piece, ‘Analog #1 (Noise Study),’ inside Bell Labs, using Max Mathews’ Music III sound synthesis software.
The composition was recorded on tape, but the sounds for it were produced on the computer. Noise Study is considered the first recorded ‘serious’ computer music, written by a classically trained composer. In a way, the composition shows John Cage’s influence, in its meditation on listening. Here’s what Tenney wrote about the experience:
My first composition using computer-generated sounds was the piece called Analog #1: Noise Study, completed in December, 1961. The idea for the Noise Study developed in the following way: For several months I had been driving to New York City in the evening, returning to the Labs the next morning by way of the heavily traveled Route 22 and the Holland Tunnel. This circuit was made as often as three times every week, and the drive was always an exhausting, nerve-wracking experience, fast, furious, and “noisy.” The sounds of the traffic — especially in the tunnel — were usually so loud and continuous that, for example, it was impossible to maintain a conversation with a companion. It is an experience that is familiar to many people, of course. But then something else happened, which is perhaps not so familiar to others.
One day I found myself listening to these sounds, instead of trying to ignore them as usual. The activity of listening, attentively, to “non-musical,” environmental sounds was not new to me — my esthetic attitude for several years had been that these were potential musical material — but in this particular context I had not yet done this. When I did, finally, begin to listen, the sounds of the traffic became so interesting that the trip was no longer a thing to be dreaded and gotten through as quickly as possible. From then on, I actually looked forward to it as a source of new perceptual insights.
Gradually, I learned to hear these sounds more acutely, to follow the evolution of single elements within the total sonorous “mass,” to feel, kinesthetically, the characteristic rhythmic articulations of the various elements in combination, etc. Then I began to try to analyze the sounds, aurally, to estimate what their physical properties might be — drawing upon what I already knew of acoustics and the correlation of the physical and the subjective attributes of sound. From this image, then, of traffic noises — and especially those heard in the tunnel, where the overall sonority is richer, denser, and the changes are mostly very gradual — I began to conceive a musical composition that not only used sound elements similar to these, but manifested similarly gradual changes in sonority. I thought also of the sound of the ocean surf — in many ways like tunnel traffic sounds — and some of the qualities of this did ultimately manifest themselves in the Noise Study. I did not want the quasi-periodic nature of the sea sounds in the piece however, and this was carefully avoided in the composition process. Instead, I wanted the aperiodic, “asymmetrical” kind of rhythmic flow that was characteristic of the traffic sounds.
The instrument he designed for the realisation of his composition could produce noise bands with a certain degree of control over their parameters, like, for example, increasing and decreasing their bandwidth. (If you’re interested in the process, you can read about it in detail.)
The Telharmonic N-voice works in a very similar way, placing two band-limited noise sidebands around a central frequency set by the Tonic and Degree knobs. The Flux knob controls the width of the sidebands, producing a fluttering, almost sine-like sound in the fully clockwise position and pure white noise in the fully counterclockwise position.
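A crude frequency-domain sketch of that idea follows, with the caveat that it illustrates band-limited noise around a center frequency in general, not the module's actual algorithm:

```python
import numpy as np

SR = 48000  # sample rate

def n_voice(center, flux, dur=1.0, seed=0):
    """Toy take on band-limited noise: filter white noise down to a
    band around a center frequency. flux near 1 gives a narrow,
    almost sine-like band; flux near 0 leaves the noise broadband.
    Done crudely with a Gaussian mask in the frequency domain."""
    rng = np.random.default_rng(seed)
    n = int(SR * dur)
    noise = rng.standard_normal(n)
    spectrum = np.fft.rfft(noise)
    freqs = np.fft.rfftfreq(n, 1 / SR)
    bandwidth = (1.0 - flux) * SR / 2 + 1.0   # Hz: shrinks as flux rises
    mask = np.exp(-((freqs - center) ** 2) / (2 * bandwidth ** 2))
    out = np.fft.irfft(spectrum * mask, n)
    return out / np.max(np.abs(out))
```

With a very high `flux` the band collapses around the center frequency and the result flutters like an unstable sine, which matches the behavior described above.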
Let’s now skip 23 years further, to the first commercially available phase modulation digital synthesizers. Basically, phase distortion technique appeared as Casio’s way to circumvent Yamaha’s patented FM (frequency modulation) synthesis. Ed.: Think the Casio CZ series. Good stuff. FM, developed by John Chowning, was capable of extraordinary timbres, but phase distortion was controllable in a unique way by contrast, and produced its own signature sounds. For added confusion, you can technically consider FM ‘phase modulation.’ -PK
To simplify, phase distortion is very similar to FM, though instead of frequency, the phase of the signal is modulated.
The Telharmonic P-voice features three phase-locked sine-wave oscillators: two are modulators, one is a carrier. Moving the Centroid knob changes the frequency ratio, while the Flux knob controls the depth of the modulation.
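Here's a minimal sketch of the phase modulation technique itself: a carrier whose phase is modulated by two phase-locked sines. The parameter mapping here is a guess for illustration, not the Telharmonic's code:

```python
import numpy as np

SR = 48000  # sample rate

def p_voice(f0, ratio, depth, dur=1.0):
    """Toy phase modulation: one sine carrier whose phase is offset by
    two phase-locked sine modulators at integer-related frequencies.
    'ratio' stands in for a Centroid-like control, 'depth' for a
    Flux-like one. A sketch of the technique, not the module's DSP."""
    t = np.arange(int(SR * dur)) / SR
    mod1 = np.sin(2 * np.pi * f0 * ratio * t)
    mod2 = np.sin(2 * np.pi * f0 * ratio * 2 * t)
    # the modulators add to the carrier's phase, not its frequency
    return np.sin(2 * np.pi * f0 * t + depth * (mod1 + mod2))
```

Note the key difference from FM spelled out in the comment: the modulator is added inside the carrier's phase argument rather than to its frequency, which is exactly why the two techniques are close cousins.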
All three Telharmonic voices — H, P and N — can be used simultaneously in any combination, with Centroid and Flux controls affecting the spectral content of the voices, while Degree and Tonic controls affect the voice’s intervals and pitch.
Apart from the main mode of operation described above, Telharmonic has two hidden modes, switched by holding the H-lock button for several seconds.
The first one is the ASR emulation. ASR stands for analogue shift register, which is basically a more complex sample and hold circuit, or, in classical musical terms, a canon generator.
For example, a three-voice ASR has two inputs and three outputs. The first input takes a signal, which is sampled and ‘memorized’ every time a pulse arrives at the second input (the clock). On the first pulse, the memorized voltage appears at the first output. On the second pulse, that voltage moves to the second output while a new voltage is sampled and sent to the first. On the third pulse, the first voltage reaches the third output, the second shifts down, and a third voltage enters at the first output, and so on. In this way, the process generates a simple canon, like ‘Row, Row, Row Your Boat.’
A simple canon, in score form.
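The shifting behavior described above is easy to model. Here's a toy three-stage shift register in Python (a conceptual sketch, not a circuit model):

```python
def make_asr(stages=3):
    """Toy analog shift register: each clock pulse samples the input
    and shifts the previously held values down the line, so each
    output repeats its neighbor one clock later, canon-style."""
    held = [0.0] * stages
    def clock(sample):
        held.insert(0, sample)   # new value enters stage 1
        held.pop()               # oldest value falls off the end
        return tuple(held)       # one voltage per output
    return clock

# Feeding in a rising melody shows the canon forming:
asr = make_asr()
for v in [1.0, 2.0, 3.0, 4.0]:
    print(asr(v))
# -> (1.0, 0.0, 0.0)
#    (2.0, 1.0, 0.0)
#    (3.0, 2.0, 1.0)
#    (4.0, 3.0, 2.0)
```

Reading down any column gives the same melody delayed by one clock per stage, which is exactly the canonic structure Serge's catalog describes below.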
While the exact origins of the first ASR are debatable, the first mass-produced, commercially available ASR module was designed by Serge Tcherepnin, creator of Serge synthesizers in the 70s. Here’s the description of ASR module from Serge’s catalog:
The ANALOG SHIFT REGISTER is a sequential sample and hold module for producing arabesque-like forms in musical space. Whenever pulsed, the previously held voltage is sent down the line to three consecutive outputs to produce the electrical equivalent of a canonic musical structure.
The Telharmonic digital ASR module features three channels, with P, H and N voices available simultaneously, as well as six quantization modes, selectable by Interval knob: suspended chord, major triad, minor triad, octaves and fifths, chromatic, octaves only.
The second Telharmonic hidden mode is the Spiratone, a Shepard tone generator. The Shepard tone, named after cognitive scientist Roger Shepard, is an auditory illusion of a tone that continually ascends or descends in pitch yet never moves away or resolves. The Spiratone mode was inspired by two particular compositions: Jean-Claude Risset’s “Computer Suite from Little Boy: Fall” of 1968 and “For Ann (rising)” of 1969, by the aforementioned James Tenney.
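The illusion itself can be sketched in a few lines: several components an octave apart all rise together, while a bell-shaped loudness curve fades each one in at the bottom of the range and out at the top. A toy version follows, with the caveat that it's an illustration of the Shepard glissando, not the Spiratone's code:

```python
import numpy as np

SR = 48000  # sample rate

def shepard(dur=5.0, octaves=6, base=55.0, rate=0.2):
    """Toy Shepard glissando: sine components an octave apart climb
    together; a bell curve over octave position keeps components
    quiet at both extremes, so the ascent seems endless."""
    t = np.arange(int(SR * dur)) / SR
    out = np.zeros_like(t)
    pos = (t * rate) % 1.0                     # fractional octave climb
    for i in range(octaves):
        octv = (i + pos) % octaves             # each component climbs, then wraps
        freq = base * 2.0 ** octv
        phase = 2 * np.pi * np.cumsum(freq) / SR   # integrate frequency
        amp = np.sin(np.pi * octv / octaves) ** 2  # fade in/out at extremes
        out += amp * np.sin(phase)
    return out / np.max(np.abs(out))
```

Because a component fades out at the top exactly as another fades in at the bottom, the ear never registers a reset, only a perpetual rise.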
Pretty much every experience with Telharmonic could become an interaction with some of the most interesting moments and ideas of electronic music history. Cahill’s Telharmonium and additive synthesis, half-forgotten phase modulation synthesis of the 80s, Tenney’s first computer music, Serge’s ASR, Shepard’s tones … all of these are interconnected, all housed in a small, 14hp, 30mm module.
If you have any corrections or additions for this piece, please feel free to contact me.
Ed., indeed, we just delved into rich territory both for this module and sound design generally. We’ll of course revise here and do more on any of these topics, if desired. (I counted at least half a dozen new stories we could write just based on some of the subplots here!) -PK
Free apps can be great. Let me start with that: they can be great. But not all free apps are great, and sometimes they can hide expensive IAPs that you need to get even the most basic functionality. So I’m always slightly sceptical about a new app that I know very little about, and when I decided to download expressionPad I didn’t have high expectations at all. However, I was pleasantly surprised when I got the app open. To begin with, the app’s interface seemed way too small to work on an iPhone 6s in portrait mode, but switch to landscape and it becomes much easier to use. As such, I’ll be giving the app a bit more room on my iPhone and a bit more time to explore it.
So you might be interested to know just what this app is all about. Here’s the app’s description:
expressionPad is a new kind of musical instrument. Continuous multi-touch support means you can control pitch bend, dynamics, and modulation with each touch, even as you change notes.
We’ve created a flexible and intuitive interface so you can focus on music. Watch our video and see for yourself!
Tune your expressionPad in fourths, fifths, or in guitar tunings such as standard or Open C.
expressionPad features a built-in polyphonic synthesizer/sampler so you can start making music right away. Experience the flexibility of polyphonic portamento — an electronic music first!
Connect to your music studio via Core MIDI, Apple’s inter-app music protocol. With expressionPad, you can play Reason, Ableton Live, Logic, and GarageBand more expressively than ever!
A clean interface, a flexible synthesizer/sampler, and lightning-fast MIDI response. Push your musical ideas to new limits with expressionPad.
Open up a browser tab, use code to sketch musical loops and grooves (using trigonometry, even), and play / export – all in this free tool.
So — why?
Developer Jack Schaedler is quick to caution that this is neither intended for teaching code nor teaching music, that better tools exist for each. (Sonic Pi is a particularly accessible entry for learning how to express musical ideas as code, used even by kids!)
Then again, you don’t have to believe him. That same spirit that made him decide to do this for fun seems to be infectious. And this might be an entry into making this stuff.
For coders, it’s yet another chance to discover some code and libraries and perhaps bits and pieces and inspiration for your own next project. For everyone else, well, it’s a terrific distraction.
And you can export MIDI, so this could start a new musical project.
By the way, someone want to join me in building this actual inspiration for Jazzari? It could be killer by next summer, at least.
The name is a riff on the 12th century scholar and inventor Ismail al-Jazari. al-Jazari is thought to have invented one of the first programmable musical machines, a “musical automaton, which was a boat with four automatic musicians that floated on a lake to entertain guests at royal drinking parties.”
Bonus, for my Arabic, Kurdish, and Persian friends in electronic music – no one knows which of those accurately can claim this guy. We clearly need to get something going.
The Groovebox app for iPhone and iPad already gave you a way of starting musical ideas quickly. A new update might help you make finished songs, too.
Since Ampify started out with their first app ‘Blocs Wave’, they’ve been doing everything that they can to make creating music on mobile a reality for a wider audience, and I think they’re doing a pretty good job of it. Arguably their app range contains some of the most popular entry-level apps for getting going with making music. Launchpad is an excellent example, and I’ve already mentioned Blocs Wave, which is equally accessible.
Their latest app, Groovebox, had a great start. It distinguished itself in a very important way. It enabled users to create music rapidly by giving them the ability to create semi-random patterns. That might sound simple, but it’s actually really important. It can give you the start without having to figuratively stare at a blank sheet of paper. It’s also really unique.
Now Ampify have taken Groovebox even further. They’ve added sections to the app so that you can create whole songs using Groovebox. ‘Song Sections’ help you build and arrange ideas quickly and easily. You can use instruments and patterns to make ideas, then easily structure your track by adding, moving and deleting sections with a swipe of your finger. On iPad a full screen section view shows you all your sounds. On iPhone a simpler section view shows you just what you need without cluttering up your screen.
Build bigger tracks with sections on iPad and iPhone
Sections can be added, reordered and removed easily
Switch sections at any time to jam with your ideas
Simple iPhone section view removes the complexity of song arranging
On iPad a full section view shows you all your sounds
On iPhone a simple section view shows you just what you need without cluttering up your screen
Groovebox is still a free app on the store with a Pro IAP at $4.99, £4.99, €5.49. Each instrument also has a soundpack store. Soundpacks are either Drumpacks (with samples & patterns), or Presetpacks (with presets & patterns) costing $1.99, £1.99, €2.29 each.
Koma today revealed a sequel to their crowd-funded smash hit Field Kit. And it’s a whole bunch of patchable effects, for €249 (€219 for funders).
Inside that box, there’s a load of different effects to play with:
Sample Rate Reducer / Bitcrusher
Analog Spring Reverb
Yeah, you read that last one right – there’s actually a physical spring in there for reverb. Behold:
Looping of course means that you could make the FX a hub of performance. And in addition to the other digital effects, that frequency shifter opens up some really interesting possibilities.
So, whereas the first Field Kit depended on you attaching contact mics and working with the mixing functions, the Field Kit FX actually has a lot more sonic possibilities included right out of the box. There’s still a companion book to go with it, and of course this is already intended as a clever
But, for a kind of “weirdo modular effects toolkit” in a case, you also get a bunch of tools for applying these effects, by mixing and sequencing them:
4 Channel VCA Mixer
4 Step Mini Sequencer
All over the place, you’ve got various patch points. That’s a chance to connect to other analog I/O – which certainly includes Eurorack modulars, but these days a lot of other gear, as well, even desktop units from Novation, Roland, Arturia, KORG, and the like.
And there’s a new 4-Channel CV Interface for bringing it all together, meaning you can come up with pretty elaborate modular connections.
4-channel CV interface for communications with other gear – now not just modular, but a lot of new desktop stuff, too.
In fact, for under three hundred bucks, the whole thing looks a bit like either a shrunken Eurorack modular or a tabletop of analog and digital effects merged together for patching.
Now, this is still definitely geared for advanced users. There’s no MIDI. And the CV routing, while powerful, might be overwhelming to newcomers – for instance, there’s not a single, simple trigger in to clock that sequencer. (That’s not necessarily a criticism – the various CV options mean loads of creative flexibility. But it does probably mean this box is more for people who want to get deep into patching.)
Klevgränd has brought us a very wide range of iOS apps and Brusfri is yet another good example of this. Apparently Brusfri means “Noise free” in Swedish. Which I did not know before this.
Klevgränd describes Brusfri as
A highly advanced audio noise reducer, packed into a simple and straight-forward interface. It is very well suited for cleaning up noisy audio recordings, while retaining sound quality. Unlike many other noise reducers on the market, Brusfri doesn’t mess with audio phasing to suppress noise (a technique that often comes with audible side effects). Instead, multiple fine-tuned gates are used to silence unwanted noise.
Of course noise reduction can be useful in lots of situations, especially in the iOS world with mics that aren’t perhaps as good as they could be. One very useful feature in Brusfri is its “Learn” function: press “Learn” during a couple of seconds of isolated noise to record a noise “profile.” For example, you could open Brusfri in any AUv3-compatible host, find a region in the recording that contains only noise, and play it. Tap and hold the “LEARN” button for a short while (a second will be enough).
When the LEARN button is released, Brusfri starts reducing noise.
There are also parameters that affect the noise reduction algorithm:
Threshold: determines how much noise is removed; a lower value means more reduction.
Attack and Release: set how quickly the reducer starts and stops working.
EDGE: controls the reduction aggressiveness; a lower value gives a smoother reduction.
HPF: reduces low frequencies.
HIGH: compensates high frequencies by boosting them.
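As a conceptual sketch of gate-based, profile-driven noise reduction (emphatically not Klevgränd's algorithm: the band splitting, threshold, and duck amount here are invented for illustration):

```python
import numpy as np

def learn_profile(noise, n_bands=32):
    """Learn per-band noise energy from a noise-only clip, in the
    spirit of a 'Learn' function. Crude single-FFT version."""
    spec = np.abs(np.fft.rfft(noise))
    bands = np.array_split(spec, n_bands)
    return np.array([b.mean() for b in bands])

def gate_reduce(signal, profile, threshold=2.0, n_bands=32):
    """Gate-style reduction: attenuate each band that does not rise
    sufficiently above the learned noise floor, leaving louder
    (signal-carrying) bands untouched."""
    spec = np.fft.rfft(signal)
    mag_bands = np.array_split(np.abs(spec), n_bands)
    edges = np.cumsum([0] + [len(b) for b in mag_bands])
    gain = np.ones(len(spec))
    for i in range(n_bands):
        if mag_bands[i].mean() < threshold * profile[i]:
            gain[edges[i]:edges[i + 1]] = 0.1   # duck, don't hard-mute
    return np.fft.irfft(spec * gain, len(signal))
```

The point of the sketch is the contrast the description draws: nothing here touches the phase of the kept bands, it simply turns quiet bands down, which is why gate-based reduction tends to avoid the smeary artifacts of phase-based noise suppression.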
NOTE! Brusfri is CPU intensive, and an iPhone 6S / iPad Air 2 or better is recommended. We highly recommend freezing/bouncing tracks to reduce CPU usage. Brusfri is currently AUv3 only, which means it won’t work in stand-alone mode; it must be used as a plugin.
Brusfri costs $7.99 on the App Store now (including a 33% launch discount).