The audio software chronicles

General Discussions for Audiography
Nikhil Mulay
Jr. Member
Posts: 52
Joined: Wed Dec 13, 2006 8:43 am

The audio software chronicles

Post by Nikhil Mulay »

EVAN BROOKS, DIGIDESIGN

I started out as a musician and technologist, studying electrical engineering and computer science at Berkeley. I’ve been playing in bands since I was 13 or 14; Peter, who I started Digidesign with, has been playing in bands with me since high school. Still does, in fact.

Back in high school, we would record ourselves on whatever reel-to-reel tape machines our parents had, and then we invested in one of the first TEAC four-track machines — remember that thing? Of course, once you got a four-track, with overdubbing and synchronizing, and everything edited down to two-track masters, you’d do splices with the razor blade and grease pencil. We used pencil [laughs]. I also did some rewiring and hacking of existing gear to fix it. Sometimes it was done for sport, sometimes to improve something or take care of a badly designed product. Or I’d make some sort of a patch, or build kit synthesizers or synths from scratch where you just sort of make it up yourself.

The earliest work that we did at the company came out of a number of years of just playing around with electronics and electronic music. We had built a recording studio basically from scratch; Peter had gone to college for recording arts. He bought a Drumulator one day — he’s a drummer himself — and he said, “We can certainly change these sounds on it.” And I had no friggin’ idea how to change the sounds on that machine. So I said “sure,” and we just learned how to do that. We went to the company and asked, “How do you change the sounds on the Drumulator?” They said, “A lot of people have asked us how to do it, and we’re happy to tell you but nobody’s ever actually done anything with this information.” I said I’d guarantee them that if they told me what it takes, I’d do it.

He explained the process. So, of course, we had to sample these sounds, and back then you couldn’t just go out and sample sounds; the only samplers back then were the Emulators. So we hacked into the Emulator to use its digital output; Peter wrote some custom software that would let us take sound out. Then we had to make a computer, and we had to design some interface parts that would sample into this computer.

Over time these drum machines got to have higher and higher quality sounds, and suddenly the Emulator’s recording wasn’t quite good enough. Remember the Sony PCM? That was the high-quality sampling machine of its time; actually it’s still pretty good. So Dana Massie over at E-mu shared this information with us.

We wanted a card with a digital output that records things on the PCM and then brings them over to this computer. But when it was time to edit, all we had to look at was ASCII gibberish on the screen. When you’d record a drum sound, these drum chips were really tiny and the capacity was really low. At that time it was: how do you fit a very long decaying sound onto this really short chip? You have to use a lot of compression, and a lot of really brutal editing. And it turned out that doing editing like that was really difficult — you could never do it on tape, it just doesn’t work; but doing it onscreen with a bunch of ASCII characters was ridiculous. So it occurred to me that if only I could see the sound, I could make qualified decisions about where to edit things.

This was at the time that the Macintosh came out, and suddenly, when I created a bit-mapped illustration on it, the first thing that came to my mind was, “Oh my god, I’d be able to see the sounds that I was trying to edit on the screen.” So it’d be great if I could take the sound out of this S-100 hobbyist computer we had made and transfer it over to the Macintosh.

The Emulator II had come out at that time, and so we worked with E-mu to actually write the software we needed using the Emulator II as the sampling part. The idea was, you’d sample yourself on the Emulator II, download it through the digital interface to the Macintosh, where you could then edit the sound and play it back there. But if you could edit the sounds, you could also do signal processing, because suddenly you had a computer with serious crunch to it, relatively speaking for that time. You could edit the sounds, manipulate the sounds, and then send them back. From there, you could move all the parameters of the edit of a complex instrument.

Motivation
A lot of people look back in hindsight and proclaim people to be visionaries, and in fact I think we’re all pretty short-sighted. Motivation is the key ingredient. I’m mostly motivated by my own needs, and what it is that I need at that point in time. Back in the early ’80s, I couldn’t meet my needs with the money I had available. Back then, if you wanted to do digital recording you had to buy a Synclavier, at a couple hundred thousand dollars a system, right?

The idea behind recording and editing digitally was out there, and it had been put together by people who had an enormous amount of money — but they could never sell the thing. People over at Lucas were doing state-of-the-art things; they had the money and the time, and they had the expertise to develop custom systems that were doing all the audio and video editing, and they had the processing power. But people like us would look at that and go, “Yeah, that’s really cool, but I’ll never get one of these things in my lifetime.”

But what you do is, you say, “Wouldn’t it be neat if. . .?” Then you turn your attention to your basic needs, like “How am I going to edit this?” So for me, having all the experience with the technology and the computers, it was a no-brainer that our music was moving into the digital realm, because we were working with sampled sounds. I needed something visual to edit with, something accurate, something reproducible, and for God’s sake, something that you could undo, because I was not a good editor, and if you make a mistake with a razor blade, it’s just over. So for my needs, I needed something where I could just say “undo.” And when that concept was presented to the guys at Macintosh, it was kind of the obvious answer to the particular problem of what we were doing at the time. In that sense, I guess it’s visionary in that I really believed that this was the way to go. Was I thinking about 32 channels of multitrack recording and all sorts of DSP? No, I wasn’t, because that was not what I was doing at that time, and it was just so far out in the future.

Once you’re editing, though, you want to be able to hear what you’re editing, right? If you’re editing this nice 16-bit audio, but the Macintosh you’re editing it on has 8-bit playback, you can’t make any subtle changes and actually hear it, because the sound quality is so crappy. So I said to myself, “How do I improve this situation? Where am I going to get better audio output?”

I started looking at the interfaces to the Mac, how to get high-quality audio in or out of this thing, and that’s when we came out with an interface for that machine that had high-quality A-to-D jacks on it. From there I plugged it into a Mac and I could see what I was doing, but gosh, whatever I was doing sure took an awful long time. I could sit there and type in some numbers and do a lot of crunch, and then I have to listen to it, and if I don’t like it I have to go back and type in some more numbers and let it crunch and listen to it again.

And you start comparing that experience with the experience you had on an analog console, where you can just reach out and grab something and turn it and adjust it, and you start to realize that the thing that’s missing is that the modify-listen cycle is all wrong in the digital setup. When you’re adjusting something, it’s a continuous loop between your hands making the adjustment and your brain listening to it; you can’t just turn it from here to there, it’s an interactive process. It’s a loop. And you don’t really notice it because you’re used to doing it all the time, but when suddenly you have to slow down and do it piece by piece, with a big gap between each piece of the movement, you realize that it is a big loop and it’s performed over and over. And when you make it take a long time, it goes from being ordinary to tedious.

So that was the next big thing that was missing from the picture. Our focus started to become “How do we speed up this process?” And there were several ways you could do it: You could get more clever with your coding to make things go fast, or do what some Photoshop-style filters do and render only a portion of the picture you’re working on. Or you could try to accelerate the process with hardware in some fashion. And our answer to that came in the guise of the Macintosh II; not only was it faster than the original Macintosh, but, lo and behold, it had card slots, and suddenly you could fit things into this machine.

So the first thing that occurred to me was: what can we do about our problem here, our modify-listen cycle? A good friend of mine who used to be at E-mu, a guy named Terry Schott, was working for Motorola at that time, and he said they’d been working on a digital signal processing chip that would be well adapted to audio. Terry got us an early prototype of this thing, and the very first thing I did was design a circuit board for a card, put on some high-quality digital-to-analog converters, and then we wanted our Sound Designer software to take advantage of it, so that all the EQs and whatnot that you could do in Sound Designer would actually run on this board. The digital signal processor would run in real time on this audio, then pump it out as 16-bit analog output. And you could tweak the parameters on the screen — and suddenly, everything was in real time, and it was like night and day.
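To make the real-time idea concrete, here is a minimal, purely illustrative sketch in Python (not Digidesign’s code; the function names and parameter choices are my own) of the kind of parametric peaking EQ such a card would evaluate sample by sample, using the widely published Audio EQ Cookbook coefficient formulas:

```python
import math

def peaking_eq_coeffs(fs, f0, gain_db, q):
    """Biquad coefficients for a peaking EQ (Audio EQ Cookbook formulas)."""
    a = 10 ** (gain_db / 40.0)              # linear amplitude from dB gain
    w0 = 2 * math.pi * f0 / fs
    alpha = math.sin(w0) / (2 * q)
    b0, b1, b2 = 1 + alpha * a, -2 * math.cos(w0), 1 - alpha * a
    a0, a1, a2 = 1 + alpha / a, -2 * math.cos(w0), 1 - alpha / a
    # Normalize by a0 so the per-sample recursion needs only five multiplies.
    return (b0 / a0, b1 / a0, b2 / a0, a1 / a0, a2 / a0)

def process(samples, coeffs):
    """Run the biquad difference equation over a block of samples."""
    b0, b1, b2, a1, a2 = coeffs
    x1 = x2 = y1 = y2 = 0.0
    out = []
    for x in samples:
        y = b0 * x + b1 * x1 + b2 * x2 - a1 * y1 - a2 * y2
        x2, x1 = x1, x
        y2, y1 = y1, y
        out.append(y)
    return out

# Example: +6 dB boost at 1 kHz, Q of 1, on a 44.1 kHz stream.
boost = peaking_eq_coeffs(44100, 1000, 6.0, 1.0)
```

The point is only that each output sample takes a handful of multiply-adds, which is exactly the kind of workload a dedicated DSP chip of that era could sustain at audio rates while the host computer handled the screen.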

We took this board to the NAMM show, just to get some feedback from people, and we were just inundated. People thought right away, “Oh my god, this is amazing.” We figured out that we were gonna have to add some analog inputs to this thing — that version we showed at NAMM we had made in, like, two weeks, staying up all night at Apple, who was working with us real closely at that time, to make this thing work in time for the trade shows. It was only monophonic at that juncture, but we came back and we tricked it out and came up with what eventually became the Sound Accelerator.

Having solved the modify-listen problem, the next thing that became apparent was that the biggest limitation of editing on the Mac was that you had to fit everything into memory if you wanted to hear it; using a floppy was ludicrous, and if you did have a hard disk, it was excruciatingly slow; you couldn’t actually play audio back in real time. When the Mac II came out, the technology had again advanced to the point that hard drives were just barely fast enough to play two channels of 16-bit audio at 44kHz. What was happening was that these drives, as you used them, would start getting hotter and hotter, the metal inside would start expanding, and every so often the drive would have to recalibrate itself, trying to figure out where all the tracks were on the disk, because the size of the disk was changing. So every so many minutes, the drive would shut down and rescan the disk and figure out where everything was again. And if it didn’t do this, you got a bunch of drive errors.
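For a sense of scale, here is the back-of-the-envelope arithmetic behind “just barely fast enough” (assuming the exact CD rate of 44.1kHz): the stream has to arrive continuously, so even a half-second recalibration pause breaks it.

```python
# Sustained data rate for two channels of 16-bit audio at 44.1 kHz.
channels, bytes_per_sample, sample_rate = 2, 2, 44_100
bytes_per_second = channels * bytes_per_sample * sample_rate
print(bytes_per_second)                       # 176,400 bytes/s, about 172 KB/s sustained
print(round(bytes_per_second * 60 / 1e6, 1))  # roughly 10.6 MB per minute of stereo audio
```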

This was a huge problem. You’d be recording to hard disk, and you’d get a glitch in the middle of it because the drive had basically gone offline for a tenth of a second, half a second, whatever it was. And by then you’d be screwed. I brought in a little device that would play the audio from the hard disk, and sure enough it worked. So we built that, and suddenly Sound Designer went from being really a kind of sample-editing tool to being a real two-track, full-length audio production console. That was a huge shift. And again, it was this beautiful perfect storm of the technology being there and a need being had at the same time.

The visionary part is being able to recognize that the technology that’s there is going to be able to meet your needs, and then going further and saying, “Now that my needs are met, what new things can we do that we couldn’t do before?” Sometimes you couldn’t do something just because the technology wasn’t available; or the technology was available, but you just couldn’t afford it.

This technology was so much more useful and efficient to us [as recording musicians and technologists] than the analog technology that we figured other people would be driven by that also. And suddenly we realized that the technology was out there to make essentially a direct-to-disk Synclavier for a small percentage of the cost. You could bring these eminently useful tools into the common vocabulary, because now everybody would be able to afford to use them.

When we started the company, we decided to make a purposeful move into hardware. And that was a very different step, because it had been so expensive. We didn’t have experience contracting out circuit boards, purchasing electronics parts, testing, and so on, so it was a really big deal. Originally I was the hardware designer, but it became clear to me that there were people who do this kind of thing for a living day in and day out who’d do a much better job than me — people who were doing parts buying for electronics, people who understand the whole process of circuit board design — and as we got into the more advanced Pro Tools system, we brought in specialists in communication, because we needed ways to involve the people who would actually use the product.

The Musician of the Future
I think of myself primarily as a musician. That’s what I sit down and do every day; I’m a piano player. It’s certainly an indelible part of me; it affects how I think, how I look at the world, how I think about problems, and how I solve them. Being a musician, I was making products for myself. I can see the utility of the future as it applies to music and music-making, because that’s my job. That’s what I do.

Digital audio is pretty much a mature industry at this point. People like to talk about evolution vs. revolution: at the beginning of the life cycle of a technology you tend to go through a revolution, with big changes in the big picture. The last major sea change that we all saw, and that we’re still on the tail end of, is the migration away from workstations that need dedicated hardware to drive the software. There are workstations now that don’t require external signal processors or external boxes; a lot of computers come with high-quality converters built in and hard drives that are large enough, and they literally can do anything straight out of the box.

As it turns out, what has happened is that people’s needs and desires have grown to keep pace with the capabilities of computers. What people were doing with Pro Tools 10 years ago you could probably do entirely in software now, but the fact is that people are doing a whole lot more than they were 10 years ago, and they have a whole lot more things that they want to do, so they’re still pushing the limits. But that move toward software only is kind of the last big thing; every so often you see people come out with new ideas, but mostly now it’s incremental — taking more functions and putting them strictly on software, making it work more reliably together.

People are coming out with things like synthetic singing, for example, things that you can’t quite do yet. It all comes back to that old question of “What can’t you do?” Software that literally understands enough about music to take polyphonic material and pull it apart, so you can edit it after it’s been mixed. These are the kinds of things that people will say “Wow” about at some point in the future. I don’t think they’re anything that people are particularly missing right now, ’cause just about everybody’s needs have been met, but people keep pushing that envelope and coming out with new ideas. As the processing power grows, so does people’s ingenuity.

COLIN McDOWELL, McDSP

Analog. Amen. Arguably the most tweaked data archiving standard in history. There were many subjective and objective criteria, both of which would change as the music industry grew. Many thanks to the brilliant folks who have gone before us to give us this Holy Grail.

Digital. Dang. What the heck happened to the warm-groovy-big-fat-in-your-face sound? Computers have a funny way of doing exactly what we tell them, not what we want them to do. Ditto when it comes to processing and saving our audio data.

And so the digital revolution begins.

Where was I? Like most of you, I’m just another foot soldier in the professional audio space, trying to figure out what to do next. So this will be kind of like embedded reporting.

Past
After many joyful suit-wearing years at IBM, I started at Digidesign in 1995. TDM was fairly new. Plug-ins were in modest supply. Analog angst was in full swing. The sound of any DAW was regarded as lifeless, without character, punch, or warmth.

Digital signal processing products of the day went for flexibility, but with the exactness only a computer could achieve. So while a user could now precisely carve out frequencies, or set up a compressor on every track, the end product was still fairly sterile.

I was tasked with improving the sound of the Digidesign EQ and Dynamics plug-ins while making the algorithms as efficient as possible. The EQ II and Dynamics II plug-ins were done in 1996. During this time I also created a 4-band EQ config called the GQ (it was green, get it?). It used the same algorithm as the EQ II, just different filter shapes. Dave Lebolt, then the Director of Product Strategy, was (still is!) pretty anal when it comes to the sound of things, so I figured he’d give me some tips on how to make it better. We went to a local studio where he and Eric Valentine (Third Eye Blind, Smash Mouth) checked it out. After 20 minutes of tweaking, during which I assumed both would say “this sucks,” it was instead received warmly (no pun intended).

OK, is this part of the revolution? Industry pros with analog-experienced ears giving a sonic nod to, holy crap, a software plug-in? Never mind that it was a single-precision, extremely minimal algorithm. It had a sound that met some subjective criteria known only to folks who spend way too much time in dark rooms with mixing consoles and big speakers.

So was it right then, to design products in the spirit of "as many as possible on a single DSP chip," or would it have been better to put the sound quality above all other design criteria, and worry about how many channels of audio it could process later?

If one considers all the over-design that goes into analog gear, I think you come to the same conclusion.

And so lesson number one was learned. It’s the sound, dummy! When a user finds something they can use, it will be their first call. It doesn’t matter if it can do 128 audio channels on the head of a pin, or does a mere two channels once it completes its required 24-hour warm up cycle. Audio engineers need tools that sound good. All other matters are secondary. Just look at how many folks thought splicing tape was a good way to work!

As my tenure at Digidesign continued, I began to note how the Pro Tools application grew, and yet the signal processing aspects of it (the plug-ins) were largely unchanged, or at least very low on the engineering queue of work. The GQ never saw the light of day.

Well, I liked the signal processing part of it all, and later moved on to Dolby Laboratories. We did DSP all day long. Life was good. Dolby E was fun (I still say it should have been “Dolby G” and green).

I read all about how Dolby started, the first set of products, and so on. It looked like even more fun. I figured if the dot-com folks could do it, then so could I. My wife had a license to kill me in my sleep if all did not go well. McDSP was born.

Nearly Present
McDSP’s first product, FilterBank, had (has!) the highest level of sound quality I could spec out: zero noise floor, no sample delay, flexibility beyond any competing product, and an amazing sound (IMHO). It could adapt to a variety of user criteria for “good sounding EQ.”

Our first trade show was AES in San Francisco in 1998. I had the entire company on a dolly — one CPU and monitor, a box of lit, free slinkies and product demos. I met our first customer, the very excitable Rob Barrett Jr. I crashed and burned demonstrating to Jerry Harrison. My wife asked Rhett Lawrence if he used Pro Tools. In short, it was awesome.

With each demo, I learned a little more. Every bit of potential customer feedback gave me more insight into what folks wanted sonically out of EQ (and later, compression), and every dang one of those passionate comments somehow made its way into the final version that shipped in early 1999. I have been asked many times how I came up with FilterBank’s sound. The answer is simple — I didn’t, our customers did.

Let’s call that lesson number two. It’s the customers, dummy! Folks in the pro audio space expect gear to sound amazing and work forever. Why do a Neve 1081, a Manley SLAM, and an API 550 command such industry respect (and such high retail prices)? They are as reliable as they are sonically superb. And when something goes wrong, there’s a human on the phone in about 30 seconds to explain how it can get fixed (or how to download the update). Minimizing periodic update charges isn’t a bad idea either.

I expect many other “old” plug-in companies had the same kind of experiences. Users wanted to integrate every aspect of their work into a DAW session file, and yet found the need for outboard gear continuing, if not increasing with the complexity of new sessions.

And so the plug-in industry grew. As more digital audio workstations found their way into everyday use, digital signal processing (plug-ins) became a permanent part of the studio.

McDSP grew from the extra bedroom in the house, to the dining room, to a real office with employees and cubicles. Heck, I think we’re a real company now.

Present
Now the digital audio workstation market is huge (have you been to Hall A at Winter NAMM lately?). Many options exist, not just Pro Tools. Even some of the big analog hold-outs (Neve, SSL) have joined the ranks of plug-in developers.

There are tools both flexible and one-trick-pony-like. Emulations and innovations abound in EQ, compression, virtual instruments, reverb, and guitar amp simulations. It would be very hard to make a “bad recording” these days.

And yet is this a good place?

We are over-run with plug-ins of every conceivable format, type, make, and brand. Some are great. Some are not so great. How is the user supposed to parse through all the information? Let’s go back to the days of a few manufacturers with good reputations, when we knew the gear we were going to buy would be around at least as long as we are.

You could say the digital revolution has led us into digital anarchy. Few file standards, many competing platforms, computers, and software updated at such frequent intervals you wonder when you’re going to get back to making, what’d they call it, music?

Fortunately, we all have the same facilities that allowed us to make choices in the “good old analog days” — our ears and the space between them. Does it sound good? Can you use this feature? Do you like the color green? It all comes right back to the customer and the customer’s needs will continue to drive the industry forward.

Future
As with the giants that have come before us — Neve, SSL, API, and many others — innovation, dedication to the customer base, and reliability will determine who among us will be around in the next 20 years. Oh yeah, and it’s gotta sound “warm.” Here’s to hoping it’s green too.

STEVE BERKLEY, BIAS, INC.

The story of BIAS, Inc. begins with Peak 1.0. I originally wrote Peak in 1995 because I used the audio editing tools of the day and found them limiting. My perspective was, and still is, that of a composer, sound designer, and keyboardist. Digidesign’s Sound Designer and Passport’s Alchemy were great tools at the time, but I became frustrated with some of their limitations, and that was my muse.

Sound Designer was slow when working on large files. Alchemy was super fast but had to hold the entire sample in RAM (remember when RAM was expensive?). Sound Designer was destructive — with only one level of undo, I always felt I was one edit away from messing up my source file. Another drawback: Sound Designer required hardware.

When Apple began to ship the first PowerPC Macs in the early 1990s, I realized that there was an exciting future for native audio software. So I decided to write a dedicated sample-editing application that had the best of both worlds (fast and non-destructive) without requiring extra audio hardware (native).

So the result was Peak 1.0 — a fusion of an audio editor and a DAW. The editing engine under the hood in Peak is essentially a DAW-like EDL-based system but with a 2-track audio-editor-style view of the EDL. This makes editing fast and non-destructive, and gives you an unlimited undo/redo capability so you can always try something out and go back later. As a composer who likes to write my own sound design tools, I also got to add many interesting sound design tools into Peak, such as Convolution, ImpulseVerb, Harmonic Rotate, and Rappify.
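As a rough illustration of that architecture (a hypothetical sketch of the general idea, not BIAS’s actual engine), a non-destructive editor never rewrites the source file: the document is an edit decision list of regions pointing into one or more files, and unlimited undo/redo falls out of keeping a history of EDL states.

```python
from dataclasses import dataclass
from typing import List

@dataclass(frozen=True)
class Region:
    source_file: str   # audio file on disk; never modified by editing
    start: int         # offset of the region within the source, in samples
    length: int        # number of samples to play from that offset

class Edl:
    """Toy edit decision list: the 'document' is just an ordered list of regions."""
    def __init__(self, regions: List[Region]):
        self._history = [list(regions)]   # every committed state is kept for undo
        self._pos = 0

    @property
    def regions(self) -> List[Region]:
        return self._history[self._pos]

    def _commit(self, new_regions: List[Region]) -> None:
        # Drop any redo states, then append the new state.
        self._history = self._history[: self._pos + 1] + [new_regions]
        self._pos += 1

    def cut(self, index: int) -> None:
        """Remove one region from the document; the audio on disk is untouched."""
        self._commit([r for i, r in enumerate(self.regions) if i != index])

    def undo(self) -> None:
        self._pos = max(0, self._pos - 1)

    def redo(self) -> None:
        self._pos = min(len(self._history) - 1, self._pos + 1)

# Example: delete the middle region, then change your mind.
doc = Edl([Region("take1.aif", 0, 44100),
           Region("take1.aif", 44100, 22050),
           Region("take2.aif", 0, 88200)])
doc.cut(1)
doc.undo()   # the original arrangement is back; nothing on disk ever changed
```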

My wife Christine and I took Peak 1.0 to the NAMM show in 1996, hopeful that a few people might also find Peak interesting enough to use. To our surprise, it was a huge hit. There was a void in the market for sample editing, so it was good timing. That’s how BIAS got started, and for the first year we operated out of our rented condominium in Sausalito, California. Several other talented people joined our cause, each of us working as a “distributed company” out of our own homes in the San Francisco Bay Area. Soon BIAS outgrew the home-operated infrastructure and it came time for us to open our first office in 1998. We’re still a very small company relative to others in the MI, just 20 people, mostly musicians like me who believe passionately in our products and end-users.

On Native Audio Software and the Evolution of Software Plug-ins
We’ve always placed a great deal of importance on supporting third-party plug-ins. Allowing other companies to extend your product via plug-ins is an extremely successful model I’ve observed other companies like Digidesign and Autodesk use to build a large third-party developer base, and keep their own products exciting by leveraging and co-marketing with one another.

When Peak 1.0 first came out, processors had only recently become fast enough to mix and apply gain to audio. As a result, plug-in formats were all offline, except for the ones that used a 56k chip on a dedicated hardware card. OSC was supporting Adobe Premiere’s audio filter plug-ins in Deck, so I decided to add this to Peak 1.0.

This ended up being a great thing, especially for SFX Machine, a sound effects monster that was once published by BIAS. We really pushed the envelope with Premiere plug-ins, even allowing them to be used in real time at one point (commonplace for plug-ins now, but very innovative in its day).

Later, we added support for Digidesign DAE and TDM plug-ins in Peak 2.0. It was becoming clear that processors were headed toward exponential speed increases, and by the time Peak 2.5 shipped we had added support for what had quickly become the dominant native plug-in format, Steinberg’s VST. Peak Pro 5 now also supports virtual instruments (VSTi), as well as Apple’s AudioUnits plug-ins for real-time effects processing and instruments.

On the Evolution of the Mac OS
OS transitions are hard, period. And nobody likes them. But we have to do them. After going through the months of work that was involved going from 68k to PowerPC, from Toolbox to Carbon, from CFM to Mach-O, and now working on IBM to Intel, I’d say I’m now ready for a little break from transitions. But I think Mac OS X is a great OS for audio and has a bright future.

On Audio Editing Innovation
Some features people may not realize were original Peak innovations: Blending, Unlimited Undo/Redo with audio, Threshold, the Recording Notepad, editing during playback, “Loop Surfing” to move loop points alone or together in realtime during playback, Dynamic Scrubbing, native third-party audio plug-ins running in real time. . . .

Dynamic Scrubbing is a feature that grew out of a sound effect I used while working on some compositions I was writing using an Ensoniq EPS-16+. Basically you could make a very small loop, and then assign the loop position to a controller like the mod wheel. The effect was very coarse, but it was exciting because you could “play” through a sound at any speed, forward or backward, without shifting the pitch. When I implemented this in Peak, I was able to refine it by applying an envelope to the sound snippet to eliminate any crackling.
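A bare-bones sketch of that behavior (hypothetical, not Peak’s implementation): grab a short snippet of samples around wherever the controller points, shape it with an envelope so it starts and ends at zero, and string the snippets together. The playhead can move at any speed in either direction while each snippet still plays at the normal sample rate, so the pitch stays put and the envelope keeps the joins from crackling.

```python
import math

def scrub_grain(samples, position, grain_len=1024):
    """Return one envelope-shaped snippet taken at 'position' (in samples)."""
    start = max(0, min(position, len(samples) - grain_len))
    grain = samples[start:start + grain_len]
    # Hann-style envelope: zero at both ends so consecutive snippets don't click.
    return [s * 0.5 * (1 - math.cos(2 * math.pi * i / (grain_len - 1)))
            for i, s in enumerate(grain)]

def scrub(samples, positions, grain_len=1024):
    """Concatenate snippets taken wherever the controller says to look;
    'positions' may run forward, backward, or hold still."""
    out = []
    for pos in positions:
        out.extend(scrub_grain(samples, pos, grain_len))
    return out
```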

The Playlist, which first appeared in Peak 2.0, was based on the old Sound Designer Playlist, but with some novel ideas. First, it didn’t require all the regions to reside in one audio file — you could use regions from multiple files. Also, we put native realtime effects plug-ins on each individual playlist event. The Playlist went basically unchanged for a while, until recently. The Playlist in Peak Pro 5 has a graphic view, unlimited undo/redo, and direct support for CD burning with CD-TEXT, PQ subcodes, ISRC, audio-in-pause, and so on.

On the Future of Audio Software
As Peak was successful enough for us to grow the company with it, we’ve been able to add engineers who have been working on new products for us like SoundSoap & SoundSoap Pro (noise reduction) and the Master Perfection Suite (Sqweez, Reveal, Repli-Q, Gate-X, and PitchCraft). We’ve developed a nice collection of mastering plug-ins, with beautiful user interfaces that complement the editing and mastering tools available in Peak.

Peak has evolved into a robust product, an industry-standard for editing audio on the Mac, with a large and devoted user base. It’s not been easy, but we’re doing great. Our SoundSoap noise reduction products have been very successful because we listened to users’ requests for easy-to-use noise reduction. Peak’s users and beta testers have been invaluable, providing lots of feedback over the years that has literally shaped the feature set of the product, yet we’ve managed to keep the product as simple to use as a text editor.

The music software industry was originally created and sustained by a group of innovative companies with visionary ideas. Into the future, the industry must continue to be seeded with more people who have interesting ideas. Great products are created by people who can think of unique approaches to solving problems, new ways of processing sound, and are willing to take bold risks to try new things.

There are a lot of exciting technologies I expect we’ll start seeing more of in the next few years, and I would also expect to see native processing power continue to increase, with more distributed processing capabilities, and better audio quality as a result.

ENRICO IORI, IK MULTIMEDIA

I have an electronics educational background, but I started studying music when I was 10 years old and then began playing the guitar. Then I moved on to learning everything I could about traditional recording and production. My first experience in digital music came in the late ’80s and early ’90s, first with NeXT (anybody remember this Steve Jobs adventure?) and then with the Macintosh, where I was able to work on digital audio, making musical productions in dance and other styles.

I had always been passionate about computer music, going back to the early days of MIDI sequencing with a Yamaha MX computer, one of the first examples of a music-dedicated computer and, in retrospect, perhaps ahead of its time.

My first true digital audio-MIDI system was a Macintosh Quadra 840 doing eight digital audio tracks in Deck II (when it was made by OSC), syncing MIDI with Metro. Then I moved to Pro Tools with Session 8 and the SampleCell card. There were no plug-ins back then, apart from the first suite from Waves (L1, C1, etc.) running on Pro Tools NuBus cards. Later I worked on Opcode Studio Vision, which appeared as the first truly integrated audio-MIDI sequencer running fully natively on the CPU, and later Digital Performer with its off-line pitch- and time-stretching capabilities, and then Cubase.

In the beginning of the ’90s the sample market was in its early stages, with very few titles on the market. I actually had the idea for our first software product from working with sample products such as those from companies like Spectrasonics. It was by the late ’90s that I envisioned a huge potential in developing software instruments and effects for the newly forming computer music market.

My musical background has been a great help in designing software. IK was founded in ’96 by myself and one of my main partners, Davide Barbi, an audio engineer with a strong background in electronics, who also happens to be a bass/guitar/keyboard player. Davide is our R&D director today and the “ear” behind many IK products. We started as a multimedia company with a focus on audio production, and by ’97 we had designed our first software called AXE, a preliminary version of what became our first successfully sold product worldwide: GrooveMaker, a loop-remixing tool with included sample content.

Initially the goal was to develop instruments that didn’t exist in the hardware domain, using the new possibilities offered by the computer. When I went to Winter NAMM in ’97 with our first version of GrooveMaker, I was demonstrating it on a Pentium I laptop at 75MHz, and our remixing software was offered with sample content in both 44kHz/16-bit and 22kHz/8-bit, in order to be able to run on the low-powered CPUs of that time. But hardware power made tremendous strides, quickly moving through the Pentium II and III and the Macintosh PowerPC, and that was an opportunity for us to enter into the development of realtime effects processing and completely native software, which we did in ’98 with T-RackS, an all-in-one mastering station for every user. This opened the possibility of having a high-end tool with studio-quality sound for the masses, at a price anyone could afford. T-RackS was then able to set a sort of standard in analog-modeled mastering on the computer.

T-RackS 24 pioneered analog modeling with the computer and was very well received by many musicians and engineers. For us, T-RackS was developed as an initial step toward a series of technologies that started with the analog modeling of hardware for emulating EQs, compressors, and limiters, and modeling analog sound in general. Actually, there are some extremely rare circuits modeled in T-RackS; for example, the EQ was modeled on an analog console used at Abbey Road Studios in the ’70s.

SampleTank appeared in 2001 and it offered for the first time an integrated plug-in instrument with thousands of high-quality sounds with built-in effects. Software samplers already existed (I remember the now defunct Bitheadz Unity), but there was no strong integration between software, sounds, and DSP. But you’ll hear SampleTank, Sonik Synth, and Miroslav Philharmonik being used all the time on records, such as Eminem’s “Lose Yourself” from the 8-Mile soundtrack.

In 2002 we also developed a completely integrated guitar amp and effects rig as a plug-in for all platforms. Here too there was already one example on the market, but with limited functionality and platform support. Our idea was to include everything the guitarist needed in one plug-in, from stomps to amp, cabinet, microphone, and rack effects. In AmpliTube we added separate modeling of the various components of the amplifier, offering thousands of new amps from different combinations of the elements. It allowed the guitarist to use the software like a custom amp creation tool. In 2003 we released SampleTank 2, and in 2006 we are releasing AmpliTube 2, the sequels to our products in virtual instruments and guitar and effects modeling plug-ins.

With the launch of our first two hardware products at the 2006 Winter NAMM show, for us the future is leaning toward a stronger integration between hardware and software for the ultimate exploitation of the computer as a super-powerful musical instrument.

STEPHAN SCHMITT, NATIVE INSTRUMENTS

“I studied electrical engineering and worked as a developer of electronic equipment (e.g., communication systems), where I got my original theoretical background in analog and digital signal processing. As a musician I am mainly self-taught.

I started experimenting with electronic circuits in my youth, and the most fascinating aspect for me was to create and manipulate sounds with it. At the same time I had a big interest in music and got some education on flute and piano. So it was quite natural for me to become involved with the technical aspects of music. I was fascinated by being creative in both the technical and musical domain and it was sometimes hard to decide between both, even though today I would recommend musicians not to let the technology distract them from their music.

I was always looking for unusual sounds, and for new ways to express myself with an instrument. Sounds can be a source of inspiration, especially when you can improvise with them. I always saw it as an interesting challenge to explore the complex behavior of electronic sound sources like synthesizers. In my youth I read magazines on electronic engineering to study schematics of oscillators, filters, mixers, and so on. I then combined them in a small modular system. Guitars, distortion, amps, and speakers were another field where I began experimenting early on. The first keyboard instruments that I bought were the Rhodes and the Korg CX3, because I was playing in rock bands. When the Prophet 2000 came out, it became my first sampler and I experimented a lot with it while creating music for a theater project.

With all of those instruments I felt serious limitations after a certain time. I tried to modify them, which was hard because they were highly integrated digital devices. Usually there was so much room for improvement in the synthesis engines of the instruments, and I was always disappointed that this was not used by the manufacturers in the following product generations. The user interfaces of most digital instruments also seemed completely unacceptable to me. When we started writing our own software, we had to learn to realize realtime audio processing in the environment of operating systems like Windows. Limited processor power made intensive code optimization necessary. The latencies of the available soundcards made it hard to make playable software instruments. Therefore Generator, our first modular synth, was bundled with a low-latency soundcard that we developed ourselves.

I was always fascinated by the expressive potential of acoustic instruments, when played by good musicians, and one of my goals is to make electronic instruments as playable and responsive as acoustic instruments. You can get a dramatic range of dynamics and expression from a synthesizer. Generator was my first software project. Before that, I developed circuits for mixing consoles and similar hardware.

In the ’90s it became clear that the future of audio would be nearly completely digital, and when I saw the chance to implement one of the first software synthesizers, I teamed up with a programmer and transferred my experience with circuit design into the digital domain.”

ERIC PERSING, SPECTRASONICS

I’ve done many things, and being a musician is a very important part of it. I’m coming from a completely creative standpoint, very much the point of view of the user. But I’ve worked with engineers for many years to realize sound tools and these kinds of instruments. A lot of times when companies have been driven by technical people, those companies fail. I think you need to know who you’re making the product for and why, and what it is you’re trying to do musically. Those are the most important questions.

Since I was a kid, I was really enthusiastic about this concept of making synthesizers and things that make noise. I would draw diagrams of synths that I would love to make someday. So it’s amazing to me that now I actually have the ability to do that, and it was such a great experience for me working with Roland. They considered me kind of the voice of the American musician, and so I had this great gig where I would go to Japan and they would show me all the things they were working on, and I would give them my opinion of what musicians would think of it. I started with them right at the beginning of MIDI; I was one of the only people who knew how to use a MIDI sequencer, so I was really fortunate in the timing of my work with Roland.

That whole process of working with engineers and explaining my ideas and enjoying the process of seeing an instrument come to life — and getting to see my ideas show up in an instrument like the D-50 or the Jupiter series synthesizers — it was an exciting time to be doing all of that. But what was even more exciting was when software came along, and I realized that I could actually realize my dream instruments and some of the things that I wanted to do myself directly, and I could do that with my own company, without hundred-million-dollar factories and a thousand employees, and that kind of structure that was necessary in the hardware world. That’s really the revolution, I think, being able to implement your ideas in a very direct way. So times have changed dramatically just because of that. We used to work a year to two years on a synthesizer at Roland, and there would be maybe one or two synthesizers introduced a year by a major manufacturer. And now literally there’s at least one new synth introduced every day, so it’s a completely different situation now.

I’d always been interested in sound recording, from when I got my first reel-to-reel, playing with my dad’s tape deck and that sort of thing. I got interested in feedback, in patching things together in ways that would freak everything out, and in the unique sounds you could make. And then when I played my first synth, a Minimoog, it was all over. I still enjoy that process of discovering the personality of a machine.

But the challenge now is that there’s so much of it, so it’s a little more difficult in a way, because there’s so much to sift through. With Spectrasonics we put a lot of emphasis on trying to help show people how the product can be used, like we’ve done with video tutorials. I think that part’s just as important as the innovation, because if you only have innovation but you don’t have the education behind it, the application of it, it’s kinda pointless.

The first thing I wanted to do with the synth was to be able to turn it off [laughs]. The first synth I had didn’t have presets, and it was very easy if you didn’t know what you were doing to just end up with an infinite sustained sound, and it was very frustrating to me. I thought when I was using those machines, too, that there was so much there — of course I had some ideas I’d always wanted to try; particularly in the early days, it was such a feeling of there being no end to all the things you could do with sound. I’m kind of always searching, trying to get myself back to that place, because it’s so important that you push yourself to expand your creative thinking.

I went into a recording engineering program, to expand my knowledge, but at the time there wasn’t the kind of education structure that there is now; I remember we had a recording engineering class and a production class, and a songwriting class, and there was a contest and the songwriting winner would get to be recorded by the recording winner and produced by the producing winner. And I ended up winning in those categories, and they couldn’t accept the idea that somebody would produce their own songs, and then they almost kicked me out of school because I wanted to do it with electronic instruments [laughs]. They said, “No one graduating here will ever record a drum machine.” And for the last, what, 15 years no one has recorded anything but drum machines.

On the Development of the D-50
There was a large team of people involved, and I was fortunate to be in there right at the beginning of the creation of it. It was Roland’s first all-digital synth, and they had been working quite a while to get that together because they were kind of running behind the DX7, and so they wanted to do something different. The first idea was to have the sample attacks, like the clarinet attacks or cello attacks, but when I and some of the other sound designers got ahold of it, what freaked us out was some of the quirkier things you could do with it. We ended up putting a lot of strange samples in it, and then that really became the personality of the D-50 — it was the unique sounds it could make, not trying to imitate clarinets. They had originally conceived it as a way to make better realistic sounds — what’s funny now is that they don’t sound realistic at all. But the Digital Native Dance patch that I did for it, and which became famous, was based on a joke that the engineer did, because we thought it was funny that we had a little sequencer that had a PCM attack and figured out a way to play around with the processor in the D-50. We said, “No, don’t take it out, that’s great!” Then of course that was the element of it that really made that synth unique.

Flying Solo
I’d done a lot of work re-creating real instruments, and that certainly has its place. But where the really interesting territory lies is in creating new sounds, and that’s what software allows us to do. After developing sample banks for Roland and others, I decided to develop my own software, but the first thing we did was create sample libraries for all the different hardware samplers that were out there. We did that for quite a while, and it was frustrating because we were always limited by the technology of the hardware company or whoever was making the sampler.

That was very frustrating, because of what we wanted to do in the development of sounds and instruments. We made the shift in 2002 to doing software instruments exclusively, and then we were not only able to be involved with the development of the sounds but also development of the technology that played those sounds. And so we were one of the first companies that developed virtual instruments, large sample-based virtual instruments. It’s now pretty commonplace, but at the time that was a novel idea. Then in 2004 we made the shift completely into our own technology, and we’ve got all of our software development in-house. So we’ve evolved from a sound company into a music software company.

Ears and Eyes Open
I always have my finger on what we’re doing in the industry, and I look at the shareware developers and freeware developers and find out who’s doing what. We don’t have a huge team, we have basically a small team, but that team is very powerful, and very experienced. And that lets us go with the direction that the industry is going, to respond to changes that are happening in the business, which is very important — you had companies like New England Digital or Fairlight that didn’t survive because they weren’t flexible enough to keep up with changes in the industry.

The big change was when we went to virtual instruments. I put together these virtual instrument platforms because we’d already designed the engine to work hand in hand with the sounds. That way the instruments would work on all the different platforms, like VST, etc., and you could sell one product that everybody could use, instead of what I was doing before, which was making the same product over and over again for each of these samplers.

We’ve seen some of our ideas get incorporated into hardware instruments. It’s been an interesting reversal, but it’s all good, it’s all part of the process of things moving forward. People get inspired by what other people are doing, and that’s one thing I like about this business, it keeps you sharp. What you did a couple of years ago, it doesn’t matter. It’s what you have coming next. And it’s not only what you’re doing, it’s what other people are doing, too.

So there is a progression to the whole thing — but a lot of times, the older ideas I’ve had as a studio musician and arranger and producer, I still use those techniques all the time. So I draw on that — it’s just the habit of working on music, and that’s a really important thing. This led indirectly, for example, to the development of a rhythm program called Chaos Designer, contained in the Groove Control software, where you can interface with your own playing in an improvisatory way. It’s getting to the idea of people understanding that they need other musicians’ input; it’s great to be able to realize an idea, but when you’re playing in a band or interacting with another musician, there’s that spontaneity, there’s surprise — something can go wrong, but it might be cool. Chaos Designer introduces an element of controllable instability, and the ability to capture it.

The initial idea with the Distorted Reality series was, I just wanted to make some weird sounds [laughs]. I just pushed myself to use anything and everything I could find to create new sounds, and taking a lot of those same techniques and ideas but bringing them into the virtual instruments themselves so the end user could bring their own spin to it. It’s no longer just about selling a library of sounds; those sounds are the basis of the core of what we’re gonna do, and then we encourage the creativity of the end user to really take this and customize it and take it to where they want to go with it.

In coming up with new sounds, one of my secrets is that I’ll just do a whole lot of crazy things, and I won’t put many limitations on myself; I’ll just try a bunch of stuff, and I’m constantly recording. In the process of doing these experimentation sessions, invariably I come up with something really excellent, and then I do as many variations of that as I can ’til I’m sick of it, then I’ll try some other things. Then I put that away and come back to it maybe a couple of months later, and when I listen to it from that point of view, I’m being an editor; I’m really critical and listening purely from the point of view of “Let’s find only the good nuggets here, and get rid of everything else.”

I’ve created so many sounds now, I’m pretty harsh on myself, and also there are people who I work with a lot, sound designers, and we inspire each other, we criticize each other, and that’s a really important part of the process now. The coolest thing is that when we have an idea for something that doesn’t work as an instrument, we actually implement that into the virtual instrument so that it’s not only the sound itself, but it’s the control of the sound, the generation of the sound. Instead of only capturing or sampling the sound, we can actually capture the process of making a sound.

For me, the key is that we’re in the musical instrument business, we’re not in the software business, even if software is the tool that we use to realize our ideas. I think that any company that really understands that is going to do well; companies that make technology for technology’s sake or don’t really understand how the process can actually be used, they’re not going to do as well.

I’m constantly challenging myself to remember to make it musical. What we sell is in many ways an experience; it’s virtual, you know, there’s not a physical thing that you hold or strum, but the experience is very important, and we’re doing all we can to enhance it in every respect. What’s exciting about this software evolution is being able to do that directly, and not having to ask a hardware company if they can produce a watered-down version of my idea; I can actually do the real thing now, and it’s just a matter of time, patience, and resources. But it’s a great time.

CHARLIE STEINBERG, STEINBERG

Were you schooled in electronic music or audio engineering?
Charlie Steinberg: I had no schooling like that. I studied music at a conservatory in Münster, Germany, with the main instrument being guitar. I then worked several years in various studios in and around Hamburg as a freelance audio engineer prior to founding the company. But my motivations came from electronics in general. I used to develop and build electronic circuits like analog synthesizers and step-sequencers.

What was it about electronic music and/or audio software that initially intrigued you?
CS: When I discovered the potential of computers as “soldering without the soldering iron,” I started to evaluate the possibilities of computers for music creation. But I was into analog synths and any available gear that would be able to control these devices. An early one I remember was the Davolisint (synthmuseum.com/davoli/davsint01.html). It had an interesting, huge lever for controlling pitch-bend. The first serious unit I owned was a Micromoog. Did I learn from it? Well, I spent countless hours finding possible modulation routings and parameters . . . but devices like the Moogs were very exciting. I just definitely felt that computers had much more potential than what was achievable initially.

What was the first step in your development of new software?
CS: Figuring out hardware issues. There were MIDI interfaces for the C-64, but there was no Internet or the like to easily find out how to control this hardware as a programmer. The next step was how to describe timing information. The first sequencer we did (Multitrack Recorder on the C-64) had a storage format that was almost identical to what ended up as the MIDI File Format. . . . But the main problems were memory issues. We squeezed the sequencer, score editor, drum editor, and key editor into 64 kilobytes. We actually had much less than that, because there had to be space for storing the recorded data and so on, and it was just 64KB altogether. In order to even access some parts of that memory, one would have to switch between several memory layers. There were no comments in the code because that would have eaten way too much precious memory. . . .

My partner Manfred Ruerup, though, was into new gear, as he sold keyboards in a music store, so we learned a lot from using E-mu-2, Synergy, GS II, DX7, step-sequencers and the like, which we used in the studio. That was on the user-level; technically, I bought electronic magazines for information and eventually developed some synths on my own before computers appeared. Because of the modular nature of analog synths, there was never any lack of vision as to how to expand sound generation, modulation, or filtering.

But take Cubase, for example. Early in the company history, Werner Kracht, another very smart programmer, came in. He already had a program on the C-64: he introduced the “locators” (the left/right locators, found on the arrange page of Cubase), things like that. When the Atari came out (the 1040ST), he started to make a program that was much more elaborate; that became Pro24, which predated Cubase. Pro24 had a panel, so it was kind of like a tape machine, and one problem with it was that you couldn’t really see your musical structure. That’s why we sat down and tried to think of how to make it visible. And that’s how Cubase was born. We really thought about it, because the Atari (like the Mac) let us use a graphical user interface to make the events visible and playable.

When shopping for software, users commonly now see terms such as meta synthesis, physical modeling, transform multiplication, enharmonic and cellular morphing, sonic dispersion, convolution, transwave cycling, phase vocoding, harmonic stretching, virtual analog synthesis. Has the process been one of one idea leading to another? Is it somewhat like a detective story, where one development makes the big picture clearer, makes it possible to imagine further what might be accomplished?
CS: It IS somewhat like detective work. But for the big picture, not much has changed from the urge to replace the $1,000,000 studio with something more affordable. If I look back that way, the evolution of DAWs was “foretold” 25 years ago: MIDI as a starting point, a realtime engine to cope with timing issues (the “deck”), hard disk recording (the “tape”), the virtual mixing desk, virtual effects, and virtual instruments, with more to come. Developing new synthesis/modeling techniques is still a process of putting things together in a unique way. This starts with looking at it from a user’s point of view and imagining what could be done, for instance by combining algorithms in a unique way. Implementation has become much easier today, but at the same time some areas have expanded to a remarkable complexity, like time-stretch algorithms. But sound generation will certainly continue to evolve as more innovative algorithms are developed.

JIM COOPER, MOTU

In the early 1980s, when Apple Computer gave birth to the Lisa, the computer predecessor to the Mac, MOTU engineers took one look at it and said, “Now that’s a computer musicians could relate to.” More specifically, the Lisa’s graphical user interface allowed notes to be displayed on the screen in what-you-see-is-what-you-get fashion. This was incredibly exciting stuff. And work quickly began on a music software program called Professional Composer, one of the first commercially available music engraving programs. Programming was done in Pascal. Yes, the same programming language used by legions of Programming 101 students in the ’80s.

As work on Professional Composer progressed, there was another technological development: Musical Instrument Digital Interface. Once the MIDI spec was ratified, right around the time that the Mac became commercially available (1984–1985), work quickly began on a MIDI sequencer software application for the Mac: Performer.

Performer was an entirely different ball game from Professional Composer from an engineering standpoint, because of the realtime performance considerations. Programming was done in both C and native machine code, because of the importance of tight realtime performance tolerances. Timing had to be tight because musicians would categorically reject computers in the studio if it wasn't. As a result, most of the first 10 years of Performer and Digital Performer programming centered on ways of getting around all of the obstacles in the Mac hardware and software architecture that prevented accurate realtime performance. Performer Version 1, and many subsequent versions, went directly to the Mac motherboard's timing crystal, completely bypassing the clock provided by the Mac system. Originally, much of Performer was written in machine language (the native code of the hardware chips on the Mac motherboard) because that was the only way to achieve the timing accuracy the hardware was capable of. Machine code was much shorter and more concentrated than C. Like the difference between poetry and prose.

By June of 1987, we were already gearing up for Performer Version 2. This upgrade was going to have a mind-boggling array of new features including a Conductor track (what the heck is that?) and (gasp!) meter changes and tempo changes. For Performer users, who could barely figure out how to insert their Performer floppy disc into their $5,000 512K “fat” Mac, Performer 2 represented a breakthrough in what could be done in a recording studio. I still remember talking to one hapless customer, new to computers, who dutifully reported, “I peeled the hard plastic cover off the floppy disc and inserted it into the computer, but nothing happens.”

But we had enormous interface design challenges, as well. How could we possibly package all of this functionality in a comprehensible and intuitive design? We were breaking ground, and the early user interface conventions we designed in Performer, such as the transport controls and the event list, still reverberate today in just about every piece of music software you can find. Early on, we decided that the most probable road to success was to base the software design on the standard, familiar recording hardware of the time: tape recorders and mixing consoles. Plus, we threw in an overall graphic design that built on the conventions being established by the Mac System software (scroll bars, etc.). We added artistic headers to dialog boxes that Performer users overwhelmingly appreciated. Here was a computer not acting and looking like a computer, but instead acting more like a familiar — and cool-looking — recording device.

FRÉDÉRIC BRUN, ARTURIA

Paris, January 1998 — It was a rainy day and I decided to take the Metro from the Saint-Paul station to the Bastille. The train was almost empty and I immediately found a seat. While opening my book, I noticed that two eyes were fixed upon me from the other side of the coach, and in some way his face looked familiar. After a moment of reflection, I said to myself, "I've seen those sideburns before." But would I go and talk with him, or would I stay seated, enjoying my book and pretending I had not seen him?

I didn't spend too much time mulling this over before my internal debate was cut short by a smiling face moving in my direction. So I stood up to shake hands with Mr. Gilles Pommereuil, a man I had met in Grenoble three years before. Gilles was the conductor of the university orchestra in which I had played the violin, but we had only rehearsed together a few times, since I had left shortly after he took over.

In 10 minutes, Gilles explained to me that he was doing a Master's with Ircam, a French laboratory dedicated to music research. As a software engineer, he had also recently started a personal project called Continuo, a software workshop that connected sound modules (very much in the Reaktor style). I was personally trying to get my own company off the ground (international trading) while finishing my law studies at the Pantheon-Sorbonne University. Gilles was thinking of creating a business around Continuo and was looking for outside advice. We decided to meet again a few days later in a café called Les Trois Maillets, about 200 meters from Notre Dame and one of the few places where you can hear good music while drinking wine.

January 2000 — Launched in France two months before, Storm Music Studio, Arturia’s first product, was officially introduced to the international community at NAMM. In the basement of the Downtown L.A. Convention Center, our small stand displayed the very arrogant slogan: “Time for a new paradigm.” I remember that a lot of people came to the booth and asked “what do you mean exactly by ‘paradigm’?”

It was the same year Reason was introduced, at the Steinberg booth one floor up. We went to look at it and immediately saw that it would be a great piece of software. The approach we had taken for Storm had actually been quite influenced by another Propellerhead product: ReBirth. A year and a half before this NAMM show, we had agreed that Continuo was far too complex. So we had decided to use it as a development tool for a new software product that would let you make music in an easy and gratifying way.

By looking at ReBirth and Sonic Foundry's Acid, we had come to the conclusion that a pattern-oriented virtual music studio, offering a way to time-stretch samples while using sound synthesis, could be a very good entry point into the music production world. I remember we were very much concerned about the complexity of music software, ranging from Cubase to the Koblo instruments. We had in mind to do things differently; Gilles was particularly keen on creating some sort of new user experience. Storm 1.0 did not come with menus and offered, for example, a nice trash bin for getting rid of useless samples. Needless to say, Storm 1.5 was a victory of realism over idealism.

January 2002 — At NAMM yet again, we had our first meeting with Dr. Robert Moog. Nearly a year before, one of us had started working on an advanced algorithm for the digital emulation of the audio characteristics of analog circuits. Since the results were very encouraging, we had decided to make a dedicated virtual instrument, and our choice was to recreate the Moog Modular.

Bob Moog was not so enthusiastic about software synthesizers at first, but he was open-minded and came by our booth at NAMM to see a very early prototype of what would become the Moog Modular V. We met him again at Musikmesse a couple of months later, and he expressed more interest in the project. For a long time, a drawing Bob had made at our Musikmesse meeting hung in our office. It shows the importance of soft-clipping, something he was very keen on finding in our re-creation. In October of the same year, I went to Asheville, where I introduced the beta version of the Moog Modular V. Bob asked for further refinements, and we finally secured his endorsement at NAMM 2003, once he was truly satisfied with what he heard. This was a strong push for us and, along with the quality of the audio algorithms (which we ended up marketing as TAE), it helped the Moog Modular V stand out in the crowd of software synths.
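
(For readers unfamiliar with the term: soft-clipping is the gradual saturation an overdriven analog circuit applies to signal peaks, as opposed to the abrupt flattening of a digital hard clip. Here is a minimal sketch of the idea in C, using a common tanh shaper purely for illustration; this is not Arturia's TAE algorithm.)

#include <math.h>

/* Hard clip: everything beyond the rails is flattened abruptly. */
static double hard_clip(double x)
{
    if (x >  1.0) return  1.0;
    if (x < -1.0) return -1.0;
    return x;
}

/* Soft clip: a smooth saturating curve that rounds the peaks off instead of
 * slicing them.  `drive` (> 0) controls how hard the curve saturates; the
 * division by tanh(drive) normalizes the output so that x = 1 maps to 1. */
static double soft_clip(double x, double drive)
{
    return tanh(drive * x) / tanh(drive);
}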

ERNST NATHORST-BÖÖS, PROPELLERHEAD

So, you wanted to hear how Reason was conceived? Sure, but I'm not entirely convinced it's such an exciting story. As is often the case with these things, it was 1% inspiration and 99% perspiration.

Reason was Propellerhead Software’s third product. After two pretty successful attempts with the smaller scale programs ReCycle and ReBirth, we felt that in 1998 it was time for us to take a stab at a major application.

We're in this business because we love music, computers, synths, and studio equipment, so there was never any question of which direction to take. We already had ReBirth, and that simplified the decision-making process further. In a way you could say that Reason is the program we actually wanted to create when we designed ReBirth. We made some prototypes in 1998 that are evidence of that.

In 1996, when we did ReBirth, computers weren't fast enough for an application like Reason, and we were too small a team to pull off such a hubris project. This led to ReBirth being limited to two monophonic synths with pattern programming, but when the Reason project started in 1998, we felt that both the world and we ourselves were ready for polyphony, realtime playing, and more advanced instruments and effects.

We — in this context — are the three founders of Propellerhead Software: Marcus Zetterquist, Peter Jubel, and myself, Ernst Nathorst-Böös. We are all still active in the company, in the roles of development manager, DSP specialist, and CEO, respectively. As you might understand, there were a lot of other people involved even in the Reason 1.0 project, people working on coding, designing graphics, creating sounds, etc., but the core design was made by the three of us. To this day that is how we work: we basically lock ourselves in a room and battle it out. It can be pretty fierce at times. . . .

The two things in the design that people usually ask about are the rack and its devices, and the cables. The idea of creating devices that resembled physical counterparts came fairly easily, since we already had that type of design in ReBirth. What took more time to settle on was the metaphor of the 19" rack, but being gearheads ourselves, the idea of a never-ending studio rack seemed appealing. The idea of using 'real' cables for patching audio and control signals between the instruments came from Marcus. It took him a while to convince Peter and me of it. It was quite a bit of work to get the cables to behave as naturally as they do, but it paid off, both in usability and in carrying the metaphor all the way.

Another major design decision we had to make was regarding the sequencer; surprisingly, not how it should work, but whether to include it at all. At the time, Steinberg distributed our products, and the worry was that the sequencer would make Reason too much of a competitor to Cubase. What tipped the scale was — again — the experience from ReBirth; we really wanted to create a self-contained environment where people could create complete pieces of music without using any other software.

The same thinking lies behind our decision not to include plug-in support. We feel that one aspect of what makes the program so appealing is that everyone has exactly the same setup, that all settings are stored with your song, and that you can easily share songs with others without the risk of incompatibility problems. There's no audio recording in the program simply because we think it would kill the focus and appeal of the application; besides, there are so many great audio recording apps out there, and it's super-simple to integrate them with Reason via ReWire. Every now and then a rumor appears that we don't do audio recording because of some deal with Steinberg, but there's absolutely no truth in that.

Reason 1.0 saw the light of day in December 1999. The development time was actually surprisingly short: eight months once all the design was done and development started at full throttle. Since then, we have updated it three times to address various needs and shortcomings: to make the instrument selection complete, to make sure all devices are up to the standards of the most professional applications, and to make sure Reason delivers regardless of what musical style you're into, be it dance, techno, hip-hop, R'n'B, whatever. Finally, with 3.0, we tore down the last barrier between our hardware counterparts and us. There is now absolutely no rational reason to choose a piece of hardware over Reason.

JIM HEINTZ: HOW THE ARP 2600 CHANGED MY LIFE

What makes a person drop everything in their life and decide to create a software ARP 2600 in an already crowded soft-synth market? A combination of two things: a long-time desire to own a hard-to-find ARP 2600 in perfect working condition, and the belief that current software synths could be improved upon greatly with a bit of hard work. It turns out it was a lot of hard work, but we're getting ahead of ourselves.

It was 1983 when I fell in love with the original ARP 2600 synthesizer while taking an Electronic Music class at Santa Barbara City College. Even then I was frustrated by the small amount of time I got to spend with this amazing machine during the class labs. I wanted one of my own, but the price tag of about $2,600 was daunting to a starving student.

At the same time, while studying computer programming, I worked for a start-up company in Studio City called Home Studio Inc. (HSI), with the lofty goal of allowing average people to have a studio in their own home (go figure . . . who'd ever want one of those?). We worked on an early MIDI sequencing and patch librarian system based on the Apple II computer. In order to keep development funded, we repaired synthesizers and amplifiers on the side. This experience was the perfect chance for me to get my hands on some great gear, and also on the schematics and repair manuals (including those for the ARP 2600) that allowed me to understand how the hardware worked.

HSI was ahead of its time, which, combined with a poor choice of hardware platform, caused the company to fail promptly. However, the seed was planted in my 18-year-old mind for what I wanted to do in the future.

By October 2003, all those memories had had enough time to stew completely in my brain, and the project was completely obvious: an ARP 2600 emulation so precise that even a side-by-side comparison could not tell the two apart (something not achieved by any other emulation I had heard), with an interface that brings the user as close to the original product as can be done in software while still offering the best features of today's digital synth environments.

I started out working on the core design of the Way Out Ware TimewARP 2600 on the side, and after about three months, there was a glimmer of hope. Finally after all those years away from the ARP 2600, I was able to begin sensing and feeling it again, although in a completely new and different way. I decided it was time to jump in head first, and make this project my full-time obsession rather than just a part-time hobby. Now came the hard part . . . convincing my wife that I was not about to dump our future down the drain on a pipe dream. Happily, she understood and supported this new endeavor.

Having developed many different products for many different people over the prior 20 years, I knew what I needed to do in order to get a high-quality product to market in a timely fashion. I also knew what it takes to fail, and I would not let that happen to my own dream.

Locating a working ARP 2600 was essential, but this proved difficult and expensive in today's used market and on eBay. I contacted Santa Barbara City College and, as luck would have it, they still had the ARP 2600 I used as a student buried in the back corner of their class/studio. They were kind enough to let me visit it and take as many measurements and photos as I needed. After a couple of visits, they decided that I was serious, and lent me the ARP 2600 long enough to complete the project. This was a middle-era grey-faced ARP 2600 with several broken sliders and a keyboard that barely worked; in general, it needed a lot of help. I decided to bring this machine to my friend from the HSI days, Rich Diemer of Diemer Keyboards and Amps in Studio City, California, to see if we could restore it to its past glory. Thanks to his great talents, Rich was able to make this ARP 2600 sing like it was new, and now Way Out Ware had access to a great ARP 2600 to make our emulation as accurate as possible.

Being a musician myself, and also a perfectionist, I wasn’t content with producing a so-so emulation of the venerable ARP 2600. In order to have my name in the credits, it had to be both a perfect emulation, and also as controllable as possible so musicians could truly express themselves with it.

Achieving the sound quality goal required reinventing the state of the art for software oscillators and filters, and also for digital signal routing. To model the oscillators and filter, we enlisted the help of a Stanford University CCRMA grad named David Lowenfels, who is a genius with DSP. I had already produced oscillators and filters that sounded really good, but really good was not good enough for me. They had to be exactly perfect. With David's help, we achieved our sound quality goal for the filter and oscillators. One of the extreme goals was to accurately model the use of an audio signal as a modulator, which is the Achilles' heel of the other software emulations on the market, and also the source of many of the most interesting and useful sounds created by modular synths. I created the signal routing algorithms that could accomplish this, along with everything else in the TimewARP 2600, using extreme care, spectrum analyzers, scopes, and a very sensitive set of ears. Don't take my word for it. Try it for yourself.
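
(A minimal sketch of what "an audio signal as a modulator" means in code, purely for illustration and not Way Out Ware's implementation: the modulator has to be evaluated for every single sample so the carrier's frequency moves at audio rate and produces FM sidebands; updating the frequency only once per block collapses the effect into mere vibrato.)

#include <math.h>

#define SAMPLE_RATE 44100.0
#define TWO_PI      6.28318530717958647692

typedef struct {
    double phase;  /* 0 .. 2*pi */
    double freq;   /* base frequency in Hz */
} osc_t;

/* Advance a sine oscillator by one sample at frequency `freq`. */
static double osc_tick(osc_t *o, double freq)
{
    double out = sin(o->phase);
    o->phase += TWO_PI * freq / SAMPLE_RATE;
    if (o->phase >= TWO_PI) o->phase -= TWO_PI;
    return out;
}

/* Render `n` samples of a carrier whose frequency is modulated, per sample,
 * by another oscillator's audio output. */
void render_fm(osc_t *carrier, osc_t *modulator, double depth_hz,
               double *out, int n)
{
    for (int i = 0; i < n; i++) {
        double mod = osc_tick(modulator, modulator->freq); /* audio-rate */
        out[i] = osc_tick(carrier, carrier->freq + depth_hz * mod);
    }
}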

Achieving the controllability goal required taking MIDI controller mapping to a new level as well. Being a violinist, I understand what a musician wants in terms of expressive control, so we provide the ability to set individual ranges and sensitivity curves on each of the 72 sliders, knobs, and switches. Also, a single MIDI controller, or velocity, aftertouch, or the modulation wheel, can be assigned to control a group of sliders, knobs, and switches, each with its own range and sensitivity, to provide an unparalleled level of expressive control. Way Out Ware's TimewARP 2600 also supports micro-tunings for composers who require something other than the standard equal-tempered 12-note scale.
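
(A minimal sketch of the mapping idea described above, with hypothetical names rather than the actual TimewARP API: each control gets its own range and sensitivity curve, and one incoming controller value can drive a whole group of such mappings at once.)

#include <math.h>

typedef struct {
    double min;    /* value when the controller is at 0   */
    double max;    /* value when the controller is at 127 */
    double curve;  /* 1.0 = linear; >1.0 = more resolution near the bottom */
} cc_map_t;

/* Convert a raw MIDI controller value (0..127) into a control value. */
double cc_apply(const cc_map_t *m, int cc_value)
{
    double norm   = cc_value / 127.0;            /* normalize to 0..1 */
    double shaped = pow(norm, m->curve);         /* sensitivity curve */
    return m->min + shaped * (m->max - m->min);  /* scale into range  */
}

/* One controller driving a group of sliders, each with its own range and
 * curve, the way grouped assignments are described above. */
void group_apply(const cc_map_t *maps, double *targets, int count, int cc_value)
{
    for (int i = 0; i < count; i++)
        targets[i] = cc_apply(&maps[i], cc_value);
}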

To make the TimewARP even more accurate, we needed access to information about the design and creation of the original instrument. I began researching the original team that built the ARP 2600 and, of course, wanted to contact Alan R. Pearlman (ARP), Philip Dodds, David Friend, and anyone else involved with its creation, to better answer questions about the ARP 2600's behavior that were not easily measured or found in the schematics. We were also interested in supplying the best possible user's manual with the TimewARP 2600. The original manual is a classic, which worked as a tutorial on synthesis in general as well as a guide to the ARP 2600.

My break came when I found Jim Michmerhuizen on the Internet and approached him about redoing the original ARP 2600 manual for the TimewARP 2600. He was very skeptical at first, since he was aware of the flaws in other software modular synthesizers, and was less than enthusiastic about the idea. Basically, we had to prove to him that the TimewARP 2600 was different. Way Out Ware had already overcome many of the flaws present in other software synths, and we were up to Jim's challenge. I flew back to Boston and met with Jim, and, as chance would have it, he was still in touch with Alan R. Pearlman. After he got to hear my emulation, he arranged a lunch with Alan, and I got to meet my idol and demonstrate the TimewARP 2600 to him.

I asked Alan everything that was still unknown about the original ARP 2600. I also got to see the very lab in which Alan developed the first circuits for the ARP 2500 and ARP 2600. Alan, being quite computer literate, has been using the TimewARP 2600 since shortly after that visit. Since then, Alan has become quite a supporter of Way Out Ware and continues to offer insight and encouragement on our product development plans.

Needless to say, Jim Michmerhuizen signed on to produce the manual, and he also put me in touch with Philip Dodds, who provided answers to some of the toughest questions about the emulation. With these fantastic resources (especially Jim), Way Out Ware was able to get the first version of the TimewARP 2600 to market and achieved the quality (sound and otherwise) and controllability goals we had set for ourselves.

Since then, some pretty interesting things have been happening. Mostly people realizing that a software emulation of a classic analog synthesizer can in fact sound very, very good.