Virtual reality or digital actuality?

It is now the mid-to-late '90s, and the synthesizer world was about to change again. With more synthesizers appearing on the market that used nothing but software models running on generic or custom-made processors, computers were becoming key to sound generation; the problem was that there was not much software available to actually turn a desktop computer into a synthesizer. Programs did exist, but they tended to take up most of the computer's processing power, and the soundcards of the day that the average user could afford were not up to the job of professional audio work.

Then things started to change...

With the world still jumping on the bandwagon of the Acid House legacy, buying up as many TB-303 bass line synthesizers, TR-808 and TR-909 analogue drum machines and clones of these devices as they could get their hands on, an enterprising group of Swedish programmers came up with a program called Rebirth, and suddenly the sounds of these now legendary synthesizers and drum machines were within reach. With version 1 of Rebirth, you could have two authentic-sounding TB-303 bass line synthesizers and one TR-808 inside your computer for a fraction of the price of a single genuine TB-303. Synchronize this over MIDI with either a hardware or computer-based sequencer and you had classic sounds synced up with the rest of your studio.

Rebirth sold well, and the real beauty of the software was its CPU efficiency. A relatively modest PC of the day could run it, and it didn't need the best and most expensive soundcard to get the most out of it. While it had been building for a number of years, the software studio revolution was now just around the corner.

More and more software synthesizers were appearing on the market, and soundcards that could handle more than just stereo input and output were appearing at lower and lower prices. Computer-based sequencers could now manipulate audio data in a similar way to MIDI data, and had software mixer sections with software-based effects and EQ units. Through additional software such as MIDI Yoke, they could also control software synthesizers running on the same computer by re-directing MIDI information internally. The problem was that multi-client drivers (drivers that allow more than one application at a time to access the soundcard) were not that common, so most of the time only one software synthesizer could be run at once, unless you had a VERY expensive sound system in your computer. All this changed with the launch of Cubase VST 4 on the Mac and Cubase VST 3.7 on the PC in 1999.

Steinberg had launched the VST (Virtual Studio Technology) versions of Cubase a few years earlier, which allowed users to apply effects within the Cubase mixer just as you could in a hardware-based studio. The only limits were the computer's memory and CPU, as effects such as reverb are very CPU intensive. Version 4 (Mac) and version 3.7 (PC) added the ability to load software synthesizers into the VST environment, controlled and edited like any other part of the software, with the audio output going straight into the Cubase mixer: no need for multi-client drivers or applications such as MIDI Yoke. Better still, the format was open, meaning that anyone could develop synthesizers or effects and install them.

The market for software synthesizers was growing, not only with commercial offerings, but with cheap and free programs from ordinary people who had their own ideas on what a synthesizer should be. Some of the free ones have gained a reputation and a following, such as Crystal, which is available on both Mac and PC and is a truly unique, extremely capable synthesizer. Small businesses started up, one of which, Muon Software, now licenses its sampler and synthesizer engines to other companies, though its own range of instruments is still widely respected. But there was another piece of software about to hit the market: from the makers of Rebirth came Reason, a complete software studio...

Reason was a lot more than Rebirth. Rebirth was fixed in what it could offer (2 x TB-303, 1 TR-808 and 1 TR-909 by this time), while Reason was totally configurable, right down to how the audio was routed, with virtual cables (that swung when the back of the rack was brought into view!). The Rewire protocol introduced with Rebirth was updated to allow more MIDI channels to be passed into Reason from sequencers, and more audio channels out of Reason into sequencers. Version 1 of Reason was launched towards the end of 2000 and contained a number of built-in effects, a loop player, a drum machine, a sample player and an analogue-style synthesizer. The track Spirals was produced in Reason V1 on a 600MHz Celeron laptop with 128MB RAM, so once again the programmers from Sweden had made an exceptionally efficient system. The real feature, though, was that if you needed a synthesizer, you just added one to the virtual rack. Need another? Then just add one. The same went for drum machines, effects, mixers, or whatever was available within the virtual studio at the time.

There was a fault, though: the initial effects were good, except for the reverb, which tended to flatten the sound and make the mix dull. Spirals was sent via Rewire to Cubase VST, where better quality reverbs were used. This weakness was commented on in the company's online support forums, and subsequent versions had better quality effects. The initial weakness did not hamper sales, though, and the software continues to go from strength to strength today. Its real strength is in its analogue synthesizer sounds: for example, this track shows how the bass can really cut through and how big pad sounds can be created, while this track shows how complete arrangements can now be done without any additional hardware.

Like the modelled hardware synthesizers, software synthesizers were now emulating analogue synthesis, and on occasion actual classic analogue synthesizers, with uncanny accuracy. So what were the programmers doing to get around the static sound of early digital synthesis? The answer is that they were programming the faults of analogue into the software to get closer to the original sound.

Part 2 of this series explained that the components used to build the oscillators, filters, amplifiers and envelope generators affected the actual shape of the waveforms and the filter and envelope response, creating a unique character in every synthesizer. By creating waveforms that were not perfect, and adding a little randomness to the sound generation, some of the old character of analogue synthesis could be re-created. Put next to a true analogue modular system, software systems still sound that bit weaker, though they do sound authentic.
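As a toy illustration of the idea (this is my own sketch, not code from any real soft synth, and the drift and noise amounts are arbitrary guesses), here is a sawtooth oscillator whose pitch wanders slightly and whose output carries a little noise, mimicking the instability of analogue components:

```python
import random

def naive_saw(phase):
    """Ideal, perfectly static sawtooth in the range -1..1."""
    return 2.0 * (phase % 1.0) - 1.0

def analogue_style_saw(freq, sample_rate, n_samples,
                       drift=0.003, noise=0.002, seed=42):
    """Sawtooth with small random pitch drift and amplitude noise.

    drift and noise are illustrative amounts, not measured from
    any real instrument.
    """
    rng = random.Random(seed)
    phase = 0.0
    current = freq
    out = []
    for _ in range(n_samples):
        # Let the pitch wander around the target, clamped to +/- 1%.
        current += rng.uniform(-drift, drift) * freq
        current = min(max(current, freq * 0.99), freq * 1.01)
        phase += current / sample_rate
        # Imperfect waveform plus a touch of noise on the output.
        sample = naive_saw(phase) + rng.uniform(-noise, noise)
        out.append(max(-1.0, min(1.0, sample)))
    return out

samples = analogue_style_saw(110.0, 44100, 1024)
```

Played side by side, the "wandering" version sounds noticeably less sterile than the ideal sawtooth, which is exactly the effect the programmers were after.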

With the arrival of the software synthesizer, synthesis engines suddenly became more creative, with new and interesting ways to create sounds. One of the holy grails of synthesis is the re-synthesis engine, which could now be possible with the processor power available. Re-synthesis takes a sound, in a similar way to a sampler, and breaks it down into its component frequencies so it can be stored, manipulated and played like any synthesized sound. The theory is based on the fact that adding sine waves together at the right frequencies and at the right times can build up more complex waveforms. If a sound can be processed and broken down into component sine waves, every aspect of that sound can be changed and/or edited. Systems do exist that can do this, though the process of analyzing the sound information takes time, even with today's fast processor designs, so a practical and cost-effective re-synthesis system is still some way from release. It is coming, though, and may well be on a desktop near you soon, if not blasting out from every hit single on the radio!
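The analysis-then-rebuild idea can be sketched in a few lines (again, a minimal toy using a naive discrete Fourier transform, nowhere near the sophistication of a real re-synthesis engine): break one frame of audio into its component partials, then rebuild it by summing those partials back together.

```python
import cmath
import math

def analyze(frame):
    """Naive DFT: break a frame into its component partials."""
    n = len(frame)
    return [sum(frame[t] * cmath.exp(-2j * math.pi * k * t / n)
                for t in range(n)) / n
            for k in range(n)]

def resynthesize(partials):
    """Rebuild the frame by summing the partials back together.
    Once the sound exists as partials, each one could be scaled,
    shifted or silenced before this step - that is the editing
    freedom re-synthesis promises."""
    n = len(partials)
    return [sum((c * cmath.exp(2j * math.pi * k * t / n)).real
                for k, c in enumerate(partials))
            for t in range(n)]

# One cycle of a square-ish wave, analyzed and rebuilt:
frame = [1.0 if t < 32 else -1.0 for t in range(64)]
partials = analyze(frame)
rebuilt = resynthesize(partials)
```

Even this brute-force version is O(n squared) per frame, which hints at why full re-synthesis of long, high-sample-rate recordings was such a heavy job for the processors of the day.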

It is not only analogue systems that have been captured and re-created in software: Yamaha's DX7 and Korg's M1 and Wavestation synthesizers have also been re-created, on occasion with improvements over the hardware originals. Systems have been created that integrate different synthesis methods into one program, creating a single powerful instrument, and now complete instruments have been built from computers in dedicated configurations.

The first system to do this was the Gekko. With a large colour touch screen display, it ran a stripped-down version of Windows XP and a Steinberg VST engine, which also allowed users to download their own VST instruments onto the machine, assuming they were compatible with the Gekko's interface. Opening the Gekko revealed a standard PC motherboard with hard drive, USB, network and firewire ports. The soundcard could be chosen from a number of available options, which could be loaded into the system's PCI slot or connected via firewire or USB. This system was close to what Korg had promised with the Oasys system back in 1993, but the Gekko had its own limitations. One was the VST engine, which was not quite compatible with all VST instruments, so its sound palette was a little limited. Then there was the high price: you could build a similar unit yourself for less than half the cost, though you would lose the nice keyboard case and touch screen LCD. The Gekko did sell and, as far as I am aware, is still available today; however, it has competition.

The original Oasys may never have really worked or been released, but Korg never forgot the concept, and in 2005 the world finally saw the new Oasys keyboard system, which, to be honest, was everything the original Oasys concept promised and a lot more besides. Like the Gekko, the Oasys has a touch screen LCD and is based on a modern-day computer system; however, its operating system is Linux-based and highly optimised for audio applications. Its full-size weighted keyboard has a quality and feel all of its own, and for the price it should. If money were no object, many studio owners (me included) would have had an Oasys months ago, though many of us would need bigger houses to store and use the instrument, as it is huge! Like the original Oasys concept, additional synthesis engines can be added, and multiple engines can be used at the same time. The system has onboard effects and a comprehensive sequencer, as well as copious amounts of RAM, which can be expanded as needed. While the Gekko uses off-the-shelf pro audio hardware, the Oasys has its own dedicated audio subsystem, with a quality and clarity that have to be heard to be believed. Physical modelling is possible, and like the Prophecy and Z1, plucked models are included, alongside other string and woodwind models. No doubt in years to come it will be bettered, but for the moment at least, the Oasys IS the ultimate synthesis system on the market.

So that is pretty much it for this series, but the world of synthesis still moves on. Soon there will be new methods and new technologies, but for now they are something to speculate on. With all this software and hardware emulation sounding so good, why are analogue systems still so sought after? Well, with the emulations, every digital synthesizer is still identical to the next one on the production line, and no amount of programming can really eliminate that. Each analogue synthesizer was sufficiently different to have its own subtle sound, something the real fans of analogue systems love so much. Indeed, new analogue systems are available today from the synthesizer master Bob Moog (now sadly departed), and a niche market has always existed for such hardware. At the end of the day, software and hardware emulations of analogue rely on computers using random numbers, and as any hardware designer will tell you, computers are incapable of generating true random numbers; they only simulate randomness (but that is another series for someone else on another website!). So the sound never quite gets there, though today it is incredibly close. The truth is, if you want truly authentic analogue sound, you need an analogue synthesizer, and with that, I bring this series to a close.

Thank you for reading.