You may already be aware of the self-indulgent, 1980s-influenced electronic boogie that is my alter-ego ‘DATAStream’ – it’s pretty much what I get up to when I’m not working on freelance projects, and something I’ve been enjoying for a few years now. I’m currently working towards an LP release in the summer, and ‘Night Walk’ is taken from said release. Enjoy!
Hi folks! A welcome break from ‘Digital Dark Age’, I’m sure, for any readers of this blog – moving on to a project I’ve been working on for some time with Massive Galaxy studios, based in Portugal. I’ve composed a number of pieces of music for the game, including the title theme, which you’ll be able to hear in the forthcoming trailer. For now, here’s a selection of music from the various scenes and levels in-game.
As mentioned in my previous post my interactive concept album / EP will be available to download and play on March 21st – however cassette / digital pre-orders are open now! Any support would be greatly appreciated so please share this with a friend, family member or even an intrigued pet!
Here’s a short teaser for my upcoming interactive concept album ‘Digital Dark Age’ – part of my third-year degree major project. The album features five songs implemented inside a custom-built first-person environment in Unreal Engine.
Recently I was asked to write some music for a short first person game level created by a fellow student at Futureworks here in Manchester. It was an interesting piece to work on, and whilst the designer gave me a lot of creative freedom with this piece, the theme of the level was quite particular.
As some of you may know, I also write and record music under my synthetic alter-ego ‘DATAStream’, mostly for self-indulgent experiments with MIDI, synthesis and lots of 1980s production techniques. But on the rare occasion, just every so often, a song emerges. This is the tale of one such song…
A little while back I started working on a short composition using mostly hardware synths and drum machines in my studio in Chesterfield, sequencing MIDI parts in Reason and sending them out to various bits of kit. Recording was done using a Soundcraft Ghost LE desk with a little EQ here and there. I could go on about the fun geeky audio stuff for a long time, but this post is really about the fact that I was lucky enough to work with JJ Mist on the topline over the last couple of months – and I’m very excited to share it with everyone!
You’ll be able to hear and download ‘The Spark’ on December 16th via the DATAStream Bandcamp page. The release also includes the extended mix (in true 1980s 7-inch fashion!), so keep an eye out!
For fellow audio-geeks here is the equipment list for the single…
Drums – Roland JV-80 / Boss DR550
Bass – Yamaha DX7s
Synths – Yamaha DX27 / Roland D5 / Yamaha YS200
GTR – Hondo H76 Strat
Desk (Tracking) – Soundcraft Ghost LE and Tascam 38 multitrack 1/2 inch recorder
Desk (Mixing) – Neve VR Legend with AMS 16 Reverb
About a year ago I embarked on a mission to write some multitimbral compositions using some of my more capable digital hardware synths. The first, ‘JV80’, was written and sequenced in Reason, with a single MIDI cable sending all of that lovely MIDI goodness into the Roland JV-80 (my first synth!). I often feel the need to explain what ‘multitimbral’ means today, and why it’s so awesome, since it’s something we (myself included) take for granted in DAWs and modern music production – if you’re curious, you can read a little about it in my previous post here.
More recently, I used my Yamaha MU5 tone generator for a similar composition, this time using Logic as the sequencer running 10 MIDI channels. The MU5 is an inexpensive sound module – mostly GM-type MIDI sounds – but its handy size makes it great for noodling on the go, and for multitimbral compositions such as this.
As you can see in the picture, Logic is being used to sequence this song – which was created all in one morning. There are no FX on the MU5 so it might sound a little dry… but I think it makes up for it in character!
Whilst the MU5 is 16-part multitimbral, it only has 28-note polyphony, so you’d probably never use all 16 channels simultaneously – that said, it would be useful for keeping things interesting in a song, especially if you include some program change messages as well. If anything, this forces you to be a little more creative, and I find that limiting myself during writing and recording often tends to be beneficial in the long run.
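To give a flavour of what that 28-voice budget means in practice, here’s a little Python sketch (not part of my actual Logic workflow – the event format is just an assumption for illustration) that works out the peak number of notes sounding at once in a sequence:

```python
def peak_polyphony(events):
    """events: list of (time, kind) tuples, where kind is 'on' for a
    Note On and 'off' for a Note Off. Returns the maximum number of
    notes sounding simultaneously - compare this with the MU5's 28."""
    active = 0   # notes currently held down
    peak = 0     # worst case seen so far
    for _, kind in sorted(events):
        active += 1 if kind == "on" else -1
        peak = max(peak, active)
    return peak

# A four-note chord over a two-note pad: peak is 6, well under 28.
song = [(0, "on"), (0, "on"),                        # pad enters
        (1, "on"), (1, "on"), (1, "on"), (1, "on"),  # chord on top
        (2, "off"), (2, "off"), (2, "off"), (2, "off"),
        (3, "off"), (3, "off")]
print(peak_polyphony(song))  # 6
```

Stack a few sustained pads and some long release tails and you can hit that ceiling faster than you’d think – which is exactly why note-stealing becomes audible.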
Next in line for multitimbral duties will probably be my Roland D5 but this will probably be in a few months (hopefully not a full year again!). I really enjoy working in this way and it’s very rewarding if you can handle the odd polyphonic headache!
So it’s taken me around a month to complete my game audio showreel level ‘DATA Labs’ and whilst it’s been a steep learning curve it has also been hugely enjoyable! I’d had experience using FMOD before but I’d never delved so deeply into Unreal’s Blueprints system until now, creating custom blueprints for audio events and getting to grips with how FMOD talks to Unreal.
You can watch the full showreel above, and listen to me babbling about how I created certain parts of the level. What I really wanted to get out of this project was a deeper understanding of production workflow and how to develop an audio engine within a game. In this sense it’s been very successful: I’m not only more confident in implementing audio, I also feel my knowledge of level design, asset management and optimisation has increased as part of this process. I will continue to use this level to develop my skills, and I have further plans to extend the gameplay to include the research ‘facility’ in the basement, so I can make use of more FMOD snapshots and music cues in particular.
I’d love any constructive feedback and comments you may have, and if you want to know more about a certain element of the level then please ask, I love discussing game audio and you can find me on Twitter or just drop me an email. You can watch the full level playthrough below.
One of the bigger projects I’ve embarked on for my portfolio recently is a complete audio redesign of the trailer for ‘Event Horizon’ (1997). It’s always been a favourite film of mine and I thought it would be a suitable challenge.
In a previous redesign video I worked on the opening sequence to Gerry Anderson’s UFO (check it out here), and I think this was good preparation for what lay ahead in Event Horizon. It was a longer piece of footage, even after I’d edited it slightly to remove scenes containing dialogue, and it also offered a lot more opportunity for creative sound design.
I started looking at the sound design elements of the trailer before getting stuck into composing some suitable music – mainly so I could see how the feel of the trailer changed over time and how I could fit the music around this. I used a mixture of library sounds (explosions, mainly) and original audio assets recorded at home and in various locations over the past few years. All the audio was heavily edited to fit the video, with pitch, EQ, reverb and modulation effects.
This was the first time I’ve created a project using this many audio assets (over 200 in total) and the screenshot above shows a little of how my project was laid out. Some sounds are used throughout the video (such as some low frequency rumble sounds) whilst others are just used for particular moments.
I recently posted here about some sound design I worked on using a slate chopping board in my kitchen, which ended up forming part of the sound of the ship’s ‘Gravity Drive’, and I’ve found my library of sounds incredibly useful for this project. I used sounds from air conditioners, hand dryers, vending machines, cassette players and a lot more. I also used some excellent sounds from the Logic sound library and manipulated these so they worked – mostly pitch shifting or spectral effects such as blurring. Whilst I’d love to go out and record some electrical explosions and suchlike, it wasn’t really possible for this project! I feel it’s important, if you use sound libraries, to give them credit – and to have the right to use them in the first place.
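If you’ve never thought about what a pitch shift actually does, the crudest version is just resampling – the old tape trick. This isn’t how Logic does it (its algorithms preserve duration), but a minimal Python sketch makes the idea concrete: shifting up by n semitones reads through the audio 2^(n/12) times faster, so the clip comes out shorter and brighter.

```python
def pitch_shift(samples, semitones):
    """Tape-style pitch shift by resampling with linear interpolation.
    Shifting up by n semitones reads through the audio 2**(n/12)
    times faster, so the result is shorter - like a sped-up tape."""
    ratio = 2 ** (semitones / 12)
    out = []
    pos = 0.0
    while pos < len(samples) - 1:
        i = int(pos)
        frac = pos - i
        # linear interpolation between the two neighbouring samples
        out.append(samples[i] * (1 - frac) + samples[i + 1] * frac)
        pos += ratio
    return out

clip = [0.0, 0.5, 1.0, 0.5, 0.0, -0.5, -1.0, -0.5]
up_an_octave = pitch_shift(clip, 12)  # ratio 2: half the samples
print(len(clip), len(up_an_octave))   # 8 4
```

Real pitch-shifting and spectral tools are far cleverer than this, but it’s a useful mental model for why heavily pitched-down sounds get that stretched, smeared character.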
I’ll be posting the video without music soon so you’ll be able to hear all the SFX without the music getting in the way and I’ll post more about how I created some of the sounds in the near future.
During my sequencing module at Point Blank I spent a lot of time researching MIDI – the one-stop communications protocol for musical instruments and other fun equipment! It was whilst researching that I truly started to grasp how powerful it is.
I’m a bit of a synth collector, although I can’t afford the more expensive (and more famous) analogue synths, so I lean towards the digital synth era, from the early 80s onwards. A lot of my synths are MULTITIMBRAL, which I never quite got my head around until I looked into MIDI further. Without boring you… multitimbral instruments can play more than one ‘patch’ at a time – a patch referring to a particular preset, drum bank etc. This differs from polyphony, which is the maximum number of notes a synth can play simultaneously.
Anyway – I’ve hooked up my Roland JV-80 which has a multitimbral mode and used Reason to sequence 7 channels of MIDI. I then sent them directly out to the synth using a single MIDI DIN cable, where I selected the appropriate patches, adjusted levels and mixed in chorus and reverb to boot!
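For the curious: the reason one DIN cable can carry all those independent parts is baked into the protocol itself. Every channel message packs the channel number (0–15) into the low four bits of its status byte. A little Python to illustrate the byte layout (nothing I actually needed to write – Reason handles all of this for you):

```python
def note_on(channel, note, velocity):
    """Build a raw 3-byte MIDI Note On message.
    Status byte = 0x90 (Note On) with the channel (0-15) in the low
    nibble - this is how 16 parts share a single DIN cable."""
    if not 0 <= channel <= 15:
        raise ValueError("MIDI channels are 0-15")
    return bytes([0x90 | channel, note & 0x7F, velocity & 0x7F])

def program_change(channel, patch):
    """2-byte Program Change: tells one channel which patch to load,
    so each part of a multitimbral synth can use a different sound."""
    return bytes([0xC0 | channel, patch & 0x7F])

# Middle C on channel 7 (shown as 'channel 8' on most front panels):
msg = note_on(7, 60, 100)
print(msg.hex())  # '973c64'
```

The synth simply sorts the incoming stream by that channel nibble and routes each message to the matching part – which is all ‘multitimbral mode’ really is.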
REMEMBER everything you hear is coming from one synth! 🙂