Nick Collins


I'm now Reader in Composition at Durham University, and the actively maintained version of my website is [HERE].

This website is maintained only for historical links.




[LINK] iPhone/iPod Touch Apps

[LINK] iPhone development links

[] Demonstration Xcode project for audio input and output using RemoteIO; acts as a play-through patch cord, ready for you to extend with your own audio processing/synthesis

[] Demonstration Xcode project for getting float samples from any Music Library track for processing with RemoteIO (adapted from code for BBCut; requires at least iOS 4.1)

[] Source code for TOPLAPapp

SuperCollider 3

A lot of my SC work is based around creating new UGens, and lives inside the SuperCollider source or the sc3-plugins project. Independent projects are listed here.

SCMIR SuperCollider Music Information Retrieval Library for audio content analysis; version 1.0. Feature extraction, plotting features, similarity matrix, novelty curve, section boundary detection, beat tracking, onset detection, arbitrary feature segmentation, dynamic time warping, machine learning, SCMIRLive. The associated Chromagram, SensoryDissonance and FeatureSave plugins are now part of sc3-plugins.
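The similarity-matrix and novelty-curve steps follow the classic self-similarity approach: compare every pair of feature frames, then slide a checkerboard kernel along the diagonal so section boundaries show up as peaks. A minimal pure-Python sketch of that idea (illustrative only, not SCMIR's code; the function names are mine):

```python
def similarity_matrix(features):
    """Cosine self-similarity between every pair of feature frames."""
    def cos(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = sum(x * x for x in a) ** 0.5
        nb = sum(y * y for y in b) ** 0.5
        return dot / (na * nb) if na and nb else 0.0
    return [[cos(a, b) for b in features] for a in features]

def novelty_curve(sim, half=4):
    """Foote-style novelty: slide a checkerboard kernel along the diagonal."""
    n = len(sim)
    curve = [0.0] * n
    for i in range(half, n - half):
        total = 0.0
        for r in range(-half, half):
            for c in range(-half, half):
                # same-quadrant entries count positively, cross-quadrant negatively
                sign = 1.0 if (r < 0) == (c < 0) else -1.0
                total += sign * sim[i + r][i + c]
        curve[i] = total
    return curve
```

A boundary between two internally homogeneous sections appears as a sharp peak in the curve at the boundary frame.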

autocousmatic program for automatic generation of electroacoustic works, incorporating machine listening: standalone and source

stealthissound recipes from the first half of the 'Steal this Sound' book (Mitchell Sigman 2011) adapted for SuperCollider.

wavelets SC3 plug-ins for discrete wavelet transform analysis and resynthesis (analogous to FFT-PV_UGen-IFFT chains; here you get DWT-WT_UGen-IDWT)
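The analogy can be made concrete with the simplest wavelet, the one-level Haar transform, which splits a signal into averages and differences and reconstructs it exactly. An illustrative pure-Python sketch (not the plug-in source):

```python
def haar_dwt(signal):
    """One-level Haar DWT: averages (approximation) and differences (detail)."""
    avg = [(signal[i] + signal[i + 1]) / 2.0 for i in range(0, len(signal), 2)]
    det = [(signal[i] - signal[i + 1]) / 2.0 for i in range(0, len(signal), 2)]
    return avg, det

def haar_idwt(avg, det):
    """Inverse one-level Haar DWT: perfect reconstruction of an even-length input."""
    out = []
    for a, d in zip(avg, det):
        out.append(a + d)
        out.append(a - d)
    return out
```

Processing the detail coefficients between the two calls is the wavelet analogue of a PV_UGen sitting between FFT and IFFT.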

PolyPitch SC3 plug-in for multiple fundamental frequency tracking, after Anssi Klapuri's 2008 paper 'Multipitch analysis of polyphonic music and speech signals using an auditory model'

SourceSeparation SC3 plug-in for live source separation, using non-negative matrix factorisation on the power spectrogram to discover the sources and mixing matrix, and spectral masking on resynthesis
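The core factorisation approximates a non-negative matrix V (the power spectrogram, bins by frames) as the product W @ H, where the columns of W are spectral templates for the discovered sources and the rows of H their activations over time. A toy pure-Python version using the standard Lee-Seung multiplicative updates (an illustrative sketch; the plug-in itself works on live spectral frames and adds spectral masking for resynthesis):

```python
import random

def matmul(A, B):
    """Plain-Python matrix product."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def transpose(A):
    return [list(row) for row in zip(*A)]

def nmf(V, rank, iterations=500, seed=1):
    """Euclidean-distance NMF via Lee-Seung multiplicative updates: V ~ W @ H."""
    rng = random.Random(seed)
    n, m = len(V), len(V[0])
    # strictly positive random initialisation keeps the updates well defined
    W = [[rng.random() + 0.1 for _ in range(rank)] for _ in range(n)]
    H = [[rng.random() + 0.1 for _ in range(m)] for _ in range(rank)]
    eps = 1e-9
    for _ in range(iterations):
        Wt = transpose(W)
        num, den = matmul(Wt, V), matmul(matmul(Wt, W), H)
        H = [[H[i][j] * num[i][j] / (den[i][j] + eps) for j in range(m)]
             for i in range(rank)]
        Ht = transpose(H)
        num, den = matmul(V, Ht), matmul(W, matmul(H, Ht))
        W = [[W[i][j] * num[i][j] / (den[i][j] + eps) for j in range(rank)]
             for i in range(n)]
    return W, H
```

The multiplicative form guarantees W and H stay non-negative, and the Euclidean objective is non-increasing under these updates.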

sc3-plugins contributions including SC3 plug-ins for advanced sound analysis, such as the tracking phase vocoder and spectral modeling synthesis; further machine listening UGens such as Tartini, Qitch (constant-Q pitch tracker) and Concat (live concatenative synth); auditory modeling (gammatone filter and hair-cell models); and anti-aliasing oscillators (band-limited oscillators from research by Vesa Valimaki, Juhan Nam, and colleagues)

SLUGens: a set of SC3 plug-ins from my non-standard sound synthesis experiments. Includes new nonlinear oscillators, breakpoint set and buffer manipulations, miscellaneous filters and helpful functions. [pre-built for MacIntel 32/64-bit] [source]

experimental Clang UGen code, precompiled for OS X 10.6 only (though the source should be portable); a draft of embedding an LLVM JIT compiler inside a SuperCollider UGen. Uses Clang behind the scenes to create LLVM intermediate code for compilation.

bbcut2 beat tracking, event analysis and automated audio cutting library (Released under the GNU GPL)

PCSet Library 1.0 SC3 code for generating pitch class universes, released under the GNU GPL

Reinforcement Learning research code; untidy, but may be helpful as an accompaniment to my ICMC 2008 paper on live musical agents

BBCut1.3 The SC3 port of the automated audio cutting library, now with real-time onset detection support and an onset finder GUI. If you desire automated jungle, algorithmic drill and bass and recursive audio cutting, plus lots of sample and live audio processing power... (Released under the GNU GPL.) Note that bbcut2 is the newer and more powerful version of this, but some people may still wish to play around with the old one.
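Onset detection in such a library is typically built from a detection function over frames plus peak picking. As a flavour of the idea only, here is a toy energy-flux detector in Python (far simpler than BBCut's actual detector; frame size and threshold are illustrative choices):

```python
def frame_energies(signal, frame_size=64):
    """Mean-square energy of consecutive, non-overlapping frames."""
    return [sum(x * x for x in signal[i:i + frame_size]) / frame_size
            for i in range(0, len(signal) - frame_size + 1, frame_size)]

def onsets(signal, frame_size=64, threshold=0.1):
    """Frames where energy rises by more than `threshold` over the previous frame."""
    e = frame_energies(signal, frame_size)
    return [i for i in range(1, len(e)) if e[i] - e[i - 1] > threshold]
```

A real detector would work on spectral flux with adaptive thresholding rather than raw frame energy, but the detect-then-peak-pick shape is the same.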

There may also be some material of use on this workshop page, including MIDI file analysis and demos of non-standard sound synthesis


[LINK] A few Max/MSP externals, ported from SuperCollider, like ~weakly and ~lpcanalyzer

[LINK] ll~: on-the-fly learning system; extracts features and discovers timbral clusters. See the help file; hit the 'learn 1' button to start once audio is turned on. Give it some time to settle, learning about its inputs, then set it to find clusters. The outputs give you the cluster index closest to the current input, and the recent feature data. More detail in the following research report:

(2011) [PDF] "LL: Listening and Learning in an Interactive Improvisation System"
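The learn-then-cluster behaviour described above can be illustrated with a tiny online k-means-style learner: accumulate feature frames, nudge the nearest centroid towards each incoming frame, then report the winning cluster index for new input. A hedged pure-Python sketch, not the external's actual algorithm:

```python
def nearest(centroids, frame):
    """Index of the centroid closest (squared Euclidean) to a feature frame."""
    def dist2(c):
        return sum((a - b) ** 2 for a, b in zip(c, frame))
    return min(range(len(centroids)), key=lambda i: dist2(centroids[i]))

def learn(frames, k, passes=10, rate=0.1):
    """Online k-means: seed centroids from early frames, then nudge the winner."""
    centroids = [list(f) for f in frames[:k]]
    for _ in range(passes):
        for frame in frames:
            w = nearest(centroids, frame)
            centroids[w] = [c + rate * (x - c) for c, x in zip(centroids[w], frame)]
    return centroids
```

As with ll~, once the centroids have settled, feeding in a fresh frame and taking `nearest` gives the assigned cluster index for the current input.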


[LINK] Code to accompany a paper on dictionary-based methods for cross-synthesis

[LINK] Code to accompany a paper on even more errant sound synthesis

[LINK] Code to accompany a paper on chord enumeration: Python counting code, and C++ brute force enumeration of representatives
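The counting problem here is the classic one of pitch-class sets up to transposition: treat each set as a 12-bit mask and keep the smallest of its twelve rotations as the canonical representative. A brute-force sketch in Python (the paper's enumeration code is C++; Burnside's lemma confirms the total of 352 classes, counting the empty and full sets):

```python
def canonical(mask, n=12):
    """Smallest n-bit rotation of a pitch-class set encoded as a bitmask."""
    full = (1 << n) - 1
    rotations = (((mask >> r) | (mask << (n - r))) & full for r in range(n))
    return min(rotations)

def transposition_classes(n=12):
    """One canonical representative per set class under transposition."""
    return sorted({canonical(mask, n) for mask in range(1 << n)})
```

Any single pitch class canonicalises to bit 0, and the whole chromatic set is its own representative; enumerating representatives this way agrees with the Burnside count (1/12)(2^12 + 2^6 + 2*2^4 + 2*2^3 + 2*2^2 + 4*2) = 352.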


[LINK] TOPLAPapp re-created in javascript via the Web Audio API (requires a compatible browser, like Chrome)

[LINK] Stubject ody: generative study in audiovisual synchrony; events are sometimes visual only, sometimes audio only, and sometimes both. Web Audio API again; uses audiolib.js.