In [ ]:
import numpy, scipy, matplotlib.pyplot as plt, sklearn.neighbors, sklearn.preprocessing, librosa, mir_eval, mir_eval.sonify, urllib.request, IPython.display, stanford_mir
plt.rcParams['figure.figsize'] = (14,5)

Exercise: Instrument Classification using K-NN

This exercise is loosely based upon "Lab 1" from previous MIR workshops (2010).

For more on K-NN, see the notebook on K-NN.

For additional help, work through the feature sonification exercise first; it follows similar steps.

Goals

  1. Extract spectral features from an audio signal.
  2. Train a K-Nearest Neighbor classifier.
  3. Use the classifier to classify beats in a drum loop.

Step 1: Retrieve Audio, Detect Onsets, and Segment

Download the file simple_loop.wav onto your local machine.


In [ ]:
filename = 'simple_loop.wav'
urllib.request.urlretrieve?
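
A minimal sketch of the download step, assuming Python 3's urllib.request. The URL below is an assumption; substitute the actual link you were given:


In [ ]:
# Hypothetical URL -- replace with the real location of simple_loop.wav.
url = 'http://audio.musicinformationretrieval.com/simple_loop.wav'
urllib.request.urlretrieve(url, filename=filename)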

Load the audio file:


In [ ]:
librosa.load?
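
For example (librosa resamples to 22050 Hz by default; the names x and sr are reused in the sketches that follow):


In [ ]:
x, sr = librosa.load(filename)
print(x.shape, sr)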

Play the audio file:


In [ ]:
IPython.display.Audio?
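
For example:


In [ ]:
IPython.display.Audio(x, rate=sr)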

Detect onsets:


In [ ]:
librosa.onset.onset_detect?
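
For example (onset_detect returns frame indices by default):


In [ ]:
onset_frames = librosa.onset.onset_detect(y=x, sr=sr)
print(onset_frames)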

Convert onsets from frames to seconds (and samples):


In [ ]:
librosa.frames_to_time?

In [ ]:
librosa.frames_to_samples?
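
For example, converting the detected onset frames into seconds (for sonification) and into sample indices (for segmentation):


In [ ]:
onset_times = librosa.frames_to_time(onset_frames, sr=sr)
onset_samples = librosa.frames_to_samples(onset_frames)
print(onset_times)
print(onset_samples)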

Listen to a click track, with clicks located at each onset, plus the original audio:


In [ ]:
mir_eval.sonify.clicks?

In [ ]:
IPython.display.Audio?
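
One way to do this is to synthesize a click track the same length as the original signal and mix the two:


In [ ]:
clicks = mir_eval.sonify.clicks(onset_times, sr, length=len(x))
IPython.display.Audio(x + clicks, rate=sr)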

Step 2: Extract Features

For each segment, compute the zero crossing rate and spectral centroid.


In [ ]:
librosa.zero_crossings?

In [ ]:
librosa.feature.spectral_centroid?
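
A sketch of per-segment feature extraction. Here each segment is taken to be a fixed-length window after each onset (the 100-ms window length is an arbitrary choice), and each segment is summarized by two numbers: the zero crossing count and the mean spectral centroid.


In [ ]:
def extract_features(segment, sr):
    zcr = librosa.zero_crossings(segment).sum()                            # number of zero crossings
    centroid = librosa.feature.spectral_centroid(y=segment, sr=sr).mean()  # mean spectral centroid
    return [zcr, centroid]

frame_sz = int(0.100*sr)   # assumed 100-ms analysis window after each onset
features = numpy.array([extract_features(x[i:i+frame_sz], sr) for i in onset_samples])
print(features.shape)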

Scale the features to be in the range [-1, 1]:


In [ ]:
sklearn.preprocessing.MinMaxScaler?

In [ ]:
sklearn.preprocessing.MinMaxScaler.fit_transform?
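
For example (note that fit_transform is called on a scaler instance, not on the class itself):


In [ ]:
scaler = sklearn.preprocessing.MinMaxScaler(feature_range=(-1, 1))
features_scaled = scaler.fit_transform(features)
print(features_scaled.min(axis=0), features_scaled.max(axis=0))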

Step 3: Train K-NN Classifier

Use stanford_mir.download_drum_samples to download ten kick drum samples and ten snare drum samples. Each audio file contains a single drum hit at the beginning of the file.


In [ ]:
stanford_mir.download_drum_samples?
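
A sketch of loading the training samples. The directory and file patterns below are assumptions; check the docstring of stanford_mir.download_drum_samples for where it actually saves the files.


In [ ]:
import glob
stanford_mir.download_drum_samples()
# Hypothetical paths -- adjust to wherever the samples were actually downloaded.
kick_signals  = [librosa.load(f)[0] for f in sorted(glob.glob('drum_samples/kick_*.wav'))]
snare_signals = [librosa.load(f)[0] for f in sorted(glob.glob('drum_samples/snare_*.wav'))]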

For each audio file, extract one feature vector. Concatenate all of these feature vectors into one feature table.


In [ ]:
numpy.concatenate?
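
For example, summarizing the first 100 ms of each training sample with the same extract_features function as above, stacking the kick and snare feature vectors into one table, and building a matching label vector (0 = kick, 1 = snare in this sketch). The scaler is refit on the training features here; the same scaler will be applied to the test features in Step 4 so that both live in the same feature space.


In [ ]:
kick_features  = numpy.array([extract_features(s[:frame_sz], sr) for s in kick_signals])
snare_features = numpy.array([extract_features(s[:frame_sz], sr) for s in snare_signals])
train_features = numpy.concatenate((kick_features, snare_features))
labels = numpy.concatenate((numpy.zeros(len(kick_features)), numpy.ones(len(snare_features))))
train_features_scaled = scaler.fit_transform(train_features)   # refit the Step 2 scaler on the training set
print(train_features.shape, labels.shape)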

Step 4: Run the Classifier

Create a K-NN classifier model object:


In [ ]:
sklearn.neighbors.KNeighborsClassifier?
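
For example, with n_neighbors=5 (the scikit-learn default; the choice of k is up to you):


In [ ]:
model = sklearn.neighbors.KNeighborsClassifier(n_neighbors=5)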

Train the classifier:


In [ ]:
sklearn.neighbors.KNeighborsClassifier.fit?
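
Fit the model on the scaled training feature table and its labels from Step 3:


In [ ]:
model.fit(train_features_scaled, labels)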

Finally, run the classifier on the test input audio file:


In [ ]:
sklearn.neighbors.KNeighborsClassifier.predict?
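
Transform the drum-loop segment features with the same scaler that was fit on the training data, then predict a class for each segment (in this sketch, 0 = kick, 1 = snare):


In [ ]:
test_features_scaled = scaler.transform(features)
predicted = model.predict(test_features_scaled)
print(predicted)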

Step 5: Sonify the Classifier Output

Play a "beep" for each detected kick drum. Repeat for the snare drum.


In [ ]:
mir_eval.sonify.clicks?
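
One way to sonify the predictions is to select the onset times predicted as kick (or snare), synthesize clicks at those times, and mix them with the original signal:


In [ ]:
kick_times = onset_times[predicted == 0]    # 0 = kick in the labeling used above
beeps = mir_eval.sonify.clicks(kick_times, sr, length=len(x))
IPython.display.Audio(x + beeps, rate=sr)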

For Further Exploration

In addition to the zero crossing rate, extract the following features (a sketch of some of them appears below):

  • spectral centroid
  • spectral spread
  • spectral skewness
  • spectral kurtosis
  • spectral rolloff
  • MFCCs

Re-train the classifier, and re-run the classifier over the test audio signal. Do the results change?
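
A sketch of some of these additional features. Spectral rolloff and MFCCs are available directly in librosa; spectral spread, skewness, and kurtosis are not, so here they are computed as moments of the average magnitude spectrum treated as a distribution over frequency (one common definition among several).


In [ ]:
def more_features(segment, sr):
    S = numpy.abs(librosa.stft(segment)).mean(axis=1)      # average magnitude spectrum
    freqs = librosa.fft_frequencies(sr=sr)                 # frequency of each STFT bin
    p = S / S.sum()                                        # normalize to a distribution
    centroid = (freqs * p).sum()
    spread = numpy.sqrt((((freqs - centroid)**2) * p).sum())
    skewness = (((freqs - centroid)**3) * p).sum() / spread**3
    kurtosis = (((freqs - centroid)**4) * p).sum() / spread**4
    rolloff = librosa.feature.spectral_rolloff(y=segment, sr=sr).mean()
    mfccs = librosa.feature.mfcc(y=segment, sr=sr, n_mfcc=13).mean(axis=1)
    return numpy.concatenate(([centroid, spread, skewness, kurtosis, rolloff], mfccs))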

Repeat the steps above for more audio files.