In [ ]:
%matplotlib inline
import seaborn
import numpy, scipy, matplotlib.pyplot as plt, librosa, IPython.display, urllib.request
There is no written component to be submitted for Part 1. This section is intended to acquaint you with Python, the IPython notebook, and librosa.
When you see a cell that looks like this:
In [ ]:
plt.plot?
that is a cue to use a particular command, in this case, `plot`. Run the cell to see documentation for that command. (To quickly close the Help window, press `q`.)
For more documentation, visit the links in the Help menu above.
In this exercise, you will segment audio files, extract features from the segments, and analyze them.
Download the file `simple_loop.wav` onto your local machine.
In [ ]:
filename = 'simple_loop.wav'
url = 'http://audio.musicinformationretrieval.com/' + filename
In [ ]:
urllib.request.urlretrieve?
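For example, one way to fetch the file (a sketch, assuming Python 3's `urllib.request` as imported above):
In [ ]:
# Download the audio file into the current working directory
urllib.request.urlretrieve(url, filename=filename)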
Make sure the download worked:
In [ ]:
%ls *.wav
Save the audio signal into an array.
In [ ]:
librosa.load?
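One possible approach, keeping the file's native sample rate; the names `x` and `fs` match the variables used later in this notebook:
In [ ]:
# Load the audio into a floating-point array x with sample rate fs.
# sr=None keeps the file's native sample rate instead of resampling to 22050 Hz.
x, fs = librosa.load(filename, sr=None)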
Show the sample rate:
In [ ]:
print(fs)
Listen to the audio signal.
In [ ]:
IPython.display.Audio?
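For example:
In [ ]:
# Create an inline audio player for the loaded signal
IPython.display.Audio(x, rate=fs)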
Display the audio signal.
In [ ]:
librosa.display.waveplot?
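A sketch, assuming a librosa version that still provides `waveplot` (newer releases renamed it to `librosa.display.waveshow`):
In [ ]:
# Plot the waveform: amplitude versus time
librosa.display.waveplot(x, sr=fs)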
Compute the short-time Fourier transform:
In [ ]:
librosa.stft?
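For example:
In [ ]:
# Complex STFT; rows are frequency bins, columns are frames
X = librosa.stft(x)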
For display purposes, compute the log amplitude of the STFT:
In [ ]:
librosa.logamplitude?
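A sketch; note that `librosa.logamplitude` is deprecated in newer librosa versions in favor of `librosa.amplitude_to_db`:
In [ ]:
# Log of the power spectrogram, for plotting
S = librosa.logamplitude(numpy.abs(X)**2)
# In newer librosa versions, use instead:
# S = librosa.amplitude_to_db(numpy.abs(X))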
Display the spectrogram.
In [ ]:
# Play with the parameters, including x_axis and y_axis
librosa.display.specshow?
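One possibility; the axis settings below are just a starting point to experiment with:
In [ ]:
# Show time on the horizontal axis and log-spaced frequency on the vertical axis
librosa.display.specshow(S, sr=fs, x_axis='time', y_axis='log')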
Find the times, in seconds, when onsets occur in the audio signal.
In [ ]:
librosa.onset.onset_detect?
In [ ]:
librosa.frames_to_time?
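A possible sketch, detecting onsets as frame indices and then converting them to seconds; the names `onset_frames` and `onset_times` are chosen here for illustration:
In [ ]:
# Detect onsets as frame indices, then convert frames to seconds
onset_frames = librosa.onset.onset_detect(y=x, sr=fs)
onset_times = librosa.frames_to_time(onset_frames, sr=fs)
print(onset_times)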
Convert the onset times into sample indices.
In [ ]:
librosa.frames_to_samples?
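For example, reusing `onset_frames` from above:
In [ ]:
# Convert onset frame indices into sample indices
onset_samples = librosa.frames_to_samples(onset_frames)
print(onset_samples)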
Play a "beep" at each onset.
In [ ]:
# Use the `length` parameter so the click track is the same length as the original signal
librosa.clicks?
In [ ]:
# Play the click track "added to" the original signal
IPython.display.Audio?
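One way to do it (a sketch):
In [ ]:
# Synthesize a click track the same length as x, then mix and listen
clicks = librosa.clicks(times=onset_times, sr=fs, length=len(x))
IPython.display.Audio(x + clicks, rate=fs)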
Save 100-ms segments, one beginning at each onset, into an array named `segments`.
In [ ]:
# Assuming these variables exist:
# x: array containing the audio signal
# fs: corresponding sampling frequency
# onset_samples: array of onsets in units of samples
frame_sz = int(0.100*fs)
segments = numpy.array([x[i:i+frame_sz] for i in onset_samples])
Here is a function that adds 300 ms of silence onto the end of each segment and concatenates them into one signal.
Later, we will use this function to listen to each segment, perhaps sorted in a different order.
In [ ]:
def concatenate_segments(segments, fs=44100, pad_time=0.300):
    padded_segments = [numpy.concatenate([segment, numpy.zeros(int(pad_time*fs))]) for segment in segments]
    return numpy.concatenate(padded_segments)
concatenated_signal = concatenate_segments(segments, fs)
Listen to the newly concatenated signal.
In [ ]:
IPython.display.Audio?
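For example:
In [ ]:
# Listen to the segments in their original order
IPython.display.Audio(concatenated_signal, rate=fs)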
For each segment, compute the zero crossing rate.
In [ ]:
# returns a boolean array of zero crossing locations, not a total count
librosa.core.zero_crossings?
In [ ]:
# you'll need this to actually count the number of zero crossings per segment
sum?
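A possible sketch that counts the zero crossings in each segment:
In [ ]:
# librosa.zero_crossings returns a boolean array; summing it counts the crossings
zcrs = numpy.array([sum(librosa.zero_crossings(segment)) for segment in segments])
print(zcrs)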
Use `argsort` to find an index array, `ind`, such that `segments[ind]` is sorted by zero crossing rate.
In [ ]:
# zcrs: array, number of zero crossings in each segment
ind = numpy.argsort(zcrs)
print(ind)
Sort the segments by zero crossing rate, and concatenate the sorted segments.
In [ ]:
concatenated_signal = concatenate_segments(segments[ind], fs)
Listen to the sorted segments. What do you hear?
In [ ]:
IPython.display.Audio?
Repeat the steps above for the following audio files:
In [ ]:
#url = 'http://audio.musicinformationretrieval.com/125_bounce.wav'
#url = 'http://audio.musicinformationretrieval.com/conga_groove.wav'
#url = 'http://audio.musicinformationretrieval.com/58bpm.wav'