This example illustrates how BarBitURythme can be used to generate new rhythm sequences from a trained model.
Depending on whether the new sequences sound pleasing, the user can give feedback to the model by adding some of the generated sequences to the dataset and retraining. Over time, the model absorbs the user's preferences and style, and it therefore becomes more likely to generate sequences that sound good to that user.
Let's see:
If barbiturythme is not on our Python module search path (sys.path), we add it.
In [1]:
import os, sys
sys.path.append( os.path.join(os.path.dirname(os.getcwd()), 'barbiturythme') )
In [2]:
import numpy as np
import subprocess
import barbiturythme
from barbiturythme import bbur_gene
We load a dataset made of encoded 4bars (a 4bar is a set of 4 consecutive bars) of trap music (hip hop). More precisely, it is loaded from ../data/&lt;io.data_folder_name&gt;/&lt;io.data_file_name&gt;.dat.
In [3]:
io = bbur_gene.bbur_io()
io.data_folder_name = '4bars_4beats_4subs_3bands'
io.data_file_name = 'hh_00'
io.n_freq_int = 3
observations, observations_bin = io.load_data()
To see what the data look like, we print the first bar (measure) of 4bar number 0, i.e. the first 16 time steps. In our encoding (see the encoding notebook example for more details), observations_bin holds io.n_freq_int binary values per time step (one per frequency band), while observations holds the equivalent integer for each step.
In [4]:
observations_bin[0][0:16], observations[0][0:16]
Out[4]:
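To make the two representations concrete, here is a minimal standalone sketch of an integer-to-binary rhythm encoding. The helpers below are re-implementations for illustration only, not the library's `bbur_gene.int_to_bin`; in particular the bit order (most significant bit first) is an assumption.

```python
def int_to_bin(x, n_bits):
    # Decompose integer x into its n_bits binary digits,
    # most significant bit first (e.g. 5 -> [1, 0, 1] with 3 bits).
    return [(x >> (n_bits - 1 - i)) & 1 for i in range(n_bits)]

def bin_to_int(bits):
    # Inverse mapping: a list of binary digits back to an integer.
    out = 0
    for b in bits:
        out = (out << 1) | b
    return out

# With io.n_freq_int = 3 frequency bands there are 2**3 = 8 possible
# combinations of simultaneous hits per time step.
for x in range(8):
    assert bin_to_int(int_to_bin(x, 3)) == x

print(int_to_bin(5, 3))  # -> [1, 0, 1]
```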
We initialize a hidden Markov model with 2**io.n_freq_int observable symbols and n_markov_states hidden states.
In [6]:
n_markov_states = 70
hmm = io.load_hmm( n_markov_states )
hmm.epochs = 30
hmm.init_model()
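To see what such a model is computing, here is a generic scaled forward algorithm for a discrete-observation HMM. This is a textbook sketch with made-up toy parameters, not the library's `hmm` API.

```python
import numpy as np

def forward_loglik(obs, pi, A, B):
    """Log-likelihood of an integer observation sequence under a discrete HMM.
    pi: (S,) initial state probs; A: (S, S) transitions; B: (S, O) emissions."""
    alpha = pi * B[:, obs[0]]
    loglik = np.log(alpha.sum())
    alpha /= alpha.sum()          # rescale to avoid underflow
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        loglik += np.log(alpha.sum())
        alpha /= alpha.sum()
    return loglik

# Toy 2-state, 2-symbol model.
pi = np.array([0.6, 0.4])
A = np.array([[0.7, 0.3], [0.4, 0.6]])
B = np.array([[0.9, 0.1], [0.2, 0.8]])
print(forward_loglik([0, 1, 0], pi, A, B))
```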
We initialize a PGBE model, which decomposes each 4bar into 4 equal-length subsequences.
In [9]:
rho = 4
pgbe = io.load_pgbe( rho )
pgbe.fit_model( observations_bin, 1e-3 )
We initialize a combined PGBEhmm model
In [10]:
pgbehmm = barbiturythme.PGBEhmm( pgbe, hmm )
We initialize a rhythm sequence to be optimized.
In [11]:
pgbehmm.lambda_ = 5.0
pgbehmm.set_x_past( observations[-1][0:16] )
print(pgbehmm.x_past, pgbehmm.x_future)
We can then watch pgbehmm.x_future move around until it settles on the point where the sequence pgbehmm.x_past + pgbehmm.x_future is most likely to appear under the combined PGBEhmm model.
In [12]:
pgbehmm.fit_x_future()
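One plausible way to search a discrete sequence space like this is greedy coordinate ascent over the symbols. The sketch below illustrates that idea with a toy scoring function; the function names and the toy score are hypothetical and only stand in for the PGBEhmm log-likelihood, since the library's actual `fit_x_future` algorithm is not reproduced here.

```python
def fit_future(x_past, x_future, score, n_symbols=8, max_sweeps=10):
    # Greedy coordinate ascent: repeatedly set each future position to the
    # symbol maximizing score(x_past + x_future), until a sweep changes nothing.
    x_future = list(x_future)
    for _ in range(max_sweeps):
        changed = False
        for i in range(len(x_future)):
            best = max(range(n_symbols),
                       key=lambda s: score(x_past + x_future[:i] + [s] + x_future[i + 1:]))
            if best != x_future[i]:
                x_future[i] = best
                changed = True
        if not changed:
            break
    return x_future

# Toy score: prefer a future that alternates kick (4) and hi-hat (1).
target = [4, 1, 4, 1]
score = lambda seq: -sum((a - b) ** 2 for a, b in zip(seq[-4:], target))
print(fit_future([4, 1, 4, 1], [0, 0, 0, 0], score))  # -> [4, 1, 4, 1]
```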
Let's call this sequence x_pred
In [13]:
x_pred = np.array(list(pgbehmm.x_past)+list(pgbehmm.x_future))
x_pred_bin = np.array([ bbur_gene.int_to_bin( x, io.n_freq_int) for x in x_pred ])
print(x_pred)
We write x_pred to a Csound score file that csound can render.
In [14]:
io.print_score_3bands( x_pred_bin )
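As a rough idea of what such a score might contain, here is a sketch that writes one Csound i-statement (`i <instr> <start> <dur>`) per hit, mapping each of the three frequency bands to its own instrument. The function name, instrument numbering, and step duration are assumptions for illustration; the exact format produced by `io.print_score_3bands` may differ.

```python
def write_score(pattern_bin, path, subdiv_dur=0.125):
    # pattern_bin: sequence of (band0, band1, band2) binary triples; column j
    # says whether band j (e.g. instr 1=kick, 2=snare, 3=hihat) fires then.
    with open(path, 'w') as f:
        for t, step in enumerate(pattern_bin):
            for band, on in enumerate(step):
                if on:
                    f.write(f"i {band + 1} {t * subdiv_dur:.3f} {subdiv_dur:.3f}\n")
        f.write("e\n")  # Csound end-of-score statement

write_score([[1, 0, 1], [0, 0, 1]], 'demo.sco')
print(open('demo.sco').read())
```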
We call the bash script gener_wav.sh (in ../bin/) to generate a .wav file that represents x_pred, using Roland TR-808 kick drum, snare, and hi-hat samples. We then play the file using aplay.
*Note that csound could be replaced by a fairly simple Python script, and aplay can be substituted by any program that plays .wav files.
In [16]:
path_to_script = '/'.join(os.getcwd().split('/')[:-1]) + '/bin/gener_wav.sh'
cmd = 'xterm -hold -e ' + path_to_script + ' ' + io.data_file_name
p = subprocess.Popen(cmd , shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
p.wait()
#we play it
cmd2 = 'xterm -hold -e aplay hh_00.wav'
p2 = subprocess.Popen(cmd2 , shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
p2.wait()
Out[16]:
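Note that the `xterm -hold -e` wrapper assumes an X display is available. On Python 3.5+ a plainer option is `subprocess.run`, which waits for the command and captures its output directly; here `echo` stands in for the real gener_wav.sh invocation.

```python
import subprocess

# Run the command directly (no terminal emulator needed) and capture output.
# 'echo' is a placeholder for: [path_to_script, io.data_file_name]
result = subprocess.run(['echo', 'gener_wav.sh hh_00'],
                        capture_output=True, text=True)
print(result.returncode, result.stdout.strip())
```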
If we happen to dance uncontrollably as we listen to it, we may want to add it to our dataset and retrain our models on it.
In [16]:
io.append_data( [x_pred] )
We update the observations
In [17]:
observations, observations_bin = io.load_data()
We update the model parameters (and save them to the relevant files)
In [20]:
hmm.fit_model( observations )
io.save_hmm( hmm )
pgbe.fit_model( observations_bin, 1e-3 )
io.save_pgbe( pgbe )
The PGBEhmm model has now been updated, and we generate a new sample.
In [23]:
pgbehmm.set_x_past( observations[-10][0:16] )
print(pgbehmm.x_past, pgbehmm.x_future)
pgbehmm.fit_x_future()
And so on.
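The whole human-in-the-loop cycle above can be sketched as a small driver function. All four callables are hypothetical stand-ins for the concrete library calls used in this notebook (generation, playback via csound/aplay, the listener's verdict, and append_data plus retraining).

```python
def feedback_loop(generate, play, approve, append_and_retrain, n_rounds=3):
    # Generate a sequence, play it, and fold it back into the dataset
    # (with retraining) whenever the listener approves.
    for _ in range(n_rounds):
        seq = generate()
        play(seq)
        if approve(seq):
            append_and_retrain(seq)

# Stub run: approve everything and collect what gets kept.
kept = []
feedback_loop(generate=lambda: [1, 2, 3],
              play=lambda s: None,
              approve=lambda s: True,
              append_and_retrain=kept.append)
print(len(kept))  # -> 3
```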