
In [0]:
# Copyright 2019 The Magenta Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================

Understanding the Structure of a MusicXML Document

Authored by Prakruti Joshi, Falak Shah, Twisha Naik

Infocusp Innovations Private Limited

MusicXML is a digital sheet music interchange and distribution format. It was created to serve as a universal format for common Western music notation, playing a role similar to the one the MP3 format serves for recorded music.[1] Magenta is a research project exploring the role of machine learning in the process of creating art and music.[2]

The goal of this notebook is to explore one of Magenta's music libraries, musicxml_parser.py. It parses the music data in a MusicXML file into a MusicXMLDocument object whose attributes can be accessed easily and used for further processing. The MusicXMLDocument object can then be converted into a tensorflow.magenta.NoteSequence, a convenient format for machine learning applications.

The contents of this notebook are based on the musicxml_parser library. Apart from the classes declared to catch and resolve errors, the classes below are the major classes defined in the library, and their attributes describe the music file.


In [0]:
import warnings
warnings.filterwarnings('ignore')

import os
import pprint
from magenta.music import musicxml_parser
import xml.etree.ElementTree as ET
import plotly.plotly as py
import plotly.graph_objs as go
from matplotlib import pyplot as plt

from plotly.offline import init_notebook_mode, iplot
init_notebook_mode(connected=False)


WARNING: The TensorFlow contrib module will not be included in TensorFlow 2.0.
For more information, please see:
  * https://github.com/tensorflow/community/blob/master/rfcs/20180907-contrib-sunset.md
  * https://github.com/tensorflow/addons
If you depend on functionality not listed there, please file an issue.


In [0]:
pp = pprint.PrettyPrinter(indent=4, depth=4)

In [0]:
# Generic function to get the methods of a class

def get_method_list(class_name):
    method_list = [func for func in dir(class_name) if callable(getattr(class_name, func)) and not func.startswith("__")]
    return method_list

Class MusicXMLDocument

This is the top-level object representing a MusicXML document. The class is responsible for loading a .xml or .mxl file using the _get_score method: given the path of a MusicXML file, this method returns the score as an xml.etree.ElementTree. If the input file is in .mxl format, the class uncompresses it first (.mxl is a compressed archive containing one .xml file plus metadata files).

After the file is loaded, this class then parses the document into memory using the parse method.

There are two ways of structuring MusicXML files: measures within parts [score-partwise] and parts within measures [score-timewise]. Here, we have a MusicXML file with measures within parts.
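As a sketch of what loading a compressed file involves (not the library's actual implementation — the real parser also handles plain .xml input and malformed files), a .mxl archive can be opened with the standard library alone: it is a ZIP archive whose META-INF/container.xml entry names the root score file. A tiny in-memory .mxl is built here for illustration.

```python
import io
import zipfile
import xml.etree.ElementTree as ET

def get_score(mxl_file):
    """Sketch: extract the score element tree from a .mxl archive.

    META-INF/container.xml points at the root score file via a
    <rootfile full-path="..."> element.
    """
    with zipfile.ZipFile(mxl_file) as archive:
        container = ET.fromstring(archive.read('META-INF/container.xml'))
        score_path = container.find('.//rootfile').attrib['full-path']
        return ET.fromstring(archive.read(score_path))

# Build a tiny .mxl in memory to demonstrate.
container_xml = ('<container><rootfiles>'
                 '<rootfile full-path="score.xml"/>'
                 '</rootfiles></container>')
score_xml = '<score-partwise version="2.0"><part id="P1"/></score-partwise>'

buf = io.BytesIO()
with zipfile.ZipFile(buf, 'w') as archive:
    archive.writestr('META-INF/container.xml', container_xml)
    archive.writestr('score.xml', score_xml)

root = get_score(buf)
print(root.tag)   # score-partwise (measures within parts)
```

Checking the root tag (`score-partwise` vs `score-timewise`) is also how the two structuring conventions can be told apart.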


In [0]:
# Downloading the sample .mxl file used to initialize the MusicXMLDocument object

!wget -c https://github.com/InFoCusp/ui_resources/raw/master/mxml_sample/sample.mxl


--2019-03-29 07:13:52--  https://github.com/InFoCusp/ui_resources/raw/master/mxml_sample/sample.mxl
Resolving github.com (github.com)... 192.30.253.112, 192.30.253.113
Connecting to github.com (github.com)|192.30.253.112|:443... connected.
HTTP request sent, awaiting response... 302 Found
Location: https://raw.githubusercontent.com/InFoCusp/ui_resources/master/mxml_sample/sample.mxl [following]
--2019-03-29 07:13:53--  https://raw.githubusercontent.com/InFoCusp/ui_resources/master/mxml_sample/sample.mxl
Resolving raw.githubusercontent.com (raw.githubusercontent.com)... 151.101.0.133, 151.101.64.133, 151.101.128.133, ...
Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|151.101.0.133|:443... connected.
HTTP request sent, awaiting response... 416 Range Not Satisfiable

    The file is already fully retrieved; nothing to do.


In [0]:
file_name = 'sample.mxl'
musicxml_document = musicxml_parser.MusicXMLDocument(file_name)

In [0]:
type(musicxml_document)


Out[0]:
magenta.music.musicxml_parser.MusicXMLDocument

Internal representation of a MusicXML Document.


In [0]:
pp.pprint(musicxml_document.__dict__)


{   '_score': <Element 'score-partwise' at 0x7fc26b2295e8>,
    '_score_parts': {   'P1': <magenta.music.musicxml_parser.ScorePart object at 0x7fc26b239208>},
    '_state': <magenta.music.musicxml_parser.MusicXMLParserState object at 0x7fc28b5edba8>,
    'midi_resolution': 220,
    'parts': [<magenta.music.musicxml_parser.Part object at 0x7fc26b231cf8>],
    'total_time_secs': 49.500000000000014}

Methods of MusicXMLDocument:


In [0]:
method_list = get_method_list(musicxml_parser.MusicXMLDocument)
pp.pprint(method_list)


[   '_get_score',
    '_parse',
    'get_chord_symbols',
    'get_key_signatures',
    'get_tempos',
    'get_time_signatures']

1. Score

Score is the XML element tree returned by the _get_score method.


In [0]:
type(musicxml_document._score)


Out[0]:
xml.etree.ElementTree.Element

In [0]:
root_tree = musicxml_document._score

for neighbor in root_tree.iter():
    print(neighbor.tag, neighbor.attrib)


score-partwise {'version': '2.0'}
work {}
work-number {}
work-title {}
movement-number {}
movement-title {}
identification {}
creator {'type': 'composer'}
creator {'type': 'poet'}
rights {}
encoding {}
software {}
encoding-date {}
software {}
source {}
defaults {}
scaling {}
millimeters {}
tenths {}
page-layout {}
page-height {}
page-width {}
page-margins {'type': 'both'}
left-margin {}
right-margin {}
top-margin {}
bottom-margin {}
credit {'page': '1'}
credit-words {'font-size': '24', 'default-y': '2014.04', 'default-x': '736.842', 'justify': 'center', 'valign': 'top'}
credit {'page': '1'}
credit-words {'font-size': '12', 'default-y': '1951.17', 'default-x': '1403.51', 'justify': 'right', 'valign': 'top'}
credit {'page': '1'}
credit-words {'font-size': '12', 'default-y': '1951.17', 'default-x': '70.1754', 'justify': 'left', 'valign': 'top'}
part-list {}
score-part {'id': 'P1'}
part-name {}
score-instrument {'id': 'P1-I3'}
instrument-name {}
midi-instrument {'id': 'P1-I3'}
midi-channel {}
midi-program {}
volume {}
pan {}
part {'id': 'P1'}
measure {'width': '492.75', 'number': '1'}
print {}
system-layout {}
system-margins {}
left-margin {}
right-margin {}
top-system-distance {}
attributes {}
divisions {}
key {}
fifths {}
mode {}
time {}
beats {}
beat-type {}
clef {}
sign {}
line {}
note {}
rest {}
duration {}
voice {}
type {}
note {'default-y': '-25.00', 'default-x': '186.28'}
pitch {}
step {}
octave {}
duration {}
voice {}
type {}
time-modification {}
actual-notes {}
normal-notes {}
stem {}
beam {'number': '1'}
notations {}
tuplet {'type': 'start', 'bracket': 'no'}
lyric {'number': '1'}
syllabic {}
text {}
note {'default-y': '-20.00', 'default-x': '237.10'}
pitch {}
step {}
alter {}
octave {}
duration {}
voice {}
type {}
time-modification {}
actual-notes {}
normal-notes {}
stem {}
beam {'number': '1'}
lyric {'number': '1'}
syllabic {}
text {}
note {'default-y': '-15.00', 'default-x': '287.91'}
pitch {}
step {}
octave {}
duration {}
voice {}
type {}
time-modification {}
actual-notes {}
normal-notes {}
stem {}
beam {'number': '1'}
notations {}
tuplet {'type': 'stop'}
lyric {'number': '1'}
syllabic {}
text {}
note {'default-y': '-10.00', 'default-x': '338.72'}
pitch {}
step {}
octave {}
duration {}
voice {}
type {}
time-modification {}
actual-notes {}
normal-notes {}
stem {}
beam {'number': '1'}
notations {}
tuplet {'type': 'start', 'bracket': 'no'}
lyric {'number': '1'}
syllabic {}
text {}
note {'default-y': '-15.00', 'default-x': '389.53'}
pitch {}
step {}
octave {}
duration {}
voice {}
type {}
time-modification {}
actual-notes {}
normal-notes {}
stem {}
beam {'number': '1'}
lyric {'number': '1'}
syllabic {}
text {}
note {'default-y': '-20.00', 'default-x': '440.34'}
pitch {}
step {}
alter {}
octave {}
duration {}
voice {}
type {}
time-modification {}
actual-notes {}
normal-notes {}
stem {}
beam {'number': '1'}
notations {}
tuplet {'type': 'stop'}
lyric {'number': '1'}
syllabic {}
text {}
measure {'width': '299.79', 'number': '2'}
harmony {'print-frame': 'no'}
root {}
root-step {}
kind {'text': ''}
note {'default-y': '-25.00', 'default-x': '19.73'}
pitch {}
step {}
octave {}
duration {}
voice {}
type {}
stem {}
lyric {'number': '1'}
syllabic {}
text {}
note {'default-y': '-30.00', 'default-x': '165.59'}
pitch {}
step {}
octave {}
duration {}
voice {}
type {}
stem {}
beam {'number': '1'}
lyric {'number': '1'}
syllabic {}
text {}
note {'default-y': '-40.00', 'default-x': '231.89'}
pitch {}
step {}
octave {}
duration {}
voice {}
type {}
stem {}
beam {'number': '1'}
lyric {'number': '1'}
syllabic {}
text {}
measure {'width': '288.05', 'number': '3'}
harmony {'print-frame': 'no'}
root {}
root-step {}
root-alter {}
kind {'text': ''}
note {'default-y': '-45.00', 'default-x': '16.01'}
pitch {}
step {}
octave {}
duration {}
voice {}
type {}
stem {}
lyric {'number': '1'}
syllabic {}
text {}
harmony {'print-frame': 'no'}
root {}
root-step {}
kind {'text': '7'}
note {'default-y': '-30.00', 'default-x': '157.67'}
pitch {}
step {}
octave {}
duration {}
voice {}
type {}
stem {}
beam {'number': '1'}
lyric {'number': '1'}
syllabic {}
text {}
note {'default-y': '-40.00', 'default-x': '222.06'}
pitch {}
step {}
octave {}
duration {}
voice {}
type {}
stem {}
beam {'number': '1'}
lyric {'number': '1'}
syllabic {}
text {}
measure {'width': '249.75', 'number': '4'}
harmony {'print-frame': 'no'}
root {}
root-step {}
kind {'text': ''}
note {'default-y': '-50.00', 'default-x': '29.04'}
pitch {}
step {}
octave {}
duration {}
voice {}
type {}
dot {}
stem {}
lyric {'number': '1'}
syllabic {}
text {}
measure {'width': '444.40', 'number': '5'}
print {'new-system': 'yes'}
system-layout {}
system-margins {}
left-margin {}
right-margin {}
system-distance {}
note {}
rest {}
duration {}
voice {}
type {}
note {'default-y': '-25.00', 'default-x': '117.71'}
pitch {}
step {}
octave {}
duration {}
voice {}
type {}
time-modification {}
actual-notes {}
normal-notes {}
stem {}
beam {'number': '1'}
notations {}
tuplet {'type': 'start', 'bracket': 'no'}
lyric {'number': '1'}
syllabic {}
text {}
note {'default-y': '-20.00', 'default-x': '171.89'}
pitch {}
step {}
alter {}
octave {}
duration {}
voice {}
type {}
time-modification {}
actual-notes {}
normal-notes {}
stem {}
beam {'number': '1'}
lyric {'number': '1'}
syllabic {}
text {}
note {'default-y': '-15.00', 'default-x': '226.07'}
pitch {}
step {}
octave {}
duration {}
voice {}
type {}
time-modification {}
actual-notes {}
normal-notes {}
stem {}
beam {'number': '1'}
notations {}
tuplet {'type': 'stop'}
lyric {'number': '1'}
syllabic {}
text {}
harmony {'print-frame': 'no'}
root {}
root-step {}
root-alter {}
kind {'text': 'Maj7'}
note {'default-y': '-10.00', 'default-x': '280.25'}
pitch {}
step {}
octave {}
duration {}
voice {}
type {}
time-modification {}
actual-notes {}
normal-notes {}
stem {}
beam {'number': '1'}
notations {}
tuplet {'type': 'start', 'bracket': 'no'}
lyric {'number': '1'}
syllabic {}
text {}
note {'default-y': '-15.00', 'default-x': '334.43'}
pitch {}
step {}
octave {}
duration {}
voice {}
type {}
time-modification {}
actual-notes {}
normal-notes {}
stem {}
beam {'number': '1'}
lyric {'number': '1'}
syllabic {}
text {}
note {'default-y': '-20.00', 'default-x': '388.62'}
pitch {}
step {}
alter {}
octave {}
duration {}
voice {}
type {}
time-modification {}
actual-notes {}
normal-notes {}
stem {}
beam {'number': '1'}
notations {}
tuplet {'type': 'stop'}
lyric {'number': '1'}
syllabic {}
text {}
measure {'width': '336.45', 'number': '6'}
harmony {'print-frame': 'no'}
root {}
root-step {}
kind {'text': ''}
note {'default-y': '-25.00', 'default-x': '29.04'}
pitch {}
step {}
octave {}
duration {}
voice {}
type {}
stem {}
lyric {'number': '1'}
syllabic {}
text {}
note {'default-y': '-15.00', 'default-x': '189.23'}
pitch {}
step {}
octave {}
duration {}
voice {}
type {}
stem {}
beam {'number': '1'}
lyric {'number': '1'}
syllabic {}
text {}
note {'default-y': '-25.00', 'default-x': '262.04'}
pitch {}
step {}
octave {}
duration {}
voice {}
type {}
stem {}
beam {'number': '1'}
lyric {'number': '1'}
syllabic {}
text {}
measure {'width': '291.90', 'number': '7'}
harmony {'print-frame': 'no'}
root {}
root-step {}
root-alter {}
kind {'text': ''}
note {'default-y': '-10.00', 'default-x': '29.67'}
pitch {}
step {}
octave {}
duration {}
voice {}
type {}
stem {}
lyric {'number': '1'}
syllabic {}
text {}
note {'default-y': '-35.00', 'default-x': '190.06'}
pitch {}
step {}
octave {}
duration {}
voice {}
type {}
stem {}
lyric {'number': '1'}
syllabic {}
text {}
measure {'width': '257.59', 'number': '8'}
harmony {'print-frame': 'no'}
root {}
root-step {}
kind {'text': ''}
note {'default-y': '-25.00', 'default-x': '26.88'}
pitch {}
step {}
octave {}
duration {}
tie {'type': 'start'}
voice {}
type {}
dot {}
stem {}
notations {}
tied {'type': 'start'}
lyric {'number': '1'}
syllabic {}
text {}
measure {'width': '329.02', 'number': '9'}
print {'new-system': 'yes'}
system-layout {}
system-margins {}
left-margin {}
right-margin {}
system-distance {}
note {'default-y': '-25.00', 'default-x': '12.00'}
pitch {}
step {}
octave {}
duration {}
tie {'type': 'stop'}
voice {}
type {}
stem {}
notations {}
tied {'type': 'stop'}
note {}
rest {}
duration {}
voice {}
type {}
note {'default-y': '-25.00', 'default-x': '206.10'}
pitch {}
step {}
octave {}
duration {}
voice {}
type {}
stem {}
beam {'number': '1'}
lyric {'number': '1'}
syllabic {}
text {}
note {'default-y': '-20.00', 'default-x': '266.76'}
pitch {}
step {}
alter {}
octave {}
duration {}
voice {}
type {}
stem {}
beam {'number': '1'}
lyric {'number': '1'}
syllabic {}
text {}
measure {'width': '301.46', 'number': '10'}
note {'default-y': '-15.00', 'default-x': '32.16'}
pitch {}
step {}
octave {}
duration {}
voice {}
type {}
stem {}
lyric {'number': '1'}
syllabic {}
text {}
note {'default-y': '-50.00', 'default-x': '196.90'}
pitch {}
step {}
octave {}
duration {}
voice {}
type {}
stem {}
lyric {'number': '1'}
syllabic {}
text {}
measure {'width': '322.61', 'number': '11'}
harmony {'print-frame': 'no'}
root {}
root-step {}
kind {'text': 'm'}
note {'default-y': '-45.00', 'default-x': '22.23'}
pitch {}
step {}
octave {}
duration {}
voice {}
type {}
stem {}
lyric {'number': '1'}
syllabic {}
text {}
note {'default-y': '-25.00', 'default-x': '178.73'}
pitch {}
step {}
octave {}
duration {}
voice {}
type {}
stem {}
beam {'number': '1'}
lyric {'number': '1'}
syllabic {}
text {}
note {'default-y': '-20.00', 'default-x': '249.87'}
pitch {}
step {}
alter {}
octave {}
duration {}
voice {}
type {}
stem {}
beam {'number': '1'}
lyric {'number': '1'}
syllabic {}
text {}
measure {'width': '377.24', 'number': '12'}
harmony {'print-frame': 'no'}
root {}
root-step {}
kind {'text': ''}
note {'default-y': '-15.00', 'default-x': '14.15'}
pitch {}
step {}
octave {}
duration {}
tie {'type': 'start'}
voice {}
type {}
stem {}
notations {}
tied {'type': 'start'}
lyric {'number': '1'}
syllabic {}
text {}
note {'default-y': '-15.00', 'default-x': '117.43'}
pitch {}
step {}
octave {}
duration {}
tie {'type': 'stop'}
voice {}
type {}
stem {}
beam {'number': '1'}
notations {}
tied {'type': 'stop'}
note {'default-y': '-10.00', 'default-x': '181.99'}
pitch {}
step {}
octave {}
duration {}
voice {}
type {}
stem {}
beam {'number': '1'}
lyric {'number': '1'}
syllabic {}
text {}
harmony {'print-frame': 'no'}
root {}
root-step {}
kind {'text': ''}
note {'default-y': '-20.00', 'default-x': '246.54'}
pitch {}
step {}
octave {}
duration {}
voice {}
type {}
accidental {}
stem {}
beam {'number': '1'}
lyric {'number': '1'}
syllabic {}
text {}
note {'default-y': '-30.00', 'default-x': '311.09'}
pitch {}
step {}
octave {}
duration {}
voice {}
type {}
stem {}
beam {'number': '1'}
lyric {'number': '1'}
syllabic {}
text {}
measure {'width': '349.76', 'number': '13'}
print {'new-system': 'yes'}
system-layout {}
system-margins {}
left-margin {}
right-margin {}
system-distance {}
harmony {'print-frame': 'no'}
root {}
root-step {}
kind {'text': ''}
note {'default-y': '-5.00', 'default-x': '25.02'}
pitch {}
step {}
octave {}
duration {}
voice {}
type {}
stem {}
lyric {'number': '1'}
syllabic {}
text {}
note {'default-y': '-5.00', 'default-x': '194.29'}
pitch {}
step {}
octave {}
duration {}
voice {}
type {}
stem {}
beam {'number': '1'}
lyric {'number': '1'}
syllabic {}
text {}
note {'default-y': '-15.00', 'default-x': '271.22'}
pitch {}
step {}
octave {}
duration {}
voice {}
type {}
stem {}
beam {'number': '1'}
lyric {'number': '1'}
syllabic {}
text {}
measure {'width': '440.58', 'number': '14'}
note {'default-y': '-30.00', 'default-x': '32.17'}
pitch {}
step {}
octave {}
duration {}
tie {'type': 'start'}
voice {}
type {}
stem {}
notations {}
tied {'type': 'start'}
lyric {'number': '1'}
syllabic {}
text {}
note {'default-y': '-30.00', 'default-x': '148.40'}
pitch {}
step {}
octave {}
duration {}
tie {'type': 'stop'}
voice {}
type {}
stem {}
beam {'number': '1'}
notations {}
tied {'type': 'stop'}
note {'default-y': '-25.00', 'default-x': '221.04'}
pitch {}
step {}
octave {}
duration {}
voice {}
type {}
stem {}
beam {'number': '1'}
lyric {'number': '1'}
syllabic {}
text {}
harmony {'print-frame': 'no'}
root {}
root-step {}
kind {'text': 'dim'}
note {'default-y': '-35.00', 'default-x': '293.69'}
pitch {}
step {}
alter {}
octave {}
duration {}
voice {}
type {}
accidental {}
stem {}
beam {'number': '1'}
lyric {'number': '1'}
syllabic {}
text {}
note {'default-y': '-30.00', 'default-x': '366.33'}
pitch {}
step {}
octave {}
duration {}
voice {}
type {}
stem {}
beam {'number': '1'}
lyric {'number': '1'}
syllabic {}
text {}
measure {'width': '276.59', 'number': '15'}
harmony {'print-frame': 'no'}
root {}
root-step {}
kind {'text': '9'}
note {'default-y': '-10.00', 'default-x': '20.98'}
pitch {}
step {}
octave {}
duration {}
voice {}
type {}
stem {}
lyric {'number': '1'}
syllabic {}
text {}
harmony {'print-frame': 'no'}
root {}
root-step {}
kind {'text': '7'}
note {'default-y': '-5.00', 'default-x': '177.29'}
pitch {}
step {}
octave {}
duration {}
voice {}
type {}
stem {}
lyric {'number': '1'}
syllabic {}
text {}
measure {'width': '263.41', 'number': '16'}
harmony {'print-frame': 'no'}
root {}
root-step {}
kind {'text': '7'}
note {'default-y': '-15.00', 'default-x': '26.57'}
pitch {}
step {}
octave {}
duration {}
voice {}
type {}
dot {}
stem {}
lyric {'number': '1'}
syllabic {}
text {}
measure {'width': '474.60', 'number': '17'}
print {'new-system': 'yes'}
system-layout {}
system-margins {}
left-margin {}
right-margin {}
system-distance {}
note {}
rest {}
duration {}
voice {}
type {}
harmony {'print-frame': 'no'}
root {}
root-step {}
kind {'text': ''}
note {'default-y': '-25.00', 'default-x': '125.12'}
pitch {}
step {}
octave {}
duration {}
voice {}
type {}
time-modification {}
actual-notes {}
normal-notes {}
stem {}
beam {'number': '1'}
notations {}
tuplet {'type': 'start', 'bracket': 'no'}
lyric {'number': '1'}
syllabic {}
text {}
note {'default-y': '-20.00', 'default-x': '183.10'}
pitch {}
step {}
alter {}
octave {}
duration {}
voice {}
type {}
time-modification {}
actual-notes {}
normal-notes {}
stem {}
beam {'number': '1'}
lyric {'number': '1'}
syllabic {}
text {}
note {'default-y': '-15.00', 'default-x': '241.08'}
pitch {}
step {}
octave {}
duration {}
voice {}
type {}
time-modification {}
actual-notes {}
normal-notes {}
stem {}
beam {'number': '1'}
notations {}
tuplet {'type': 'stop'}
lyric {'number': '1'}
syllabic {}
text {}
harmony {'print-frame': 'no'}
root {}
root-step {}
root-alter {}
kind {'text': 'Maj7'}
note {'default-y': '-10.00', 'default-x': '299.06'}
pitch {}
step {}
octave {}
duration {}
voice {}
type {}
time-modification {}
actual-notes {}
normal-notes {}
stem {}
beam {'number': '1'}
notations {}
tuplet {'type': 'start', 'bracket': 'no'}
lyric {'number': '1'}
syllabic {}
text {}
note {'default-y': '-15.00', 'default-x': '357.04'}
pitch {}
step {}
octave {}
duration {}
voice {}
type {}
time-modification {}
actual-notes {}
normal-notes {}
stem {}
beam {'number': '1'}
lyric {'number': '1'}
syllabic {}
text {}
note {'default-y': '-20.00', 'default-x': '415.02'}
pitch {}
step {}
alter {}
octave {}
duration {}
voice {}
type {}
time-modification {}
actual-notes {}
normal-notes {}
stem {}
beam {'number': '1'}
notations {}
tuplet {'type': 'stop'}
lyric {'number': '1'}
syllabic {}
text {}
measure {'width': '307.49', 'number': '18'}
harmony {'print-frame': 'no'}
root {}
root-step {}
kind {'text': 'Maj7'}
note {'default-y': '-25.00', 'default-x': '20.97'}
pitch {}
step {}
octave {}
duration {}
voice {}
type {}
stem {}
lyric {'number': '1'}
syllabic {}
text {}
note {'default-y': '-30.00', 'default-x': '170.21'}
pitch {}
step {}
octave {}
duration {}
voice {}
type {}
stem {}
beam {'number': '1'}
lyric {'number': '1'}
syllabic {}
text {}
note {'default-y': '-40.00', 'default-x': '238.05'}
pitch {}
step {}
octave {}
duration {}
voice {}
type {}
stem {}
beam {'number': '1'}
lyric {'number': '1'}
syllabic {}
text {}
measure {'width': '293.27', 'number': '19'}
harmony {'print-frame': 'no'}
root {}
root-step {}
root-alter {}
kind {'text': ''}
note {'default-y': '-45.00', 'default-x': '16.01'}
pitch {}
step {}
octave {}
duration {}
voice {}
type {}
stem {}
lyric {'number': '1'}
syllabic {}
text {}
harmony {'print-frame': 'no'}
root {}
root-step {}
kind {'text': '7'}
note {'default-y': '-30.00', 'default-x': '160.40'}
pitch {}
step {}
octave {}
duration {}
voice {}
type {}
stem {}
beam {'number': '1'}
lyric {'number': '1'}
syllabic {}
text {}
note {'default-y': '-40.00', 'default-x': '226.04'}
pitch {}
step {}
octave {}
duration {}
voice {}
type {}
stem {}
beam {'number': '1'}
lyric {'number': '1'}
syllabic {}
text {}
measure {'width': '254.97', 'number': '20'}
harmony {'print-frame': 'no'}
root {}
root-step {}
kind {'text': ''}
note {'default-y': '-50.00', 'default-x': '29.04'}
pitch {}
step {}
octave {}
duration {}
voice {}
type {}
dot {}
stem {}
lyric {'number': '1'}
syllabic {}
text {}
measure {'width': '414.54', 'number': '21'}
print {'new-system': 'yes'}
system-layout {}
system-margins {}
left-margin {}
right-margin {}
system-distance {}
note {}
rest {}
duration {}
voice {}
type {}
note {'default-y': '-25.00', 'default-x': '110.38'}
pitch {}
step {}
octave {}
duration {}
voice {}
type {}
time-modification {}
actual-notes {}
normal-notes {}
stem {}
beam {'number': '1'}
notations {}
tuplet {'type': 'start', 'bracket': 'no'}
lyric {'number': '1'}
syllabic {}
text {}
note {'default-y': '-20.00', 'default-x': '160.81'}
pitch {}
step {}
alter {}
octave {}
duration {}
voice {}
type {}
time-modification {}
actual-notes {}
normal-notes {}
stem {}
beam {'number': '1'}
lyric {'number': '1'}
syllabic {}
text {}
note {'default-y': '-15.00', 'default-x': '211.23'}
pitch {}
step {}
octave {}
duration {}
voice {}
type {}
time-modification {}
actual-notes {}
normal-notes {}
stem {}
beam {'number': '1'}
notations {}
tuplet {'type': 'stop'}
lyric {'number': '1'}
syllabic {}
text {}
harmony {'print-frame': 'no'}
root {}
root-step {}
root-alter {}
kind {'text': 'Maj7'}
note {'default-y': '-10.00', 'default-x': '261.66'}
pitch {}
step {}
octave {}
duration {}
voice {}
type {}
time-modification {}
actual-notes {}
normal-notes {}
stem {}
beam {'number': '1'}
notations {}
tuplet {'type': 'start', 'bracket': 'no'}
lyric {'number': '1'}
syllabic {}
text {}
note {'default-y': '-15.00', 'default-x': '312.08'}
pitch {}
step {}
octave {}
duration {}
voice {}
type {}
time-modification {}
actual-notes {}
normal-notes {}
stem {}
beam {'number': '1'}
lyric {'number': '1'}
syllabic {}
text {}
note {'default-y': '-20.00', 'default-x': '362.51'}
pitch {}
step {}
alter {}
octave {}
duration {}
voice {}
type {}
time-modification {}
actual-notes {}
normal-notes {}
stem {}
beam {'number': '1'}
notations {}
tuplet {'type': 'stop'}
lyric {'number': '1'}
syllabic {}
text {}
measure {'width': '247.94', 'number': '22'}
harmony {'print-frame': 'no'}
root {}
root-step {}
kind {'text': 'Maj7'}
note {'default-y': '-25.00', 'default-x': '22.84'}
pitch {}
step {}
octave {}
duration {}
voice {}
type {}
stem {}
lyric {'number': '1'}
syllabic {}
text {}
note {'default-y': '-15.00', 'default-x': '139.91'}
pitch {}
step {}
octave {}
duration {}
voice {}
type {}
stem {}
beam {'number': '1'}
lyric {'number': '1'}
syllabic {}
text {}
note {'default-y': '-25.00', 'default-x': '193.12'}
pitch {}
step {}
octave {}
duration {}
voice {}
type {}
stem {}
beam {'number': '1'}
lyric {'number': '1'}
syllabic {}
text {}
measure {'width': '232.26', 'number': '23'}
harmony {'print-frame': 'no'}
root {}
root-step {}
root-alter {}
kind {'text': ''}
note {'default-y': '-10.00', 'default-x': '30.28'}
pitch {}
step {}
octave {}
duration {}
voice {}
type {}
stem {}
lyric {'number': '1'}
syllabic {}
text {}
note {'default-y': '-35.00', 'default-x': '153.59'}
pitch {}
step {}
octave {}
duration {}
voice {}
type {}
stem {}
lyric {'number': '1'}
syllabic {}
text {}
measure {'width': '184.44', 'number': '24'}
harmony {'print-frame': 'no'}
root {}
root-step {}
kind {'text': ''}
note {'default-y': '-25.00', 'default-x': '26.88'}
pitch {}
step {}
octave {}
duration {}
tie {'type': 'start'}
voice {}
type {}
dot {}
stem {}
notations {}
tied {'type': 'start'}
lyric {'number': '1'}
syllabic {}
text {}
measure {'width': '251.16', 'number': '25'}
note {'default-y': '-25.00', 'default-x': '12.00'}
pitch {}
step {}
octave {}
duration {}
tie {'type': 'stop'}
voice {}
type {}
stem {}
notations {}
tied {'type': 'stop'}
note {'default-y': '-25.00', 'default-x': '136.44'}
pitch {}
step {}
octave {}
duration {}
voice {}
type {}
stem {}
beam {'number': '1'}
lyric {'number': '1'}
syllabic {}
text {}
note {'default-y': '-20.00', 'default-x': '193.00'}
pitch {}
step {}
alter {}
octave {}
duration {}
voice {}
type {}
stem {}
beam {'number': '1'}
lyric {'number': '1'}
syllabic {}
text {}
measure {'width': '216.00', 'number': '26'}
print {'new-system': 'yes'}
system-layout {}
system-margins {}
left-margin {}
right-margin {}
system-distance {}
note {'default-y': '-15.00', 'default-x': '22.23'}
pitch {}
step {}
octave {}
duration {}
voice {}
type {}
stem {}
lyric {'number': '1'}
syllabic {}
text {}
harmony {'print-frame': 'no'}
root {}
root-step {}
kind {'text': 'm'}
note {'default-y': '-50.00', 'default-x': '140.49'}
pitch {}
step {}
octave {}
duration {}
voice {}
type {}
stem {}
lyric {'number': '1'}
syllabic {}
text {}
measure {'width': '239.09', 'number': '27'}
harmony {'print-frame': 'no'}
root {}
root-step {}
kind {'text': 'm'}
note {'default-y': '-45.00', 'default-x': '20.99'}
pitch {}
step {}
octave {}
duration {}
voice {}
type {}
stem {}
lyric {'number': '1'}
syllabic {}
text {}
note {'default-y': '-25.00', 'default-x': '134.39'}
pitch {}
step {}
octave {}
duration {}
voice {}
type {}
stem {}
beam {'number': '1'}
lyric {'number': '1'}
syllabic {}
text {}
note {'default-y': '-15.00', 'default-x': '185.94'}
pitch {}
step {}
octave {}
duration {}
voice {}
type {}
stem {}
beam {'number': '1'}
lyric {'number': '1'}
syllabic {}
text {}
measure {'width': '371.27', 'number': '28'}
harmony {'print-frame': 'no'}
root {}
root-step {}
kind {'text': '9'}
note {'default-y': '-5.00', 'default-x': '22.23'}
pitch {}
step {}
octave {}
duration {}
voice {}
type {}
stem {}
beam {'number': '1'}
lyric {'number': '1'}
syllabic {}
text {}
note {'default-y': '-10.00', 'default-x': '80.13'}
pitch {}
step {}
octave {}
duration {}
voice {}
type {}
stem {}
beam {'number': '1'}
lyric {'number': '1'}
syllabic {}
text {}
note {'default-y': '-15.00', 'default-x': '138.04'}
pitch {}
step {}
octave {}
duration {}
voice {}
type {}
stem {}
beam {'number': '1'}
lyric {'number': '1'}
syllabic {}
text {}
note {'default-y': '-25.00', 'default-x': '195.95'}
pitch {}
step {}
octave {}
duration {}
voice {}
type {}
stem {}
beam {'number': '1'}
lyric {'number': '1'}
syllabic {}
text {}
note {'default-y': '-30.00', 'default-x': '253.85'}
pitch {}
step {}
octave {}
duration {}
voice {}
type {}
stem {}
beam {'number': '1'}
lyric {'number': '1'}
syllabic {}
text {}
note {'default-y': '-35.00', 'default-x': '311.76'}
pitch {}
step {}
octave {}
duration {}
voice {}
type {}
stem {}
beam {'number': '1'}
lyric {'number': '1'}
syllabic {}
text {}
measure {'width': '270.65', 'number': '29'}
harmony {'print-frame': 'no'}
root {}
root-step {}
kind {'text': '7'}
note {'default-y': '-45.00', 'default-x': '24.09'}
pitch {}
step {}
octave {}
duration {}
voice {}
type {}
stem {}
lyric {'number': '1'}
syllabic {}
text {}
note {'default-y': '-25.00', 'default-x': '152.40'}
pitch {}
step {}
octave {}
duration {}
voice {}
type {}
stem {}
beam {'number': '1'}
lyric {'number': '1'}
syllabic {}
text {}
note {'default-y': '-20.00', 'default-x': '210.72'}
pitch {}
step {}
alter {}
octave {}
duration {}
voice {}
type {}
stem {}
beam {'number': '1'}
lyric {'number': '1'}
syllabic {}
text {}
measure {'width': '233.33', 'number': '30'}
harmony {'print-frame': 'no'}
root {}
root-step {}
kind {'text': 'm7'}
note {'default-y': '-15.00', 'default-x': '31.53'}
pitch {}
step {}
octave {}
duration {}
voice {}
type {}
stem {}
lyric {'number': '1'}
syllabic {}
text {}
note {'default-y': '-15.00', 'default-x': '154.73'}
pitch {}
step {}
octave {}
duration {}
voice {}
type {}
stem {}
lyric {'number': '1'}
syllabic {}
text {}
measure {'width': '442.90', 'number': '31'}
print {'new-system': 'yes'}
system-layout {}
system-margins {}
left-margin {}
right-margin {}
system-distance {}
harmony {'print-frame': 'no'}
root {}
root-step {}
kind {'text': '7'}
note {'default-y': '-10.00', 'default-x': '17.25'}
pitch {}
step {}
octave {}
duration {}
voice {}
type {}
stem {}
lyric {'number': '1'}
syllabic {}
text {}
note {'default-y': '-15.00', 'default-x': '278.21'}
pitch {}
step {}
octave {}
duration {}
voice {}
type {}
stem {}
lyric {'number': '1'}
syllabic {}
text {}
measure {'width': '435.36', 'number': '32'}
direction {}
direction-type {}
words {'default-y': '100'}
note {'default-y': '-35.00', 'default-x': '25.57'}
pitch {}
step {}
octave {}
duration {}
tie {'type': 'start'}
voice {}
type {}
dot {}
stem {}
notations {}
tied {'type': 'start'}
lyric {'number': '1'}
syllabic {}
text {}
measure {'width': '452.07', 'number': '33'}
note {'default-y': '-35.00', 'default-x': '12.00'}
pitch {}
step {}
octave {}
duration {}
tie {'type': 'stop'}
voice {}
type {}
stem {}
notations {}
tied {'type': 'stop'}
note {}
rest {}
duration {}
voice {}
type {}
note {}
rest {}
duration {}
voice {}
type {}
barline {'location': 'right'}
bar-style {}
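The element dump above can be summarized with a few lines of standard-library code. This sketch counts tag occurrences in any ElementTree root; a small stand-in score is used here so the example is self-contained, but passing musicxml_document._score works the same way.

```python
from collections import Counter
import xml.etree.ElementTree as ET

def tag_counts(root):
    """Count how often each element tag occurs in a score tree."""
    return Counter(element.tag for element in root.iter())

# A tiny stand-in score; musicxml_document._score could be used instead.
root = ET.fromstring(
    '<score-partwise><part id="P1">'
    '<measure number="1"><note/><note/><note/></measure>'
    '<measure number="2"><note/></measure>'
    '</part></score-partwise>')

counts = tag_counts(root)
print(counts['measure'], counts['note'])   # 2 4
```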

2. Class MusicXMLParserState:

Maintains the internal state of the MusicXML parser.

The MusicXMLParserState class has the following attributes:

A detailed understanding of the MIDI-compatible attributes can be found here - Introduction to MIDI. Below is the list of attributes, each of which is explained in greater detail as it is explored further in the notebook.

  1. Divisions
    • The divisions element indicates how many divisions per quarter note are used to indicate a note's duration.
    • For example, if duration = 1 and divisions = 2, this is an eighth note duration. (The parser defaults to 1 division per quarter note.)

  2. Midi Channel
    • Current MIDI channel (usually equal to the part number).
    • DEFAULT_MIDI_CHANNEL = 0 (first channel)

  3. Midi Program
    • Defines which instrument notes are being specified.
    • Default MIDI program = 0 (grand piano)

  4. Previous Note [Note Object]
    • Keeps track of previous note to get chord timing correct. Explained here: Chord Duration.
    • This variable stores an instance of the Note class. [default previous_note = None]

  5. QPM
    • MusicXML calls this tempo, but Magenta calls it qpm (quarter notes per minute). Therefore, the variable is called qpm but reads the MusicXML tempo attribute (120 qpm is the default tempo according to the Standard MIDI Files 1.0 Specification).
    • In other words, this attribute describes the speed at which notes are played.

  6. Seconds_per_quarter
    • Duration of a single quarter note in seconds. (Default: 0.5)

  7. Time Position
    • Running total of time for the current event in seconds. Resets to 0 on every part. Affected by [forward] and [backup] elements.

  8. Time Signature [Time Signature Object]
    • Keeps track of the current time signature. Does not support polymeter. [default time_signature = None]

  9. Transpose
    • Keeps track of current transposition level in +/- semitones. [default transpose = 0]

  10. Velocity
    • Defaults to a MIDI velocity of 64 (mf). This indicates the velocity with which a key was played.
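Several of these state attributes combine when the parser converts a MusicXML duration into wall-clock time. A minimal sketch of that arithmetic (the variable names mirror the attributes above, but the values and the helper arithmetic are illustrative, not the parser's own code):

```python
divisions = 6                      # divisions per quarter note (as in this score)
qpm = 120                          # quarter notes per minute
seconds_per_quarter = 60.0 / qpm   # 0.5 s per quarter note at 120 qpm

duration = 3                       # 3 divisions = half a quarter note (an eighth)
seconds = (duration / divisions) * seconds_per_quarter
print(seconds)                     # 0.25
```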

Internal representation of a MusicXML State:


In [0]:
state = musicxml_document._state    # State Object
print(state) 

pp.pprint(state.__dict__)


<magenta.music.musicxml_parser.MusicXMLParserState object at 0x7fc28b5edba8>
{   'divisions': 6,
    'midi_channel': 1,
    'midi_program': 41,
    'previous_note': <magenta.music.musicxml_parser.Note object at 0x7fc26930de48>,
    'qpm': 120,
    'seconds_per_quarter': 0.5,
    'time_position': 49.500000000000014,
    'time_signature': <magenta.music.musicxml_parser.TimeSignature object at 0x7fc2693764a8>,
    'transpose': 0,
    'velocity': 64}

3. Functions of MusicXMLDocument for a particular score:

a. Class ChordSymbol:

Parsed from the MusicXML harmony element.

A chord, in music, is any harmonic set of three or more notes (also called "pitches") that are heard as if sounding simultaneously (two pitches played together form an interval, not a chord). In simple words, chords are notes played simultaneously. An ordered series of chords is called a chord progression.

The duration elements in MusicXML move a musical counter. To play chords, we need to indicate that a note should start at the same time as the previous note, rather than following the previous note. To do this in MusicXML, a chord element is added to the note.

Musicians use various kinds of chord names and symbols in different contexts, to represent musical chords. This class represents a chord symbol with four components:

  1. Root: a string representing the chord root pitch class, e.g. "C#".
  2. Kind: a string representing the chord kind, e.g. "m7" for minor-seventh, "9" for dominant-ninth, or the empty string for major triad.
  3. Scale degree modifications: a list of strings representing scale degree modifications for the chord, e.g. "add9" to add an unaltered ninth scale degree (without the seventh), "b5" to flatten the fifth scale degree, "no3" to remove the third scale degree, etc.
  4. Bass: a string representing the chord bass pitch class, or None if the bass pitch class is the same as the root pitch class.

There's also a special chord kind "N.C." which stands for "No Chord" and represents no harmony, for which all other fields should be None.
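Putting the four components together, a lead-sheet figure can be assembled. A hypothetical helper sketching the idea (the parser's own version is the get_figure_string method shown later in this notebook):

```python
def figure_string(root, kind, degrees, bass):
    """Assemble a lead-sheet chord figure (illustrative sketch only)."""
    if kind == 'N.C.':          # 'No Chord': all other fields are None
        return kind
    figure = root + kind
    for degree in degrees:      # e.g. 'add9', 'b5', 'no3'
        figure += '(%s)' % degree
    if bass:                    # slash chord when bass differs from root
        figure += '/' + bass
    return figure

print(figure_string('Bb', 'maj7', [], None))  # Bbmaj7
print(figure_string('C', 'm7', [], 'G'))      # Cm7/G
```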

Chord structure and illustration of piano chords can be found here: Chord Type and Piano chords

get_chord_symbols:

This function returns a list of ChordSymbol objects in this particular score. It also prevents duplicate chords from being included in this list.


In [0]:
chord_symbols = musicxml_document.get_chord_symbols() 
print("Number of chord symbols : " + str(len(chord_symbols)))
print("List of some of the chord symbols used in this score :")
pp.pprint(chord_symbols[:5])


Number of chord symbols : 32
List of some of the chord symbols used in this score :
[   <magenta.music.musicxml_parser.ChordSymbol object at 0x7fc269376a20>,
    <magenta.music.musicxml_parser.ChordSymbol object at 0x7fc269376d30>,
    <magenta.music.musicxml_parser.ChordSymbol object at 0x7fc269376eb8>,
    <magenta.music.musicxml_parser.ChordSymbol object at 0x7fc26937b160>,
    <magenta.music.musicxml_parser.ChordSymbol object at 0x7fc26937b5c0>]

Internal representation of a MusicXML Chord Symbol class.


In [0]:
chord_symbol = chord_symbols[4]

pp.pprint(chord_symbol.__dict__)


{   'bass': None,
    'degrees': [],
    'kind': 'maj7',
    'root': 'Bb',
    'state': <magenta.music.musicxml_parser.MusicXMLParserState object at 0x7fc28b5edba8>,
    'time_position': 7.000000000000001,
    'xml_harmony': <Element 'harmony' at 0x7fc26b255138>}

Chord Kind abbreviations:

The below dictionary maps chord kinds to an abbreviated string as would appear in a chord symbol in a standard lead sheet. There are often multiple standard abbreviations for the same chord type, e.g. "+" and "aug" both refer to an augmented chord, and "maj7", "M7", and a Delta character all refer to a major-seventh chord; this dictionary attempts to be consistent but the choice of abbreviation is somewhat arbitrary.

The MusicXML-defined chord kinds are listed here: http://usermanuals.musicxml.com/MusicXML/Content/ST-MusicXML-kind-value.htm




In [0]:
print("Chord Kind Abbreviations:")
pp.pprint(chord_symbol.CHORD_KIND_ABBREVIATIONS)


Chord Kind Abbreviations:
{   '': '',
    '6': '6',
    '7': '7',
    '9': '9',
    'aug': 'aug',
    'augmented': 'aug',
    'augmented-ninth': 'aug9',
    'augmented-seventh': 'aug7',
    'dim': 'dim',
    'dim7': 'dim7',
    'diminished': 'dim',
    'diminished-seventh': 'dim7',
    'dominant': '7',
    'dominant-11th': '11',
    'dominant-13th': '13',
    'dominant-ninth': '9',
    'dominant-seventh': '7',
    'half-diminished': 'm7b5',
    'm7b5': 'm7b5',
    'maj69': '6(add9)',
    'maj7': 'maj7',
    'maj9': 'maj9',
    'major': '',
    'major-11th': 'maj11',
    'major-13th': 'maj13',
    'major-minor': 'm(maj7)',
    'major-ninth': 'maj9',
    'major-seventh': 'maj7',
    'major-sixth': '6',
    'min': 'm',
    'min6': 'm6',
    'min7': 'm7',
    'min9': 'm9',
    'minMaj7': 'm(maj7)',
    'minor': 'm',
    'minor-11th': 'm11',
    'minor-13th': 'm13',
    'minor-major': 'm(maj7)',
    'minor-ninth': 'm9',
    'minor-seventh': 'm7',
    'minor-sixth': 'm6',
    'none': 'N.C.',
    'pedal': 'ped',
    'power': '5',
    'sus47': 'sus7',
    'suspended-fourth': 'sus',
    'suspended-second': 'sus2'}

In [0]:
print("Accessing attributes of Chord Symbol: \n")

print("1. Bass:" + str(chord_symbol.bass))
print("2. Degrees:" + str(chord_symbol.degrees))
print("3. Kind:" + str(chord_symbol.kind))
print("4. Root:" + str(chord_symbol.root))
print("5. State:")
pp.pprint((chord_symbol.state.__dict__)) # MusicXMLParserState object
print("6. Time Position:" + str(chord_symbol.time_position))
print("7. XML Harmony:" + str(chord_symbol.xml_harmony.getchildren()))


Accessing attributes of Chord Symbol: 

1. Bass:None
2. Degrees:[]
3. Kind:maj7
4. Root:Bb
5. State:
{   'divisions': 6,
    'midi_channel': 1,
    'midi_program': 41,
    'previous_note': <magenta.music.musicxml_parser.Note object at 0x7fc26930de48>,
    'qpm': 120,
    'seconds_per_quarter': 0.5,
    'time_position': 49.500000000000014,
    'time_signature': <magenta.music.musicxml_parser.TimeSignature object at 0x7fc2693764a8>,
    'transpose': 0,
    'velocity': 64}
6. Time Position:7.000000000000001
7. XML Harmony:[<Element 'root' at 0x7fc26b255188>, <Element 'kind' at 0x7fc26b255278>]

Methods of MusicXML Chord Symbol:


In [0]:
method_list = get_method_list(chord_symbol)
pp.pprint(method_list)


[   '_alter_to_string',
    '_parse',
    '_parse_bass',
    '_parse_degree',
    '_parse_pitch',
    '_parse_root',
    'get_figure_string']

get_figure_string function:

Use the get_figure_string method to get a string representation of the chord symbol as might appear in a lead sheet. This string representation is what we use to represent chord symbols in NoteSequence protos, as text annotations. While the MusicXML representation has more structure, using an unstructured string provides more flexibility and allows us to ingest chords from other sources, e.g. guitar tabs on the web.


In [0]:
print(" Get Figure String: " + str(chord_symbol.get_figure_string()))


 Get Figure String: Bbmaj7

_alter_to_string function:

This function parses alter text to a string of one or two sharps/flats. It takes a string representation of an integer number of semitones between -2 and 2 inclusive. It raises ChordSymbolParseError if alter_text cannot be parsed to an integer, or if the integer is not a valid number of semitones in [-2, 2]. It returns the corresponding semitone string: one of 'bb', 'b', '#', '##', or the empty string.
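The mapping this describes can be sketched independently of the parser (a simplified re-implementation; the real method raises ChordSymbolParseError where this sketch raises ValueError):

```python
ALTER_STRINGS = {-2: 'bb', -1: 'b', 0: '', 1: '#', 2: '##'}

def alter_to_string(alter_text):
    """Map an alter value in [-2, 2] to its accidental string (sketch)."""
    alter = int(alter_text)            # non-integer text raises ValueError
    if alter not in ALTER_STRINGS:     # out-of-range semitone counts rejected
        raise ValueError('invalid alter: %s' % alter_text)
    return ALTER_STRINGS[alter]

print(alter_to_string('2'))   # ##
print(alter_to_string('-1'))  # b
```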


In [0]:
print(' 0 : ' + str(chord_symbol._alter_to_string('0')))
print(' 1 : ' + str(chord_symbol._alter_to_string('1')))
print(' 2 : ' + str(chord_symbol._alter_to_string('2')))
print('-1 : ' + str(chord_symbol._alter_to_string('-1')))
print('-2 : ' + str(chord_symbol._alter_to_string('-2')))


 0 : 
 1 : #
 2 : ##
-1 : b
-2 : bb

b. Class Tempos:

Tempo can be defined as the pace or speed at which a section of music is played. Tempos, or tempi, help the composer convey a feeling of either intensity or relaxation. We can think of the tempo as the speedometer of the music. Typically, the speed of the music is measured in beats per minute, or BPM. For example, if you listen to the second hand on a clock, you will hear 60 ticks - or, in musical terms, 60 beats - in one minute. The tempo can be virtually any number of beats per minute: the lower the number, the slower the tempo will feel; inversely, the higher the number, the faster the tempo.

Parsed from the MusicXML sound element. If no tempo is specified, the default DEFAULT_QUARTERS_PER_MINUTE is used.

get_tempos function:

This function returns a list of all tempos in this score. If no tempos are found, it creates a default tempo of 120 qpm.


In [0]:
tempos = musicxml_document.get_tempos()   
print("Number of tempos : " + str(len(tempos)))
print("List of all the tempos used in this score :")
print(tempos)


Number of tempos : 1
List of all the tempos used in this score :
[<magenta.music.musicxml_parser.Tempo object at 0x7fc2693177b8>]

In [0]:
tempo = tempos[0]

pp.pprint(tempo.__dict__)


{   'qpm': 120,
    'state': <magenta.music.musicxml_parser.MusicXMLParserState object at 0x7fc28b5edba8>,
    'time_position': 0,
    'xml_sound': None}

In [0]:
print("Accessing attributes of Tempos: \n")

print(" 1. qpm:" + str(tempo.qpm))
print(" 2. State:")
pp.pprint((tempo.state.__dict__))  # MusicXMLParserState object
print(" 3. Time Position:" + str(tempo.time_position))
print(" 4. XML sound:" + str(tempo.xml_sound))


Accessing attributes of Tempos: 

 1. qpm:120
 2. State:
{   'divisions': 6,
    'midi_channel': 1,
    'midi_program': 41,
    'previous_note': <magenta.music.musicxml_parser.Note object at 0x7fc26930de48>,
    'qpm': 120,
    'seconds_per_quarter': 0.5,
    'time_position': 49.500000000000014,
    'time_signature': <magenta.music.musicxml_parser.TimeSignature object at 0x7fc2693764a8>,
    'transpose': 0,
    'velocity': 64}
 3. Time Position:0
 4. XML sound:None

Methods of MusicXML Tempo:


In [0]:
method_list = get_method_list(tempo)
pp.pprint(method_list)


['_parse']

c. Class TimeSignature:

Time Signature in music: a number written as a numerator over a denominator - the numerator gives the number of beats in a bar, and the denominator gives the kind of note that constitutes one beat.

Example: If the time signature is 3/4, the 4 in the denominator indicates that a quarter note constitutes a beat, and the 3 in the numerator indicates that 3 beats (each a quarter note) constitute a bar.

This class does not support:

- Composite time signatures: 3+2/8
- Alternating time signatures 2/4 + 3/8
- Senza misura [A senza-misura element explicitly indicates that no time signature is present.]

Parsed from the MusicXML time element.

get_time_signatures:

  • Returns a list of all the time signatures used in this score.

  • Does not support polymeter (i.e. assumes all parts share the same time signature; e.g. Part 1 in 6/8 while Part 2 is simultaneously in 2/4 is not supported).

  • Ignores duplicate time signatures to prevent Magenta's duplicate time signature error. Duplicates occur when multiple parts use the same time signature at the same time.

  • Example: If Part 1 has a time signature of 4/4 and Part 2 also has a time signature of 4/4, then only one instance of 4/4 is sent to Magenta.

In [0]:
time_signature = musicxml_document.get_time_signatures()
print("Number of time signatures : " + str(len(time_signature)))
print("List of all the time_signature used in this score :")
print(time_signature)


Number of time signatures : 1
List of all the time_signature used in this score :
[<magenta.music.musicxml_parser.TimeSignature object at 0x7fc2693764a8>]

In [0]:
ts = time_signature[0]
pp.pprint(ts.__dict__)


{   'denominator': 4,
    'numerator': 3,
    'state': <magenta.music.musicxml_parser.MusicXMLParserState object at 0x7fc28b5edba8>,
    'time_position': 0,
    'xml_time': <Element 'time' at 0x7fc26b236868>}

In [0]:
print("Accessing attributes of Time Signature: \n")

print("1) Denominator: "+ str(ts.denominator))
print("2) Numerator: "+ str(ts.numerator)) 
print("3) State:")
pp.pprint((ts.state.__dict__)) # MusicXMLParserState object
print("4) Time Position: "+ str(ts.time_position)) 
print("5) XML time: "+ str(ts.xml_time))


Accessing attributes of Time Signature: 

1) Denominator: 4
2) Numerator: 3
3) State:
{   'divisions': 6,
    'midi_channel': 1,
    'midi_program': 41,
    'previous_note': <magenta.music.musicxml_parser.Note object at 0x7fc26930de48>,
    'qpm': 120,
    'seconds_per_quarter': 0.5,
    'time_position': 49.500000000000014,
    'time_signature': <magenta.music.musicxml_parser.TimeSignature object at 0x7fc2693764a8>,
    'transpose': 0,
    'velocity': 64}
4) Time Position: 0
5) XML time: <Element 'time' at 0x7fc26b236868>

Methods of MusicXML Time Signature:


In [0]:
method_list = get_method_list(ts)
pp.pprint(method_list)


['_parse']

d. Class KeySignature:

Parsed from the MusicXML key element into a MIDI-compatible key. A detailed explanation of key signatures can be found here: link
If the mode is not minor (e.g. dorian), it defaults to "major" because MIDI only supports major and minor modes.

get_key_signatures:

  • Returns a list of all the key signatures used in this score.

  • Supports different key signatures in different parts (score in written pitch).

  • Ignores duplicate key signatures to prevent Magenta duplicate key signature error. This happens when multiple parts have the same key signature at the same time.

  • Example: If the score is in written pitch and the flute is written in the key of Bb major, the trombone will also be written in the key of Bb major. However, the clarinet and trumpet will be written in the key of C major because they are Bb transposing instruments.

  • If no key signatures are found, creates a default key signature of C major.
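The key attribute printed below is the MusicXML fifths value: the number of sharps (positive) or flats (negative). Mapping fifths to a major key name can be sketched as follows (the circle-of-fifths ordering is standard; the helper itself is hypothetical, not part of the parser):

```python
# Major keys ordered by fifths, from 7 flats (Cb) to 7 sharps (C#).
MAJOR_KEYS = ['Cb', 'Gb', 'Db', 'Ab', 'Eb', 'Bb', 'F',
              'C', 'G', 'D', 'A', 'E', 'B', 'F#', 'C#']

def fifths_to_major_key(fifths):
    return MAJOR_KEYS[fifths + 7]

print(fifths_to_major_key(-1))  # F (one flat, matching key = -1 in this score)
```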

In [0]:
key_signatures = musicxml_document.get_key_signatures()
print("Number of key signatures : " + str(len(key_signatures)))
print("List of all the key signatures used in this score :")
print(key_signatures)


Number of key signatures : 1
List of all the key signatures used in this score :
[<magenta.music.musicxml_parser.KeySignature object at 0x7fc269376470>]

In [0]:
ks = key_signatures[0]
pp.pprint(ks.__dict__)


{   'key': -1,
    'mode': 'major',
    'state': <magenta.music.musicxml_parser.MusicXMLParserState object at 0x7fc28b5edba8>,
    'time_position': 0,
    'xml_key': <Element 'key' at 0x7fc26b236778>}

In [0]:
print("Accessing attributes of Key Signature: \n")

print("1) Key: "+ str(ks.key))
print("2) Mode: "+ str(ks.mode))
print("3) State: ")
pp.pprint((ks.state.__dict__)) # MusicXMLParserState object
print("4) Time Position: "+ str(ks.time_position)) 
print("5) XML key: "+ str(ks.xml_key))


Accessing attributes of Key Signature: 

1) Key: -1
2) Mode: major
3) State: 
{   'divisions': 6,
    'midi_channel': 1,
    'midi_program': 41,
    'previous_note': <magenta.music.musicxml_parser.Note object at 0x7fc26930de48>,
    'qpm': 120,
    'seconds_per_quarter': 0.5,
    'time_position': 49.500000000000014,
    'time_signature': <magenta.music.musicxml_parser.TimeSignature object at 0x7fc2693764a8>,
    'transpose': 0,
    'velocity': 64}
4) Time Position: 0
5) XML key: <Element 'key' at 0x7fc26b236778>

Methods of MusicXML Key Signature:


In [0]:
method_list = get_method_list(ks)
pp.pprint(method_list)


['_parse']

4. Class Part:

A part generally refers to a single strand or melody or harmony of music within a larger ensemble or a polyphonic musical composition.

Parsed from the MusicXML part element.

parts:


In [0]:
parts = musicxml_document.parts
print("Number of parts : " + str(len(parts)))
print("List of all the parts in this score :")
print(parts)


Number of parts : 1
List of all the parts in this score :
[<magenta.music.musicxml_parser.Part object at 0x7fc26b231cf8>]

Internal representation of a MusicXML Part Class:


In [0]:
part = parts[0]
pp.pprint(list(part.__dict__.keys()))


['id', 'score_part', 'measures', '_state']

In [0]:
print("Accessing attributes of Parts: \n")
print("1. Part ID: "+ str(part.id))
print("2. Measures: "+ str(part.measures[0]))
print("3." + str(part.score_part))    # ScorePart Object

print(" \nNumber of measures in this part : " + str(len(part.measures)))


Accessing attributes of Parts: 

1. Part ID: P1
2. Measures: <magenta.music.musicxml_parser.Measure object at 0x7fc2693763c8>
3.ScorePart: , Channel: 1, Program: 41
 
Number of measures in this part : 33

Methods of MusicXML Part:


In [0]:
method_list = get_method_list(part)
pp.pprint(method_list)


['_parse', '_repair_empty_measure']

Class Measure:

Internal representation of a Measure in Parts:

In musical notation, a measure (or bar) is a segment of time corresponding to a specific number of beats, in which each beat is represented by a particular note value and the boundaries of the bar are indicated by vertical bar lines. Dividing music into bars provides regular reference points to pinpoint locations within a musical composition. It also makes written music easier to follow, since each bar of staff symbols (a staff consists of five horizontal lines on which musical notes lie) can be read and played as a batch.

Typically, a piece consists of several bars of the same length, and in modern musical notation the number of beats in each bar is specified at the beginning of the score by the time signature. In simple time (such as 3/4), the top figure indicates the number of beats per bar, while the bottom number indicates the note value of the beat (the beat has a quarter-note value in the 3/4 example).

Parsed from the MusicXML measure element.

Duration:
  • The duration element is an integer that represents a note's duration in terms of divisions per quarter note. For example, if divisions (per quarter note) = 24, then a quarter note has a duration of 24, an eighth-note triplet has a duration of 8, and an eighth note has a duration of 12.
  • The duration elements in MusicXML move a musical counter.
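The duration of a whole measure follows from the time signature and divisions. For this score (3/4 time, divisions = 6), a quick sketch (illustrative arithmetic, not parser code):

```python
numerator, denominator = 3, 4   # time signature 3/4
divisions = 6                   # divisions per quarter note

# A measure holds `numerator` beats, each worth 4/denominator quarter notes:
measure_duration = numerator * 4 * divisions // denominator
print(measure_duration)         # 18
```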

In [0]:
measure = part.measures[0]
pp.pprint(measure.__dict__)


{   'chord_symbols': [],
    'duration': 18,
    'key_signature': <magenta.music.musicxml_parser.KeySignature object at 0x7fc269376470>,
    'notes': [   <magenta.music.musicxml_parser.Note object at 0x7fc269376438>,
                 <magenta.music.musicxml_parser.Note object at 0x7fc269376550>,
                 <magenta.music.musicxml_parser.Note object at 0x7fc2693765f8>,
                 <magenta.music.musicxml_parser.Note object at 0x7fc2693766d8>,
                 <magenta.music.musicxml_parser.Note object at 0x7fc2693767b8>,
                 <magenta.music.musicxml_parser.Note object at 0x7fc269376898>,
                 <magenta.music.musicxml_parser.Note object at 0x7fc269376978>],
    'start_time_position': 0,
    'state': <magenta.music.musicxml_parser.MusicXMLParserState object at 0x7fc28b5edba8>,
    'tempos': [],
    'time_signature': <magenta.music.musicxml_parser.TimeSignature object at 0x7fc2693764a8>,
    'xml_measure': <Element 'measure' at 0x7fc26b236408>}

In [0]:
print("Accessing attributes of Measure: \n")

print("1. Chord Symbols: "+ str(measure.chord_symbols))  # Chord Symbol Object
print("2. Duration: "+ str(measure.duration)) # Cumulative duration in MusicXML duration.
print("3. Key Signature: ")
pp.pprint((measure.key_signature.__dict__)) # Key Signature Object
print("4. Notes: ")
pp.pprint(measure.notes[0].__dict__) # Note Object
print("5. Start Time Position: "+ str(measure.start_time_position)) # Record the starting time of this measure so that time signatures
                                                                     # can be inserted at the beginning of the measure
print("6. State: ")
pp.pprint(measure.state.__dict__) # MusicXMLParserState object
print("7. Tempos:")
pp.pprint(measure.tempos) # Tempos Object
print("8. Time Signature: ")
pp.pprint(measure.time_signature.__dict__) # Time Signature Object
print("9. XML Measure: "+ str(measure.xml_measure))
print("\nNumber of notes in this measure : " + str(len(measure.notes)))


Accessing attributes of Measure: 

1. Chord Symbols: []
2. Duration: 18
3. Key Signature: 
{   'key': -1,
    'mode': 'major',
    'state': <magenta.music.musicxml_parser.MusicXMLParserState object at 0x7fc28b5edba8>,
    'time_position': 0,
    'xml_key': <Element 'key' at 0x7fc26b236778>}
4. Notes: 
{   'is_grace_note': False,
    'is_in_chord': False,
    'is_rest': True,
    'midi_channel': 1,
    'midi_program': 41,
    'note_duration': <magenta.music.musicxml_parser.NoteDuration object at 0x7fc2693764e0>,
    'pitch': None,
    'state': <magenta.music.musicxml_parser.MusicXMLParserState object at 0x7fc28b5edba8>,
    'velocity': 64,
    'voice': 1,
    'xml_note': <Element 'note' at 0x7fc26b236a48>}
5. Start Time Position: 0
6. State: 
{   'divisions': 6,
    'midi_channel': 1,
    'midi_program': 41,
    'previous_note': <magenta.music.musicxml_parser.Note object at 0x7fc26930de48>,
    'qpm': 120,
    'seconds_per_quarter': 0.5,
    'time_position': 49.500000000000014,
    'time_signature': <magenta.music.musicxml_parser.TimeSignature object at 0x7fc2693764a8>,
    'transpose': 0,
    'velocity': 64}
7. Tempos:
[]
8. Time Signature: 
{   'denominator': 4,
    'numerator': 3,
    'state': <magenta.music.musicxml_parser.MusicXMLParserState object at 0x7fc28b5edba8>,
    'time_position': 0,
    'xml_time': <Element 'time' at 0x7fc26b236868>}
9. XML Measure: <Element 'measure' at 0x7fc26b236408>

Number of notes in this measure : 7

Methods of MusicXML Measure:


In [0]:
method_list = get_method_list(measure)
pp.pprint(method_list)


[   '_fix_time_signature',
    '_parse',
    '_parse_attributes',
    '_parse_backup',
    '_parse_direction',
    '_parse_forward']

Class Note

Parsed from the MusicXML note element.

Internal Representation of Note class in Measure:


In [0]:
note = measure.notes[1]

In [0]:
# Built-in method to get the Note object attributes

pp.pprint(note.__str__())


('{duration: 2, midi_ticks: 73.33333333333333, seconds: 0.16666666666666666, '
 'pitch: A4, MIDI pitch: 69, voice: 1, velocity: 64} (@time: 0.5)')

In [0]:
pp.pprint(note.__dict__)


{   'is_grace_note': False,
    'is_in_chord': False,
    'is_rest': False,
    'midi_channel': 1,
    'midi_program': 41,
    'note_duration': <magenta.music.musicxml_parser.NoteDuration object at 0x7fc269376588>,
    'pitch': ('A4', 69),
    'state': <magenta.music.musicxml_parser.MusicXMLParserState object at 0x7fc28b5edba8>,
    'velocity': 64,
    'voice': 1,
    'xml_note': <Element 'note' at 0x7fc26b236bd8>}

In [0]:
print("Accessing attributes of Note: \n")

print("1. Is Grace Note: "+ str(note.is_grace_note))
print("2. Is in Chord: "+ str(note.is_in_chord))
print("3. Is Rest: "+ str(note.is_rest))
print("4. Midi Channel: "+ str(note.midi_channel))
print("5. Midi Program: "+ str(note.midi_program))
print("6. Duration: "+ str(note.note_duration)) # Note Duration object
print("7. Pitch: "+ str(note.pitch))
print("      Pitch: "+ str(note.pitch[0]))
print("      Midi Pitch: "+ str(note.pitch[1]))
print("8. State: ")
pp.pprint(note.state.__dict__) # MusicXMLParserState object
print("9. Velocity: "+ str(note.velocity))
print("10. Voice: "+ str(note.voice))
print("11. XML Note: "+ str(note.xml_note))


Accessing attributes of Note: 

1. Is Grace Note: False
2. Is in Chord: False
3. Is Rest: False
4. Midi Channel: 1
5. Midi Program: 41
6. Duration: <magenta.music.musicxml_parser.NoteDuration object at 0x7fc269376588>
7. Pitch: ('A4', 69)
      Pitch: A4
      Midi Pitch: 69
8. State: 
{   'divisions': 6,
    'midi_channel': 1,
    'midi_program': 41,
    'previous_note': <magenta.music.musicxml_parser.Note object at 0x7fc26930de48>,
    'qpm': 120,
    'seconds_per_quarter': 0.5,
    'time_position': 49.500000000000014,
    'time_signature': <magenta.music.musicxml_parser.TimeSignature object at 0x7fc2693764a8>,
    'transpose': 0,
    'velocity': 64}
9. Velocity: 64
10. Voice: 1
11. XML Note: <Element 'note' at 0x7fc26b236bd8>

Methods of MusicXML Note:


In [0]:
method_list = get_method_list(note)
pp.pprint(method_list)


['_parse', '_parse_pitch', '_parse_tuplet', 'pitch_to_midi_pitch']

pitch_to_midi_pitch function:

This function converts the MusicXML pitch representation to a MIDI pitch number. In MIDI format, the pitch of a note is represented by a single number.

MusicXML divides pitch up into three parts which together constitute a particular note:

  1. Step (A, B, C, D, E, F, or G)
  2. Alter (Sharp # / Flat b)
  3. Octave (0, 1, 2....7)

Pitch Class Mapping:

  Step:         C   C#  D   D#  E   F   F#  G   G#  A   A#  B
  Pitch class:  0   1   2   3   4   5   6   7   8   9   10  11

Alter Mapping:

  Accidental:   Sharp   Double Sharp   Flat   Double Flat   (none)
  Symbol:       #       x              b      bb
  Alter value:  1       2              -1     -2            0

The function takes the arguments (step, alter, octave). The MIDI pitch is derived as follows:

Conversion formula:

  pitch_class = (pitch_class + int(alter)) % 12
  midi_pitch = (12 + pitch_class) + (int(octave) * 12)

For example, consider the note C#4:
  pitch_class(C#) = 1
  octave = 4
  Therefore, C#4 = (12 + 1) + (4 * 12) = 13 + 48 = 61
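The conversion can be sketched end-to-end; the step-to-pitch-class table is standard music theory, while the helper function itself is illustrative rather than the parser's implementation:

```python
STEP_TO_PITCH_CLASS = {'C': 0, 'D': 2, 'E': 4, 'F': 5,
                       'G': 7, 'A': 9, 'B': 11}

def to_midi_pitch(step, alter, octave):
    """Apply the conversion formula above (illustrative sketch)."""
    pitch_class = (STEP_TO_PITCH_CLASS[step] + int(alter)) % 12
    return (12 + pitch_class) + int(octave) * 12

print(to_midi_pitch('C', 1, 4))   # 61 (C#4)
print(to_midi_pitch('A', 0, 4))   # 69 (A4, as seen for the note earlier)
```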


In [0]:
print(" Pitch (C#4) to Midi Pitch: "+ str(note.pitch_to_midi_pitch('C',1,4)))


 Pitch (C#4) to Midi Pitch: 61

Class NoteDuration:

This class represents a MusicXML note's duration properties.

Internal Representation of the NoteDuration class:


In [0]:
note_durations = note.note_duration

pp.pprint(note_durations.__dict__)


{   '_type': 'eighth',
    'dots': 0,
    'duration': 2,
    'is_grace_note': False,
    'midi_ticks': 73.33333333333333,
    'seconds': 0.16666666666666666,
    'state': <magenta.music.musicxml_parser.MusicXMLParserState object at 0x7fc28b5edba8>,
    'time_position': 0.5,
    'tuplet_ratio': Fraction(3, 2)}
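The seconds and midi_ticks values above are consistent with the parser state seen earlier (divisions = 6, 0.5 seconds per quarter note) and the document's MIDI resolution of 220 ticks per quarter note. A sketch of the assumed conversions:

```python
divisions = 6              # divisions per quarter note
seconds_per_quarter = 0.5  # 120 qpm
midi_resolution = 220      # ticks per quarter note

duration = 2                              # this note's MusicXML duration
quarters = duration / divisions           # 1/3 of a quarter note
seconds = quarters * seconds_per_quarter  # ~0.1667 s
midi_ticks = quarters * midi_resolution   # ~73.33 ticks
```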

In [0]:
print("Type Ratio Map: ")
pp.pprint(note_durations.TYPE_RATIO_MAP)


Type Ratio Map: 
{   '1024th': Fraction(1, 1024),
    '128th': Fraction(1, 128),
    '16th': Fraction(1, 16),
    '256th': Fraction(1, 256),
    '32nd': Fraction(1, 32),
    '512th': Fraction(1, 512),
    '64th': Fraction(1, 64),
    'breve': Fraction(2, 1),
    'eighth': Fraction(1, 8),
    'half': Fraction(1, 2),
    'long': Fraction(4, 1),
    'maxima': Fraction(8, 1),
    'quarter': Fraction(1, 4),
    'whole': Fraction(1, 1)}

_convert_type_to_ratio function:

This function converts the MusicXML note-type-value to a Python Fraction representing the note type.

Examples:

- whole = 1/1
- half = 1/2
- quarter = 1/4
- 32nd = 1/32

In [0]:
note_type = note_durations._convert_type_to_ratio()
print(note_durations._type)
print(note_type)


eighth
1/8

In [0]:
print("Accessing attributes of Note Duration: \n")

print("1. Dots: "+ str(note_durations.dots)) # Number of augmentation dots
print("2. Duration: "+ str(note_durations.duration)) # MusicXML duration
print("3. Is Grace Note: "+ str(note_durations.is_grace_note)) # Assume true until not found
print("4. Midi Ticks: "+ str(note_durations.midi_ticks)) # Duration in MIDI ticks 
print("5. Seconds: "+ str(note_durations.seconds)) # Duration in seconds
print("6. State: ")
pp.pprint(note_durations.state.__dict__) # MusicXmlParserState object
print("7. Time Position: "+ str(note_durations.time_position)) # Onset time in seconds
print("8. Tuplet Ratio: "+ str(note_durations.tuplet_ratio)) # Ratio for tuplets (default to 1)
print("9. Type: "+ str(note_durations._type)) #MusicXML duration type from the map


Accessing attributes of Note Duration: 

1. Dots: 0
2. Duration: 2
3. Is Grace Note: False
4. Midi Ticks: 73.33333333333333
5. Seconds: 0.16666666666666666
6. State: 
{   'divisions': 6,
    'midi_channel': 1,
    'midi_program': 41,
    'previous_note': <magenta.music.musicxml_parser.Note object at 0x7fc26930de48>,
    'qpm': 120,
    'seconds_per_quarter': 0.5,
    'time_position': 49.500000000000014,
    'time_signature': <magenta.music.musicxml_parser.TimeSignature object at 0x7fc2693764a8>,
    'transpose': 0,
    'velocity': 64}
7. Time Position: 0.5
8. Tuplet Ratio: 3/2
9. Type: eighth

Methods of MusicXML NoteDuration:


In [0]:
method_list = get_method_list(note_durations)
pp.pprint(method_list)


['_convert_type_to_ratio', 'duration_float', 'duration_ratio', 'parse_duration']
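duration_ratio combines the note type with dots and the tuplet ratio. For the eighth-note triplet above (type eighth, 0 dots, tuplet ratio 3/2), a sketch of how these combine (assumed semantics: each dot extends the value by half of the previous addition, and a tuplet ratio of 3/2 means three notes in the time of two):

```python
from fractions import Fraction

type_ratio = Fraction(1, 8)     # 'eighth' in TYPE_RATIO_MAP
tuplet_ratio = Fraction(3, 2)   # triplet: 3 notes in the time of 2
dots = 0

# 1 for no dots, 3/2 for one dot, 7/4 for two dots, ...
dot_multiplier = 2 - Fraction(1, 2 ** dots)
ratio = type_ratio * dot_multiplier / tuplet_ratio

print(ratio)       # 1/12 of a whole note
print(ratio * 24)  # 2 MusicXML divisions (a whole note = 4 * 6 divisions)
```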

5. Class ScorePart:

A [score-part] class contains MIDI program and channel information for the [part] elements in the MusicXML document.
If no MIDI info is found for the part, the parser uses the default MIDI channel (0) and defaults to the Grand Piano program (MIDI Program #1).

Internal representation of a MusicXML ScorePart Class.


In [0]:
print("Type of _Score_parts :" )
print(type(musicxml_document._score_parts))


Type of _Score_parts :
<class 'dict'>

In [0]:
score_part = musicxml_document._score_parts.items()
print(score_part.__str__())


dict_items([('P1', <magenta.music.musicxml_parser.ScorePart object at 0x7fc26b239208>)])

In [0]:
score_part_val = musicxml_document._score_parts['P1']
print(score_part_val)


ScorePart: , Channel: 1, Program: 41

In [0]:
print("Accessing attributes of Score Part: \n")

print("ID: " + str(score_part_val.id))
print("Midi channel: " + str(score_part_val.midi_channel))
print("Midi Program :" + str(score_part_val.midi_program))
print("Part Name :" + str(score_part_val.part_name))


Accessing attributes of Score Part: 

ID: P1
Midi channel: 1
Midi Program :41
Part Name :

Methods of MusicXML ScorePart:


In [0]:
method_list = get_method_list(score_part_val)
pp.pprint(method_list)


['_parse']

6. Midi Resolution:


In [0]:
midi_resolution = musicxml_document.midi_resolution
print("Midi Resolution : " + str(midi_resolution))


Midi Resolution : 220

7. Total Time Seconds


In [0]:
total_time_sec = musicxml_document.total_time_secs
print("Total time seconds : " + str(total_time_sec))


Total time seconds : 49.500000000000014
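As a closing consistency check, the total duration can be reproduced from values seen throughout this notebook: 33 measures (len(part.measures)) of 3/4 time at 120 qpm. A sketch of the arithmetic (assumed: every measure is a full 3/4 bar):

```python
measures = 33              # len(part.measures)
beats_per_measure = 3      # time signature 3/4, quarter-note beats
seconds_per_quarter = 0.5  # 120 qpm

total = measures * beats_per_measure * seconds_per_quarter
print(total)               # 49.5, matching total_time_secs up to float rounding
```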