In [1]:
from policy import NEATProperty, PropertyArray, properties_to_json
from cib import CIB
from pib import PIB, NEATPolicy
We define four properties to represent these application requirements and combine them into a request:
In [2]:
property1 = NEATProperty(('low_latency', True), precedence=NEATProperty.IMMUTABLE)
property2 = NEATProperty(('remote_ip', '10.1.23.45'), precedence=NEATProperty.IMMUTABLE)
property3 = NEATProperty(('MTU', {"start": 1500, "end": 9000}), precedence=NEATProperty.OPTIONAL)
property4 = NEATProperty(('TCP', True))  # OPTIONAL is the default property precedence
request = PropertyArray(property1, property2, property3, property4)
print(request)
In [3]:
properties_to_json(request)
Out[3]:
In [4]:
cib = CIB('cib/example/')
The currently known network characteristics are stored as entries in the CIB, where each entry contains a set of properties associated with some interface:
In [5]:
cib.dump()
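Conceptually, each entry maps property names to values for one interface. As a purely illustrative sketch (the values below are hypothetical, not taken from cib/example/), an entry might look like:
In [ ]:
# Hypothetical shape of a CIB entry; the real entries come from the files
# in cib/example/ and are shown by cib.dump() above.
example_entry = {
    'interface': 'en0',   # interface this entry describes
    'is_wired': True,     # link type
    'MTU': 1500,          # link MTU
    'capacity': 10000,    # link capacity
}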
In [6]:
profiles = PIB('pib/example/')  # PIB repository holding profiles
pib = PIB('pib/example/')       # PIB repository holding policies
For the current scenario, the low latency profile is defined as follows:
In [14]:
profile1 = NEATPolicy()
profile1.match.add(NEATProperty(('low_latency', True)))
profile1.properties.add(NEATProperty(('is_wired', True)),
                        NEATProperty(('interface_latency', (0, 40)), precedence=NEATProperty.IMMUTABLE))
profiles.register(profile1)
Next, we define two sample policies and add them to the Policy Information Base (PIB).
A "bulk transfer" policy is configured which is triggered by a specific destination IP, which is known to be the address of backup NFS share:
In [15]:
policy1 = NEATPolicy(name='Bulk transfer')
policy1.match.add(NEATProperty(('remote_ip', '10.1.23.45')))
policy1.properties.add(NEATProperty(('capacity', (10000, 100000)), precedence=NEATProperty.IMMUTABLE),
                       NEATProperty(('MTU', 9600)))
print(policy1)
Another policy is in place to enable TCP window scaling on 10G links (if possible):
In [16]:
policy2 = NEATPolicy(name='TCP options')
policy2.match.add(NEATProperty(('MTU', 9600)), NEATProperty(('is_wired', True)))
policy2.properties.add(NEATProperty(('TCP_window_scale', True)))
In [ ]:
pib.register(policy1)
pib.register(policy2)
pib.dump()
First, we apply the low_latency profile to the request properties. The low_latency property in the request is replaced by the corresponding profile properties:
In [ ]:
print(request.properties)
# apply any matching profiles in place; the matched low_latency property is removed
profiles._lookup(request.properties, remove_matched=True, apply=True)
print(request.properties)
In [ ]:
cib.lookup(request)
request.dump()
Each candidate comprises the union of the properties of a single CIB entry and the application request. Wherever the two sets intersect, the values of the corresponding properties are compared. If two properties match, the associated candidate property score is increased (e.g., [MTU|1500]+1.0 indicates a new score of 1.0); the score is decreased if the property values mismatch.
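As a rough sketch of this comparison step (not the PM's actual implementation; it assumes a NEATProperty exposes value, score, and precedence attributes, and it simplifies value comparison to plain equality), the score update could look like:
In [ ]:
# Illustrative sketch of the per-property comparison described above.
def update_score(candidate_prop, entry_value):
    if candidate_prop.value == entry_value:
        candidate_prop.score += 1.0  # values match: increase the score
    elif candidate_prop.precedence == NEATProperty.IMMUTABLE:
        # a mismatch on an immutable property invalidates the candidate
        raise ValueError('immutable property mismatch')
    else:
        candidate_prop.score -= 1.0  # mismatch on an optional property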
In [ ]:
pib.lookup_all(request.candidates)
request.dump()
Candidate 1 becomes:
In [ ]:
request.candidates[0].dump()
Next we examine Candidate 2:
In [ ]:
request.candidates[1].dump()
Note that the score of the MTU property was reduced, as it did not match the MTU value requested by the "Bulk transfer" policy.
The "TCP options" policy is not applied as the candidate does not match the policy's MTU property.
Candidate 3 was invalidated because the "Bulk transfer" policy contains an immutable property requiring a capacity of 10G, which this candidate cannot fulfil.
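To make the invalidation concrete, here is the range check that fails, using a hypothetical capacity value for candidate 3:
In [ ]:
required_capacity = (10000, 100000)  # immutable range from the "Bulk transfer" policy
candidate_capacity = 1000            # hypothetical value for candidate 3
# False -> immutable mismatch -> the candidate is invalidated
print(required_capacity[0] <= candidate_capacity <= required_capacity[1])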
Finally, we can obtain the total score of the properties associated with each candidate:
In [ ]:
print(request.candidates[0].score)
In [ ]:
print(request.candidates[1].score)
The scores indicate that Candidate 1 (interface en0) is the most suitable for the given application request.
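Based on these scores, the NEAT logic could, for example, simply select the highest-scoring candidate (one possible strategy, not mandated by the PM):
In [ ]:
# pick the candidate with the highest total score
best = max(request.candidates, key=lambda c: c.score)
best.dump()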
In [ ]:
request.candidates[0].properties.json()
In [ ]:
request.candidates[1].properties.json()
Note that properties that were not matched or updated during the lookup have a score of NaN. This means that the PM did not have enough information to rank these properties. The NEAT logic must decide how to deal with these unprocessed properties.
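For example, a hypothetical helper (not part of the PM API) could total only the scores that were actually produced, skipping NaN entries; it assumes an iterable of NEATProperty-like objects with a numeric score attribute:
In [ ]:
import math

def scored_total(props):
    # hypothetical helper: sum the scores of all properties that were
    # actually ranked, ignoring those left at NaN
    return sum(p.score for p in props if not math.isnan(p.score))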