In [1]:
from jupyterthemes import get_themes
from jupyterthemes.stylefx import set_nb_theme
themes = get_themes()
set_nb_theme(themes[1])
Out[1]:
In [2]:
%load_ext watermark
%watermark -a 'Ethen' -d -t -v -p jupyterthemes
There are two main ways to traverse a tree or graph. By traversing, we mean walking through the structure and visiting every node exactly once, without repeating ourselves.
From an implementation standpoint, the difference between the two is that breadth first search uses a queue (first in, first out), while depth first search uses a stack (last in, first out).
Breadth First Search (BFS)
We will use a graph that has 7 nodes and 7 edges as our example. The edges are undirected and unweighted, and the distance between two nodes is measured by the number of edges separating them.
When using BFS to traverse all the nodes in the graph, the algorithm starts at a chosen node and marks it as visited, then visits each of that node's neighbors before moving on to the neighbors' neighbors. A queue holds the nodes that have been discovered but not yet processed, which is what produces the level-by-level ordering.
In [3]:
# adjacency list can be efficiently represented as
# a python dictionary, where the nodes are the keys
# and the nodes that they are connected to are stored
# as a list of values
graph = {'A': ['B', 'C', 'E'],
         'B': ['A', 'D', 'E'],
         'C': ['A', 'F', 'G'],
         'D': ['B'],
         'E': ['A', 'B', 'D'],
         'F': ['C'],
         'G': ['C']}
In [4]:
from collections import deque


def bfs(graph, start):
    """Graph traversal using Breadth First Search."""
    # keep track of all visited nodes
    visited = set()
    # keep track of nodes to be checked, starting from the start node
    queue = deque([start])
    while queue:
        node = queue.popleft()
        if node not in visited:
            visited.add(node)
            # add unvisited neighbors to the back of the queue
            for neighbor in graph[node]:
                if neighbor not in visited:
                    queue.append(neighbor)

    return visited
In [5]:
start = 'A'
bfs(graph, start)
Out[5]:
Depth first search follows the same template, except the nodes waiting to be visited are kept on a stack: we always expand the most recently discovered node first, going as deep as possible before backtracking.
In [6]:
def dfs(graph, start):
    """Graph traversal using Depth First Search."""
    visited = set()
    stack = [start]
    while stack:
        node = stack.pop()
        if node not in visited:
            visited.add(node)
            # push unvisited neighbors onto the stack
            for neighbor in graph[node]:
                if neighbor not in visited:
                    stack.append(neighbor)

    return visited
In [7]:
start = 'A'
dfs(graph, start)
Out[7]:
In order for a binary tree to be searchable, all of the nodes in the left subtree of the root node must hold values smaller than the root node's value. On the other hand, all of the nodes in the right subtree must hold values bigger than the root node's value. This property applies recursively to every subtree.
One of the main use cases for a binary search tree is to check for the presence of a value in O(log(n)) time (assuming the tree stays reasonably balanced). The rationale behind this is that if the value we're looking for is below the root's value, we can say for sure that it is not in the right subtree, so we only need to search the left subtree; and if the value is above the root's value, we can say for sure that it is not in the left subtree, so we only need to search the right subtree. Each comparison discards roughly half of the remaining tree.
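This halving process can be sketched with a minimal binary search tree. The `Node`, `insert`, and `contains` names below are illustrative choices, not from the text:

```python
class Node:
    """A single node of a binary search tree."""
    def __init__(self, value):
        self.value = value
        self.left = None
        self.right = None


def insert(root, value):
    """Insert a value while keeping the BST property intact."""
    if root is None:
        return Node(value)
    if value < root.value:
        root.left = insert(root.left, value)
    elif value > root.value:
        root.right = insert(root.right, value)
    return root


def contains(root, value):
    """Search by discarding half of the tree at each comparison."""
    while root is not None:
        if value == root.value:
            return True
        # smaller values can only live in the left subtree,
        # bigger values only in the right subtree
        root = root.left if value < root.value else root.right
    return False


root = None
for value in [8, 3, 10, 1, 6, 14]:
    root = insert(root, value)

print(contains(root, 6))   # value present in the tree
print(contains(root, 7))   # value absent from the tree
```

Note that the O(log(n)) bound only holds if the tree is balanced; inserting already-sorted values into this sketch degenerates it into a linked list with O(n) search.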
The heap is a binary tree with two additional rules: it must be complete (every level is filled except possibly the last, which fills from left to right), and it must satisfy the heap ordering property (every parent node's value is consistently ordered with respect to its children's values).
We can classify heaps into two categories based on the ordering: a min-heap, where every parent is smaller than or equal to its children, or a max-heap, where every parent is bigger than or equal to its children.
From an implementation standpoint, heaps are often represented as arrays. We start off by storing the root node of the heap as the first element of the array. Then if a parent node is located at index i in the array, its left child will always be located at index 2i + 1 and its right child at index 2i + 2. Conversely, a child at index i has its parent at index (i - 1) // 2.
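Python's built-in heapq module maintains a min-heap inside a plain list, so we can verify this index arithmetic directly. The sample values here are arbitrary:

```python
import heapq

# heapq arranges the list so that the min-heap property holds
heap = [9, 4, 7, 1, 3, 8]
heapq.heapify(heap)

n = len(heap)
for i in range(n):
    left, right = 2 * i + 1, 2 * i + 2
    # min-heap property: every parent is <= each of its children
    if left < n:
        assert heap[i] <= heap[left]
    if right < n:
        assert heap[i] <= heap[right]

# the smallest element always sits at the root (index 0)
print(heap[0])  # → 1
```

Because the tree is complete, the array has no gaps: the implicit parent/child pointers are pure index arithmetic, which is what makes the array representation so compact.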