This notebook contains the functions that provide the scaffolding needed to test out mapping methods. The following steps are taken to test functions and calibrate data for the project:
- The process_image() function is populated with the appropriate steps/functions to go from a raw image to a worldmap.
- moviepy functions are used to construct a video output from the processed image data.
- perception.py and decision.py are modified to allow the rover to navigate and map in autonomous mode!
Note: If, at any point, display windows freeze up or other confounding issues are encountered, restart the kernel and clear the output from the "Kernel" menu above.
Uncomment and run the next cell to get code highlighting in the markdown cells.
In [2]:
#%%HTML
#<style> code {background-color : orange !important;} </style>
In [3]:
%matplotlib inline
#%matplotlib qt # Choose %matplotlib qt to plot to an interactive window
import cv2 # OpenCV for perspective transform
import numpy as np
import matplotlib.image as mpimg
import matplotlib.pyplot as plt
import scipy.misc # For saving images as needed
import glob # For reading in a list of images from a folder
There's some example data provided in the test_dataset folder. This basic dataset is enough to get you up and running, but if you want to hone your methods more carefully you should record some data of your own to sample various scenarios in the simulator.
Next, read in and display a random image from the test_dataset folder.
In [4]:
path = '../test_dataset/IMG/*'
img_list = glob.glob(path)
# Grab a random image and display it
idx = np.random.randint(0, len(img_list))  # randint's upper bound is exclusive
image = mpimg.imread(img_list[idx])
plt.imshow(image)
Out[4]:
In [5]:
# In the simulator the grid on the ground can be toggled on for calibration.
# The rock samples can be toggled on with the 0 (zero) key.
# Here's an example of the grid and one of the rocks
example_grid = '../calibration_images/example_grid1.jpg'
example_rock = '../calibration_images/example_rock1.jpg'
example_rock2 = '../calibration_images/example_rock2.jpg'
grid_img = mpimg.imread(example_grid)
rock_img = mpimg.imread(example_rock)
rock_img2 = mpimg.imread(example_rock2)
fig = plt.figure(figsize=(12,3))
plt.subplot(131)
plt.imshow(grid_img)
plt.subplot(132)
plt.imshow(rock_img)
plt.subplot(133)
plt.imshow(rock_img2)
Out[5]:
Define the perspective transform function and test it on an image.
Four source points are selected that outline a 1 square meter grid cell in the image from the rover's front camera; the example grid image above is used to pick the cell directly in front of the rover (each grid cell in the sim is 1 square meter). These source points are mapped to four destination points chosen so that the same grid cell becomes a 10x10 pixel square in the "warped" image. In other words, the front-camera view is warped into a top-down view in which each 10x10 pixel square represents 1 square meter.
Warping an image with a perspective transform takes two steps: compute the 3x3 transform matrix from the source/destination point pairs with cv2.getPerspectiveTransform, then apply it to the image with cv2.warpPerspective.
Refer to the OpenCV documentation on geometric transformations: http://docs.opencv.org/trunk/da/d6e/tutorial_py_geometric_transformations.html
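For concreteness, here is a quick worked example of where the destination points land (a sketch; it assumes the 320x160 camera image used by the simulator and the dst_size/bottom_offset values chosen in the next cell):
In [ ]:
# Worked example: destination box for a 320x160 image with dst_size = 5 and
# bottom_offset = 6 (the same values used in the next cell)
w, h, dst_size_ex, bottom_offset_ex = 320, 160, 5, 6
dst_box = [(w/2 - dst_size_ex, h - bottom_offset_ex),
           (w/2 + dst_size_ex, h - bottom_offset_ex),
           (w/2 + dst_size_ex, h - 2*dst_size_ex - bottom_offset_ex),
           (w/2 - dst_size_ex, h - 2*dst_size_ex - bottom_offset_ex)]
print(dst_box)  # [(155.0, 154), (165.0, 154), (165.0, 144), (155.0, 144)]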
In [6]:
def perspect_transform(input_img, sourc_pts, destn_pts):
"""
Apply a perspective transformation to input 3D image.
Keyword arguments:
input_img -- 3D numpy image on which perspective transform is applied
sourc_pts -- numpy array of four source coordinates on input 3D image
destn_pts -- corresponding destination coordinates on output 2D image
Return value:
output_img -- 2D numpy image with overhead view
"""
transform_matrix = cv2.getPerspectiveTransform(
sourc_pts,
destn_pts
)
output_img = cv2.warpPerspective(
input_img,
transform_matrix,
(input_img.shape[1], input_img.shape[0]) # keep same size as input_img
)
return output_img
# Define calibration box in source (actual) and destination (desired)
# coordinates to warp the image to a grid where each 10x10 pixel square
# represents 1 square meter and the destination box will be 2*dst_size
# on each side
dst_size = 5
# Set a bottom offset (rough estimate) to account for the fact that the
# bottom of the image is not the position of the rover but a bit in front
# of it
bottom_offset = 6
source = np.float32(
[[14, 140],
[301, 140],
[200, 96],
[118, 96]]
)
destination = np.float32(
[
[image.shape[1]/2 - dst_size,
image.shape[0] - bottom_offset],
[image.shape[1]/2 + dst_size,
image.shape[0] - bottom_offset],
[image.shape[1]/2 + dst_size,
image.shape[0] - 2*dst_size - bottom_offset],
[image.shape[1]/2 - dst_size,
image.shape[0] - 2*dst_size - bottom_offset]
]
)
warped = perspect_transform(grid_img, source, destination)
plt.imshow(warped)
# scipy.misc.imsave('../output/warped_example.jpg', warped)
Out[6]:
In [7]:
warped_rock = perspect_transform(rock_img, source, destination)
warped_rock2 = perspect_transform(rock_img2, source, destination)
fig = plt.figure(figsize=(16,7))
plt.subplot(221)
plt.imshow(rock_img)
plt.subplot(222)
plt.imshow(rock_img2)
plt.subplot(223)
plt.imshow(warped_rock)
plt.subplot(224)
plt.imshow(warped_rock2)
Out[7]:
In [8]:
rock1_pixels = np.copy(rock_img)
# Zoom in on the small patch of the image that contains the rock sample
plt.imshow(rock1_pixels[90:112, 150:172])
Out[8]:
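The patch above isolates the rock sample, which is handy for picking threshold values. Below is a minimal sketch (added for illustration, using the same patch coordinates as above) that prints the HSV channel ranges inside the patch to guide the bounds chosen a couple of cells further down:
In [ ]:
# Inspect the HSV channel ranges inside the rock patch; the min/max values
# give a starting point for the lower/upper bounds used in color_thresh_rock
rock_patch = np.copy(rock_img[90:112, 150:172])
hsv_patch = cv2.cvtColor(rock_patch, cv2.COLOR_BGR2HSV)  # same conversion used below
for name, channel in zip(('H', 'S', 'V'), range(3)):
    print(name, 'range:', hsv_patch[:, :, channel].min(), '-', hsv_patch[:, :, channel].max())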
Define the color thresholding function for navigable terrain and apply it to the warped image.
Ultimately, the map not only includes navigable terrain but also obstacles and the positions of the rock samples we're searching for. New functions are needed to return the pixel locations of obstacles (areas below the threshold) and rock samples (yellow rocks in calibration images), such that these areas can be mapped into world coordinates as well.
In [9]:
def color_thresh_nav(input_img, rgb_thresh=(160, 160, 160)):
"""
Apply a color threshold to extract only ground terrain pixels.
Keyword arguments:
input_img -- numpy image on which RGB threshold is applied
rgb_thresh -- RGB thresh tuple above which only ground pixels are detected
Return value:
nav_img -- binary image identifying ground/navigable terrain pixels
"""
# Create an array of zeros same xy size as input_img, but single channel
nav_img = np.zeros_like(input_img[:, :, 0])
# Require that each of the R(0), G(1), B(2) pixels be above all three
# rgb_thresh values such that pix_above_thresh will now contain a
# boolean array with "True" where threshold was met
pix_above_thresh = (
(input_img[:, :, 0] > rgb_thresh[0]) &
(input_img[:, :, 1] > rgb_thresh[1]) &
(input_img[:, :, 2] > rgb_thresh[2])
)
# Index the array of zeros with the boolean array and set to 1 (white)
# those pixels that are above rgb_thresh for ground/navigable terrain
nav_img[pix_above_thresh] = 1
# nav_img will now contain white pixels identifying navigable terrain
return nav_img
threshed = color_thresh_nav(warped)
plt.imshow(threshed, cmap='gray')
#scipy.misc.imsave('../output/warped_threshed.jpg', threshed*255)
Out[9]:
In [44]:
def color_thresh_obst(input_img, rgb_thresh=(160, 160, 160)):
"""
Apply a color threshold to extract only mountain rock pixels.
Keyword arguments:
input_img -- numpy image on which RGB threshold is applied
    rgb_thresh -- RGB threshold tuple at or below which obstacle pixels are detected
    Return value:
    obs_img -- binary image identifying obstacle (mountain/rock wall) terrain pixels
"""
# Create an array of zeros same xy size as input_img, but single channel
obs_img = np.zeros_like(input_img[:, :, 0])
    # Require that each of the R(0), G(1), B(2) channels be at or below the
    # corresponding rgb_thresh value but still above 0, so that the black
    # regions outside the warped field of view are not counted as obstacles.
    # pix_below_thresh is a boolean array with "True" where the test was met.
    pix_below_thresh = (
        (np.logical_and(input_img[:, :, 0] > 0, input_img[:, :, 0] <= rgb_thresh[0])) &
        (np.logical_and(input_img[:, :, 1] > 0, input_img[:, :, 1] <= rgb_thresh[1])) &
        (np.logical_and(input_img[:, :, 2] > 0, input_img[:, :, 2] <= rgb_thresh[2]))
    )
# Index the array of zeros with the boolean array and set to 1 (white)
# those pixels that are below rgb_thresh for rocks/obstacles terrain
obs_img[pix_below_thresh] = 1
# obs_img will now contain white pixels identifying obstacle terrain
return obs_img
threshed_obstacles_image = color_thresh_obst(warped)
plt.imshow(threshed_obstacles_image, cmap='gray')
Out[44]:
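As a quick sanity check (not part of the original pipeline, just a sketch): the navigable and obstacle masks should never overlap, since a pixel cannot have all channels both above and at-or-below the threshold. Printing the counts alongside the number of non-black warped pixels also gives a feel for how much of the field of view each mask covers:
In [ ]:
# Sanity check on the two threshold functions applied to the warped grid image
overlap = np.sum(threshed * threshed_obstacles_image)
print('overlapping pixels:', overlap)  # expect 0
print('navigable pixels  :', np.sum(threshed))
print('obstacle pixels   :', np.sum(threshed_obstacles_image))
print('non-black pixels  :', np.sum(np.all(warped > 0, axis=-1)))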
In [11]:
def color_thresh_rock(input_img, low_bound, upp_bound):
"""
Apply a color threshold using OpenCV to extract pixels for gold rocks.
Keyword arguments:
input_img -- numpy image on which OpenCV HSV threshold is applied
low_bound -- tuple defining lower HSV color value for gold rocks
upp_bound -- tuple defining upper HSV color value for gold rocks
Return value:
threshed_img -- binary image identifying gold rock pixels
"""
    # Convert to HSV. Note: images read with mpimg.imread are RGB, so
    # cv2.COLOR_BGR2HSV effectively swaps the R and B channels before
    # converting; the bounds below were chosen with this conversion in mind.
    hsv_img = cv2.cvtColor(input_img, cv2.COLOR_BGR2HSV)
# Threshold the HSV image to get only colors for gold rocks
threshed_img = cv2.inRange(hsv_img, low_bound, upp_bound)
return threshed_img
# define range of gold rock color in HSV
lower_bound = (75, 130, 130)
upper_bound = (255, 255, 255)
# apply rock color threshold to original rocks 1 and 2 images
threshed_rock_image = color_thresh_rock(
rock_img,
lower_bound,
upper_bound
)
threshed_rock2_image = color_thresh_rock(
rock_img2,
lower_bound,
upper_bound
)
# apply rock color threshold to warped rocks 1 and 2 images
threshed_warped_rock_image = color_thresh_rock(
warped_rock,
lower_bound,
upper_bound
)
threshed_warped_rock2_image = color_thresh_rock(
warped_rock2,
lower_bound,
upper_bound
)
# verify correctness of gold rock threshold
fig = plt.figure(figsize=(20,11))
plt.subplot(421)
plt.imshow(rock_img)
plt.subplot(422)
plt.imshow(threshed_rock_image, cmap='gray')
plt.subplot(423)
plt.imshow(warped_rock)
plt.subplot(424)
plt.imshow(threshed_warped_rock_image, cmap='gray')
plt.subplot(425)
plt.imshow(rock_img2)
plt.subplot(426)
plt.imshow(threshed_rock2_image, cmap='gray')
plt.subplot(427)
plt.imshow(warped_rock2)
plt.subplot(428)
plt.imshow(threshed_warped_rock2_image, cmap='gray')
Out[11]:
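When this gets folded into the mapping step, the rock blob is usually reduced to a single representative location. A minimal sketch (illustration only; the centroid of the thresholded pixels is one reasonable choice, but how you pick the point in perception.py is up to you):
In [ ]:
# Reduce the thresholded rock blob to a single representative location
rock_ys, rock_xs = threshed_warped_rock_image.nonzero()
if len(rock_xs) > 0:
    print('rock centroid (col, row):', int(np.mean(rock_xs)), int(np.mean(rock_ys)))
else:
    print('no rock pixels found')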
In [45]:
def to_rover_coords(binary_img):
"""Convert all points on img coord-frame to those on rover's frame."""
# Identify nonzero pixels in binary image representing
# region of interest e.g. rocks
ypos, xpos = binary_img.nonzero()
# Calculate pixel positions with reference to rover's coordinate
# frame given that rover front cam itself is at center bottom of
# the photographed image.
x_pixel = -(ypos - binary_img.shape[0]).astype(np.float)
y_pixel = -(xpos - binary_img.shape[1]/2).astype(np.float)
return x_pixel, y_pixel
def to_polar_coords(x_pix, y_pix):
"""Convert cartesian coordinates to polar coordinates."""
    # Compute each pixel's distance from the origin and its angle
    # measured from the rover's forward (x) axis
distances = np.sqrt(x_pix**2 + y_pix**2)
angles = np.arctan2(y_pix, x_pix)
return distances, angles
def rotate_pix(x_pix, y_pix, angle):
"""Apply a geometric rotation."""
angle_rad = angle * np.pi / 180 # yaw to radians
x_pix_rotated = (x_pix * np.cos(angle_rad)) - (y_pix * np.sin(angle_rad))
y_pix_rotated = (x_pix * np.sin(angle_rad)) + (y_pix * np.cos(angle_rad))
return x_pix_rotated, y_pix_rotated
def translate_pix(x_pix_rot, y_pix_rot, x_pos, y_pos, scale):
"""Apply a geometric translation and scaling."""
x_pix_translated = (x_pix_rot / scale) + x_pos
y_pix_translated = (y_pix_rot / scale) + y_pos
return x_pix_translated, y_pix_translated
def pix_to_world(x_pix, y_pix, x_pos, y_pos, yaw, world_size, scale):
"""
Apply a geometric transformation i.e. rotation and translation to ROI.
Keyword arguments:
x_pix, y_pix -- numpy array coords of ROI being converted to world frame
x_pos, y_pos, yaw -- rover position and yaw angle in world frame
world_size -- integer length of the square world map (200 x 200 pixels)
scale -- scale factor between world frame pixels and rover frame pixels
Note:
Requires functions rotate_pix and translate_pix to work
"""
# Apply rotation and translation
x_pix_rot, y_pix_rot = rotate_pix(
x_pix,
y_pix,
yaw
)
x_pix_tran, y_pix_tran = translate_pix(
x_pix_rot,
y_pix_rot,
x_pos,
y_pos,
scale
)
# Clip pixels to be within world_size
x_pix_world = np.clip(np.int_(x_pix_tran), 0, world_size - 1)
y_pix_world = np.clip(np.int_(y_pix_tran), 0, world_size - 1)
return x_pix_world, y_pix_world
# Grab another random image
idx = np.random.randint(0, len(img_list))  # randint's upper bound is exclusive
image = mpimg.imread(img_list[idx])
warped = perspect_transform(image, source, destination)
threshed = color_thresh_nav(warped)
# Calculate pixel values in rover-centric coords and
# distance/angle to all pixels
xpix, ypix = to_rover_coords(threshed)
dist, angles = to_polar_coords(xpix, ypix)
mean_dir = np.mean(angles)
# Trim away pixels farther than 130 units in the rover frame; distant
# pixels are noisier after the perspective transform
xpix = xpix[dist < 130]
ypix = ypix[dist < 130]
# Do some plotting
fig = plt.figure(figsize=(12,9))
plt.subplot(221)
plt.imshow(image)
plt.subplot(222)
plt.imshow(warped)
plt.subplot(223)
plt.imshow(threshed, cmap='gray')
plt.subplot(224)
plt.plot(xpix, ypix, '.')
plt.ylim(-160, 160)
plt.xlim(0, 160)
arrow_length = 100
x_arrow = arrow_length * np.cos(mean_dir)
y_arrow = arrow_length * np.sin(mean_dir)
plt.arrow(0, 0, x_arrow, y_arrow, color='red', zorder=2, head_width=10, width=2)
Out[45]:
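To make the rotation/translation/scaling convention concrete, here is a small worked example pushing a single rover-frame point through pix_to_world (the pose values are made up purely for illustration):
In [ ]:
# Worked example: a point 10 m ahead of the rover (100 pixels at 10 px/m),
# with the rover at world position (100, 100) facing yaw = 90 degrees,
# should land about 10 m "up" from the rover on the 200x200 world map.
x_world, y_world = pix_to_world(
    np.array([100.0]), np.array([0.0]),  # rover-frame coords (x forward, y left)
    100, 100, 90,                        # hypothetical x_pos, y_pos, yaw (degrees)
    200, 10                              # world_size, scale
)
print(x_world, y_world)  # expect approximately [100] [110]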
In [54]:
x_nav_test_pix, y_nav_test_pix = to_rover_coords(threshed)
nav_test_dists, nav_test_angles = to_polar_coords(x_nav_test_pix, y_nav_test_pix)
mean_test_angle = np.mean(nav_test_angles)
# separate nav_test_angles into left and right angles
nav_test_left_angles = nav_test_angles[nav_test_angles > 0]
mean_test_left_angle = np.mean(nav_test_left_angles)
nav_test_right_angles = nav_test_angles[nav_test_angles < 0]
mean_test_right_angle = np.mean(nav_test_right_angles)
print('nav_test_angles:')
print(nav_test_angles)
print('amount: ', len(nav_test_angles))
print('mean:', mean_test_angle * 180 / np.pi)
print('')
print('nav_test_left_angles:')
print(nav_test_left_angles)
print('amount: ', len(nav_test_left_angles))
print('mean:', mean_test_left_angle * 180 / np.pi)
print('')
print('nav_test_right_angles:')
print(nav_test_right_angles)
print('amount: ', len(nav_test_right_angles))
print('mean:', mean_test_right_angle * 180 / np.pi)
print('')
#### do some plotting ######
fig = plt.figure(figsize=(12,9))
plt.plot(x_nav_test_pix, y_nav_test_pix, '.')
plt.ylim(-160, 160)
plt.xlim(0, 180)
arrow_length = 150
# main test angle
x_mean_test_angle = arrow_length * np.cos(mean_test_angle)
y_mean_test_angle = arrow_length * np.sin(mean_test_angle)
plt.arrow(0, 0, x_mean_test_angle, y_mean_test_angle, color='red', zorder=2, head_width=10, width=2)
# main left test angle
x_mean_test_left_angle = arrow_length * np.cos(mean_test_left_angle)
y_mean_test_left_angle = arrow_length * np.sin(mean_test_left_angle)
plt.arrow(0, 0, x_mean_test_left_angle, y_mean_test_left_angle, color='yellow', zorder=2, head_width=10, width=2)
# main right test angle
x_mean_test_right_angle = arrow_length * np.cos(mean_test_right_angle)
y_mean_test_right_angle = arrow_length * np.sin(mean_test_right_angle)
plt.arrow(0, 0, x_mean_test_right_angle, y_mean_test_right_angle, color='blue', zorder=2, head_width=10, width=2)
Out[54]:
In [25]:
nav_x_pixs, nav_y_pixs = to_rover_coords(threshed)
nav_dists, nav_angles = to_polar_coords(nav_x_pixs, nav_y_pixs)
print('nav_x_pixs:')
print(nav_x_pixs)
print(nav_x_pixs.shape)
print('')
print('nav_y_pixs:')
print(nav_y_pixs)
print(nav_y_pixs.shape)
print('')
print('nav_dists:')
print('len(nav_dists):', len(nav_dists))
print(nav_dists[:4])
print('mean:', np.mean(nav_dists))
print('shape:', nav_dists.shape)
print('')
# Trim away the navigable pixels that are farthest from the rover
trim_nav_x_pixs = nav_x_pixs[nav_dists < 120]
print('trim_nav_x_pixs')
print(trim_nav_x_pixs)
trim_nav_y_pixs = nav_y_pixs[nav_dists < 120]
print('trim_nav_y_pixs')
print(trim_nav_y_pixs)
The next cell is all set up to read data saved from the rover's sensors into a pandas dataframe. Here we'll also read in a "ground truth" map of the world, where white pixels (pixel value = 1) represent navigable terrain.
After that, we'll define a class to store telemetry data and pathnames to images. When the class is instantiated (data = SensorData()), we'll have a global variable called data that can be referenced for telemetry and map data within the process_image() function in the following cell.
In [29]:
import pandas as pd
# Change the path below to your data directory
# If you are in a locale (e.g., Europe) that uses ',' as the decimal separator
# change the '.' to ','
# Read in csv log file as dataframe
df = pd.read_csv('../test_dataset_2/robot_log.csv', delimiter=';', decimal='.')
csv_img_list = df["Path"].tolist() # Create list of image pathnames
# Read in ground truth map and create a 3-channel image with it
ground_truth = mpimg.imread('../calibration_images/map_bw.png')
ground_truth_3d = np.dstack(
    (ground_truth*0,
     ground_truth*255,
     ground_truth*0)
).astype(float)  # use plain float: the np.float alias is removed in recent NumPy
class SensorData():
"""
Create a class to be a container of rover sensor data from sim.
Reads in saved data from csv sensor log file generated by sim which
includes saved locations of front camera snapshots and corresponding
rover position and yaw values in world coordinate frame
"""
def __init__(self):
"""
Initialize a SensorData instance unique to a single simulation run.
worldmap instance variable is instantiated with a size of 200 square
grids corresponding to a 200 square meters space which is same size as
the 200 square pixels ground_truth variable allowing full range
of output position values in x and y from the sim
"""
self.images = csv_img_list
self.xpos = df["X_Position"].values
self.ypos = df["Y_Position"].values
self.yaw = df["Yaw"].values
# running index set to -1 as hack because moviepy
# (below) seems to run one extra iteration
self.count = -1
        self.worldmap = np.zeros((200, 200, 3)).astype(float)
self.ground_truth = ground_truth_3d # Ground truth worldmap
# Instantiate a SensorData().. this will be a global variable/object
# that can be referenced in the process_image() function below
data = SensorData()
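A quick look at what was just read in can catch path or delimiter problems early (a sketch, nothing the pipeline depends on):
In [ ]:
# Peek at the telemetry that was just loaded
print(df.head())
print('number of images :', len(data.images))
print('first rover pose :', data.xpos[0], data.ypos[0], data.yaw[0])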
The process_image() function below is modified by adding in the perception-step processes (the functions defined above) to perform image analysis and mapping. The following cell is all set up to use this process_image() function in conjunction with the moviepy video processing package to create a video from the rover camera image data taken in the simulator.
In short, individual images are passed into process_image() to build up an image called output_image that is stored as one frame of the output video. A mosaic of the various steps of the analysis above, plus additional text, can also be added.
The output video ultimately demonstrates our mapping process.
In [34]:
def process_image(input_img):
"""
Establish ROIs in rover cam image and overlay with ground truth map.
Keyword argument:
input_img -- 3 channel color image
Return value:
output_img -- 3 channel color image with ROIs identified
Notes:
Requires data (a global SensorData object)
    Required by the ImageSequenceClip object from the moviepy module
"""
# Example of how to use the SensorData() object defined above
# to print the current x, y and yaw values
# print(data.xpos[data.count], data.ypos[data.count], data.yaw[data.count])
    # 1) The source and destination points for the perspective transform are
    #    the globals defined above
    # 2) Apply perspective transform
warped_img = perspect_transform(input_img, source, destination)
# 3) Apply color threshold to identify following ROIs:
# a. navigable terrain
# b. obstacles
# c. rock samples
threshed_img_navigable = color_thresh_nav(warped_img)
threshed_img_obstacle = color_thresh_obst(warped_img)
threshed_img_rock = color_thresh_rock(
warped_img,
lower_bound,
upper_bound
)
# 4) Convert thresholded image pixel values to rover-centric coords
navigable_x_rover, navigable_y_rover = to_rover_coords(threshed_img_navigable)
obstacle_x_rover, obstacle_y_rover = to_rover_coords(threshed_img_obstacle)
rock_x_rover, rock_y_rover = to_rover_coords(threshed_img_rock)
    # Trim away navigable pixels farther than 130 units in the rover frame;
    # distant pixels are noisier after the perspective transform and degrade
    # map fidelity
    nav_dists = to_polar_coords(navigable_x_rover, navigable_y_rover)[0]
    navigable_x_rover = navigable_x_rover[nav_dists < 130]
    navigable_y_rover = navigable_y_rover[nav_dists < 130]
# 5) Convert rover-centric pixel values to world coords
    my_worldmap = np.zeros((200, 200))  # only its shape (the world size) is used below
    my_scale = 10  # scale factor assumed between world and rover space pixels
navigable_x_world, navigable_y_world = pix_to_world(
navigable_x_rover,
navigable_y_rover,
data.xpos[data.count],
data.ypos[data.count],
data.yaw[data.count],
my_worldmap.shape[0],
my_scale
)
obstacle_x_world, obstacle_y_world = pix_to_world(
obstacle_x_rover,
obstacle_y_rover,
data.xpos[data.count],
data.ypos[data.count],
data.yaw[data.count],
my_worldmap.shape[0],
my_scale
)
rock_x_world, rock_y_world = pix_to_world(
rock_x_rover,
rock_y_rover,
data.xpos[data.count],
data.ypos[data.count],
data.yaw[data.count],
my_worldmap.shape[0],
my_scale
)
# 6) Update worldmap (to be displayed on right side of screen)
data.worldmap[obstacle_y_world, obstacle_x_world, 0] += 1
data.worldmap[rock_y_world, rock_x_world, 1] += 1
data.worldmap[navigable_y_world, navigable_x_world, 2] += 1
# 7) Make a mosaic image
# First create a blank image (can be whatever shape)
output_image = np.zeros(
(input_img.shape[0] + data.worldmap.shape[0],
input_img.shape[1]*2,
3)
)
# Next we populate regions of the image with various output
# Here we're putting the original image in the upper left hand corner
output_image[0:input_img.shape[0], 0:input_img.shape[1]] = input_img
    # Add the warped image (already computed in step 2) in the upper right hand corner
    output_image[0:input_img.shape[0], input_img.shape[1]:] = warped_img
# Overlay worldmap with ground truth map
map_add = cv2.addWeighted(data.worldmap, 1, data.ground_truth, 0.5, 0)
# Flip map overlay so y-axis points upward and add to output_image
output_image[
input_img.shape[0]:,
0:data.worldmap.shape[1]
] = np.flipud(map_add)
    # Then put some descriptive text over the frame
    cv2.putText(
        output_image,
        "Camera (top left), warped view (top right), worldmap vs ground truth (bottom)",
(20, 20),
cv2.FONT_HERSHEY_COMPLEX,
0.4,
(255, 255, 255),
1
)
    data.count += 1  # Keep track of the index into the SensorData() telemetry
return output_image
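The same pieces carry over almost directly to perception.py. Below is a minimal sketch of how they might be arranged inside perception_step(); the Rover attribute names (Rover.img, Rover.pos, Rover.yaw, Rover.worldmap, Rover.nav_dists, Rover.nav_angles) are assumed from the project template, so treat them as placeholders and adapt them to your own fields:
In [ ]:
# Sketch only -- not a drop-in implementation. The attribute names on Rover
# are assumptions based on the project template.
def perception_step(Rover):
    warped = perspect_transform(Rover.img, source, destination)
    nav = color_thresh_nav(warped)
    obs = color_thresh_obst(warped)
    rock = color_thresh_rock(warped, lower_bound, upper_bound)
    # Rover-centric coordinates for each region of interest
    nav_x, nav_y = to_rover_coords(nav)
    obs_x, obs_y = to_rover_coords(obs)
    rock_x, rock_y = to_rover_coords(rock)
    # World coordinates (200x200 map, 10 pixels per meter as above)
    xpos, ypos = Rover.pos
    nav_xw, nav_yw = pix_to_world(nav_x, nav_y, xpos, ypos, Rover.yaw, 200, 10)
    obs_xw, obs_yw = pix_to_world(obs_x, obs_y, xpos, ypos, Rover.yaw, 200, 10)
    rock_xw, rock_yw = pix_to_world(rock_x, rock_y, xpos, ypos, Rover.yaw, 200, 10)
    # Accumulate evidence in the worldmap channels (red=obstacle, green=rock, blue=navigable)
    Rover.worldmap[obs_yw, obs_xw, 0] += 1
    Rover.worldmap[rock_yw, rock_xw, 1] += 1
    Rover.worldmap[nav_yw, nav_xw, 2] += 1
    # Polar coords of navigable terrain feed the steering decision
    Rover.nav_dists, Rover.nav_angles = to_polar_coords(nav_x, nav_y)
    return Rover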
Use the moviepy library to process images and create a video.
In [35]:
# Import everything needed to edit/save/watch video clips
from moviepy.editor import VideoFileClip
from moviepy.editor import ImageSequenceClip
# Define pathname to save the output video
output = '../output/test_mapping.mp4'
# Re-initialize data in case this cell is run multiple times
data = SensorData()
# Note: output video will be sped up because recording rate in
# simulator is fps=25
clip = ImageSequenceClip(data.images, fps=60)
new_clip = clip.fl_image(process_image) # process_image expects color images!
%time new_clip.write_videofile(output, audio=False)
In [36]:
from IPython.display import HTML
HTML("""
<video width="960" height="540" controls>
<source src="{0}">
</video>
""".format(output))
Out[36]:
In [37]:
import io
import base64
video = io.open(output, 'r+b').read()
encoded_video = base64.b64encode(video)
HTML(data='''<video alt="test" controls>
<source src="data:video/mp4;base64,{0}" type="video/mp4" />
</video>'''.format(encoded_video.decode('ascii')))
In [ ]: