In this project, you will use the tools you learned about in the lesson to identify lane lines on the road. You can develop your pipeline on a series of individual images, and later apply the result to a video stream (really just a series of images). Check out the video clip "raw-lines-example.mp4" (also contained in this repository) to see what the output should look like after using the helper functions below.
Once you have a result that looks roughly like "raw-lines-example.mp4", you'll need to get creative and try to average and/or extrapolate the line segments you've detected to map out the full extent of the lane lines. You can see an example of the result you're going for in the video "P1_example.mp4". Ultimately, you would like to draw just one line for the left side of the lane, and one for the right.
Let's have a look at our first image called 'test_images/solidWhiteRight.jpg'. Run the 2 cells below (hit Shift-Enter or the "play" button above) to display the image.
Note: If, at any point, you encounter frozen display windows or other confounding issues, you can always start again with a clean slate by going to the "Kernel" menu above and selecting "Restart & Clear Output".
The tools you have are color selection, region of interest selection, grayscaling, Gaussian smoothing, Canny Edge Detection and Hough Transform line detection. You are also free to explore and try other techniques that were not presented in the lesson. Your goal is to piece together a pipeline to detect the line segments in the image, then average/extrapolate them and draw them onto the image for display (as below). Once you have a working pipeline, try it out on the video stream below.
In [1]:
# importing some useful packages
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
import numpy as np
import cv2
%matplotlib inline
# Compute a running (exponential moving) average for line coordinates
def running_average(avg, sample, n=12):
    if avg == 0:
        return sample
    avg -= avg / n
    avg += sample / n
    return int(avg)

# Global variables - these need to be reset before processing each video
prev_left_line = []
prev_right_line = []

# Setters for the globals: blend the new line into the running average
def set_global_prev_left_line(left_line):
    global prev_left_line
    if prev_left_line and left_line:
        x1, y1, x2, y2 = prev_left_line
        prev_left_line = (running_average(x1, left_line[0]),
                          running_average(y1, left_line[1]),
                          running_average(x2, left_line[2]),
                          running_average(y2, left_line[3]))
    else:
        prev_left_line = left_line

def set_global_prev_right_line(right_line):
    global prev_right_line
    if prev_right_line and right_line:
        x1, y1, x2, y2 = prev_right_line
        prev_right_line = (running_average(x1, right_line[0]),
                           running_average(y1, right_line[1]),
                           running_average(x2, right_line[2]),
                           running_average(y2, right_line[3]))
    else:
        prev_right_line = right_line
In [2]:
# reading in an image
image = mpimg.imread('test_images/solidWhiteRight.jpg')
# printing out some stats and plotting
print('This image is:', type(image), 'with dimensions:', image.shape)
plt.imshow(image)
# if you wanted to show a single color channel image called 'gray', for example, call as plt.imshow(gray, cmap='gray')
Out[2]:
Some OpenCV functions (beyond those introduced in the lesson) that might be useful for this project are:

cv2.inRange() for color selection
cv2.fillPoly() for region selection
cv2.line() to draw lines on an image given endpoints
cv2.addWeighted() to coadd / overlay two images
cv2.cvtColor() to grayscale or change color space
cv2.imwrite() to output images to file
cv2.bitwise_and() to apply a mask to an image

Check out the OpenCV documentation to learn about these and discover even more awesome functionality! A small color-selection sketch follows this list.
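For example, a minimal color-selection sketch using cv2.inRange() might look like the following. The threshold values are illustrative assumptions (roughly "bright white" and "strong yellow" in RGB), not values tuned for these test images.

import cv2
import numpy as np

def select_white_yellow(rgb_img):
    """Keep only pixels that look roughly white or yellow (RGB image assumed)."""
    white_mask = cv2.inRange(rgb_img, np.array([200, 200, 200]), np.array([255, 255, 255]))
    yellow_mask = cv2.inRange(rgb_img, np.array([150, 150, 0]), np.array([255, 255, 120]))
    mask = cv2.bitwise_or(white_mask, yellow_mask)
    return cv2.bitwise_and(rgb_img, rgb_img, mask=mask)

Such a mask could be applied before grayscaling to suppress pavement and shadows, though it is optional for the basic pipeline below.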
Below are some helper functions to help get you started. They should look familiar from the lesson!
In [3]:
import math

def grayscale(img):
    """Applies the Grayscale transform
    This will return an image with only one color channel
    but NOTE: to see the returned image as grayscale
    (assuming your grayscaled image is called 'gray')
    you should call plt.imshow(gray, cmap='gray')"""
    return cv2.cvtColor(img, cv2.COLOR_RGB2GRAY)
    # Or use BGR2GRAY if you read an image with cv2.imread()
    # return cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

def canny(img, low_threshold, high_threshold):
    """Applies the Canny transform"""
    return cv2.Canny(img, low_threshold, high_threshold)

def gaussian_blur(img, kernel_size):
    """Applies a Gaussian Noise kernel"""
    return cv2.GaussianBlur(img, (kernel_size, kernel_size), 0)

def region_of_interest(img, vertices):
    """
    Applies an image mask.
    Only keeps the region of the image defined by the polygon
    formed from `vertices`. The rest of the image is set to black.
    """
    # defining a blank mask to start with
    mask = np.zeros_like(img)
    # defining a 3 channel or 1 channel color to fill the mask with depending on the input image
    if len(img.shape) > 2:
        channel_count = img.shape[2]  # i.e. 3 or 4 depending on your image
        ignore_mask_color = (255,) * channel_count
    else:
        ignore_mask_color = 255
    # filling pixels inside the polygon defined by "vertices" with the fill color
    cv2.fillPoly(mask, vertices, ignore_mask_color)
    # returning the image only where mask pixels are nonzero
    masked_image = cv2.bitwise_and(img, mask)
    return masked_image

def draw_lines(img, lines, color=[255, 0, 0], thickness=2):
    """
    NOTE: this is the function you might want to use as a starting point once you want to
    average/extrapolate the line segments you detect to map out the full
    extent of the lane (going from the result shown in raw-lines-example.mp4
    to that shown in P1_example.mp4).

    Think about things like separating line segments by their
    slope ((y2-y1)/(x2-x1)) to decide which segments are part of the left
    line vs. the right line. Then, you can average the position of each of
    the lines and extrapolate to the top and bottom of the lane.

    This function draws `lines` with `color` and `thickness`.
    Lines are drawn on the image inplace (mutates the image).
    If you want to make the lines semi-transparent, think about combining
    this function with the weighted_img() function below.
    """
    for line in lines:
        for x1, y1, x2, y2 in line:
            cv2.line(img, (x1, y1), (x2, y2), color, thickness)

def average_lines(lines):
    """Average the top and bottom endpoints of a set of line segments."""
    line_top = []
    line_bottom = []
    for line in lines:
        for x1, y1, x2, y2 in line:
            # the endpoint with the smaller y is higher up in the image
            if y1 < y2:
                line_top.append([x1, y1])
                line_bottom.append([x2, y2])
            else:
                line_top.append([x2, y2])
                line_bottom.append([x1, y1])
    if len(line_top) > 0:
        average_top = [int(np.average([point[0] for point in line_top])),
                       int(np.average([point[1] for point in line_top]))]
    else:
        average_top = []
    if len(line_bottom) > 0:
        average_bottom = [int(np.average([point[0] for point in line_bottom])),
                          int(np.average([point[1] for point in line_bottom]))]
    else:
        average_bottom = []
    return average_top + average_bottom

def segment_hough_lines(lines):
    """Split hough line segments into left and right lane candidates based on slope."""
    left_lines = []
    right_lines = []
    for line in lines:
        for x1, y1, x2, y2 in line:
            if x2 == x1:
                # skip vertical segments to avoid division by zero
                continue
            slope = (y2 - y1) / (x2 - x1)
            if abs(slope) < 0.5:
                # skip near-horizontal segments (absolute slope below 0.5)
                continue
            if slope >= 0:
                right_lines.append(line)
            else:
                left_lines.append(line)
    return right_lines, left_lines

def find_largest_lines(lines):
    """Find the longest left and right lane segments, using slope to tell the sides apart."""
    largest_right_line = []
    largest_left_line = []
    largest_right_line_length = 0.0
    largest_left_line_length = 0.0
    for line in lines:
        for x1, y1, x2, y2 in line:
            if x2 == x1:
                # skip vertical segments to avoid division by zero
                continue
            slope = (y2 - y1) / (x2 - x1)
            if abs(slope) < 0.5:
                # skip near-horizontal segments (absolute slope below 0.5)
                continue
            line_length = math.hypot(x2 - x1, y2 - y1)
            if slope >= 0:
                if line_length > largest_right_line_length:
                    largest_right_line = [x1, y1, x2, y2]
                    largest_right_line_length = line_length
            else:
                if line_length > largest_left_line_length:
                    largest_left_line = [x1, y1, x2, y2]
                    largest_left_line_length = line_length
    return largest_right_line, largest_left_line

def extrapolate_line(line, top_max, bottom_max):
    """Extend a line segment so it spans from y = top_max down to y = bottom_max."""
    x1, y1, x2, y2 = line
    # solve y = m*x + c for the segment's slope m and intercept c
    a = np.array([[x1, 1], [x2, 1]])
    b = np.array([y1, y2])
    m, c = np.linalg.solve(a, b)
    # extrapolate the top endpoint (smaller y)
    y2 = top_max
    x2 = int((y2 - c) / m)
    # extrapolate the bottom endpoint (larger y)
    y1 = bottom_max
    x1 = int((y1 - c) / m)
    return [x1, y1, x2, y2]

def hough_lines(img, rho, theta, threshold, min_line_len, max_line_gap):
    """
    `img` should be the output of a Canny transform.
    Returns an image with hough lines drawn.
    """
    # Image dimensions
    img_height = img.shape[0]
    img_width = img.shape[1]
    lines = cv2.HoughLinesP(img, rho, theta, threshold, np.array([]),
                            minLineLength=min_line_len, maxLineGap=max_line_gap)
    line_img = np.zeros((img_height, img_width, 3), dtype=np.uint8)

    # Approach 1 - based on the largest hough line (kept for reference)
    #
    # # Get largest left/right lines from hough lines
    # largest_right_line, largest_left_line = find_largest_lines(lines)
    # # Extrapolate the line to the top and bottom of the region of interest
    # if largest_right_line:
    #     final_right_line = extrapolate_line(largest_right_line, top_max=int(img_height * 0.6), bottom_max=img_width)
    #     set_global_prev_right_line(final_right_line)
    # else:
    #     final_right_line = prev_right_line
    # if largest_left_line:
    #     final_left_line = extrapolate_line(largest_left_line, top_max=int(img_height * 0.6), bottom_max=img_width)
    #     set_global_prev_left_line(final_left_line)
    # else:
    #     final_left_line = prev_left_line

    # Approach 2 - average the left and right lines
    # Segment hough lines
    right_lines, left_lines = segment_hough_lines(lines)
    # Find the average line on each side
    average_right_line = average_lines(right_lines)
    average_left_line = average_lines(left_lines)
    # Extrapolate the line to the top and bottom of the region of interest
    if average_right_line:
        final_right_line = extrapolate_line(average_right_line, top_max=int(img_height * 0.6), bottom_max=img_width)
        set_global_prev_right_line(final_right_line)
    else:
        final_right_line = prev_right_line
    if average_left_line:
        final_left_line = extrapolate_line(average_left_line, top_max=int(img_height * 0.6), bottom_max=img_width)
        set_global_prev_left_line(final_left_line)
    else:
        final_left_line = prev_left_line

    # Draw the smoothed (running-average) lines rather than the raw per-frame lines
    # new_lines = [[final_right_line], [final_left_line]]
    new_lines = [[prev_right_line], [prev_left_line]]
    draw_lines(line_img, new_lines, thickness=10)
    return line_img

# Python 3 has support for cool math symbols.
def weighted_img(img, initial_img, α=0.8, β=1., λ=0.):
    """
    `img` is the output of hough_lines(): an image with lines drawn on it.
    Should be a blank image (all black) with lines drawn on it.
    `initial_img` should be the image before any processing.
    The result image is computed as follows:
    initial_img * α + img * β + λ
    NOTE: initial_img and img must be the same shape!
    """
    return cv2.addWeighted(initial_img, α, img, β, λ)

def process_image(image):
    # NOTE: The output you return should be a color image (3 channel) for processing video below.
    # You should return the final output (image with lines drawn on the lanes).
    # Image dimensions
    img_height = image.shape[0]
    img_width = image.shape[1]
    # Step 1: Convert to Grayscale
    processed_image = grayscale(image)
    # Step 2: Gaussian Blur Transform
    kernel_size = 7
    processed_image = gaussian_blur(processed_image, kernel_size)
    # Step 3: Canny Transform
    low_threshold = 100
    high_threshold = 200
    processed_image = canny(processed_image, low_threshold, high_threshold)
    # Step 4: Region of Interest
    bottom_left = [100, img_height]
    bottom_right = [img_width - 80, img_height]
    top_left = [int(0.4 * img_width), int(0.6 * img_height)]
    top_right = [int(0.6 * img_width), int(0.6 * img_height)]
    processed_image = region_of_interest(processed_image,
                                         [np.array([bottom_left, top_left, top_right, bottom_right], dtype=np.int32)])
    # Step 5: Hough lines
    rho = 1.0
    theta = np.pi / 180
    threshold = 25
    min_line_len = 50
    max_line_gap = 200
    processed_image = hough_lines(img=processed_image, rho=rho, theta=theta, threshold=threshold,
                                  min_line_len=min_line_len, max_line_gap=max_line_gap)
    # Step 6: Weighted Image
    processed_image = weighted_img(img=processed_image, initial_img=image)
    return processed_image
In [4]:
img = process_image(mpimg.imread("test_images/solidYellowCurve2.jpg"))
#plt.plot(x, y, 'b--', lw=4)
plt.imshow(img, cmap='gray')
Out[4]:
Run your solution on all test_images and save copies of the results (the cell below writes them to the test_output directory).
In [5]:
import os
test_images_dir = "test_images/"
test_out_dir = "test_output/"
if not os.path.exists(test_out_dir):
    os.makedirs(test_out_dir)
for file in os.listdir(test_images_dir):
    # Ignore hidden files
    if file.startswith('.'):
        continue
    image = mpimg.imread(test_images_dir + file)
    processed_image = process_image(image)
    out_file = test_out_dir + file
    # cv2.imwrite(out_file, processed_image)
    mpimg.imsave(out_file, processed_image)
    print('Processed [{0}] -> [{1}]'.format(file, out_file))
print("Done with processing images in {0} folder.".format(test_images_dir))
In [6]:
# Import everything needed to edit/save/watch video clips
from moviepy.editor import VideoFileClip
from IPython.display import HTML
Let's try the one with the solid white lane on the right first ...
In [7]:
# Reset global variables
set_global_prev_left_line([])
set_global_prev_right_line([])
white_output = 'white.mp4'
clip1 = VideoFileClip("solidWhiteRight.mp4")
white_clip = clip1.fl_image(process_image) #NOTE: this function expects color images!!
%time white_clip.write_videofile(white_output, audio=False)
Play the video inline, or if you prefer find the video in your filesystem (should be in the same directory) and play it in your video player of choice.
In [8]:
HTML("""
<video width="960" height="540" controls>
<source src="{0}">
</video>
""".format(white_output))
Out[8]:
At this point, if you were successful you probably have the Hough line segments drawn onto the road, but what about identifying the full extent of the lane and marking it clearly as in the example video (P1_example.mp4)? Think about defining a line to run the full length of the visible lane based on the line segments you identified with the Hough Transform. Modify your draw_lines function accordingly and try re-running your pipeline.
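One way to do that, sketched below rather than taken verbatim from the pipeline above, is to average the slope and intercept of the Hough segments on each side and then draw a single extrapolated line per side. The 0.5 slope cutoff and the 0.6 * height top of the lane are assumptions carried over from this notebook's parameters.

import numpy as np
import cv2

def draw_lane_lines(img, lines, color=[255, 0, 0], thickness=10):
    """Average Hough segments per side and draw one extrapolated line for each lane line."""
    left, right = [], []                      # (slope, intercept) pairs
    for line in lines:
        for x1, y1, x2, y2 in line:
            if x2 == x1:
                continue                      # skip vertical segments
            m = (y2 - y1) / (x2 - x1)
            c = y1 - m * x1
            if m <= -0.5:
                left.append((m, c))           # left lane line has negative slope in image coordinates
            elif m >= 0.5:
                right.append((m, c))
    y_bottom = img.shape[0]
    y_top = int(img.shape[0] * 0.6)           # top of the region of interest
    for side in (left, right):
        if not side:
            continue
        m, c = np.mean(side, axis=0)
        cv2.line(img, (int((y_bottom - c) / m), y_bottom), (int((y_top - c) / m), y_top), color, thickness)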
Now for the one with the solid yellow lane on the left. This one's more tricky!
In [9]:
# Reset global variables
set_global_prev_left_line([])
set_global_prev_right_line([])
yellow_output = 'yellow.mp4'
clip2 = VideoFileClip('solidYellowLeft.mp4')
yellow_clip = clip2.fl_image(process_image)
%time yellow_clip.write_videofile(yellow_output, audio=False)
In [10]:
HTML("""
<video width="960" height="540" controls>
<source src="{0}">
</video>
""".format(yellow_output))
Out[10]:
Congratulations on finding the lane lines! As the final step in this project, we would like you to share your thoughts on your lane finding pipeline... specifically, how could you imagine making your algorithm better / more robust? Where will your current algorithm be likely to fail?
Please add your thoughts below, and if you're up for making your pipeline more robust, be sure to scroll down and check out the optional challenge video below!
The finding-lane-lines project is a starter project with a relatively simple use case. In a real self-driving car, lane finding has to handle complex scenarios: real-world road conditions, lighting, and other factors.
The project does not consider normal driving scenarios such as junctions, signals, and lane changes.
The pipeline is built on the Canny and Hough line algorithms, so any road conditions that violate the assumptions of those algorithms will cause problems.
Making the algorithm work on a video (a moving car) is quite different from making it work on a single image. For video, the algorithm needs to consider prior frames and average the lane lines for smoother lane prediction/projection.
I tried two different approaches: the first is based on the largest line segment, extrapolated and averaged with prior frames; the second averages the left and right segments, then extrapolates and averages with prior frames. I have not found a significant difference in the results on the given videos.
Averaging with prior frames needed to be balanced in my tests: the larger the number of prior frames in the average, the further the averaged lane line drifts from the actual lane line in the current frame.
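As an illustration of that trade-off, here is a tiny sketch of the frame-to-frame smoothing used above (an exponential moving average in the style of running_average(), where n plays the role of the number of prior frames). The coordinate values are made up purely to show the effect of n, not taken from the videos.

def smooth(prev, sample, n):
    # exponential moving average: larger n -> smoother, but slower to follow the lane
    return prev - prev / n + sample / n

x = 400.0
for frame_x in [420, 440, 460, 480]:          # lane endpoint drifting to the right
    x = smooth(x, frame_x, n=12)
print(round(x))                               # still lags well behind 480; with n=3 it tracks much more closely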
To keep the lane line projection smooth, the averaged lane line is drawn instead of that frame's own lane line.
Many of the values and parameters could be tuned further for better results; on the other hand, tuning for specific scenarios may not carry over to more challenging ones. It would be interesting to find out how to design algorithms and parameter values that work well regardless of road conditions!
For example, the optional challenge video contains several different road conditions, with some frames lacking clear lane lines, plus shadows and turns. The algorithm had to be revised to take those conditions into account.
It is good to start with a simple lane-finding project and move toward more challenging scenarios. Looking forward to the upcoming projects!
In [11]:
# Reset global variables
set_global_prev_left_line([])
set_global_prev_right_line([])
challenge_output = 'extra.mp4'
clip2 = VideoFileClip('challenge.mp4')
challenge_clip = clip2.fl_image(process_image)
%time challenge_clip.write_videofile(challenge_output, audio=False)
In [12]:
HTML("""
<video width="960" height="540" controls>
<source src="{0}">
</video>
""".format(challenge_output))
Out[12]: