Self-Driving Car Engineer Nanodegree

Project: Finding Lane Lines on the Road


In this project, you will use the tools you learned about in the lesson to identify lane lines on the road. You can develop your pipeline on a series of individual images, and later apply the result to a video stream (really just a series of images). Check out the video clip "raw-lines-example.mp4" (also contained in this repository) to see what the output should look like after using the helper functions below.

Once you have a result that looks roughly like "raw-lines-example.mp4", you'll need to get creative and try to average and/or extrapolate the line segments you've detected to map out the full extent of the lane lines. You can see an example of the result you're going for in the video "P1_example.mp4". Ultimately, you would like to draw just one line for the left side of the lane, and one for the right.

In addition to implementing code, there is a brief writeup to complete. The writeup should be completed in a separate file, which can be either a markdown file or a PDF document. There is a writeup template that can be used to guide the writing process. Completing both the code in the IPython notebook and the writeup template will cover all of the rubric points for this project.


Let's have a look at our first image called 'test_images/solidWhiteRight.jpg'. Run the 2 cells below (hit Shift-Enter or the "play" button above) to display the image.

Note: If, at any point, you encounter frozen display windows or other confounding issues, you can always start again with a clean slate by going to the "Kernel" menu above and selecting "Restart & Clear Output".


The tools you have are color selection, region of interest selection, grayscaling, Gaussian smoothing, Canny Edge Detection, and Hough Transform line detection. You are also free to explore and try other techniques that were not presented in the lesson. Your goal is to piece together a pipeline to detect the line segments in the image, then average/extrapolate them and draw them onto the image for display (as below). Once you have a working pipeline, try it out on the video stream below.


Your output should look something like this after detecting line segments using the helper functions below.

Your goal is to connect/average/extrapolate line segments to get output like this.

Run the cell below to import some packages. If you get an import error for a package you've already installed, try changing your kernel (select the Kernel menu above --> Change Kernel). Still have problems? Try relaunching Jupyter Notebook from the terminal prompt. Also, see this forum post for more troubleshooting tips.

Import Packages


In [1]:
#importing some useful packages
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
import numpy as np
import cv2
%matplotlib inline

Read in an Image


In [2]:
#reading in an image
image = mpimg.imread('test_images/solidWhiteRight.jpg')

#printing out some stats and plotting
print('This image is:', type(image), 'with dimensions:', image.shape)
plt.imshow(image)  # if you wanted to show a single color channel image called 'gray', for example, call as plt.imshow(gray, cmap='gray')


This image is: <class 'numpy.ndarray'> with dimensions: (540, 960, 3)
Out[2]:
<matplotlib.image.AxesImage at 0x7f0b56b58470>

Ideas for Lane Detection Pipeline

Some OpenCV functions (beyond those introduced in the lesson) that might be useful for this project are:

cv2.inRange() for color selection
cv2.fillPoly() for regions selection
cv2.line() to draw lines on an image given endpoints
cv2.addWeighted() to coadd / overlay two images
cv2.cvtColor() to grayscale or change color
cv2.imwrite() to output images to file
cv2.bitwise_and() to apply a mask to an image

Check out the OpenCV documentation to learn about these and discover even more awesome functionality!
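
As a taste of the first two, here is a minimal color-selection sketch using cv2.inRange() (it reuses the imports from the cell above; the white and yellow thresholds are assumptions you would tune on real road images):

# minimal color-selection sketch; threshold values are assumptions to tune
img = mpimg.imread('test_images/solidWhiteRight.jpg')  # mpimg reads RGB

# keep near-white pixels
white_mask = cv2.inRange(img, (200, 200, 200), (255, 255, 255))

# keep yellow-ish pixels via HSV (OpenCV hue runs 0-180, yellow ~ 15-35)
hsv = cv2.cvtColor(img, cv2.COLOR_RGB2HSV)
yellow_mask = cv2.inRange(hsv, (15, 80, 120), (35, 255, 255))

# combine the masks and black out everything else
mask = cv2.bitwise_or(white_mask, yellow_mask)
color_selected = cv2.bitwise_and(img, img, mask=mask)
plt.imshow(color_selected)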

Helper Functions

Below are some helper functions to help get you started. They should look familiar from the lesson!


In [15]:
import math

def grayscale(img):
    """Applies the Grayscale transform
    This will return an image with only one color channel
    but NOTE: to see the returned image as grayscale
    (assuming your grayscaled image is called 'gray')
    you should call plt.imshow(gray, cmap='gray')"""
    return cv2.cvtColor(img, cv2.COLOR_RGB2GRAY)
    # Or use BGR2GRAY if you read an image with cv2.imread()
    # return cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    
def canny(img, low_threshold, high_threshold):
    """Applies the Canny transform"""
    return cv2.Canny(img, low_threshold, high_threshold)

def gaussian_blur(img, kernel_size):
    """Applies a Gaussian Noise kernel"""
    return cv2.GaussianBlur(img, (kernel_size, kernel_size), 0)

def region_of_interest(img, vertices):
    """
    Applies an image mask.
    
    Only keeps the region of the image defined by the polygon
    formed from `vertices`. The rest of the image is set to black.
    """
    #defining a blank mask to start with
    mask = np.zeros_like(img)   
    
    #defining a 3 channel or 1 channel color to fill the mask with depending on the input image
    if len(img.shape) > 2:
        channel_count = img.shape[2]  # i.e. 3 or 4 depending on your image
        ignore_mask_color = (255,) * channel_count
    else:
        ignore_mask_color = 255
        
    #filling pixels inside the polygon defined by "vertices" with the fill color    
    cv2.fillPoly(mask, vertices, ignore_mask_color)
    
    #returning the image only where mask pixels are nonzero
    masked_image = cv2.bitwise_and(img, mask)
    return masked_image

def draw_lines(img, lines, color=[0, 75, 255], thickness=9):
    """Separates the Hough segments into left and right lane candidates by
    slope, fits one line per side through all segment endpoints, and draws
    both fitted lines onto `img` in place.

    NOTE: this version assumes both sides yield at least one segment per
    frame; the optional challenge video below shows what happens otherwise.
    """
    left_lines = []
    right_lines = []
    threshold = 0.5  # minimum |slope| for a segment to count as a lane candidate

    # sort segments by slope; in image coordinates y grows downward, so the
    # left lane line has negative slope and the right lane line positive slope
    for line in lines:
        x1, y1, x2, y2 = line[0]
        if x2 == x1:
            continue  # skip vertical segments to avoid division by zero
        m = (y2 - y1) / (x2 - x1)
        if m > threshold:
            right_lines.append(line)
        elif m < -threshold:
            left_lines.append(line)

    # fit a line y = m*x + b per side through all segment endpoints
    if len(right_lines) > 0:
        pts = np.array(right_lines)
        x = np.concatenate([pts[:, 0, 0], pts[:, 0, 2]])
        y = np.concatenate([pts[:, 0, 1], pts[:, 0, 3]])
        m_right, b_right = np.polyfit(x, y, 1)

    if len(left_lines) > 0:
        pts = np.array(left_lines)
        x = np.concatenate([pts[:, 0, 0], pts[:, 0, 2]])
        y = np.concatenate([pts[:, 0, 1], pts[:, 0, 3]])
        m_left, b_left = np.polyfit(x, y, 1)

    # extrapolate each fitted line from the bottom of the image up to
    # roughly the top of the region of interest (y = height / 1.7)
    lines = [[[int((img.shape[0] - b_left)/m_left), int(img.shape[0]),
              int((img.shape[0]/1.7 - b_left)/m_left), int(img.shape[0]/1.7)],

             [int((img.shape[0] - b_right)/m_right), int(img.shape[0]),
              int((img.shape[0]/1.7 - b_right)/m_right), int(img.shape[0]/1.7)]]]

    for line in lines:
        for x1, y1, x2, y2 in line:
            cv2.line(img, (x1, y1), (x2, y2), color, thickness)

def hough_lines(img, rho, theta, threshold, min_line_len, max_line_gap):
    """
    `img` should be the output of a Canny transform.
        
    Returns an image with hough lines drawn.
    """
    lines = cv2.HoughLinesP(img, rho, theta, threshold, np.array([]), minLineLength=min_line_len, maxLineGap=max_line_gap)
    line_img = np.zeros((img.shape[0], img.shape[1], 3), dtype=np.uint8)
    draw_lines(line_img, lines)
    return line_img

# Python 3 has support for cool math symbols.

def weighted_img(img, initial_img, α=0.8, β=1., λ=0.):
    """
    `img` is the output of hough_lines(): a blank (all black) image
    with lines drawn on it.
    
    `initial_img` should be the image before any processing.
    
    The result image is computed as follows:
    
    initial_img * α + img * β + λ
    NOTE: initial_img and img must be the same shape!
    """
    return cv2.addWeighted(initial_img, α, img, β, λ)

Test Images

Build your pipeline to work on the images in the directory "test_images". You should make sure your pipeline works well on these images before you try the videos.


In [17]:
import os
os.listdir("test_images/")


Out[17]:
['whiteCarLaneSwitch.jpg',
 'solidYellowLeft.jpg',
 'solidWhiteCurve.jpg',
 'solidYellowCurve2.jpg',
 'solidWhiteRight.jpg',
 'solidYellowCurve.jpg']

Build a Lane Finding Pipeline

Build the pipeline and run your solution on all test_images. Make copies into the test_images_output directory, and you can use the images in your writeup report.

Try tuning the various parameters, especially the low and high Canny thresholds as well as the Hough lines parameters.
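
If you want to see how sensitive the edge map is, a throwaway sweep like the sketch below can help (the threshold pairs are arbitrary starting points, and the helpers from the cell above are reused):

# quick visual sweep over a few Canny threshold pairs (arbitrary starting points)
gray = grayscale(mpimg.imread('test_images/solidWhiteRight.jpg'))
blurred = gaussian_blur(gray, 11)
threshold_pairs = [(40, 50), (50, 150), (100, 200)]
fig, axes = plt.subplots(1, len(threshold_pairs), figsize=(15, 4))
for ax, (low, high) in zip(axes, threshold_pairs):
    ax.imshow(canny(blurred, low, high), cmap='gray')
    ax.set_title('Canny {}-{}'.format(low, high))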


In [18]:
# TODO: Build your pipeline that will draw lane lines on the test_images
# then save them to the test_images_output directory.
def image_pipe(img):
    gr_img = grayscale(img)
    ga_img = gaussian_blur(gr_img, 11)
    edges = canny(ga_img, 40, 50)
    imshape = img.shape
    # trapezoidal region of interest, tuned for the 960x540 test images
    vertices = np.array([[(0,imshape[0]),(460, 310), (500, 310), (imshape[1],imshape[0])]], dtype=np.int32)
    masked_img = region_of_interest(edges, vertices)
#     parameters of Hough grid
    rho = 2            #distance resolution
    theta = np.pi/180  #angular resolution
    threshold = 30     #min votes
    min_line_len = 40  #min line len
    max_line_gap = 20  #maximum gap between lines
    line_image = hough_lines(masked_img, rho, theta, threshold, min_line_len, max_line_gap)
    
    result = weighted_img(line_image, img)
    return result
    
img_path = "test_images/"
tst_imgs = os.listdir(img_path)
for tst_img in tst_imgs:
    img = mpimg.imread(img_path + tst_img)
    img = image_pipe(img)
    mpimg.imsave(img_path + 'processed_' + tst_img[:-3] + '.png', img)
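
To eyeball the results inline rather than opening the saved files, a quick side-by-side plot works (a throwaway sketch, reusing tst_imgs and image_pipe from the cell above):

# display each test image next to its pipeline output for a quick visual check
fig, axes = plt.subplots(len(tst_imgs), 2, figsize=(12, 4 * len(tst_imgs)))
for ax_row, tst_img in zip(axes, tst_imgs):
    original = mpimg.imread(img_path + tst_img)
    ax_row[0].imshow(original)
    ax_row[0].set_title(tst_img)
    ax_row[1].imshow(image_pipe(original))
    ax_row[1].set_title('processed')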

Test on Videos

You know what's cooler than drawing lanes over images? Drawing lanes over video!

We can test our solution on two provided videos:

solidWhiteRight.mp4

solidYellowLeft.mp4

Note: if you get an import error when you run the next cell, try changing your kernel (select the Kernel menu above --> Change Kernel). Still have problems? Try relaunching Jupyter Notebook from the terminal prompt. Also, check out this forum post for more troubleshooting tips.

If you get an error that looks like this:

NeedDownloadError: Need ffmpeg exe. 
You can download it by calling: 
imageio.plugins.ffmpeg.download()

Follow the instructions in the error message and check out this forum post for more troubleshooting tips across operating systems.


In [19]:
# Import everything needed to edit/save/watch video clips
from moviepy.editor import VideoFileClip
from IPython.display import HTML

In [23]:
def process_image(img):
    gr_img = grayscale(img)
    ga_img = gaussian_blur(gr_img, 11)
    edges = canny(ga_img, 40, 50)
    # use the dimensions of the incoming frame, not the global test image
    height = img.shape[0]
    width = img.shape[1]
    vertices = np.array([[[3*width/4, 3*height/5], [width/4, 3*height/5], [40, height], [width-40, height]]], dtype=np.int32)
    masked_img = region_of_interest(edges, vertices)
#     parameters of Hough grid
    rho = 2            #distance resolution
    theta = np.pi/180  #angular resolution
    threshold = 30     #min votes
    min_line_len = 40 #min line len
    max_line_gap = 20  #maximum gap between lines
    line_image = hough_lines(masked_img, rho, theta, threshold, min_line_len, max_line_gap)
    
    result = weighted_img(line_image, img)
    return result

Let's try the one with the solid white lane on the right first ...


In [24]:
white_output = 'test_videos_output/solidWhiteRight.mp4'
## To speed up the testing process you may want to try your pipeline on a shorter subclip of the video
## To do so add .subclip(start_second,end_second) to the end of the line below
## Where start_second and end_second are integer values representing the start and end of the subclip
## You may also uncomment the following line for a subclip of the first 5 seconds
##clip1 = VideoFileClip("test_videos/solidWhiteRight.mp4").subclip(0,5)
clip1 = VideoFileClip("test_videos/solidWhiteRight.mp4")
white_clip = clip1.fl_image(process_image) #NOTE: this function expects color images!!
%time white_clip.write_videofile(white_output, audio=False)


[MoviePy] >>>> Building video test_videos_output/solidWhiteRight.mp4
[MoviePy] Writing video test_videos_output/solidWhiteRight.mp4
100%|█████████▉| 221/222 [00:03<00:00, 70.48it/s]
[MoviePy] Done.
[MoviePy] >>>> Video ready: test_videos_output/solidWhiteRight.mp4 

CPU times: user 6.41 s, sys: 188 ms, total: 6.6 s
Wall time: 3.44 s

Play the video inline, or, if you prefer, find the video in your filesystem (it should be in the same directory) and play it in your video player of choice.


In [ ]:
HTML("""
<video width="960" height="540" controls>
  <source src="{0}">
</video>
""".format(white_output))

Improve the draw_lines() function

At this point, if you were successful with making the pipeline and tuning parameters, you probably have the Hough line segments drawn onto the road, but what about identifying the full extent of the lane and marking it clearly as in the example video (P1_example.mp4)? Think about defining a line to run the full length of the visible lane based on the line segments you identified with the Hough Transform. As mentioned previously, try to average and/or extrapolate the line segments you've detected to map out the full extent of the lane lines. You can see an example of the result you're going for in the video "P1_example.mp4".

Go back and modify your draw_lines function accordingly and try re-running your pipeline. The new output should draw a single, solid line over the left lane line and a single, solid line over the right lane line. The lines should start from the bottom of the image and extend out to the top of the region of interest.
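
The draw_lines() above already does this with np.polyfit. One refinement worth trying (a sketch under the assumption that longer segments are more trustworthy, not part of the original helpers) is to weight each endpoint by its segment's length:

# sketch: length-weighted fit of one lane line; the weighting scheme is an assumption
def fit_lane(segments):
    """Fit y = m*x + b over segment endpoints, weighting each endpoint by its
    segment's length so that long, confident segments dominate the fit."""
    xs, ys, weights = [], [], []
    for seg in segments:
        x1, y1, x2, y2 = seg[0]
        length = np.hypot(x2 - x1, y2 - y1)
        xs += [x1, x2]
        ys += [y1, y2]
        weights += [length, length]
    return np.polyfit(xs, ys, 1, w=weights)  # returns (m, b)

Swapping this in for the two np.polyfit calls in draw_lines() would leave the drawing code unchanged.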

Now for the one with the solid yellow lane on the left. This one's more tricky!


In [22]:
yellow_output = 'test_videos_output/solidYellowLeft.mp4'
## To speed up the testing process you may want to try your pipeline on a shorter subclip of the video
## To do so add .subclip(start_second,end_second) to the end of the line below
## Where start_second and end_second are integer values representing the start and end of the subclip
## You may also uncomment the following line for a subclip of the first 5 seconds
##clip2 = VideoFileClip('test_videos/solidYellowLeft.mp4').subclip(0,5)
clip2 = VideoFileClip('test_videos/solidYellowLeft.mp4')
yellow_clip = clip2.fl_image(process_image)
%time yellow_clip.write_videofile(yellow_output, audio=False)


[MoviePy] >>>> Building video test_videos_output/solidYellowLeft.mp4
[MoviePy] Writing video test_videos_output/solidYellowLeft.mp4
100%|█████████▉| 681/682 [00:10<00:00, 64.30it/s]
[MoviePy] Done.
[MoviePy] >>>> Video ready: test_videos_output/solidYellowLeft.mp4 

CPU times: user 20.3 s, sys: 568 ms, total: 20.9 s
Wall time: 10.9 s

In [ ]:
HTML("""
<video width="960" height="540" controls>
  <source src="{0}">
</video>
""".format(yellow_output))

Writeup and Submission

If you're satisfied with your video outputs, it's time to write the report in a PDF or markdown file. Once you have this IPython notebook ready along with the writeup, it's time to submit for review! Here is a link to the writeup template file.

Optional Challenge

Try your lane finding pipeline on the video below. Does it still work? Can you figure out a way to make it more robust? If you're up for the challenge, modify your pipeline so it works with this video and submit it along with the rest of your project!
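
A common failure on this clip is the yellow line crossing light concrete, where grayscale contrast nearly vanishes. One hedged starting point is to mask white and yellow in HLS space before the grayscale/Canny steps (the thresholds below are guesses to tune):

# sketch: pre-mask white/yellow in HLS before edge detection; thresholds are guesses
def color_mask(img):
    hls = cv2.cvtColor(img, cv2.COLOR_RGB2HLS)
    white = cv2.inRange(hls, (0, 200, 0), (180, 255, 255))    # high lightness
    yellow = cv2.inRange(hls, (15, 50, 100), (35, 255, 255))  # yellow hue band
    return cv2.bitwise_and(img, img, mask=cv2.bitwise_or(white, yellow))

Feeding color_mask(img) into the grayscale step of process_image() is one way to wire it in.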


In [25]:
challenge_output = 'test_videos_output/challenge.mp4'
## To speed up the testing process you may want to try your pipeline on a shorter subclip of the video
## To do so add .subclip(start_second,end_second) to the end of the line below
## Where start_second and end_second are integer values representing the start and end of the subclip
## You may also uncomment the following line for a subclip of the first 5 seconds
##clip3 = VideoFileClip('test_videos/challenge.mp4').subclip(0,5)
clip3 = VideoFileClip('test_videos/challenge.mp4')
challenge_clip = clip3.fl_image(process_image)
%time challenge_clip.write_videofile(challenge_output, audio=False)


[MoviePy] >>>> Building video test_videos_output/challenge.mp4
[MoviePy] Writing video test_videos_output/challenge.mp4
  2%|▏         | 6/251 [00:00<00:04, 51.67it/s]
---------------------------------------------------------------------------
UnboundLocalError                         Traceback (most recent call last)
<ipython-input-25-03bf28c731e9> in <module>()
      7 clip3 = VideoFileClip('test_videos/challenge.mp4')
      8 challenge_clip = clip3.fl_image(process_image)
----> 9 get_ipython().magic('time challenge_clip.write_videofile(challenge_output, audio=False)')

[... IPython and moviepy frames elided ...]

<ipython-input-23-0dc592f2320d> in process_image(img)
     13     min_line_len = 40 #min line len
     14     max_line_gap = 20  #maximum gap between lines
---> 15     line_image = hough_lines(masked_img, rho, theta, threshold, min_line_len, max_line_gap)

<ipython-input-15-fd77e3cf6cd3> in hough_lines(img, rho, theta, threshold, min_line_len, max_line_gap)
     93     lines = cv2.HoughLinesP(img, rho, theta, threshold, np.array([]), minLineLength=min_line_len, maxLineGap=max_line_gap)
     94     line_img = np.zeros((img.shape[0], img.shape[1], 3), dtype=np.uint8)
---> 95     draw_lines(line_img, lines)

<ipython-input-15-fd77e3cf6cd3> in draw_lines(img, lines, color, thickness)
---> 77     lines = [[[int((img.shape[0] - b_left)/m_left), int(img.shape[0]),

UnboundLocalError: local variable 'b_left' referenced before assignment
  2%|▏         | 6/251 [00:20<13:37,  3.34s/it]
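
The traceback points at frames where the slope filter leaves one side with no segments, so b_left (or b_right) is never assigned. A minimal guard, one of several possible fixes, is to fit and draw each side independently and skip a side when it has nothing to fit:

# sketch: guard against frames where one side has no candidate segments
def draw_one_side(img, segments, color=[0, 75, 255], thickness=9):
    if len(segments) == 0:
        return  # nothing detected on this side; skip quietly for this frame
    pts = np.array(segments)
    x = np.concatenate([pts[:, 0, 0], pts[:, 0, 2]])
    y = np.concatenate([pts[:, 0, 1], pts[:, 0, 3]])
    m, b = np.polyfit(x, y, 1)
    y_bottom, y_top = img.shape[0], int(img.shape[0] / 1.7)
    cv2.line(img, (int((y_bottom - b) / m), y_bottom),
             (int((y_top - b) / m), y_top), color, thickness)

# then draw_lines() would end with:
#     draw_one_side(img, left_lines)
#     draw_one_side(img, right_lines)

A frame with a missing side then simply shows one lane line instead of crashing the whole render; carrying the last good fit over from the previous frame would smooth this further.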

In [ ]:
HTML("""
<video width="960" height="540" controls>
  <source src="{0}">
</video>
""".format(challenge_output))

In [ ]: