Finding Lane Lines on the Road


In this project, you will use the tools you learned about in the lesson to identify lane lines on the road. You can develop your pipeline on a series of individual images, and later apply the result to a video stream (really just a series of images). Check out the video clip "raw-lines-example.mp4" (also contained in this repository) to see what the output should look like after using the helper functions below.

Once you have a result that looks roughly like "raw-lines-example.mp4", you'll need to get creative and try to average and/or extrapolate the line segments you've detected to map out the full extent of the lane lines. You can see an example of the result you're going for in the video "P1_example.mp4". Ultimately, you would like to draw just one line for the left side of the lane, and one for the right.


Let's have a look at our first image called 'test_images/solidWhiteRight.jpg'. Run the 2 cells below (hit Shift-Enter or the "play" button above) to display the image.

Note: If, at any point, you encounter frozen display windows or other confounding issues, you can always start again with a clean slate by going to the "Kernel" menu above and selecting "Restart & Clear Output".


The tools you have are color selection, region of interest selection, grayscaling, Gaussian smoothing, Canny Edge Detection, and Hough Transform line detection. You are also free to explore and try other techniques that were not presented in the lesson. Your goal is to piece together a pipeline to detect the line segments in the image, then average/extrapolate them and draw them onto the image for display (as below). Once you have a working pipeline, try it out on the video stream below.


Your output should look something like the image above after detecting line segments using the helper functions below.

Your goal is to connect/average/extrapolate the line segments to get output like the example above.


In [1]:
#importing some useful packages
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
import numpy as np
import glob
import scipy.misc
import cv2
from math import isinf, isnan
%matplotlib inline

from os import chdir; chdir('../')
from lib.image_processing import *


/root/miniconda3/envs/carnd_term_1/lib/python3.5/site-packages/matplotlib/font_manager.py:273: UserWarning: Matplotlib is building the font cache using fc-list. This may take a moment.
  warnings.warn('Matplotlib is building the font cache using fc-list. This may take a moment.')

Basic Image Display


In [2]:
image = mpimg.imread('assets/img/_raw_solidWhiteRight.jpg')
print('This image is:', type(image), 'with dimensions:', image.shape)
plt.imshow(image)


This image is: <class 'numpy.ndarray'> with dimensions: (540, 960, 3)
Out[2]:
<matplotlib.image.AxesImage at 0x7f4a4eb70470>

Some OpenCV functions (beyond those introduced in the lesson) that might be useful for this project are:

cv2.inRange() for color selection
cv2.fillPoly() for regions selection
cv2.line() to draw lines on an image given endpoints
cv2.addWeighted() to coadd / overlay two images
cv2.cvtColor() to grayscale or change color
cv2.imwrite() to output images to file
cv2.bitwise_and() to apply a mask to an image

Check out the OpenCV documentation to learn about these and discover even more awesome functionality!

Below are some helper functions to help get you started. They should look familiar from the lesson!

Lane Identification Algorithm

Load Raw Image For Processing

We also print the shape of the image.


In [4]:
image_file = 'processed_solidWhiteCurve.jpg'
raw_image = mpimg.imread('assets/img/_raw_solidWhiteRight.jpg')
imshape = raw_image.shape
plt.imshow(raw_image)
imshape


Out[4]:
(540, 960, 3)

Convert Image to Grayscale


In [5]:
this_image = grayscale(raw_image)
plt.imshow(this_image, cmap='gray')


Out[5]:
<matplotlib.image.AxesImage at 0x7f4a7dfffef0>

Apply A Gaussian Blur to Smooth Edges


In [6]:
this_image = gaussian_blur(this_image, kernel_size=7)
plt.imshow(this_image, cmap='gray')


Out[6]:
<matplotlib.image.AxesImage at 0x7f4a53344940>

Perform Canny Edge Detection


In [7]:
this_image = canny(this_image, low_threshold=50, high_threshold=100)
plt.imshow(this_image, cmap='gray')


Out[7]:
<matplotlib.image.AxesImage at 0x7f4a4eb0e7f0>

Apply an appropriate mask to the image


In [8]:
mask_vertices = np.array([[raw_image.shape[1]*0, raw_image.shape[0]*1],
                          [raw_image.shape[1]*0.45, raw_image.shape[0]*0.62],
                          [raw_image.shape[1]*0.55, raw_image.shape[0]*0.62],
                          [raw_image.shape[1]*1, raw_image.shape[0]*1]],
                         dtype=np.int32)

this_image = region_of_interest(this_image, vertices=[mask_vertices])
plt.imshow(this_image, cmap='gray')


Out[8]:
<matplotlib.image.AxesImage at 0x7f4a4eaf4320>

Perform a Hough Transform


In [9]:
lines = cv2.HoughLinesP(this_image, 2, np.pi / 180, 10,
                        np.array([]),
                        minLineLength=20,
                        maxLineGap=5)
lines.shape


Out[9]:
(18, 1, 4)

Convert the shape of the result from the Hough Transform


In [10]:
# drop the singleton middle axis: (N, 1, 4) -> (N, 4)
lines = lines.reshape((lines.shape[0], lines.shape[2]))
lines.shape


Out[10]:
(18, 4)

Identify an Anchor Point for Our Lines


In [11]:
y_min = min(lines[:,1].min(),lines[:,3].min())
y_max = imshape[0]
y_min, y_max


Out[11]:
(334, 540)

Find the Slopes of the Lines


In [12]:
slopes = calculate_slopes(lines)
slopes


Out[12]:
array([ 0.62385321,  0.62820513, -0.66666667,  0.63636364, -0.71428571,
        0.63636364,  0.64      ,  0.62162162, -0.65217391,  0.60869565,
        0.625     , -0.71428571,  0.625     ,  0.62222222,  0.63265306,
        0.61764706,  0.61904762,  0.57692308])
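calculate_slopes comes from lib.image_processing and isn't shown here; a sketch consistent with the output above computes slope = Δy/Δx per segment (note that in image coordinates y grows downward, so the left lane line has negative slope):

```python
import numpy as np

def calculate_slopes(lines):
    """Slope of each (x1, y1, x2, y2) segment; vertical segments give inf."""
    lines = np.asarray(lines, dtype=np.float64)
    return (lines[:, 3] - lines[:, 1]) / (lines[:, 2] - lines[:, 0])

segments = np.array([[0, 0, 10, 5],    # slope 0.5
                     [0, 10, 10, 0]])  # slope -1.0
seg_slopes = calculate_slopes(segments)
```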

Split the Lines into Left and Right Sides


In [13]:
lines_l, lines_r = split_on_side(lines,slopes)
slopes_l, slopes_r = split_on_side(slopes,slopes)

print(lines_l)
print(lines_r)
print(slopes_l)
print(slopes_r)


[[315 420 357 392]
 [320 424 362 394]
 [310 422 356 392]
 [318 425 339 410]]
[[526 334 853 538]
 [519 334 597 383]
 [648 418 714 460]
 [730 472 796 514]
 [612 394 662 426]
 [774 501 811 524]
 [709 458 732 472]
 [794 514 818 529]
 [802 507 834 527]
 [640 406 685 434]
 [633 408 682 439]
 [721 466 755 487]
 [693 439 714 452]
 [730 462 756 477]]
[-0.66666667 -0.71428571 -0.65217391 -0.71428571]
[ 0.62385321  0.62820513  0.63636364  0.63636364  0.64        0.62162162
  0.60869565  0.625       0.625       0.62222222  0.63265306  0.61764706
  0.61904762  0.57692308]
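split_on_side is likewise imported; a sketch consistent with the split shown above partitions by the sign of each slope (it is called once with the lines and once with the slopes themselves):

```python
import numpy as np

def split_on_side(values, slopes):
    """Partition rows of `values` by slope sign: (left, right).

    In image coordinates y increases downward, so left-lane segments
    have negative slope and right-lane segments positive slope.
    """
    slopes = np.asarray(slopes)
    values = np.asarray(values)
    return values[slopes < 0], values[slopes >= 0]

demo_lines = np.array([[0, 0, 10, 5], [0, 10, 10, 0]])
demo_slopes = np.array([0.5, -1.0])
left, right = split_on_side(demo_lines, demo_slopes)
```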

Find the Average Slope and Intercept for Each Side


In [14]:
avg_slope_l = slopes_l.mean()
avg_slope_r = slopes_r.mean()
avg_intercept_l = calculate_intercept(lines_l, avg_slope_l)
avg_intercept_r = calculate_intercept(lines_r, avg_slope_r)
avg_slope_l, avg_slope_r, avg_intercept_l, avg_intercept_r


Out[14]:
(-0.68685300207039335,
 0.62239970885557472,
 639.71318581780542,
 13.012493050708599)
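calculate_intercept isn't reproduced in the notebook either; a sketch consistent with the values above averages b = y − m·x over every segment endpoint:

```python
import numpy as np

def calculate_intercept(lines, slope):
    """Mean y-intercept b = y - m*x over every endpoint of `lines`."""
    lines = np.asarray(lines, dtype=np.float64)
    xs = np.concatenate([lines[:, 0], lines[:, 2]])
    ys = np.concatenate([lines[:, 1], lines[:, 3]])
    return float((ys - slope * xs).mean())
```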

Create Lines with Average Slope, Intercept and Anchor Points


In [15]:
lane_line_l = slope_intercept_to_two_points(avg_slope_l, avg_intercept_l, y_min, y_max)
lane_line_r = slope_intercept_to_two_points(avg_slope_r, avg_intercept_r, y_min, y_max)
lane_line_l, lane_line_r


Out[15]:
((445, 334, 145, 540), (515, 334, 846, 540))
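slope_intercept_to_two_points presumably solves x = (y − b)/m at the two anchor y values; a sketch that matches the integer tuples above for these slope/intercept values:

```python
def slope_intercept_to_two_points(slope, intercept, y_min, y_max):
    """Return (x_min, y_min, x_max, y_max) for the line x = (y - b) / m."""
    def x_at(y):
        return int((y - intercept) / slope)
    return (x_at(y_min), y_min, x_at(y_max), y_max)
```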

Draw the Lines


In [16]:
this_image = np.zeros(this_image.shape, dtype=np.uint8)
draw_line(this_image, lane_line_l)
draw_line(this_image, lane_line_r)
plt.imshow(this_image, cmap='gray')


Out[16]:
<matplotlib.image.AxesImage at 0x7f4a3c19d0b8>

Convert to Appropriate Format


In [17]:
# stack the line image with zeroed G and B channels to make an RGB image
this_image = np.dstack([this_image] + [np.zeros_like(this_image)] * (raw_image.shape[-1] - 1))
plt.imshow(this_image, cmap='gray')


Out[17]:
<matplotlib.image.AxesImage at 0x7f4a3c17f390>

Draw Lines on Raw Image


In [18]:
plt.imshow(weighted_img(this_image, raw_image))


Out[18]:
<matplotlib.image.AxesImage at 0x7f4a3c156828>

Test on Images

Now you should build your pipeline to work on the images in the directory "test_images".
You should make sure your pipeline works well on these images before you try the videos.


In [19]:
import os
[image for image in os.listdir("assets/img/") if '_raw_' in image]


Out[19]:
['_raw_solidWhiteCurve.jpg',
 '_raw_solidWhiteRight.jpg',
 '_raw_solidYellowCurve.jpg',
 '_raw_solidYellowCurve2.jpg',
 '_raw_solidYellowLeft.jpg',
 '_raw_whiteCarLaneSwitch.jpg']

Run your solution on all test_images and make copies into the test_images directory.


In [21]:
plt.figure(figsize=(20,20))

plt.imshow(
    process_image(
        mpimg.imread('assets/img/_raw_solidWhiteRight.jpg')))


Out[21]:
<matplotlib.image.AxesImage at 0x7f4a7e01e518>

In [22]:
images = glob.glob('../assets/img/*')
raw_images = [f for f in images if '_raw_' in f]
plt.figure(figsize=(20,20))
for i, raw_image in enumerate(raw_images):
    proc_image = raw_image.replace('_raw_', 'processed_')
    this_image = mpimg.imread(raw_image)
    this_image_output = process_image(this_image)
    scipy.misc.imsave(proc_image, this_image_output)
    plt.subplot(1, len(raw_images), i + 1)
    plt.imshow(this_image_output, cmap='gray')


<matplotlib.figure.Figure at 0x7f4a7e02c7f0>

Test on Videos

You know what's cooler than drawing lanes over images? Drawing lanes over video!

We can test our solution on two provided videos:

solidWhiteRight.mp4

solidYellowLeft.mp4


In [27]:
from imageio import plugins
plugins.ffmpeg.download()


Imageio: 'ffmpeg.linux64' was not found on your computer; downloading it now.
Try 1. Download from https://github.com/imageio/imageio-binaries/raw/master/ffmpeg/ffmpeg.linux64 (27.2 MB)
Downloading: 28549024/28549024 bytes (100.0%)
  Done
File saved as /root/.imageio/ffmpeg/ffmpeg.linux64.

In [23]:
# Import everything needed to edit/save/watch video clips
from moviepy.editor import VideoFileClip
from IPython.display import HTML


---------------------------------------------------------------------------
NeedDownloadError                         Traceback (most recent call last)
<ipython-input-23-b16dd2c24d5c> in <module>()
      1 # Import everything needed to edit/save/watch video clips
----> 2 from moviepy.editor import VideoFileClip
      3 from IPython.display import HTML

[... internal moviepy / imageio frames elided ...]

NeedDownloadError: Need ffmpeg exe. You can download it by calling:
  imageio.plugins.ffmpeg.download()

Let's try the one with the solid white lane on the right first ...


In [24]:
white_output = '../assets/videos/processed_solidWhiteRight.mp4'
clip1 = VideoFileClip("../assets/videos/solidWhiteRight.mp4")
white_clip = clip1.fl_image(process_image) #NOTE: this function expects color images!!
%time white_clip.write_videofile(white_output, audio=False)


[MoviePy] >>>> Building video ../assets/videos/processed_solidWhiteRight.mp4
[MoviePy] Writing video ../assets/videos/processed_solidWhiteRight.mp4
100%|█████████▉| 221/222 [00:08<00:00, 26.97it/s]
[MoviePy] Done.
[MoviePy] >>>> Video ready: ../assets/videos/processed_solidWhiteRight.mp4 

CPU times: user 0 ns, sys: 0 ns, total: 0 ns
Wall time: 9.78 s

Play the video inline, or, if you prefer, find the video in your filesystem (it should be in the same directory) and play it in your video player of choice.

At this point, if you were successful you probably have the Hough line segments drawn onto the road, but what about identifying the full extent of the lane and marking it clearly as in the example video (P1_example.mp4)? Think about defining a line to run the full length of the visible lane based on the line segments you identified with the Hough Transform. Modify your draw_lines function accordingly and try re-running your pipeline.

Now for the one with the solid yellow lane on the left. This one's more tricky!


In [25]:
yellow_output = '../assets/videos/processed_solidYellowLeft.mp4'
clip2 = VideoFileClip('../assets/videos/solidYellowLeft.mp4')
yellow_clip = clip2.fl_image(process_image)
%time yellow_clip.write_videofile(yellow_output, audio=False)


[MoviePy] >>>> Building video ../assets/videos/processed_solidYellowLeft.mp4
[MoviePy] Writing video ../assets/videos/processed_solidYellowLeft.mp4
100%|█████████▉| 681/682 [00:28<00:00, 23.70it/s]
[MoviePy] Done.
[MoviePy] >>>> Video ready: ../assets/videos/processed_solidYellowLeft.mp4 

CPU times: user 0 ns, sys: 0 ns, total: 0 ns
Wall time: 30.1 s

Reflections

Congratulations on finding the lane lines! As the final step in this project, we would like you to share your thoughts on your lane finding pipeline... specifically, how could you imagine making your algorithm better / more robust? Where will your current algorithm be likely to fail?

Please add your thoughts below, and if you're up for making your pipeline more robust, be sure to scroll down and check out the optional challenge video below!

My original algorithm was a little jumpy. I wanted to keep the pipeline as simple as possible and avoid hard-coding handling for edge cases; within that constraint, I am satisfied with what I developed. I do think that machine learning techniques could reduce the jumpiness, for example by smoothing the line estimates across frames.

I also note that we are searching for straight lines. It would be better to handle curvature, perhaps with some variant of the kernelized Hough transform.
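Short of a full kernelized Hough transform, a lighter-weight option is to fit a low-order polynomial x = f(y) to each side's edge pixels instead of a single line. A sketch of the idea (the function names here are illustrative, not part of this pipeline):

```python
import numpy as np

def fit_lane_curve(points, degree=2):
    """Fit x = f(y) to one side's edge pixels; returns polynomial coefficients.

    Fitting x as a function of y copes with near-vertical lane markings,
    where fitting y(x) would be ill-conditioned.
    """
    points = np.asarray(points, dtype=np.float64)
    return np.polyfit(points[:, 1], points[:, 0], degree)

def sample_curve(coeffs, y_min, y_max, n=50):
    """Sample integer (x, y) pairs along the fit, e.g. for cv2.polylines."""
    ys = np.linspace(y_min, y_max, n)
    xs = np.polyval(coeffs, ys)
    return np.stack([xs, ys], axis=1).astype(np.int32)
```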

My second approach was to split the line segments by side and take the mean slope and intercept per side. This works well on all of the included videos.

Submission

If you're satisfied with your video outputs, it's time to submit! Submit this IPython notebook for review.

Optional Challenge

Try your lane finding pipeline on the video below. Does it still work? Can you figure out a way to make it more robust? If you're up for the challenge, modify your pipeline so it works with this video and submit it along with the rest of your project!


In [26]:
challenge_output = '../assets/videos/processed_challenge.mp4'
clip2 = VideoFileClip('../assets/videos/challenge.mp4')
challenge_clip = clip2.fl_image(process_image)
%time challenge_clip.write_videofile(challenge_output, audio=False)


[MoviePy] >>>> Building video ../assets/videos/processed_challenge.mp4
[MoviePy] Writing video ../assets/videos/processed_challenge.mp4
100%|██████████| 251/251 [00:18<00:00, 13.45it/s]
[MoviePy] Done.
[MoviePy] >>>> Video ready: ../assets/videos/processed_challenge.mp4 

CPU times: user 0 ns, sys: 0 ns, total: 0 ns
Wall time: 21.4 s

In [ ]: