# Self-Driving Car: Finding Lane Lines

Source: Deep Learning on Medium

The self-driving car is a fantastic technology that requires many crucial skills. One of the essential techniques is detecting street lanes. In this repository, I share my code for that purpose. In the future, I will share other essential parts of the self-driving car.

In this article, I am going to implement lane line detection on a video. Before implementing this project, we have to set up some requirements. I recommend this free online course on Udacity for more information about setting up a suitable environment. Of course, I am using Google Colab. It is easy and free.

There is another powerful tool named Anaconda. Anaconda is a distribution of packages built for data science. It comes with Conda, a package and environment manager. You can use Conda to create environments that isolate projects using different versions of Python and/or different packages, and to install, uninstall, and update packages in those environments. Using Anaconda has made my work with data much more pleasant. So it is up to you which tool to select.

# STEPS TO FIND LANE LINES

For lane detection on a video, we have to pass some steps that you can see below:

• Getting each frame from the video
• Converting each frame to grayscale
• Detecting edges using the Canny algorithm
• Finding lanes using the Hough transform
• Improving the output and producing a new video as the result

Before starting this project, let me show you why we cannot rely on color segmentation alone to find a lane in a video. Lane markings are usually white, so when there is enough daylight you can detect white lanes easily. But think about the night: it is hardly possible to detect the lanes then. And that is not the only reason to reject a pure color segmentation algorithm. You can imagine that in the real world there are many objects that are near-white or completely white. Objects have features such as color, shape, orientation, and position that can help us detect them. Anyway, I want to show you this in practice using Python and OpenCV, which is a powerful tool for image processing.

# COLOR SELECTION

Each of the red, green, and blue color channels takes values from 0 (dark) to 255 (bright). It is possible to extract lane lines by selecting just the white pixels. But some areas contain white pixels that do not belong to the lane lines. To eliminate those areas we can use region masking: add criteria to the code so that it only focuses on a specific region of the image, since the lane lines will always appear in the same general part of the frame. This is another weakness of the color segmentation algorithm. If you want to test it yourself, just clone this repository and run `ColorLaneLinesDetection.py`. After running it, you should see the pictures below as the result. The code extracts white pixels as lane lines, and then a triangular mask limits the search area.

```python
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
import numpy as np

image = mpimg.imread('test.jpg')
print('This image is: ', type(image), ', and the size of image is: ', image.shape)
ysize = image.shape[0]
xsize = image.shape[1]
region_select = np.copy(image)
color_select = np.copy(image)
line_image = np.copy(image)

# Keep only pixels at least this bright in every channel
red_threshold = 200
green_threshold = 200
blue_threshold = 200
rgb_threshold = [red_threshold, green_threshold, blue_threshold]

# Triangular region of interest (for a 960x540 image)
left_bottom = [0, 540]
right_bottom = [960, 540]
apex = [480, 320]

# Fit lines (y = m*x + b) through the triangle's three sides
fit_left = np.polyfit((left_bottom[0], apex[0]), (left_bottom[1], apex[1]), 1)
fit_right = np.polyfit((right_bottom[0], apex[0]), (right_bottom[1], apex[1]), 1)
fit_bottom = np.polyfit((left_bottom[0], right_bottom[0]), (left_bottom[1], right_bottom[1]), 1)

# Mask of pixels below the color thresholds in any channel
color_thresholds = (image[:, :, 0] < rgb_threshold[0]) | \
                   (image[:, :, 1] < rgb_threshold[1]) | \
                   (image[:, :, 2] < rgb_threshold[2])

# Mask of pixels inside the triangular region
XX, YY = np.meshgrid(np.arange(0, xsize), np.arange(0, ysize))
region_thresholds = (YY > (XX * fit_left[0] + fit_left[1])) & \
                    (YY > (XX * fit_right[0] + fit_right[1])) & \
                    (YY < (XX * fit_bottom[0] + fit_bottom[1]))

# Black out non-white pixels, paint the region, and mark lane pixels in red
color_select[color_thresholds] = [0, 0, 0]
line_image[~color_thresholds & region_thresholds] = [255, 0, 0]
region_select[region_thresholds] = [255, 0, 0]

plt.imshow(region_select)
plt.imshow(color_select)
plt.imshow(line_image)
```

So let’s continue our steps to find lane lines. We’ll start with the Canny Algorithm.

# CANNY EDGE DETECTION

The Canny edge detector is an edge detection operator that uses a multi-stage algorithm to detect a wide range of edges in images. It was developed by John F. Canny in 1986. Canny also produced a computational theory of edge detection explaining why the technique works. An edge in an image may point in a variety of directions, so the Canny algorithm uses four filters to detect horizontal, vertical, and diagonal edges in the blurred image by finding the intensity gradients of the image. After the gradient calculation, the edges extracted from the gradient values are still quite blurred. Non-maximum suppression then helps to suppress all the gradient values (by setting them to 0) except the local maxima, which indicate the locations with the sharpest change of intensity.

```python
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
import cv2
import numpy as np

image = mpimg.imread('exit-ramp.jpg')
plt.imshow(image)

# Convert to grayscale
gray = cv2.cvtColor(image, cv2.COLOR_RGB2GRAY)
plt.imshow(gray, cmap='gray')

# Smooth the image before edge detection
kernel_size = 9
blur_gray = cv2.GaussianBlur(gray, (kernel_size, kernel_size), 0)

# Canny edge detection
low_threshold = 30
high_threshold = 100
edges = cv2.Canny(blur_gray, low_threshold, high_threshold)
```

To explore Canny edge detection, run the `CannytoDetectLaneLines.py` script. After it runs successfully, you should see the picture below as the result.

```python
plt.imshow(edges, cmap='Greys_r')
```

As you can see, only the important lines of the image, those with strong edges, remain. The main question is: how can we extract straight lines from an image? The Hough transform answers this question.

# HOUGH TRANSFORM

The Hough transform is a feature extraction technique used in image analysis, computer vision, and digital image processing. The purpose of the technique is to find imperfect instances of objects within a certain class of shapes by a voting procedure. This voting procedure is carried out in parameter space, from which object candidates are obtained as local maxima in a so-called accumulator space that is explicitly constructed by the algorithm for computing the Hough transform. The classical Hough transform was concerned with the identification of lines in the image, but it was later extended to identifying the positions of arbitrary shapes, most commonly circles or ellipses. A line in image space can be represented as a single point in parameter space, or Hough space. We use this theory to detect lines in a picture, so to achieve this goal we feed the result of the Canny algorithm into the Hough transform.

The Hough algorithm has some parameters that play a key role in tuning it well. You can either spend a long time tuning the parameters, or apply a special mask that eliminates unhelpful areas of the picture. Check the difference between using the masked area and only tuning the parameters of the Hough algorithm. In the images below, you can see the detected lines in red.

```python
# Rectangular mask covering the whole image (effectively no masking)
imshape = image.shape  # (height, width, channels)
vertices = np.array([[(0, imshape[0]), (0, 0), (imshape[1], 0),
                      (imshape[1], imshape[0])]], dtype=np.int32)
```

```python
# Trapezoidal mask focused on the road area (for a 960x540 frame)
imshape = image.shape
vertices = np.array([[(0, imshape[0]), (450, 290), (490, 290),
                      (imshape[1], imshape[0])]], dtype=np.int32)
```

# Putting it all together

Now it is time to put all our knowledge together to find lanes in a video. This video is the source for testing our lane-finding algorithm. You can find `solidWhiteRight.mp4` inside the `test_videos` folder. Click on the image below to see the source video.