Alexander Sickert
Udacity’s Self-driving Car Engineering Program

Since November 2016 I have been participating in the nine-month self-driving car engineering nanodegree program at Udacity. My motivation is this: on the one hand, I am a bit tired of being purely a manager, detached from where the rubber hits the road. To be a better manager I want to upgrade my coding skills so that I can have a more adult conversation with engineers. On the other hand, there are only a few topics in IT that really interest me. One interesting area is systems that can learn, and this leads to artificial intelligence, machine learning and self-driving cars. So I applied to Udacity and was quite surprised that I got accepted, given that I had not coded for a few years. I was afraid I would not have sufficient programming skills to pass the assignments.
The nanodegree is split into three terms, each three months long. The first term is purely in Python, the second mainly in C++.
Currently I am in the middle of term two. I still like the program, but it is hard for me to keep up. It turns out that it is not the programming skills that make it hard for me, and it is not pure maths skills that I am lacking either. What makes it difficult is the length of certain algorithms, which combine many mathematical formulas. The individual formulas are not difficult to comprehend, but when they are all packed together I easily get lost.
The projects I have completed so far are listed below; after the list I have added a short, simplified Python sketch of the core technique behind each one:
- Find the lane lines on a street and their curvature. This is done with computer-vision techniques such as the Sobel operator and thresholding in the HLS color space. First the camera is calibrated using the OpenCV library, then a perspective transformation to a top-down view is applied so that the lane curvature can be calculated. The application can run in real time on a video signal.
- Using the video signal from a camera in the car, train a deep neural network so that the program can keep the car on the street and steer it. First the network learns by observing a human driver, then it drives the car by itself. The technologies used are Keras, NumPy and OpenCV.
- Detect other vehicles on the street using a video signal. The computer-vision technique HOG (Histogram of Oriented Gradients) is used to describe the shape of objects, and the training images are used to train a support vector machine. Furthermore, techniques like a sliding window search and heat maps are used to improve the accuracy of the classification. Python libraries used: NumPy, OpenCV, SciPy, scikit-image, Matplotlib, scikit-learn, MoviePy.
- Classify images of traffic signs. A Python pickle file with several thousand images was used to train a LeNet architecture in TensorFlow. Libraries used: TensorFlow, NumPy, OpenCV.
- Implementation of a Kalman filter to localize the car, using radar and laser (lidar) measurements as input. Programmed in C++.
- Implementation of a particle filter to localize the car on a map, in C++.
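
For the lane-finding project, here is a condensed Python sketch of the central steps: combining a Sobel gradient threshold with an HLS color threshold, warping to a bird's-eye view, and computing curvature from a fitted polynomial. The threshold values and helper names are my own illustrative assumptions, not the exact project code.

    import cv2
    import numpy as np

    def threshold_lane_pixels(img, s_thresh=(170, 255), sx_thresh=(20, 100)):
        # Combine an S-channel (HLS) threshold with a Sobel-x gradient threshold.
        hls = cv2.cvtColor(img, cv2.COLOR_BGR2HLS)
        l_channel = hls[:, :, 1]
        s_channel = hls[:, :, 2]
        sobelx = np.absolute(cv2.Sobel(l_channel, cv2.CV_64F, 1, 0))
        scaled = np.uint8(255 * sobelx / np.max(sobelx))
        binary = np.zeros_like(s_channel)
        binary[(scaled >= sx_thresh[0]) & (scaled <= sx_thresh[1])] = 1
        binary[(s_channel >= s_thresh[0]) & (s_channel <= s_thresh[1])] = 1
        return binary

    def warp_to_birds_eye(binary, src, dst):
        # Perspective transform to a top-down view so the lane curvature can be fitted.
        M = cv2.getPerspectiveTransform(src, dst)
        h, w = binary.shape[:2]
        return cv2.warpPerspective(binary, M, (w, h))

    def curvature_radius(fit, y_eval):
        # Radius of curvature of a fitted 2nd-order polynomial x = A*y^2 + B*y + C.
        A, B = fit[0], fit[1]
        return (1 + (2 * A * y_eval + B) ** 2) ** 1.5 / abs(2 * A)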
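For the behavioral-cloning project, a minimal Keras sketch of the kind of network involved: the camera image goes in, a single steering angle comes out, and the network is trained on images recorded while a human drives. The layer sizes here are illustrative assumptions.

    from keras.models import Sequential
    from keras.layers import Lambda, Conv2D, Flatten, Dense

    def build_model(input_shape=(160, 320, 3)):
        model = Sequential()
        # Normalize pixel values to roughly [-0.5, 0.5]
        model.add(Lambda(lambda x: x / 255.0 - 0.5, input_shape=input_shape))
        model.add(Conv2D(24, (5, 5), strides=(2, 2), activation='relu'))
        model.add(Conv2D(36, (5, 5), strides=(2, 2), activation='relu'))
        model.add(Conv2D(48, (5, 5), strides=(2, 2), activation='relu'))
        model.add(Flatten())
        model.add(Dense(100, activation='relu'))
        model.add(Dense(1))  # single output: the steering angle
        model.compile(optimizer='adam', loss='mse')
        return model

    # Training on recorded camera frames and the steering angles of the human driver:
    # model.fit(camera_images, steering_angles, validation_split=0.2, epochs=5)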
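For the vehicle-detection project, a sketch of the HOG + support vector machine idea, plus the heat-map trick that lets overlapping sliding-window detections reinforce each other. Parameter values (cell sizes, 64x64 patches) are assumptions for illustration.

    import numpy as np
    from skimage.feature import hog
    from sklearn.svm import LinearSVC

    def extract_hog_features(gray_patch):
        # Histogram of Oriented Gradients feature vector for one grayscale image patch.
        return hog(gray_patch, orientations=9, pixels_per_cell=(8, 8),
                   cells_per_block=(2, 2), feature_vector=True)

    def train_classifier(car_patches, noncar_patches):
        # Label car patches 1 and non-car patches 0, then fit a linear SVM.
        X = np.array([extract_hog_features(p) for p in car_patches + noncar_patches])
        y = np.array([1] * len(car_patches) + [0] * len(noncar_patches))
        clf = LinearSVC()
        clf.fit(X, y)
        return clf

    def add_heat(heatmap, hot_windows):
        # Accumulate positive sliding-window detections; thresholding the heat map
        # later suppresses spurious single-window false positives.
        for (x1, y1), (x2, y2) in hot_windows:
            heatmap[y1:y2, x1:x2] += 1
        return heatmap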
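For the traffic-sign project, a compact LeNet-style classifier written roughly in the TensorFlow 1.x style used at the time. The pickle file name, the dictionary keys and the class count are assumptions, not necessarily the project's exact data layout.

    import pickle
    import tensorflow as tf

    # The training data ships as a Python pickle file (file name and keys assumed here).
    with open('train.p', 'rb') as f:
        train = pickle.load(f)
    X_train, y_train = train['features'], train['labels']

    def lenet(x, n_classes=43):
        # LeNet-5 style network: two conv/pool stages followed by fully connected layers.
        conv1 = tf.layers.conv2d(x, 6, 5, activation=tf.nn.relu)
        pool1 = tf.layers.max_pooling2d(conv1, 2, 2)
        conv2 = tf.layers.conv2d(pool1, 16, 5, activation=tf.nn.relu)
        pool2 = tf.layers.max_pooling2d(conv2, 2, 2)
        flat = tf.layers.flatten(pool2)
        fc1 = tf.layers.dense(flat, 120, activation=tf.nn.relu)
        fc2 = tf.layers.dense(fc1, 84, activation=tf.nn.relu)
        return tf.layers.dense(fc2, n_classes)  # logits, one per traffic sign class

    x = tf.placeholder(tf.float32, (None, 32, 32, 3))
    logits = lenet(x)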
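The Kalman filter project itself is in C++, but to stay consistent with the other sketches here are the standard predict and update equations in Python/NumPy. This shows the linear case, as used for the laser measurements; the radar measurements actually need the extended (nonlinear) variant, which is omitted here.

    import numpy as np

    def predict(x, P, F, Q):
        # Project the state x and covariance P forward through the motion model F.
        x = F @ x
        P = F @ P @ F.T + Q
        return x, P

    def update(x, P, z, H, R):
        # Correct the prediction with a measurement z (e.g. a laser position reading).
        y = z - H @ x                      # residual
        S = H @ P @ H.T + R                # residual covariance
        K = P @ H.T @ np.linalg.inv(S)     # Kalman gain
        x = x + K @ y
        P = (np.eye(P.shape[0]) - K @ H) @ P
        return x, P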
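The particle filter project is also in C++ and works with a full 2D landmark map and vehicle heading; the Python sketch below deliberately simplifies the observation model to a single range measurement, purely to illustrate the predict / weight / resample loop.

    import numpy as np

    def particle_filter_step(particles, weights, control, observation, landmarks, sigma):
        # 1. Predict: move each particle according to the control input, plus noise.
        particles = particles + control + np.random.normal(0.0, 0.1, particles.shape)

        # 2. Weight: particles whose distance to the nearest landmark matches the
        #    observed range get a higher (Gaussian) weight.
        for i, p in enumerate(particles):
            dists = np.linalg.norm(landmarks - p, axis=1)
            error = np.min(np.abs(dists - observation))
            weights[i] = np.exp(-0.5 * (error / sigma) ** 2)
        weights = weights + 1e-300          # guard against all-zero weights
        weights = weights / np.sum(weights)

        # 3. Resample: draw particles in proportion to their weights.
        idx = np.random.choice(len(particles), size=len(particles), p=weights)
        return particles[idx], np.full(len(particles), 1.0 / len(particles))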
More info: https://www.udacity.com/course/self-driving-car-engineer-nanodegree--nd013