Katie Hughes


Project Title & GitHub Link

EKF SLAM on a Turtlebot3

Technologies

ROS 2, C++, SLAM

Project

In this project, I implemented extended Kalman filter SLAM from scratch on a Turtlebot3.



The video above shows my full algorithm working on a real robot. The upper left corner shows my best estimates of the robot position according to odometry alone (blue) and EKF SLAM (green). The SLAM estimate tracks the real robot position significantly better, since factors like sensor noise and wheel slippage are not captured by odometry. The green obstacles mark the locations of the cardboard cylinders according to SLAM, while the aqua obstacles show the intermediate results of my circle fitting algorithm on the lidar data, which are then incorporated into the SLAM updates. Parts of the surrounding walls also get marked as obstacles because their surface is not completely flat. The robot was driven using the teleop_keyboard node from the turtlebot3_teleop package.


Low Level Control

I created a C++ library to handle 2D transformation matrix math and the forward and inverse kinematics of a differential drive robot. This handles converting a desired robot velocity (published on the cmd_vel topic) into wheel commands, as well as converting encoder updates into a new robot position via odometry.
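
As a rough illustration of the idea (a minimal sketch rather than the actual library interface, assuming a wheel radius r and a track width d; the type and function names here are hypothetical), the core differential drive conversions look like this:

```cpp
// Illustrative sketch of differential drive kinematics; names are
// hypothetical, not the real library API.
struct Twist2D { double omega, vx; };          // planar body twist (no lateral slip)
struct WheelVelocities { double left, right; };

class DiffDrive {
public:
  DiffDrive(double wheel_radius, double track_width)
  : r_{wheel_radius}, d_{track_width} {}

  // Inverse kinematics: desired body twist (e.g. from cmd_vel) -> wheel speeds (rad/s).
  WheelVelocities inverse_kinematics(const Twist2D & v) const {
    return {(v.vx - 0.5 * d_ * v.omega) / r_,
            (v.vx + 0.5 * d_ * v.omega) / r_};
  }

  // Forward kinematics: measured wheel speeds -> body twist, which is then
  // integrated to update the odometry estimate of the robot pose.
  Twist2D forward_kinematics(const WheelVelocities & u) const {
    return {r_ * (u.right - u.left) / d_,
            r_ * (u.right + u.left) / 2.0};
  }

private:
  double r_, d_;   // wheel radius and distance between the wheel centers
};
```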


Localization and Mapping

I performed online extended Kalman filter SLAM with discrete features. This keeps track of the robot state in 2D (\(x, y, \theta\)) and \(n\) map landmarks, each represented as a cylinder centered at some (\(m_{x,i}, m_{y,i}\)). More details of the implementation are described here. On each iteration, I make a prediction of the new robot state and covariance based on the odometry estimate. Then, for each obstacle measurement I see, I update my prediction of the map, the robot state, and the covariance. This significantly outperforms odometry alone in both simulation and real life.
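
To make the prediction step concrete, here is a simplified sketch (using Eigen for the matrix math, which may differ from my actual implementation) that assumes the combined state vector is \((\theta, x, y, m_{x,1}, m_{y,1}, \ldots, m_{x,n}, m_{y,n})\) and that the odometry update is a forward translation \(\Delta u\) plus a rotation \(\Delta\theta\):

```cpp
#include <cmath>
#include <Eigen/Dense>

// Simplified EKF prediction sketch (straight-line motion model shown;
// the full model integrates the arc when dtheta != 0).
void ekf_predict(Eigen::VectorXd & q, Eigen::MatrixXd & sigma,
                 double dtheta, double du, const Eigen::MatrixXd & Q_bar)
{
  const double theta = q(0);

  // Propagate the robot pose by the odometry estimate; landmarks stay put.
  q(0) += dtheta;
  q(1) += du * std::cos(theta);
  q(2) += du * std::sin(theta);

  // Jacobian A of the motion model: identity except that x and y depend on theta.
  const long n = q.size();
  Eigen::MatrixXd A = Eigen::MatrixXd::Identity(n, n);
  A(1, 0) = -du * std::sin(theta);
  A(2, 0) =  du * std::cos(theta);

  // Covariance prediction: Sigma <- A Sigma A^T + Q_bar, where Q_bar holds
  // process noise in the pose block and zeros for the (static) landmarks.
  sigma = A * sigma * A.transpose() + Q_bar;
}
```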


Perception Pipeline

First, I performed unsupervised clustering of the lidar data. This groups together lidar points that are within a certain distance threshold of each other.
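
A minimal sketch of this kind of clustering (assuming the scan has already been converted to \((x, y)\) points in scan order; names are illustrative):

```cpp
#include <cmath>
#include <vector>

struct Point2D { double x, y; };

// Group consecutive scan points into clusters whenever neighboring points
// are closer together than dist_threshold.
std::vector<std::vector<Point2D>> cluster_scan(
  const std::vector<Point2D> & points, double dist_threshold)
{
  std::vector<std::vector<Point2D>> clusters;
  for (const auto & p : points) {
    if (!clusters.empty()) {
      const Point2D & prev = clusters.back().back();
      if (std::hypot(p.x - prev.x, p.y - prev.y) < dist_threshold) {
        clusters.back().push_back(p);   // close enough: same cluster
        continue;
      }
    }
    clusters.push_back({p});            // too far away: start a new cluster
  }
  // In practice the scan wraps around 360 degrees, so the first and last
  // clusters may need to be merged, and tiny clusters can be discarded.
  return clusters;
}
```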


Next, I performed circle regression and circle classification on these clusters. This computes the center and radius of the best-fit circle for each cluster in 2D, and determines whether the cluster should actually be treated as a circle using both radius filtering and the Inscribed Angle Variance test.
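
One simple way to perform the regression step is an algebraic least-squares fit of \(x^2 + y^2 + Dx + Ey + F = 0\); this is only a sketch of the idea, and the actual circle-fitting method used in the project may differ:

```cpp
#include <cmath>
#include <vector>
#include <Eigen/Dense>

struct Point2D { double x, y; };   // same as in the clustering sketch
struct Circle { double cx, cy, radius; };

// Fit x^2 + y^2 + D*x + E*y + F = 0 to a cluster in the least-squares sense,
// then recover the center and radius. Radius filtering and the Inscribed
// Angle Variance test would be applied afterwards to reject non-circles.
Circle fit_circle(const std::vector<Point2D> & cluster)
{
  const long n = static_cast<long>(cluster.size());
  Eigen::MatrixXd A(n, 3);
  Eigen::VectorXd b(n);
  for (long i = 0; i < n; ++i) {
    const double x = cluster[i].x, y = cluster[i].y;
    A(i, 0) = x;
    A(i, 1) = y;
    A(i, 2) = 1.0;
    b(i) = -(x * x + y * y);
  }
  const Eigen::VectorXd sol = A.colPivHouseholderQr().solve(b);  // (D, E, F)
  const double D = sol(0), E = sol(1), F = sol(2);
  return {-D / 2.0, -E / 2.0, std::sqrt(D * D / 4.0 + E * E / 4.0 - F)};
}
```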


Finally, I performed data association. This determines whether a detected circle should be classified as a new map object or associated with an obstacle that has already been seen. To accomplish this, I calculate the Mahalanobis distance between the measurement and each of the previously initialized obstacles and find the minimum. If this minimum is greater than some threshold, the detection is treated as a new obstacle; otherwise, it is associated with the obstacle at the minimum Mahalanobis distance.
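
A sketch of that decision rule (names are illustrative; the predicted measurements \(\hat{z}_k\) and innovation covariances \(\Psi_k\) are assumed to come from the EKF state):

```cpp
#include <cstddef>
#include <limits>
#include <vector>
#include <Eigen/Dense>

// Return the index of the existing landmark a measurement z (range, bearing)
// should be associated with, or -1 if it should be initialized as a new one.
int associate(const Eigen::Vector2d & z,
              const std::vector<Eigen::Vector2d> & z_hat,   // predicted measurements
              const std::vector<Eigen::Matrix2d> & psi,     // innovation covariances
              double new_landmark_threshold)
{
  int best_k = -1;
  double best_d = std::numeric_limits<double>::infinity();
  for (std::size_t k = 0; k < z_hat.size(); ++k) {
    // Mahalanobis distance of the innovation, weighted by its covariance.
    const Eigen::Vector2d innovation = z - z_hat[k];
    const double d = innovation.dot(psi[k].inverse() * innovation);
    if (d < best_d) {
      best_d = d;
      best_k = static_cast<int>(k);
    }
  }
  // If even the closest landmark is farther than the threshold, this is a
  // brand new obstacle; otherwise it matches landmark best_k.
  return (best_d > new_landmark_threshold) ? -1 : best_k;
}
```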



The above video shows my full SLAM algorithm working in simulation. Again, the blue robot represents the odometry estimate and the green robot/obstacles represent the SLAM estimates. The red robot and obstacles represent the simulated “true” locations. Over time, the SLAM estimate stays with the location of the simulated robot, while the odometry estimate drifts away.