RC Car ML Model Development with an Ouster OS1 Lidar

The purpose of this project was to develop a Machine Learning model to enable an RC car to autonomously navigate a race track using an Ouster OS1 lidar sensor as the primary sensor input. The model is an end-to-end Convolutional Neural Network (CNN) that processes intensity image data from the lidar and outputs a steering command for the RC car.
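To make the "steering command" output concrete: the model predicts a scalar steering value that must eventually be mapped to a pulse width the car's steering servo understands. A minimal sketch of that last step, with illustrative calibration values (not the car's actual ones):

```python
def steering_to_pwm(angle, max_angle=0.5, center_us=1500, span_us=400):
    """Map a predicted steering angle (radians) to an RC servo pulse
    width in microseconds. The limits and endpoints here are
    illustrative placeholders, not the car's real calibration."""
    # Clamp the prediction to the servo's physical range
    angle = max(-max_angle, min(max_angle, angle))
    # Linearly interpolate around the neutral pulse width
    return int(round(center_us + (angle / max_angle) * span_us))

print(steering_to_pwm(0.0))    # neutral -> 1500
print(steering_to_pwm(0.25))   # half right -> 1700
print(steering_to_pwm(-1.0))   # clamped hard left -> 1100
```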

This is inspired by the Udacity Self-Driving Car “behavior cloning” module as well as the DIY Robocars races. This project builds on earlier work that developed a similar model using color camera images from a webcam to navigate the RC car autonomously.


This post covers the development of a data pipeline for collecting and processing training data, the compilation and training of the ML model using Google Colab, and the integration and deployment of the model on the RC car using ROS.

RC Car Lidar-based ML Model Deployment

Continue reading

RC Car ML Model Development with Google Colab

Previously, the process for training and deploying an ML model to autonomously operate an RC car was described in the post, “RC Car End-to-end ML Model Development.” The purpose of the project was to develop an ML model that predicted steering angles given a color camera image input to enable the RC car to follow a lane around a track autonomously. A video of the real-world implementation of this model is depicted below.

RC End-to-End ML Model Verification

This post describes the process of using Google Colab for model training instead of the embedded device on the RC car. Google Colab is a free tool for machine learning education and research. It’s a Jupyter notebook environment that can be accessed via a web browser; code is executed in a virtual machine, and the notebook files are stored in your Google Drive account. For this example, everything will be done in Python.

Depending on the size of the dataset or the complexity of the model, it can be difficult to train an ML model on an embedded device. Google Colab offers free use of GPUs and now provides TPUs as well. The virtual machines also have 12GB of RAM and 320GB of disk space. This makes them a great tool for training ML models with large data sets.
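As a quick sanity check, the resources of a fresh Colab VM can be inspected from a notebook cell using nothing but the standard library (the figures you see will vary by session):

```python
import os
import shutil

# Free/total disk space on the VM's root filesystem
total_b, used_b, free_b = shutil.disk_usage("/")
print(f"Disk: {free_b / 1e9:.0f} GB free of {total_b / 1e9:.0f} GB")

# Total RAM via /proc/meminfo (present on the Linux VMs Colab uses)
if os.path.exists("/proc/meminfo"):
    with open("/proc/meminfo") as f:
        mem_kb = int(f.readline().split()[1])
    print(f"RAM: {mem_kb / 1e6:.1f} GB")
```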

Continue reading

RC Car End-to-end ML Model Development

This post describes the process of developing an end-to-end Machine Learning model to steer an RC car around a track using a color camera as the input. This is inspired by the Udacity Self-Driving Car “behavior cloning” module as well as the DIY Robocars races.

The purpose of this project is to create a pipeline for collecting and processing data as well as training and deploying an ML model. Once that basic infrastructure is in place, we can build on that foundation to tackle more challenging projects.
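A minimal sketch of the kind of record such a pipeline produces: each saved camera frame is paired with the steering command logged at capture time. The field names and file names here are illustrative, not the project's actual schema:

```python
import csv
import io

# Each training sample pairs a recorded frame with a steering label
rows = [
    {"image": "frame_0001.png", "steering": -0.12},
    {"image": "frame_0002.png", "steering": 0.05},
]

# Write the labels out in CSV form (an in-memory buffer stands in
# for a file on disk here)
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["image", "steering"])
writer.writeheader()
writer.writerows(rows)

# Read the labels back, as a training script would
buf.seek(0)
labels = {r["image"]: float(r["steering"]) for r in csv.DictReader(buf)}
print(labels["frame_0001.png"])  # -0.12
```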

For this project, the goal is to develop an ML model that reliably navigates the RC car around a racetrack. The ML model performs a regression to estimate a steering angle for each camera image. We will test everything in simulation using Gazebo with ROS and then use the same system to drive autonomously around an indoor track in the real world.

RC Vehicle End-to-end ML Model Verification

Continue reading

Odometry Estimation with an Ouster OS-1 Lidar Sensor

This post describes the process of fusing the IMU and range data from an OS-1 lidar sensor in order to estimate the odometry of a moving vehicle. The position, orientation, and velocity estimates are critical to enabling high levels of automated behavior such as path planning and obstacle avoidance.

One of the most fundamental methods used by robots for navigation is known as odometry. The goal of odometry is to enable the robot to determine its position in the environment. The position and velocity are determined by measuring the change from the robot’s known initial position using the onboard sensor data.
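The accumulation idea can be sketched as simple 2D dead reckoning. Real lidar/IMU fusion is far more involved, but the principle — integrating measured motion from a known starting pose — is the same:

```python
import math

def integrate_odometry(pose, v, omega, dt):
    """Advance a 2D pose (x, y, heading) by one time step, assuming
    the linear velocity v and yaw rate omega are constant over dt."""
    x, y, theta = pose
    x += v * math.cos(theta) * dt
    y += v * math.sin(theta) * dt
    theta += omega * dt
    return (x, y, theta)

# Drive straight at 1 m/s for 1 s, integrated in ten 0.1 s steps
pose = (0.0, 0.0, 0.0)
for _ in range(10):
    pose = integrate_odometry(pose, v=1.0, omega=0.0, dt=0.1)
print(pose)  # approximately (1.0, 0.0, 0.0)
```

Because each step only measures *change*, any error in the velocity or heading estimates accumulates over time, which is why odometry alone drifts and is usually fused with other sensing.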

Continue reading

Ouster OS-1 lidar and Google Cartographer Integration

This post describes the process of integrating Ouster OS-1 lidar data with Google Cartographer to generate 2D and 3D maps of an environment.

Cartographer is a system that provides real-time simultaneous localization and mapping (SLAM) in 2D and 3D across multiple platforms and sensor configurations. SLAM algorithms combine data from various sensors (e.g., lidar, IMU, and cameras) to simultaneously compute the position of the sensor and a map of the sensor’s surroundings. SLAM is an essential component of autonomous platforms such as self-driving cars, automated forklifts in warehouses, robotic vacuum cleaners, and UAVs. A detailed description of Cartographer’s 2D algorithms can be found in their ICRA 2016 paper.

Map Making with Google Cartographer

Cartographer was released as an open-source project in October 2016. Google also included ROS support through the release of the cartographer_ros repository, which contains several ROS packages that can be used to integrate Cartographer into an existing ROS system.

Continue reading

OpenMV Cam ROS Node Development

This post describes the development of a ROS node that can be used to process images from an OpenMV Cam in a ROS-based robotics system.

The goal of the OpenMV Cam project was to create a low-cost, extensible, Python-powered machine vision module to become the “Arduino of Machine Vision.” The OpenMV team wanted to make machine vision algorithms more approachable to makers and hobbyists. The camera ships with software for common computer vision tasks like tracking colors and detecting faces, and it can also drive I/O pins to control physical objects in the real world.

OpenMV Cam Image from openmv.io

Continue reading

OS-1 ROS Package Deployment with Docker

Docker is an open platform for developing, shipping, and running applications in containers, which lets you separate your applications from your infrastructure. Docker provides the ability to package and run an application in a loosely isolated environment called a container.

The container provides a standardized environment for development. This means that we can distribute our ROS packages along with a known environment to ensure that it runs reliably on other platforms.

A Docker image is a read-only template with instructions for creating a Docker container. Images can be customized further from a base image. An image is built from a Dockerfile, which contains the steps necessary to assemble the image and run it.

A container is a runnable instance of an image. A container is defined by its image as well as any configuration options you provide to it when you create or start it.
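To make the image/container distinction concrete, a minimal Dockerfile for a ROS package might look like the sketch below. The package name and workspace path are hypothetical placeholders, not the actual OS-1 package layout:

```
# Start from an official ROS base image
FROM ros:melodic-ros-base

# Copy the package source into a catkin workspace (path is illustrative)
COPY . /catkin_ws/src/example_pkg

# Build the workspace inside the image
RUN /bin/bash -c "source /opt/ros/melodic/setup.bash && cd /catkin_ws && catkin_make"

# Default command when a container is started from this image
CMD ["bash"]
```

From this file, `docker build -t example_pkg .` produces the image, and `docker run -it example_pkg` starts a container — a runnable instance — from it.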

Continue reading

Ouster ROS Package Continuous Integration with Travis-CI

This post details the process of integrating the Travis-CI tool with the Ouster ROS package. Continuous Integration (CI) is a software development practice where developers frequently integrate code into a shared repository. Each integration can then be verified by an automated build and automated tests. This automated testing allows you to detect errors quickly and locate them more easily. CI has become a best practice for software development and is guided by a set of key principles, including revision control, build automation, and automated testing.

CI also supports the concept of Continuous Delivery, another best practice. The goal is to keep your applications in a deployable state at all times and even automatically push your application into production after each successful update. The post “OS-1 ROS Package Deployment with Docker” details the steps to integrate a ROS package on GitHub with Docker Hub to support Continuous Delivery.

As a continuous integration platform, Travis CI supports your development process by automatically building and testing code changes, providing immediate feedback on the success of the change. Travis CI can also automate other parts of your development process by managing deployments and notifications.

When you run a build, Travis CI clones your GitHub repository into a brand-new virtual environment and carries out a series of tasks to build and test your code. If one or more of those tasks fails, the build is considered broken. If none of the tasks fail, the build is considered passed, and Travis CI can deploy your code to a web server or application host.
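The flow is driven by a `.travis.yml` file in the repository root. A hypothetical minimal configuration for a ROS package built inside Docker might look like this (the image tag and script are illustrative, not the repository's actual config):

```
# Run the build inside a Docker container on every push
language: generic
services:
  - docker
script:
  - docker build -t ouster_ros_ci .
```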

Continue reading