RC Car ML Model Development with Google Colab

Previously, the post "RC Car End-to-end ML Model Development" described the process of training and deploying an ML model to autonomously operate an RC car. The purpose of the project was to develop an ML model that predicts steering angles from color camera images, enabling the RC car to follow a lane around a track autonomously. A video of the model running on the real-world vehicle is shown below.

RC End-to-End ML Model Verification

This post describes how to use Google Colab for model training instead of the embedded device on the RC car. Google Colab is a free tool for machine learning education and research. It's a Jupyter notebook environment that can be accessed via a web browser. Code is executed in a virtual machine, and the notebook files are stored in your Google Drive account. For this example, everything will be done in Python.

Depending on the size of the dataset or the complexity of the model, it can be difficult to train an ML model on an embedded device. Google Colab offers free use of GPUs and now provides TPUs as well. The virtual machines also have 12GB of RAM and 320GB of disk space, which makes them a great tool for training ML models on large datasets.
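
As a quick sanity check before training, you can confirm from within a notebook that a GPU runtime is actually attached. Here is a minimal sketch using TensorFlow, assuming a GPU runtime has been selected under Runtime > Change runtime type:

```python
# Check whether the Colab virtual machine has a GPU attached.
import tensorflow as tf

device_name = tf.test.gpu_device_name()
if device_name:
    print('GPU available at: {}'.format(device_name))
else:
    print('No GPU found - select a GPU runtime in the notebook settings.')
```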

Continue reading

RC Car End-to-end ML Model Development

This post describes the process of developing an end-to-end machine learning model to steer an RC car using a color camera as the input. This is inspired by the Udacity Self-Driving Car "behavior cloning" module as well as the DIY Robocars races.

The purpose of this project is to create a pipeline for collecting and processing data as well as training and deploying a ML model. Once that basic infrastructure is in place, we can build upon that foundation to tackle more challenging projects.

For this project, the goal is to develop an ML model that reliably navigates the RC car around a racetrack. The ML model will perform a regression to estimate a steering angle for each camera image. We will test everything in simulation using ROS Gazebo and then use the same system to drive autonomously around an indoor track in the real world.
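
As a rough illustration of this regression approach, a model that maps an image to a single steering value can be sketched in Keras as below. The layer sizes and input shape are illustrative placeholders, not the architecture used in the post:

```python
# Minimal sketch of an end-to-end steering regression network in Keras.
# Layer sizes and input shape are illustrative only.
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Conv2D(24, (5, 5), strides=(2, 2), activation='relu',
                  input_shape=(120, 160, 3)),  # color camera image
    layers.Conv2D(36, (5, 5), strides=(2, 2), activation='relu'),
    layers.Conv2D(48, (3, 3), activation='relu'),
    layers.Flatten(),
    layers.Dense(50, activation='relu'),
    layers.Dense(1)                            # predicted steering angle
])

# Mean squared error is the natural loss for a regression target.
model.compile(optimizer='adam', loss='mse')
```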

RC Vehicle End-to-end ML Model Verification

Continue reading

Odometry Estimation with an Ouster OS-1 lidar Sensor

This post describes the process of fusing the IMU and range data from an OS-1 lidar sensor in order to estimate the odometry of a moving vehicle. The position, orientation, and velocity estimates are critical to enabling high levels of automated behavior such as path planning and obstacle avoidance.

One of the most fundamental methods used by robots for navigation is known as odometry. The goal of odometry is to enable the robot to determine its position in the environment. The position and velocity are determined by measuring the change from the robot’s known initial position using the onboard sensor data.
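
The simplest form of odometry is dead reckoning: integrating measured speed and heading changes forward from the known starting pose. A toy planar sketch of the idea is below; the actual post fuses the OS-1's IMU and range data, which is considerably more involved:

```python
import math

# Toy planar dead reckoning: integrate forward speed and yaw rate
# from a known initial pose. Real odometry must also contend with
# sensor noise and accumulated drift.
x, y, yaw = 0.0, 0.0, 0.0  # known initial pose
dt = 0.1                   # sample period in seconds

def update(speed, yaw_rate):
    """Advance the pose estimate given speed (m/s) and yaw rate (rad/s)."""
    global x, y, yaw
    yaw += yaw_rate * dt
    x += speed * math.cos(yaw) * dt
    y += speed * math.sin(yaw) * dt

for speed, yaw_rate in [(1.0, 0.0), (1.0, 0.1), (1.0, 0.1)]:
    update(speed, yaw_rate)
print(x, y, yaw)
```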

Continue reading

Ouster OS-1 lidar and Google Cartographer Integration

This post describes the process of integrating Ouster OS-1 lidar data with Google Cartographer to generate 2D and 3D maps of an environment.

Cartographer is a system that provides real-time simultaneous localization and mapping (SLAM) in 2D and 3D across multiple platforms and sensor configurations. SLAM algorithms combine data from various sensors (e.g. LIDAR, IMU, and cameras) to simultaneously compute the position of the sensor and a map of the sensor’s surroundings. SLAM is an essential component of autonomous platforms such as self-driving cars, automated forklifts in warehouses, robotic vacuum cleaners, and UAVs. A detailed description of Cartographer’s 2D algorithms can be found in their ICRA 2016 paper.

Map Making with Google Cartographer

Cartographer was released as an open source project in October of 2016. Google also included ROS support through the release of the cartographer_ros repository, which contains several ROS packages that can be used to integrate Cartographer into an existing ROS system.

Continue reading

OpenMV Cam ROS Node Development

This post describes the development of a ROS node that can be used to process images from an OpenMV Cam in a ROS-based robotics system.

The goal of the OpenMV Cam project was to create a low-cost, extensible, Python-powered machine vision module that would become the "Arduino of Machine Vision". The OpenMV team wanted to make machine vision algorithms more approachable to makers and hobbyists. The camera comes with software for common computer vision tasks like tracking colors and detecting faces, and it can also drive I/O pins to control physical objects in the real world.
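
Structurally, the ROS node wraps frame acquisition in a standard publisher loop. A stripped-down sketch of that shape is below; the get_frame() helper is a hypothetical stand-in for reading frames from the OpenMV Cam over its serial interface, which the post covers in detail:

```python
#!/usr/bin/env python
# Skeleton of a ROS node that publishes camera frames as sensor_msgs/Image.
import rospy
from sensor_msgs.msg import Image
from cv_bridge import CvBridge

def get_frame():
    # Hypothetical placeholder: grab one BGR frame (numpy array)
    # from the OpenMV Cam over its serial interface.
    raise NotImplementedError

def main():
    rospy.init_node('openmv_cam')
    pub = rospy.Publisher('image_raw', Image, queue_size=1)
    bridge = CvBridge()
    rate = rospy.Rate(30)  # target publish rate in Hz
    while not rospy.is_shutdown():
        frame = get_frame()
        pub.publish(bridge.cv2_to_imgmsg(frame, encoding='bgr8'))
        rate.sleep()

if __name__ == '__main__':
    main()
```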

OpenMV Cam (image from openmv.io)

Continue reading

OS-1 ROS Package Deployment with Docker

Docker is an open platform for developing, shipping, and running applications in containers, enabling you to separate your applications from your infrastructure. Docker provides the ability to package and run an application in a loosely isolated environment called a container.

The container provides a standardized environment for development. This means that we can distribute our ROS packages along with a known environment to ensure that they run reliably on other platforms.

A Docker image is a read-only template with instructions for creating a Docker container. Images are typically built on top of a base image and customized further. An image is created from a Dockerfile, which contains the steps necessary to create and run the image.

A container is a runnable instance of an image. A container is defined by its image as well as any configuration options you provide to it when you create or start it.
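
To make the image/container distinction concrete, a minimal Dockerfile for a ROS Melodic package might look like the following sketch. The package name and build steps are placeholders, not the exact ones from the post:

```dockerfile
# Illustrative Dockerfile: build a catkin package on a ROS base image.
# Package name and paths are placeholders.
FROM ros:melodic-ros-base

# Copy the package source into a catkin workspace and build it.
COPY . /catkin_ws/src/ouster_example
RUN /bin/bash -c "source /opt/ros/melodic/setup.bash && \
    cd /catkin_ws && catkin_make"

CMD ["/bin/bash"]
```

Running `docker build -t ouster-ros .` turns the Dockerfile into an image; `docker run -it ouster-ros` then starts a container, i.e. a runnable instance of that image.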

Continue reading

Ouster ROS Package Continuous Integration with Travis-CI

This post details the process of integrating the Travis CI tool with the Ouster ROS package. Continuous Integration (CI) is a software development practice where developers frequently integrate code into a shared repository. Each integration can then be verified by an automated build and automated tests. This automated testing allows you to quickly detect errors and locate them more easily. CI has become a best practice for software development and is guided by a set of key principles, including revision control, build automation, and automated testing.

CI also supports the concept of Continuous Delivery, which is another best practice. The goal is to continually keep your applications in a deployable state and even automatically push your application into production after each successful update. The post "OS-1 ROS Package Deployment with Docker" details the steps to integrate a ROS package on GitHub with Docker Hub to support Continuous Delivery.

As a continuous integration platform, Travis CI supports your development process by automatically building and testing code changes, providing immediate feedback on the success of the change. Travis CI can also automate other parts of your development process by managing deployments and notifications.

When you run a build, Travis CI clones your GitHub repository into a brand new virtual environment and carries out a series of tasks to build and test your code. If one or more of those tasks fails, the build is considered broken. If none of the tasks fail, the build is considered passed, and Travis CI can deploy your code to a web server or application host.
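
The build and test tasks are defined in a .travis.yml file at the root of the repository. Below is a minimal sketch of the idea, delegating the build to a Docker image as in the previous post; the image name and script are placeholders, not the post's actual configuration:

```yaml
# Illustrative .travis.yml: build the ROS package inside Docker.
dist: xenial
language: generic
services:
  - docker
script:
  - docker build -t ouster-ros-ci .
```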

Continue reading

Ouster OS-1 ROS Gazebo Simulation in MCity and Citysim

In this post, we will adapt OSRF’s car_demo and citysim packages to simulate a Toyota Prius driving around a simulated city environment with an OS-1-64 lidar sensor.

During ROSCon 2017, Ian Chen and Carlos Aguero gave a presentation entitled, “Vehicle and city simulation with Gazebo and ROS” (slides here). They developed a URDF model of the Prius configured with various sensors operating on both MCity and Sonoma Raceway in Gazebo. This was in support of the Toyota Prius Challenge.

This was followed up by a blog post on the OSRF website publicizing the work and making the car_demo package available.

Following that release, a blog post was published describing citysim. This is a more detailed simulation world consisting of pedestrians, other vehicles, and multiple city blocks. This simulation also leveraged the Prius vehicle model. A quick intro video was developed:

Continue reading

Simulating an Ouster OS-1 lidar Sensor in ROS Gazebo and RViz

In this post, we will model and simulate an Ouster OS-1-64 in Gazebo, the simulator commonly used with ROS. Simulating the sensor in Gazebo allows users to experiment with the OS-1 sensor without needing to purchase the physical unit. Users can determine the ideal placement and orientation of the sensor for their specific application and evaluate the sensor's performance when integrated into their robotics software stack. This ensures a smoother integration process when the user purchases an OS-1 and begins using it in their real-world applications. This post will cover the following topics:

  • Developing an accurate URDF model of the sensor
  • Rendering the sensor model in RViz and Gazebo
  • Using Gazebo Laser plugins
  • Publishing PointCloud2 messages with the correct structure

This work assumes the user is running Ubuntu 18.04 with ROS Melodic and Gazebo 9 installed.
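
For the last item in the list above, the key is assembling a sensor_msgs/PointCloud2 whose fields match what downstream consumers expect. Here is a minimal sketch using the point_cloud2 helpers shipped with ROS; the field layout and frame name are illustrative:

```python
# Sketch: publish a PointCloud2 with x, y, z, and intensity fields.
import rospy
from std_msgs.msg import Header
from sensor_msgs.msg import PointCloud2, PointField
from sensor_msgs import point_cloud2

# Field layout: four contiguous 32-bit floats per point.
fields = [
    PointField('x', 0, PointField.FLOAT32, 1),
    PointField('y', 4, PointField.FLOAT32, 1),
    PointField('z', 8, PointField.FLOAT32, 1),
    PointField('intensity', 12, PointField.FLOAT32, 1),
]

rospy.init_node('os1_cloud_sketch')
pub = rospy.Publisher('points', PointCloud2, queue_size=1)

header = Header(frame_id='os1_lidar')  # illustrative frame name
header.stamp = rospy.Time.now()
points = [(1.0, 2.0, 0.5, 100.0)]      # one dummy (x, y, z, intensity) point
pub.publish(point_cloud2.create_cloud(header, fields, points))
```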

Ouster OS-1 Overview

The OS-1-64 is a multi-beam flash lidar developed by Ouster. The sensor comes in 16- and 64-laser versions. It's capable of running at 10 or 20 Hz and covers a full 360° in each scan. For horizontal resolution, the sensor supports 512, 1024, or 2048 operating modes. The sensor has a 1 m minimum range and a 150 m maximum range. Finally, the sensor has an onboard IMU. For more detailed specifications of the sensor, refer to the OS-1 Product Page.

Continue reading