2D Mapping & Navigation

 

Once the quadrotor can reliably and stably navigate the environment based on a series of desired waypoints, the quadrotor system can be used to sense and comprehend its surrounding environment. A map is a representation of the environment in which the quadrotor is operating. To operate in the map, the quadrotor needs to know its position in the map coordinate frame. Ultimately, the goal is to develop a system that allows the quadrotor to autonomously reach a desired goal state in the map. If a path exists from the initial state to the goal state, it can be expressed as a series of waypoints or actions.

A perfect estimate of the quadrotor’s pose, needed for map creation, is typically not available, especially in indoor environments. Simultaneous Localization and Mapping (SLAM) is the process of building and updating a map of the environment while simultaneously estimating the system’s location in that map. This map can be saved and used later for localization and navigation without the need to rebuild it. Typically, laser scanners are used to create two-dimensional maps from range measurements. However, these are often too large, too heavy, or too power intensive for use on small quadrotors. Instead, an RGB-D sensor is used as the primary means of measuring the environment. This sensor, a Microsoft Kinect, contains an RGB camera as well as an infrared depth sensor. The Kinect outputs point clouds which can be used by SLAM algorithms to build the map of the environment. It is also possible to use these point clouds to emulate a 2D laser scan output.

In an ideal world, the map accurately reflects the environment, the environment is static, and the localized estimate of the quadrotor’s pose in that map is accurate. However, in the real world, the environment is dynamic and sensor measurements are noisy. Therefore, obstacle detection and avoidance systems are developed as methods of continually updating the map. The ROS Navigation Stack combines all of these requirements into a complete sense-plan-act system.

The Navigation Stack

The ROS Navigation Stack is a 2D navigation stack that takes in information from odometry, sensor streams, and a goal pose and outputs safe velocity commands that are sent to a mobile base. The Navigation Stack relies on the map_server package for the 2D map, the amcl package for localization in that map, sensor and odometry messages from the quadrotor, and the move_base package to fuse all of these messages in order to output a desired velocity command. This integration is visualized below.

ROS Navigation Stack System Architecture Overview

 

The Navigation Stack needs to be configured for the shape and dynamics of a robot to perform at a high level, and there are several prerequisites for its use.

  • The navigation stack can only handle differential-drive and holonomic wheeled robots. It assumes that the mobile base is controlled by sending desired velocity commands in the form of x velocity, y velocity, and theta (yaw) velocity.
  • The shape of the robot must be either a square or a rectangle. The Navigation Stack was developed on a square robot, so its performance will be best on robots that are nearly square or circular.
  • The robot must have a tf transform tree in place so that it publishes the relationships between the positions of all of its joints and sensors (a minimal example is sketched after this list).
  • The navigation stack requires a planar laser mounted somewhere on the mobile base. This laser is used for map building and localization. The robot must publish its sensor data using the correct ROS message types.
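As an illustration of the tf requirement, a fixed sensor mount can be published with tf’s static_transform_publisher. This is a minimal sketch; the frame names and offsets are assumptions, not values taken from this project’s launch files.

    <!-- Hypothetical static transform from the quadrotor base to the Kinect frame -->
    <!-- args: x y z yaw pitch roll parent_frame child_frame period_in_ms -->
    <node pkg="tf" type="static_transform_publisher" name="base_to_camera_tf"
          args="0.1 0 0 0 0 0 base_link camera_depth_frame 100"/>
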
Constructing a Mapping Environment

For simulating this application, the original 3DR Kit C Gazebo model was modified to include a Kinect sensor mounted on the front arm of the quadrotor. Additionally, the WillowGarage world was loaded to provide an indoor environment suitable for mapping. The WillowGarage world and the RGB output of the Kinect sensor can be seen in the Gazebo screenshot below.

ROS Gazebo WillowGarage Simulation Environment

 

Developing the 2D Map

In order to make an initial map of the environment, a series of waypoints was recorded. These waypoints maneuvered the quadrotor from its initial position outside the office complex, in and around several rooms, and back to the initial position. This resulted in a repeatable path with enough sensor coverage to support the development of the environment map.

In order to create the occupancy grid map to be used for future path planning, the Kinect sensor output was modified to appear like a laser scan output. The Kinect sensor provides depth information as a 3-dimensional point cloud. However, for 2D mapping, all that is necessary is a single horizontal slice of depth measurements at the flight altitude. This reduces computation costs and makes navigating basic environments much simpler. In order to do this, the depthimage_to_laserscan package was used. This package reduces computation even further by operating on the depth image directly instead of the complete point cloud. The node takes the Kinect sensor’s depth image as input and needs the Kinect’s coordinate frame to correctly output the laser scan data. It outputs a LaserScan message that can be used to build the 2D occupancy grid map.

To launch this package in the WillowGarage world with the appropriate control nodes, the quad_slam_laser_map.launch file was used. Several variables must be set for this implementation of the depthimage_to_laserscan node: the coordinate frame of the sensor, the input depth image topic name, and the desired output laser scan topic name.
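A minimal sketch of that node entry is shown below; the topic and frame names are illustrative assumptions rather than the exact values in quad_slam_laser_map.launch.

    <!-- Sketch of the depthimage_to_laserscan node; topic and frame names are assumptions -->
    <node pkg="depthimage_to_laserscan" type="depthimage_to_laserscan" name="depthimage_to_laserscan">
      <!-- Input depth image from the simulated Kinect -->
      <remap from="image" to="/camera/depth/image_raw"/>
      <!-- Output 2D laser scan topic consumed by the SLAM node -->
      <remap from="scan" to="/scan"/>
      <!-- Coordinate frame in which the generated scan is published -->
      <param name="output_frame_id" value="camera_depth_frame"/>
    </node>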

The image below depicts a sample laser scan measurement overlaid on top of the point cloud that it was sampled from.

ROS PointCloud to LaserScan Kinect Simulation

 

Next, OpenSLAM’s GMapping algorithm was used to create the 2D map. The gmapping package contains a ROS wrapper for the GMapping algorithm that provides laser-based SLAM. The node subscribes to the laser scan message and outputs a map in the form of an OccupancyGrid message. In the launch file, the laser scan input is remapped to the output of the depthimage_to_laserscan node, and the odometry frame value is explicitly set.
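A minimal sketch of the slam_gmapping node entry is shown below; the topic and frame names are assumptions chosen to be consistent with the rest of this walkthrough.

    <!-- Sketch of the slam_gmapping node; topic and frame names are assumptions -->
    <node pkg="gmapping" type="slam_gmapping" name="slam_gmapping" output="screen">
      <!-- Remap the expected scan topic to the depthimage_to_laserscan output -->
      <remap from="scan" to="/scan"/>
      <!-- Odometry frame published by the simulation, set explicitly -->
      <param name="odom_frame" value="world"/>
      <param name="base_frame" value="base_link"/>
      <param name="map_frame" value="map"/>
    </node>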

Once the environment has been sufficiently mapped, the map can be saved for later use. Specifically, the map can be used to support localization and future path planning algorithms. The map_saver utility from the map_server package saves dynamically generated maps to a pair of files: a YAML file that describes the map meta-data and names the image file, and the image file itself, which encodes the occupancy data. The map is saved using the example command below.
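The map_saver invocation is a standard one-liner; the output file name below is an illustrative choice.

    # Save the map currently published on /map to willowgarage_map.pgm / .yaml
    rosrun map_server map_saver -f willowgarage_map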

The final map of the WillowGarage environment is depicted below.

ROS RVIZ Quadrotor 2D Mapping Simulation

 

2D Localization Utilizing the Map

In order to localize in the environment, a previously recorded map is loaded with the map_server node. This node reads a map from disk and offers it via a ROS service. The current implementation of the map_server converts color values in the map image data into ternary occupancy values: free (0), occupied (100), and unknown (-1). The map is available on the map topic as an OccupancyGrid message. This node is initialized in the quad_2dnav.launch file and loads the map file previously generated for the WillowGarage environment.
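A one-line sketch of that node entry is shown below; the package-relative map path is an assumption about the project layout.

    <!-- Sketch of the map_server node; the map path is an assumed layout -->
    <node pkg="map_server" type="map_server" name="map_server"
          args="$(find quad_2dnav)/maps/willowgarage_map.yaml"/>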

The amcl (adaptive Monte Carlo localization) package is a probabilistic localization system for a robot moving in 2D. This approach uses a particle filter to track the pose of a robot against a known map. As currently implemented, the node works only with laser scans and laser maps. The amcl node takes in a laser-based map, laser scans, and transform messages, and outputs pose estimates. The amcl node has many configuration options that can affect the performance of localization; a description of these parameters is available in the amcl package documentation. These parameters are set in the amcl_quad.launch file. Specifically, the input laser scan topic name is set, as well as the coordinate frames for the quadrotor base, the odometry frame, and the global map frame.
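A minimal sketch of the amcl node entry is shown below; the topic and frame names, and the optional initial pose values, are assumptions consistent with the rest of this walkthrough.

    <!-- Sketch of the amcl node; topic and frame names are assumptions -->
    <node pkg="amcl" type="amcl" name="amcl" output="screen">
      <!-- Laser scan input emulated from the Kinect depth image -->
      <remap from="scan" to="/scan"/>
      <!-- Coordinate frames for the quadrotor base, odometry, and global map -->
      <param name="base_frame_id" value="base_link"/>
      <param name="odom_frame_id" value="world"/>
      <param name="global_frame_id" value="map"/>
      <!-- Optional: seed the particle filter with an initial pose estimate -->
      <param name="initial_pose_x" value="0.0"/>
      <param name="initial_pose_y" value="0.0"/>
      <param name="initial_pose_a" value="0.0"/>
    </node>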

When the node is created, it initializes its particle filter according to the parameters provided. If no parameters are set, the initial filter state will be a moderately sized particle cloud centered around the origin. It is possible to set the initial position in RVIZ using the 2D Pose Estimate button. Alternatively, the initial x, y, and yaw can be set in the launch file (via the initial_pose_x, initial_pose_y, and initial_pose_a parameters shown above). The particle cloud can be viewed as a PoseArray in RVIZ. The particle cloud (a series of red arrows) will converge around the current localized pose of the quadrotor. As the quadrotor maneuvers in the environment, the amcl node uses the laser scan data and the odometry data to continually update the particle cloud with the predicted location of the quadrotor. An example of the amcl particle cloud visualization is depicted below.

ROS RVIZ AMCL Package Visualization

 

2D Path Planning

The move_base node is a major component of the navigation stack that provides a ROS interface for configuring, running, and interacting with the navigation stack on a robot. The move_base node links together a global and local planner to accomplish its global navigation task. The move_base node also maintains two costmaps, one for the global planner and one for the local planner, that are used to accomplish navigation tasks. Running the move_base node on a robot that is properly configured results in a robot that will attempt to achieve a goal pose with its base to within a user-specified tolerance. The move_base node may optionally perform recovery behaviors when the robot perceives itself as stuck. These recovery behaviors can be configured using the recovery_behaviors parameter, and disabled using the recovery_behavior_enabled parameter. For more information on configuration of the move_base node, and the navigation stack as a whole, please see the navigation setup and configuration tutorial.

In the quad_2dnav.launch file, the odometry topic name is modified to fit the simulation environment and the planning configuration files are loaded.
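A sketch of that move_base entry is shown below; the odometry topic name and configuration file paths are assumptions about the project layout.

    <!-- Sketch of the move_base node; topic names and file paths are assumptions -->
    <node pkg="move_base" type="move_base" name="move_base" output="screen">
      <!-- Odometry topic published by the quadrotor simulation -->
      <remap from="odom" to="/quad/odometry"/>
      <!-- Common costmap options are loaded into both costmap namespaces -->
      <rosparam file="$(find quad_2dnav)/config/costmap_common_params.yaml" command="load" ns="global_costmap"/>
      <rosparam file="$(find quad_2dnav)/config/costmap_common_params.yaml" command="load" ns="local_costmap"/>
      <!-- Costmap- and planner-specific options -->
      <rosparam file="$(find quad_2dnav)/config/global_costmap_params.yaml" command="load"/>
      <rosparam file="$(find quad_2dnav)/config/local_costmap_params.yaml" command="load"/>
      <rosparam file="$(find quad_2dnav)/config/base_local_planner_params.yaml" command="load"/>
    </node>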

The navigation stack stores information about obstacles in the world in two costmaps. The global costmap is used for long term planning over the entire environment whereas the local costmap is used for short term planning and obstacle avoidance. One configuration file contains options common to both costmaps while other configuration files contain options specific to the global or local costmap.

The costmap_common_params.yaml file specifies the thresholds on obstacle information put into the costmap, the footprint of the robot, the inflation radius for the costmap, and the list of sensors that will pass information to the costmap. The global_costmap_params.yaml file defines the costmap coordinate frame, the robot coordinate frame, the update frequency, and the availability of a map served by the map_server. The local_costmap_params.yaml file includes similar coordinate frame and update options. For documentation on the full range of options, please see the costmap_2d documentation.
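The fragment below is an illustrative sketch of costmap_common_params.yaml; all values and the sensor source name are assumptions rather than the project’s actual settings.

    # Sketch of costmap_common_params.yaml; all values are illustrative assumptions
    obstacle_range: 2.5        # max range (m) at which obstacles enter the costmap
    raytrace_range: 3.0        # max range (m) used to clear free space
    footprint: [[-0.3, -0.3], [-0.3, 0.3], [0.3, 0.3], [0.3, -0.3]]
    inflation_radius: 0.55     # buffer (m) inflated around lethal obstacles
    observation_sources: laser_scan_sensor
    laser_scan_sensor: {sensor_frame: camera_depth_frame, data_type: LaserScan, topic: scan, marking: true, clearing: true}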

Lastly, a configuration file for the base_local_planner is loaded. The base_local_planner is responsible for computing velocity commands to send to the quadrotor waypoint node. The base_local_planner_params.yaml file contains options to configure the base_local_planner to make it compatible with the quadrotor’s control systems. Specifically, it is necessary to set the holonomic flag so that the local planner can compute the full range of appropriate velocity commands. For documentation on the full range of options, please see the base_local_planner documentation.
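The fragment below sketches base_local_planner_params.yaml; the velocity and acceleration limits are illustrative values, with the holonomic flag enabled as described above.

    # Sketch of base_local_planner_params.yaml; limit values are illustrative assumptions
    TrajectoryPlannerROS:
      holonomic_robot: true        # allow lateral (strafing) velocity commands for the quadrotor
      max_vel_x: 0.5               # m/s
      min_vel_x: 0.1               # m/s
      max_vel_theta: 1.0           # rad/s
      min_in_place_vel_theta: 0.4  # rad/s
      acc_lim_x: 1.0               # m/s^2
      acc_lim_y: 1.0               # m/s^2
      acc_lim_theta: 1.5           # rad/s^2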

RVIZ is a good tool for visualizing all of the data processed by the navigation stack. It was previously described how RVIZ can be used to set the initial robot pose for the amcl node as well as to visualize the particle cloud pose estimate generated by the amcl node. Additionally, the navigation goal can be set using RVIZ, which results in a local and a global plan that can be visualized. Both the local and global costmaps can also be visualized to depict obstacles identified in the environment.

ROS RVIZ Quadrotor 2D Path Planning Simulation


In the screenshot above, the quadrotor’s footprint is the red box and the global goal is the large red arrow. The global path is the green line from the quadrotor’s current position to the global goal, and the blue line is the local path. The previously recorded map is shown as well. On top of the map is the global costmap, identifying all the walls as obstacles. The local costmap, based on the current sensor readings, is displayed slightly brighter in the immediate vicinity of the quadrotor.

A video summarizing this section as well as displaying ROS and Gazebo footage is available below.

 

 

20 thoughts on “2D Mapping & Navigation”

  1. Good job! But I have a problem using your demos. I cannot find the file “mav_msgs/CommandTrajectory.h”. Can you give me some advice? Thank you!

  2. Wow, thanks!!!!
    I have a question about this. Generally, I think most people fly drones or RC aircraft outdoors, in places like parks.

    Truth be told, I have trouble using expensive lidar, so I chose GPS, IMU, and an encoder with ROS.

    So in reality, those kinds of things, including drones or UGVs, are run by GPS (+ IMU + encoder).

    LiDAR – only for researchers and engineers, I guess.

    Is there any package where we can set several waypoints and then move? (GPS based, GPS guided)

    1. Brian,

      Check out the section “Sending Waypoints” on this page:
      https://www.wilselby.com/research/ros-integration/control-system-implementation/

      This describes implementing a “waypoint mission” mode that reads in a waypoint file containing a series of waypoints and the times for them to be sent to the quadrotor. If enabled, the waypoint node will send each waypoint at the defined time. This allows you to develop and implement autonomous waypoint missions to navigate the quadrotor around the Gazebo world relying only on a GPS measurement, much like the GPS navigation modes used on quadrotors in the real world.

      The simulation uses perfect GPS measurements but you could add noise to the GPS measurements to make this more realistic as well.

      -Wil

  3. So, I’m looking to simulate simple quadrotor path-planning in a 3-D environment, where the assumption is that the sensors on the robot provide an accurate description of the environment, and it does not matter how. I want to compare the different path-planning algorithms available in OMPL. However, I have never done anything like this before, and I have no idea where to begin. Do you have any suggestions?

    Any help would be greatly appreciated.

    1. Austin,

      Check out my page on 3D Mapping and Navigation

      The MoveIt! architecture will allow you to choose from various path planning algorithms in OMPL and implement them in the simulation. Specifically, in the “Context” tab you can select the algorithm to use.

  4. Thank you for all of these tutorials. I want to mention that there is a problem on 3D mapping page. If you didn’t remove the page, could you refresh the document?

    1. Aykut,
      I apologize for the inconvenience; I’m not sure what happened. I just navigated to the page and it appeared to load properly. Let me know if there is a bad link somewhere or if you are still having issues.

  5. Wonderful tutorial, it has given me a clearer picture on how ROS works. But I am not able to comprehend on how to implement the same on hardware (in a real-time environment with a UAV) and I am not sure how or where to begin?! Can you provide me some assistance?

    1. Kumar,
      I’m glad you found the information valuable. I never implemented this software in a quadrotor myself unfortunately. However, I did a similar project developing an autonomous car. In that project, I did go from ROS simulations to running the algorithms on a real vehicle. You can check those out for reference.

      For a quadrotor specifically, the process would be unique to each platform. I would look at the autopilot that comes with the quadrotor you are interested in. You will need to be able to override the RC inputs with the inputs computed by your path planning software. If the quadrotor isn’t already set-up for ROS, you could use the ROS serial package to communicate with the quadrotor.


      1. Thank you! I have taken the initial steps in implementing this on hardware. But I am getting a warning stating that the control loop missed its desired rate. Despite changing the frequency, I still get that error. Any idea where it might be going wrong?

  6. Hi.
    I’m trying to get things right with the quad2dnav package. Everything seems right, except that I can’t lift off the quadrotor. (I do not have a joystick, so I’m not using the joy node.)
    So far I have tried publishing:

    cmd_vel
    waypoint
    direct commands to rotors
    Any suggestions? What am I doing wrong?

    1. What command values are you sending over cmd_vel? You could modify the code to work with a keyboard instead of a joystick.

  7. Hi.
    I’m having trouble using the 2D navigation package: after I unpause the simulation in Gazebo, the quad falls to the ground and there is no way to make it take off again.
    Do you have any idea what I am doing wrong?
    Thanks!

  8. Hi,

    is there a way to run just this 2D navigation? I’m following your git steps to install everything, but I get some errors. I’d like to simplify and just install the components needed for this 2D navigation. Thanks.

  9. Great job.
    I have a question, please.
    I have trouble when I build a 2D map using the Kinect 360: it builds a single frame of the map and then the process stops.
    What should I do?

  10. Hello,
    I am using ROS Kinetic and I tried to apply your solution, but I encountered some problems. I get this error:
    [ WARN] [1536253878.793418015, 6.324500000]: Timed out waiting for transform from base_link to odom to become available before running costmap, tf error: canTransform: target_frame odom does not exist.. canTransform returned after 6.3245 timeout was 0.1.
