System Overview#

The rover’s autonomy system is built on the ROS navigation stack, which takes in data from several sensor streams, odometry, and a goal pose, and produces the movement commands we use to drive the rover. Given the correct data (LIDAR, IMU, GPS, odometry), it handles many of the tricky tasks for us, such as obstacle avoidance and path planning.

../../../_images/overview_tf.png

Coordinate System#

ENU: We use the East North Up (ENU) coordinate system to define our map of the world. In ENU, the axes map onto XYZ in order: “east” is X, “north” is Y, and “up” is Z.

../../../_images/enu.png

UTM: Stands for Universal Transverse Mercator, a map projection system for assigning coordinates to locations on the surface of the Earth. It divides the world into sixty north-south zones, each 6° of longitude wide. Each zone is segmented into 20 latitude bands, and within each zone and band, coordinates are measured as northings and eastings in meters. For more info: USGS, Wikipedia
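
As a concrete example of the UTM convention, here is a minimal sketch of converting a latitude/longitude fix into UTM easting/northing. It uses the third-party Python “utm” package purely for illustration (an assumption, not part of the rover stack); on the rover, the “cmr utm” macro described later on this page performs this conversion:

    # Minimal sketch: convert a GPS fix to UTM easting/northing (meters).
    # Assumes the third-party "utm" package (pip install utm); the rover
    # itself uses the "cmr utm" macro for this conversion.
    import utm

    lat, lon = 42.4534, -76.4735  # example latitude/longitude fix
    easting, northing, zone_number, zone_letter = utm.from_latlon(lat, lon)
    print("UTM zone %d%s: easting = %.1f m, northing = %.1f m"
          % (zone_number, zone_letter, easting, northing))

In ENU-aligned frames, the easting is used as X and the northing as Y (see the utm frame and move_base sections below).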

The rover’s coordinate system is a standard right-handed system: the positive X-axis points in the forward direction, the positive Z-axis points up, and the positive Y-axis therefore points to the left, such that \(X \times Y = Z\).
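
As a quick sanity check of the right-handed convention, the cross product of the X and Y unit vectors must give the Z unit vector. A minimal sketch using numpy (an illustrative assumption, not a dependency of the stack):

    # Right-handed check: X x Y = Z.
    import numpy as np

    x = np.array([1.0, 0.0, 0.0])  # forward (base_link) / east (map)
    y = np.array([0.0, 1.0, 0.0])  # left (base_link) / north (map)
    print(np.cross(x, y))          # [0. 0. 1.] -> up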

Frames of Reference#

To represent the rover’s location, we use four important frames of reference in the autonomy system (a sketch of querying them with tf follows this list):

  1. base_link

    The rover frame, whose origin is located at the center of the rover’s bounding box, with the positive x-axis (red) pointing forward, the positive y-axis (green) pointing left, and the positive z-axis (blue) pointing directly up relative to the rover. In terms of the sensor frames, this frame should be aligned with the IMU, GPS, and LIDAR frames in RViz, as all three devices are mounted and set up to align directly with the rover.

  2. odom

    A frame generated using all sensors except for GPS. Assuming this frame is fixed, the transform between odom and base_link represents where the rover’s frame is relative to where it started based on all sensor data except for GPS.

  3. map

    This frame is the fixed frame generated using all sensors, including GPS. The transform between this frame and base_link represents where the rover is relative to some external map, fusing odometry, IMU, and GPS data. The transform between map and odom mostly corrects drift in the odometry and IMU data, so the two frames should be aligned with each other. The x-axis points east, the y-axis points north, and the z-axis points up.

  4. utm

    This frame’s origin is at the origin of the current UTM grid zone (so you might not easily see it in RViz). The transform between this frame and the “odom” and “map” frames allows us to determine where GPS coordinates, converted to UTM coordinates via “cmr utm”, lie in relation to the rover.
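
To see how these frames fit together at runtime, here is a minimal sketch of querying the transforms described above with the standard tf listener. It is an illustrative standalone script (the node name is hypothetical), not one of the rover’s nodes:

    #!/usr/bin/env python
    # Minimal sketch: look up the frames described above using the tf API.
    import rospy
    import tf

    rospy.init_node("frame_inspector")  # hypothetical node name
    listener = tf.TransformListener()
    rate = rospy.Rate(1.0)
    while not rospy.is_shutdown():
        try:
            # Rover pose relative to the fixed map frame (uses GPS).
            map_t, map_q = listener.lookupTransform("map", "base_link", rospy.Time(0))
            # Rover pose relative to where it started, without GPS.
            odom_t, odom_q = listener.lookupTransform("odom", "base_link", rospy.Time(0))
            rospy.loginfo("map->base_link: %s  odom->base_link: %s", str(map_t), str(odom_t))
        except (tf.LookupException, tf.ConnectivityException, tf.ExtrapolationException):
            rospy.logwarn("Transforms not available yet")
        rate.sleep()

The difference between the two lookups is the map->odom correction described in the map entry above.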

Main Nodes#

The autonomy system consists of a set of nodes launched by cmr_nav/launch/cmr_nav.launch. The other launch files in that directory are for testing only. Do not use them.

  1. rplidarNode

    The LIDAR node, which publishes the LIDAR data as an array of range readings around the sensor for the path planner. A finite entry represents an obstruction detected by the LIDAR at a specific angle, while “inf” means the LIDAR detected nothing at that angle. Along with the “bounding box”, which defines which readings correspond to the rover itself, this information is passed to the costmap to update the path planner’s map.

  2. static_transform_publisher
    There are a bunch of these:
    • /nav/base_link_to_frame_gps

    • /nav/base_link_to_frame_imu

    • /nav/base_link_to_lidar_link

    • /nav/lidar_link_to_laser

    They broadcast transforms from base_link (the rover) to the frames of all the sensors on the rover. With the exception of the LIDAR, all of these transforms are the identity transform. For example, if the orientation vector of the rover points off to the side instead of forwards, you could adjust these transforms.

    The format is: static_transform_publisher x y z qx qy qz qw frame_id child_frame_id period_in_ms

  3. attitude_ekf

    The attitude estimation node. It runs a Kalman filter on the magnetometer, accelerometer, and gyroscope data to obtain an accurate attitude estimate. It reads the IMU data from /imu/reading, publishes the estimated attitude to /imu/filtered, and publishes a geometry_msgs/PoseStamped with the current attitude to /nav/attitude_ekf/pose. You can display this in RViz pretty easily; use the “browse by topic” tab. You might need to change the global frame to base_link if you want the attitude arrow to be centered.

  4. gps_frame_id.py

    Injects a frame ID into the GPS messages (from /fix) and publishes them to /frame_fix.

  5. odom_filter.py

    Reads messages from the microcontroller odometry (/odom/reading) and attitude estimates from /imu/filtered, and fuses them into a new odometry message published to /odom/filtered. The new odometry message is the displacement reported by the wheels, but in the direction reported by the IMU. It compares the heading reported by the odometry to the direction of the position change reported by the odometry to determine whether the rover drove forwards or backwards, and it then uses either the IMU heading or the opposite direction accordingly (a sketch of this logic follows this list).

  6. compass_test.py

    Publishes the heading relative to true north to /imu/heading. This is for debugging purposes. If these values look strange, try calibrating the compass.

  7. ekf_localization_imu

    Fuses /imu/filtered and /odom/filtered to produce a continuous pose estimate. Publishes this pose to /nav/odometry/imu as a nav_msgs/Odometry message. This is the key topic for local planning. Also publishes the odom->base_link transform.

  8. navsat_transform_node

    Turns GPS data into rectangular coordinates. Subscribes to /nav/odometry/full (the output of the next node, below), /frame_fix (the GPS readings with a frame ID), and /imu/filtered. Publishes the GPS odometry offset at /nav/odometry/utm_gps.

    If you don’t see all the transforms, make sure this topic is being published. The node does not output to the screen because it prints too much garbage; if you are having trouble, set output="screen" in the launch file. The most common reason this topic is not being published is that there is no GPS data at /gps/fix.

    Important parameter: yaw_offset. This lets you rotate the IMU data. It is currently set to pi/2 (90 degrees) because the IMU node reports north as 0 degrees, but all the navigation nodes expect the ENU convention (0 degrees pointing east).

    This node also publishes a transform from the odom frame to the UTM frame, which allows the robot to navigate to GPS goals: if you convert GPS coordinates to UTM grid coordinates and ask the rover to move to those coordinates in the UTM frame, it will drive to the GPS location. Finally, this node outputs filtered GPS coordinates. The raw GPS data sometimes has discrete jumps, so this node publishes a smoothed version at /nav/gps/filtered; these values always converge to the raw GPS coordinates (from /gps/fix), but are smoother.

  9. ekf_localization_gps

    The final localization node. It fuses all the sensor data (GPS, IMU, and odometry) to determine an absolute pose, and it publishes the very important map->base_link transform. This is the best pose estimate for the rover, but it is not continuous because of discrete jumps every time a GPS fix is received. It publishes to /nav/odometry/full. Note that the node above (navsat_transform_node) uses this topic.

  10. move_base

    The path planner that lets you actually move the rover. You can move the rover by calling the /nav/move_base action. For some reason, you must specify the type: ac /nav/move_base move_base_msgs/MoveBaseAction. Publishes the output velocity to /nav/move_base/cmd_vel, which is remapped to /dd/heading. You can easily send a goal using the cmr move macro. To send a goal relative to the rover (for example, 1 m forwards), specify the base_link frame ID and send the desired offset. To send a GPS goal, specify the utm frame ID and send the UTM coordinates. A sketch of both kinds of goals is shown after this list.
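
The forwards/backwards logic described for odom_filter.py above can be summarized with the following sketch. It is a simplified illustration of the idea, not the actual implementation: take the displacement magnitude from the wheel odometry, decide the sign by comparing the odometry heading with the direction of the reported position change, and re-apply the displacement along the IMU heading.

    # Minimal sketch of the odom_filter.py fusion idea (not the actual code).
    import math

    def fuse_step(dx_odom, dy_odom, yaw_odom, yaw_imu):
        """Return the (dx, dy) step re-projected along the IMU heading."""
        dist = math.hypot(dx_odom, dy_odom)
        if dist == 0.0:
            return 0.0, 0.0
        # Direction of the position change reported by the wheel odometry.
        travel_dir = math.atan2(dy_odom, dx_odom)
        # If the travel direction disagrees with the odometry heading by more
        # than 90 degrees, the rover was driving backwards.
        diff = math.atan2(math.sin(travel_dir - yaw_odom),
                          math.cos(travel_dir - yaw_odom))
        sign = 1.0 if abs(diff) <= math.pi / 2 else -1.0
        # Apply the displacement along (or against) the IMU heading.
        return sign * dist * math.cos(yaw_imu), sign * dist * math.sin(yaw_imu)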
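
To illustrate the two kinds of move_base goals described above, here is a minimal sketch of sending them with a plain actionlib client. The action name and message type come from the description above; the node name, goal values, and the use of the third-party “utm” package are illustrative assumptions, and on the rover the cmr move macro does this for you:

    #!/usr/bin/env python
    # Minimal sketch: send goals to the /nav/move_base action with actionlib.
    import actionlib
    import rospy
    import utm  # third-party package, used here only to build an example GPS goal
    from move_base_msgs.msg import MoveBaseAction, MoveBaseGoal

    rospy.init_node("send_goal_example")  # hypothetical node name
    client = actionlib.SimpleActionClient("/nav/move_base", MoveBaseAction)
    client.wait_for_server()

    # Goal relative to the rover: 1 m forwards in the base_link frame.
    goal = MoveBaseGoal()
    goal.target_pose.header.frame_id = "base_link"
    goal.target_pose.header.stamp = rospy.Time.now()
    goal.target_pose.pose.position.x = 1.0
    goal.target_pose.pose.orientation.w = 1.0  # keep the current heading
    client.send_goal(goal)
    client.wait_for_result()

    # GPS goal: convert lat/lon to UTM and send the easting/northing as X/Y
    # in the utm frame.
    easting, northing, _, _ = utm.from_latlon(42.4534, -76.4735)  # example fix
    gps_goal = MoveBaseGoal()
    gps_goal.target_pose.header.frame_id = "utm"
    gps_goal.target_pose.header.stamp = rospy.Time.now()
    gps_goal.target_pose.pose.position.x = easting
    gps_goal.target_pose.pose.position.y = northing
    gps_goal.target_pose.pose.orientation.w = 1.0
    client.send_goal(gps_goal)
    client.wait_for_result()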

Costmaps#

We use two costmaps, found in cmr_nav/nav_conf, to store information about obstacles in the world: a local costmap and a global costmap. The former is used for local planning and obstacle avoidance, while the latter is used for global planning, i.e., creating long-term plans over the entire environment to reach the desired goal pose. More information here.