Udacity Robotics Lunar Rover challenge
I am taking part in the Robotics Software Engineer nanodegree at Udacity. Our first project was modeled after the NASA find and retrieve rover mission challenge. In this blog post I will go through the techniques I used to meet the passing requirements. The code for the project was written in Python and runs against a Unity simulator.
The project's aim was to give experience with the basic elements of robotics: perception, decision making and action. The goal was to write code for a rover that navigates autonomously in simulation and builds a map of the environment.
Requirements are as follows:
The requirement for a passing submission is to map at least 40% of the environment at 60% fidelity and locate at least one of the rock samples. Each time you launch the simulator in autonomous mode there will be 6 rock samples scattered randomly about the environment, and your rover will start at a random orientation in the middle of the map.
Using sensor data
One of the most important parts of the project is handled by perception_step(). This is where we read the sensor data (i.e. access the environment parameters) and convert it into a form that is useful to us. The main reading you get is the camera image. I had to use some camera transformations that were already described here: Transform forward camera to top view.
Finding ground, obstacles and samples
We need to set minimum and maximum RGB values that count as a sample that should be picked up. Note that the values are approximate, and you could use your own.
```python
# Threshold for identifying yellow samples
sample_low_thresh = (120, 110, 0)
sample_high_thresh = (205, 180, 70)
```
I also created a helper enum class to store information about the type of threshold:
```python
from enum import Enum

class Mode(Enum):
    GROUND = "ground"
    OBSTACLE = "obstacle"
    SAMPLE = "sample"
```
Then I modified the color_thresh() function to return a mask for a specified mode. You can see that I'm using the mode parameter to build specific thresholds depending on the type we want to detect: ground, obstacle or sample. The thresholds are gathered depending on the color range present at each pixel. This might look a bit unclean, but we have to compare quite a lot of variables.
```python
import numpy as np

def color_thresh(img, low_thresh=(0, 0, 0), high_thresh=(160, 160, 160),
                 mode=Mode.GROUND):
    color_select = np.zeros_like(img[:, :, 0])
    if mode == Mode.OBSTACLE:
        # Obstacles: all channels below the upper threshold
        final_thresh = (img[:, :, 0] < high_thresh[0]) \
                     & (img[:, :, 1] < high_thresh[1]) \
                     & (img[:, :, 2] < high_thresh[2])
    elif mode == Mode.SAMPLE:
        # Samples: each channel inside its [low, high] range
        final_thresh = np.logical_and(img[:, :, 0] >= low_thresh[0], img[:, :, 0] <= high_thresh[0]) \
                     & np.logical_and(img[:, :, 1] >= low_thresh[1], img[:, :, 1] <= high_thresh[1]) \
                     & np.logical_and(img[:, :, 2] >= low_thresh[2], img[:, :, 2] <= high_thresh[2])
    else:
        # Ground: all channels above the upper threshold
        final_thresh = (img[:, :, 0] > high_thresh[0]) \
                     & (img[:, :, 1] > high_thresh[1]) \
                     & (img[:, :, 2] > high_thresh[2])
    color_select[final_thresh] = 1
    return color_select
```
And here is a result of using the function:
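As a quick sanity check, the sample thresholding idea can be exercised directly with NumPy on a tiny synthetic image (the pixel values below are made up for illustration):

```python
import numpy as np

# One yellowish "rock" pixel and one bright ground-like pixel
img = np.zeros((2, 2, 3), dtype=np.uint8)
img[0, 0] = (150, 130, 20)   # inside the sample range
img[1, 1] = (200, 200, 200)  # too bright on the blue channel

low, high = (120, 110, 0), (205, 180, 70)
mask = np.ones(img.shape[:2], dtype=bool)
for ch in range(3):
    # A pixel counts as a sample only if every channel is in range
    mask &= (img[:, :, ch] >= low[ch]) & (img[:, :, ch] <= high[ch])
```

Only the yellowish pixel survives the per-channel range check.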
Once I have the thresholded images from the previous step, I assign them to our rover:
```python
Rover.vision_image[:, :, 0] = obstacle_thresh * 255
Rover.vision_image[:, :, 1] = sample_thresh * 255
Rover.vision_image[:, :, 2] = ground_thresh * 255
```
We need to create some rover-centric coordinates:
```python
ground_x, ground_y = rover_coords(ground_thresh)
obstacle_x, obstacle_y = rover_coords(obstacle_thresh)
sample_x, sample_y = rover_coords(sample_thresh)
```
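rover_coords() is one of the helpers from the project template. Assuming the usual convention (x points forward from the camera, y points left), a minimal version looks roughly like this:

```python
import numpy as np

def rover_coords(binary_img):
    """Convert thresholded image pixels to rover-centric coordinates."""
    ypos, xpos = binary_img.nonzero()
    # The camera sits at the bottom-center of the image, so shift and flip
    x_pixel = -(ypos - binary_img.shape[0]).astype(np.float64)
    y_pixel = -(xpos - binary_img.shape[1] / 2).astype(np.float64)
    return x_pixel, y_pixel
```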
Aaaand transform them to world coordinates (of course you need to do this for ground, obstacles and samples respectively):
```python
w_ground_x, w_ground_y = pix_to_world(
    ground_x, ground_y,
    Rover.pos[0], Rover.pos[1], Rover.yaw,
    Rover.worldmap.shape[0], map_scale)
```
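pix_to_world() is another template helper. Assuming the standard rotate–scale–translate pipeline, it can be sketched as:

```python
import numpy as np

def pix_to_world(xpix, ypix, xpos, ypos, yaw, world_size, scale):
    """Rotate rover-centric pixels by the rover yaw, scale them down,
    translate by the rover's world position, and clip to the map bounds."""
    yaw_rad = yaw * np.pi / 180
    x_rot = xpix * np.cos(yaw_rad) - ypix * np.sin(yaw_rad)
    y_rot = xpix * np.sin(yaw_rad) + ypix * np.cos(yaw_rad)
    x_world = np.clip(np.int_(x_rot / scale + xpos), 0, world_size - 1)
    y_world = np.clip(np.int_(y_rot / scale + ypos), 0, world_size - 1)
    return x_world, y_world
```

With zero yaw and unit scale this reduces to a plain translation, which makes it easy to verify by hand.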
Then we update the world map, but only if the pitch and roll values are close to 0. This increases our map fidelity by a great amount. Rover.max_pitch and Rover.max_roll are custom fields I added to the Rover state class:
```python
if Rover.pitch < Rover.max_pitch and Rover.roll < Rover.max_roll:
    Rover.worldmap[w_obstacle_y, w_obstacle_x, 0] += 1
    Rover.worldmap[w_sample_y, w_sample_x, 1] += 1
    Rover.worldmap[w_ground_y, w_ground_x, 2] += 1
```
And the last thing we need to do is compute polar coordinates for the Rover:

```python
Rover.nav_dists, Rover.nav_angles = to_polar_coords(ground_x, ground_y)
```
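to_polar_coords() is also provided by the template; it is just the Cartesian-to-polar conversion:

```python
import numpy as np

def to_polar_coords(x_pixel, y_pixel):
    """Convert rover-centric x/y pixels to distances and angles (radians)."""
    dist = np.sqrt(x_pixel ** 2 + y_pixel ** 2)
    angles = np.arctan2(y_pixel, x_pixel)
    return dist, angles
```

The angles are what drive steering later, and the distances help judge how much open terrain lies ahead.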
Decisions and taking action
Once we have our thresholded images, we can start taking actions based on the data. All the necessary steps are done in the decision_step() part of the code. This piece of code is responsible for… well, decisions.
First of all, I broke the code into separate functions so that everything is much clearer. The first thing I do is record the robot's previous position. I use the elapsed time and that position to check whether the robot is stuck. If it is stuck for too long, it rotates a bit to the right. Then, depending on the robot's mode, it performs certain tasks.
```python
import time

def decision_step(Rover):
    # Track the previous position to detect when the rover is stuck
    if Rover.p_pos is None:
        Rover.p_pos = Rover.pos
    elif Rover.p_pos != Rover.pos:
        Rover.stop_time = Rover.total_time  # we moved, reset the timer
        Rover.p_pos = Rover.pos
    if Rover.total_time - Rover.stop_time > Rover.max_time:
        # Stuck for too long: rotate in place to the right
        Rover.throttle = 0
        Rover.brake = 0
        Rover.steer = -15
        time.sleep(1)  # wait for the turn
    if Rover.nav_angles is not None:
        if Rover.mode == 'start':
            initial_setup(Rover)
        if Rover.mode == 'sample':
            recover_sample(Rover, nearest_sample)  # nearest_sample is computed elsewhere
        if Rover.mode == 'forward':
            move(Rover)  # Forward in all glory!
        if Rover.mode == 'tostop':
            stop(Rover)  # Bring the rover to a stop
        if Rover.mode == 'stop':
            find_and_go(Rover)  # Find a path to start the Rover again
    return Rover
```
When in start mode, the robot turns to a programmed direction and moves towards the wall. When it reaches it, it goes into the stop mode.
```python
def initial_setup(Rover):
    if 90 < Rover.yaw < 95:
        # Facing the desired direction: drive forward until we reach the wall
        Rover.throttle = Rover.throttle_set
        Rover.brake = 0
        Rover.steer = 0
        if len(Rover.nav_angles) < Rover.go_forward:
            Rover.mode = 'stop'
    else:
        # Turn in place towards the 90-95 degree window
        Rover.brake = 0
        Rover.throttle = 0
        if 90 > Rover.yaw or Rover.yaw >= 270:
            Rover.steer = 10
        else:
            Rover.steer = -10
```
When in stop mode, the robot enters a state where it looks for possible paths to move to, as seen in the find_and_go() function. It also checks if a sample is near, to pick it up.
```python
def find_and_go(Rover):
    if Rover.near_sample and Rover.vel == 0 and not Rover.picking_up:
        Rover.send_pickup = True
    else:
        if len(Rover.nav_angles) < Rover.go_forward:
            # Not enough navigable terrain ahead: turn in place
            Rover.throttle = 0
            Rover.brake = 0
            Rover.steer = -15
        if len(Rover.nav_angles) >= Rover.go_forward:
            # Path is clear: start driving again
            Rover.throttle = Rover.throttle_set
            Rover.brake = 0
            Rover.mode = 'forward'
```
The forward mode initializes the wall crawler. It moves next to the wall at a given offset, to account for the rough terrain near the walls.
```python
import numpy as np

def move(Rover):
    if Rover.near_sample:
        Rover.mode = 'tostop'
    if len(Rover.nav_angles) >= Rover.stop_forward:
        if Rover.vel < Rover.max_vel:
            Rover.throttle = Rover.throttle_set
        else:
            Rover.throttle = 0
        Rover.brake = 0
        Rover.p_steer = Rover.steer
        # Hug the wall: steer towards the largest angle, minus the wall offset,
        # clamped to the simulator's +/-15 degree steering range
        Rover.steer = np.clip(np.max(Rover.nav_angles * 180 / np.pi) - 30, -15, 15)
    else:
        Rover.mode = 'tostop'
```
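The wall-offset steering can be checked in isolation on a fake set of navigable angles (the values below are illustrative, not from the simulator):

```python
import numpy as np

# Hypothetical navigable-terrain angles in radians, roughly -30..+30 degrees
nav_angles = np.radians([-30, -10, 0, 15, 30])

# Steer towards the largest (left-most) angle, minus a fixed wall offset
steer = np.max(nav_angles * 180 / np.pi) - 30
# Clamp to the steering range of +/-15 degrees
steer = np.clip(steer, -15, 15)
```

With the left-most navigable angle at 30 degrees and a 30-degree offset, the rover drives straight, tracking the wall instead of the open-terrain average.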
The stop() function does exactly what its name says: it brings the rover to a halt, to be used for sample picking.
```python
def stop(Rover):
    if Rover.vel > 0.2:
        # Still moving: brake until (almost) stationary
        Rover.throttle = 0
        Rover.brake = Rover.brake_set
        Rover.steer = 0
    else:
        # Slow enough: hand over to the stop mode
        Rover.mode = 'stop'
```
I must say that this project was both fun and challenging. I didn't think that we would get a Lunar Rover as our first project, which shows how well the course is prepared. The challenge taught me not only how to work with image recognition and transformations, but also how to apply the data to take action and move a robot in simulation.