Simultaneous Localization and Mapping for robotic problems

Posted by Łukasz on 23 Oct 2020.

“SLAM is a process by which a mobile robot can build a map of an environment and at the same time use this map to deduce its location. In SLAM, both the trajectory of the platform and the location of all landmarks are estimated online without the need for any a priori knowledge of location.”

~ Durrant-Whyte and Bailey (2006).

Simultaneous Localization and Mapping tackles the problem where a mobile intelligent agent is placed in an unknown environment and its goal is to build a map of that environment, based on the knowledge it gains while exploring. This problem is a basic building block of modern robotics: an agent that can move about when left alone is the foundation of a truly autonomous robotic system. Nowadays there are many variations of the initial SLAM algorithm, and many of them have been tested both in laboratory conditions and in real-world applications. Such systems have been implemented not only in indoor settings (Chong and Kleeman (1999), Davison et al. (2004)), but also in outdoor (Folkesson and Christensen (2004)), underwater (Newman and Leonard (2003), Eustice et al. (2005)) and airborne projects (Kim and Sukkarieh (2003)). At the theoretical level SLAM can be considered solved, but according to Durrant-Whyte and Bailey (2006), substantial practical issues remain, in particular building and using more perceptually rich maps as part of the SLAM portfolio. The standard notation for a mobile robot moving through an unknown environment and taking relative observations of landmarks at time $$k$$ is summarized in the following table:

| Notation | Description |
| :--- | :--- |
| $$x_k$$ | State vector describing the robot's location and orientation at time $$k$$ |
| $$u_k$$ | Control vector applied to move the robot to state $$x_k$$ at time $$k$$ |
| $$m$$ | The set of all landmarks |
| $$m_i$$ | Location of the $$i$$th landmark |
| $$z_{ik}$$ | Observation of the $$i$$th landmark taken from the robot at time $$k$$ |
| $$X_{0:k}$$ | History of robot locations |
| $$U_{0:k}$$ | History of control inputs |
| $$Z_{0:k}$$ | History of all landmark observations |
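The notation above can be mirrored by a small bookkeeping structure. The following is an illustrative sketch, not part of any SLAM library; all names (`SlamState`, `step`, etc.) are my own:

```python
from dataclasses import dataclass, field

import numpy as np


@dataclass
class SlamState:
    """Container mirroring the notation table (names are illustrative)."""
    x_k: np.ndarray                           # robot pose [x, y, theta] at time k
    m: dict = field(default_factory=dict)     # landmark id i -> location m_i
    X_hist: list = field(default_factory=list)  # X_{0:k}: history of robot poses
    U_hist: list = field(default_factory=list)  # U_{0:k}: history of controls
    Z_hist: list = field(default_factory=list)  # Z_{0:k}: history of observations

    def step(self, u_k, z_k):
        """Record one time step: control u_k and observations z_k = {i: z_ik}."""
        self.X_hist.append(self.x_k.copy())
        self.U_hist.append(u_k)
        self.Z_hist.append(z_k)


state = SlamState(x_k=np.zeros(3))
state.step(u_k=np.array([1.0, 0.1]), z_k={0: np.array([4.2, 0.3])})
```

The point is only that the estimation problem is defined over these histories; any actual SLAM implementation would additionally maintain uncertainty (covariances or particles) over `x_k` and `m`.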

SLAM was first discussed at the 1986 IEEE Robotics and Automation Conference in San Francisco, California, with the first key scientific papers by Smith and Cheeseman (1987) and Durrant-Whyte (1988) appearing soon after. The paper fundamental to SLAM research, Smith et al. (1990), built upon work by Chatila and Laumond (1985), Ayache and Faugeras (1988) and Crowley (1989). A full, consistent formulation emerged when Leonard and Durrant-Whyte (1991) considered a mobile robot moving through unknown territory while taking relative observations of nearby landmarks. The estimates of those landmarks were necessarily correlated with each other through the common error in the estimated vehicle location, creating a joint state between the vehicle pose and every landmark position estimate. Further work on landmark filtering was done by Leonard and Durrant-Whyte (1992) and Renken (1993). The SLAM acronym was first presented by Durrant-Whyte et al. (1996), along with the realization that the combined mapping and localization problem was convergent, so that maintaining the correlations between landmarks was crucial to a good outcome of the algorithm. The convergence was further researched by multiple groups, e.g. Csorba and Durrant-Whyte (1996), Csorba (1997), Leonard and Feder (2000), Castellanos et al. (1997), Deans and Hebert (2000) and more. Further research regarding computational efficiency and data association was described by Thrun et al. (1998) and Hollerbach and Koditschek (2000) in the following years.

SLAM Problem: The essential SLAM problem, as seen in Durrant-Whyte and Bailey (2006). A simultaneous estimate of both robot and landmark locations is required, while the true locations are never known or measured directly. Observations are made between true robot and landmark locations.

Probabilistic SLAM

Simultaneous Localization and Mapping in its probabilistic form requires that at all times $$k$$ a probability distribution is calculated.

$$ P(x_k, m | Z_{0:k}, U_{0:k}, x_0) $$

This distribution describes the joint posterior density of the landmark locations and the robot state, given the recorded observations and control inputs together with the initial state of the intelligent agent. The goal is a recursive solution to the SLAM problem, building on the estimated distribution at time $$k-1$$:

$$ P(x_{k-1}, m | Z_{0:k-1}, U_{0:k-1}, x_0) $$

Following a control $$u_k$$ and an observation $$z_k$$, the posterior is computed using Bayes' theorem. This requires a state transition (motion) model and an observation model, describing the effect of the control input and the observation respectively:

Motion model:

$$P(x_k | x_{k-1}, u_k)$$

Observation model:

$$P(z_k|x_k, m)$$

This brings the calculations to a standard two-step recursion: prediction (time update) followed by correction (measurement update).
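The two-step recursion is easiest to see in its Gaussian special case, the one-dimensional Kalman filter. The sketch below is a minimal illustration, assuming an additive motion model $$x_k = x_{k-1} + u_k + w_k$$ (noise variance `q`) and an observation model $$z_k = x_k + v_k$$ (noise variance `r`); the function names are my own:

```python
def predict(mean, var, u, q):
    # Time update: propagate through the motion model P(x_k | x_{k-1}, u_k),
    # assuming x_k = x_{k-1} + u_k plus process noise with variance q.
    return mean + u, var + q


def correct(mean, var, z, r):
    # Measurement update via Bayes' theorem with the observation model
    # P(z_k | x_k), assuming z_k = x_k plus measurement noise with variance r.
    k_gain = var / (var + r)                      # Kalman gain
    return mean + k_gain * (z - mean), (1 - k_gain) * var


mean, var = 0.0, 1.0                              # prior on x_0
mean, var = predict(mean, var, u=1.0, q=0.5)      # prediction: (1.0, 1.5)
mean, var = correct(mean, var, z=1.2, r=0.5)      # correction: (1.15, 0.375)
```

The full probabilistic SLAM recursion has exactly this shape, except that the state also contains the map $$m$$ and the models are generally nonlinear, which is what EKF-SLAM and particle-filter methods address.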


FastSLAM, introduced by Montemerlo et al. (2002a) and influenced by Murphy (1999) and Thrun et al. (2000a), is another fundamental SLAM method. It was the first to use Monte Carlo sampling (particle filtering) to represent the non-Gaussian pose distribution and the nonlinear process model. It is worth mentioning that FastSLAM still linearizes the observation model, but this is typically a reasonable approximation for range-bearing measurements when the vehicle pose is known (Durrant-Whyte and Bailey (2006)). A sample realization of a robot trajectory in FastSLAM can be seen in the image below.

Due to the high dimensionality of the SLAM state space, applying particle filters directly is computationally infeasible. The sample space is therefore reduced by Rao-Blackwellization (R-B), which partitions the joint state: the trajectory is sampled, while the map is filtered analytically conditioned on each sampled trajectory.

FastSLAM: An example realization of a robot's trajectory in the FastSLAM algorithm, as seen in Durrant-Whyte and Bailey (2006). The proposal distribution for each stage of the update is depicted by ellipsoids; from these a robot pose is sampled and the observed landmarks are updated (assuming the sampled pose is perfect). The probabilistic model of the robot's location consists of many such trajectories.

Tell me what you think!