Sensor data fusion is one of the major building blocks for automated vehicles and the core of BASELABS’ offering

Sensor Fusion


What is sensor fusion?

Sensor fusion is the process of combining the outputs of different sensors in order to obtain more reliable and meaningful data. In the context of automated driving, the term usually refers to the perception of a vehicle’s environment using automotive sensors such as radars, cameras, and lidars.


What is sensor fusion required for?

Sensor fusion is required for 

  • resolving contradictions between sensors (deciding which sensor to believe if one reports a detection at a particular position and another does not) 

  • synchronizing sensors (ensuring the position of a vehicle is correctly calculated even though the vehicle moved between the times of measurement of two sensors) 

  • predicting the future positions of objects (calculating where the detected vehicles or pedestrians will most likely be a couple of seconds later) 

  • exploiting the strengths of heterogeneous sensors (e.g. combining the longitudinal accuracy of a radar with the lateral accuracy and classification capabilities of a camera) 

  • detecting malfunctions of sensors (detecting if one sensor systematically provides implausible detections compared to other sensors) 

  • achieving automated driving safety requirements (making sure automated vehicles will operate safely in a wider range of scenarios than a single sensor could cover)
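As a small illustration of the synchronization and prediction points above, the simplest building block is a constant-velocity extrapolation of an object's position. This is a hypothetical minimal sketch; production systems use richer motion models that include acceleration and yaw rate.

```python
def predict_position(x, y, vx, vy, dt):
    """Extrapolate an object's position dt seconds ahead, assuming
    constant velocity. The same operation compensates the time offset
    between two sensors' measurements and predicts where a detected
    vehicle or pedestrian will be a moment later."""
    return x + vx * dt, y + vy * dt

# An object at (10 m, 0 m) moving at 5 m/s longitudinally,
# extrapolated by 0.4 s:
predict_position(10.0, 0.0, 5.0, 0.0, 0.4)  # → (12.0, 0.0)
```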


What is the benefit of using BASELABS' expertise in sensor fusion?

Sensor data fusion has been a topic of intensive research for half a decade. Identifying and implementing techniques that really work in practice and meet functional, runtime, and safety constraints requires significant practical experience and investment. Our customers therefore build upon BASELABS' expertise in sensor fusion to accelerate their development, minimize risks, and use their internal resources economically. 


How is sensor fusion implemented? 

There are two major variants of sensor data fusion: 

  • Object fusion determines a list of potential objects including their positions, kinematic parameters (speed, acceleration, …), and confidence metrics. This is achieved by applying so-called Multiple Object Tracking (MOT) algorithms, which work as follows. As sensors sometimes provide so-called “false positives” (that is, the sensor falsely reports an object where there is none), detections are initially considered as potential objects whose existence is yet to be confirmed (“tracks”). For each track, an Extended Kalman filter (EKF) or Unscented Kalman filter (UKF) is used to estimate its position and motion. In the next measurement cycle, the tracker determines which of the new detections may belong to already existing tracks using a group of techniques called data association (exemplary implementations include PDA, IPDA, JPDA, and JIPDA). Those detections are used to update the position and motion estimates. From the remaining detections, new tracks are created. Once a track has been detected often enough (in mathematical terms, once its probability of existence gets sufficiently high), it is considered a confirmed object. 
  • Grid fusion divides the environment of the vehicle into discrete cells and estimates the probability that each cell is occupied (“static grid”) and the probable motion within each cell (“dynamic grid”).
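The predict–associate–update–confirm cycle of object fusion described above can be sketched in a few lines. This is a deliberately simplified, hypothetical example: a scalar 1-D alpha-beta filter stands in for the multivariate EKF/UKF, and greedy nearest-neighbor gating stands in for the PDA-family association methods a production tracker would use.

```python
class Track:
    """One tentative or confirmed object, tracked in 1-D.
    An alpha-beta filter stands in for the EKF/UKF of a real system."""
    def __init__(self, z):
        self.x, self.v = z, 0.0      # position / velocity estimate
        self.hits, self.misses = 1, 0

    def predict(self, dt):
        self.x += self.v * dt        # constant-velocity prediction

    def update(self, z, dt, alpha=0.5, beta=0.3):
        r = z - self.x               # innovation (residual)
        self.x += alpha * r
        self.v += (beta / dt) * r
        self.hits += 1
        self.misses = 0


def step(tracks, detections, dt=0.1, gate=3.0, confirm=3, drop=3):
    """One tracker cycle: predict, nearest-neighbor association, update,
    spawn tentative tracks from unassociated detections, prune stale
    tracks. Returns the list of confirmed objects."""
    unused = list(detections)
    for t in tracks:
        t.predict(dt)
        z = min(unused, key=lambda d: abs(d - t.x), default=None)
        if z is not None and abs(z - t.x) < gate:   # gating
            unused.remove(z)
            t.update(z, dt)
        else:
            t.misses += 1
    tracks.extend(Track(z) for z in unused)          # new tentative tracks
    tracks[:] = [t for t in tracks if t.misses < drop]
    return [t for t in tracks if t.hits >= confirm]  # confirmed objects
```

With `confirm=3`, a track created from a first detection is only reported as a confirmed object after it has been associated with detections in three cycles, which is how false positives are suppressed.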


BASELABS enables data fusion results for both data fusion technologies:

  • Object fusion: BASELABS Create Embedded is the tool for the development of data fusion systems for automated driving functions. It provides data fusion algorithms that combine data from radar, camera, and lidar sensors. The resulting object fusion provides a unified object list for the vehicle's environment.
  • Grid fusion: BASELABS Dynamic Grid provides integrated dynamic object and free space fusion for automated driving functions with SAE Level 3-4 in unstructured urban environments.
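The static-grid idea described above — estimating per-cell occupancy probabilities — is commonly implemented with log-odds updates, so that evidence from successive measurements simply adds up. The following is a minimal illustrative sketch (not BASELABS' implementation); a dynamic grid would additionally estimate motion per cell.

```python
import math

def logodds(p):
    """Convert a probability to log-odds form."""
    return math.log(p / (1.0 - p))

class OccupancyGrid:
    """Minimal static occupancy grid with log-odds cell updates."""
    def __init__(self, width, height):
        # 0.0 log-odds corresponds to p = 0.5, i.e. "unknown"
        self.cells = [[0.0] * width for _ in range(height)]

    def update(self, x, y, p_meas):
        """Fuse one sensor measurement for cell (x, y): adding log-odds
        is equivalent to a Bayesian update with independent evidence."""
        self.cells[y][x] += logodds(p_meas)

    def probability(self, x, y):
        """Current occupancy probability of cell (x, y)."""
        return 1.0 - 1.0 / (1.0 + math.exp(self.cells[y][x]))
```

Two independent measurements that each say "90% occupied" push the cell well above 0.9, while cells no sensor has observed stay at 0.5.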


The book is a comprehensive introduction (more than 1,250 pages) to the field of MOT with a detailed discussion of numerous practical issues. Written by one of the world’s experts in the field, it provides numerous realistic examples using sensors such as radar and ultrasonic. The topics addressed by this monumental work include tracking of maneuvering targets, PDA-based methods, track-to-track fusion, tracking and association with attributes, measurement extraction for unresolved targets, and sensor management. 

Y. Bar-Shalom et al., Tracking and Data Fusion: A Handbook of Algorithms


The work provides an extensive review of state-of-the-art methods for the MOT problem. The paper consists of three main sections covering, respectively, Joint Probabilistic Data Association (JPDA), Multiple Hypothesis Tracking (MHT), and Random Finite Set (RFS) methods. For each of these three groups of algorithms, the key features are discussed and extensions are mentioned.  

Ba-Ngu Vo et al., Multitarget Tracking


The work starts with a Bayesian solution for generic object tracking and proceeds to the MOT problem. First, association-based methods are presented, starting with the relatively simple JPDA and ending with the advanced IMM-JITS. Additionally, newer FISST-based methods are introduced and explained as an alternative to the classical association-based approaches. Finally, the reader is introduced to the concept of Out-of-Sequence Measurements (OOSM), where the tracking algorithm is designed to correctly incorporate measurements that arrive delayed or out of time order as seen from the processing system. The latter effect is of particular importance in modern tracking systems, which rely on networked sensors interconnected by complex communication networks and are subject to delays caused by internal sensor processing. 

S. Challa et al., Fundamentals of Object Tracking


The book provides a comprehensive introduction to the methods of Bayesian inference in robotics. The reader gets an easy-to-read introduction to both parametric and non-parametric filtering with an emphasis on the motion and perception models used in robotics applications. In addition to classical estimation algorithms, an excellent introduction is given to localization and mapping applications such as occupancy grid mapping and efficient implementations of Simultaneous Localization and Mapping (SLAM). 

S. Thrun et al., Probabilistic Robotics, MIT Press, 2005


The work presents a broad overview of the MTT algorithms available before 2005, with a comparative analysis of the methods in terms of their processing structure, computational complexity, performance, and the type of association used. The report concentrates on enumerative algorithms (the group of data-association-based methods) and does not consider the newer FISST methods, which gained popularity after the work was published. 

G.W. Pulford, Taxonomy of Multiple Target Tracking Methods


The paper provides an excellent introduction to state-of-the-art methods for extended object tracking. The work starts with an overview of the basic methods used to track a single extended object, while an extension of these methods to tracking multiple extended objects is provided in the second part of the paper. The overview has a tutorial structure, with a number of important algorithms explained in the form of pseudocode. 

K. Granstrom et al., Extended Object Tracking: Introduction, Overview and Applications


The work addresses the problem of defining a miss-distance measure as a metric to assess the performance of multi-object tracking algorithms. The authors introduce and explain in detail the motivation behind the so-called Optimal Subpattern Assignment (OSPA) metric, a performance metric that is nowadays commonly accepted as one of the major Key Performance Indicators (KPIs) for MOT algorithms. 

D. Schuhmacher et al., A Consistent Metric for Performance Evaluation of Multi-Object Filters
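To make the OSPA idea concrete, the following hypothetical sketch computes the metric for tiny 2-D point sets: distances are cut off at c, the best assignment between the two sets is found, and cardinality mismatches are penalized with c per missing point. It uses brute-force enumeration of assignments, where real implementations use the Hungarian algorithm.

```python
from itertools import permutations
import math

def ospa(X, Y, c=2.0, p=2):
    """OSPA distance between two finite sets of 2-D points.
    Brute-force assignment (O(n!)) -- only suitable for tiny sets."""
    if len(X) > len(Y):
        X, Y = Y, X                   # ensure |X| <= |Y|
    m, n = len(X), len(Y)
    if n == 0:
        return 0.0                    # both sets empty
    best = min(                       # optimal sub-pattern assignment
        sum(min(c, math.dist(x, y)) ** p for x, y in zip(X, perm))
        for perm in permutations(Y, m)
    )
    # cardinality mismatch is penalized with the cut-off c per point
    return ((best + c ** p * (n - m)) / n) ** (1.0 / p)

ospa([(0, 0)], [(1, 0)])  # → 1.0 (one pair, one meter apart)
ospa([], [(0, 0)])        # → 2.0 (missed object costs the cut-off c)
```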
