Dynamic Grid Fusion for ADAS and AD

BASELABS Dynamic Grid is a multi-sensor perception technology for automated driving functions. It uses low-level sensor fusion algorithms to detect objects of any type from high-resolution or imaging radars, semantic segmentation cameras, and LiDAR sensors. For unsupervised automated driving, it serves as a complementary safety channel to ensure collision-free driving. For ADAS functions, it consistently determines dynamic and static objects, including free space. The configurable library follows a white box approach and is ready for production use on the CPUs of typical automotive ECUs, including safety certification up to ASIL B.


The Dynamic Grid has several valuable properties for future ADAS systems.

Benefits of the Dynamic Grid

For unsupervised automated driving, BASELABS Dynamic Grid is a complementary enabler technology: it compensates for the inability of typical AI-based methods to detect unknown objects, such as lost cargo, and thus ensures collision-free driving.

For ADAS functions, the integrated determination of dynamic and static objects improves detection and lowers false alarm rates compared to other solutions.

The ASIL B certification drastically reduces time to market.

The benefits of the Dynamic Grid originate from its technical principles. It provides a consistent environmental model by design.

  • High Detection Rate - By low-level sensor fusion without object extraction
  • Low False Alarm Rate - By integrated estimation of static and dynamic objects
  • Runs on Automotive CPUs - By minimizing particles in static areas (see the sketch after this list)
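
The last principle can be illustrated with a short sketch. The following C++ snippet is our own, strongly simplified illustration of the resource-allocation idea; all type and function names are hypothetical, not the BASELABS API:

    #include <cstddef>
    #include <vector>

    // Hypothetical cell state, for illustration only (not the BASELABS API).
    enum class CellLabel { Free, StaticallyOccupied, DynamicallyOccupied };

    struct Cell {
        CellLabel label = CellLabel::Free;
        std::size_t particleCount = 0;  // particles estimating the cell's motion
    };

    // Particles model motion, so only dynamically occupied cells receive any.
    // Free and static cells, which dominate a typical scene, cost no
    // particle-filter compute at all - which is what makes CPUs sufficient.
    void allocateParticles(std::vector<Cell>& grid, std::size_t particlesPerDynamicCell) {
        for (Cell& cell : grid) {
            cell.particleCount =
                (cell.label == CellLabel::DynamicallyOccupied) ? particlesPerDynamicCell : 0;
        }
    }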

Why is there a need for grid fusion as the next-generation sensor fusion technology? There are two major reasons:

1. Current sensor fusion approaches have inherent properties that limit their applicability for next-generation driving functions and sensors.

2. Integrated sensor fusion approaches resolve these limitations and thus enable next-generation driving functions.

For more insights, please refer to our background article or watch the presentation.

When is the Dynamic Grid relevant?

The Dynamic Grid is the right solution for your project if you...

  • use high-resolution point-cloud sensors like LiDAR or radar,
  • use semantic segmentation information,
  • address cluttered environments with both dynamic and static objects,
  • struggle with tricky scenes like roundabouts,
  • are looking for an approach that covers ISO 26262 and ASIL B,
  • plan to utilize compute power comparable to an ARM Cortex-A72 or more powerful,
  • want a 100% white box approach in the form of self-contained MISRA-compliant C++ code.

Driving functions: Automated Parking, Piloted Driving, City AEB

The Dynamic Grid is well suited for automated parking applications. Besides detecting and separating dynamic and static objects of all kinds, it provides a dedicated feature to extract free parking spaces from the sensor data.
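
As a rough illustration of such a free-space extraction, the following toy C++ sketch scans a single row of curbside grid cells for contiguous free runs long enough to hold the ego vehicle; the function and its signature are our invention, not the library's interface, and a real extraction works on the 2D grid:

    #include <cstddef>
    #include <utility>
    #include <vector>

    // Toy sketch: find contiguous free runs of at least minCells cells.
    // Returns pairs of {startCell, cellCount}.
    std::vector<std::pair<std::size_t, std::size_t>>
    findFreeRuns(const std::vector<bool>& cellIsFree, std::size_t minCells) {
        std::vector<std::pair<std::size_t, std::size_t>> runs;
        std::size_t start = 0, length = 0;
        for (std::size_t i = 0; i <= cellIsFree.size(); ++i) {
            if (i < cellIsFree.size() && cellIsFree[i]) {
                if (length == 0) start = i;  // a free run begins here
                ++length;
            } else {
                if (length >= minCells) runs.emplace_back(start, length);
                length = 0;  // run ended (occupied cell or end of row)
            }
        }
        return runs;
    }
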
Piloted driving applications like the Traffic Jam or Highway Pilot benefit from the capability to handle extended objects and to provide 360° perception around the vehicle, among other features. A related use case is a radar sub-system with four corner radars that generates a unified output from all sensors of this modality.
Applications like City AEB are enabled by the capability of the Dynamic Grid to operate in crowded environments with many different object types and crossing traffic.

Grid Fusion Technology Overview

The Dynamic Grid transforms the data from high-resolution sensors into a unified and reliable representation of the vehicle's environment that is the basis for downstream algorithms like path planning and motion prediction.

The algorithm combines an occupancy grid map with a particle filter. This combination ensures the consistent estimation of static and dynamic objects, including object dynamics. Each grid cell receives one of the labels "statically occupied", "dynamically occupied", or "free". If a dynamic object occupies a cell, the cell additionally contains the object's velocity and driving direction. The Dynamic Grid can process additional class information to further improve the estimation quality, e.g., from cameras with semantic segmentation.
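
In data-structure terms, the per-cell output described above can be pictured roughly as follows. This is a minimal C++ sketch with hypothetical names, not the shipped interface:

    #include <cstdint>

    // Hypothetical per-cell estimate, for illustration only. Each fused
    // grid cell carries one of the three labels described above; cells
    // labeled dynamically occupied additionally carry a motion state.
    enum class CellLabel : std::uint8_t { Free, StaticallyOccupied, DynamicallyOccupied };

    struct CellEstimate {
        CellLabel label = CellLabel::Free;
        float occupancyProbability = 0.0f;  // belief that the cell is occupied
        // Motion state, meaningful only when label == DynamicallyOccupied:
        float speedMps = 0.0f;           // magnitude of the cell's velocity
        float headingRad = 0.0f;         // driving direction in the grid frame
        std::uint8_t semanticClass = 0;  // optional class, e.g. from a
                                         // semantic segmentation camera
    };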

As an integrated low-level sensor fusion approach, the Dynamic Grid does not rely on explicit object extraction and thus does not suffer from error propagation due to early decisions. Instead, the distinction between static and dynamic objects is based on more information, resulting in high detection rates and low false alarm rates. By design, there are no conflicts between dynamic traffic participants and the static environment, as both are derived from the same model at low latency.
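
To make the contrast with object-level fusion concrete, here is a strongly simplified C++ sketch of a low-level update: raw detections adjust per-cell beliefs directly, and the static/dynamic decision is deferred. The log-odds scheme, the numbers, and all names are our assumptions:

    #include <cstddef>
    #include <vector>

    // Hypothetical raw detection: a point measurement already mapped to a cell.
    struct Detection { std::size_t cellIndex; };

    // Every detection adjusts the occupancy belief (log-odds) of the cell it
    // falls into. There is no clustering into object boxes and thus no early
    // decision whose error could propagate downstream.
    void updateOccupancy(std::vector<double>& cellLogOdds,
                         const std::vector<Detection>& detections,
                         double hitLogOdds = 0.85) {
        for (const Detection& d : detections) {
            cellLogOdds[d.cellIndex] += hitLogOdds;  // accumulate evidence
        }
    }
    // The static-vs-dynamic classification happens later, on the fused grid,
    // once evidence from several sensors and time steps is available.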

Read more on our Sensor Fusion Hub.

Supported Sensors

The Dynamic Grid supports any sensor that provides dense information about the vehicle's environment. In particular, the Dynamic Grid is optimized for HD-radar sensors, semantic segmentation cameras, and LiDAR point cloud data.
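
To illustrate what "dense information" means at the interface, the inputs can be pictured along the following lines. The C++ types are hypothetical and only meant to convey the shape of the data, not the actual integration API:

    #include <cstdint>
    #include <vector>

    // One radar or LiDAR return: a position plus, for radar, radial velocity.
    struct PointMeasurement {
        float x, y, z;         // position in the sensor frame [m]
        float radialVelocity;  // Doppler speed; 0 for LiDAR [m/s]
    };

    // One semantically labeled camera ray, e.g. road, vehicle, or pedestrian.
    struct SemanticRay {
        float azimuthRad;      // ray direction in the camera frame
        std::uint8_t classId;  // class label from the segmentation network
    };

    // Each sensor delivers a dense batch per cycle; the fusion consumes such
    // batches directly instead of pre-extracted object lists.
    struct SensorFrame {
        std::vector<PointMeasurement> points;  // HD radar / LiDAR
        std::vector<SemanticRay> semantics;    // segmentation camera
        double timestampSec = 0.0;
    };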
