Low-Level Fusion Library for High-Resolution Sensors

BASELABS Dynamic Grid provides sensor fusion of dynamic and static objects, including free space detection. It also supports high-resolution and semantic sensors. The software library allows the development of challenging use cases in urban and highway environments, such as valet parking, L2+ driver assistance, and highly automated driving.

When is the Dynamic Grid relevant?

The Dynamic Grid is the right solution for your project if you...

  • use high-resolution point-cloud sensors like lidar or radar,
  • use semantic segmentation information,
  • address cluttered environments with both dynamic and static objects,
  • struggle with tricky scenes like roundabouts,
  • are looking for an approach that covers ISO 26262 and ASIL B,
  • plan to utilize compute power comparable to an ARM Cortex-A72 or higher,
  • don't have the significant amounts of training data required for an AI-based system, and
  • are aware of the limitations of state-of-the-art AI approaches.

More than AI: multiple sensor modalities, integrated dynamic object and free space fusion

The dynamic grid is an advanced approach that detects static and dynamic objects and estimates free space in a single, integrated algorithm. This ensures that the information about objects and free space is consistent and free of contradictions. ARM- and x64-based processors are ideally suited to run the algorithm in real time, and the dynamic grid makes optimal use of their computing power. In contrast to state-of-the-art AI approaches, the dynamic grid can fuse different sensor modalities like radar and camera in an integrated manner.
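To make the idea of an integrated output more concrete, here is a minimal C++ sketch. It is an illustration only, not the BASELABS API; all type and function names are hypothetical. One update step consumes measurements from several modalities and returns objects and free space together, so both are derived from the same internal state and stay consistent.

```cpp
#include <cstdio>
#include <vector>

// Hypothetical measurement and output types, for illustration only.
struct RadarDetection { double x, y, dopplerSpeed; };
struct CameraSemantics { double x, y; int classId; };
struct TrackedObject { double x, y, vx, vy; };
struct FreeSpaceCell { double x, y; };

// The fused environment model: objects and free space are produced together
// from one internal state, so they cannot contradict each other.
struct EnvironmentModel {
    std::vector<TrackedObject> objects;
    std::vector<FreeSpaceCell> freeSpace;
};

// One integrated update step over all modalities (placeholder logic only).
EnvironmentModel fuse(const std::vector<RadarDetection>& radar,
                      const std::vector<CameraSemantics>& camera) {
    EnvironmentModel model;
    for (const auto& d : radar) {
        if (d.dopplerSpeed > 0.5) {
            // Treat clearly moving detections as (very rough) object evidence.
            model.objects.push_back({d.x, d.y, d.dopplerSpeed, 0.0});
        }
    }
    (void)camera;  // semantic labels would refine the classification here
    model.freeSpace.push_back({0.0, 0.0});  // placeholder free-space result
    return model;
}

int main() {
    EnvironmentModel m = fuse({{10.0, 2.0, 3.0}}, {{10.0, 2.0, 1}});
    std::printf("%zu objects, %zu free-space cells\n", m.objects.size(), m.freeSpace.size());
    return 0;
}
```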

Input/output

Use cases

Parking, assisted and automated driving in urban and highway environments

Typical use cases for the dynamic grid are:

  • parking use cases like trained parking and automated valet parking,
  • stop-and-go use cases like a traffic jam pilot, and
  • City AEB and similar driving functions for urban and unstructured environments.

From a system setup perspective, the Dynamic Grid is well suited for radar or lidar sub-systems and provides the required low-level fusion. A typical example is a radar sub-system with four corner radars that generates a unified output from all sensors of this modality.
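For illustration, a minimal C++ sketch of such a sub-system setup (all names are hypothetical, not the BASELABS API): four corner radars are registered with their mounting poses and feed one fusion instance per cycle, which returns a single unified result.

```cpp
#include <cstddef>
#include <cstdio>
#include <string>
#include <vector>

// Hypothetical types for a low-level radar fusion sub-system.
struct Pose2D { double x, y, yaw; };                         // sensor mounting pose on the vehicle
struct RadarDetection { double range, azimuth, doppler; };
struct RadarScan { std::string sensorId; std::vector<RadarDetection> detections; };

// One fusion instance that collects the scans of all corner radars for a cycle
// and produces a single, unified result (placeholder: it just counts detections).
class RadarFusion {
public:
    void registerSensor(const std::string& id, const Pose2D& mountingPose) {
        sensors_.push_back({id, mountingPose});
    }
    void addScan(const RadarScan& scan) { pending_.push_back(scan); }
    std::size_t runCycle() {
        std::size_t fusedDetections = 0;
        for (const auto& scan : pending_) fusedDetections += scan.detections.size();
        pending_.clear();
        return fusedDetections;  // stands in for the unified environment output
    }
private:
    struct Sensor { std::string id; Pose2D pose; };
    std::vector<Sensor> sensors_;
    std::vector<RadarScan> pending_;
};

int main() {
    RadarFusion fusion;
    fusion.registerSensor("frontLeft",  { 3.7,  0.8,  0.8});
    fusion.registerSensor("frontRight", { 3.7, -0.8, -0.8});
    fusion.registerSensor("rearLeft",   {-0.8,  0.8,  2.4});
    fusion.registerSensor("rearRight",  {-0.8, -0.8, -2.4});

    fusion.addScan({"frontLeft", {{12.0, 0.1, 2.5}}});
    fusion.addScan({"rearRight", {{ 8.0, 0.3, 0.0}}});
    std::printf("fused %zu detections this cycle\n", fusion.runCycle());
    return 0;
}
```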

"The dynamic grid provided by BASELABS is a promising algorithm to significantly improve the environment perception in challenging environments."

 

Dr. Steen Kristensen

Senior Expert and Team Leader, Comprehensive Environment Model

Algorithm overview

The algorithm divides the environment into small areas, so-called cells. For each cell, the algorithm determines whether it is free or occupied. If it is occupied by an object, its velocity and driving direction are also calculated. Finally, static and dynamic objects are clearly separated from each other and provided together with the free space, e.g. for maneuver decisions and path planning.
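To illustrate this idea at a very high level, here is a minimal, simplified C++ sketch (not the BASELABS API; all type and function names, thresholds, and values are hypothetical). Each cell carries an occupancy probability and a velocity estimate, and a classification step separates free space, static obstacles, and dynamic objects.

```cpp
#include <cmath>
#include <cstdio>
#include <vector>

// Hypothetical, simplified cell state: occupancy probability plus a 2D velocity estimate.
struct Cell {
    double occupancy = 0.0;  // probability that the cell is occupied, 0..1
    double vx = 0.0;         // estimated velocity in x [m/s]
    double vy = 0.0;         // estimated velocity in y [m/s]
};

enum class CellClass { Free, Static, Dynamic };

// Classify a single cell: free if the occupancy is low, otherwise static or
// dynamic depending on the magnitude of the estimated velocity.
CellClass classify(const Cell& c, double occThreshold = 0.5, double speedThreshold = 0.5) {
    if (c.occupancy < occThreshold) return CellClass::Free;
    const double speed = std::hypot(c.vx, c.vy);
    return speed > speedThreshold ? CellClass::Dynamic : CellClass::Static;
}

int main() {
    // A tiny 3-cell "grid" with example values: free, static obstacle, moving object.
    std::vector<Cell> grid = {{0.1, 0.0, 0.0}, {0.9, 0.0, 0.0}, {0.95, 3.0, 0.5}};
    for (const Cell& c : grid) {
        switch (classify(c)) {
            case CellClass::Free:    std::puts("free space");      break;
            case CellClass::Static:  std::puts("static obstacle"); break;
            case CellClass::Dynamic: std::puts("dynamic object");  break;
        }
    }
    return 0;
}
```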


How to process lidar point clouds for urban environments


Urban environments add new challenges for automated driving functions and the required environment models. Highway-like scenarios mainly contain objects that can be modeled and detected well using classical data fusion and tracking methods. The objects in cities, however, are more diverse, more complex to model, and partly unforeseeable. To address urban environments, high-resolution lidar and radar sensors are becoming increasingly popular. However, classical algorithms like the occupancy grid have severe shortcomings when processing point clouds in scenarios that contain both stationary and moving objects. The Dynamic Grid is a new approach that overcomes these shortcomings and determines free space as well as static and dynamic objects in an integrated algorithm. To provide an even more comprehensive environment model, semantic information from cameras can be incorporated as well.
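As a rough illustration of the ingestion step (hypothetical names and values, not the BASELABS API), the C++ sketch below maps semantically labeled lidar points into grid cells and accumulates an occupancy count and a semantic vote per cell.

```cpp
#include <cstdio>
#include <map>
#include <utility>
#include <vector>

// A lidar point with a semantic label, e.g. projected from a camera-based segmentation.
struct LabeledPoint { double x, y; int classId; };  // classId: 0 = unknown, 1 = vehicle, 2 = pedestrian, ...

// Per-cell accumulator: number of lidar hits and votes per semantic class.
struct CellAccumulator {
    int hits = 0;
    std::map<int, int> classVotes;
};

// Map a metric position to the index of a regular grid cell with the given resolution
// (simplified indexing, for illustration only).
std::pair<int, int> toCellIndex(double x, double y, double cellSize) {
    return {static_cast<int>(x / cellSize), static_cast<int>(y / cellSize)};
}

int main() {
    const double cellSize = 0.5;  // 0.5 m cells
    std::map<std::pair<int, int>, CellAccumulator> grid;

    // A few example points; in practice this would be a full lidar scan.
    std::vector<LabeledPoint> scan = {{10.2, 4.1, 1}, {10.3, 4.0, 1}, {25.0, -3.0, 0}};
    for (const auto& p : scan) {
        CellAccumulator& cell = grid[toCellIndex(p.x, p.y, cellSize)];
        ++cell.hits;
        ++cell.classVotes[p.classId];
    }
    std::printf("%zu occupied cells\n", grid.size());
    return 0;
}
```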
