When a machine is driving your car, how does it avoid getting into accidents?
When you are driving a car, how do you avoid getting into accidents?
Well, I assume you do three things:
1. You pay a lot of attention to the road.
2. You brake and evade whenever necessary.
3. And you try to avoid braking and evading if not necessary.
Technically speaking, you want a complete understanding of the environment (1), a high detection rate for relevant objects (2), and a low false alarm rate (3).
But what is a complete understanding of the environment? It means
- you want to keep track of all moving road users and traffic participants,
- you want to be aware of all stationary objects, lane markings, and signs,
- and you want to identify where it is safe to go next.
So far, so trivial. However, as we all know, these tasks are quite hard for machines to master.
When a machine is driving a car, how does it (typically) get into accidents?
Making a machine drive a car appears to be all about perception. Public debates about "self-driving" cars and driver assistance systems mostly revolve around specific accidents and events where it became obvious that the computer under the hood did not have a complete understanding of the environment. The number one question is typically: Why didn't the machine see the object in question? This question is often followed by an analysis of the sensors being used and their theoretical ability to perceive that object. An oversimplified summary of those analyses goes like this: the sensors should have been able to capture the object, but that information got lost somewhere down the processing chain. If you take a closer look at where, when, and why the information got lost, you often find the same answer: it happened when the machine had to decide what it was seeing. Sometimes it decides on the wrong thing. And sometimes it struggles with the decision, which results in a lack of action.
But why does it do that? Well, the sensors provide an enormous amount of data in each cycle. Cameras capture intensities for millions of pixels, radars receive complex waveforms, and lidars, if used, output huge point clouds. All this data needs to be reduced, not only to capture the environment but to truly understand it. So the actual question is not why a machine makes these decisions, but when and how, as in high-level fusion the decisions are mostly already made by the individual sensors. And again, the oversimplified answer for those accidents and events is that a single sensor does not have all the information, so its wrong decisions cannot be corrected later. This means that the decisions were made too early! In other words: sensor data that actually was available was disregarded.
So-called high-level sensor fusion is prone to this issue, since each sensor takes its individual set of measurements and tries to create a full representation of its perceived environment on its own. However, each sensor suffers from false positive as well as false negative measurements, so a set of handcrafted heuristics typically exists to cope with these errors. In essence, each sensor individually decides whether an object is or isn't there. If that decision is wrong, it cannot be reversed later in the perception chain.
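A toy sketch of that failure mode, with made-up confidence values and thresholds (nothing here reflects a real sensor stack):

```python
# Toy illustration of the irreversibility problem described above.
# Each sensor thresholds its own confidence before fusion; the fusion
# stage only ever sees the binary decisions. All numbers are made up.

def sensor_decision(confidence, threshold=0.5):
    """Per-sensor existence decision, as in high-level fusion."""
    return confidence >= threshold

def fuse_decisions(decisions):
    """Fusion sees only booleans; the raw confidences are gone."""
    return any(decisions)

# An object seen weakly by both camera and radar:
camera_conf, radar_conf = 0.4, 0.45
exists = fuse_decisions([sensor_decision(camera_conf),
                         sensor_decision(radar_conf)])
print(exists)  # False, although two sensors reported substantial evidence
```

Both sensors saw the object, but each one individually decided it was not confident enough, and that evidence was discarded before fusion ever happened.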
There is a great article from my colleague Eric providing more technical background for this issue:
Low-Level Sensor Fusion as a key ingredient for automated driving
This might be one of the key challenges for automated driving: How can we make sure that we take as much sensor data into account as possible? The answer to that has long been discussed in the community as low-level sensor fusion, as opposed to high-level sensor fusion, which is commonly used in today's ADAS.
As the name indicates, low-level sensor fusion combines information from multiple sensors early in the process, without extensive pre-processing - or, in that sense, decision-making - for each sensor individually. So the idea behind low-level sensor fusion already contains the concept of utilizing all the available data with as little information loss as possible.
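One classic representation that fuses raw evidence before any decision is made is a log-odds occupancy grid. A minimal, purely illustrative sketch (the per-cell probabilities stand in for real inverse sensor models and are made up):

```python
import math

# Toy log-odds occupancy grid cell fed directly with per-cell evidence
# from two sensors - no per-sensor object lists are formed first.
# The measurement probabilities below are invented for illustration.

def logit(p):
    return math.log(p / (1 - p))

def update_cell(log_odds, p_occupied):
    """Fuse one measurement into a cell (standard log-odds update)."""
    return log_odds + logit(p_occupied)

def probability(log_odds):
    return 1 / (1 + math.exp(-log_odds))

cell = 0.0                      # prior: 0.5, i.e. unknown
cell = update_cell(cell, 0.6)   # weak radar evidence
cell = update_cell(cell, 0.65)  # weak lidar evidence
print(round(probability(cell), 2))  # combined belief: 0.74
```

The same two weak measurements that a per-sensor threshold would have discarded add up to a substantial occupancy belief, and the decision of whether something is there can be postponed until all evidence is in.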
However, one question remains: What does low-level sensor fusion look like? How does it work, and can it actually live up to its promise?
This is where BASELABS Dynamic Grid comes in. It is a truly low-level sensor fusion technology based on a grid representation that allows for an integrated estimation of
- all kinds of dynamic objects,
- all kinds of static objects, and
- free space.
One of the main benefits of this technology is that it does not require early object extraction, and thus no early decisions on what it is that the sensors perceive, neither the type of objects nor their shapes. By not disregarding crucial information, the Dynamic Grid achieves a high detection rate for arbitrary, unknown, and potentially moving objects.
Thanks to its integrated approach of processing the static and the dynamic environment simultaneously, the Dynamic Grid also features a low false alarm rate.
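To make the three estimation targets above concrete, here is a minimal sketch of the kind of per-cell state a dynamic grid can carry: occupancy belief plus a velocity estimate, so that static structure, moving objects, and free space all fall out of one representation. The field names and thresholds are illustrative assumptions, not the BASELABS API:

```python
from dataclasses import dataclass

# Hypothetical per-cell state of a dynamic grid. Because each cell
# estimates both occupancy and velocity, the classification into
# free space, static, and dynamic comes from a single representation.
# All names and thresholds here are made up for illustration.

@dataclass
class GridCell:
    p_occupied: float = 0.5   # occupancy belief (0.5 = unknown)
    vx: float = 0.0           # estimated cell velocity in m/s
    vy: float = 0.0

    def classify(self, occ_thresh=0.7, free_thresh=0.3, v_min=0.5):
        if self.p_occupied < free_thresh:
            return "free space"
        if self.p_occupied < occ_thresh:
            return "unknown"
        speed = (self.vx**2 + self.vy**2) ** 0.5
        return "dynamic" if speed > v_min else "static"

print(GridCell(0.9, 8.0, 0.0).classify())  # dynamic, e.g. a crossing car
print(GridCell(0.9, 0.0, 0.0).classify())  # static, e.g. a parked car
print(GridCell(0.1).classify())            # free space
```

Note that the "dynamic vs. static" distinction is an output of the estimation, not an input, which is exactly why no early decision about object type or shape is needed.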
You can find an in-depth walkthrough of this technology here:
So, does the Dynamic Grid live up to the promise of low-level sensor fusion?
The answer to that currently is: Yes, it does!
The Dynamic Grid allows for processing high-resolution sensor data from next-generation sensors like HD radars, semantic segmentation cameras, and lidars. This kind of sensor data poses considerable challenges for classical high-level sensor fusion approaches. The Dynamic Grid, on the other hand, is designed for such data and therefore enables next-generation automated driving systems to utilize the full potential of those sensors. But what are those next-generation driving systems? The Dynamic Grid has shown superior performance in urban environments, bringing reliable Automated Emergency Braking and Steering as well as Adaptive Cruise Control to those areas. At the same time, it has shown great potential for next-generation automated parking functions like Automated Valet Parking or Trained Parking. In combination, these applications pave the way for functions like Traffic Jam Pilots and Highway Pilots. We also believe it will be a relevant part of level 4 automated driving, automated hub logistics, and last-mile delivery. All this with one goal in mind:
When the machine is driving the car, let’s make sure it avoids getting into accidents!