BASELABS Create 3.1

Release notes

BASELABS continuously improves its products with new features and bug fixes. We inform our users about software updates and releases regularly. Please contact us to share your experience with our products and your ideas for new approaches.


Efficient data fusion development for ADAS and automated driving
BASELABS Create is designed for the fast development of complex data fusion algorithms. BASELABS Create can be used with field-tested, pre-implemented algorithms as well as for the development of fully custom algorithms.

More information about BASELABS Create


This is a bug fix release of BASELABS Create which contains the following improvements:

  • Fixed a bug in the example code of the 'BASELABS Create Data Fusion Application' template project which is part of the Visual Studio integration of BASELABS Create.
  • Fixed a bug in the Visual Studio extension which caused Visual Studio to crash if multiple versions of BASELABS Create are used.
  • Fixed a bug in the 'Baselabs.Statistics' NuGet package which could lead to inconsistent assembly references after an update of BASELABS Create.
  • Removed the contract assemblies from the 'Baselabs.Statistics' NuGet package.

Data Fusion Template Project


Applies to: All previous versions of BASELABS Create.

Required user actions: Adapt user code.



The example code of the BASELABS Create Data Fusion Application contains a bug in the ego motion compensation. Depending on the timing of the available sensor data, the issue may have little or no impact on the data fusion result, or it may manifest itself in tracked objects that appear to move or maneuver in accordance with the motion of the ego vehicle.
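The role of the ego motion compensation can be illustrated with a small sketch (Python with NumPy here purely for illustration; the actual template code is C#, and all names below are assumptions, not the BASELABS API). A track position expressed in the previous ego frame is transformed into the current ego frame using the incremental ego pose (dx, dy, dtheta) accumulated since the last external measurement:

```python
import numpy as np

def compensate_ego_motion(track_xy, dx, dy, dtheta):
    """Transform a track position from the previous ego frame into the
    current ego frame, given the incremental ego pose (dx, dy, dtheta)
    accumulated between two external measurements."""
    c, s = np.cos(dtheta), np.sin(dtheta)
    # Rotate into the new heading, after subtracting the ego translation.
    R = np.array([[c, s], [-s, c]])
    return R @ (np.asarray(track_xy, dtype=float) - np.array([dx, dy]))

# A static landmark 10 m ahead; the ego vehicle drives 1 m straight ahead.
print(compensate_ego_motion([10.0, 0.0], dx=1.0, dy=0.0, dtheta=0.0))  # -> [9. 0.]
```

If the incremental pose is wrong, as caused by the reported bug, this transformation injects the ego motion error into every track.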

To solve this issue in existing data fusion projects, follow the steps described in the following section.

For newly created BASELABS Create Data Fusion Applications, no further action is required.


Required User Actions

If the user’s application is built upon the template provided by the BASELABS Create Data Fusion Application, the PredictXAndReset() method of the ego motion estimation must be modified (see Fig. 1 and Fig. 2).


Detailed Description

The reported issue was caused by the PredictXAndReset() method of the ego motion filter class EgoMotionFilter. The method did not save the internal predicted vehicle state between the time prediction and reset steps (see Fig. 1 for the former implementation and Fig. 2 for the suggested change). The incorrect behavior affects only the example code of the auxiliary BASELABS Create Data Fusion Application; all other methods, including those of the BASELABS Create SDK, are unaffected.
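The essence of the defect can be reduced to whether the predicted estimate is written back to the filter's internal state. The following toy sketch (Python for illustration only; `_state` here is just a mean/variance pair, not the BASELABS type, and all names are hypothetical) contrasts the two behaviors:

```python
def predict(state, dt, q):
    """Toy 1-D prediction: the mean is kept, the variance grows by dt * q."""
    mean, var = state
    return mean, var + dt * q

class EgoFilterBuggy:
    def __init__(self):
        self._state = (0.0, 1.0)  # (mean, variance)
    def predict_x_and_reset(self, dt, q=0.1):
        result = predict(self._state, dt, q)
        # Bug: the prediction is returned but NOT saved, so the state
        # propagation and covariance inflation are lost for the filter.
        return result

class EgoFilterFixed:
    def __init__(self):
        self._state = (0.0, 1.0)
    def predict_x_and_reset(self, dt, q=0.1):
        # Fix: write the prediction back to the internal state first.
        self._state = predict(self._state, dt, q)
        return self._state
```

After one call with dt = 1, the buggy filter still carries the old variance of 1.0, while the fixed filter carries the inflated variance of 1.1.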

Within the example application, the PredictXAndReset() method is called upon the arrival of an external measurement, such as a radar scan. Depending on the timing relative to the ego motion measurements, the described bug could lead to an underestimation of the vehicle dynamics and to a propagation of the ego motion dynamics into the dynamics of the tracked targets. Moreover, in the unlikely situation of completely synchronous sensor data with the external measurements processed first, the former implementation could fail to update the velocity and angular rate states. This is a direct consequence of the fact that no noise is added to these states during the time propagation, so that after filter convergence the estimated uncertainty is close to zero.

If the user has not changed the functionality of this method, the new implementation shown in Fig. 2 can be adopted directly. However, if the user code contains modifications compared to the original implementation of the PredictXAndReset() method, it is the user’s responsibility to ensure that both the time-propagated ego motion state and covariance (Gaussian) are saved to the local state of the filter before the reset is performed.

public Gaussian<CTRVSpace> PredictXAndReset(DateTime time)
{
    if (!IsStable || !_lastCorrectionTime.HasValue)
    {
        return new Gaussian<CTRVSpace>(_state.Expectation, _state.Covariance);
    }

    Gaussian<CTRVSpace> result = UKF.PredictState(time - _lastCorrectionTime.Value,
        _state, _systemModel.NoiseCovariance, _systemModel).ToGaussian();

    _lastCorrectionTime = time;

    return result;
}


Figure 1. Former implementation of the PredictXAndReset() method. The ego motion estimation example is shown for an Unscented Kalman Filter (UKF) based implementation.

public Gaussian<CTRVSpace> PredictXAndReset(DateTime time)
{
    if (!IsStable || !_lastCorrectionTime.HasValue)
    {
        return new Gaussian<CTRVSpace>(_state.Expectation, _state.Covariance);
    }

    _state = UKF.PredictState(time - _lastCorrectionTime.Value, _state,
        _systemModel.NoiseCovariance, _systemModel).ToGaussian();

    var result = (Gaussian<CTRVSpace>)_state.Clone();

    _lastCorrectionTime = time;

    return result;
}




Figure 2. Suggested implementation of the PredictXAndReset() method. The ego motion estimation example is shown for an Unscented Kalman Filter (UKF) based implementation. The required changes are the assignment of the predicted estimate back to the _state member and the cloned copy that is returned.

The PredictXAndReset() method of the ego motion filter is triggered by the arrival of external sensor data (e.g. radar, lidar, or camera) and performs two major steps:

1. Prediction step (integration) of the filter using the vehicle’s motion model (e.g. CTRV). The integration time for the time propagation is calculated as the difference between the actual measurement time and the last correction time of the filter.

2. Partial filter state/covariance reset. As only incremental pose (2D position and heading) information between the arrival of external measurements is of interest for ego motion compensation, the corresponding entries of both the filter state and the associated covariance matrix are reset to zeros for the next integration cycle.
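The two steps above can be sketched as follows (a minimal NumPy sketch for illustration, not the BASELABS API; the state layout [x, y, heading, v, omega] and all names are assumptions):

```python
import numpy as np

POSE = slice(0, 3)  # assumed state layout: [x, y, heading, v, omega]

def predict_x_and_reset(state, cov, dt, predict_fn, Q):
    """Step 1: propagate the state over dt and inflate the covariance with
    process noise.  Step 2: zero the incremental pose entries of state and
    covariance for the next integration cycle, and return the prediction."""
    # Step 1: time propagation (prediction) of the filter.
    state = predict_fn(state, dt)
    cov = cov + Q * dt  # simplified covariance propagation for illustration
    result = (state.copy(), cov.copy())

    # Step 2: partial reset -- only the incremental pose between external
    # measurements is of interest, so its entries are reset to zero.
    state[POSE] = 0.0
    cov[POSE, :] = 0.0
    cov[:, POSE] = 0.0
    return result, state, cov
```

Note that the returned prediction is taken before the partial reset, while the filter keeps the reset state and covariance for the next cycle.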

Within a classical Kalman filter, the prediction step serves two purposes. First, the estimated state is propagated forward in time following the assumed kinematic (motion) model. Second, the estimate covariance matrix is also propagated forward in time, and this step is usually used to add uncertainty to the knowledge of the system’s state. The latter is important because our knowledge of the kinematic model is incomplete: models have imperfections (e.g. wheel slip), and the inherent stochastic nature of the process prevents an accurate deterministic prediction. The added uncertainty is then compensated by the correction step of the Kalman filter, where the estimate covariance is decreased depending on the accuracy of the available measurements and the actual measurement model.
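This predict/correct interplay can be seen in a minimal one-dimensional Kalman filter (a Python sketch with illustrative names, not the BASELABS implementation):

```python
def kf_predict(x, P, q):
    """Prediction: the state is propagated (a static model here, so x is
    kept) and the covariance P is inflated by the process noise q."""
    return x, P + q

def kf_correct(x, P, z, r):
    """Correction: a measurement z with variance r pulls the estimate
    towards z and reduces the covariance."""
    K = P / (P + r)                      # Kalman gain
    return x + K * (z - x), (1.0 - K) * P

x, P = 0.0, 1.0
x, P_pred = kf_predict(x, P, q=0.1)              # covariance grows: 1.0 -> 1.1
x, P_corr = kf_correct(x, P_pred, z=1.0, r=0.5)  # covariance shrinks below 1.1
```

Skipping the prediction therefore removes exactly the mechanism that keeps the covariance, and with it the Kalman gain, from decaying towards zero.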

Within the previously provided ego motion filter example code, the PredictXAndReset() method did not save the time-propagated state before the state/covariance reset (see Fig. 1). As a result, the motion prediction part of the PredictXAndReset() method, including the covariance inflation, was essentially ignored by the rest of the filter. However, because the same class also contains Filter() methods for both the velocity and angular rate measurements, and because all these methods can be called asynchronously, both Filter() methods also follow the two-step approach described above, with the time propagation step called first. As the filter state was properly propagated, updated, and saved within these two steps, the possible implications of the reported issue are strongly data-dependent and can be grouped as follows:

  • If the ego motion measurements are not synchronized with the external measurements and arrive at a much higher rate, the only observable effect would be slightly underestimated vehicle dynamics (i.e. the estimates would take longer to react to the actual measurements). This sluggish behavior of the ego motion estimation is directly propagated to the track estimation, as all objects in the environment will have a component caused by the error in tracking the ego motion. A practical remedy, often adopted in practice, is to tune the tracking filter by increasing the corresponding process noises in order to compensate for the effectively missing prediction step of the PredictXAndReset() method. A clear disadvantage of this strategy is that the process noise values no longer correspond to real measurement noises or true target dynamics, i.e. they take non-physical values that cannot be mapped to the results of real sensor characterization or deduced from typical vehicle dynamics. Moreover, if the arrival of the ego motion data is interrupted or its timing changes, even these tuned values cannot compensate for the missing dynamics, and the filter would appear either sluggish or too noisy. The effect may become especially noticeable during rapid turn maneuvers, where the unmodelled turn dynamics of the vehicle results in tracks that are perceived as maneuvering as well, although the true objects are either static or move differently (e.g. straight ahead at constant speed). Nevertheless, the observed effects depend strongly on the ratio of internal to external sensor rates and can become barely observable for ratios of 10 or higher.
  • If the ego motion and external data are made available to the filter completely synchronously, but the ego motion data are applied first, no difference in filter performance would be observed. The complete time span for the filter prediction step would already be covered by the Filter() method called first. The PredictXAndReset() method would then integrate the state estimate over a time difference of zero seconds, so skipping this step has no effect on the overall filter and MOT performance.
  • The most challenging case is the third scenario, where the ego motion and external measurements are again made available synchronously, but the external measurements are processed first. As the result of the prediction step was effectively ignored in the PredictXAndReset() method, and because the Filter() method would be called with a time difference of zero seconds, the motion model would not be applied and no process noise would be added to the state, leaving the state covariance unchanged. However, the measurement update would still be applied, aiming to reduce the estimate covariance. In practice, the filter would become overwhelmingly confident (albeit statistically inconsistent) regarding its estimate. This would cause the filter to look increasingly sluggish over time, as the corrections would have less and less influence on the estimated state. Ultimately, the filter could even stop reacting to new measurements and, as its confidence in some of the state estimates becomes too high, could suffer from numerical issues related to covariance entries being close to zero, or even become unstable. Within the MOT algorithm, this effect would bias all tracks by the unmodelled component of the vehicle dynamics. Similar to the first scenario, the effect would become most noticeable during fast maneuvers.
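The covariance collapse described in the third scenario can be reproduced with the same kind of one-dimensional filter: if the correction step is applied repeatedly while the prediction, and with it the process noise, is effectively skipped, the gain tends to zero and the filter stops reacting to measurements (a Python sketch for illustration only):

```python
def correct(x, P, z, r):
    """Standard 1-D Kalman correction with measurement z of variance r."""
    K = P / (P + r)
    return x + K * (z - x), (1.0 - K) * P

x, P = 0.0, 1.0
for _ in range(1000):
    # Prediction skipped (dt == 0, no process noise added), correction applied:
    x, P = correct(x, P, z=1.0, r=0.5)

# The covariance has collapsed and the gain is near zero -- the filter
# barely reacts to new measurements anymore.
gain = P / (P + 0.5)
```

With a healthy prediction step, the added process noise would keep P, and hence the gain, bounded away from zero.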

The aforementioned issue is only relevant to applications that are built entirely on the example code provided by the BASELABS Create Data Fusion Application, including the ego motion filter, and its impact on the final performance depends strongly on the configuration of the available sensors and their timing. From a practical point of view, the previous implementation of the EgoMotionFilter class could result in suboptimal MOT performance. Although some of the mentioned effects can be mitigated by adjusting the process noise for the targets or even by changing the associated dynamic models, this would nevertheless lead to inferior tracking performance compared to that of a properly configured system.

For further information please contact our support team.
