A Fusion Architecture for Object Detection using Replaceable Sensors (SAE 2009-01-0164)
Following the implementation and continuous improvement of passive safety systems (e.g. airbags, seat belt pretensioners), active safety systems have been introduced in a large number of vehicles. Some of these active safety systems rely on environment recognition to detect critical situations or imminent crashes; they warn the driver and can provide assistance (e.g. by applying the appropriate brake force). One key requirement for such systems is robust and reliable knowledge of the vehicle environment. Today, the underlying environment recognition very often relies on data from a single sensor. Due to increasing requirements on robustness, operating range (field of view, foresight, level of intervention) and the number of different applications running in parallel, the fusion of data from different sensors may become necessary.

This paper describes a fusion architecture that provides a basis for safety and non-safety applications. The development objective for this architecture was the ability to combine different sensors with minimal adaptation effort for the various applications running in parallel. It should take both complementary and competitive sensor aspects into account to ensure reliable environment recognition.

The paper presents the fusion architecture using an example system: the fusion of a Laser Scanner and a Mono Camera for a Collision Mitigation System (CMS). A Fusion Core combines the object lists from the Laser Scanner, a further development of the IDIS® sensor, and an Object Verification System based on a Mono Camera. The combined information describing the vehicle's surroundings is then used by the Collision Mitigation System to warn the driver, pre-fill the brake system and, in the case of an unavoidable crash, initiate autonomous braking to reduce the kinetic collision energy.
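The described pipeline can be illustrated with a minimal sketch: a camera-based verification step confirms objects from the scanner's object list, and the CMS escalates its reaction with decreasing time-to-collision. All names, data fields, and threshold values below are illustrative assumptions, not the paper's actual interfaces or calibration.

```python
from dataclasses import dataclass

@dataclass
class TrackedObject:
    # Hypothetical object-list entry; real sensors report richer state.
    obj_id: int
    distance_m: float      # longitudinal distance from the laser scanner
    rel_speed_mps: float   # closing speed (positive = approaching)
    camera_verified: bool = False

def fuse(scanner_objects, camera_verified_ids):
    """Sketch of a fusion core: flag scanner objects confirmed by the
    camera-based Object Verification System (competitive fusion aspect)."""
    for obj in scanner_objects:
        obj.camera_verified = obj.obj_id in camera_verified_ids
    return scanner_objects

def cms_action(obj, ttc_warn=2.5, ttc_prefill=1.5, ttc_brake=0.6):
    """Escalation logic: driver warning -> brake pre-fill -> autonomous
    braking. Thresholds (in seconds) are purely illustrative."""
    if not obj.camera_verified or obj.rel_speed_mps <= 0:
        return "none"  # act only on verified, approaching objects
    ttc = obj.distance_m / obj.rel_speed_mps  # time to collision
    if ttc <= ttc_brake:
        return "autonomous_braking"
    if ttc <= ttc_prefill:
        return "brake_prefill"
    if ttc <= ttc_warn:
        return "warn_driver"
    return "none"
```

A design point this sketch reflects: the scanner remains the primary object source, while the camera contributes a confirmation signal, so either sensor could in principle be replaced without restructuring the downstream CMS logic.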