Search Results

Technical Paper

Robust Sensor Fused Object Detection Using Convolutional Neural Networks for Autonomous Vehicles

2020-04-14
2020-01-0100
Environmental perception is considered an essential module for autonomous driving and Advanced Driver Assistance Systems (ADAS). Recently, deep Convolutional Neural Networks (CNNs) have become the state of the art, with many different architectures applied to various object detection problems. However, the performance of existing CNNs drops when detecting small objects at large distances. To deploy an environmental perception system in real-world applications, the system must achieve high accuracy regardless of object size, distance, and weather conditions. In this paper, a robust sensor-fused object detection system is proposed that exploits the complementary advantages of vision and automotive radar sensors. The proposed system consists of three major components: 1) a Coordinate Conversion module, 2) a Multi-Level Sensor Fusion Detection (MSFD) system, and 3) a Temporal Correlation filtering module.
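A coordinate-conversion step of the kind named above could, for example, project radar returns into the camera image plane via a pinhole model. A minimal sketch, assuming a simple calibration (the extrinsic and intrinsic matrices below are illustrative values, not the paper's calibration):

```python
import numpy as np

def radar_to_image(points_radar, extrinsic, intrinsic):
    """Project 3-D radar points (N x 3) into pixel coordinates (N x 2)."""
    n = points_radar.shape[0]
    homog = np.hstack([points_radar, np.ones((n, 1))])  # N x 4 homogeneous
    cam = (extrinsic @ homog.T).T                       # N x 3, camera frame
    pix = (intrinsic @ cam.T).T                         # N x 3, homogeneous pixels
    return pix[:, :2] / pix[:, 2:3]                     # perspective divide

# Illustrative calibration: identity rotation, zero translation,
# focal length 1000 px, principal point (640, 360).
extrinsic = np.hstack([np.eye(3), np.zeros((3, 1))])    # 3 x 4
intrinsic = np.array([[1000.0, 0.0, 640.0],
                      [0.0, 1000.0, 360.0],
                      [0.0, 0.0, 1.0]])
pts = np.array([[2.0, 0.0, 10.0]])  # 2 m to the right, 10 m ahead
print(radar_to_image(pts, extrinsic, intrinsic))  # [[840. 360.]]
```

Once radar points land in pixel coordinates, they can be associated with CNN detections in the same frame, which is the usual precondition for multi-level fusion.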
Technical Paper

Modeling the Response of an Automotive Event-Based Architecture: A Case Study

2003-03-03
2003-01-1199
While many current vehicle network systems for body bus applications use event-triggered communication, from a deterministic point of view this raises concerns about system timing due to message latency. This paper studies the latency characteristics of a typical body bus vehicle network using event-triggered analysis over the CAN bus.
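The latency concern described above is commonly quantified with classical CAN response-time analysis. A minimal sketch for the highest-priority message, assuming one maximal lower-priority frame is already occupying the bus (the bitrate and payload values are illustrative, not from the paper):

```python
def frame_time_bits(payload_bytes):
    """Worst-case CAN 2.0A frame length in bits, including stuff bits:
    8*s + 47 data/overhead bits plus floor((34 + 8*s - 1) / 4) stuff bits."""
    s = payload_bytes
    return 8 * s + 47 + (34 + 8 * s - 1) // 4

def worst_case_latency_ms(payload_bytes, bitrate=500_000, blocking_bytes=8):
    """Upper bound for the highest-priority message: it may have to wait
    for one maximal lower-priority frame (blocking), then transmit itself."""
    bits = frame_time_bits(blocking_bytes) + frame_time_bits(payload_bytes)
    return 1000.0 * bits / bitrate

# An 8-byte frame at 500 kbit/s: 135 + 135 = 270 bits -> 0.54 ms worst case.
print(round(worst_case_latency_ms(8), 3))  # 0.54
```

Lower-priority messages additionally accumulate queuing delay from all higher-priority traffic, which is where event-triggered body buses can become hard to bound.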
Technical Paper

On the Safety of Autonomous Driving: A Dynamic Deep Object Detection Approach

2019-04-02
2019-01-1044
To improve the safety of automated driving, a paramount goal of such an intelligent system is to detect and segment obstacles such as cars and pedestrians precisely. Object detection in self-driving vehicles has chiefly been accomplished by detecting objects and making decisions frame by frame over a video stream. However, diverse methods from both machine learning and machine vision can improve system performance, and it is significant to factor time into the detection phase. In other words, treating the system's inputs, which are emitted from sensors of various types such as cameras, radar, and LIDAR, as time-varying signals helps incorporate time as a fundamental modeling feature for forecasting objects while the car is moving. In this paper, we focus on eliciting a model through time to increase the accuracy of object detection in self-driving vehicles.
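Treating detections as a time-varying signal, as the abstract suggests, can be illustrated with a simple exponential moving average over per-track confidences. This is a hypothetical sketch, not the paper's model; `alpha` and the track IDs are illustrative assumptions:

```python
def smooth_confidences(frames, alpha=0.6):
    """Exponential moving average of per-object confidence across frames.
    Smoothing over time suppresses one-frame dropouts and spurious
    single-frame detections that per-frame decisions would pass through."""
    smoothed, state = [], {}
    for frame in frames:  # frame: {track_id: confidence}
        for tid, conf in frame.items():
            prev = state.get(tid, conf)
            state[tid] = alpha * conf + (1 - alpha) * prev
        smoothed.append(dict(state))
    return smoothed

# A pedestrian track whose detector confidence flickers at frame 2:
frames = [{"ped1": 0.9}, {"ped1": 0.2}, {"ped1": 0.9}]
result = smooth_confidences(frames)
print(result[1]["ped1"])  # 0.48: the dropout is damped, not taken at face value
```

A real system would pair this with data association (matching boxes across frames) and possibly a motion model, but the principle of carrying detector state through time is the same.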
Technical Paper

An Architecture for a Safety-Critical Steer-by-Wire System

2004-03-08
2004-01-0714
A hardware and software architecture suitable for a safety-critical steer-by-wire system is presented. The architecture supports three major failure modes and features several safety protocols and mechanisms. Failures due to component faults, software errors, and human errors are handled by the architecture and its safety protocols. A test implementation using replicated communication channels, controllers, sensors, and actuators has been performed. The test implementation uses the CAN protocol, Motorola S12 microcontrollers, and Microchip MCP250XX components with a steering wheel and road wheel simulator. The focus of the paper is on the application level, using system engineering principles that incorporate a holistic approach to achieve safety at various levels.
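Replicated sensors in a fault-tolerant architecture like this are typically reconciled by voting before the value reaches the actuator path. A minimal 2-out-of-3 sketch with fail-silent behaviour (the tolerance and readings are illustrative assumptions, not the paper's design):

```python
def majority_vote(readings, tolerance):
    """Return the agreed value from three replicated sensor readings,
    or None (fail silent) if no two readings agree within tolerance."""
    a, b, c = readings
    for x, y in ((a, b), (a, c), (b, c)):
        if abs(x - y) <= tolerance:
            return (x + y) / 2.0  # average the agreeing pair
    return None

# One faulty steering-angle sensor is outvoted by the two healthy ones:
print(majority_vote([10.0, 10.1, 55.0], tolerance=0.5))  # 10.05
# Three-way disagreement yields no output rather than a wrong one:
print(majority_vote([1.0, 5.0, 9.0], tolerance=0.5))     # None
```

Returning `None` instead of a guess reflects the fail-silent discipline such architectures rely on: a missing output can be detected and handled by a redundant channel, while a plausible-but-wrong steering value cannot.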
Technical Paper

Sensor-Fused Low Light Pedestrian Detection System with Transfer Learning

2024-04-09
2024-01-2043
Object detection using a camera sensor is essential for developing Advanced Driver Assistance Systems (ADAS) and Autonomous Driving (AD) vehicles. Due to recent advances in deep Convolutional Neural Networks (CNNs), CNN-based object detection has achieved state-of-the-art performance during daytime. However, using an RGB camera alone under poor lighting conditions, such as sun flare, snow, and foggy nights, causes the system's performance to drop and increases the likelihood of a crash. In addition, an object detection system based on an RGB camera alone performs poorly at nighttime because the camera sensor is susceptible to lighting conditions. This paper examines different pedestrian detection systems under low-light conditions and proposes a sensor-fused pedestrian detection system for low-light conditions, including nighttime. The proposed system fuses RGB and infrared (IR) thermal camera information.
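One simple way to combine RGB and thermal IR information, in the spirit of the proposed system, is late fusion of per-detection confidences weighted by scene illumination. This is a hypothetical sketch; the paper's actual fusion scheme may differ, and the weighting function is an assumption:

```python
def fuse_scores(rgb_score, ir_score, illumination):
    """Late fusion of per-box confidences from two detectors: trust the
    RGB detector in daylight and shift weight toward the thermal IR
    detector as the scene darkens. `illumination` is in [0, 1], 0 = dark."""
    w_rgb = illumination
    return w_rgb * rgb_score + (1.0 - w_rgb) * ir_score

day = fuse_scores(0.9, 0.6, illumination=1.0)    # RGB dominates: 0.9
night = fuse_scores(0.1, 0.8, illumination=0.0)  # IR dominates: 0.8
print(day, night)
```

The thermal channel is largely invariant to visible-light conditions, which is why weighting toward IR at night preserves pedestrian detections that the RGB detector misses.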