
Search Results

Technical Paper

Robust Sensor Fused Object Detection Using Convolutional Neural Networks for Autonomous Vehicles

2020-04-14
2020-01-0100
Environmental perception is considered an essential module for autonomous driving and Advanced Driver Assistance Systems (ADAS). Recently, deep Convolutional Neural Networks (CNNs) have become the state of the art, with many different architectures applied to various object detection problems. However, the performance of existing CNNs drops when detecting small objects at large distances. To deploy an environmental perception system in real-world applications, it is important that the system achieve high accuracy regardless of object size, distance, and weather conditions. In this paper, a robust sensor-fused object detection system is proposed that exploits the complementary advantages of vision and automotive radar sensors. The proposed system consists of three major components: 1) a Coordinate Conversion module, 2) a Multi-Level Sensor Fusion Detection (MSFD) system, and 3) a Temporal Correlation filtering module.
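The coordinate-conversion step in such a radar/vision fusion pipeline can be illustrated with a standard pinhole projection. The extrinsics `R`, `t` and intrinsics `K` below are hypothetical placeholders, not the paper's calibration:

```python
def radar_to_image(p_radar, R, t, K):
    """Project a 3D point (metres, radar frame) into pixel coordinates.

    R, t: assumed rigid-body extrinsics from the radar frame to the
    camera frame; K: 3x3 pinhole intrinsic matrix. Illustrative only.
    """
    def matvec(M, v):
        return [sum(M[i][j] * v[j] for j in range(3)) for i in range(3)]

    p_cam = [a + b for a, b in zip(matvec(R, p_radar), t)]  # radar -> camera frame
    u, v, w = matvec(K, p_cam)                              # pinhole projection
    return u / w, v / w                                     # normalize by depth

# Example with identity extrinsics and simple intrinsics
I3 = [[1.0, 0, 0], [0, 1.0, 0], [0, 0, 1.0]]
K = [[800.0, 0.0, 640.0],
     [0.0, 800.0, 360.0],
     [0.0, 0.0, 1.0]]
u, v = radar_to_image([2.0, 0.0, 10.0], I3, [0.0, 0.0, 0.0], K)
```

A point 10 m ahead and 2 m to the side lands to the right of the image center, as expected.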
Technical Paper

A Forward Collision Warning System Using Deep Reinforcement Learning

2020-04-14
2020-01-0138
Forward collision warning is one of the most challenging concerns in the safety of autonomous vehicles. Cooperation among sensors such as LIDAR, radar, and cameras helps enhance safety. Beyond a reliable object detector, the safety system must also be able to make reasonable decisions in the moment. In this work, we go beyond pure detection and focus on detecting the vehicles ahead of an autonomous car using a monocular camera. Specifically, we devise a solution that couples a deep object detector with a reinforcement learning method to produce forward collision warning signals. The proposed method models the relation between acceleration, distance, and the collision point using the area of the bounding box around the front vehicle. A learning-automata agent, as the reinforcement learning component, interacts with the environment to learn how to behave in a variety of hazardous situations.
Technical Paper

Autonomous Lane Change Control Using Proportional-Integral-Derivative Controller and Bicycle Model

2020-04-14
2020-01-0215
As advanced vehicle controls and autonomy become mainstream in the automotive industry, the need arises to employ traditional mathematical models and control strategies for simulating autonomous vehicle handling maneuvers. This study focuses on lane change maneuvers for autonomous vehicles driving at low speeds. The lane change methodology uses a PID (Proportional-Integral-Derivative) controller to command the steering wheel angle, based on the yaw motion and lateral displacement of the vehicle. The controller was developed and tested on a bicycle model of an electric vehicle (a 2017 Chevrolet Bolt), with the implementation done in MATLAB/Simulink. This simple mathematical model was chosen in order to limit computational demands, while still being capable of simulating a smooth lane change maneuver under the direction of the car’s mission planning module at modest levels of lateral acceleration.
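A minimal sketch of this kind of controller, assuming a kinematic (rather than dynamic) bicycle model and illustrative PID gains and vehicle parameters, none of which are the paper's tuned values:

```python
import math

def simulate_lane_change(lane_offset=3.5, v=5.0, dt=0.01, t_end=10.0,
                         kp=0.5, ki=0.001, kd=0.8, wheelbase=2.6):
    """PID steering on a kinematic bicycle model tracking a lateral offset.

    Gains, speed, and wheelbase are illustrative placeholders.
    Returns the final lateral position y (metres).
    """
    x = y = psi = 0.0                  # position and heading
    integral, prev_err = 0.0, lane_offset
    for _ in range(int(t_end / dt)):
        err = lane_offset - y
        integral += err * dt
        deriv = (err - prev_err) / dt
        prev_err = err
        # PID steering command, saturated to a physical steering limit
        delta = max(-0.5, min(0.5, kp * err + ki * integral + kd * deriv))
        # Kinematic bicycle model update
        x += v * math.cos(psi) * dt
        y += v * math.sin(psi) * dt
        psi += v / wheelbase * math.tan(delta) * dt
    return y

y_final = simulate_lane_change()
```

The derivative term damps the yaw response, so the vehicle settles into the target lane without sustained oscillation at this low speed.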
Technical Paper

A Robust Failure Proof Driver Drowsiness Detection System Estimating Blink and Yawn

2020-04-14
2020-01-1030
Many fatal automobile accidents can be attributed to fatigued and distracted driving. Driver monitoring systems alert distracted drivers by raising alarms. Most image-based driver drowsiness detection systems struggle to deliver failure-proof performance in real-time applications: failures in detecting the face and other key facial parts (eyes, nose, and mouth) cause the system to miss blinking and yawning in some frames. In this paper, a robust, failure-proof, real-time driver drowsiness detection system is proposed. The proposed system deploys a set of detectors for the face, blinking, and yawning, applied sequentially. A robust Multi-Task Convolutional Neural Network (MTCNN) with face-alignment capability is used for face detection; this detector attained 97% recall on the real-time driving dataset collected. The detected face is then passed to an ensemble of regression trees to locate the 68 facial landmarks.
Journal Article

Design and Control of Vehicle Trailer with Onboard Power Supply

2015-04-14
2015-01-0132
Typically, when someone needs to perform occasional towing tasks, such as towing a boat on a trailer, they have two choices. They can either purchase a larger, more powerful vehicle than they require for their regular usage, or they can rent a larger vehicle when they need to tow something. In this project, we propose a third alternative: a trailer with an on-board power supply, which can be towed by a small vehicle. This system requires a means of sensing how much power the trailer's power supply should provide, and an appropriate control system to provide this power. In this project, we design and model the trailer, a standard small car, and the control system, and evaluate the concept's feasibility. We have selected a suitable drive for the trailer: a DC motor coupled directly to the trailer's single drive wheel, which allows us to dispense with the need for a differential.
Technical Paper

KDepthNet: Mono-Camera Based Depth Estimation for Autonomous Driving

2022-03-29
2022-01-0082
Obstacle avoidance is a vital factor in safe autonomous driving. When a vehicle travels from an arbitrary start position to a target position in its environment, an appropriate route must avoid both static and moving obstacles. Knowing the accurate depth of each obstacle in the scene contributes to obstacle avoidance. In recent years, precise depth estimation systems have benefited from notable advances in deep neural networks and in hardware. Depth estimation methods for autonomous vehicles often use lasers, structured light, and other reflections off object surfaces to capture depth point clouds, complete surface models, and estimate scene depth maps. However, estimating precise depth maps remains challenging because of computational complexity and processing time. By contrast, image-based depth estimation approaches have recently attracted attention and can be applied to a broad range of applications.
Technical Paper

Physical Validation Testing of a Smart Tire Prototype for Estimation of Tire Forces

2018-04-03
2018-01-1117
The safety of ground vehicles is a matter of critical importance. Vehicle safety is enhanced with the use of control systems that mitigate the effect of unachievable demands from the driver, especially demands for tire forces that cannot be developed. This paper presents the results of a smart tire prototyping and validation study, which is an investigation of a smart tire system that can be used as part of these mitigation efforts. The smart tire can monitor itself using in-tire sensors and provide information regarding its own tire forces and moments, which can be transmitted to a vehicle control system for improved safety. The smart tire is designed to estimate the three orthogonal tire forces and the tire aligning moment at least once per wheel revolution during all modes of vehicle operation, with high accuracy. The prototype includes two in-tire piezoelectric deformation sensors and a rotary encoder.
Technical Paper

On the Safety of Autonomous Driving: A Dynamic Deep Object Detection Approach

2019-04-02
2019-01-1044
To improve the safety of automated driving, the paramount goal of this intelligent system is to detect and segment obstacles such as cars and pedestrians precisely. Object detection in self-driving vehicles has chiefly been accomplished by making decisions and detecting objects frame by frame in a video stream. Diverse methods from both machine learning and machine vision can improve the performance of such a system, and it is significant to factor the role of time into the detection phase. In other words, treating the system's inputs, emitted by various sensors such as cameras, radar, and LIDAR, as time-varying signals makes it possible to incorporate time as a fundamental feature when modeling and forecasting objects while the car is moving. In this paper, we focus on eliciting a model through time to increase the accuracy of object detection in self-driving vehicles.
Technical Paper

Sensor-Fused Low Light Pedestrian Detection System with Transfer Learning

2024-04-09
2024-01-2043
Object detection using a camera sensor is essential for developing Advanced Driver Assistance Systems (ADAS) and Autonomous Driving (AD) vehicles. Due to recent advances in deep Convolutional Neural Networks (CNNs), object detection based on CNNs has achieved state-of-the-art performance during the daytime. However, using an RGB camera alone for object detection under poor lighting conditions, such as sun flare, snow, and foggy nights, degrades the system's performance and increases the likelihood of a crash. In addition, an object detection system based on an RGB camera performs poorly at night because the camera sensor is susceptible to lighting conditions. This paper explores different pedestrian detection systems for low-light conditions and proposes a sensor-fused pedestrian detection system for low-light conditions, including nighttime. The proposed system fuses RGB and infrared (IR) thermal camera information.
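One common way to realize RGB/IR fusion is decision-level (late) fusion, matching detections across modalities by intersection-over-union. The matching scheme and noisy-OR confidence combination below are illustrative assumptions, not the paper's exact method:

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter > 0 else 0.0

def fuse_detections(rgb_dets, ir_dets, iou_thr=0.5):
    """Late fusion of (box, confidence) lists from RGB and IR detectors.

    A detection confirmed by the other modality (IoU >= thr) gets a boosted
    noisy-OR confidence; unmatched detections from either sensor are kept.
    """
    fused, matched_ir = [], set()
    for box_r, conf_r in rgb_dets:
        best_j, best = -1, 0.0
        for j, (box_i, _) in enumerate(ir_dets):
            o = iou(box_r, box_i)
            if o > best:
                best, best_j = o, j
        if best >= iou_thr:
            matched_ir.add(best_j)
            conf_i = ir_dets[best_j][1]
            fused.append((box_r, 1 - (1 - conf_r) * (1 - conf_i)))  # noisy-OR
        else:
            fused.append((box_r, conf_r))
    fused += [d for j, d in enumerate(ir_dets) if j not in matched_ir]
    return fused

rgb = [((0, 0, 10, 10), 0.8)]
ir = [((1, 1, 10, 10), 0.5), ((50, 50, 60, 60), 0.9)]
result = fuse_detections(rgb, ir)
```

At night, the IR-only detection (the second IR box above) survives fusion even though the RGB detector missed it, which is the motivation for adding the thermal modality.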