Search Results

Journal Article

3D Scene Reconstruction with Sparse LiDAR Data and Monocular Image in Single Frame

2017-09-23
Current approaches to single-frame 3D scene reconstruction mainly use stereo vision, Structure from Motion (SfM), or mobile LiDAR sensors. Each of these approaches has its own limitations: stereo vision has a high computational cost, SfM needs accurate calibration across a sequence of images, and an onboard LiDAR sensor can only provide sparse points without color information. This paper describes a novel method for traffic scene semantic segmentation that combines a sparse LiDAR point cloud (e.g., from Velodyne scans) with a monocular color image. The key novelty of the method is the semantic coupling of the stereoscopic point cloud with the color lattice from the camera image, labelled through a Convolutional Neural Network (CNN).
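
The fusion step described above hinges on projecting each sparse LiDAR point into the camera image so it can pick up a color and semantic label. A minimal sketch of that projection, assuming a pinhole camera model; the intrinsics K, the extrinsics T_cam_lidar, and the label lookup are illustrative placeholders, not the paper's calibration or network:

```python
# Minimal sketch, not the paper's pipeline: project sparse LiDAR points into a
# monocular image so each point can pick up a color / CNN semantic label.
# K (pinhole intrinsics) and T_cam_lidar (extrinsics) are assumed placeholders.
import numpy as np

K = np.array([[721.5, 0.0, 609.6],
              [0.0, 721.5, 172.9],
              [0.0, 0.0, 1.0]])          # assumed pinhole intrinsics
T_cam_lidar = np.eye(4)                  # assumed LiDAR-to-camera transform

def label_lidar_points(points_lidar, label_image):
    """Return pixel coords and per-point semantic labels for points in view."""
    pts_h = np.hstack([points_lidar, np.ones((len(points_lidar), 1))])
    pts_cam = (T_cam_lidar @ pts_h.T).T[:, :3]        # LiDAR frame -> camera frame
    pts_cam = pts_cam[pts_cam[:, 2] > 0.1]            # keep points ahead of the camera
    uvw = (K @ pts_cam.T).T
    uv = (uvw[:, :2] / uvw[:, 2:3]).astype(int)       # perspective divide -> pixels
    h, w = label_image.shape[:2]
    ok = (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)
    return uv[ok], label_image[uv[ok, 1], uv[ok, 0]]  # (u, v) plus CNN labels
```
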
Journal Article

Physics-Based Simulation Solutions for Testing Performance of Sensors and Perception Algorithm under Adverse Weather Conditions

2022-04-13
To overcome the limitations of physical testing, a physics-based simulation workflow was developed by coupling computational fluid dynamics (CFD) with optical simulations of camera and lidar sensors. Computational data for various weather conditions can be rapidly generated by CFD and used to assess the impact of those conditions on the sensors and perception algorithms. ...The developed CFD-optical workflow was tested using rainy conditions as a test case: the data were generated using CFD and exported to optical simulation software to assess how rain affects the performance of a visible camera, a lidar, and an open-source perception algorithm. The results of the analysis indicate that rainy conditions, and the intensification of those conditions by other vehicles on the road, degrade the performance of camera and lidar sensors. The ability of the perception algorithm to detect vehicles deteriorated significantly once the rain intensified to moderate levels.
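
As a toy illustration of the kind of weather-dependent degradation such a workflow simulates (not the paper's CFD-optical pipeline), a simple Beer-Lambert model attenuates lidar return power with range and rain rate; the relation alpha = k * R**0.6 and the constant k below are hypothetical placeholders:

```python
# Toy illustration only (not the paper's CFD-optical workflow): Beer-Lambert
# attenuation of lidar return power in rain. The relation alpha = k * R**0.6
# and the constant k are hypothetical placeholders.
import math

def lidar_return_power(p_tx, range_m, rain_rate_mm_h, k=0.01):
    alpha = k * rain_rate_mm_h ** 0.6        # assumed extinction coefficient [1/km]
    path_km = 2.0 * range_m / 1000.0         # two-way path: out to target and back
    return p_tx * math.exp(-alpha * path_km)

print(lidar_return_power(1.0, 100.0, 2.0))   # light rain
print(lidar_return_power(1.0, 100.0, 25.0))  # heavier rain -> weaker return
```
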
Journal Article

Pedestrian Detection Method Based on Roadside Light Detection and Ranging

2021-11-12
To tackle this challenge, this study proposes a pedestrian detection algorithm based on roadside Light Detection and Ranging (LiDAR) that combines traditional and deep learning algorithms. To meet real-time demands, Octree with region-of-interest (ROI) selection is introduced and improved to filter the background in each frame, which speeds up clustering. ...Afterward, an improved Euclidean clustering algorithm was proposed by analyzing the scanning characteristics of the LiDAR. Concretely, based on the vertical and horizontal angular resolution of the LiDAR, the authors propose a new method for determining the search radius of Euclidean clustering that adapts to distance. ...Moreover, the entire background filtering and clustering process takes 88.7 ms per frame, and the resulting model, deployed on an NVIDIA Jetson AGX Xavier, attains an inference time of 110 ms per frame, which meets the LiDAR update rate and enables real-time application.
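
A minimal sketch of the adaptive search-radius idea: the spacing between neighbouring LiDAR returns grows with range and with the sensor's angular resolution, so the clustering radius can scale with distance. The resolutions and safety margin below are assumed values, not the paper's parameters:

```python
# Minimal sketch of the adaptive search radius, assuming the radius should at
# least span the gap between adjacent beams at a given range. The angular
# resolutions and safety margin are assumed values, not the paper's parameters.
import math

H_RES_DEG = 0.2   # assumed horizontal angular resolution
V_RES_DEG = 2.0   # assumed vertical angular resolution
MARGIN = 1.5      # assumed padding so adjacent returns on one object still connect

def search_radius(distance_m):
    worst_res = math.radians(max(H_RES_DEG, V_RES_DEG))
    # chord between two adjacent beams at this range, padded by MARGIN
    return MARGIN * 2.0 * distance_m * math.sin(worst_res / 2.0)

print(search_radius(10.0), search_radius(50.0))  # radius grows with range
```
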
Journal Article

A Novel Approach to Light Detection and Ranging Sensor Placement for Autonomous Driving Vehicles Using Deep Deterministic Policy Gradient Algorithm

2024-01-31
This article presents a novel approach to optimizing the placement of light detection and ranging (LiDAR) sensors in autonomous driving vehicles using machine learning. As autonomous driving technology advances, LiDAR sensors play a crucial role in providing accurate collision data for environmental perception.
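
A hedged sketch of the search problem such an approach optimizes, not the article's implementation: the continuous action is a candidate mounting pose, the reward a toy coverage score. Target layout and field-of-view limits are illustrative assumptions, and the DDPG actor-critic training loop itself is omitted:

```python
# Hedged sketch of the placement-search problem, not the article's method: the
# continuous action is a LiDAR mounting pose, the reward a toy coverage score.
# Target layout and FOV limits are illustrative assumptions; the DDPG
# actor-critic training loop itself is omitted.
import numpy as np

rng = np.random.default_rng(0)
targets = rng.uniform([-30.0, -10.0, 0.0], [30.0, 10.0, 2.0], size=(500, 3))

def coverage_reward(pose):
    """pose = (x, y, z) mount position; reward = fraction of targets in the FOV."""
    vec = targets - np.asarray(pose)
    dist = np.linalg.norm(vec, axis=1)
    elev = np.degrees(np.arcsin(np.clip(vec[:, 2] / np.maximum(dist, 1e-6), -1, 1)))
    visible = (dist < 100.0) & (elev > -25.0) & (elev < 15.0)  # assumed FOV limits
    return float(visible.mean())

# A DDPG agent would propose poses (continuous actions), observe this reward,
# and learn a policy that drifts toward high-coverage placements.
print(coverage_reward((0.0, 0.0, 1.8)))
```
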
Journal Article

A Combined LiDAR-Camera Localization for Autonomous Race Cars

2022-01-06
The Steel and Foam Energy Reduction (SAFER) barrier, which encloses the whole oval, is detected by a three-dimensional (3D) LiDAR, and the transformation from the barrier to the ego vehicle is estimated. We have validated the new approach via different simulation methods.
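
A minimal sketch of how a detected barrier wall can anchor localization: fit the wall's principal direction in the ego frame, then read off the lateral offset and relative heading. The 2D line fit below is an illustrative simplification of the paper's 3D pipeline:

```python
# Minimal sketch, assuming the detected SAFER-barrier points can be treated as
# a 2D line in the ego frame (x forward, y left); an illustrative simplification
# of the paper's 3D detection and estimation pipeline.
import numpy as np

def barrier_offset_and_heading(barrier_xy):
    """barrier_xy: (N, 2) wall points; returns signed lateral offset and yaw."""
    centered = barrier_xy - barrier_xy.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    direction = vt[0]                                     # principal wall direction
    heading_err = np.arctan2(direction[1], direction[0])  # wall yaw in ego frame
    normal = np.array([-direction[1], direction[0]])
    lateral = float(normal @ barrier_xy.mean(axis=0))     # signed distance to wall
    return lateral, heading_err
```
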
Journal Article

Safety Verification of RSS Model-Based Variable Focus Function Camera for Autonomous Vehicle

2022-02-25
In this article, an RSS model that ensures safety and reliability was derived for variable focus function cameras that can cover the sensing regions of radar and lidar with a single camera. By calculating the safety distance under various acceleration and speed conditions, this work can inform the choice of safety distance according to the performance of an autonomous vehicle in the future.
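
For reference, the RSS longitudinal safe-following distance (Shalev-Shwartz et al.) that such an analysis evaluates under varying speed and acceleration conditions; the response time and acceleration/braking bounds below are illustrative assumptions:

```python
# The RSS longitudinal safe-following distance; parameter values (response
# time, accel/brake bounds) are illustrative assumptions, not the article's.
def rss_min_distance(v_rear, v_front, rho=0.5, a_max=3.0, b_min=4.0, b_max=8.0):
    """Minimum safe gap [m] so the rear car can always brake in time.
    v_rear, v_front: speeds [m/s]; rho: response time [s]; a_max: rear car's
    max accel; b_min: its min braking; b_max: front car's max braking [m/s^2].
    """
    v_after = v_rear + rho * a_max          # rear speed after the response time
    d = (v_rear * rho
         + 0.5 * a_max * rho ** 2
         + v_after ** 2 / (2.0 * b_min)
         - v_front ** 2 / (2.0 * b_max))
    return max(d, 0.0)

print(rss_min_distance(v_rear=25.0, v_front=25.0))  # ~90 km/h following gap
```
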
Journal Article

Implementation of the Correction Algorithm in an Environment with Dynamic Actors

2023-03-15
These detection and tracking tasks are performed by the AV perception system, which utilizes data from sensors such as LIDARs, radars, and cameras. Most AVs are fitted with multiple sensors to create redundancy and avoid dependence on a single sensor. ...The correction algorithm is first tested assuming the availability of ground-truth information to correct the LIDAR, and then tested with camera images, which are used to determine the ground truth. The comparison metric between the expected and optimal parameters is the mean absolute error (MAE).
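
The comparison metric named above is a one-liner; the values in the usage example are illustrative:

```python
# Mean absolute error between expected and optimal parameter vectors; the
# example values are illustrative, not the paper's results.
import numpy as np

def mae(expected, optimal):
    return float(np.mean(np.abs(np.asarray(expected) - np.asarray(optimal))))

print(mae([0.10, -0.05, 1.2], [0.12, -0.02, 1.0]))  # -> 0.0833...
```
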
Journal Article

Localization and Perception for Control and Decision-Making of a Low-Speed Autonomous Shuttle in a Campus Pilot Deployment

2018-11-12
The article treats autonomous driving with Real-Time Kinematic (RTK) GPS (Global Positioning System) combined with an inertial measurement unit (IMU), together with simultaneous localization and mapping (SLAM) using a three-dimensional light detection and ranging (LIDAR) sensor, which provides solutions for scenarios where GPS is unavailable or where a lower-cost, and hence lower-accuracy, GPS is desirable.
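
A hedged sketch of that fallback logic, assuming an NMEA-style GPS fix-quality flag; the threshold and pose handling are illustrative, not the shuttle's actual fusion implementation:

```python
# Hedged sketch, assuming NMEA fix-quality conventions (4 = RTK fixed): use the
# RTK-GPS/IMU pose when the fix is good, otherwise fall back to LIDAR SLAM.
# The threshold and pose handling are illustrative assumptions.
def select_pose(gps_pose, gps_fix_quality, slam_pose, rtk_fixed=4):
    if gps_pose is not None and gps_fix_quality >= rtk_fixed:
        return gps_pose        # centimetre-level RTK solution available
    return slam_pose           # GPS degraded or unavailable: use LIDAR SLAM
```
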