
Search Results

Viewing 1 to 9 of 9
Technical Paper

Benchmarking the Localization Accuracy of 2D SLAM Algorithms on Mobile Robotic Platforms

2020-04-14
2020-01-1021
Simultaneous Localization and Mapping (SLAM) algorithms are extensively utilized in the field of autonomous navigation. In particular, numerous open-source Robot Operating System (ROS) based SLAM solutions, such as Gmapping, Hector, and Cartographer, have simplified their deployment in applications. However, establishing the accuracy and precision of these ‘out-of-the-box’ SLAM algorithms is necessary for improving further applications such as planning, navigation, and control. Existing benchmarking literature has largely focused on validating SLAM algorithms based on the quality of the generated maps. In this paper, however, we focus on examining the localization accuracy of existing 2D LiDAR-based indoor SLAM algorithms. The fidelity of these implementations is compared against the OptiTrack motion capture system, which is capable of tracking moving objects with sub-millimeter precision.
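One way to quantify the localization accuracy described above is an absolute trajectory error computed against the motion-capture ground truth. The sketch below is a minimal illustration of that metric, assuming time-synchronized 2D trajectories already expressed in a common frame; the function names and toy data are illustrative and not taken from the paper.

```python
# Illustrative sketch: absolute trajectory error (ATE) between SLAM pose
# estimates and motion-capture ground truth, assuming both trajectories are
# already time-synchronized and expressed in a common frame.
import numpy as np

def absolute_trajectory_error(slam_xy: np.ndarray, mocap_xy: np.ndarray) -> dict:
    """slam_xy, mocap_xy: (N, 2) arrays of matched 2D positions in meters."""
    errors = np.linalg.norm(slam_xy - mocap_xy, axis=1)   # per-pose Euclidean error
    return {
        "rmse": float(np.sqrt(np.mean(errors ** 2))),      # root-mean-square error
        "mean": float(np.mean(errors)),
        "max": float(np.max(errors)),
    }

if __name__ == "__main__":
    # Toy data standing in for SLAM output vs. OptiTrack ground truth.
    truth = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.1]])
    estimate = truth + np.array([[0.02, -0.01], [0.03, 0.02], [-0.01, 0.04]])
    print(absolute_trajectory_error(estimate, truth))
```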
Technical Paper

Capability-Driven Adaptive Task Distribution for Flexible Multi-Human-Multi-Robot (MH-MR) Manufacturing Systems

2020-04-14
2020-01-1303
Collaborative robots are increasingly used in smart manufacturing because of their ability to work beside and collaborate with human workers. With the deployment of these robots, manufacturing tasks are more often accomplished by multiple humans and multiple robots (MH-MR) working as a team. In such MH-MR collaboration scenarios, the distribution of tasks among the multiple humans and multiple robots is critical to efficiency, and it is made more challenging by the heterogeneity of the agents. Existing approaches to task distribution among multiple agents mostly consider humans with assumed or known capabilities. However, human capabilities change continuously due to various factors, which may lead to suboptimal efficiency. Some research has studied several human factors in manufacturing and applied them to adjust robot tasks and behaviors.
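As an illustration of capability-driven task distribution, the sketch below assigns tasks to heterogeneous agents from a capability matrix and can simply be re-solved whenever the capability estimates change. The agents, tasks, and scores are hypothetical, and the assignment method (Hungarian algorithm) is an assumption for illustration, not necessarily the paper's approach.

```python
# Illustrative sketch only: assigning tasks to heterogeneous human/robot agents
# from a capability matrix, re-solved whenever capability estimates change.
import numpy as np
from scipy.optimize import linear_sum_assignment

def assign_tasks(capability: np.ndarray) -> list:
    """capability[i, j]: estimated suitability of agent i for task j (higher is better)."""
    # The Hungarian algorithm minimizes cost, so negate suitability to maximize it.
    agents, tasks = linear_sum_assignment(-capability)
    return list(zip(agents.tolist(), tasks.tolist()))

# Two humans and two robots, four tasks; rows are agents, columns are tasks.
capability = np.array([
    [0.9, 0.4, 0.2, 0.6],   # human 1
    [0.7, 0.8, 0.3, 0.5],   # human 2
    [0.2, 0.6, 0.9, 0.4],   # robot 1
    [0.3, 0.5, 0.8, 0.7],   # robot 2
])
print(assign_tasks(capability))   # re-run as capability estimates drift over time
```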
Technical Paper

Development of Endurance Testing Apparatus Simulating Wheel Dynamics and Environment on Lunar Terrain

2010-04-12
2010-01-0765
This paper details the design and development of a NASA testing system used to simulate wheel operation in a lunar environment under different loading conditions. The test system was developed to test the design of advanced non-pneumatic wheels to be used on the NASA All-Terrain Hex-Legged Extra-Terrestrial Explorer (ATHLETE). By allowing easy maneuverability around the lunar surface, the ATHLETE enables many research and exploration opportunities that were not previously possible. Each leg has six degrees of freedom, allowing the ATHLETE to accomplish tasks not possible on other extra-terrestrial exploration platforms. The robotic vehicle is expected to last longer than previous lunar rovers.
Technical Paper

Design of an Open-Loop Steering Robot Profile for Double Lane Change Maneuver Using Simulation

2010-04-12
2010-01-0096
This paper presents a methodology for designing a simple open-loop steering robot profile to simulate a double lane change maneuver for track testing of a heavy tractor/trailer combination vehicle. For track testing of vehicles in a lane change type of maneuver, a human driver is typically used with a desired path defined with visual cues such as traffic cones. Such tests have been shown to result in poor test repeatability due to natural variation in driver steering behavior. While a steering robot may be used to overcome this repeatability issue, such a robot typically implements open-loop maneuvers and cannot be guaranteed to cause the vehicle to accurately follow a pre-determined trajectory. This paper presents a method using offline simulation to design an open-loop steering maneuver resulting in a realistic approximation of a double lane change maneuver.
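A minimal sketch of how such an open-loop steering profile might be parameterized is shown below: two opposing sine pulses separated by a straight-running dwell, approximating a double lane change. The amplitude, pulse duration, and dwell are placeholder values that would normally be tuned in offline simulation; this is not the profile from the paper.

```python
# A minimal sketch, not the paper's method: a smooth open-loop steering-wheel
# angle profile approximating a double lane change as two opposing sine pulses.
import numpy as np

def double_lane_change_profile(amplitude_deg=90.0, pulse_s=2.0, dwell_s=1.0, dt=0.01):
    def sine_pulse(duration, sign):
        t = np.arange(0.0, duration, dt)
        return sign * amplitude_deg * np.sin(2.0 * np.pi * t / duration)

    lane_exit = sine_pulse(pulse_s, +1)      # steer out of the lane and straighten
    hold = np.zeros(int(dwell_s / dt))       # straight running in the adjacent lane
    lane_return = sine_pulse(pulse_s, -1)    # steer back to the original lane
    angle = np.concatenate([lane_exit, hold, lane_return])
    time = np.arange(angle.size) * dt
    return time, angle

t, delta = double_lane_change_profile()
print(f"{t[-1]:.2f} s profile, peak steering angle {delta.max():.1f} deg")
```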
Technical Paper

VoGe: A Voice and Gesture System for Interacting with Autonomous Cars

2017-03-28
2017-01-0068
Fully autonomous vehicles are expected to reach the market within the next 20 years. Advances in their development are creating paradigm shifts across automotive-related research areas. Vehicle interior design and human-vehicle interaction are evolving to enable interaction flexibility inside the car. However, most of today’s vehicle manufacturers’ autonomous car concepts retain the steering wheel as a control element. While this approach allows the driver to take over the vehicle’s route if needed, it constrains the aforementioned interaction flexibility. Other approaches, such as the one proposed by Google, enable interaction flexibility by removing the steering wheel and the accelerator and brake pedals. However, this prevents users from taking control of the vehicle’s route when needed, not allowing them to make spontaneous on-route decisions, such as stopping at a specific point of interest.
Technical Paper

Access Control Requirements for Autonomous Robotic Fleets

2023-04-11
2023-01-0104
Access control enforces security policies for controlling critical resources. For V2X (Vehicle-to-Everything) autonomous military vehicle fleets, network middleware systems such as ROS (Robot Operating System) expose system resources through networked publisher/subscriber and client/server paradigms. Without proper access control, these systems are vulnerable to attacks from compromised network nodes, which may perform data-poisoning attacks, flood the network with packets, or attempt to gain lateral control of other resources. Access control for robotic middleware systems has been investigated in both ROS1 and ROS2, but these implementations lack mechanisms for evaluating a policy’s consistency and completeness or for writing expressive policies for distributed fleets. We explore an RBAC (Role-Based Access Control) mechanism layered onto ROS environments that uses local permission caches with precomputed truth tables for fast policy evaluation.
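A hedged sketch of the general mechanism described above is shown below: (role, resource, action) decisions are precomputed into a truth table that serves as a local permission cache, so runtime checks reduce to a single lookup with deny-by-default behavior. The roles, topic names, and policy rules are invented for illustration and are not drawn from the paper or from ROS.

```python
# Sketch of an RBAC permission cache with a precomputed truth table; all
# role/resource names are hypothetical and only illustrate the lookup idea.
from itertools import product

ROLES = ["fleet_leader", "scout", "logistics"]
RESOURCES = ["/cmd_vel", "/map", "/fleet_status"]
ACTIONS = ["publish", "subscribe"]

def policy(role: str, resource: str, action: str) -> bool:
    """Expressive policy, evaluated once offline."""
    if role == "fleet_leader":
        return True
    if role == "scout":
        return resource == "/map" or (resource == "/fleet_status" and action == "subscribe")
    return action == "subscribe"          # logistics: read-only access

# Precompute the full truth table once; nodes then consult only this cache.
PERMISSION_CACHE = {
    (r, res, act): policy(r, res, act)
    for r, res, act in product(ROLES, RESOURCES, ACTIONS)
}

def check(role: str, resource: str, action: str) -> bool:
    return PERMISSION_CACHE.get((role, resource, action), False)   # deny by default

print(check("scout", "/cmd_vel", "publish"))    # False
print(check("scout", "/map", "subscribe"))      # True
```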
Technical Paper

Utilizing Neural Networks for Semantic Segmentation on RGB/LiDAR Fused Data for Off-road Autonomous Military Vehicle Perception

2023-04-11
2023-01-0740
Image segmentation has historically been a technique for analyzing terrain for military autonomous vehicles. One weakness of image segmentation from camera data is that it lacks depth information and can be affected by ambient lighting. Light detection and ranging (LiDAR) is an emerging technology in image segmentation that is able to estimate distances to the objects it detects. One advantage of LiDAR is the ability to gather accurate distances regardless of day, night, shadows, or glare. This study examines the fusion of LiDAR and camera image segmentation to improve an advanced driver-assistance systems (ADAS) algorithm for off-road autonomous military vehicles. The volume of points generated by LiDAR provides the vehicle with distance and spatial data about its surroundings.
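One common fusion step consistent with the description above is to project LiDAR points into the camera image and use the resulting sparse depth channel alongside RGB as the segmentation input. The sketch below assumes known camera intrinsics and LiDAR-to-camera extrinsics; the calibration values in the toy example are placeholders, not values from the paper.

```python
# Minimal illustration of one RGB/LiDAR fusion step: project LiDAR points into
# the camera image to build a sparse per-pixel depth channel alongside RGB.
import numpy as np

def project_lidar_to_depth(points_xyz, T_cam_lidar, K, image_hw):
    """points_xyz: (N, 3) LiDAR points; returns a sparse (H, W) depth image in meters."""
    h, w = image_hw
    pts_h = np.hstack([points_xyz, np.ones((points_xyz.shape[0], 1))])   # homogeneous coords
    cam = (T_cam_lidar @ pts_h.T)[:3]                                     # into camera frame
    cam = cam[:, cam[2] > 0.1]                                            # keep points in front
    uvz = K @ cam
    u, v, z = (uvz[0] / uvz[2]).astype(int), (uvz[1] / uvz[2]).astype(int), cam[2]
    depth = np.zeros((h, w), dtype=np.float32)
    valid = (u >= 0) & (u < w) & (v >= 0) & (v < h)
    depth[v[valid], u[valid]] = z[valid]
    return depth   # stack with RGB, e.g. np.dstack([rgb, depth]), as the network input

# Toy calibration: placeholder intrinsics and identity extrinsics.
K = np.array([[700.0, 0.0, 320.0], [0.0, 700.0, 240.0], [0.0, 0.0, 1.0]])
pts = np.array([[0.5, 0.0, 5.0], [-0.4, 0.2, 8.0]])
print(np.count_nonzero(project_lidar_to_depth(pts, np.eye(4), K, (480, 640))))
```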
Technical Paper

Modeling and Learning of Object Placing Tasks from Human Demonstrations in Smart Manufacturing

2019-04-02
2019-01-0700
In this paper, we present a framework for a robot to learn how to place objects onto a workpiece by learning from humans in smart manufacturing. In the proposed framework, the rational scene dictionary (RSD) entries corresponding to the keyframes of the task (KFT) are used to identify the general object-action-location relationships. A Generalized Voronoi Diagram (GVD) based contour is used to determine the relative position and orientation between the object and the corresponding workpiece at the final state. In the learning phase, we track the image segments in the human demonstration. Whenever the spatial relation of some segments changes in a discontinuous way, the state change is recorded in the RSD. The KFT is abstracted after traversing and searching the RSD, while the relative position and orientation of the object and the corresponding mount are represented by GVD-based contours for the keyframes.
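The keyframe idea above can be illustrated with a small sketch: flag a keyframe whenever the spatial relation between two tracked segments changes discontinuously, approximated here as a jump in centroid distance above a hypothetical threshold. This is an illustration only, not the paper's RSD/KFT implementation.

```python
# Illustrative keyframe detection: flag frames where the relation between two
# tracked segments (object and workpiece) jumps discontinuously.
import numpy as np

def detect_keyframes(seg_a_centroids, seg_b_centroids, jump_threshold=0.15):
    """Inputs: (T, 2) centroid tracks in normalized image coordinates."""
    dist = np.linalg.norm(np.asarray(seg_a_centroids) - np.asarray(seg_b_centroids), axis=1)
    jumps = np.abs(np.diff(dist))                       # frame-to-frame change in relation
    return [t + 1 for t, jump in enumerate(jumps) if jump > jump_threshold]

# Object approaches the workpiece smoothly, then is placed (relation jumps).
a = [(0.2, 0.2), (0.25, 0.2), (0.3, 0.2), (0.55, 0.2)]
b = [(0.6, 0.2)] * 4
print(detect_keyframes(a, b))   # -> [3]
```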
Technical Paper

A Voice and Pointing Gesture Interaction System for On-Route Update of Autonomous Vehicles’ Path

2019-04-02
2019-01-0679
This paper describes the development and simulation of a voice and pointing gesture interaction system for on-route updates of an autonomous vehicle’s path. The objective of this research is to provide users of autonomous vehicles with a human-vehicle interaction mode that enables them to make and communicate spontaneous decisions to the autonomous car, modifying its pre-defined autonomous route in real time. For example, similar to giving directions to a taxi driver, a user will be able to tell the car “Stop there” or “Take that exit”. In this way, the user control/spontaneity versus interaction flexibility dilemma of current autonomous vehicle concepts could be solved, potentially increasing user acceptance of this technology. The system was designed following a level-structured state machine approach. The simulations were developed using MATLAB and V-REP, a robotics simulation platform with accurate vehicle and sensor models.
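A minimal sketch of such a state machine is given below: a recognized voice command arms the machine, and a subsequent pointing gesture resolves it into a path update. The states, commands, and PathUpdate payload are hypothetical and are not taken from the paper or its MATLAB/V-REP simulation.

```python
# Hypothetical two-state machine fusing a voice command with a pointing gesture.
from dataclasses import dataclass
from enum import Enum, auto
from typing import Optional, Tuple

class State(Enum):
    IDLE = auto()
    AWAIT_GESTURE = auto()

@dataclass
class PathUpdate:
    command: str                      # e.g. "stop_there", "take_exit"
    target_xy: Tuple[float, float]    # road point implied by the pointing gesture

class VoiceGestureFSM:
    def __init__(self):
        self.state = State.IDLE
        self._pending: Optional[str] = None

    def on_voice(self, command: str) -> None:
        # A recognized voice command arms the machine and waits for a gesture.
        if self.state is State.IDLE and command in {"stop_there", "take_exit"}:
            self._pending = command
            self.state = State.AWAIT_GESTURE

    def on_gesture(self, target_xy: Tuple[float, float]) -> Optional[PathUpdate]:
        # A pointing gesture resolves the pending command into a path update.
        if self.state is State.AWAIT_GESTURE and self._pending is not None:
            update = PathUpdate(self._pending, target_xy)
            self._pending, self.state = None, State.IDLE
            return update
        return None

fsm = VoiceGestureFSM()
fsm.on_voice("stop_there")
print(fsm.on_gesture((42.0, 7.5)))   # PathUpdate(command='stop_there', target_xy=(42.0, 7.5))
```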