Research on a Lidar Based Multi-sensor Fusion Localization System for High-Dynamic Autonomous Driving

Research on a Lidar Based Multi-sensor Fusion Localization System for High-Dynamic Autonomous Driving PDF Author: 汪聖倫
Publisher:
ISBN:
Category :
Languages : en
Pages : 71


Book Description


Sensor Fusion in Localization, Mapping and Tracking

Sensor Fusion in Localization, Mapping and Tracking PDF Author: Constantin Wellhausen
Publisher:
ISBN:
Category :
Languages : en
Pages : 0


Book Description
Making autonomous driving possible requires extensive information about the surroundings as well as the state of the vehicle. While specific information can be obtained through individual sensors, a full estimation requires a multi-sensor approach, including redundant sources of information to increase robustness. This thesis gives an overview of tasks that arise in sensor fusion in autonomous driving and presents solutions at a high level of detail, including derivations and parameters where required to enable re-implementation. The thesis includes theoretical considerations of the approaches as well as practical evaluations. Evaluations are also included for approaches that did not prove to solve their tasks robustly. This follows the belief that negative results also further the state of the art by giving researchers ideas about suitable and unsuitable approaches, where otherwise the unsuitable approaches may be re-implemented multiple times with similar results. The thesis focuses on model-based methods, also referred to in the following as classical methods, with a special focus on probabilistic and evidential theories. Methods based on deep learning are explicitly not covered in order to maintain explainability and robustness, which would otherwise rely strongly on the available training data. The main focus of the work lies in three fields of autonomous driving: localization, which estimates the state of the ego-vehicle; mapping, or obstacle detection, where drivable areas are identified; and object detection and tracking, which estimates the state of all surrounding traffic participants. All algorithms are designed with the requirements of autonomous driving in mind, with a focus on robustness, real-time capability and usability in all potential scenarios that may arise in urban driving.

In localization, the state of the ego-vehicle is determined. While global positioning systems such as a Global Navigation Satellite System (GNSS) are traditionally used for this task, they are prone to errors and may produce jumps in the position estimate, which can cause unexpected and dangerous behavior. The focus of research in this thesis is the development of a localization system that produces a smooth state estimate without any jumps. For this, two localization approaches are developed and executed in parallel. One localization is performed without global information to avoid jumps; however, it only provides odometry, which drifts over time and gives no global positioning. To provide this information, the second localization includes GNSS information, yielding a global estimate that is free of global drift. Additionally, the use of LiDAR odometry for improving localization accuracy is evaluated.

For mapping, the focus of this thesis is on providing a computationally efficient mapping system capable of being used in arbitrarily large areas with no predefined size. This is achieved by mapping only the direct environment of the vehicle and discarding older information in the map, motivated by the observation that the environment in autonomous driving is highly dynamic and must be mapped anew every time the vehicle's sensors observe an area. The resulting map tells subsequent algorithms where the vehicle can and cannot drive. For this, an occupancy grid map is used, which discretizes the map into cells of a fixed size, with each cell estimating whether its corresponding space in the world is occupied. However, the grid map is not created for the entire area that could potentially be visited, as this may be very large and potentially impossible to hold in working memory. Instead, the map is created only for a window around the vehicle, with the vehicle roughly in the center. A hierarchical map organization is used to allow efficient moving of the window as the vehicle moves through an area. For the hierarchical map, different data structures are evaluated for their time and space complexity in order to find the most suitable implementation for the presented mapping approach.

Finally, for tracking, a late-fusion approach to the multi-sensor task of estimating the states of all other traffic participants is presented. Object detections are obtained from LiDAR, camera and radar sensors, with vehicle-to-everything communication providing an additional source of information that is also fused in the late fusion. The late fusion is developed for easy extensibility and with arbitrary object detection algorithms in mind. For the first evaluation it relies on black-box object detections provided by the sensors. In the second part of the research on object tracking, multiple algorithms for object detection on LiDAR data are evaluated for use in the object tracking framework, to reduce the reliance on black-box implementations. A focus is set on detecting objects from motion, where three different approaches to motion estimation in LiDAR data are evaluated: LiDAR optical flow, evidential dynamic mapping and normal distributions transforms. The thesis contains both theoretical contributions and practical implementation considerations for the presented approaches, at a high degree of detail including all necessary derivations. All results are implemented and evaluated on an autonomous vehicle with real-world data. With the developed algorithms, autonomous driving is realized for urban areas.
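The windowed occupancy-grid idea described above (a fixed-size grid of cells kept centered on the vehicle, with cells discarded as they leave the window) can be sketched as follows. This is a minimal illustrative example with made-up names and parameters, not the thesis implementation, and it omits the hierarchical organization:

```python
import numpy as np

class RollingOccupancyGrid:
    """Occupancy grid covering only a fixed-size window around the vehicle.

    Cells that leave the window are discarded, so memory use stays constant
    no matter how far the vehicle drives. Illustrative sketch only.
    """

    def __init__(self, window_m=100.0, cell_m=0.5):
        self.cell_m = cell_m
        self.n = int(window_m / cell_m)             # cells per side
        self.origin = np.zeros(2)                   # world position of cell (0, 0)
        self.grid = np.full((self.n, self.n), 0.5)  # 0.5 = unknown occupancy

    def recenter(self, vehicle_xy):
        """Shift the window so the vehicle sits roughly in the center."""
        target = np.asarray(vehicle_xy, float) - 0.5 * self.n * self.cell_m
        shift = np.round((target - self.origin) / self.cell_m).astype(int)
        if np.any(shift != 0):
            # Keep cells still inside the window; reset cells that just entered.
            self.grid = np.roll(self.grid, (-shift[0], -shift[1]), axis=(0, 1))
            if shift[0] > 0:
                self.grid[-shift[0]:, :] = 0.5
            elif shift[0] < 0:
                self.grid[:-shift[0], :] = 0.5
            if shift[1] > 0:
                self.grid[:, -shift[1]:] = 0.5
            elif shift[1] < 0:
                self.grid[:, :-shift[1]] = 0.5
            self.origin = self.origin + shift * self.cell_m

    def update(self, hit_xy, p_occ=0.9):
        """Mark the cell containing a lidar return as occupied."""
        idx = np.floor((np.asarray(hit_xy, float) - self.origin) / self.cell_m).astype(int)
        if np.all((0 <= idx) & (idx < self.n)):
            self.grid[idx[0], idx[1]] = p_occ
```

Because the grid is only rolled and partially reset on each move, recentering costs time proportional to the window size, independent of the total area driven.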

Multi-sensor Fusion for Autonomous Driving

Multi-sensor Fusion for Autonomous Driving PDF Author: Xinyu Zhang
Publisher: Springer Nature
ISBN: 9819932807
Category :
Languages : en
Pages : 237


Book Description


Automatic Laser Calibration, Mapping, and Localization for Autonomous Vehicles

Automatic Laser Calibration, Mapping, and Localization for Autonomous Vehicles PDF Author: Jesse Sol Levinson
Publisher: Stanford University
ISBN:
Category :
Languages : en
Pages : 153


Book Description
This dissertation presents several related algorithms that enable important capabilities for self-driving vehicles. Using a rotating multi-beam laser rangefinder to sense the world, our vehicle scans millions of 3D points every second. Calibrating these sensors plays a crucial role in accurate perception, but manual calibration is unreasonably tedious and generally inaccurate. As an alternative, we present an unsupervised algorithm for automatically calibrating both the intrinsics and extrinsics of the laser unit from only seconds of driving in an arbitrary and unknown environment. We show that the results are not only vastly easier to obtain than with traditional calibration techniques but also more accurate. A second key challenge in autonomous navigation is reliable localization in the face of uncertainty. Using our calibrated sensors, we obtain high-resolution infrared reflectivity readings of the world. From these, we build large-scale, self-consistent probabilistic laser maps of urban scenes and show that we can reliably localize a vehicle against these maps to within centimeters, even in dynamic environments, by fusing noisy GPS and IMU readings with the laser in real time. We also present a localization algorithm that was used in the DARPA Urban Challenge, which operated without a prerecorded laser map and allowed our vehicle to complete the entire six-hour course without a single localization failure. Finally, we present a collection of algorithms for the mapping and detection of traffic lights in real time. These methods use a combination of computer vision techniques and probabilistic approaches to incorporating uncertainty in order to allow our vehicle to reliably ascertain the state of traffic-light-controlled intersections.
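The map-matching step in the localization described above, comparing locally observed reflectivity readings against a prior laser map, can be sketched as a brute-force search over candidate cell offsets. This is a hypothetical minimal example (the function name and the sum-of-squared-differences score are illustrative, not Levinson's method); a full system would treat the scores over all offsets as a posterior and fuse it with GPS and IMU predictions rather than taking a single best match:

```python
import numpy as np

def grid_localize(prior_map, scan_patch, search=3):
    """Return the (dx, dy) cell offset at which a locally observed
    reflectivity patch best matches the prior map.

    prior_map must extend at least `search` cells beyond the patch on
    every side. Illustrative brute-force sketch only.
    """
    h, w = scan_patch.shape
    best_score, best_off = -np.inf, (0, 0)
    for dx in range(-search, search + 1):
        for dy in range(-search, search + 1):
            ref = prior_map[search + dx : search + dx + h,
                            search + dy : search + dy + w]
            # Negative sum of squared reflectivity differences as match score.
            score = -np.sum((ref - scan_patch) ** 2)
            if score > best_score:
                best_score, best_off = score, (dx, dy)
    return best_off
```

With centimeter-scale cells, the winning offset directly yields a centimeter-scale position correction relative to the prior map.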

Creating Autonomous Vehicle Systems

Creating Autonomous Vehicle Systems PDF Author: Shaoshan Liu
Publisher: Morgan & Claypool Publishers
ISBN: 1681731673
Category : Computers
Languages : en
Pages : 285


Book Description
This book is the first technical overview of autonomous vehicles written for a general computing and engineering audience. The authors share their practical experiences of creating autonomous vehicle systems. These systems are complex, consisting of three major subsystems: (1) algorithms for localization, perception, and planning and control; (2) client systems, such as the robot operating system and hardware platform; and (3) the cloud platform, which includes data storage, simulation, high-definition (HD) mapping, and deep learning model training. The algorithm subsystem extracts meaningful information from raw sensor data to understand its environment and make decisions about its actions. The client subsystem integrates these algorithms to meet real-time and reliability requirements. The cloud platform provides offline computing and storage capabilities for autonomous vehicles. Using the cloud platform, we are able to test new algorithms, update the HD map, and train better recognition, tracking, and decision models.

This book consists of nine chapters. Chapter 1 provides an overview of autonomous vehicle systems; Chapter 2 focuses on localization technologies; Chapter 3 discusses traditional techniques used for perception; Chapter 4 discusses deep learning based techniques for perception; Chapter 5 introduces the planning and control subsystem, especially prediction and routing technologies; Chapter 6 focuses on motion planning and feedback control within the planning and control subsystem; Chapter 7 introduces reinforcement learning-based planning and control; Chapter 8 delves into the details of client systems design; and Chapter 9 provides the details of cloud platforms for autonomous driving.

This book should be useful to students, researchers, and practitioners alike. Whether you are an undergraduate or a graduate student interested in autonomous driving, you will find herein a comprehensive overview of the whole autonomous vehicle technology stack. If you are an autonomous driving practitioner, the many practical techniques introduced in this book will be of interest to you. Researchers will also find plenty of references for an effective, deeper exploration of the various technologies.

Sensor Fusion for 3D Object Detection for Autonomous Vehicles

Sensor Fusion for 3D Object Detection for Autonomous Vehicles PDF Author: Yahya Massoud
Publisher:
ISBN:
Category :
Languages : en
Pages :


Book Description
Thanks to major advancements in hardware and computational power, sensor technology, and artificial intelligence, the race for fully autonomous driving systems is heating up. With a countless number of challenging conditions and driving scenarios, researchers are tackling the most challenging problems in driverless cars. One of the most critical components is the perception module, which enables an autonomous vehicle to "see" and "understand" its surrounding environment. Given that modern vehicles can have a large number of sensors and available data streams, this thesis presents a deep learning-based framework that leverages multimodal data, i.e. sensor fusion, to perform the task of 3D object detection and localization. We provide an extensive review of the advancements of deep learning-based methods in computer vision, specifically in 2D and 3D object detection tasks. We also study the progress of the literature in both single-sensor and multi-sensor data fusion techniques. Furthermore, we present an in-depth explanation of our proposed approach, which performs sensor fusion using input streams from LiDAR and camera sensors, aiming to simultaneously perform 2D, 3D, and bird's-eye-view detection. Our experiments highlight the importance of learnable data fusion mechanisms and multi-task learning, the impact of different CNN design decisions, speed-accuracy tradeoffs, and ways to deal with overfitting in multi-sensor data fusion frameworks.

Millimeter Wave Radar

Millimeter Wave Radar PDF Author: Stephen L. Johnston
Publisher:
ISBN:
Category : Technology & Engineering
Languages : en
Pages : 686


Book Description


Multisensor Fusion and Integration for Intelligent Systems

Multisensor Fusion and Integration for Intelligent Systems PDF Author: Lee Suk-han
Publisher: Springer Science & Business Media
ISBN: 354089859X
Category : Technology & Engineering
Languages : en
Pages : 476


Book Description
The field of multi-sensor fusion and integration is growing in significance as our society transitions into ubiquitous computing environments with robotic services everywhere under ambient intelligence. What will surround us are networks of sensors and actuators that monitor our environment, health, security and safety, as well as service robots, intelligent vehicles, and autonomous systems of ever-heightened autonomy and dependability with integrated heterogeneous sensors and actuators. The field of multi-sensor fusion and integration plays a key role in making the above transition possible by providing fundamental theories and tools for implementation. This volume is an edition of the papers selected from the 7th IEEE International Conference on Multi-Sensor Integration and Fusion, IEEE MFI'08, held in Seoul, Korea, August 20–22, 2008. Only 32 of the 122 papers accepted for IEEE MFI'08 were chosen and requested for revision and extension to be included in this volume. The 32 contributions are organized into three parts: Part I is dedicated to Theories in Data and Information Fusion, Part II to Multi-Sensor Fusion and Integration in Robotics and Vision, and Part III to Applications to Sensor Networks and Ubiquitous Computing Environments. To help readers understand better, a part summary is included in each part as an introduction. The summaries of Parts I, II, and III are prepared respectively by Prof. Hanseok Ko, Prof. Sukhan Lee and Prof. Hernsoo Hahn.

The DARPA Urban Challenge

The DARPA Urban Challenge PDF Author: Martin Buehler
Publisher: Springer
ISBN: 364203991X
Category : Technology & Engineering
Languages : en
Pages : 651


Book Description
By the dawn of the new millennium, robotics had undergone a major transformation in scope and dimensions. This expansion has been brought about by the maturity of the field and the advances in its related technologies. From a largely dominant industrial focus, robotics has been rapidly expanding into the challenges of the human world. The new generation of robots is expected to safely and dependably cohabit with humans in homes, workplaces, and communities, providing support in services, entertainment, education, healthcare, manufacturing, and assistance. Beyond its impact on physical robots, the body of knowledge robotics has produced is revealing a much wider range of applications reaching across diverse research areas and scientific disciplines, such as biomechanics, haptics, neurosciences, virtual simulation, animation, surgery, and sensor networks, among others. In return, the challenges of the new emerging areas are proving an abundant source of stimulation and insights for the field of robotics. It is indeed at the intersection of disciplines that the most striking advances happen. The goal of the Springer Tracts in Advanced Robotics (STAR) series is to bring, in a timely fashion, the latest advances and developments in robotics on the basis of their significance and quality. It is our hope that the wider dissemination of research developments will stimulate more exchanges and collaborations among the research community and contribute to further advancement of this rapidly growing field.

Robust Environmental Perception and Reliability Control for Intelligent Vehicles

Robust Environmental Perception and Reliability Control for Intelligent Vehicles PDF Author: Huihui Pan
Publisher: Springer Nature
ISBN: 9819977908
Category : Technology & Engineering
Languages : en
Pages : 308


Book Description
This book presents the most recent state-of-the-art algorithms on robust environmental perception and reliability control for intelligent vehicle systems. By integrating object detection, semantic segmentation, trajectory prediction, multi-object tracking, multi-sensor fusion, and reliability control in a systematic way, the book aims to guarantee that intelligent vehicles can run safely in complex road traffic scenes. The book:

- Applies multi-sensor data fusion-based neural networks to environmental perception fault-tolerance algorithms, solving the problem of perception reliability when some sensors fail by using data redundancy.
- Presents a camera-based monocular approach to robust perception tasks, which introduces sequential feature association and depth hint augmentation, along with seven adaptive methods.
- Proposes efficient and robust semantic segmentation of traffic scenes through real-time deep dual-resolution networks and representation separation of vision transformers.
- Focuses on trajectory prediction, proposing phased and progressive prediction methods that are more consistent with human psychological characteristics and able to take both social interactions and personal intentions into account.
- Puts forward methods based on conditional random fields and multi-task segmentation learning to solve the robust multi-object tracking problem for environment perception in autonomous vehicle scenarios.
- Presents novel reliability control strategies for intelligent vehicles to optimize dynamic tracking performance and investigates autonomous vehicle tracking issues under completely unknown dynamics and actuator faults.