Application of Multi-Sensor Fusion in Autonomous Vehicle Localization Under Sensor Anomalies

Author:
Publisher:
ISBN:
Category:
Languages: en
Pages: 91

Book Description

Multi-sensor Fusion for Autonomous Driving

Author: Xinyu Zhang
Publisher: Springer Nature
ISBN: 9819932807
Category:
Languages: en
Pages: 237

Book Description


Application of Multi-Sensor Fusion for Cascade Landmark Recognition and Vehicle Localization for Autonomous Driving

Author: 王昱翔
Publisher:
ISBN:
Category:
Languages: en
Pages: 83

Book Description


Sensor Fusion in Localization, Mapping and Tracking

Author: Constantin Wellhausen
Publisher:
ISBN:
Category:
Languages: en
Pages: 0

Book Description
Making autonomous driving possible requires extensive information about the surroundings as well as the state of the vehicle. While specific pieces of information can be obtained from individual sensors, a full estimate requires a multi-sensor approach that includes redundant sources of information to increase robustness. This thesis gives an overview of the tasks that arise in sensor fusion for autonomous driving and presents solutions at a high level of detail, including derivations and parameters where required to enable re-implementation. It covers both theoretical considerations and practical evaluations, and it also reports on approaches that did not prove to solve their tasks robustly, in the belief that both kinds of result further the state of the art: they give researchers an idea of which approaches are suitable and which are not, where the unsuitable ones might otherwise be re-implemented repeatedly with similar results.

The thesis focuses on model-based methods, also referred to as classical methods, with a special emphasis on probabilistic and evidential theories. Deep-learning methods are explicitly not covered in order to maintain explainability and robustness, which would otherwise depend strongly on the available training data. The work addresses three main fields of autonomous driving: localization, which estimates the state of the ego-vehicle; mapping, or obstacle detection, which identifies drivable areas; and object detection and tracking, which estimates the states of all surrounding traffic participants. All algorithms are designed with the requirements of autonomous driving in mind, focusing on robustness, real-time capability and usability in all scenarios that may arise in urban driving.

In localization, the state of the vehicle is determined. Global Navigation Satellite Systems (GNSS) are traditionally used for this task, but they are prone to errors and may produce jumps in the position estimate, which can cause unexpected and dangerous behavior. The focus of this thesis is therefore a localization system that produces a smooth state estimate without jumps. To this end, two localization approaches are developed and executed in parallel: one runs without global information to avoid jumps, but provides only odometry, which drifts over time and gives no global position; the second incorporates GNSS information and thus provides a global estimate that is free of global drift. In addition, the use of LiDAR odometry to improve localization accuracy is evaluated.
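To make the parallel-localization idea concrete, the following is a minimal Python sketch, not the thesis's implementation: it keeps an odometry-only pose (smooth but drifting) alongside a GNSS-anchored pose, with a simple complementary-filter blend standing in for the full probabilistic filters; the class name, parameters and gain are illustrative assumptions.

import math

class ParallelLocalization:
    """Illustrative two-track localization: a jump-free odometry pose and a
    GNSS-anchored global pose (a stand-in for full probabilistic filters)."""

    def __init__(self, x=0.0, y=0.0, yaw=0.0, gnss_gain=0.05):
        self.local_pose = [x, y, yaw]   # odometry only: smooth, but drifts over time
        self.global_pose = [x, y, yaw]  # odometry + GNSS: free of global drift
        self.gnss_gain = gnss_gain      # how strongly a GNSS fix pulls the global pose

    def predict(self, v, yaw_rate, dt):
        """Dead-reckoning step applied to both pose estimates."""
        for pose in (self.local_pose, self.global_pose):
            pose[2] += yaw_rate * dt
            pose[0] += v * math.cos(pose[2]) * dt
            pose[1] += v * math.sin(pose[2]) * dt

    def update_gnss(self, gx, gy):
        """Nudge only the global pose toward the GNSS fix; the local pose stays jump-free."""
        self.global_pose[0] += self.gnss_gain * (gx - self.global_pose[0])
        self.global_pose[1] += self.gnss_gain * (gy - self.global_pose[1])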
For mapping, the focus of this thesis is on a computationally efficient system that can be used in arbitrarily large areas with no predefined size. This is achieved by mapping only the direct environment of the vehicle and discarding older information, motivated by the observation that the environment in autonomous driving is highly dynamic and must be mapped anew every time the vehicle's sensors observe an area. The resulting map tells subsequent algorithms where the vehicle can and cannot drive. An occupancy grid map is used for this purpose: it discretizes the map into cells of a fixed size, with each cell estimating whether its corresponding space in the world is occupied. The grid map is not created for the entire area that could potentially be visited, as this may be very large and impossible to hold in working memory. Instead, the map covers only a window around the vehicle, with the vehicle roughly at its center. A hierarchical map organization allows the window to be moved efficiently as the vehicle drives through an area, and different data structures are evaluated for their time and space complexity to find the most suitable implementation for the presented mapping approach.

Finally, for tracking, a late-fusion approach to the multi-sensor task of estimating the states of all other traffic participants is presented. Object detections are obtained from LiDAR, camera and radar sensors, with vehicle-to-everything communication providing an additional source of information that is also fused in the late fusion. The late fusion is designed for easy extensibility and with arbitrary object detection algorithms in mind; for the first evaluation it relies on black-box object detections provided by the sensors. In the second part of the research on object tracking, several object detection algorithms for LiDAR data are evaluated for use in the tracking framework, to reduce the reliance on black-box implementations. A focus is set on detecting objects from motion, where three approaches to motion estimation in LiDAR data are evaluated: LiDAR optical flow, evidential dynamic mapping and the normal distributions transform.

The thesis contains both theoretical contributions and practical implementation considerations for the presented approaches, at a high level of detail and including all necessary derivations. All results are implemented and evaluated on an autonomous vehicle with real-world data. With the developed algorithms, autonomous driving is realized for urban areas.
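As an illustration of the windowed occupancy-grid mapping described above, here is a minimal sketch under simplifying assumptions: a flat dictionary of cells stands in for the hierarchical tile structure the thesis evaluates, and all names and sizes are made up.

class WindowedOccupancyGrid:
    """Occupancy grid kept only in a window around the vehicle; cells leaving
    the window are discarded (illustrative, not the thesis's data structure)."""

    def __init__(self, window_size_m=100.0, cell_size_m=0.2):
        self.half = window_size_m / 2.0
        self.cell = cell_size_m
        self.center = (0.0, 0.0)   # vehicle position, i.e. the window center
        self.cells = {}            # (ix, iy) -> occupancy probability

    def _index(self, x, y):
        return (int(x // self.cell), int(y // self.cell))

    def update_cell(self, x, y, occupied_prob):
        """Store an occupancy estimate only if the point lies inside the window."""
        if abs(x - self.center[0]) <= self.half and abs(y - self.center[1]) <= self.half:
            self.cells[self._index(x, y)] = occupied_prob

    def move_window(self, new_x, new_y):
        """Re-center the window on the vehicle and drop cells that fall outside it."""
        self.center = (new_x, new_y)
        self.cells = {
            (ix, iy): p for (ix, iy), p in self.cells.items()
            if abs(ix * self.cell - new_x) <= self.half
            and abs(iy * self.cell - new_y) <= self.half
        }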

Multi-Sensor Fusion for Mono and Multi-Vehicle Localization Using Bayesian Network

Author: C. Smaili
Publisher:
ISBN: 9789537619039
Category:
Languages: en
Pages:

Book Description
This article presents a multi-sensor fusion method for vehicle localization. The main contributions of the work are the formalization of a multi-sensor fusion method in the Bayesian Network context and an experimental validation with real data. An interesting characteristic of the approach is that it is flexible and modular, in the sense that other sensors can easily be integrated; this matters because adding sensors is a way to increase the robustness of the localization. The approach introduces the digital roadmap as an observation of the state-space representation, used in the Bayesian Network framework in the same way as the GPS measurement. The experiments showed that GPS measurements are not needed all the time, since merging odometry and roadmap data can provide a good position estimate over a substantial period. The strategy presented in this paper does not keep only the most likely road segment: when approaching an intersection, several roads can be good candidates, so several hypotheses are maintained until the situation becomes unambiguous. The multi-vehicle localization method presented in this work can be seen as an extension of the mono-vehicle method, in the sense that the Bayesian Network (BN) used for fusion is duplicated.
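As a rough illustration of the multi-hypothesis road-matching idea, the sketch below re-weights candidate road segments as new position evidence arrives and only commits once one segment clearly dominates. It is not the paper's Bayesian Network formulation; the weighting scheme, the distance_to_segment helper and the noise parameter are assumptions made for illustration.

import math

def update_segment_hypotheses(hypotheses, predicted_xy, distance_to_segment, sigma=3.0):
    """hypotheses: dict segment_id -> weight.
    distance_to_segment: callable (segment_id, (x, y)) -> distance in metres.
    sigma: assumed position noise in metres."""
    # Down-weight segments that are far from the predicted position.
    for seg_id in hypotheses:
        d = distance_to_segment(seg_id, predicted_xy)
        hypotheses[seg_id] *= math.exp(-0.5 * (d / sigma) ** 2)
    # Normalize the weights.
    total = sum(hypotheses.values()) or 1.0
    hypotheses = {s: w / total for s, w in hypotheses.items()}
    # Keep every hypothesis alive until one clearly dominates (situation unambiguous).
    if max(hypotheses.values()) > 0.95:
        best = max(hypotheses, key=hypotheses.get)
        hypotheses = {best: 1.0}
    return hypotheses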

Sensor Fusion for 3D Object Detection for Autonomous Vehicles

Author: Yahya Massoud
Publisher:
ISBN:
Category:
Languages: en
Pages:

Book Description
Thanks to major advancements in hardware and computational power, sensor technology, and artificial intelligence, the race toward fully autonomous driving systems is heating up. Facing countless challenging conditions and driving scenarios, researchers are tackling the hardest problems in driverless cars. One of the most critical components is the perception module, which enables an autonomous vehicle to "see" and "understand" its surrounding environment. Given that modern vehicles can carry a large number of sensors and data streams, this thesis presents a deep learning-based framework that leverages multimodal data, i.e. sensor fusion, to perform 3D object detection and localization. We provide an extensive review of advances in deep learning-based methods for computer vision, specifically 2D and 3D object detection, and survey the literature on both single-sensor and multi-sensor data fusion techniques. Furthermore, we present an in-depth explanation of our proposed approach, which fuses input streams from LiDAR and camera sensors to simultaneously perform 2D, 3D, and bird's-eye-view detection. Our experiments highlight the importance of learnable data fusion mechanisms and multi-task learning, the impact of different CNN design decisions, speed-accuracy trade-offs, and ways to deal with overfitting in multi-sensor data fusion frameworks.
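To illustrate what a learnable data fusion mechanism can look like, here is a small PyTorch sketch that gates and merges camera and LiDAR feature maps of matching spatial size. It is a generic example, not the architecture proposed in the thesis, and the channel sizes are arbitrary assumptions.

import torch
import torch.nn as nn

class GatedFusion(nn.Module):
    """Learnable fusion of camera and LiDAR feature maps via element-wise gating
    (illustrative only)."""

    def __init__(self, cam_channels=64, lidar_channels=64, out_channels=128):
        super().__init__()
        in_channels = cam_channels + lidar_channels
        # The gate learns, per location and channel, how much of the projected
        # joint features to let through.
        self.gate = nn.Sequential(
            nn.Conv2d(in_channels, out_channels, kernel_size=1),
            nn.Sigmoid(),
        )
        self.proj = nn.Conv2d(in_channels, out_channels, kernel_size=1)

    def forward(self, cam_feat, lidar_feat):
        x = torch.cat([cam_feat, lidar_feat], dim=1)  # channel-wise concatenation
        return self.gate(x) * self.proj(x)            # learned element-wise gating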

Sensing and Control for Autonomous Vehicles

Author: Thor I. Fossen
Publisher: Springer
ISBN: 3319553720
Category: Technology & Engineering
Languages: en
Pages: 513

Book Description
This edited volume includes a thorough collection of contributions on sensing and control for autonomous vehicles. Guidance, navigation and motion control systems for autonomous vehicles are increasingly important in land-based, marine and aerial operations. Autonomous underwater vehicles may be used for pipeline inspection, light intervention work, underwater surveys and the collection of oceanographic/biological data. Autonomous unmanned aerial systems can be used in a large number of applications such as inspection, monitoring, data collection and surveillance. At present, vehicles operate with limited autonomy and a minimum of intelligence. There is growing interest in cooperative and coordinated multi-vehicle systems, real-time re-planning, robust autonomous navigation systems and robust autonomous control of vehicles. Unmanned vehicles with high levels of autonomy may be used for the safe and efficient collection of environmental data, for the assimilation of climate and environmental models, and to complement global satellite systems. The target audience primarily comprises research experts in the field of control theory, but the book may also be beneficial for graduate students.

Multisensor Fusion and Integration in the Wake of Big Data, Deep Learning and Cyber Physical System

Author: Sukhan Lee
Publisher: Springer
ISBN: 3319905090
Category: Technology & Engineering
Languages: en
Pages: 306

Book Description
This book includes selected papers from the 13th IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems (MFI 2017), held in Daegu, Korea, November 16–22, 2017. It covers various topics, including sensor/actuator networks; distributed and cloud architectures; bio-inspired systems and evolutionary approaches; methods of cognitive sensor fusion; Bayesian approaches, fuzzy systems and neural networks; biomedical applications; autonomous land, sea and air vehicles; localization, tracking, SLAM and 3D perception; manipulation with multi-finger hands; robotics; micro/nano systems; information fusion and sensors; and multimodal integration in HCI and HRI. The book is intended for robotics scientists, data and information fusion scientists, and researchers and professionals at universities, research institutes and laboratories.

Autonomous Vehicle Localization Using Sensor Fusion with Lane Marking Detection and High Definition Map

Author: 賴柏翔
Publisher:
ISBN:
Category:
Languages: en
Pages: 98

Book Description