Visual-Inertial Perception for Robotic Tasks

Author: Konstantine Tsotsos
Publisher:
ISBN:
Category :
Languages : en
Pages : 160

Book Description
In spite of extensive consumer interest in domains such as autonomous driving, general-purpose visual perception for autonomy remains a challenging problem. Inertial sensors such as gyroscopes and accelerometers, commonplace on smartphones today, offer capabilities complementary to visual sensors, and robotic systems typically fuse information from both to further enhance their capabilities. In this thesis, we explore models and algorithms for constructing a representation from the fusion of visual and inertial data with the goal of supporting autonomy. We first analyze the observability properties of the standard model for visual-inertial sensor fusion, which combines Structure-from-Motion (SFM, or Visual SLAM) with inertial navigation, and overturn the common consensus that the model is observable. Informed by this analysis, we develop robust inference techniques that enable our real-time visual-inertial navigation system to achieve state-of-the-art relative motion estimation in challenging and dynamic environments. From the information provided by this process, we construct a representation of the agent's environment that acts as a map and significantly improves location search time relative to standard methods, enabling long-term consistent localization within a constrained environment in real time on commodity computing hardware. Finally, to allow an autonomous agent to reason about its world at a granularity relevant for interaction, we build a dense surface representation on top of this consistent map and develop algorithms to segment the surfaces into objects of potential relevance to the agent's task.
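
For readers unfamiliar with the model the description refers to, the sketch below illustrates the inertial half of visual-inertial fusion: between camera frames, the pose is propagated with gyroscope and accelerometer readings, and a visual (SFM / Visual SLAM) update would then correct the accumulated drift. This is a generic textbook formulation with made-up names and values, not the thesis's implementation.

```python
import numpy as np

GRAVITY = np.array([0.0, 0.0, -9.81])  # world-frame gravity, m/s^2

def skew(w):
    """Skew-symmetric matrix so that skew(w) @ x equals np.cross(w, x)."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def exp_so3(phi):
    """Rodrigues' formula: rotation matrix for a rotation vector phi."""
    theta = np.linalg.norm(phi)
    if theta < 1e-9:
        return np.eye(3) + skew(phi)
    k = phi / theta
    K = skew(k)
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

def propagate_imu(p, v, R, gyro, accel, dt):
    """One IMU integration step for position p, velocity v, body-to-world R."""
    R_new = R @ exp_so3(gyro * dt)
    a_world = R @ accel + GRAVITY          # specific force rotated to the world frame
    v_new = v + a_world * dt
    p_new = p + v * dt + 0.5 * a_world * dt ** 2
    return p_new, v_new, R_new

# Example: integrate one 5 ms IMU sample starting from rest.
p, v, R = np.zeros(3), np.zeros(3), np.eye(3)
p, v, R = propagate_imu(p, v, R,
                        gyro=np.array([0.0, 0.0, 0.1]),
                        accel=np.array([0.0, 0.0, 9.81]),
                        dt=0.005)
```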

Visual Perception and Robotic Manipulation

Author: Geoffrey Taylor
Publisher: Springer
ISBN: 3540334556
Category : Technology & Engineering
Languages : en
Pages : 231

Book Description
This book moves toward the realization of domestic robots by presenting an integrated view of computer vision and robotics. It covers fundamental topics including optimal sensor design, visual servoing, 3D object modelling and recognition, and multi-cue tracking, with an emphasis on robustness throughout. Covering both theory and implementation, and supported by experimental results and comprehensive multimedia material including video clips, VRML data, C++ code, and lecture slides, this book is a practical reference for roboticists and a valuable teaching resource.
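
Visual servoing, one of the topics listed above, can be illustrated with a short sketch of the classical image-based control law, in which a camera twist is computed from the image-feature error through the interaction (image Jacobian) matrix. The feature coordinates, depths, and gain below are illustrative assumptions, not taken from the book.

```python
import numpy as np

def interaction_matrix(x, y, Z):
    """Interaction matrix of one normalized image point (x, y) at depth Z."""
    return np.array([
        [-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x ** 2), y],
        [0.0, -1.0 / Z, y / Z, 1.0 + y ** 2, -x * y, -x],
    ])

def ibvs_velocity(features, desired, depths, gain=0.5):
    """Camera twist [vx, vy, vz, wx, wy, wz] that reduces the feature error."""
    L = np.vstack([interaction_matrix(x, y, Z)
                   for (x, y), Z in zip(features, depths)])
    error = (np.asarray(features) - np.asarray(desired)).ravel()
    return -gain * np.linalg.pinv(L) @ error

# Example with four made-up point features and rough depth estimates.
twist = ibvs_velocity(
    features=[(0.10, 0.05), (-0.20, 0.10), (0.00, -0.10), (0.15, -0.05)],
    desired=[(0.00, 0.00), (-0.10, 0.00), (0.10, 0.00), (0.00, 0.10)],
    depths=[1.0, 1.2, 0.9, 1.1])
```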

Visual Perception for Humanoid Robots

Author: David Israel González Aguirre
Publisher: Springer
ISBN: 3319978411
Category : Technology & Engineering
Languages : en
Pages : 253

Book Description
This book provides an overview of model-based environmental visual perception for humanoid robots. The visual perception of a humanoid robot forms a bidirectional bridge connecting sensor signals with internal representations of environmental objects. The objective of such perception systems is to answer two fundamental questions: what is it, and where is it? To answer these questions through this sensor-to-representation bridge, coordinated processes extract and exploit cues that match the robot's internal representations to physical entities; these processes include sensor and actuator modeling, calibration, filtering, and feature extraction for state estimation. This book discusses the following topics in depth:
• Active Sensing: Robust probabilistic methods for optimal, high-dynamic-range image acquisition that are suitable for inexpensive cameras. This enables dependable sensing in the arbitrary environmental conditions encountered in human-centric spaces, and the book quantitatively shows the importance of equipping robots with dependable visual sensing.
• Feature Extraction & Recognition: Parameter-free edge extraction methods based on structural graphs that represent geometric primitives effectively and efficiently. Eccentricity segmentation provides excellent recognition even on noisy, low-resolution images, and stereoscopic vision, Euclidean metrics, and graph-shape descriptors are shown to be powerful mechanisms for difficult recognition tasks.
• Global Self-Localization & Depth Uncertainty Learning: Simultaneous feature matching for global localization and 6D self-pose estimation, addressed by a novel geometric and probabilistic concept based on the intersection of Gaussian spheres. The path from intuition to the closed-form optimal solution for the robot location is described, including a supervised learning method for depth uncertainty modeling based on extensive ground-truth training data from a motion capture system.
The methods and experiments are presented in self-contained chapters with comparisons to the state of the art. The algorithms were implemented and empirically evaluated on two humanoid robots, ARMAR III-A & B. Their robustness and performance received an award at the IEEE conference on humanoid robots, and the contributions have been used for numerous visual manipulation tasks demonstrated at venues such as ICRA, CeBIT, IAS, and Automatica.

Active Sensor Planning for Multiview Vision Tasks

Author: Shengyong Chen
Publisher: Springer
ISBN: 9783642437373
Category : Technology & Engineering
Languages : en
Pages : 0

Book Description
This unique book explores the important issues in the study of active visual perception. The book's eleven chapters draw on a decade of important work in robot vision, particularly the use of new concepts. Implementation examples accompany the theoretical methods and are tested on a real robot system. With these optimal sensor planning strategies, the book gives robot vision systems the adaptability needed in many practical applications.

Multi-View Geometry Based Visual Perception and Control of Robotic Systems

Author: Jian Chen
Publisher: CRC Press
ISBN: 042995123X
Category : Computers
Languages : en
Pages : 361

Book Description
This book describes visual perception and control methods for robotic systems that need to interact with the environment. Multiple view geometry is utilized to extract low-dimensional geometric information from abundant and high-dimensional image information, making it convenient to develop general solutions for robot perception and control tasks. In this book, multiple view geometry is used for geometric modeling and scaled pose estimation. Then Lyapunov methods are applied to design stabilizing control laws in the presence of model uncertainties and multiple constraints.
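
As an illustration of the kind of multiple-view-geometry computation the description refers to, the sketch below recovers relative camera pose up to scale (a rotation and a unit-norm translation) from matched image points using OpenCV's two-view routines. The camera matrix and point arrays are placeholders, and this is a generic illustration rather than the book's own code.

```python
import numpy as np
import cv2

def relative_pose_up_to_scale(pts1, pts2, K):
    """Estimate (R, t) between two views from matched pixel coordinates.

    pts1, pts2: Nx2 float arrays of corresponding points; K: 3x3 intrinsics.
    """
    # Robustly fit the essential matrix with RANSAC.
    E, inliers = cv2.findEssentialMat(pts1, pts2, K, cv2.RANSAC, 0.999, 1.0)
    # recoverPose decomposes E and keeps the (R, t) that places the
    # triangulated points in front of both cameras; t is only up to scale.
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K)
    return R, t
```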

Active Perception and Robot Vision

Author: Arun K. Sood
Publisher: Springer Science & Business Media
ISBN: 3642772250
Category : Computers
Languages : en
Pages : 747

Book Description
Intelligent robotics has become the focus of extensive research activity. This effort has been motivated by the wide variety of applications that can benefit from the developments. These applications often involve mobile robots, multiple robots working and interacting in the same work area, and operations in hazardous environments like nuclear power plants. Applications in the consumer and service sectors are also attracting interest. These applications have highlighted the importance of performance, safety, reliability, and fault tolerance. This volume is a selection of papers from a NATO Advanced Study Institute held in July 1989 with a focus on active perception and robot vision. The papers deal with such issues as motion understanding, 3-D data analysis, error minimization, object and environment modeling, object detection and recognition, parallel and real-time vision, and data fusion. The paradigm underlying the papers is that robotic systems require repeated and hierarchical application of the perception-planning-action cycle. The primary focus of the papers is the perception part of the cycle. Issues related to complete implementations are also discussed.

Multi-view Geometry Based Visual Perception and Control of Robotic Systems

Author: Jian Chen
Publisher: CRC Press
ISBN: 9780429489211
Category : Computers
Languages : en
Pages : 342

Book Description
This book describes visual perception and control methods for robotic systems that need to interact with the environment. Multiple view geometry is utilized to extract low-dimensional geometric information from abundant and high-dimensional image information, making it convenient to develop general solutions for robot perception and control tasks. In this book, multiple view geometry is used for geometric modeling and scaled pose estimation. Then Lyapunov methods are applied to design stabilizing control laws in the presence of model uncertainties and multiple constraints.

Active Vision and Perception in Human-Robot Collaboration

Author: Dimitri Ognibene
Publisher: Frontiers Media SA
ISBN: 2889745996
Category : Science
Languages : en
Pages : 192

Book Description


Cognitive Visual Perception Mechanism for Robots Using Object-based Visual Attention

Author: Yuanlong Yu
Publisher:
ISBN:
Category : Cognitive science
Languages : en
Pages : 488

Book Description


Deep Learning for Robot Perception and Cognition

Author: Alexandros Iosifidis
Publisher: Academic Press
ISBN: 0323885721
Category : Technology & Engineering
Languages : en
Pages : 638

Book Description
Deep Learning for Robot Perception and Cognition introduces a broad range of topics and methods in deep learning for robot perception and cognition, together with end-to-end methodologies. The book provides the conceptual and mathematical background needed to approach a large number of robot perception and cognition tasks from an end-to-end learning point of view, and is suitable for students, university and industry researchers, and practitioners in Robotic Vision, Intelligent Control, Mechatronics, Deep Learning, and Robotic Perception and Cognition tasks. The book:
- Presents deep learning principles and methodologies
- Explains the principles of applying end-to-end learning in robotics applications
- Presents how to design and train deep learning models
- Shows how to apply deep learning in robot vision tasks such as object recognition, image classification, video analysis, and more
- Uses robotic simulation environments for training deep learning models
- Applies deep learning methods to tasks ranging from planning and navigation to biosignal analysis
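
As a concrete, if generic, example of the end-to-end learning point of view described above, the sketch below defines a tiny image classifier in PyTorch and runs a single training step from raw pixels to labels. The architecture, sizes, and data are arbitrary assumptions for illustration, not taken from the book.

```python
import torch
import torch.nn as nn

class SmallRobotVisionNet(nn.Module):
    """Toy convolutional classifier: pixels in, class scores out."""
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(32, num_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

# One end-to-end gradient step on a dummy batch of 64x64 RGB images.
model = SmallRobotVisionNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
images = torch.randn(4, 3, 64, 64)
labels = torch.randint(0, 10, (4,))
optimizer.zero_grad()
loss = nn.functional.cross_entropy(model(images), labels)
loss.backward()
optimizer.step()
```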