Visual-Inertial Sensor Fusion for Tracking in Ambulatory Environments

Author: Mehdi Patrick Stapleton
Publisher:
ISBN:
Category :
Languages : en
Pages : 147

Book Description
Tracking a high-velocity object through a cluttered environment is daunting even for a human observer. Vision-based trackers frequently lose their lock on the object as its features become distorted or faded by the motion blur that its high velocity imparts. The frequent occlusions as the object passes through clutter only compound the issue, and the usual difficulties associated with most vision-based trackers still apply, such as nonuniform illumination, object rotation, and changes in object scale. Inertial-based trackers provide useful complementary data to aid vision-based systems: the higher sampling rate of inertial measurements gives invaluable information for tracking high-speed objects, and with the IMU attached to the object, the inertial measurements are immune to occlusions, unlike their visual counterparts. The efficient combination of visual and inertial sensors into a unified framework is termed visual-inertial sensor fusion. Visual-inertial sensor fusion is a powerful tool for many industries: it allows medical practitioners to better understand and diagnose illnesses; it allows engineers to design more flexible and immersive virtual reality environments; and it allows film directors to fully capture motion in a scene. The complementary nature of visual and inertial sensors is well touted throughout these industries: the faster sampling rate of the inertial sensors fits lock-and-key with the higher accuracy of the visual sensor, unlocking algorithms capable of tracking high-velocity objects through cluttered environments. Inevitably, sensor fusion is accompanied by higher algorithmic complexity and requires careful understanding of the components involved. For this reason, this thesis takes a ground-up approach towards a complete visual-inertial system: from camera calibration all the way to the handling of asynchronous sensor measurements for sensor fusion.
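
As a concrete illustration of the fusion the abstract describes, the sketch below runs a minimal one-dimensional Kalman filter in which high-rate IMU accelerations drive the prediction step and low-rate but more accurate camera positions drive the correction step, with the two asynchronous streams merged in timestamp order. It is a toy sketch, not the thesis's pipeline; the class name, noise values, and simulated data are all invented for illustration.

```python
import numpy as np

# Minimal 1-D constant-acceleration Kalman filter sketching asynchronous
# visual-inertial fusion: IMU accelerations (high rate) drive prediction,
# camera positions (low rate, accurate) drive correction. All noise levels
# and data below are invented placeholders.

class ToyVIFilter:
    def __init__(self, accel_noise=0.5, cam_noise=0.01):
        self.x = np.zeros(2)       # state: [position, velocity]
        self.P = np.eye(2)         # state covariance
        self.q = accel_noise ** 2  # IMU noise variance
        self.r = cam_noise ** 2    # camera noise variance
        self.t = 0.0               # time of last processed IMU sample

    def predict(self, t, accel):
        """Propagate the state with one IMU acceleration sample."""
        dt = t - self.t
        F = np.array([[1.0, dt], [0.0, 1.0]])
        B = np.array([0.5 * dt ** 2, dt])
        self.x = F @ self.x + B * accel
        self.P = F @ self.P @ F.T + self.q * np.outer(B, B)
        self.t = t

    def update(self, cam_pos):
        """Correct the state with one camera position fix (applied to the
        most recently propagated state, a simplification)."""
        H = np.array([[1.0, 0.0]])
        S = H @ self.P @ H.T + self.r
        K = (self.P @ H.T) / S
        self.x = self.x + (K * (cam_pos - self.x[0])).ravel()
        self.P = (np.eye(2) - K @ H) @ self.P

# Simulated truth: constant acceleration 0.2, so position(t) = 0.1 * t**2.
imu = [(0.01 * k, 0.2) for k in range(1, 101)]                 # 100 Hz accelerometer
cam = [(0.1 * k, 0.1 * (0.1 * k) ** 2) for k in range(1, 11)]  # 10 Hz camera

# Asynchronous handling: merge both streams, process in timestamp order.
events = sorted([(t, 'imu', v) for t, v in imu] + [(t, 'cam', v) for t, v in cam])
f = ToyVIFilter()
for t, kind, val in events:
    f.predict(t, val) if kind == 'imu' else f.update(val)
print("fused [position, velocity]:", f.x)  # should approach [0.1, 0.2] at t = 1
```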

Visual Inertial Sensor Fusion

Author: Rajeev Kumar Jeevagan
Publisher:
ISBN:
Category :
Languages : en
Pages : 55

Book Description


VISrec!

Author: Dominik Aufderheide
Publisher:
ISBN:
Category :
Languages : en
Pages :

Book Description


Visual-Inertial Perception for Robotic Tasks

Author: Konstantine Tsotsos
Publisher:
ISBN:
Category :
Languages : en
Pages : 160

Book Description
In spite of extensive consumer interest in domains such as autonomous driving, general-purpose visual perception for autonomy remains a challenging problem. Inertial sensors such as gyroscopes and accelerometers, commonplace on smartphones today, offer complementary capabilities to visual sensors, and robotic systems typically fuse information from both visual and inertial sensors to further enhance their capabilities. In this thesis, we explore models and algorithms for constructing a representation from the fusion of visual and inertial data with the goal of supporting autonomy. We first analyze the observability properties of the standard model for visual-inertial sensor fusion, which combines Structure-from-Motion (SFM, or Visual SLAM) with inertial navigation, and overturn the common consensus that the model is observable. Informed by this analysis, we develop robust inference techniques that enable our real-time visual-inertial navigation system to achieve state-of-the-art relative motion estimation performance in challenging and dynamic environments. From the information provided by this process, we construct a representation of the agent's environment to act as a map that significantly improves location search time relative to standard methods, enabling long-term consistent localization within a constrained environment in real time on commodity computing hardware. Finally, to allow an autonomous agent to reason about its world at a granularity relevant for interaction, we construct a dense surface representation upon this consistent map and develop algorithms to segment the surfaces into objects of potential relevance to the agent's task.
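
A map that "significantly improves location search time" can be illustrated, under assumptions of our own, with an inverted index over quantized image descriptors: a localization query then touches only the keyframes that share visual words with the query image, instead of scanning the whole map. The vocabulary, data sizes, and voting scheme below are invented for the sketch and are not taken from the thesis.

```python
import numpy as np
from collections import defaultdict

rng = np.random.default_rng(0)

# Toy visual vocabulary: a descriptor is quantized to its nearest "word".
n_words, dim = 1024, 32
vocab = rng.normal(size=(n_words, dim))

def to_words(descriptors):
    # Nearest vocabulary entry for each descriptor (brute force, for clarity).
    d2 = ((descriptors[:, None, :] - vocab[None, :, :]) ** 2).sum(-1)
    return d2.argmin(axis=1)

# Inverted index: visual word -> ids of keyframes containing that word.
keyframes = {k: rng.normal(size=(100, dim)) for k in range(200)}
index = defaultdict(set)
for k, desc in keyframes.items():
    for w in set(to_words(desc)):
        index[w].add(k)

def locate(descriptors, top=3):
    """Vote for keyframes that share visual words with the query image."""
    votes = defaultdict(int)
    for w in set(to_words(descriptors)):
        for k in index[w]:
            votes[k] += 1
    return sorted(votes, key=votes.get, reverse=True)[:top]

# Querying with keyframe 42's own descriptors should rank keyframe 42 first.
print(locate(keyframes[42]))
```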

Technological Innovation for Collective Awareness Systems

Author: Luis M. Camarinha-Matos
Publisher: Springer
ISBN: 3642547346
Category : Computers
Languages : en
Pages : 614

Book Description
This book constitutes the refereed proceedings of the 5th IFIP WG 5.5/SOCOLNET Doctoral Conference on Computing, Electrical and Industrial Systems, DoCEIS 2014, held in Costa de Caparica, Portugal, in April 2014. The 68 revised full papers were carefully reviewed and selected from numerous submissions. They cover a wide spectrum of topics ranging from collaborative enterprise networks to microelectronics. The papers are organized in the following topical sections: collaborative networks; computational systems; self-organizing manufacturing systems; monitoring and supervision systems; advances in manufacturing; human-computer interfaces; robotics and mechatronics; Petri nets; multi-energy systems; monitoring and control in energy; modelling and simulation in energy; optimization issues in energy; operation issues in energy; power conversion; telecommunications; electronics: design; electronics: RF applications; and electronics: devices.

Continuous Models for Cameras and Inertial Sensors

Author: Hannes Ovrén
Publisher: Linköping University Electronic Press
ISBN: 917685244X
Category :
Languages : en
Pages : 67

Book Description
Using images to reconstruct the world in three dimensions is a classical computer vision task. Some examples of applications where this is useful are autonomous mapping and navigation, urban planning, and special effects in movies. One common approach to 3D reconstruction is "structure from motion", where a scene is imaged multiple times from different positions, e.g. by moving the camera. However, in a twist of irony, many structure from motion methods work best when the camera is stationary while each image is captured. This is because the motion of the camera can cause distortions in the image that lead to worse image measurements, and thus a worse reconstruction. One such distortion, common to all cameras, is motion blur; another is connected to the use of an electronic rolling shutter. Instead of capturing all pixels of the image at once, a camera with a rolling shutter captures the image row by row. If the camera is moving while the image is captured, the rolling shutter causes non-rigid distortions in the image that, unless handled, can severely degrade the reconstruction quality.

This thesis studies methods to robustly perform 3D reconstruction with a moving camera. To do so, the proposed methods make use of an inertial measurement unit (IMU). The IMU measures the angular velocities and linear accelerations of the camera, and these can be used to estimate the trajectory of the camera over time. Knowledge of the camera motion can then be used to correct for the distortions caused by the rolling shutter. Another benefit of an IMU is that it can provide measurements in situations when a camera cannot, e.g. because of excessive motion blur or an absence of scene structure.

To use a camera together with an IMU, the camera-IMU system must be jointly calibrated: the relationship between their respective coordinate frames needs to be established, and their timings need to be synchronized. This thesis shows how to perform this calibration and synchronization automatically, without requiring calibration objects or special motion patterns.

In standard structure from motion, the camera trajectory is modeled as discrete poses, with one pose per image. Switching instead to a formulation with a continuous-time camera trajectory provides a natural way to handle rolling shutter distortions and to incorporate inertial measurements. To model the continuous-time trajectory, many authors have used splines. The ability of a spline-based trajectory to model the real motion depends on the density of its knots: choosing a spline that is too smooth results in approximation errors. This thesis proposes a method to estimate the spline approximation error and to use it to better balance camera and IMU measurements in a sensor fusion framework. Also proposed is a way to automatically decide how dense the spline needs to be to achieve a good reconstruction.

Another approach to reconstructing a 3D scene is to use a camera that directly measures depth. Some depth cameras, like the well-known Microsoft Kinect, are susceptible to the same rolling shutter effects as normal cameras. This thesis quantifies the effect of rolling shutter distortion on 3D reconstruction as a function of the amount of motion, and shows that a better 3D model is obtained if the depth images are corrected using inertial measurements.
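
The row-by-row capture model described above admits a compact sketch: row r of an image with readout time T is captured at its own time t(r), a gyro-derived rotation gives the camera orientation at that time, and each pixel can be remapped as if the whole image had been exposed at a single reference time. The rotation-only rectifier below is a simplified illustration with invented intrinsics, readout time, and angular velocity; it is not the calibration or spline machinery of the thesis.

```python
import numpy as np

# Rotation-only rolling shutter rectification (a simplified sketch).
# Row r of an image of height H with readout time T is captured at
# t(r) = (r / H) * T. Each pixel is remapped through
# K * R(t_ref) * R(t_row)^T * K^-1 to undo the per-row rotation.

H, W = 480, 640
K = np.array([[500.0, 0, 320], [0, 500.0, 240], [0, 0, 1]])  # assumed intrinsics
T_readout = 0.03  # 30 ms frame readout (assumed)

def so3_exp(w):
    """Rodrigues' formula: axis-angle vector -> rotation matrix."""
    th = np.linalg.norm(w)
    if th < 1e-12:
        return np.eye(3)
    k = w / th
    Kx = np.array([[0, -k[2], k[1]], [k[2], 0, -k[0]], [-k[1], k[0], 0]])
    return np.eye(3) + np.sin(th) * Kx + (1 - np.cos(th)) * Kx @ Kx

def row_rotation(omega, t_row):
    """Orientation at t_row under constant angular velocity (toy gyro model)."""
    return so3_exp(omega * t_row)

def rectify_points(pts, rows, omega, t_ref=0.0):
    """Map pixels captured at their own row times into the t_ref frame."""
    out = []
    R_ref = row_rotation(omega, t_ref)
    Kinv = np.linalg.inv(K)
    for (u, v), r in zip(pts, rows):
        t_row = (r / H) * T_readout
        R_row = row_rotation(omega, t_row)
        ray = R_ref @ R_row.T @ Kinv @ np.array([u, v, 1.0])
        p = K @ ray
        out.append(p[:2] / p[2])
    return np.array(out)

omega = np.array([0.0, 0.8, 0.0])  # rad/s yaw during readout (toy value)
pts = [(320, 10), (320, 470)]      # same column, near top and bottom rows
print(rectify_points(pts, [10, 470], omega))  # bottom row shifts the most
```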

Experimental Robotics

Author: Jaydev P. Desai
Publisher: Springer
ISBN: 3319000659
Category : Technology & Engineering
Languages : en
Pages : 966

Book Description
The International Symposium on Experimental Robotics (ISER) is a series of biennial meetings organized in rotation across North America, Europe, and Asia/Oceania. The goal of ISER is to provide a forum for robotics research that focuses on the novelty of theoretical contributions validated by experimental results. The meetings are conceived to bring together, in a small-group setting, researchers from around the world who are at the forefront of experimental robotics research. This unique reference presents the latest advances across the various fields of robotics, with ideas that are not only conceived conceptually but also explored experimentally. It collects contributions on current developments and new directions in experimental robotics, based on the papers presented at the 13th ISER, held at the Fairmont Le Château Frontenac in Québec City, Canada, on June 18-21, 2012. This thirteenth edition of Experimental Robotics, edited by Jaydev P. Desai, Gregory Dudek, Oussama Khatib, and Vijay Kumar, offers a collection of a broad range of topics in field and human-centered robotics.

Towards Visual-inertial SLAM for Mobile Augmented Reality

Author: Gabriele Bleser
Publisher:
ISBN: 9783868530483
Category : Augmented reality (computer science) - real-time image processing - camera - target tracking - feature extraction - registration (image processing)
Languages : en
Pages : 176

Book Description


Selected Topics in Inertial and Visual Sensor Fusion

Author:
Publisher:
ISBN: 9789175950617
Category :
Languages : en
Pages : 194

Book Description


Observability

Author: Agostino Martinelli
Publisher: SIAM
ISBN: 1611976251
Category : Mathematics
Languages : en
Pages : 277

Book Description
This book is about nonlinear observability. It provides a modern theory of observability based on a new paradigm borrowed from theoretical physics and the mathematical foundation of that paradigm. In the case of observability, this framework takes into account the group of invariance that is inherent to the concept of observability, allowing the reader to reach an intuitive derivation of significant results in the literature of control theory. The book provides a complete theory of observability and, consequently, the analytical solution of some open problems in control theory. Notably, it presents the first general analytic solution of the nonlinear unknown input observability problem (nonlinear UIO), a very complex problem open since the 1960s. Based on this solution, the book provides examples with important applications for neuroscience, including a deep study of the integration of multiple sensory cues from the visual and vestibular systems for self-motion perception. Observability: A New Theory Based on the Group of Invariance is the only book focused solely on observability. It provides readers with many applications, mostly in robotics and autonomous navigation, as well as complex examples in the framework of vision-aided inertial navigation for aerial vehicles. For these applications, it also includes all the derivations needed to separate the observable part of the system from the unobservable, an analysis with practical importance for obtaining the basic equations for implementing any estimation scheme or for achieving a closed-form solution to the problem. This book is intended for researchers in robotics and automation, both in academia and in industry. Researchers in other engineering disciplines, such as information theory and mechanics, will also find the book useful.
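
The observability rank condition at the core of this theory can be exercised symbolically: take Lie derivatives of the output along the system's vector fields, stack their gradients into a codistribution, and check its rank against the state dimension. The sketch below applies this recipe to a toy unicycle with a range-only measurement; the model and symbols are chosen purely for illustration and do not come from the book.

```python
import sympy as sp

# Observability rank condition on a toy unicycle with a range-only output:
# state x = (px, py, theta), controls (v, w) scale the two vector fields.
px, py, th = sp.symbols('px py theta')
x = sp.Matrix([px, py, th])

f_v = sp.Matrix([sp.cos(th), sp.sin(th), 0])  # translation vector field
f_w = sp.Matrix([0, 0, 1])                    # rotation vector field
h = sp.sqrt(px**2 + py**2)                    # range to the origin

def lie(scalar, field):
    """Lie derivative of a scalar function along a vector field."""
    return (sp.Matrix([scalar]).jacobian(x) * field)[0]

# Gradients of h and a few of its Lie derivatives span the codistribution.
funcs = sp.Matrix([h, lie(h, f_v), lie(h, f_w), lie(lie(h, f_v), f_w)])
O = funcs.jacobian(x)

# Rank 2 < 3: rotating the world about the origin (while shifting the
# heading accordingly) leaves the range output unchanged, so that
# direction of the state space is unobservable.
print(O.rank(simplify=True))
```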