Computer Vision-based Localization and Mapping of an Unknown, Uncooperative and Spinning Target for Spacecraft Proximity Operations

Computer Vision-based Localization and Mapping of an Unknown, Uncooperative and Spinning Target for Spacecraft Proximity Operations PDF Author: Brent Edward Tweddle
Publisher:
ISBN:
Category :
Languages : en
Pages : 410

Book Description
Prior studies have estimated that there are over 100 potential target objects near the Geostationary Orbit belt that are spinning at rates of over 20 rotations per minute. For a number of reasons, it may be desirable to operate in close proximity to these objects for the purposes of inspection, docking and repair. Many of them have an unknown geometric appearance and are uncooperative and non-communicative. These characteristics are also shared by the targets of a number of asteroid rendezvous missions. In order to safely operate in close proximity to an object in space, it is important to know the target object's position and orientation relative to the inspector satellite, as well as to build a three-dimensional geometric map of the object for relative navigation in future stages of the mission. This type of problem can be solved with many of the typical Simultaneous Localization and Mapping (SLAM) algorithms found in the literature. However, if the target object is spinning with significant angular velocity, it is also important to know the linear and angular velocity of the target object as well as its center of mass, principal axes of inertia and its inertia matrix. This information is essential for propagating the state of the target object to a future time, which is a key capability for any type of proximity operations mission. Most of the typical SLAM algorithms cannot easily provide these types of estimates for high-speed spinning objects. This thesis describes a new approach to solving the SLAM problem for unknown and uncooperative objects that are spinning about an arbitrary axis. It is capable of estimating a geometric map of the target object, as well as its position, orientation, linear velocity, angular velocity, center of mass, principal axes and ratios of inertia. This allows the state of the target object to be propagated to a future time step using Newton's Second Law and Euler's Equation of Rotational Motion, thereby allowing this future state to be used by the planning and control algorithms of the inspector spacecraft. In order to properly evaluate this new approach, it is necessary to gather experimental data.
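For reference, the two propagation relations named above take the following standard rigid-body form (stated here in textbook notation, not quoted from the thesis), with r the target's center-of-mass position, F the net force, I the inertia matrix, omega the angular velocity and tau the net torque:

m \ddot{\mathbf{r}} = \mathbf{F}, \qquad I \dot{\boldsymbol{\omega}} + \boldsymbol{\omega} \times I \boldsymbol{\omega} = \boldsymbol{\tau}

For a freely tumbling target, tau = 0, and Euler's equation is then unchanged when I is multiplied by any positive constant; only the ratios of inertia affect the motion, which is why the approach estimates ratios of inertia rather than the full inertia matrix.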

Proceedings of 2020 Chinese Intelligent Systems Conference

Proceedings of 2020 Chinese Intelligent Systems Conference PDF Author: Yingmin Jia
Publisher: Springer Nature
ISBN: 9811584583
Category : Technology & Engineering
Languages : en
Pages : 841

Book Description
The book focuses on new theoretical results and techniques in the field of intelligent systems and control. It provides in-depth studies on a number of major topics such as Multi-Agent Systems, Complex Networks, Intelligent Robots, Complex System Theory and Swarm Behavior, Event-Triggered Control and Data-Driven Control, Robust and Adaptive Control, Big Data and Brain Science, Process Control, Intelligent Sensor and Detection Technology, Deep Learning and Learning Control, and Guidance, Navigation and Control of Flight Vehicles. Given its scope, the book will benefit all researchers, engineers, and graduate students who want to learn about cutting-edge advances in intelligent systems, intelligent control, and artificial intelligence.

Simultaneous Pose and Mapping of a Non-Cooperative Maneuvering Target Using an Octree-Based Approach

Simultaneous Pose and Mapping of a Non-Cooperative Maneuvering Target Using an Octree-Based Approach PDF Author: Peter Craig Scarcella
Publisher:
ISBN:
Category :
Languages : en
Pages :

Book Description
Autonomous proximity operation missions in the space domain are a multi-faceted, well-studied problem. Performing such a mission around an uncooperative, unknown target presents many difficulties in estimating the relative pose of the target. If the target executes a controlled or uncontrolled (e.g., outgassing from a damaged satellite) continuous low-thrust maneuver, tracking or rendezvousing with the target becomes challenging and potentially dangerous, especially in close proximity. A solution to this problem is desired such that the developed algorithm is amenable to onboard implementation, utilizes stereo vision, can estimate relative pose without initial relative information about the target, and can detect and estimate a continuous low-thrust maneuver. The algorithm presented in this dissertation utilizes an octree-based approach to construct a three-dimensional map of the target while simultaneously estimating the pose in a multiplicative extended Kalman filter. A coarse volume estimate and probabilistic uncertainty mapping are extrapolated from the octree map and used as the initialization of a consider parameter in a consider variable state dimension (VSD) filter following maneuver detection. This framework switches to a thrusting model after the Mahalanobis distance passes a pre-defined threshold, initializing the consider parameters and starting an interacting multiple model (IMM) filter. Assuming an evenly distributed density, the eight density models comprising the IMM establish a mechanism for estimating thrust states by combining the best amalgamation of density models with the coarse volume estimate. Essentially, an online process is created to indirectly obtain mass information from the octree map without directly estimating it. Since target volume is not completely observable, it is not directly estimated and is instead used as a consider parameter with an associated covariance obtained from the octree mapping uncertainty. Despite using coarse volume estimates, the simulated results show good convergence for pose and thrust states as well as accurate maneuver detection for continuous low-thrust maneuvers. Two thrust magnitudes, differing by an order of magnitude, were analyzed while varying the moment of maneuver detection such that the octree mapping was at two different levels of completion. Results obtained from a complete octree mapping demonstrated accuracy both in relative pose estimates and in maneuver detection. As expected, convergence and maneuver detection showed larger errors with a less complete octree map, though the results were still promising. The developed algorithms create an online process of indirect mass estimation to enable thrust estimation once a maneuver is detected. The presented results demonstrate the algorithm's ability to detect unknown maneuvers and accurately estimate relative pose and thrust states. The octree map is shown to markedly reduce the covariance and improve the pose and thrust estimates. Furthermore, utilizing the consider filter, VSD, and IMM solely in the maneuvering model keeps the computational cost low.
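To make the detection step concrete, a minimal sketch of a Mahalanobis-distance maneuver test is given below. This illustrates the general technique, not code from the dissertation; the function and variable names are hypothetical, and the chi-square gate is one common choice of pre-defined threshold.

import numpy as np
from scipy.stats import chi2

def maneuver_detected(innovation, S, p=0.997):
    """Flag a maneuver when the filter innovation's squared Mahalanobis
    distance exceeds a chi-square gate (hypothetical illustration)."""
    d2 = innovation @ np.linalg.solve(S, innovation)   # squared Mahalanobis distance
    threshold = chi2.ppf(p, df=innovation.size)        # gate with dim(innovation) DOF
    return d2 > threshold

# Example: a 3D position innovation tested against its innovation covariance S.
nu = np.array([0.8, -0.3, 1.1])
S = np.diag([0.04, 0.04, 0.09])
print(maneuver_detected(nu, S))  # True -> switch to the thrusting model, start the IMM

Once such a test fires, the framework described above would initialize the consider parameters from the octree-derived volume estimate and hand the state over to the IMM filter.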

Pose Estimation of Uncooperative Spacecraft Using Monocular Vision and Deep Learning

Pose Estimation of Uncooperative Spacecraft Using Monocular Vision and Deep Learning PDF Author: Sumant Sharma
Publisher:
ISBN:
Category :
Languages : en
Pages :

Book Description
This dissertation addresses the design and validation of new pose estimation algorithms for spaceborne vision-based navigation in close proximity to known uncooperative spacecraft. The onboard estimation of the pose, i.e., relative position and attitude, is a key enabling technology for future on-orbit servicing and debris removal missions. The use of vision-based sensors for pose estimation is particularly attractive due to their low volumetric and power requirements, particularly in comparison to sensors such as LiDAR. Past demonstrations of this technology have relied on various combinations of cooperative use of fiducial markers on the target spacecraft, frequent inputs from ground control stations, and post-processing of the images on the ground. This research overcomes these limitations by developing novel pose estimation methods that take as input a single two-dimensional image and a simple three-dimensional (3D) wireframe model of the target. These methods estimate the pose without requiring any a priori pose information or long initialization phases. The first pose estimation method relies on a novel feature detection algorithm based on the filtering of weak image gradients to identify the edges of the target spacecraft in the image, even in the presence of the Earth in the background. Compared with state-of-the-art feature detection-based methods, this method is shown to be more accurate and computationally faster through experiments on flight imagery. The second pose estimation method leverages modern learning-based algorithms by using a convolutional neural network architecture to classify the pose of the target spacecraft in the image. Compared to the feature detection-based methods, this method is shown to be more robust and two orders of magnitude computationally faster during inference in experiments on synthetic imagery. The third pose estimation method, the Spacecraft Pose Network (SPN), combines the higher accuracy potential of feature detection-based methods with the higher robustness and computational efficiency of learning-based methods. The SPN method achieves this by integrating a convolutional neural network with a Gauss-Newton algorithm. The use of a convolutional neural network allows SPN to implicitly perform feature detection without the need for intensive manual tuning of hyperparameters. The use of a Gauss-Newton algorithm allows the direct application of the underlying physics of the pose estimation problem, i.e., the perspective equations, for quantifying the uncertainty in the estimated pose. In contrast to current learning-based methods, the SPN method can be trained using solely synthetic images of a target spacecraft and is shown to generalize its performance to flight imagery of the same target spacecraft. This research also demonstrates that the SPN method can be used for target-in-target pose estimation to handle terminal stages of a docking scenario where only a partial view of the target spacecraft is available. A unique contribution of this research is the generation of the Spacecraft Pose Estimation Dataset (SPEED), which is used to train and evaluate the performance of pose estimation methods. SPEED consists of synthetic images created by fusing OpenGL-based renderings of a spacecraft 3D model with actual meteorological images of the Earth. SPEED also includes actual camera images created using a seven-degrees-of-freedom robotic arm, which positions and orients a vision-based sensor with respect to a full-scale mock-up of a spacecraft. SPEED is being used to host an international competition on pose estimation in collaboration with the European Space Agency. Infinite Orbits is adopting the pose estimation methods developed during this research for onboard deployment during their commercial on-orbit servicing missions.
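The Gauss-Newton step at the heart of such a pipeline can be sketched generically as iterative minimization of the reprojection error under the perspective equations. The following is a simplified illustration under assumed conventions (normalized pinhole camera, no distortion, numerical Jacobian), not the SPN implementation:

import numpy as np
from scipy.spatial.transform import Rotation

def gauss_newton_pose(points3d, obs2d, R, t, iters=10, eps=1e-6):
    """Refine a pose (R, t) by Gauss-Newton on the reprojection error.
    Simplified sketch: unit focal length, forward-difference Jacobian."""
    def residual(dx):
        # Apply a small rotation/translation update, project, and compare.
        dR = Rotation.from_rotvec(dx[:3]).as_matrix()
        pc = points3d @ (dR @ R).T + (t + dx[3:])
        return (pc[:, :2] / pc[:, 2:3] - obs2d).ravel()
    for _ in range(iters):
        r0 = residual(np.zeros(6))
        J = np.stack([(residual(eps * np.eye(6)[i]) - r0) / eps
                      for i in range(6)], axis=1)        # numerical Jacobian
        dx = np.linalg.lstsq(J, -r0, rcond=None)[0]      # Gauss-Newton step
        R = Rotation.from_rotvec(dx[:3]).as_matrix() @ R
        t = t + dx[3:]
    return R, t

In the same spirit, the inverse of J^T J, scaled by the residual variance, gives a first-order covariance of the refined pose, which is the sense in which the perspective equations can quantify the uncertainty in the estimate.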

Non-Cooperative Target Tracking, Fusion and Control

Non-Cooperative Target Tracking, Fusion and Control PDF Author: Zhongliang Jing
Publisher: Springer
ISBN: 9783030080815
Category :
Languages : en
Pages : 362

Book Description
This book gives a concise and comprehensive overview of non-cooperative target tracking, fusion and control. Focusing on algorithms rather than theories for non-cooperative targets including air and space-borne targets, this work explores a number of advanced techniques, including Gaussian mixture cardinalized probability hypothesis density (CPHD) filter, optimization on manifold, construction of filter banks and tight frames, structured sparse representation, and others. Containing a variety of illustrative and computational examples, Non-cooperative Target Tracking, Fusion and Control will be useful for students as well as engineers with an interest in information fusion, aerospace applications, radar data processing and remote sensing.

Robust Visual Detection and Tracking of Complex Objects

Robust Visual Detection and Tracking of Complex Objects PDF Author: Antoine Petit
Publisher:
ISBN:
Category :
Languages : en
Pages : 0

Book Description
In this thesis, we address the problem of fully localizing a known object through computer vision using a monocular camera, which is a central problem in robotics. Particular attention is paid to space robotics applications, with the aim of providing a unified visual localization system for autonomous navigation during space rendezvous and proximity operations. Two main challenges of the problem are tackled: initially detecting the targeted object and then tracking it frame-by-frame, providing the complete pose between the camera and the object, given the 3D CAD model of the object. For detection, the pose estimation process is based on the segmentation of the moving object and on an efficient probabilistic edge-based matching and alignment procedure between a set of synthetic views of the object and a sequence of initial images. For the tracking phase, pose estimation is handled through a 3D model-based tracking algorithm, for which we propose three different types of visual features that pertinently represent the object: its edges, its silhouette, and a set of interest points. The reliability of the localization process is evaluated by propagating the uncertainty from the errors of the visual features. This uncertainty also feeds a linear Kalman filter on the camera velocity parameters. Qualitative and quantitative experiments have been performed on various synthetic and real data, under challenging imaging conditions, showing the efficiency and benefits of the different contributions and their compliance with space rendezvous applications.
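As a sketch of how the propagated feature uncertainty might feed such a filter, offered as a generic illustration rather than the thesis's actual implementation, the following shows one predict/update cycle of a linear Kalman filter on a six-dimensional camera velocity, where R_meas stands in for the covariance propagated from the visual-feature errors:

import numpy as np

def kalman_velocity_step(v, P, z, R_meas, Q):
    """One predict/update cycle of a linear Kalman filter on a 6D camera
    velocity (3 linear + 3 angular components). Hypothetical sketch."""
    F = np.eye(6)                        # constant-velocity prediction model
    v_pred, P_pred = F @ v, F @ P @ F.T + Q
    H = np.eye(6)                        # velocity observed directly from tracking
    S = H @ P_pred @ H.T + R_meas        # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)  # Kalman gain
    v_new = v_pred + K @ (z - H @ v_pred)
    P_new = (np.eye(6) - K @ H) @ P_pred
    return v_new, P_new

A noisier set of visual features yields a larger R_meas, so the filter automatically down-weights the corresponding velocity measurement.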

Monocular Vision Based Localization and Mapping

Monocular Vision Based Localization and Mapping PDF Author: Michal Jama
Publisher:
ISBN:
Category :
Languages : en
Pages :

Book Description
In this dissertation, two applications related to vision-based localization and mapping are considered: (1) improving satellite location estimates from a navigation system by using on-board camera images, and (2) deriving position information from a video stream and using it to aid the autopilot of an unmanned aerial vehicle (UAV). In the first part of this dissertation, a method is presented for analyzing a minimization process called bundle adjustment (BA), which is used in stereo-imagery-based 3D terrain reconstruction to refine estimates of camera poses (positions and orientations). In particular, imagery obtained with pushbroom cameras is of interest. This work proposes a method to identify cases in which BA does not work as intended, i.e., cases in which the pose estimates returned by the BA are no more accurate than estimates provided by a satellite navigation system, due to the existence of degrees of freedom (DOF) in the BA. Use of inaccurate pose estimates causes warping and scaling effects in the reconstructed terrain and prevents the terrain from being used in scientific analysis. The main contributions of this part of the work include: 1) formulation of a method for detecting DOF in the BA (a generic illustration of such a detection appears after this description); and 2) identification that two camera geometries commonly used to obtain stereo imagery have DOF. This part also presents results demonstrating that avoiding the DOF can give significant accuracy gains in aerial imagery. The second part of this dissertation proposes a vision-based system for UAV navigation. This is a monocular vision-based simultaneous localization and mapping (SLAM) system, which measures the position and orientation of the camera and builds a map of the environment using a video stream from a single camera. This differs from common SLAM solutions that use sensors that measure depth, such as LIDAR, stereoscopic cameras, or depth cameras. The SLAM solution was built by significantly modifying and extending a recent open-source SLAM solution that is fundamentally different from the traditional approach to solving the SLAM problem. The modifications made are those needed to provide the position measurements necessary for the navigation solution on a UAV while simultaneously building the map, all while maintaining control of the UAV. The main contributions of this part include: 1) extension of the map-building algorithm to enable it to be used realistically while controlling a UAV and simultaneously building the map; 2) improved performance of the SLAM algorithm at lower camera frame rates; and 3) the first known demonstration of a monocular SLAM algorithm successfully controlling a UAV while simultaneously building the map. This work demonstrates that a fully autonomous UAV that uses monocular vision for navigation is feasible and can be effective in Global Positioning System-denied environments.
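One generic way to detect such unconstrained directions, given here as an illustration rather than as the dissertation's specific method, is to inspect the singular values of the bundle-adjustment Jacobian: directions with near-zero singular values leave the reprojection error unchanged and therefore correspond to DOF.

import numpy as np

def count_dof(J, tol=1e-8):
    """Count degrees of freedom of a bundle-adjustment problem as the number
    of near-zero singular values of its Jacobian (illustrative sketch)."""
    s = np.linalg.svd(J, compute_uv=False)
    # Singular values below tol * largest mark directions in which the
    # reprojection error does not change, i.e., unconstrained parameters.
    return int(np.sum(s < tol * s[0]))

An unanchored BA, for example, exhibits seven such directions, corresponding to the free choice of a global similarity transform (rotation, translation and scale) of the reconstruction.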

Vision-Inertial SLAM Using Natural Features in Outdoor Environments

Vision-Inertial SLAM Using Natural Features in Outdoor Environments PDF Author: Daniel Asmar
Publisher:
ISBN:
Category :
Languages : en
Pages :

Book Description


3D Multi-field Multi-scale Features from Range Data in Spacecraft Proximity Operations

3D Multi-field Multi-scale Features from Range Data in Spacecraft Proximity Operations PDF Author: Brien Roy Flewelling
Publisher:
ISBN:
Category :
Languages : en
Pages :

Book Description
A fundamental problem in spacecraft proximity operations is the determination of the six-degree-of-freedom relative navigation solution between the observer reference frame and a reference frame tied to a proximal body. In the most unconstrained case, the proximal body may be uncontrolled, and the observer spacecraft has no a priori information about the body. A spacecraft in this scenario must simultaneously map the generally poorly known body being observed and safely navigate relative to it. Simultaneous localization and mapping (SLAM) is a difficult problem which has been the focus of research in recent years. The most promising approaches extract local features in 2D or 3D measurements and track them in subsequent observations by matching a descriptor. These methods exist for both active sensors, such as Light Detection and Ranging (LIDAR) or laser RADAR (LADAR), and passive sensors, such as CCD and CMOS camera systems. This dissertation presents a method for fusing the time-of-flight (ToF) range data inherent to scanning LIDAR systems with the passive light-field measurements of optical systems, extracting features which exploit information from each sensor, and solving the unique SLAM problem inherent to spacecraft proximity operations. Scale-space analysis is extended to unstructured 3D point clouds by means of an approximation to the Laplace-Beltrami operator, which computes the scale space on a manifold embedded in 3D object space using Gaussian convolutions based on a geodesic distance weighting. The construction of the scale space is shown to be equivalent both to the application of the diffusion equation to the surface data and to the surface evolution process that results from mean curvature flow. Geometric features are localized in regions of high spatial curvature or large diffusion displacements at multiple scales. The extracted interest points are associated with a local multi-field descriptor constructed from measured data in the object space. Defining features in object space instead of image space is shown to be an important step, making possible the simultaneous consideration of co-registered texture and the associated geometry. These descriptors, known as Multi-Field Diffusion Flow Signatures, encode the shape and multi-texture information of local neighborhoods in textured range data. Multi-Field Diffusion Flow Signatures display utility in difficult space scenarios, including high-contrast and saturating lighting conditions, bland and repeating textures, and non-Lambertian surfaces. The effectiveness and utility of Multi-Field Multi-Scale (MFMS) Features described by Multi-Field Diffusion Flow Signatures are evaluated using real data from proximity operation experiments performed at the Land Air and Space Robotics (LASR) Laboratory at Texas A&M University.
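In simplified form, the geodesic-weighted Gaussian smoothing underlying such a scale space can be sketched as repeated weighted averaging over local neighborhoods of the point cloud. The sketch below substitutes Euclidean k-nearest-neighbor distances for the geodesic weighting used in the dissertation, and all names are illustrative:

import numpy as np
from scipy.spatial import cKDTree

def diffuse_point_cloud(points, sigma, k=16, steps=3):
    """Approximate scale-space smoothing of an unstructured 3D point cloud by
    repeated Gaussian-weighted neighborhood averaging (simplified sketch)."""
    for _ in range(steps):
        tree = cKDTree(points)
        dists, idx = tree.query(points, k=k)              # k nearest neighbors
        w = np.exp(-dists**2 / (2.0 * sigma**2))          # Gaussian weights
        w /= w.sum(axis=1, keepdims=True)
        points = np.einsum('nk,nkd->nd', w, points[idx])  # weighted average
    return points

The per-point displacement between successive smoothing levels approximates the mean curvature flow mentioned above, so points with large displacements are natural candidates for the multi-scale interest points.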

Simultaneous Tracking, Object Registration, and Mapping (STORM)

Simultaneous Tracking, Object Registration, and Mapping (STORM) PDF Author: Vincent P. Kee
Publisher:
ISBN:
Category :
Languages : en
Pages : 91

Book Description
An autonomous system needs to be aware of its surroundings and know where it is in its environment in order to operate robustly in unknown environments. This problem is known as Simultaneous Localization and Mapping (SLAM). SLAM techniques have been successfully implemented on systems operating in the real world. However, most SLAM approaches assume that the environment does not change during operation -- the static world assumption. When this assumption is violated (e.g., an object moves), the SLAM estimate degrades. Consequently, the static world assumption prevents robots from interacting with their environments (e.g., manipulating objects) and restricts them to navigating in static environments. Additionally, most SLAM systems generate maps composed of low-level features that lack information about objects and their locations in the scene. This representation limits the map’s utility, preventing it from being used for tasks beyond navigation, such as object manipulation and task planning. We present Simultaneous Tracking, Object Registration, and Mapping (STORM), a SLAM system that represents an environment as a collection of dynamic objects. STORM enables a robot to build and maintain maps of dynamic environments, use the map estimates to manipulate objects, and localize itself in the map when revisiting the environment. We demonstrate STORM’s capabilities with simulation and real-world experiments and compare its performance against that of a typical SLAM approach.
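The object-centric map that such a system maintains can be pictured with a minimal data-structure sketch. The types and fields below are hypothetical, chosen only to illustrate the idea of a map that is a collection of per-object poses and features rather than one global feature cloud, and are not STORM's actual types:

from dataclasses import dataclass, field
import numpy as np

@dataclass
class ObjectTrack:
    """One dynamic object: its estimated pose plus its own local features."""
    pose: np.ndarray      # 4x4 homogeneous world-from-object transform
    features: np.ndarray  # Nx3 feature points expressed in the object frame

@dataclass
class ObjectMap:
    """A map represented as a collection of independently moving objects."""
    objects: dict = field(default_factory=dict)  # name -> ObjectTrack

    def world_features(self, name):
        """Express one object's features in the world frame."""
        obj = self.objects[name]
        homog = np.hstack([obj.features, np.ones((len(obj.features), 1))])
        return (obj.pose @ homog.T).T[:, :3]

Because each object carries its own pose, a moving object updates only its ObjectTrack; the rest of the map, and the robot's localization against it, remain consistent.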