Implementation of a Distributed Algorithm for Multi-camera Visual Feature Extraction in a Visual Sensor Network Testbed

Author: Alejandro Guillén Arévalo
Publisher:
ISBN:
Category :
Languages : en
Pages :

Book Description
Visual analysis tasks, like detection, recognition, and tracking, are computationally intensive, and it is therefore challenging to perform such tasks in visual sensor networks, where nodes may be equipped with low-power CPUs. A promising solution is to augment the sensor network with processing nodes and to distribute the processing tasks among the processing nodes of the visual sensor network. The objective of this project is to enable a visual sensor network testbed to operate with multiple camera sensors, and to implement an algorithm that computes the allocation of the visual feature tasks to the processing nodes. In the implemented system, the processing nodes can receive and process data from different camera sensors simultaneously. The acquired images are divided into sub-images, whose sizes are computed by solving a linear programming problem. The implemented algorithm performs local optimization in each camera sensor, without data exchange with the other cameras, in order to minimize the communication overhead and the computational load of the camera sensors. The implementation work is performed on a testbed that consists of BeagleBone Black computers with IEEE 802.15.4 or IEEE 802.11 USB modules; the existing code base is written in C++. The implementation is used to assess the performance of the distributed algorithm in terms of completion time. The results show good performance, with lower average completion time.
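The linear program described above has a simple structure when costs are linear in the sub-image size: if node i needs w_i seconds per unit of image area (transmission plus processing), minimizing the completion time T subject to the fractions summing to one yields the closed-form split x_i = T / w_i with T = 1 / Σ(1/w_i). A minimal sketch of that split, with hypothetical per-unit costs (the thesis solves the general LP, not this simplification):

```python
def split_image(unit_costs):
    """Divide one unit of image area among processing nodes so that
    all nodes finish at the same time, minimizing completion time.

    unit_costs[i]: seconds node i needs per unit of image area
    (transmission plus processing). Returns (fractions, completion_time).
    """
    # At the optimum every node is busy until time T, so x_i * w_i = T,
    # and sum(x_i) = 1 gives T = 1 / sum(1 / w_i).
    T = 1.0 / sum(1.0 / w for w in unit_costs)
    fractions = [T / w for w in unit_costs]
    return fractions, T

# Example: three nodes with per-unit costs of 2, 3, and 6 seconds.
fracs, T = split_image([2.0, 3.0, 6.0])
```

With these costs the fastest node receives half the image and all three nodes finish together at T = 1.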

Completion Time Minimization for Distributed Feature Extraction in a Visual Sensor Network Testbed

Author: Jordi Serra Torrens
Publisher:
ISBN:
Category :
Languages : en
Pages :

Book Description
Real-time detection and extraction of visual features in wireless sensor networks is a challenging task due to its computational complexity and the limited processing power of the nodes. A promising approach is to distribute the workload to other nodes of the network by delegating the processing of different regions of the image to different nodes. In this work, a solution that optimally schedules the loads assigned to each node is implemented on a real visual sensor network testbed. To minimize the time required to process an image, the sizes of the subareas assigned to the cooperators are calculated by solving a linear programming problem that takes into account the transmission and processing speed of the nodes and the spatial distribution of the visual features. In order to minimize the global workload, an optimal detection threshold is predicted such that only the most significant features are extracted. The solution is implemented on a visual sensor network testbed consisting of BeagleBone Black computers capable of communicating over IEEE 802.11. The capabilities of the testbed are also extended by adapting a reliable UDP-based transmission protocol capable of multicast transmission. The performance of the implemented algorithms is evaluated on the testbed.
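The threshold-prediction idea above can be sketched simply: given previously observed detector responses and a feature budget, pick the smallest threshold that keeps only the most significant features within budget. The response values and budget below are invented for illustration; the thesis predicts the threshold from feature statistics rather than this direct rank-based rule:

```python
def predict_threshold(responses, budget):
    """Pick the lowest detection threshold that keeps at most
    `budget` features, based on observed detector responses."""
    ranked = sorted(responses, reverse=True)
    if len(ranked) <= budget:
        return 0.0  # everything fits; no thresholding needed
    # Use the first response we must discard as the cutoff.
    return ranked[budget]

scores = [0.9, 0.8, 0.75, 0.6, 0.4, 0.2]
thr = predict_threshold(scores, budget=3)
kept = [s for s in scores if s > thr]
```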

Multi-camera Vision for Smart Environments

Author: Chen Wu
Publisher:
ISBN:
Category :
Languages : en
Pages :

Book Description
Technology is blending into every part of our lives. Rather than having computers intrude on the environment, people prefer them to recede into the background and automatically help us at the right time, on the right thing, and in the right way. A smart environment system senses the environment and the people in it, makes deductions, and then takes action if necessary. Sensors are the "eyes" of smart environments. In this dissertation we consider vision sensors. Image and video data contain rich information, yet they are also challenging to interpret. From obtaining the raw image data from the camera sensors to achieving the application's goal in a smart environment, we need to consider three main components of the system: the hardware platform, vision analysis of the image data, and high-level reasoning pertaining to the end application. A generic vision-related problem, e.g., human pose estimation, can be approached based on different assumptions. In our approach, the system constraints and the application's high-level objective are considered and often define boundary conditions in the design of vision-based algorithms. A multi-camera setup requires distributed processing at each camera node and information sharing through the camera network. Therefore, the computation capacity of the camera nodes and the communication bandwidth define two hard constraints for algorithm design. We first introduce a smart camera architecture and its specific local computation power and wireless communication constraints. We then examine how the problem of human pose detection can be formulated for implementation on this smart camera, and describe the steps taken to achieve real-time operation for an avatar-based gaming application. We then present a human activity analysis technique based on multi-camera fusion. The method defines a hierarchy of coarse and fine level activity classes and employs different visual features to detect the pose or activity in each class.
The camera fusion methods studied in this dissertation include decision fusion and feature fusion. We show the results of experiments in three testbed environments and analyze the performance of the different fusion methods. Although computer vision already involves complicated techniques for interpreting the image data, it still constitutes just the sensing part of the smart environment system, and further application-related high-level reasoning is needed. Modeling such high-level reasoning depends on the specific nature of the problem. We present a case study in this dissertation to demonstrate how the vision processing and high-level reasoning modules can be interfaced to make semantic inferences based on observations. The case study aims to recognize objects in a smart home based on observed user interactions. We make use of the relationship between objects and user activities to infer the objects when related activities are observed. We apply a Markov logic network (MLN) to model such relationships. MLN enables intuitive modeling of relationships, and it also offers the power of graphical models. We show different ways of constructing the knowledge base in MLN, and provide experimental results.
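Decision fusion, one of the camera-fusion methods mentioned above, can be sketched as combining per-camera class posteriors. The sum rule used here, and the posterior values, are illustrative assumptions rather than the dissertation's exact method:

```python
def fuse_decisions(camera_posteriors):
    """Sum-rule decision fusion: average the class posteriors from each
    camera and pick the class with the highest fused score.

    camera_posteriors: list of dicts {class_label: probability}.
    """
    fused = {}
    for posterior in camera_posteriors:
        for label, p in posterior.items():
            fused[label] = fused.get(label, 0.0) + p
    n = len(camera_posteriors)
    fused = {label: s / n for label, s in fused.items()}
    return max(fused, key=fused.get), fused

# Two cameras disagree; a third camera's view breaks the tie.
label, fused = fuse_decisions([
    {"standing": 0.7, "sitting": 0.3},
    {"standing": 0.2, "sitting": 0.8},
    {"standing": 0.65, "sitting": 0.35},
])
```

The sum rule is robust to a single camera's noisy posterior, which is one common reason to prefer it over the product rule when views are occluded.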

An Efficient Online Feature Extraction Algorithm for Neural Networks

Author: Pouya Bozorgmehr
Publisher:
ISBN:
Category :
Languages : en
Pages : 63

Book Description
Finding optimal feature sets for classification tasks is still a fundamental and challenging problem in the area of machine learning. The human visual system performs classification tasks effortlessly using the hierarchical features and efficient coding of its visual pathway. It has been shown that information is encoded using distributed coding schemes early in the visual system, while sparse coding is utilized later in the visual pathway. We propose a biologically motivated method to extract features that encode information according to a specific activation profile. We show how our model, much like the visual system, can learn distributed coding in lower layers and sparse coding in higher layers in an online manner. Online feature extraction is used in biometrics, machine vision, and pattern recognition. Methods that can dynamically extract features and perform online classification are especially important for real-world applications. We introduce online algorithms that are fast and efficient in extracting features for encoding and discriminating the input space. We also show a supervised version of this algorithm that performs feature selection and extraction in alternating steps to achieve fast convergence and high accuracy.
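An online feature-extraction step of the kind described above can be illustrated with Oja's rule, a classical biologically motivated update that learns a principal direction from a stream of inputs one sample at a time. This stands in for the thesis's algorithm, which it does not reproduce; the input stream is toy data:

```python
def oja_step(w, x, lr=0.05):
    """One online update of Oja's rule: move the weight vector toward
    the principal direction of the input stream while the decay term
    y*y*w keeps it approximately unit-norm."""
    y = sum(wi * xi for wi, xi in zip(w, x))  # neuron output
    return [wi + lr * (y * xi - y * y * wi) for wi, xi in zip(w, x)]

# Stream whose variance is dominated by the first coordinate.
w = [0.6, 0.8]
for t in range(500):
    x = [1.0, 0.1] if t % 2 == 0 else [-1.0, -0.1]
    w = oja_step(w, x)
# w converges toward the unit principal direction of the stream.
```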

Activity Based Geometry Dependent Features for Information Processing in Heterogeneous Camera Networks

Author: Erhan Baki Ermiş
Publisher:
ISBN:
Category :
Languages : en
Pages : 318

Book Description
Abstract: Heterogeneous surveillance camera networks permit pervasive, wide-area visual surveillance of urban environments. However, due to the vast amounts of data they produce, human-operator monitoring is not possible and automatic algorithms are needed. In order to develop these automatic algorithms, efficient and effective multi-camera information processing techniques must be developed. Such techniques pose significant challenges in heterogeneous networks because (i) most intuitive features used in video processing are geometric, i.e., they utilize spatial information present in the video frames, and (ii) the camera topology in heterogeneous networks is dynamic and cameras have significantly different observation geometries; consequently, geometric features are not amenable to devising simple and efficient information processing techniques for heterogeneous networks. Based on these observations, we propose activity-based behavior features that have certain geometry-independence properties. Specifically, when the proposed features are used for information processing applications, a location observed by a number of cameras generates the same features across the cameras irrespective of their locations, orientations, and zoom levels. This geometry-invariance property significantly simplifies the multi-camera information processing task in the sense that the network's topology and camera calibration are no longer necessary to fuse information across cameras. We present applications of the proposed features to two such problems: (i) multi-camera correspondence and (ii) multi-camera anomaly detection. In the multi-camera correspondence application we use the activity features and propose a correspondence method that is robust to pose, illumination, and geometric effects, and unsupervised (it does not require any calibration objects).
In addition, through exploitation of the sparsity of activity features combined with compressed sensing principles, we demonstrate that the proposed method is amenable to low communication bandwidth, which is important for distributed systems. We present quantitative and qualitative results with synthetic and real-life examples, which demonstrate that the proposed correspondence method outperforms methods that utilize geometric features when the cameras observe a scene from significantly different orientations. In the second application we consider the problem of abnormal behavior detection in heterogeneous networks, i.e., identification of objects whose behavior differs from the behavior typically observed. We develop a framework that learns the behavior model in various regions of the video frames and performs abnormal behavior detection via statistical methods. We show that, due to the geometry-independence property of the proposed features, models of normal activity obtained in one camera can be used as surrogate models in another camera to successfully perform anomaly detection. We present performance curves to demonstrate that, in realistic urban monitoring scenarios, model training times can be significantly reduced when a new camera is added to a network of cameras. In both of these applications the main enabling principle is the geometry independence of the chosen features, which demonstrates how complex multi-camera information processing problems can be simplified by exploiting this principle. Finally, we present some statistical developments in the wider area of anomaly detection, motivated by the abnormal behavior detection application. We propose test statistics for detection problems with multidimensional observations and present optimality and robustness results.
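The geometry-independence idea above can be sketched as follows: each camera records, per image region, a time series of whether activity was observed there. Regions that view the same physical location produce nearly identical series regardless of camera pose, so correspondence reduces to matching series by correlation, with no calibration. The toy activity data below is invented for illustration:

```python
def correlation(a, b):
    """Pearson correlation of two equal-length numeric sequences."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = sum((x - ma) ** 2 for x in a) ** 0.5
    sb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (sa * sb) if sa and sb else 0.0

def match_regions(cam_a, cam_b):
    """For each region of camera A, find the camera-B region whose
    activity time series correlates best -- no geometry needed."""
    return {ra: max(cam_b, key=lambda rb: correlation(sa, cam_b[rb]))
            for ra, sa in cam_a.items()}

# Per-region activity series (1 = motion observed at that time step).
cam_a = {"a1": [1, 0, 1, 1, 0, 0], "a2": [0, 1, 0, 0, 1, 1]}
cam_b = {"b1": [0, 1, 0, 0, 1, 1], "b2": [1, 0, 1, 1, 0, 0]}
matches = match_regions(cam_a, cam_b)
```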

Active Learning in Multi-camera Networks, with Applications in Person Re-identification

Author: Abir Das
Publisher:
ISBN:
Category : Image processing
Languages : en
Pages : 117

Book Description
With the proliferation of cheap visual sensors, camera networks are everywhere. The ubiquitous presence of cameras opens the door for cutting-edge research in processing and analysis of the huge video data generated by such large-scale camera networks. Re-identification of persons coming in and out of the cameras is an important task. This has remained a challenge to the community for a variety of reasons, such as changes of scale, illumination, and resolution between cameras. All of these lead to transformations of features between cameras, which makes re-identification challenging. The first question addressed in this work is: can we model the way features get transformed between cameras and use it to our advantage to re-identify persons between cameras with non-overlapping views? The similarity between feature histograms and time series data motivated us to apply the principle of Dynamic Time Warping to study the transformation of features by warping the feature space. After capturing the feature warps that describe the transformation of features, the variability of the warp functions was modeled as a function space of these feature warps. The function space not only allowed us to model feasible transformations between pairs of instances of the same target, but also to separate them from the infeasible transformations between instances of different targets. A supervised training phase is employed to learn a discriminating surface between these two classes in the function space.
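Dynamic Time Warping, the principle applied above to feature histograms, can be sketched in a few lines of dynamic programming; the sequences here are toy data, not re-identification features:

```python
def dtw_distance(a, b):
    """Dynamic Time Warping distance between two sequences: the minimum
    cumulative pointwise cost over all monotonic alignments (warps)."""
    inf = float("inf")
    # dp[i][j]: best cost of aligning a[:i] with b[:j]
    dp = [[inf] * (len(b) + 1) for _ in range(len(a) + 1)]
    dp[0][0] = 0.0
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            cost = abs(a[i - 1] - b[j - 1])
            dp[i][j] = cost + min(dp[i - 1][j],      # stretch a
                                  dp[i][j - 1],      # stretch b
                                  dp[i - 1][j - 1])  # advance both
    return dp[-1][-1]

# The same shape traversed at different speeds aligns with zero cost.
d = dtw_distance([1, 2, 3], [1, 2, 2, 3])
```

This elasticity is what lets DTW compare feature histograms whose mass has shifted between cameras.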

Camera Networks

Author: Matthew Turk
Publisher: Springer Nature
ISBN: 3031018117
Category : Computers
Languages : en
Pages : 119

Book Description
As networks of video cameras are installed in many applications, like security and surveillance, environmental monitoring, disaster response, and assisted living facilities, among others, image understanding in camera networks is becoming an important area of research and technology development. There are many challenges that need to be addressed in the process, some of which are listed below:
- Traditional computer vision challenges in tracking and recognition: robustness to pose, illumination, occlusion, and clutter, and recognition of objects and activities;
- Aggregating local information for wide-area scene understanding, like obtaining stable, long-term tracks of objects;
- Positioning of the cameras and dynamic control of pan-tilt-zoom (PTZ) cameras for optimal sensing;
- Distributed processing and scene analysis algorithms;
- Resource constraints imposed by different applications, like security and surveillance, environmental monitoring, disaster response, and assisted living facilities.
In this book, we focus on the basic research problems in camera networks, review the current state of the art, and present a detailed description of some recently developed methodologies. The major underlying theme in all the work presented is to take a network-centric view, whereby the overall decisions are made at the network level. This is sometimes achieved by accumulating all the data at a central server, and at other times by exchanging decisions made by individual cameras based on their locally sensed data. Chapter One starts with an overview of the problems in camera networks and the major research directions. Some of the currently available experimental testbeds are also discussed here. One of the fundamental tasks in the analysis of dynamic scenes is to track objects. Since camera networks cover a large area, the systems need to be able to track over such wide areas, where there could be both overlapping and non-overlapping fields of view of the cameras, as addressed in Chapter Two. Distributed processing is another challenge in camera networks, and recent methods have shown how to do tracking, pose estimation, and calibration in a distributed environment. Consensus algorithms that enable these tasks are described in Chapter Three. Chapter Four summarizes a few approaches to object and activity recognition in both distributed and centralized camera network environments. All these methods have focused primarily on the analysis side, given that images are being obtained by the cameras. Efficient utilization of such networks often calls for active sensing, whereby the acquisition and analysis phases are closely linked. We discuss this issue in detail in Chapter Five and show how collaborative and opportunistic sensing in a camera network can be achieved. Finally, Chapter Six concludes the book by highlighting the major directions for future research. Table of Contents: An Introduction to Camera Networks / Wide-Area Tracking / Distributed Processing in Camera Networks / Object and Activity Recognition / Active Sensing / Future Research Directions
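The consensus algorithms mentioned above can be illustrated with distributed averaging: each camera repeatedly nudges its local estimate toward its neighbours' values, and all nodes converge to the network-wide average without a central server. The ring topology and step size below are assumptions chosen for the sketch:

```python
def consensus(values, neighbors, step=0.3, iterations=200):
    """Average consensus: at each round every node moves toward its
    neighbors' estimates; all estimates converge to the global mean
    (for step sizes small enough for the given graph)."""
    x = list(values)
    for _ in range(iterations):
        # Synchronous update: the comprehension reads the old list x.
        x = [xi + step * sum(x[j] - xi for j in neighbors[i])
             for i, xi in enumerate(x)]
    return x

# Four cameras in a ring, each starting with a local measurement.
ring = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
estimates = consensus([4.0, 8.0, 6.0, 2.0], ring)
# All estimates converge to the mean, 5.0.
```

Because each update only uses neighbours' values, the same loop runs unchanged whether the "network" is a Python list or cameras exchanging messages.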

Scientific and Technical Aerospace Reports

Author:
Publisher:
ISBN:
Category : Aeronautics
Languages : en
Pages : 836

Book Description


International Aerospace Abstracts

Author:
Publisher:
ISBN:
Category : Aeronautics
Languages : en
Pages : 940

Book Description


Index to IEEE Publications

Author: Institute of Electrical and Electronics Engineers
Publisher:
ISBN:
Category : Electric engineering
Languages : en
Pages : 832

Book Description
Issues for 1973- cover the entire IEEE technical literature.