Control Grid Motion Estimation for Efficient Application of Optical Flow
Author: Christine M. Zwart
Publisher: Springer Nature
ISBN: 3031015207
Category : Technology & Engineering
Languages : en
Pages : 79
Book Description
Motion estimation is a long-standing cornerstone of image and video processing. Most notably, motion estimation serves as the foundation for many of today's ubiquitous video coding standards, including H.264. Motion estimators also play key roles in countless other applications that serve the consumer, industrial, biomedical, and military sectors. Of the many available motion estimation techniques, optical flow is widely regarded as the most flexible. The flexibility offered by optical flow is particularly useful for complex registration and interpolation problems, but comes at a considerable computational expense. As the volume and dimensionality of the data that motion estimators are applied to continue to grow, that expense becomes more and more costly. Control grid motion estimators based on optical flow can accomplish motion estimation with flexibility similar to pure optical flow, but at a fraction of the computational expense. Control grid methods also offer the added benefit of representing motion far more compactly than pure optical flow. This booklet explores control grid motion estimation and provides implementations of the approach that apply to data of multiple dimensionalities. Important current applications of control grid methods, including registration and interpolation, are also developed. Table of Contents: Introduction / Control Grid Interpolation (CGI) / Application of CGI to Registration Problems / Application of CGI to Interpolation Problems / Discussion and Conclusions
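As a rough illustration of the control-grid idea the blurb describes (motion stored only at sparse control points, with a dense field recovered by interpolation), the sketch below bilinearly interpolates motion vectors inside one grid block from its four corner control points. This is not the book's implementation; the function name and shapes are invented for illustration.

```python
import numpy as np

def cgi_motion_field(corner_vectors, block_h, block_w):
    """Bilinearly interpolate a dense motion field inside one grid block
    from motion vectors defined at its four corner control points.

    corner_vectors: array of shape (2, 2, 2) -- (dy, dx) vectors at the
    top-left, top-right, bottom-left, and bottom-right corners.
    """
    ys = np.linspace(0.0, 1.0, block_h)[:, None]   # vertical weights
    xs = np.linspace(0.0, 1.0, block_w)[None, :]   # horizontal weights
    tl, tr = corner_vectors[0, 0], corner_vectors[0, 1]
    bl, br = corner_vectors[1, 0], corner_vectors[1, 1]
    # Standard bilinear blend of the four corner vectors.
    field = ((1 - ys) * (1 - xs))[..., None] * tl \
          + ((1 - ys) * xs)[..., None] * tr \
          + (ys * (1 - xs))[..., None] * bl \
          + (ys * xs)[..., None] * br
    return field  # shape (block_h, block_w, 2)

# Four corner vectors describe the whole 5x5 block's motion.
corners = np.array([[[0.0, 0.0], [0.0, 2.0]],
                    [[2.0, 0.0], [2.0, 2.0]]])
field = cgi_motion_field(corners, 5, 5)
print(field[0, 0], field[-1, -1])  # corner pixels reproduce the control vectors
```

The compactness claim is visible here: 4 vectors represent a 25-pixel field, whereas pure optical flow would store all 25.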
Scale Space and PDE Methods in Computer Vision
Author: Ron Kimmel
Publisher: Springer
ISBN: 3540320121
Category : Computers
Languages : en
Pages : 644
Book Description
Welcome to the proceedings of the 5th International Conference on Scale-Space and PDE Methods in Computer Vision. The scale-space concept was introduced by Iijima more than 40 years ago and became popular later on through the works of Witkin and Koenderink. It is at the junction of three major schools of thought in image processing and computer vision: the design of filters, axiomatic approaches based on partial differential equations (PDEs), and variational methods for image regularization. Scale-space ideas belong to the mathematically best-understood approaches in image analysis. They have entered numerous successful applications in medical imaging and a number of other fields where they often give results of very high quality. This conference followed biennial meetings held in Utrecht, Corfu, Vancouver and Skye. It took place in a little castle (Schlösschen Schönburg) near the small town of Hofgeismar, Germany. Inspired by the very successful previous meeting at Skye, we kept the style of gathering people in a slightly remote and scenic place in order to encourage many fruitful discussions during the day and in the evening. We received 79 full paper submissions of a high standard that is characteristic for the scale-space conferences. Each paper was reviewed by three experts from the Program Committee, sometimes helped by additional reviewers. Based on the results of these reviews, 53 papers were accepted. We selected 24 manuscripts for oral presentation and 29 for poster presentation.
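As a minimal sketch of the scale-space concept the proceedings revolve around (a family of progressively Gaussian-smoothed versions of a signal), the toy example below builds a 1-D Gaussian scale-space with NumPy. The helper names are made up for illustration.

```python
import numpy as np

def gaussian_kernel(sigma):
    """Normalized 1-D Gaussian kernel truncated at ~3 sigma."""
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1, dtype=float)
    k = np.exp(-x**2 / (2 * sigma**2))
    return k / k.sum()

def scale_space(signal, sigmas):
    """One smoothed copy of the signal per scale parameter sigma."""
    return [np.convolve(signal, gaussian_kernel(s), mode='same')
            for s in sigmas]

sig = np.zeros(64)
sig[32] = 1.0                         # an impulse
levels = scale_space(sig, [1.0, 2.0, 4.0])
# Coarser scales spread the impulse out, so its peak height drops.
print([round(lvl.max(), 3) for lvl in levels])
```

The defining property on display: as sigma grows, fine structure is suppressed while total mass is preserved, which is the axiomatic behavior the PDE view (the heat equation) formalizes.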
Bandwidth Extension of Speech Using Perceptual Criteria
Author: Visar Berisha
Publisher: Springer Nature
ISBN: 3031015215
Category : Technology & Engineering
Languages : en
Pages : 71
Book Description
Bandwidth extension of speech is used in the International Telecommunication Union G.729.1 standard in which the narrowband bitstream is combined with quantized high-band parameters. Although this system produces high-quality wideband speech, the additional bits used to represent the high band can be further reduced. In addition to the algorithm used in the G.729.1 standard, bandwidth extension methods based on spectrum prediction have also been proposed. Although these algorithms do not require additional bits, they perform poorly when the correlation between the low and the high band is weak. In this book, two wideband speech coding algorithms that rely on bandwidth extension are developed. The algorithms operate as wrappers around existing narrowband compression schemes. More specifically, in these algorithms, the low band is encoded using an existing toll-quality narrowband system, whereas the high band is generated using the proposed extension techniques. The first method relies only on transmitted high-band information to generate the wideband speech. The second algorithm uses a constrained minimum mean square error estimator that combines transmitted high-band envelope information with a predictive scheme driven by narrowband features. Both algorithms make use of novel perceptual models based on loudness that determine optimum quantization strategies for wideband recovery and synthesis. Objective and subjective evaluations reveal that the proposed system performs at a lower average bit rate while improving speech quality when compared to other similar algorithms.
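The second algorithm's core tool, a minimum mean square error estimator predicting high-band quantities from narrowband features, can be illustrated in its simplest linear (Wiener) form. This is a generic sketch on synthetic data, not the book's constrained estimator; the matrix A and all variable names are invented.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000
# Toy joint data: narrowband features y, correlated high-band envelope x.
y = rng.normal(size=(n, 3))
A = np.array([[0.8, 0.1, 0.0],
              [0.0, 0.5, 0.3]])
x = y @ A.T + 0.1 * rng.normal(size=(n, 2))   # noise std 0.1

# Linear MMSE estimator: x_hat = mu_x + C_xy C_yy^{-1} (y - mu_y)
mu_x, mu_y = x.mean(0), y.mean(0)
xc, yc = x - mu_x, y - mu_y
C_xy = xc.T @ yc / n
C_yy = yc.T @ yc / n
W = C_xy @ np.linalg.inv(C_yy)

x_hat = mu_x + (y - mu_y) @ W.T
mse = np.mean((x - x_hat) ** 2)
print(round(mse, 3))   # close to the 0.1**2 noise floor
```

When the low-band/high-band correlation is weak, C_xy shrinks toward zero and the estimate collapses to the mean, which is exactly the failure mode the blurb attributes to pure spectrum-prediction methods.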
Virtual Design of an Audio Lifelogging System
Author: Brian Mears
Publisher: Springer Nature
ISBN: 3031015258
Category : Technology & Engineering
Languages : en
Pages : 63
Book Description
The availability of inexpensive, custom, highly integrated circuits is enabling some very powerful systems that bring together sensors, smart phones, wearables, cloud computing, and other technologies. To design these types of complex systems we are advocating a top-down simulation methodology to identify problems early. This approach enables software development to start prior to expensive chip and hardware development. We call the overall approach virtual design. This book explains why simulation has become important for chip design and provides an introduction to some of the simulation methods used. The audio lifelogging research project demonstrates the virtual design process in practice. The goals of this book are to: explain how silicon design has become more closely involved with system design; show how virtual design enables top-down design; explain the utility of simulation at different abstraction levels; show how open-source simulation software was used in audio lifelogging. The target audience for this book is faculty, engineers, and students who are interested in developing digital devices for Internet of Things (IoT) types of products.
Sensor Analysis for the Internet of Things
Author: Michael Stanley
Publisher: Springer Nature
ISBN: 3031015266
Category : Technology & Engineering
Languages : en
Pages : 113
Book Description
While it may be attractive to view sensors as simple transducers which convert physical quantities into electrical signals, the truth of the matter is more complex. The engineer should have a proper understanding of the physics involved in the conversion process, including interactions with other measurable quantities. A deep understanding of these interactions can be leveraged to apply sensor fusion techniques to minimize noise and/or extract additional information from sensor signals. Advances in microcontroller and MEMS manufacturing, along with improved internet connectivity, have enabled cost-effective wearable and Internet of Things sensor applications. At the same time, machine learning techniques have gone mainstream, so that those same applications can now be more intelligent than ever before. This book explores these topics in the context of a small set of sensor types. We provide some basic understanding of sensor operation for accelerometers, magnetometers, gyroscopes, and pressure sensors. We show how information from these can be fused to provide estimates of orientation. Then we explore the topics of machine learning and sensor data analytics.
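The fusion of gyroscope and accelerometer data into an orientation estimate, which the book covers, can be hinted at with the classic complementary filter: integrate the fast-but-drifting gyro, and continually pull the estimate toward the accelerometer's gravity-derived tilt. This is a generic textbook sketch, not the book's algorithm; all values are invented.

```python
def complementary_filter(angle, gyro_rate, accel_angle, dt, alpha=0.98):
    """One update step: integrate the gyro reading, then blend in the
    accelerometer's tilt estimate to correct long-term drift."""
    return alpha * (angle + gyro_rate * dt) + (1 - alpha) * accel_angle

# Static device: the gyro reads only a small bias (deg/s), while the
# accelerometer reports the true 10-degree tilt from gravity.
angle, dt = 0.0, 0.01
for _ in range(2000):
    angle = complementary_filter(angle, gyro_rate=0.5,
                                 accel_angle=10.0, dt=dt)
print(round(angle, 2))  # settles near 10 degrees (plus a small bias offset)
```

Neither sensor alone suffices: integrating the biased gyro would drift without bound, and the accelerometer alone is noisy under motion; the blend exploits their complementary error characteristics.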
Cognitive Fusion for Target Tracking
Author: Ioannis Kyriakides
Publisher: Springer Nature
ISBN: 3031015282
Category : Technology & Engineering
Languages : en
Pages : 57
Book Description
The adaptive configuration of nodes in a sensor network has the potential to improve sequential estimation performance by intelligently allocating limited sensor network resources. In addition, the use of heterogeneous sensing nodes provides a diversity of information that also enhances estimation performance. This work reviews cognitive systems and presents a cognitive fusion framework for sequential state estimation using adaptive configuration of heterogeneous sensing nodes and heterogeneous data fusion. This work also provides an application of cognitive fusion to the sequential estimation problem of target tracking using foveal and radar sensors.
Engineer Your Software!
Author: Scott A. Whitmire
Publisher: Springer Nature
ISBN: 3031015304
Category : Technology & Engineering
Languages : en
Pages : 121
Book Description
Software development is hard, but creating good software is even harder, especially if your main job is something other than developing software. Engineer Your Software! opens the world of software engineering, weaving engineering techniques and measurement into software development activities. Focusing on architecture and design, Engineer Your Software! claims that no matter how you write software, design and engineering matter and can be applied at any point in the process. It provides advice, patterns, design criteria, measures, and techniques that will help you get it right the first time, along with solutions to many vexing issues that developers run into time and time again. These lessons, developed over 40 years of creating large software applications, are sprinkled with real-world examples from actual software projects. Along the way, the author describes common design principles and design patterns that can make life a lot easier for anyone tasked with writing anything from a simple script to the largest enterprise-scale systems.
A Survey of Blur Detection and Sharpness Assessment Methods
Author: Juan Andrade
Publisher: Springer Nature
ISBN: 3031015290
Category : Technology & Engineering
Languages : en
Pages : 95
Book Description
Blurring is an almost omnipresent effect in natural images. The main causes of blurring in images include: (a) the existence of objects at different depths within the scene, which is known as defocus blur; (b) blurring due to motion, either of objects in the scene or of the imaging device; and (c) blurring due to atmospheric turbulence. Automatic estimation of spatially varying sharpness/blurriness has several applications, including depth estimation, image quality assessment, information retrieval, and image restoration, among others. There are some cases in which blur is intentionally introduced or enhanced; for example, in artistic photography and cinematography, blur is intentionally introduced to emphasize a certain image region. Bokeh is a technique that introduces defocus blur for aesthetic purposes. Additionally, in trending applications like augmented and virtual reality, blur is usually introduced in order to provide or enhance depth perception. Digital images and videos are produced every day in astonishing amounts, and the demand for higher quality is constantly rising, which creates a need for advanced image quality assessment. Additionally, image quality assessment is important for the performance of image processing algorithms. It has been determined that image noise and artifacts can affect the performance of algorithms such as face detection and recognition, image saliency detection, and video target tracking. Therefore, image quality assessment (IQA) has been a topic of intense research in the fields of image processing and computer vision. Since humans are the end consumers of multimedia signals, subjective quality metrics provide the most reliable results; however, their cost, in addition to their time requirements, makes them unfeasible for practical applications. Thus, objective quality metrics are usually preferred.
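One of the simplest no-reference sharpness scores surveyed in this literature is the variance of the Laplacian: blur removes high-frequency content, so the Laplacian response (and hence its variance) drops on blurry images. The sketch below is a generic illustration on synthetic data, not a method from the book.

```python
import numpy as np

def laplacian_variance(img):
    """Variance of the 4-neighbor Laplacian: a common no-reference
    sharpness score. Higher values mean more high-frequency content."""
    lap = (-4 * img
           + np.roll(img, 1, 0) + np.roll(img, -1, 0)
           + np.roll(img, 1, 1) + np.roll(img, -1, 1))
    return lap.var()

rng = np.random.default_rng(1)
sharp = rng.random((64, 64))
# Blur by averaging each pixel with its 3x3 neighborhood (crude low-pass).
blurred = sum(np.roll(np.roll(sharp, dy, 0), dx, 1)
              for dy in (-1, 0, 1) for dx in (-1, 0, 1)) / 9.0
print(laplacian_variance(sharp) > laplacian_variance(blurred))  # True
```

Applying the same score over local windows rather than the whole frame yields the spatially varying sharpness maps used for depth estimation and defocus segmentation.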
Latency and Distortion of Electromagnetic Trackers for Augmented Reality Systems
Author: Henry Himberg
Publisher: Springer Nature
ISBN: 3031015223
Category : Technology & Engineering
Languages : en
Pages : 173
Book Description
Augmented reality (AR) systems are often used to superimpose virtual objects or information on a scene to improve situational awareness. Delays in the display system or inaccurate registration of objects destroy the sense of immersion a user experiences when using AR systems. AC electromagnetic trackers are ideal for these applications when combined with head orientation prediction to compensate for display system delays. Unfortunately, without expensive calibration techniques, these trackers do not perform well in environments that contain conductive or ferrous materials, due to magnetic field distortion. In our work we focus on both the prediction and distortion compensation aspects of this application, developing a "small footprint" predictive filter for display lag compensation and a simplified calibration system for AC magnetic trackers. In the first phase of our study we presented a novel method of tracking angular head velocity from quaternion orientation using an Extended Kalman Filter in both single model (DQEKF) and multiple model (MMDQ) implementations. In the second phase of our work we have developed a new method of mapping the magnetic field generated by the tracker without high-precision measurement equipment. This method uses simple fixtures with multiple sensors in a rigid geometry to collect magnetic field data in the tracking volume. We have developed a new algorithm to process the collected data and generate a map of the magnetic field distortion that can be used to compensate for distorted measurement data. Table of Contents: List of Tables / Preface / Acknowledgments / Delta Quaternion Extended Kalman Filter / Multiple Model Delta Quaternion Filter / Interpolation Volume Calibration / Conclusion / References / Authors' Biographies
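The prediction side of this work rests on delta quaternions: small incremental rotations, derived from estimated angular velocity, that advance the current head orientation ahead of the display latency. The sketch below shows only that propagation step in its simplest form (axis-angle to quaternion, then quaternion multiplication); it is not the DQEKF itself, and all names are invented.

```python
import numpy as np

def delta_quaternion(omega, dt):
    """Incremental rotation quaternion (w, x, y, z) for angular
    velocity omega (rad/s) applied over dt seconds."""
    theta = np.linalg.norm(omega) * dt
    if theta < 1e-12:
        return np.array([1.0, 0.0, 0.0, 0.0])
    axis = omega / np.linalg.norm(omega)
    return np.concatenate([[np.cos(theta / 2)], np.sin(theta / 2) * axis])

def quat_mult(q, r):
    """Hamilton product of two (w, x, y, z) quaternions."""
    w0, x0, y0, z0 = q
    w1, x1, y1, z1 = r
    return np.array([w0*w1 - x0*x1 - y0*y1 - z0*z1,
                     w0*x1 + x0*w1 + y0*z1 - z0*y1,
                     w0*y1 - x0*z1 + y0*w1 + z0*x1,
                     w0*z1 + x0*y1 - y0*x1 + z0*w1])

# Predict head orientation 50 ms ahead while rotating about z at 1 rad/s.
q = np.array([1.0, 0.0, 0.0, 0.0])          # identity orientation
q_pred = quat_mult(q, delta_quaternion(np.array([0.0, 0.0, 1.0]), 0.05))
print(np.round(q_pred, 4))
```

In the filters described here, the angular velocity fed to this step is itself estimated from successive tracker quaternions, which is what lets the renderer compensate for display lag.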
Secure Sensor Cloud
Author: Vimal Kumar
Publisher: Springer Nature
ISBN: 3031015274
Category : Technology & Engineering
Languages : en
Pages : 126
Book Description
The sensor cloud is a new computing paradigm for Wireless Sensor Networks (WSNs), which facilitates resource sharing and provides a platform to integrate different sensor networks where multiple users can build their own sensing applications at the same time. It enables a multi-user, on-demand sensory system, where computing, sensing, and wireless network resources are shared among applications. Therefore, it has inherent challenges for providing security and privacy across the sensor cloud infrastructure. With the integration of WSNs with different ownerships, and users running a variety of applications including their own code, there is a need for a risk assessment mechanism to estimate the likelihood and impact of attacks on the life of the network. The data being generated by the wireless sensors in a sensor cloud need to be protected against adversaries, which may be outsiders as well as insiders. Similarly, the code disseminated to the sensors within the sensor cloud needs to be protected against inside and outside adversaries. Moreover, since the wireless sensors cannot support complex and energy-intensive measures, the lightweight schemes for integrity, security, and privacy of the data have to be redesigned. The book starts with the motivation and architecture discussion of a sensor cloud. Due to the integration of multiple WSNs running user-owned applications and code, attacks become more likely. Thus, next, we discuss a risk assessment mechanism to estimate the likelihood and impact of attacks on these WSNs in a sensor cloud, using a framework that allows the security administrator to better understand the threats present and take necessary actions. Then, we discuss integrity- and privacy-preserving data aggregation in a sensor cloud, as it becomes harder to protect data in this environment.
Integrity of data can be compromised as it becomes easier for an attacker to inject false data in a sensor cloud, and due to the hop-by-hop nature of aggregation, the privacy of data could be leaked as well. Next, the book discusses a fine-grained access control scheme which works on the secure aggregated data in a sensor cloud. This scheme uses Attribute Based Encryption (ABE) to achieve the objective. Furthermore, to securely and efficiently disseminate application code in a sensor cloud, we present a secure code dissemination algorithm which first reduces the amount of code to be transmitted from the base station to the sensor nodes. It then uses Symmetric Proxy Re-encryption along with Bloom filters and Hash-based Message Authentication Codes (HMACs) to protect the code against eavesdropping and false code injection attacks.
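The HMAC building block used against false code injection can be sketched in a few lines: tag each code-image chunk, bound to its sequence number, under a shared key, so a node rejects chunks that were modified or reordered in transit. This is a generic illustration with Python's standard `hmac` module, not the book's full dissemination protocol (which also involves proxy re-encryption and Bloom filters); the key and data are invented.

```python
import hashlib
import hmac

def tag_chunk(key, seq, chunk):
    """Authenticate one code-image chunk together with its sequence
    number, so injected or reordered chunks fail verification."""
    return hmac.new(key, seq.to_bytes(4, 'big') + chunk,
                    hashlib.sha256).digest()

def verify_chunk(key, seq, chunk, tag):
    # Constant-time comparison avoids leaking tag bytes via timing.
    return hmac.compare_digest(tag_chunk(key, seq, chunk), tag)

key = b'shared-network-key'
chunk = b'\xde\xad\xbe\xef' * 8          # a toy firmware fragment
tag = tag_chunk(key, 0, chunk)
print(verify_chunk(key, 0, chunk, tag))            # authentic chunk accepted
print(verify_chunk(key, 0, chunk + b'\x00', tag))  # tampered chunk rejected
```

Binding the sequence number into the tag is what stops an attacker from replaying a valid chunk at the wrong position in the code image.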