Spatio-Temporal Stream Reasoning with Adaptive State Stream Generation
Author: Daniel de Leng
Publisher: Linköping University Electronic Press
ISBN: 9176854760
Category :
Languages : en
Pages : 153
Book Description
Much of today's data is generated incrementally over time by a large variety of producers, ranging from quantitative sensor observations produced by robot systems to complex unstructured human-generated texts on social media. With data so abundant, making sense of these streams through reasoning is challenging. Reasoning over streams is particularly relevant for autonomous robotic systems that operate in a physical environment: they commonly observe this environment through incremental observations, gradually refining information about their surroundings. This makes robust management of streaming data and its refinement an important problem. Many contemporary approaches to stream reasoning focus on querying data streams to generate higher-level information, relying on well-known database techniques. Other approaches apply logic-based reasoning techniques, but these rarely consider the provenance of their symbolic interpretations. In this thesis, we integrate techniques for logic-based spatio-temporal stream reasoning with the adaptive generation of the state streams over which that reasoning is performed. This combination addresses both the challenge of reasoning over streaming data and the problem of robustly managing streaming data and its refinement. The main contributions of this thesis are (1) a logic-based spatio-temporal reasoning technique that combines temporal reasoning with qualitative spatial reasoning; (2) an adaptive reconfiguration procedure for generating and maintaining the data streams required for spatio-temporal stream reasoning; and (3) the integration of these two techniques into a stream reasoning framework. The proposed spatio-temporal stream reasoning technique can reason with intertemporal spatial relations by leveraging landmarks. Adaptive state stream generation allows the framework to adapt to situations in which the set of available streaming resources changes. Management of streaming resources is formalised in the DyKnow model, which introduces a configuration life-cycle to adaptively generate state streams. The DyKnow-ROS stream reasoning framework is a concrete realisation of this model that extends the Robot Operating System (ROS). DyKnow-ROS has been deployed on the SoftBank Robotics NAO platform to demonstrate the system's capabilities in a case study on run-time adaptive reconfiguration. The results show that the proposed system, by combining reasoning over and reasoning about streams, can robustly perform spatio-temporal stream reasoning even when the availability of streaming resources changes.
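To make the core reasoning step concrete, the sketch below shows formula progression, a standard technique for checking temporal formulas incrementally over a state stream, one state at a time. It is a minimal illustration under simplifying assumptions (a metric-free logic whose "always"/"eventually" operators take atomic subformulas, invented predicate names), not the thesis's actual implementation.

```python
# Formula progression: each incoming state partially evaluates a temporal
# formula, leaving a residual formula to check against future states.
# For brevity, "always" and "eventually" wrap atomic subformulas only.

TRUE, FALSE = "TRUE", "FALSE"

def progress(formula, state):
    """Rewrite `formula` against one state; return TRUE/FALSE or a residual."""
    op = formula[0]
    if op == "atom":                              # ("atom", name)
        return TRUE if state.get(formula[1], False) else FALSE
    if op == "always":                            # ("always", ("atom", name))
        if progress(formula[1], state) == FALSE:
            return FALSE                          # violated now, so violated forever
        return formula                            # keep watching future states
    if op == "eventually":                        # ("eventually", ("atom", name))
        if progress(formula[1], state) == TRUE:
            return TRUE                           # satisfied once is enough
        return formula
    raise ValueError("unknown operator: " + op)

# Usage: monitor the invariant "always close_to_landmark" over a stream.
phi = ("always", ("atom", "close_to_landmark"))
for state in [{"close_to_landmark": True}, {"close_to_landmark": False}]:
    phi = progress(phi, state)
    if phi in (TRUE, FALSE):
        break
print(phi)  # FALSE: the second state violates the invariant
```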
Robust Stream Reasoning Under Uncertainty
Author: Daniel de Leng
Publisher: Linköping University Electronic Press
ISBN: 9176850137
Category :
Languages : en
Pages : 234
Book Description
Vast amounts of data are continually generated by a wide variety of producers, ranging from quantitative sensor observations produced by robot systems to complex unstructured human-generated texts on social media. With data so abundant, the ability to make sense of these streams through reasoning is of great importance. Reasoning over streams is particularly relevant for autonomous robotic systems that operate in physical environments: they commonly observe these environments through incremental observations, gradually refining information about their surroundings. This makes robust management of streaming data and its refinement an important problem. Many contemporary approaches to stream reasoning focus on querying data streams to generate higher-level information, relying on well-known database techniques. Other approaches apply logic-based reasoning techniques, but these rarely consider the provenance of their symbolic interpretations. In this work, we integrate techniques for logic-based stream reasoning with the adaptive generation of the state streams over which that reasoning is performed. This combination addresses both the challenge of reasoning over uncertain streaming data and the problem of robustly managing streaming data and its refinement. The main contributions of this work are (1) a logic-based temporal reasoning technique, based on path checking under uncertainty, that combines temporal reasoning with qualitative spatial reasoning; (2) an adaptive reconfiguration procedure for generating and maintaining the data streams required for spatio-temporal stream reasoning; and (3) the integration of these two techniques into a stream reasoning framework. The proposed spatio-temporal stream reasoning technique can reason with intertemporal spatial relations by leveraging landmarks. Adaptive state stream generation allows the framework to adapt to situations in which the set of available streaming resources changes. Management of streaming resources is formalised in the DyKnow model, which introduces a configuration life-cycle to adaptively generate state streams. The DyKnow-ROS stream reasoning framework is a concrete realisation of this model that extends the Robot Operating System (ROS). DyKnow-ROS has been deployed on the SoftBank Robotics NAO platform to demonstrate the system's capabilities in a case study on run-time adaptive reconfiguration. The results show that the proposed system, by combining reasoning over and reasoning about streams, can robustly perform stream reasoning even when the availability of streaming resources changes.
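The "under uncertainty" aspect can be sketched with three-valued (Kleene) evaluation, where a missing stream value yields an unknown verdict instead of a forced guess. The window-based checker and field names below are illustrative assumptions, not the thesis's path-checking procedure.

```python
# Three-valued path checking: missing observations (e.g., sensor dropout)
# produce UNKNOWN rather than an arbitrary TRUE/FALSE verdict.

TRUE, FALSE, UNKNOWN = "TRUE", "FALSE", "UNKNOWN"

def check_atom(state, name):
    if name not in state:          # no observation: no grounds for a verdict
        return UNKNOWN
    return TRUE if state[name] else FALSE

def check_always(states, name):
    """Verdict for 'always name' over a finite window of states."""
    verdict = TRUE
    for state in states:
        v = check_atom(state, name)
        if v == FALSE:
            return FALSE           # one definite violation settles it
        if v == UNKNOWN:
            verdict = UNKNOWN      # stay undecided unless refuted later
    return verdict

window = [{"obstacle_free": True}, {}, {"obstacle_free": True}]
print(check_always(window, "obstacle_free"))  # UNKNOWN: one sample is missing
```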
Exploring C2 Capability and Effectiveness in Challenging Situations
Author: Magdalena Granåsen
Publisher: Linköping University Electronic Press
ISBN: 917685082X
Category :
Languages : en
Pages : 66
Book Description
Modern societies are affected by various threats and hazards, including natural disasters, cyber-attacks, extreme weather events and inter-state conflicts. Managing these challenging situations requires immediate action, suspension of ordinary procedures, decision making under uncertainty and coordinated effort. In other words, challenging situations put high demands on command and control (C2) capability. To strengthen C2 capability, it is vital to identify the prerequisites for effective coordination and direction within the domain of interest. This thesis explores C2 capability and effectiveness in three domains: interorganizational crisis management, military command and control, and cyber defence operations. The thesis aims to answer three research questions: (1) What constitutes C2 capability? (2) What constitutes C2 effectiveness? and (3) How can C2 effectiveness be assessed? The work was carried out as two case studies and one systematic literature review. The main contributions of the thesis are the identification of perspectives on C2 capability in challenging situations and an overview of approaches to C2 effectiveness assessment. Based on the results of the three studies, six recurring perspectives on capability were identified across the domains studied: interaction (collaboration), direction and coordination, relationships, situation awareness, resilience and preparedness. The domains differ in which of these perspectives are most emphasized in order to obtain C2 capability. C2 effectiveness is defined as the extent to which a C2 system succeeds in achieving its intended result. The thesis discusses the interconnectedness of performance and effectiveness measures, and concludes that there is no unified view on the difference between measures of effectiveness and measures of performance. Several approaches to effectiveness assessment were identified: assessment may be conducted with respect to one specific issue, in relation to a defined goal for a C2 function, or using a more exploratory approach.
Formal Verification of Tree Ensembles in Safety-Critical Applications
Author: John Törnblom
Publisher: Linköping University Electronic Press
ISBN: 917929748X
Category : Electronic books
Languages : en
Pages : 41
Book Description
In the presence of data and computational resources, machine learning can be used to synthesize software automatically. For example, machines are now capable of learning complicated pattern recognition tasks and sophisticated decision policies, two key capabilities in autonomous cyber-physical systems. Unfortunately, humans find software synthesized by machine learning algorithms difficult to interpret, which currently limits its use in safety-critical applications such as medical diagnosis and avionic systems. In particular, successful deployment of safety-critical systems mandates rigorous verification activities, which often rely on human insight, e.g., to identify scenarios in which the system shall be tested. A natural pathway towards a viable verification strategy for such systems is to leverage formal verification techniques, which, given a formal specification, can provide definitive guarantees with little human intervention. However, formal verification suffers from scalability issues with respect to system complexity. In this thesis, we investigate the limits of current formal verification techniques when applied to a class of machine learning models called tree ensembles, and identify model-specific characteristics that can be exploited to improve the performance of verification algorithms applied to them. To this end, we develop two formal verification techniques specifically for tree ensembles: one fast but conservative, and one exact but more computationally demanding. We then combine these two techniques into an abstraction-refinement approach that we implement in a tool called VoTE (Verifier of Tree Ensembles). Through case studies, we find that sets of inputs leading to the same system behavior can be captured precisely as hyperrectangles, which enables tractable enumeration of input-output mappings when the input dimension is low. Tree ensembles with a high-dimensional input domain, however, seem generally difficult to verify. In some cases, though, conservative approximations of input-output mappings can greatly improve performance. This is demonstrated in a digit recognition case study, where we assess the robustness of classifiers confronted with additive noise.
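The fast, conservative technique can be illustrated by bounding an ensemble's output over a hyperrectangle: every leaf reachable from the input box contributes, and per-tree minima and maxima are summed. Summing per tree ignores dependencies between trees, which is exactly what makes the bound conservative. The tree encoding below is a hypothetical stand-in for VoTE's internal representation.

```python
# Conservative output bounds for a (regression) tree ensemble over a
# per-feature box {feature: (lo, hi)}.

def reachable_leaves(tree, box):
    """Collect every leaf value reachable when inputs range over `box`."""
    if "leaf" in tree:                       # leaf node: {"leaf": value}
        return [tree["leaf"]]
    lo, hi = box[tree["feature"]]
    t = tree["threshold"]
    out = []
    if lo <= t:                              # some inputs take the left branch (x <= t)
        out += reachable_leaves(tree["left"], box)
    if hi > t:                               # some inputs take the right branch (x > t)
        out += reachable_leaves(tree["right"], box)
    return out

def ensemble_bounds(trees, box):
    """Over-approximate [min, max] of the summed ensemble output over the box."""
    lo = hi = 0.0
    for tree in trees:
        vals = reachable_leaves(tree, box)
        lo, hi = lo + min(vals), hi + max(vals)
    return lo, hi

tree = {"feature": 0, "threshold": 0.5,
        "left": {"leaf": -1.0}, "right": {"leaf": 2.0}}
print(ensemble_bounds([tree], {0: (0.0, 1.0)}))  # (-1.0, 2.0)
```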
Methods and Tools for Efficient Model-Based Development of Cyber-Physical Systems with Emphasis on Model and Tool Integration
Author: Alachew Mengist
Publisher: Linköping University Electronic Press
ISBN: 9176850366
Category :
Languages : en
Pages : 116
Book Description
Model-based tools and methods play an important role in the design and analysis of cyber-physical systems (CPSs) before physical prototypes are built and tested. The development of increasingly complex CPSs requires the use of multiple tools for different phases of the development lifecycle, which in turn depends on the ability of the supporting tools to interoperate. However, no vendor currently provides comprehensive end-to-end systems engineering tool support across the entire product lifecycle, and no mature solution exists for integrating different system modeling and simulation languages, tools and algorithms in the CPS design process. Thus, modeling and simulation tools are still used separately in industry. The unique challenges in the integration of CPSs result from the increasing heterogeneity of components and their interactions, the increasing size of systems, and essential design requirements from various stakeholders. The corresponding system development involves several specialists in different domains, often using different modeling languages and tools. To address these challenges and facilitate the design of system architectures and the integration of different models, significant progress is needed towards model-based integration of multiple design tools, languages, and algorithms into a single integrated modeling and simulation environment. In this thesis we address this need by developing techniques for numerically stable co-simulation, advanced simulation model analysis, simulation-based optimization, and traceability, and by making them more accessible within the model-based development process for cyber-physical products, leading to more efficient simulation. In particular, the contributions of this thesis are as follows: 1) a model-based dynamic optimization approach that integrates optimization into the model development process; 2) a graphical co-modeling editor and co-simulation framework for modeling, connecting, and unified system simulation of several different modeling tools using the TLM technique; 3) a tool-supported method for multidisciplinary collaborative modeling and traceability support throughout the CPS development process; 4) an advanced simulation modeling analysis tool for more efficient simulation.
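As a rough illustration of the co-simulation idea, the sketch below shows a fixed-step master loop that advances two separately modeled subsystems while exchanging interface values. Real TLM-based coupling additionally inserts a physically motivated transmission-line delay between the models for numerical stability; this loop shows only the orchestration pattern, and both model functions are invented.

```python
# Minimal fixed-step co-simulation master: step each subsystem in turn,
# feeding each one the other's most recent interface value.

def step_source(t, u):      # hypothetical subsystem 1: produces an effort
    return 1.0 - 0.5 * u

def step_load(t, y):        # hypothetical subsystem 2: responds with a flow
    return 0.8 * y

def cosimulate(t_end, dt):
    t, u, y = 0.0, 0.0, 0.0
    while t < t_end:
        y = step_source(t, u)   # advance model 1 with model 2's last output
        u = step_load(t, y)     # advance model 2 with model 1's new output
        t += dt
    return y, u

print(cosimulate(t_end=1.0, dt=0.1))
```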
Latency-aware Resource Management at the Edge
Author: Klervie Toczé
Publisher: Linköping University Electronic Press
ISBN: 9179299040
Category :
Languages : en
Pages : 148
Book Description
The increasing diversity of connected devices leads to new application domains being envisioned. Some of these need ultra-low latency or have privacy requirements that cannot be satisfied by the current cloud. By bringing resources closer to the end user, the recent edge computing paradigm aims to enable such applications. One critical aspect of ensuring the successful deployment of the edge computing paradigm is efficient resource management. Indeed, obtaining the needed resources is crucial for the applications using the edge, but the resource picture of this paradigm is complex. First, as opposed to the nearly infinite resources provided by the cloud, edge devices have finite resources. Moreover, different resource types are required depending on the application, and the devices supplying those resources are very heterogeneous. This thesis studies several challenges towards enabling efficient resource management for edge computing. It begins with a review of state-of-the-art research on resource management in the edge computing context. A taxonomy is proposed to provide an overview of current research and to identify areas in need of further work. One of the identified challenges is how to organize the resource supply when a mix of mobile and stationary devices is used to provide the edge resources. The ORCH framework is proposed as a means to orchestrate this edge device mix. An evaluation performed in a simulator shows that this combination of devices enables higher quality of service for latency-critical tasks. Another area is understanding the resource demand side. The thesis presents a study of the workload of a killer application for edge computing: mixed reality. The MR-Leo prototype is designed and used as a vehicle to understand the end-to-end latency, the throughput, and the characteristics of the workload for this type of application. A method for modeling the workload of an application is devised and applied to MR-Leo to obtain a synthetic workload exhibiting the same characteristics, which can be used in further studies.
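A minimal sketch of latency-aware placement in this spirit: assign each task to the feasible device with the lowest estimated completion latency, falling back to the cloud when edge capacity runs out. The resource model, field names and numbers below are illustrative assumptions, not the ORCH framework's API.

```python
# Greedy latency-aware task placement over heterogeneous resources.

def estimate_latency(res, task):
    # network round trip plus processing time at the resource's speed
    return res["rtt_ms"] + task["work_units"] / res["speed"]

def place(task, resources):
    """Pick the feasible resource minimizing estimated latency."""
    feasible = [r for r in resources if r["free_units"] >= task["work_units"]]
    if not feasible:
        return None                      # no capacity anywhere
    best = min(feasible, key=lambda r: estimate_latency(r, task))
    best["free_units"] -= task["work_units"]
    return best["name"]

resources = [
    {"name": "edge-stationary", "rtt_ms": 5, "speed": 10, "free_units": 50},
    {"name": "edge-mobile", "rtt_ms": 8, "speed": 6, "free_units": 20},
    {"name": "cloud", "rtt_ms": 60, "speed": 50, "free_units": 10**6},
]
print(place({"work_units": 30}, resources))  # edge-stationary wins on latency
```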
Designing a Modern Skeleton Programming Framework for Parallel and Heterogeneous Systems
Author: August Ernstsson
Publisher: Linköping University Electronic Press
ISBN: 9179297722
Category : Electronic books
Languages : en
Pages : 176
Book Description
Today's society is increasingly software-driven and dependent on powerful computer technology. Therefore, it is important that advancements in low-level processor hardware are made available for exploitation by a growing number of programmers of differing skill levels. However, as we approach the end of Moore's law, hardware designers are finding new and increasingly complex ways to increase the accessible processor performance. It is becoming increasingly difficult to effectively target these processing resources without expert knowledge in parallelization, heterogeneous computation, communication, synchronization, and so on. To ensure that the software side can keep up, advanced programming environments and frameworks are needed to bridge the widening gap between hardware and software. One such example is the pattern-centric skeleton programming model, and in particular the SkePU project. The work presented in this thesis first redesigns the SkePU framework based on modern C++ variadic template metaprogramming and state-of-the-art compiler technology. It then explores new ways to improve performance: by providing new patterns, improving the data access locality of existing ones, and using both static and dynamic knowledge about program flow. The work combines novel ideas with practical evaluation of the approach on several applications. The advancements also include the first skeleton API that allows variadic skeletons, new data containers, and an approach to making skeleton programming more customizable without compromising universal portability.
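The skeleton idea in miniature: a map skeleton lifts an element-wise user function to whole containers and forwards extra ("variadic") arguments to it. SkePU realizes this with C++ variadic templates and backend dispatch to CPU, GPU or cluster; the sketch below shows only the programming model, with invented names.

```python
# A 'map' skeleton as a higher-order function with variadic extra arguments.

def map_skeleton(user_fn):
    """Lift an element-wise function to operate on whole containers."""
    def run(container, *extra):
        # apply user_fn to each element, forwarding any extra arguments
        return [user_fn(x, *extra) for x in container]
    return run

# Usage: a saxpy-like operation where a and y arrive as extra arguments.
saxpy = map_skeleton(lambda x, a, y: a * x + y)
print(saxpy([1.0, 2.0, 3.0], 2.0, 0.5))   # [2.5, 4.5, 6.5]
```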
Towards Semantically Enabled Complex Event Processing
Author: Robin Keskisärkkä
Publisher: Linköping University Electronic Press
ISBN: 9176854795
Category :
Languages : en
Pages : 169
Book Description
The Semantic Web provides a framework for semantically annotating data on the web, and the Resource Description Framework (RDF) supports the integration of structured data represented in heterogeneous formats. Traditionally, the Semantic Web has focused primarily on more or less static data, but information on the web today is becoming increasingly dynamic. RDF Stream Processing (RSP) systems address this issue by adding support for streaming data and continuous query processing. To some extent, RSP systems can be used to perform complex event processing (CEP), where meaningful high-level events are generated based on low-level events from multiple sources; however, there are several challenges in using RSP in this context. Event models designed to represent static event information lack several features required for CEP and are typically not well suited for stream reasoning. The dynamic nature of streaming data also greatly complicates the development and validation of RSP queries; reusing queries prepared ahead of time is therefore important to support real-time decision-making. Additionally, existing RSP implementations are limited in both scalability and expressiveness, and some features required in CEP are not supported by any current system. The goal of this thesis work has been to address some of these challenges, and the main contributions of the thesis are: (1) an event model ontology targeted at supporting CEP; (2) a model for representing parameterized RSP queries as reusable templates; and (3) an architecture that allows RSP systems to be integrated for use in CEP. The proposed event model tackles issues specifically related to event modeling in CEP that have not been sufficiently covered by other event models, includes support for event encapsulation and event payloads, and can easily be extended to fit specific use cases. The model for representing RSP query templates was designed as an extension of SPIN, a vocabulary that supports modeling SPARQL queries as RDF. The extended model supports the current version of the RSP Query Language (RSP-QL) developed by the RDF Stream Processing Community Group, along with some of the most popular RSP query languages. Finally, the proposed architecture views RSP queries as individual event processing agents in a more general CEP framework. Additional event processing components can be integrated to provide support for operations not supported in RSP, or to provide more efficient processing for specific tasks. We demonstrate the architecture in implementations for scenarios related to traffic-incident monitoring, criminal-activity monitoring, and electronic healthcare monitoring.
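Query templating of this kind can be sketched with ordinary string substitution: a query with named placeholders is instantiated at run time instead of being written by hand for each deployment. The query text below is RSP-QL-flavoured pseudocode and the mechanism is a plain stand-in, not the thesis's SPIN-based model.

```python
# Instantiate a parameterized continuous query from a reusable template.
from string import Template

INCIDENT_QUERY = Template("""
SELECT ?vehicle
FROM NAMED WINDOW <w> ON <$stream> [RANGE $range STEP $step]
WHERE { WINDOW <w> { ?vehicle <speed> ?s . FILTER(?s > $limit) } }
""")

def instantiate(template, **params):
    """Fill in named placeholders; fails fast if one is left unbound."""
    return template.substitute(**params)   # raises KeyError when missing

print(instantiate(INCIDENT_QUERY,
                  stream="http://example.org/traffic",
                  range="PT10S", step="PT1S", limit=30))
```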
Sustainable Smart Cities
Author: Pradeep Kumar Singh
Publisher: Springer Nature
ISBN: 3031088158
Category : Technology & Engineering
Languages : en
Pages : 342
Book Description
This book brings together a recent collection of smart-city technologies. Smart-city challenges and key requirements are discussed in terms of technological solutions: IoT, cloud computing, blockchain and artificial intelligence. First, the key technologies contributing to smart-city research are identified; the most prominent ones are then covered in the context of their theoretical and practical applications. Smart-city technology is an active research area, with new technological solutions emerging every day to make smart cities more sustainable. The book explores the integration of the main key technologies for smart cities: IoT and cloud computing, data science, AI, and blockchain together with Industry 4.0. Integrated solutions using AI, data science and IoT should also attract the attention of end users. The book is aimed primarily at undergraduate and master's students: IoT, cloud computing, artificial intelligence and blockchain are elective courses at the bachelor level in the engineering domain, and their application areas in the context of smart cities are covered here, making the book a good reference for master's dissertations. Ph.D. students and scholars working on these key technologies will find it a constant source of reference for their ongoing research. Smart-city planners, architects and municipal experts may also find this book useful.
Official Gazette of the United States Patent and Trademark Office
Author: United States. Patent and Trademark Office
Publisher:
ISBN:
Category : Patents
Languages : en
Pages : 1274
Book Description