Extensions for Distributed Moving Base Driving Simulators
Author: Anders Andersson
Publisher: Linköping University Electronic Press
ISBN: 9176855244
Category :
Languages : en
Pages : 39
Book Description
Modern vehicles are complex systems. Different design stages for such a complex system include evaluation using models and submodels, hardware-in-the-loop systems and complete vehicles. Once a vehicle is delivered to the market, evaluation continues by the public. One kind of tool that can be used during many stages of a vehicle's lifecycle is the driving simulator. The use of driving simulators with a human driver is commonly focused on driver behavior. A high-fidelity moving base driving simulator can provide realistic and repeatable driving situations using distinctive features such as physical modelling of the driven vehicle, a moving base, a physical cabin interface, and an audio and visual representation of the driving environment. A desired but difficult goal to achieve with a moving base driving simulator is behavioral validity: a driver in a moving base driving simulator should exhibit the same driving behavior as he or she would during the same driving task in a real vehicle. This thesis focuses on high-fidelity moving base driving simulators. The main target is to improve behavioral validity, or to maintain it while adding complexity to the simulator. One main assumption in this thesis is that systems closer to the final product provide better accuracy and are perceived as better if properly integrated. Thus, the approach in this thesis is to ease the incorporation of such systems using combinations of two methods: hardware-in-the-loop simulation and distributed simulation. Hardware-in-the-loop is a method where hardware is interfaced into a software-controlled simulation environment. Distributed simulation is a method where parts of a simulation at physically different locations are connected together; for some simulator laboratories it is the only feasible option, since some hardware cannot easily be moved. Results presented in this thesis show that a complete vehicle or a hardware-in-the-loop test laboratory can successfully be connected to a moving base driving simulator. Further, it is demonstrated that using a framework for distributed simulation eases communication and integration thanks to standardized interfaces. One identified potential problem is the complexity of interface wrappers when integrating hardware-in-the-loop in a distributed simulation framework; from this aspect, it is important to consider the model design and the intersections between software and hardware models. Another important issue discussed is the increased overhead delay introduced when using a framework for distributed simulation.
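The thesis itself concerns a specific simulator infrastructure, but the coupling pattern it builds on can be illustrated generically. The sketch below is illustrative only; the node classes, gains, and the one-step-delayed link are invented for the example, not taken from the thesis. Two simulation nodes exchange interface variables at a fixed communication step, and each node only sees the other's values from the previous step, which is exactly the kind of overhead delay the description warns about.

```cpp
// Minimal sketch (not the thesis framework): two simulation nodes coupled at a
// fixed communication step. Values sent by one node become visible to the other
// one step later, modelling the transport delay a distributed coupling adds.
#include <cstdio>

struct Signal { double value = 0.0; };  // a one-step-delayed "network" link

// A trivial "vehicle" node: integrates speed from a commanded acceleration.
struct VehicleNode {
    double speed = 0.0;
    double step(double accelCmd, double dt) { speed += accelCmd * dt; return speed; }
};

// A trivial "controller" node (stand-in for remote HIL hardware): tracks a target speed.
struct ControllerNode {
    double target;
    double step(double measuredSpeed) { return 0.5 * (target - measuredSpeed); }
};

int main() {
    VehicleNode vehicle;
    ControllerNode controller{20.0};    // target 20 m/s
    const double dt = 0.01;             // 10 ms communication step

    Signal speedLink, accelLink;
    for (int k = 0; k < 500; ++k) {
        // Each node steps on the value received last step (the coupling delay).
        double accel = controller.step(speedLink.value);
        double speed = vehicle.step(accelLink.value, dt);
        accelLink.value = accel;        // "send" outputs for the next step
        speedLink.value = speed;
        if (k % 100 == 0) std::printf("t=%.2fs speed=%.2f m/s\n", k * dt, speed);
    }
}
```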
Exploring C2 Capability and Effectiveness in Challenging Situations
Author: Magdalena Granåsen
Publisher: Linköping University Electronic Press
ISBN: 917685082X
Category :
Languages : en
Pages : 66
Book Description
Modern societies are affected by various threats and hazards, including natural disasters, cyber-attacks, extreme weather events and inter-state conflicts. Managing these challenging situations requires immediate action, suspension of ordinary procedures, decision making under uncertainty and coordinated action. In other words, challenging situations put high demands on the command and control (C2) capability. To strengthen C2 capability, it is vital to identify the prerequisites for effective coordination and direction within the domain of interest. This thesis explores C2 capability and effectiveness in three domains: interorganizational crisis management, military command and control, and cyber defence operations. The thesis aims to answer three research questions: (1) What constitutes C2 capability? (2) What constitutes C2 effectiveness? and (3) How can C2 effectiveness be assessed? The work was carried out as two case studies and one systematic literature review. The main contributions of the thesis are the identification of perspectives of C2 capability in challenging situations and an overview of approaches to C2 effectiveness assessment. Based on the results of the three studies, six recurring perspectives of capability in the domains studied were identified: interaction (collaboration), direction and coordination, relationships, situation awareness, resilience and preparedness. The domains differ in which perspectives are most emphasized in order to obtain C2 capability. C2 effectiveness is defined as the extent to which a C2 system is successful in achieving its intended result. The thesis discusses the interconnectedness of performance and effectiveness measures, and concludes that there is no unified view of the difference between measures of effectiveness and measures of performance. Different approaches to effectiveness assessment were identified, where assessment may be conducted based on one specific issue, in relation to a defined goal for a C2 function, or using a more exploratory approach.
Methods and Tools for Efficient Model-Based Development of Cyber-Physical Systems with Emphasis on Model and Tool Integration
Author: Alachew Mengist
Publisher: Linköping University Electronic Press
ISBN: 9176850366
Category :
Languages : en
Pages : 116
Book Description
Model-based tools and methods play important roles in the design and analysis of cyber-physical systems (CPSs) before building and testing physical prototypes. The development of increasingly complex CPSs requires the use of multiple tools for different phases of the development lifecycle, which in turn depends on the ability of the supporting tools to interoperate. However, no vendor currently provides comprehensive end-to-end systems engineering tool support across the entire product lifecycle, and no mature solution currently exists for integrating different system modeling and simulation languages, tools and algorithms in the CPS design process. Thus, modeling and simulation tools are still used separately in industry. The unique challenges in the integration of CPSs are a result of the increasing heterogeneity of components and their interactions, the increasing size of systems, and essential design requirements from various stakeholders. The corresponding system development involves several specialists in different domains, often using different modeling languages and tools. In order to address the challenges of CPSs and facilitate the design of system architectures and the integration of different models, significant progress needs to be made towards model-based integration of multiple design tools, languages, and algorithms into a single integrated modeling and simulation environment. This thesis addresses this need by developing techniques for numerically stable co-simulation, advanced simulation model analysis, simulation-based optimization, and traceability, and by making them more accessible to the model-based cyber-physical product development process, leading to more efficient simulation. In particular, the contributions of this thesis are as follows: 1) development of a model-based dynamic optimization approach by integrating optimization into the model development process; 2) development of a graphical co-modeling editor and co-simulation framework for modeling, connecting, and unified system simulation of several different modeling tools using the transmission line modeling (TLM) technique; 3) development of a tool-supported method for multidisciplinary collaborative modeling and traceability support throughout the development process for CPSs; 4) development of an advanced simulation modeling analysis tool for more efficient simulation.
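Contribution 2 names the TLM technique, which decouples coupled solvers through a physically motivated communication delay: each side integrates independently during the delay interval and exchanges wave variables. The following is a minimal sketch under simplified assumptions (two point masses, explicit Euler, invented parameter values; not the thesis framework), showing why the delay lets the two sides run as separate solvers.

```cpp
// Sketch of TLM coupling: two masses joined by a transmission line element
// with characteristic impedance Zc and delay T. Each side only needs the
// wave variable sent T seconds ago, so the solvers are decoupled during T.
#include <cstdio>
#include <deque>

struct Mass {
    double m, v;
    void step(double force, double dt) { v += force / m * dt; }
};

int main() {
    const double dt = 1e-4;        // solver step
    const double Zc = 100.0;       // characteristic impedance of the TLM element
    const int delaySteps = 10;     // coupling delay T = delaySteps * dt
    Mass a{1.0, 1.0}, b{2.0, 0.0}; // mass a starts moving, b at rest

    // Wave variables in flight in each direction, delayed by T.
    std::deque<double> toB(delaySteps, 0.0), toA(delaySteps, 0.0);

    for (int k = 0; k < 20000; ++k) {
        double cA = toA.front(); toA.pop_front();  // wave arriving at side a
        double cB = toB.front(); toB.pop_front();  // wave arriving at side b
        a.step(-(cA + Zc * a.v), dt);              // boundary force on a
        b.step(cB - Zc * b.v, dt);                 // boundary force on b
        toB.push_back(cA + 2.0 * Zc * a.v);        // outgoing waves
        toA.push_back(cB - 2.0 * Zc * b.v);
        if (k % 5000 == 0) std::printf("k=%5d  v_a=%.3f  v_b=%.3f\n", k, a.v, b.v);
    }   // both velocities approach the common value 1/3 (momentum is exchanged
}       // through the line), illustrating the numerically stable coupling
```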
Spatio-Temporal Stream Reasoning with Adaptive State Stream Generation
Author: Daniel de Leng
Publisher: Linköping University Electronic Press
ISBN: 9176854760
Category :
Languages : en
Pages : 153
Book Description
A lot of today's data is generated incrementally over time by a large variety of producers. This data ranges from quantitative sensor observations produced by robot systems to complex unstructured human-generated texts on social media. With data being so abundant, making sense of these streams of data through reasoning is challenging. Reasoning over streams is particularly relevant for autonomous robotic systems that operate in a physical environment. They commonly observe this environment through incremental observations, gradually refining information about their surroundings. This makes robust management of streaming data and its refinement an important problem. Many contemporary approaches to stream reasoning focus on the issue of querying data streams in order to generate higher-level information by relying on well-known database approaches. Other approaches apply logic-based reasoning techniques, which rarely consider the provenance of their symbolic interpretations. In this thesis, we integrate techniques for logic-based spatio-temporal stream reasoning with the adaptive generation of the state streams over which the reasoning is performed. This combination deals with both the challenge of reasoning over streaming data and the problem of robustly managing streaming data and its refinement. The main contributions of this thesis are (1) a logic-based spatio-temporal reasoning technique that combines temporal reasoning with qualitative spatial reasoning; (2) an adaptive reconfiguration procedure for generating and maintaining the data streams required to perform spatio-temporal stream reasoning; and (3) the integration of these two techniques into a stream reasoning framework. The proposed spatio-temporal stream reasoning technique is able to reason with intertemporal spatial relations by leveraging landmarks. Adaptive state stream generation allows the framework to adapt in situations in which the set of available streaming resources changes. Management of streaming resources is formalised in the DyKnow model, which introduces a configuration life-cycle to adaptively generate state streams. The DyKnow-ROS stream reasoning framework is a concrete realisation of this model that extends the Robot Operating System (ROS). DyKnow-ROS has been deployed on the SoftBank Robotics NAO platform to demonstrate the system's capabilities in the context of a case study on run-time adaptive reconfiguration. The results show that the proposed system – by combining reasoning over and reasoning about streams – can robustly perform spatio-temporal stream reasoning, even when the availability of streaming resources changes.
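The description's notion of adaptive state stream generation can be pictured as resolving a semantic stream specification to whichever producer is currently available, and re-resolving when availability changes. The sketch below is a loose illustration of that idea only, not the DyKnow or DyKnow-ROS API; all class names, tags, and producer names are invented.

```cpp
// Illustrative sketch: a consumer requests a stream by semantic tag rather
// than by concrete producer; if the active producer is lost at run-time,
// the manager reconfigures to another producer carrying the same tag.
#include <cstdio>
#include <map>
#include <string>

struct Producer { std::string name; bool available; };

class StreamManager {
    std::multimap<std::string, Producer> producers_;  // tag -> candidates
public:
    void add(const std::string& tag, Producer p) { producers_.emplace(tag, p); }
    void setAvailable(const std::string& name, bool ok) {
        for (auto& [tag, p] : producers_)
            if (p.name == name) p.available = ok;
    }
    // Resolve a tag to the first available producer, or nullptr.
    const Producer* resolve(const std::string& tag) const {
        auto [lo, hi] = producers_.equal_range(tag);
        for (auto it = lo; it != hi; ++it)
            if (it->second.available) return &it->second;
        return nullptr;
    }
};

int main() {
    StreamManager mgr;
    mgr.add("position", {"gps", true});
    mgr.add("position", {"visual_odometry", true});

    auto report = [&] {
        const Producer* p = mgr.resolve("position");
        std::printf("position stream <- %s\n", p ? p->name.c_str() : "(none)");
    };
    report();                         // gps
    mgr.setAvailable("gps", false);   // gps drops out at run-time
    report();                         // reconfigured to visual_odometry
}
```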
Designing a Modern Skeleton Programming Framework for Parallel and Heterogeneous Systems
Author: August Ernstsson
Publisher: Linköping University Electronic Press
ISBN: 9179297722
Category : Electronic books
Languages : en
Pages : 176
Book Description
Today's society is increasingly software-driven and dependent on powerful computer technology. Therefore, it is important that advancements in low-level processor hardware are made available for exploitation by a growing number of programmers of differing skill levels. However, as we approach the end of Moore's law, hardware designers are finding new and increasingly complex ways to increase accessible processor performance. It is getting more and more difficult to effectively target these processing resources without expert knowledge in parallelization, heterogeneous computation, communication, synchronization, and so on. To ensure that the software side can keep up, advanced programming environments and frameworks are needed to bridge the widening gap between hardware and software. One such example is the pattern-centric skeleton programming model, and in particular the SkePU project. The work presented in this thesis first redesigns the SkePU framework based on modern C++ variadic template metaprogramming and state-of-the-art compiler technology. It then explores new ways to improve performance: by providing new patterns, improving the data access locality of existing ones, and using both static and dynamic knowledge about program flow. The work combines novel ideas with practical evaluation of the approach on several applications. The advancements also include the first skeleton API that allows variadic skeletons, new data containers, and finally an approach to make skeleton programming more customizable without compromising universal portability.
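As a rough illustration of what a variadic map skeleton means in practice, the sketch below defines a map pattern that accepts a user function of any arity via C++ variadic generic lambdas. It is not the actual SkePU API (SkePU additionally performs backend selection and compiler-assisted code generation); the names and structure here are invented for the example.

```cpp
// A minimal variadic map skeleton: the pattern is written once, and any
// user function with any number of element inputs can be plugged in.
#include <algorithm>
#include <cstddef>
#include <cstdio>
#include <vector>

template <typename F>
auto Map(F f) {
    // Accepts any number of input vectors; applies f element-wise.
    return [f](const auto&... inputs) {
        using R = decltype(f(inputs[0]...));
        const std::size_t n = std::min({inputs.size()...});
        std::vector<R> out(n);
        for (std::size_t i = 0; i < n; ++i)
            out[i] = f(inputs[i]...);  // a parallel backend would split this loop
        return out;
    };
}

int main() {
    std::vector<float> a{1, 2, 3, 4}, b{10, 20, 30, 40};
    auto saxpy = Map([](float x, float y) { return 2.0f * x + y; });
    for (float v : saxpy(a, b)) std::printf("%g ", v);  // 12 24 36 48
    std::printf("\n");
}
```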
Latency-aware Resource Management at the Edge
Author: Klervie Toczé
Publisher: Linköping University Electronic Press
ISBN: 9179299040
Category :
Languages : en
Pages : 148
Book Description
The increasing diversity of connected devices leads to new application domains being envisioned. Some of these need ultra-low latency or have privacy requirements that cannot be satisfied by the current cloud. By bringing resources closer to the end user, the recent edge computing paradigm aims to enable such applications. One critical aspect of ensuring the successful deployment of the edge computing paradigm is efficient resource management. Indeed, obtaining the needed resources is crucial for the applications using the edge, but the resource picture of this paradigm is complex. First, as opposed to the nearly infinite resources provided by the cloud, edge devices have finite resources. Moreover, different resource types are required depending on the applications, and the devices supplying those resources are very heterogeneous. This thesis studies several challenges towards enabling efficient resource management for edge computing. The thesis begins with a review of state-of-the-art research on resource management in the edge computing context. A taxonomy is proposed to provide an overview of the current research and to identify areas in need of further work. One of the identified challenges is studying the organization of the resource supply in the case where a mix of mobile and stationary devices is used to provide the edge resources. The ORCH framework is proposed as a means to orchestrate this edge device mix. The evaluation performed in a simulator shows that this combination of devices enables higher quality of service for latency-critical tasks. Another area is understanding the resource demand side. The thesis presents a study of the workload of a killer application for edge computing: mixed reality. The MR-Leo prototype is designed and used as a vehicle to understand the end-to-end latency, the throughput, and the characteristics of the workload for this type of application. A method for modeling the workload of an application is devised and applied to MR-Leo in order to obtain a synthetic workload exhibiting the same characteristics, which can be used in further studies.
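The kind of decision such an orchestrator automates can be pictured with a toy assignment rule: among devices that meet a task's latency budget and capacity demand, prefer stationary suppliers over mobile ones, which may leave the area. The sketch below is a hypothetical illustration, not the ORCH algorithm; the Device fields, the preference rule, and the numbers are all invented.

```cpp
// Toy latency-aware assignment: pick a device that satisfies the task's
// latency budget and capacity demand, preferring stationary devices.
#include <cstdio>
#include <optional>
#include <string>
#include <vector>

struct Device {
    std::string name;
    bool stationary;      // mobile devices are assumed less reliable suppliers
    double latencyMs;     // estimated round-trip latency to this device
    double freeCapacity;  // remaining compute capacity (arbitrary units)
};

std::optional<Device> assign(const std::vector<Device>& devices,
                             double latencyBudgetMs, double demand) {
    std::optional<Device> best;
    for (const auto& d : devices) {
        if (d.latencyMs > latencyBudgetMs || d.freeCapacity < demand) continue;
        // Prefer stationary devices; break ties on lower latency.
        if (!best || (d.stationary && !best->stationary) ||
            (d.stationary == best->stationary && d.latencyMs < best->latencyMs))
            best = d;
    }
    return best;
}

int main() {
    std::vector<Device> edge{{"bus-42", false, 8.0, 5.0},
                             {"basestation-1", true, 12.0, 5.0},
                             {"cloud", true, 80.0, 100.0}};
    if (auto d = assign(edge, 20.0, 2.0))  // 20 ms budget, small task
        std::printf("assigned to %s\n", d->name.c_str());  // basestation-1
}
```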
Towards Semantically Enabled Complex Event Processing
Author: Robin Keskisärkkä
Publisher: Linköping University Electronic Press
ISBN: 9176854795
Category :
Languages : en
Pages : 169
Book Description
The Semantic Web provides a framework for semantically annotating data on the web, and the Resource Description Framework (RDF) supports the integration of structured data represented in heterogeneous formats. Traditionally, the Semantic Web has focused primarily on more or less static data, but information on the web today is becoming increasingly dynamic. RDF Stream Processing (RSP) systems address this issue by adding support for streaming data and continuous query processing. To some extent, RSP systems can be used to perform complex event processing (CEP), where meaningful high-level events are generated based on low-level events from multiple sources; however, there are several challenges with respect to using RSP in this context. Event models designed to represent static event information lack several features required for CEP, and are typically not well suited for stream reasoning. The dynamic nature of streaming data also greatly complicates the development and validation of RSP queries. Therefore, reusing queries that have been prepared ahead of time is important to be able to support real-time decision-making. Additionally, there are limitations in existing RSP implementations in terms of both scalability and expressiveness, where some features required in CEP are not supported by any of the current systems. The goal of this thesis work has been to address some of these challenges and the main contributions of the thesis are: (1) an event model ontology targeted at supporting CEP; (2) a model for representing parameterized RSP queries as reusable templates; and (3) an architecture that allows RSP systems to be integrated for use in CEP. The proposed event model tackles issues specifically related to event modeling in CEP that have not been sufficiently covered by other event models, includes support for event encapsulation and event payloads, and can easily be extended to fit specific use-cases. The model for representing RSP query templates was designed as an extension to SPIN, a vocabulary that supports modeling of SPARQL queries as RDF. The extended model supports the current version of the RSP Query Language (RSP-QL) developed by the RDF Stream Processing Community Group, along with some of the most popular RSP query languages. Finally, the proposed architecture views RSP queries as individual event processing agents in a more general CEP framework. Additional event processing components can be integrated to provide support for operations that are not supported in RSP, or to provide more efficient processing for specific tasks. We demonstrate the architecture in implementations for scenarios related to traffic-incident monitoring, criminal-activity monitoring, and electronic healthcare monitoring.
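The thesis models query templates in RDF as an extension to SPIN; as a loose stand-in for that machinery, the sketch below instantiates a parameterized RSP-QL-style query from named placeholder bindings. The placeholder syntax, function names, and example query are invented for illustration and are not the thesis's representation.

```cpp
// Illustrative query-template instantiation: a parameterized query is stored
// once and instantiated per deployment by binding its placeholders.
#include <cstdio>
#include <map>
#include <string>

// Replace each ?{name} placeholder with its bound value.
std::string instantiate(std::string tmpl,
                        const std::map<std::string, std::string>& bindings) {
    for (const auto& [name, value] : bindings) {
        const std::string key = "?{" + name + "}";
        for (std::size_t pos; (pos = tmpl.find(key)) != std::string::npos; )
            tmpl.replace(pos, key.size(), value);
    }
    return tmpl;
}

int main() {
    // A reusable RSP-QL-style template; the stream and window range are parameters.
    std::string tmpl =
        "SELECT ?event FROM NAMED WINDOW :w ON ?{stream} [RANGE ?{range}] "
        "WHERE { WINDOW :w { ?event a :HighPriorityEvent } }";
    std::string q = instantiate(tmpl, {{"stream", "<http://example.org/traffic>"},
                                       {"range", "PT10S"}});
    std::printf("%s\n", q.c_str());
}
```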
Formal Verification of Tree Ensembles in Safety-Critical Applications
Author: John Törnblom
Publisher: Linköping University Electronic Press
ISBN: 917929748X
Category : Electronic books
Languages : en
Pages : 41
Book Description
In the presence of data and computational resources, machine learning can be used to synthesize software automatically. For example, machines are now capable of learning complicated pattern recognition tasks and sophisticated decision policies, two key capabilities in autonomous cyber-physical systems. Unfortunately, humans find software synthesized by machine learning algorithms difficult to interpret, which currently limits their use in safety-critical applications such as medical diagnosis and avionic systems. In particular, successful deployments of safety-critical systems mandate the execution of rigorous verification activities, which often rely on human insights, e.g., to identify scenarios in which the system shall be tested. A natural pathway towards a viable verification strategy for such systems is to leverage formal verification techniques, which, in the presence of a formal specification, can provide definitive guarantees with little human intervention. However, formal verification suffers from scalability issues with respect to system complexity. In this thesis, we investigate the limits of current formal verification techniques when applied to a class of machine learning models called tree ensembles, and identify model-specific characteristics that can be exploited to improve the performance of verification algorithms when applied specifically to tree ensembles. To this end, we develop two formal verification techniques specifically for tree ensembles: one fast and conservative technique, and one exact but more computationally demanding. We then combine these two techniques into an abstraction-refinement approach, which we implement in a tool called VoTE (Verifier of Tree Ensembles). Using a couple of case studies, we recognize that sets of inputs that lead to the same system behavior can be captured precisely as hyperrectangles, which enables tractable enumeration of input-output mappings when the input dimension is low. Tree ensembles with a high-dimensional input domain, however, seem generally difficult to verify. In some cases, though, conservative approximations of input-output mappings can greatly improve performance. This is demonstrated in a digit recognition case study, where we assess the robustness of classifiers when confronted with additive noise.
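The hyperrectangle observation can be made concrete for a single decision tree: every root-to-leaf path constrains each feature to an interval, so each leaf's region is an axis-aligned box on which the output is constant. The sketch below (simplified to one tree with invented structure; VoTE itself handles ensembles and abstraction-refinement) enumerates those (box, value) pairs.

```cpp
// Enumerate the axis-aligned input boxes on which a decision tree is constant.
#include <algorithm>
#include <cstdio>
#include <limits>
#include <memory>
#include <vector>

struct Node {                   // decision tree over inputs x[dim]
    int feature = -1;           // -1 marks a leaf
    double threshold = 0.0, value = 0.0;
    std::unique_ptr<Node> left, right;  // left branch: x[feature] <= threshold
};

struct Box { std::vector<double> lo, hi; double value; };

void enumerate(const Node* n, std::vector<double> lo, std::vector<double> hi,
               std::vector<Box>& out) {
    if (n->feature < 0) { out.push_back({lo, hi, n->value}); return; }
    auto hiL = hi;              // left child: tighten the upper bound
    hiL[n->feature] = std::min(hiL[n->feature], n->threshold);
    enumerate(n->left.get(), lo, hiL, out);
    lo[n->feature] = std::max(lo[n->feature], n->threshold);
    enumerate(n->right.get(), lo, hi, out);
}

int main() {
    // Tree: x[0] <= 0.5 ? 1.0 : (x[1] <= 0.2 ? 2.0 : 3.0)
    auto leaf = [](double v) { auto n = std::make_unique<Node>(); n->value = v; return n; };
    auto root = std::make_unique<Node>();
    root->feature = 0; root->threshold = 0.5; root->left = leaf(1.0);
    root->right = std::make_unique<Node>();
    root->right->feature = 1; root->right->threshold = 0.2;
    root->right->left = leaf(2.0); root->right->right = leaf(3.0);

    const double inf = std::numeric_limits<double>::infinity();
    std::vector<Box> boxes;
    enumerate(root.get(), {-inf, -inf}, {inf, inf}, boxes);
    for (const auto& b : boxes)  // a verifier can now check each box's output
        std::printf("[%g,%g] x [%g,%g] -> %g\n",
                    b.lo[0], b.hi[0], b.lo[1], b.hi[1], b.value);
}
```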
Handbook of Driver Assistance Systems
Author: Hermann Winner
Publisher: Springer
ISBN: 9783319123516
Category : Technology & Engineering
Languages : en
Pages : 0
Book Description
This fundamental work explains in detail systems for active safety and driver assistance, considering both their structure and their function. These include the well-known standard systems such as the anti-lock braking system (ABS), electronic stability control (ESC) and adaptive cruise control (ACC), but also new systems for collision protection, lane changing, and convenient parking. The book aims at giving a complete picture focusing on the entire system. First, it describes the components which are necessary for assistance systems, such as sensors, actuators, mechatronic subsystems, and control elements. Then, it explains key features for the user-friendly design of human-machine interfaces between driver and assistance system. Finally, important characteristic features of driver assistance systems for particular vehicles are presented: systems for commercial vehicles and motorcycles.
Discrete Choice Methods with Simulation
Author: Kenneth Train
Publisher: Cambridge University Press
ISBN: 0521766559
Category : Business & Economics
Languages : en
Pages : 399
Book Description
This book describes the new generation of discrete choice methods, focusing on the many advances that are made possible by simulation. Researchers use these statistical methods to examine the choices that consumers, households, firms, and other agents make. Each of the major models is covered: logit, generalized extreme value, or GEV (including nested and cross-nested logits), probit, and mixed logit, plus a variety of specifications that build on these basics. Simulation-assisted estimation procedures are investigated and compared, including maximum simulated likelihood, the method of simulated moments, and the method of simulated scores. Procedures for drawing from densities are described, including variance reduction techniques such as antithetics and Halton draws. Recent advances in Bayesian procedures are explored, including the use of the Metropolis-Hastings algorithm and its variant Gibbs sampling. The second edition adds chapters on endogeneity and expectation-maximization (EM) algorithms. No other book incorporates all these fields, which have arisen in the past 25 years. The procedures are applicable in many fields, including energy, transportation, environmental studies, health, labor, and marketing.
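For reference, the core objects the book builds on can be stated compactly: the logit choice probability has a closed form, while the mixed logit probability is an integral over random coefficients that is approximated by averaging logit formulas over R simulation draws (this simulated probability underlies maximum simulated likelihood). These are the standard formulas from the discrete choice literature:

```latex
% Logit choice probability for decision maker n choosing alternative i,
% and the simulated mixed logit probability averaged over R draws of the
% random coefficients beta^(r):
\[
  P_{ni} = \frac{e^{V_{ni}}}{\sum_{j} e^{V_{nj}}},
  \qquad
  \check{P}_{ni} = \frac{1}{R} \sum_{r=1}^{R}
      \frac{e^{V_{ni}(\beta^{(r)})}}{\sum_{j} e^{V_{nj}(\beta^{(r)})}}.
\]
```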