Computational Complexity of some Optimization Problems in Planning
Author: Meysam Aghighi
Publisher: Linköping University Electronic Press
ISBN: 9176855198
Category :
Languages : en
Pages : 59
Book Description
Automated planning is known to be computationally hard in the general case: propositional planning is PSPACE-complete and first-order planning is undecidable. One method for analyzing the computational complexity of planning is to study restricted subsets of planning instances, with the aim of distinguishing instances of varying complexity, and this is the methodology we follow. Finding new tractable (i.e. polynomial-time solvable) problems has been a particularly important goal for researchers in the area, not only to differentiate between easy and hard planning instances, but also to use polynomial-time solvable instances to construct better heuristic functions and improve planners. We identify a new class of tractable cost-optimal planning instances by restricting the causal graph. We study the computational complexity of oversubscription planning (such as the net-benefit problem) under various restrictions and reveal strong connections with classical planning. Inspired by this, we present a method for compiling oversubscription planning problems into the ordinary plan existence problem. We further study the parameterized complexity of cost-optimal and net-benefit planning under the same restrictions and show that the choice of numeric domain for the action costs has a great impact on the parameterized complexity. Finally, we consider the parameterized complexity of certain problems related to partial-order planning. Some applications need plans that are less restricted than total-order plans, so partial-order plans are used instead. An important question is then how to obtain optimal partial-order plans, i.e. plans with the highest degree of freedom according to some notion of flexibility. We study several optimization problems for partial-order plans, such as finding a minimum deordering or reordering, and finding the minimum parallel execution length (illustrated in the sketch below).
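To make the last notion concrete: when unordered actions may execute in parallel, the minimum parallel execution length of a partial-order plan equals the longest chain of ordering constraints. A minimal sketch of that computation (our illustration, not code from the thesis):

```python
from collections import defaultdict

def parallel_execution_length(actions, constraints):
    # actions: iterable of action names.
    # constraints: set of (a, b) pairs meaning "a must precede b".
    succ = defaultdict(list)
    indeg = {a: 0 for a in actions}
    for a, b in constraints:
        succ[a].append(b)
        indeg[b] += 1
    # step[a] = earliest time step at which action a can execute.
    step = {a: 1 for a in actions}
    ready = [a for a in actions if indeg[a] == 0]
    while ready:
        a = ready.pop()
        for b in succ[a]:
            step[b] = max(step[b], step[a] + 1)
            indeg[b] -= 1
            if indeg[b] == 0:
                ready.append(b)
    return max(step.values())

# "c" is unordered with respect to "a" and "b", so it runs in parallel:
print(parallel_execution_length(["a", "b", "c"], {("a", "b")}))  # -> 2
```

Deordering and reordering then aim to remove or replace ordering constraints so that this number, or some other flexibility measure, improves.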
Computational Complexity
Author: Sanjeev Arora
Publisher: Cambridge University Press
ISBN: 0521424267
Category : Computers
Languages : en
Pages : 609
Book Description
New and classical results in computational complexity, including interactive proofs, PCP, derandomization, and quantum computation. Ideal for graduate students.
ECAI 2006
Author: Gerhard Brewka
Publisher: IOS Press
ISBN: 9781586036423
Category : Artificial intelligence
Languages : en
Pages : 896
Book Description
Red Plenty
Author: Francis Spufford
Publisher: Graywolf Press
ISBN: 1555970419
Category : History
Languages : en
Pages : 437
Book Description
"Spufford cunningly maps out a literary genre of his own . . . Freewheeling and fabulous." —The Times (London) Strange as it may seem, the gray, oppressive USSR was founded on a fairy tale. It was built on the twentieth-century magic called "the planned economy," which was going to gush forth an abundance of good things that the lands of capitalism could never match. And just for a little while, in the heady years of the late 1950s, the magic seemed to be working. Red Plenty is about that moment in history, and how it came, and how it went away; about the brief era when, under the rash leadership of Khrushchev, the Soviet Union looked forward to a future of rich communists and envious capitalists, when Moscow would out-glitter Manhattan and every Lada would be better engineered than a Porsche. It's about the scientists who did their genuinely brilliant best to make the dream come true, to give the tyranny its happy ending. Red Plenty is history, it's fiction, it's as ambitious as Sputnik, as uncompromising as an Aeroflot flight attendant, and as different from what you were expecting as a glass of Soviet champagne.
Completion of Ontologies and Ontology Networks
Author: Zlatan Dragisic
Publisher: Linköping University Electronic Press
ISBN: 9176855228
Category : Computers
Languages : en
Pages : 88
Book Description
The World Wide Web contains large amounts of data, and in most cases this data has no explicit structure. The lack of structure makes it difficult for automated agents to understand and use such data. A step towards a more structured World Wide Web is the Semantic Web, which aims at introducing semantics to data on the World Wide Web. Among the key technologies in this endeavour are ontologies, which provide a means for modeling a domain of interest and are used for search and integration of data. In recent years many ontologies have been developed. To be able to use multiple ontologies it is necessary to align them, i.e., find inter-ontology relationships. However, developing and aligning ontologies is not an easy task, and it is often the case that ontologies and their alignments are incorrect and incomplete. This is a problem for semantically-enabled applications: incorrect and incomplete ontologies and alignments directly influence the quality of their results, as wrong results can be returned and correct results can be missed. This thesis focuses on the problem of completing ontologies and ontology networks. The contributions of the thesis are threefold. First, we address the issue of completing the is-a structure and alignment in ontologies and ontology networks. We have formalized the problem of completing the is-a structure in ontologies as an abductive reasoning problem and developed algorithms as well as systems for dealing with the problem. With respect to the completion of alignments, we have studied system performance in the Ontology Alignment Evaluation Initiative, a yearly evaluation campaign for ontology alignment systems. We have also addressed the scalability of ontology matching, which is one of the current challenges, by developing an approach for reducing the search space when generating the alignment. Second, high-quality completion requires user involvement. As users' time and effort are a limited resource, we address the issue of limiting and facilitating user interaction in the completion process. We have conducted a broad study of state-of-the-art ontology alignment systems and identified different issues related to the process. We have also conducted experiments to assess the impact of user errors in the completion process. While the completion of ontologies and ontology networks can be done at any point in their life-cycle, some of the issues can be addressed already in the development phase. The third contribution of the thesis addresses this by introducing ontology completion and ontology alignment into an existing ontology development methodology.
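As a point of reference for what completing the is-a structure means: the trivial, purely deductive part of the problem is deriving subsumptions that are entailed by transitivity but never asserted. A toy sketch of that step (ours; the thesis tackles the much harder abductive variant, where genuinely missing is-a relations must be hypothesized and validated):

```python
def transitive_closure(is_a):
    # is_a: set of (sub, super) pairs, e.g. ("dog", "mammal").
    closure = set(is_a)
    changed = True
    while changed:
        changed = False
        for (a, b) in list(closure):
            for (c, d) in list(closure):
                if b == c and (a, d) not in closure:
                    closure.add((a, d))
                    changed = True
    return closure

asserted = {("dog", "mammal"), ("mammal", "animal")}
print(transitive_closure(asserted) - asserted)  # {('dog', 'animal')}
```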
Beyond Recognition
Author: Le Minh-Ha
Publisher: Linköping University Electronic Press
ISBN: 918075676X
Category :
Languages : en
Pages : 103
Book Description
This thesis addresses the need to balance the use of facial recognition systems with the need to protect personal privacy in machine learning and biometric identification. As advances in deep learning accelerate their evolution, facial recognition systems enhance security capabilities but also risk invading personal privacy. Our research identifies and addresses critical vulnerabilities inherent in facial recognition systems, and proposes innovative privacy-enhancing technologies that anonymize facial data while maintaining its utility for legitimate applications. Our investigation centers on the development of methodologies and frameworks that achieve k-anonymity in facial datasets; leverage identity disentanglement to facilitate anonymization; exploit the vulnerabilities of facial recognition systems to underscore their limitations; and implement practical defenses against unauthorized recognition systems. We introduce novel contributions such as AnonFACES, StyleID, IdDecoder, StyleAdv, and DiffPrivate, each designed to protect facial privacy through advanced adversarial machine learning techniques and generative models. These solutions not only demonstrate the feasibility of protecting facial privacy in an increasingly surveilled world, but also highlight the ongoing need for robust countermeasures against the ever-evolving capabilities of facial recognition technology. Continuous innovation in privacy-enhancing technologies is required to safeguard individuals from the pervasive reach of digital surveillance and protect their fundamental right to privacy. By providing open-source, publicly available tools and frameworks, this thesis contributes to the collective effort to ensure that advancements in facial recognition serve the public good without compromising individual rights. Our multi-disciplinary approach bridges the gap between biometric systems, adversarial machine learning, and generative modeling to pave the way for future research in the domain and support AI innovation where technological advancement and privacy are balanced.
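To illustrate the k-anonymity idea mentioned above in its simplest form (a toy sketch of ours on feature vectors; AnonFACES itself operates on face images and uses generative models): each record is replaced by an aggregate over a group of at least k records, so no output can be attributed to fewer than k individuals.

```python
import numpy as np

def k_anonymize(embeddings, k):
    # Replace each embedding with the mean of a group of >= k embeddings.
    n = len(embeddings)
    assert n >= k
    order = np.argsort(embeddings.sum(axis=1))  # naive grouping key
    anonymized = np.empty_like(embeddings)
    for start in range(0, n, k):
        group = order[start:start + k]
        if len(group) < k:             # fold a short tail into the
            group = order[start - k:]  # previous group instead
        anonymized[group] = embeddings[group].mean(axis=0)
    return anonymized

rng = np.random.default_rng(0)
faces = rng.normal(size=(10, 4))  # ten toy "face embeddings"
print(k_anonymize(faces, k=3).round(2))
```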
Gated Bayesian Networks
Author: Marcus Bendtsen
Publisher: Linköping University Electronic Press
ISBN: 9176855252
Category :
Languages : en
Pages : 245
Book Description
Bayesian networks have grown to become a dominant type of model within the domain of probabilistic graphical models. Not only do they empower users with a graphical means for describing the relationships among random variables, but they also allow for (potentially) fewer parameters to estimate, and enable more efficient inference. The random variables and the relationships among them determine the structure of the directed acyclic graph that represents the Bayesian network. It is the stasis over time of these two components that we question in this thesis. By introducing a new type of probabilistic graphical model, which we call gated Bayesian networks, we allow the variables that we include in our model, and the relationships among them, to change over time. We introduce algorithms that can learn gated Bayesian networks that use different variables at different times, as is required when the process being modelled goes through distinct phases. We evaluate the efficacy of these algorithms within the domain of algorithmic trading, showing how the learnt gated Bayesian networks can improve upon a passive approach to trading. We also introduce algorithms that detect changes in the relationships among the random variables, allowing us to create a model that consists of several Bayesian networks, thereby revealing changes and the structure by which these changes occur. The resulting models can be used to detect the currently most appropriate Bayesian network, and we show their use in real-world examples from the domains of both sports analytics and finance.
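The parameter saving mentioned above comes from factorizing the joint distribution along the DAG. A minimal sketch of ours: for a binary chain A -> B -> C, the network stores P(A), P(B|A) and P(C|B), i.e. 1 + 2 + 2 = 5 parameters rather than the 2^3 - 1 = 7 of a full joint table.

```python
# Conditional probability tables for the chain A -> B -> C.
P_A = {True: 0.3, False: 0.7}
P_B_given_A = {True: {True: 0.9, False: 0.1},   # P_B_given_A[a][b]
               False: {True: 0.2, False: 0.8}}
P_C_given_B = {True: {True: 0.5, False: 0.5},   # P_C_given_B[b][c]
               False: {True: 0.1, False: 0.9}}

def joint(a, b, c):
    # The BN factorization: P(A, B, C) = P(A) * P(B|A) * P(C|B).
    return P_A[a] * P_B_given_A[a][b] * P_C_given_B[b][c]

# Sanity check: the factorized joint sums to 1 over all assignments.
total = sum(joint(a, b, c) for a in (True, False)
            for b in (True, False) for c in (True, False))
print(joint(True, True, False), total)  # 0.135 1.0
```

A gated Bayesian network, roughly, switches between several such networks over time, with gates deciding which network is active in which phase.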
Content Ontology Design Patterns: Qualities, Methods, and Tools
Author: Karl Hammar
Publisher: Linköping University Electronic Press
ISBN: 917685454X
Category :
Languages : en
Pages : 261
Book Description
Ontologies are formal knowledge models that describe concepts and relationships and enable data integration, information search, and reasoning. Ontology Design Patterns (ODPs) are reusable solutions intended to simplify ontology development and support the use of semantic technologies by ontology engineers. ODPs document and package good modelling practices for reuse, ideally enabling inexperienced ontologists to construct high-quality ontologies. Although ODPs are already used for development, there are still remaining challenges that have not been addressed in the literature. These research gaps include a lack of knowledge about (1) which ODP features are important for ontology engineering, (2) less experienced developers' preferences and barriers for employing ODP tooling, and (3) the suitability of the eXtreme Design (XD) ODP usage methodology in non-academic contexts. This dissertation aims to close these gaps by combining quantitative and qualitative methods, primarily based on five ontology engineering projects involving inexperienced ontologists. A series of ontology engineering workshops and surveys provided data about developer preferences regarding ODP features, ODP usage methodology, and ODP tooling needs. Other data sources are ontologies and ODPs published on the web, which have been studied in detail. To evaluate tooling improvements, experimental approaches provide data from comparison of new tools and techniques against established alternatives. The analysis of the gathered data resulted in a set of measurable quality indicators that cover aspects of ODP documentation, formal representation or axiomatisation, and usage by ontologists. These indicators highlight quality trade-offs: for instance, between ODP Learnability and Reusability, or between Functional Suitability and Performance Efficiency. Furthermore, the results demonstrate a need for ODP tools that support three novel property specialisation strategies, and highlight the preference of inexperienced developers for template-based ODP instantiation, neither of which is supported in prior tooling. The studies also resulted in improvements to ODP search engines based on ODP-specific attributes. Finally, the analysis shows that XD should include guidance for the developer roles and responsibilities in ontology engineering projects, suggestions on how to reuse existing ontology resources, and approaches for adapting XD to project-specific contexts.
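To give a feel for template-based ODP instantiation (a toy sketch of ours; the pattern name and terms are invented for illustration, and real ODP tooling works on OWL models rather than string templates): a pattern is a parameterized set of axioms, and the engineer supplies only the domain terms.

```python
# A part-whole pattern as axiom templates with placeholders.
PART_WHOLE_PATTERN = [
    "{part} rdf:type owl:Class .",
    "{whole} rdf:type owl:Class .",
    "{part} rdfs:subClassOf [ owl:onProperty ex:isPartOf ; "
    "owl:someValuesFrom {whole} ] .",
]

def instantiate(pattern, **bindings):
    # Fill the placeholders with domain terms to obtain concrete axioms.
    return [axiom.format(**bindings) for axiom in pattern]

for axiom in instantiate(PART_WHOLE_PATTERN, part="ex:Piston", whole="ex:Engine"):
    print(axiom)
```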
System-Level Analysis and Design under Uncertainty
Author: Ivan Ukhov
Publisher: Linköping University Electronic Press
ISBN: 9176854264
Category :
Languages : en
Pages : 194
Book Description
One major problem for the designer of electronic systems is the presence of uncertainty, which is due to phenomena such as process and workload variation. Very often, uncertainty is inherent and inevitable. If ignored, it can lead to degradation of the quality of service in the best case and to severe faults or burnt silicon in the worst case. Thus, it is crucial to analyze uncertainty and to mitigate its damaging consequences by designing electronic systems in such a way that they effectively and efficiently take uncertainty into account. We begin by considering techniques for deterministic system-level analysis and design of certain aspects of electronic systems. These techniques do not take uncertainty into account, but they serve as a solid foundation for those that do. Our attention revolves primarily around power and temperature, as they are of central importance for attaining robustness and energy efficiency. We develop a novel approach to dynamic steady-state temperature analysis of electronic systems and apply it in the context of reliability optimization. We then proceed to develop techniques that address uncertainty. The first technique is designed to quantify the variability of process parameters, which is induced by process variation, across silicon wafers based on indirect and potentially incomplete and noisy measurements. The second technique is designed to study diverse system-level characteristics with respect to the variability originating from process variation. In particular, it allows for analyzing transient temperature profiles as well as dynamic steady-state temperature profiles of electronic systems. This is illustrated by considering a problem of design-space exploration with probabilistic constraints related to reliability. The third technique that we develop is designed to efficiently tackle the case of sources of uncertainty that are less regular than process variation, such as workload variation. This technique is exemplified by analyzing the effect that workload units with uncertain processing times have on the timing-, power-, and temperature-related characteristics of the system under consideration. We also address the issue of runtime management of electronic systems that are subject to uncertainty. In this context, we perform an early investigation of the utility of advanced prediction techniques for the purpose of fine-grained long-range forecasting of resource usage in large computer systems. All the proposed techniques are assessed by extensive experimental evaluations, which demonstrate the superior performance of our approaches to analysis and design of electronic systems compared to existing techniques.
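As a minimal illustration of uncertainty propagation in this setting (our sketch with a made-up leakage model and numbers; the thesis develops considerably more efficient techniques than plain sampling): Monte Carlo simulation pushes the distribution of an uncertain process parameter through a toy steady-state thermal model.

```python
import numpy as np

rng = np.random.default_rng(42)
T_AMB, R_TH = 45.0, 2.0  # ambient temperature (C), thermal resistance (C/W)

def power(v_th):
    # Toy model: dissipated power rises as threshold voltage drops.
    return 10.0 + 5.0 * (0.4 - v_th)

# Process variation: threshold voltage scattered around its nominal value.
v_th = rng.normal(loc=0.4, scale=0.02, size=100_000)
temps = T_AMB + R_TH * power(v_th)  # steady-state chip temperature

print(f"mean temperature : {temps.mean():.2f} C")
print(f"99th percentile  : {np.percentile(temps, 99):.2f} C")
```

The tail percentile, rather than the mean, is what a designer would check against a probabilistic reliability constraint.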