PODC '07

PODC '07 PDF Author:
Publisher:
ISBN:
Category : Electronic data processing
Languages : en
Pages : 428

Get Book Here

Book Description

Transactional Memory

Transactional Memory PDF Author: Tim Harris
Publisher: Morgan & Claypool Publishers
ISBN: 1608452352
Category : Computers
Languages : en
Pages : 247

Get Book Here

Book Description
The advent of multicore processors has renewed interest in the idea of incorporating transactions into the programming model used to write parallel programs. This approach, known as transactional memory, offers an alternative, and hopefully better, way to coordinate concurrent threads. The ACI (atomicity, consistency, isolation) properties of transactions provide a foundation to ensure that concurrent reads and writes of shared data do not produce inconsistent or incorrect results. At a higher level, a computation wrapped in a transaction executes atomically: either it completes successfully and commits its result in its entirety or it aborts. In addition, isolation ensures the transaction produces the same result as if no other transactions were executing concurrently. Although transactions are not a parallel programming panacea, they shift much of the burden of synchronizing and coordinating parallel computations from a programmer to a compiler, to a language runtime system, or to hardware. The challenge for system implementers is to build an efficient transactional memory infrastructure. This book presents an overview of the state of the art in the design and implementation of transactional memory systems, as of early spring 2010.
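
To give a flavor of the commit-or-abort behavior described in the blurb, here is a minimal software-transactional-memory sketch in Python. It is an illustration under generic assumptions, not code from the book; the names (ToySTM, atomically, and so on) are invented. Writes are buffered, reads record version numbers, and at commit time the versions are re-checked under a lock: if a concurrent commit invalidated a read, the transaction aborts and is retried.

```python
import threading

class TooMuchContention(Exception):
    pass

class ToySTM:
    """Deliberately simple STM: buffered writes, commit-time validation."""

    def __init__(self):
        self._lock = threading.Lock()   # serializes commits only
        self._values = {}               # key -> current value
        self._versions = {}             # key -> version number

    def atomically(self, fn, max_retries=10):
        """Run fn(txn) atomically: commit all of its writes or none of them."""
        for _ in range(max_retries):
            txn = _Txn(self)
            result = fn(txn)
            if txn.commit():
                return result           # committed in its entirety
            # a concurrent commit invalidated something we read: retry
        raise TooMuchContention("transaction aborted too many times")

class _Txn:
    def __init__(self, stm):
        self._stm = stm
        self._read_versions = {}        # key -> version seen at read time
        self._write_buffer = {}         # writes stay private until commit

    def read(self, key, default=0):
        if key in self._write_buffer:
            return self._write_buffer[key]
        self._read_versions[key] = self._stm._versions.get(key, 0)
        return self._stm._values.get(key, default)

    def write(self, key, value):
        self._write_buffer[key] = value

    def commit(self):
        with self._stm._lock:
            # validation: abort if anything we read has changed since
            for key, seen in self._read_versions.items():
                if self._stm._versions.get(key, 0) != seen:
                    return False
            # publish buffered writes atomically, bumping their versions
            for key, value in self._write_buffer.items():
                self._stm._values[key] = value
                self._stm._versions[key] = self._stm._versions.get(key, 0) + 1
            return True

# usage: a transfer between two "accounts" with no visible intermediate state
stm = ToySTM()
stm.atomically(lambda t: (t.write("a", 100), t.write("b", 0)))

def transfer(t):
    t.write("a", t.read("a") - 10)
    t.write("b", t.read("b") + 10)

stm.atomically(transfer)
print(stm._values)   # {'a': 90, 'b': 10}  (peeking at internals for the demo)
```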

Fault-Tolerant Message-Passing Distributed Systems

Fault-Tolerant Message-Passing Distributed Systems PDF Author: Michel Raynal
Publisher: Springer
ISBN: 3319941410
Category : Computers
Languages : en
Pages : 468

Get Book Here

Book Description
This book presents the most important fault-tolerant distributed programming abstractions and their associated distributed algorithms, in particular in terms of reliable communication and agreement, which lie at the heart of nearly all distributed applications. These programming abstractions, distributed objects or services, allow software designers and programmers to cope with asynchrony and the most important types of failures such as process crashes, message losses, and malicious behaviors of computing entities, widely known under the term "Byzantine fault-tolerance". The author introduces these notions in an incremental manner, starting from a clear specification, followed by algorithms which are first described intuitively and then proved correct. The book also presents impossibility results in classic distributed computing models, along with strategies, mainly failure detectors and randomization, that allow us to enrich these models. In this sense, the book constitutes an introduction to the science of distributed computing, with applications in all domains of distributed systems, such as cloud computing and blockchains. Each chapter comes with exercises and bibliographic notes to help the reader approach, understand, and master the fascinating field of fault-tolerant distributed computing.
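
As a flavor of the kind of communication abstraction the book specifies and proves correct, here is a small sketch (my own illustration, not an algorithm reproduced from the book) of the classic "relay on first delivery" idea behind reliable broadcast in a crash-prone message-passing system: every process re-sends a message the first time it delivers it, so that if any correct process delivers, all correct processes eventually do, even if the sender crashes mid-broadcast.

```python
class Process:
    def __init__(self, pid):
        self.pid = pid
        self.delivered = set()   # message ids delivered so far
        self.crashed = False
        self.peers = []

    def rb_broadcast(self, msg_id, payload, only_first=False):
        # 'only_first' models the sender crashing after reaching one peer
        targets = self.peers[:1] if only_first else self.peers
        for p in targets:
            p.rb_receive(msg_id, payload)

    def rb_receive(self, msg_id, payload):
        if self.crashed or msg_id in self.delivered:
            return
        self.delivered.add(msg_id)           # deliver exactly once
        for p in self.peers:                 # relay: the key step for reliability
            p.rb_receive(msg_id, payload)

procs = [Process(i) for i in range(4)]
for p in procs:
    p.peers = [q for q in procs if q is not p]

# Process 0 crashes while broadcasting: only process 1 hears it directly.
procs[0].crashed = True
procs[0].rb_broadcast("m1", "hello", only_first=True)

print({p.pid: sorted(p.delivered) for p in procs})
# {0: [], 1: ['m1'], 2: ['m1'], 3: ['m1']}: every correct process delivered "m1"
```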

Search-Based Applications

Search-Based Applications PDF Author: Gregory Grefenstette
Publisher: Springer Nature
ISBN: 3031022742
Category : Computers
Languages : en
Pages : 159

Get Book Here

Book Description
We are poised at a major turning point in the history of information management via computers. Recent evolutions in computing, communications, and commerce are fundamentally reshaping the ways in which we humans interact with information, and generating enormous volumes of electronic data along the way. As a result of these forces, what will data management technologies, and their supporting software and system architectures, look like in ten years? It is difficult to say, but we can see the future taking shape now in a new generation of information access platforms that combine strategies and structures of two familiar, and previously quite distinct, technologies, search engines and databases, and in a new model for software applications, the Search-Based Application (SBA), which offers a pragmatic way to solve both well-known and emerging information management challenges. Search engines are the world's most familiar and widely deployed information access tool, used by hundreds of millions of people every day to locate information on the Web, but few are aware that they can now also be used to provide precise, multidimensional information access and analysis that is hard to distinguish from current database applications, yet endowed with the usability and massive scalability of Web search. In this book, we hope to introduce Search-Based Applications to a wider audience, using real case studies to show how this flexible technology can be used to intelligently aggregate large volumes of unstructured data (like Web pages) and structured data (like database content), and to make that data available in a highly contextual, quasi real-time manner to a wide base of users for a varied range of purposes. We also hope to shed light on the general convergences underway in the search and database disciplines, convergences that make SBAs possible and that serve as harbingers of information management paradigms and technologies to come. Table of Contents: Search Based Applications / Evolving Business Information Access Needs / Origins and Histories / Data Models and Storage / Data Collection/Population / Data Processing / Data Retrieval / Data Security, Usability, Performance, Cost / Summary Evolutions and Convergences / SBA Platforms / SBA Uses and Preconditions / Anatomy of a Search Based Application / Case Study: GEFCO / Case Study: Urbanizer / Case Study: National Postal Agency / Future Directions
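
To make the SBA idea concrete, here is a toy sketch of the core mechanism (my own illustration with invented document ids and field names, not material from the book): structured records and unstructured text are pushed into one inverted index, and a single query mixes a full-text term with database-like field filters.

```python
from collections import defaultdict

# One index over both structured records and free text: token -> set of doc ids.
index = defaultdict(set)
docs = {}

def ingest(doc_id, fields, text):
    """Index structured fields (as field:value tokens) and unstructured text."""
    docs[doc_id] = {"fields": fields, "text": text}
    for name, value in fields.items():
        index[f"{name}:{value}".lower()].add(doc_id)
    for token in text.lower().split():
        index[token].add(doc_id)

def query(*tokens):
    """AND-query mixing full-text terms and field filters."""
    sets = [index.get(tok.lower(), set()) for tok in tokens]
    return set.intersection(*sets) if sets else set()

# structured data (database-like rows) and unstructured data (web-page-like text)
ingest("order-17", {"country": "FR", "status": "shipped"}, "pallets of spare parts")
ingest("page-3", {"country": "FR", "status": "draft"}, "spare parts catalogue for 2024")

print(query("spare", "country:FR"))                    # {'order-17', 'page-3'}
print(query("spare", "country:FR", "status:shipped"))  # {'order-17'}
```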

Transactional Memory, Second Edition

Transactional Memory, Second Edition PDF Author: Tim Harris
Publisher: Springer Nature
ISBN: 3031017285
Category : Technology & Engineering
Languages : en
Pages : 247

Get Book Here

Book Description
The advent of multicore processors has renewed interest in the idea of incorporating transactions into the programming model used to write parallel programs. This approach, known as transactional memory, offers an alternative, and hopefully better, way to coordinate concurrent threads. The ACI (atomicity, consistency, isolation) properties of transactions provide a foundation to ensure that concurrent reads and writes of shared data do not produce inconsistent or incorrect results. At a higher level, a computation wrapped in a transaction executes atomically - either it completes successfully and commits its result in its entirety or it aborts. In addition, isolation ensures the transaction produces the same result as if no other transactions were executing concurrently. Although transactions are not a parallel programming panacea, they shift much of the burden of synchronizing and coordinating parallel computations from a programmer to a compiler, to a language runtime system, or to hardware. The challenge for the system implementers is to build an efficient transactional memory infrastructure. This book presents an overview of the state of the art in the design and implementation of transactional memory systems, as of early spring 2010. Table of Contents: Introduction / Basic Transactions / Building on Basic Transactions / Software Transactional Memory / Hardware-Supported Transactional Memory / Conclusions

Handbook of Fiber Optic Data Communication

Handbook of Fiber Optic Data Communication PDF Author: Casimer DeCusatis
Publisher: Elsevier Inc. Chapters
ISBN: 0128068280
Category : Science
Languages : en
Pages : 30

Get Book Here

Book Description
All modern data centers require some form of data backup or replication to protect the data from natural or man-made disasters and provide business continuity. Companies rely on their information systems to run daily operations. If a system becomes unavailable, company operations may be impaired or stopped completely. If critical data remains inaccessible for an extended period, the company may never recover and be forced to go out of business. It is necessary to provide a reliable infrastructure for IT operations in order to minimize any chance of disruption. In this chapter, we define the requirements for Tier 1 through Tier 4 data centers. We discuss the ACID-BASE (atomicity, consistency, isolation, durability / basically available, soft state, eventual consistency) taxonomies for data consistency, giving examples from companies such as Yahoo!, Amazon, Google, and IBM. The chapter includes a detailed discussion of the different options for IBM Geographically Dispersed Parallel Sysplex (GDPS), an enterprise-class high-end business continuity and disaster recovery solution, including the Sysplex Timer protocol, InterSystem Channel (ISC), Parallel Sysplex InfiniBand (PSIFB), and more.
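
A rough sketch of the ACID/BASE contrast the chapter draws, as a generic illustration and not code from the handbook: a strongly consistent write is not acknowledged until every replica has applied it, while an eventually consistent write is acknowledged after the local update and propagated to the other replicas later by a background process.

```python
import copy

class Replicas:
    def __init__(self, n):
        self.stores = [{} for _ in range(n)]   # one dict per replica site
        self.pending = []                      # updates not yet propagated

    # ACID-flavored write: all replicas apply the update before we acknowledge.
    def write_strong(self, key, value):
        snapshot = copy.deepcopy(self.stores)  # so we can roll back atomically
        try:
            for store in self.stores:
                store[key] = value             # stands in for 2PC / sync mirroring
        except Exception:
            self.stores = snapshot             # atomicity: all or nothing
            raise
        return "ack (durable on all replicas)"

    # BASE-flavored write: acknowledge after the local write, converge later.
    def write_eventual(self, key, value):
        self.stores[0][key] = value
        self.pending.append((key, value))      # will reach the others eventually
        return "ack (soft state, will converge)"

    def anti_entropy(self):
        """Background propagation that brings replicas back in sync."""
        for key, value in self.pending:
            for store in self.stores[1:]:
                store[key] = value
        self.pending.clear()

r = Replicas(3)
r.write_strong("balance", 100)
r.write_eventual("balance", 90)
print([s["balance"] for s in r.stores])   # [90, 100, 100]  (stale reads possible)
r.anti_entropy()
print([s["balance"] for s in r.stores])   # [90, 90, 90]    (eventual consistency)
```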

Concurrent Programming: Algorithms, Principles, and Foundations

Concurrent Programming: Algorithms, Principles, and Foundations PDF Author: Michel Raynal
Publisher: Springer Science & Business Media
ISBN: 3642320279
Category : Computers
Languages : en
Pages : 530

Get Book Here

Book Description
This book is devoted to the most difficult part of concurrent programming, namely synchronization concepts, techniques and principles when the cooperating entities are asynchronous, communicate through a shared memory, and may experience failures. Synchronization is no longer a set of tricks but, due to research results in recent decades, it relies today on sound scientific foundations, as explained in this book. The author explains synchronization and the implementation of concurrent objects, presenting in a uniform and comprehensive way the major theoretical and practical results of the past 30 years. Among the key features of the book are a new look at lock-based synchronization (mutual exclusion, semaphores, monitors, path expressions); an introduction to the atomicity consistency criterion and its properties, and a specific chapter on transactional memory; an introduction to mutex-freedom and associated progress conditions such as obstruction-freedom and wait-freedom; a presentation of Lamport's hierarchy of safe, regular and atomic registers and associated wait-free constructions; a description of numerous wait-free constructions of concurrent objects (queues, stacks, weak counters, snapshot objects, renaming objects, etc.); a presentation of the computability power of concurrent objects, including the notions of universal construction, consensus number and the associated Herlihy's hierarchy; and a survey of failure detector-based constructions of consensus objects. The book is suitable for advanced undergraduate students and graduate students in computer science or computer engineering, graduate students in mathematics interested in the foundations of process synchronization, and practitioners and engineers who need to produce correct concurrent software. The reader should have a basic knowledge of algorithms and operating systems.
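
To illustrate the consensus-number idea mentioned in the blurb, here is a generic textbook-style sketch (not an excerpt from the book; the compare-and-swap object is simulated with a lock): a compare-and-swap register lets any number of asynchronous processes agree wait-free on a single value, because the first successful compare-and-swap fixes the decision and every later proposer returns that same value.

```python
import threading

class CompareAndSwap:
    """A shared object offering atomic compare-and-swap, simulated with a lock."""
    def __init__(self, initial=None):
        self._value = initial
        self._lock = threading.Lock()

    def compare_and_swap(self, expected, new):
        with self._lock:                 # stands in for the hardware primitive
            if self._value == expected:
                self._value = new
                return True
            return False

    def read(self):
        with self._lock:
            return self._value

UNDECIDED = object()
decision = CompareAndSwap(UNDECIDED)

def propose(value):
    """Wait-free consensus via CAS: every caller returns the same decided value."""
    decision.compare_and_swap(UNDECIDED, value)   # only the first call succeeds
    return decision.read()

results = []
threads = [threading.Thread(target=lambda v=v: results.append(propose(v)))
           for v in ("red", "green", "blue")]
for t in threads: t.start()
for t in threads: t.join()

print(results)                 # three identical values: all proposers agree
assert len(set(results)) == 1
```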

Structural Failure Models for Fault-Tolerant Distributed Computing

Structural Failure Models for Fault-Tolerant Distributed Computing PDF Author: Timo Warns
Publisher: Springer Science & Business Media
ISBN: 3834897078
Category : Computers
Languages : en
Pages : 227

Get Book Here

Book Description
Timo Warns has developed tractable fault models that, while being non-probabilistic, are accurate for dependent and propagating faults. Using seminal problems such as consensus and constructing coteries, he demonstrates how the new models can be used to design and evaluate effective and efficient means of fault tolerance.
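
For a flavor of the coterie-construction part, here is a small check of my own (not code from the book): a coterie is a family of quorums in which any two quorums intersect and no quorum contains another; the intersection property is what prevents two disjoint groups of processes from making conflicting decisions when some processes fail.

```python
from itertools import combinations

def is_coterie(quorums):
    """Check the two defining properties of a coterie."""
    quorums = [frozenset(q) for q in quorums]
    for q1, q2 in combinations(quorums, 2):
        if not (q1 & q2):        # intersection: any two quorums share a process
            return False
        if q1 < q2 or q2 < q1:   # minimality: no quorum properly contains another
            return False
    return True

# Majority quorums over {1, 2, 3} form a coterie ...
print(is_coterie([{1, 2}, {2, 3}, {1, 3}]))   # True
# ... but disjoint quorums do not (two groups could decide independently).
print(is_coterie([{1, 2}, {3, 4}]))           # False
```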

Be sparse! Be dense! Be robust!

Be sparse! Be dense! Be robust! PDF Author: Sorge, Manuel
Publisher: Universitätsverlag der TU Berlin
ISBN: 3798328854
Category : Mathematics
Languages : en
Pages : 272

Get Book Here

Book Description
In this thesis we study the computational complexity of five NP-hard graph problems. It is widely accepted that, in general, NP-hard problems cannot be solved efficiently, that is, in polynomial time, due to many unsuccessful attempts to prove the contrary. Hence, we aim to identify properties of the inputs, other than their length, that make the problem tractable or intractable. We measure these properties via parameters, mappings that assign to each input a nonnegative integer. For a given parameter k, we then attempt to design fixed-parameter algorithms, algorithms that on input q have running time upper bounded by f(k(q)) * |q|^c, where f is a preferably slowly growing function, |q| is the length of q, and c is a constant, preferably small. In each of the graph problems treated in this thesis, our input represents the setting in which we shall find a solution graph. In addition, the solution graphs shall have a certain property specific to our five graph problems. This property comes in three flavors. First, we look for a graph that shall be sparse! That is, it shall contain few edges. Second, we look for a graph that shall be dense! That is, it shall contain many edges. Third, we look for a graph that shall be robust! That is, it shall remain a good solution, even when it suffers several small modifications. Be sparse! In this part of the thesis, we analyze two similar problems. The input for both of them is a hypergraph H, which consists of a vertex set V and a family E of subsets of V, called hyperedges. The task is to find a support for H, a graph G such that for each hyperedge W in E we have that G[W] is connected. Motivated by applications in network design, we study SUBSET INTERCONNECTION DESIGN, where we additionally get an integer f, and the support shall contain at most |V| - f + 1 edges. We show that SUBSET INTERCONNECTION DESIGN admits a fixed-parameter algorithm with respect to the number of hyperedges in the input hypergraph, and a fixed-parameter algorithm with respect to f + d, where d is the size of a largest hyperedge. Motivated by an application in hypergraph visualization, we study r-OUTERPLANAR SUPPORT, where the support for H shall be r-outerplanar, that is, admit an edge-crossing-free embedding in the plane with at most r layers. We show that r-OUTERPLANAR SUPPORT admits a fixed-parameter algorithm with respect to m + r, where m is the number of hyperedges in the input hypergraph H. Be dense! In this part of the thesis, we study two problems motivated by community detection in social networks. Herein, the input is a graph G and an integer k. We look for a subgraph G' of G containing (exactly) k vertices which adheres to one of two mathematically precise definitions of being dense. In mu-CLIQUE, 0 < mu <= 1, the sought k-vertex subgraph G' should contain at least mu times (k choose 2) edges. We study the complexity of mu-CLIQUE with respect to three parameters of the input graph G: the maximum vertex degree delta, the h-index h, and the degeneracy d. We have delta >= h >= d in every graph, and both h and d assume small values in graphs derived from social networks. For delta and for h, we obtain fixed-parameter algorithms for mu-CLIQUE, and we show that for d + k a fixed-parameter algorithm is unlikely to exist. We prove the positive algorithmic results by developing a general framework for optimizing objective functions over k-vertex subgraphs.
In HIGHLY CONNECTED SUBGRAPH we look for a k-vertex subgraph G' in which each vertex shall have degree at least floor(k/2) + 1. We analyze a part of the so-called parameter ecology for HIGHLY CONNECTED SUBGRAPH, that is, we navigate the space of possible parameters in a quest to find a reasonable trade-off between small parameter values in practice and efficient running time guarantees. The highlights are that no 2^o(n) * n^c-time algorithms are possible for n-vertex input graphs unless the Exponential Time Hypothesis fails; that there is an O(4^g * n^2)-time algorithm for the number g of edges outgoing from the solution G'; and that we derive a 2^(O(sqrt(a) log a)) + O(a^2 * nm)-time algorithm for the number a of edges not in the solution. Be robust! In this part of the thesis, we study the VECTOR CONNECTIVITY problem, where we are given a graph G, a vertex labeling ell from V(G) to {1, ..., d}, and an integer k. We are to find a vertex subset S of V(G) of size at most k such that each vertex v in V(G) \ S has ell(v) vertex-disjoint paths from v to S in G. Such a set S is useful when placing servers in a network to satisfy robustness-of-service demands. We prove that VECTOR CONNECTIVITY admits a randomized fixed-parameter algorithm with respect to k, that it does not allow a polynomial kernelization with respect to k + d, but that, if d is treated as a constant, it allows a vertex-linear kernelization with respect to k.
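
To make the "Be dense!" problem definition concrete, here is a small verification sketch of my own (not from the thesis): it checks whether a chosen set of k vertices induces a highly connected subgraph, that is, whether every vertex has degree at least floor(k/2) + 1 within the set. The thesis is about finding such sets efficiently for suitable parameters; checking a candidate is the easy part.

```python
def is_highly_connected(adj, subset):
    """Check that every vertex of 'subset' has internal degree >= floor(k/2) + 1.

    adj: dict mapping each vertex to the set of its neighbours.
    subset: the k candidate vertices of the sought subgraph G'.
    """
    k = len(subset)
    threshold = k // 2 + 1                    # floor(k/2) + 1
    subset = set(subset)
    return all(len(adj[v] & subset) >= threshold for v in subset)

# A 4-clique is highly connected (every vertex has internal degree 3 >= 3) ...
clique = {v: {u for u in range(4) if u != v} for v in range(4)}
print(is_highly_connected(clique, range(4)))          # True

# ... while a 4-cycle is not (internal degree 2 < 3).
cycle = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
print(is_highly_connected(cycle, range(4)))           # False
```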

Building Dependable Distributed Systems

Building Dependable Distributed Systems PDF Author: Wenbing Zhao
Publisher: John Wiley & Sons
ISBN: 1118912632
Category : Computers
Languages : en
Pages : 246

Get Book Here

Book Description
A one-volume guide to the most essential techniques for designing and building dependable distributed systems. Instead of covering a broad range of research works for each dependability strategy, this useful reference focuses on only a selected few (usually the most seminal works, the most practical approaches, or the first publication of each approach), explaining each in depth, usually with a comprehensive set of examples. Each technique is dissected thoroughly enough so that readers who are not familiar with dependable distributed computing can actually grasp the technique after studying the book. Building Dependable Distributed Systems consists of eight chapters. The first introduces the basic concepts and terminology of dependable distributed computing, and also provides an overview of the primary means of achieving dependability. Checkpointing and logging mechanisms, which are the most commonly used means of achieving a limited degree of fault tolerance, are described in the second chapter. Works on recovery-oriented computing, focusing on the practical techniques that reduce the fault detection and recovery times for Internet-based applications, are covered in Chapter three. Chapter four outlines the replication techniques for data and service fault tolerance. This chapter also pays particular attention to optimistic replication and the CAP theorem. Chapter five explains a few seminal works on group communication systems. Chapter six introduces the distributed consensus problem and covers a number of Paxos family algorithms in depth. The Byzantine generals problem and its latest solutions, including the seminal Practical Byzantine Fault Tolerance (PBFT) algorithm and a number of its derivatives, are introduced in Chapter seven. The final chapter details the latest research results surrounding application-aware Byzantine fault tolerance, which represents an important step forward in the practical use of Byzantine fault tolerance techniques.
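
As a minimal illustration of the checkpointing-and-logging idea covered in the second chapter (a generic sketch under my own assumptions, not the book's code), the service below periodically snapshots its state and logs every operation applied afterwards; after a crash it restores the last checkpoint and replays the logged operations to recover the pre-crash state.

```python
import copy

class LoggedCounterService:
    """A toy service that recovers via checkpoint plus log replay."""

    def __init__(self):
        self.state = {"count": 0}
        self.checkpoint = copy.deepcopy(self.state)   # last saved snapshot
        self.log = []                                 # operations since checkpoint

    def apply(self, op, amount):
        self.log.append((op, amount))                 # log first, then apply
        self._do(self.state, op, amount)

    def take_checkpoint(self):
        self.checkpoint = copy.deepcopy(self.state)   # snapshot current state
        self.log.clear()                              # log keeps only the suffix

    def recover(self):
        """Simulate restarting after a crash: restore snapshot, replay the log."""
        self.state = copy.deepcopy(self.checkpoint)
        for op, amount in self.log:
            self._do(self.state, op, amount)

    @staticmethod
    def _do(state, op, amount):
        if op == "add":
            state["count"] += amount

svc = LoggedCounterService()
svc.apply("add", 5)
svc.take_checkpoint()         # snapshot at count == 5
svc.apply("add", 3)           # logged but not yet in any checkpoint

svc.state = {"count": -999}   # pretend the in-memory state was lost in a crash
svc.recover()
print(svc.state)              # {'count': 8}: checkpoint (5) plus replayed log (+3)
```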