Elements of Statistical Disclosure Control
Author: Leon Willenborg
Publisher: Springer Science & Business Media
ISBN: 1461301211
Category: Business & Economics
Languages: en
Pages: 273
Book Description
Statistical disclosure control is the discipline that deals with producing statistical data that are safe enough to be released to external researchers. This book concentrates on the methodology of the area. It deals with both microdata (individual data) and tabular (aggregated) data. The book attempts to develop the theory from what can be called the paradigm of statistical confidentiality: to modify unsafe data in such a way that safe (enough) data emerge, with minimum information loss. This book discusses what safe data are, how information loss can be measured, and how to modify the data in a (near) optimal way. Once it has been decided how to measure safety and information loss, the production of safe data from unsafe data is often a matter of solving an optimization problem. Several such problems are discussed in the book, and most of them turn out to be hard problems that can be solved only approximately. The authors present new results that have not been published before. The book is not a description of a closed area but, on the contrary, of one that still has many spots awaiting fuller exploration. Some of these are indicated in the book. The book will be useful for official, social and medical statisticians and others who are involved in releasing personal or business data for statistical use. Operations researchers may be interested in the optimization problems involved, particularly for the challenges they present. Leon Willenborg has worked at the Department of Statistical Methods at Statistics Netherlands since 1983, first as a researcher and since 1989 as a senior researcher. Since 1989 his main field of research and consultancy has been statistical disclosure control. From 1996 to 1998 he was the project coordinator of the EU co-funded SDC project.
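To make the blurb's optimization framing concrete: the sketch below (illustrative Python, not taken from the book; the function name, heuristic and data are invented) greedily suppresses key-variable values until every combination of key variables occurs at least k times. Finding the minimum number of suppressions is one of the hard problems the book alludes to, so a greedy heuristic only approximates the optimum.

from collections import Counter

def greedy_local_suppression(records, key_vars, k=2):
    """Suppress key-variable values (set them to None) until every record's
    key combination occurs at least k times; a crude approximation, since
    minimizing the number of suppressions is computationally hard."""
    records = [dict(r) for r in records]  # work on a copy
    while True:
        freq = Counter(tuple(r[v] for v in key_vars) for r in records)
        unsafe = [r for r in records if freq[tuple(r[v] for v in key_vars)] < k]
        if not unsafe:
            return records
        # Pick a key variable that still has values to suppress among the
        # unsafe records, preferring the one with the most distinct values
        # (a rough proxy for the merging its suppression will cause).
        candidates = [v for v in key_vars if any(r[v] is not None for r in unsafe)]
        if not candidates:
            # Even fully suppressed keys are too rare: drop those records.
            return [r for r in records if freq[tuple(r[v] for v in key_vars)] >= k]
        target = max(candidates, key=lambda v: len({r[v] for r in unsafe}))
        for r in unsafe:
            r[target] = None

people = [
    {"age": 34, "zip": "1011", "job": "nurse"},
    {"age": 34, "zip": "1011", "job": "teacher"},
    {"age": 34, "zip": "1012", "job": "clerk"},
    {"age": 35, "zip": "1012", "job": "mayor"},
]
print(greedy_local_suppression(people, key_vars=["age", "zip"], k=2))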
Statistical Disclosure Control
Author: Anco Hundepool
Publisher: John Wiley & Sons
ISBN: 1118348214
Category: Mathematics
Languages: en
Pages: 308
Book Description
A reference to answer all your statistical confidentiality questions. This handbook provides technical guidance on statistical disclosure control and on how to approach the problem of balancing the need to provide users with statistical outputs against the need to protect the confidentiality of respondents. Statistical disclosure control is combined with other tools, such as administrative, legal and IT measures, to define a proper data dissemination strategy based on a risk management approach. The key concepts of statistical disclosure control are presented, along with the methodology and software that can be used to apply various methods of statistical disclosure control. Numerous examples and guidelines are also featured to illustrate the topics covered. Statistical Disclosure Control:
- Presents a combination of both theoretical and practical solutions.
- Introduces all the key concepts and definitions involved with statistical disclosure control.
- Provides a high-level overview of how to approach problems associated with confidentiality.
- Provides a broad-ranging review of the methods available to control disclosure.
- Explains the subtleties of group disclosure control.
- Features examples throughout the book along with case studies demonstrating how particular methods are used.
- Discusses microdata, magnitude and frequency tabular data, and remote access issues.
- Is written by experts within leading National Statistical Institutes.
Official statisticians, academics and market researchers who need to be informed and make decisions on disclosure limitation will benefit from this book.
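One of the basic tabular-protection rules such handbooks cover, the minimum-frequency (threshold) rule, is easy to sketch. The snippet below is an invented Python illustration, not code from the book, and it omits the secondary suppression that real systems add so that suppressed cells cannot be recovered from the table margins.

THRESHOLD = 3  # cells based on fewer than 3 respondents are considered unsafe

table = {
    ("North", "agriculture"): 42,
    ("North", "mining"): 2,       # below threshold: could expose respondents
    ("South", "agriculture"): 17,
    ("South", "mining"): 8,
}

# Primary suppression: publish the count only when it meets the threshold.
protected = {cell: (n if n >= THRESHOLD else None) for cell, n in table.items()}
print(protected)  # None marks a suppressed cell (often shown as "x")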
Statistical Disclosure Control for Microdata
Author: Matthias Templ
Publisher: Springer
ISBN: 3319502727
Category: Social Science
Languages: en
Pages: 299
Book Description
This book on statistical disclosure control presents the theory, applications and software implementation of the traditional approach to (micro)data anonymization, including data perturbation methods, disclosure risk, data utility, information loss and methods for simulating synthetic data. Introducing readers to the R packages sdcMicro and simPop, the book also features numerous examples and exercises with solutions, as well as case studies with real-world data, accompanied by the underlying R code to allow readers to reproduce all results. The demand for and volume of data from surveys, registers or other sources containing sensitive information on persons or enterprises have increased significantly over the last several years. At the same time, privacy protection principles and regulations have imposed restrictions on the access and use of individual data. Proper and secure microdata dissemination calls for the application of statistical disclosure control methods to the data before release. This book is intended for practitioners at statistical agencies and other national and international organizations that deal with confidential data. It will also be of interest to researchers working in statistical disclosure control and the health sciences.
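The book's worked examples use the R packages sdcMicro and simPop; as a language-neutral illustration of one core idea, assessing disclosure risk through the frequency of key-variable (quasi-identifier) combinations, here is a small Python sketch with invented data. It is not sdcMicro's API.

from collections import Counter

def key_frequencies(records, key_vars):
    """Return each record with the sample frequency of its key combination.
    Rare combinations, especially sample uniques (f == 1), carry the
    highest re-identification risk."""
    freq = Counter(tuple(r[v] for v in key_vars) for r in records)
    return [(freq[tuple(r[v] for v in key_vars)], r) for r in records]

survey = [
    {"region": "NE", "age_group": "30-39", "income": 52000},
    {"region": "NE", "age_group": "30-39", "income": 61000},
    {"region": "SW", "age_group": "80+", "income": 18000},  # sample unique
]
for f, record in key_frequencies(survey, ["region", "age_group"]):
    print(f, "risky" if f == 1 else "ok", record)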
Statistical Disclosure Control in Practice
Author: Leon Willenborg
Publisher: Springer Science & Business Media
ISBN: 146124028X
Category: Mathematics
Languages: en
Pages: 164
Book Description
The aim of this book is to discuss various aspects associated with disseminating personal or business data collected in censuses or surveys or copied from administrative sources. The problem is to present the data in such a form that they are useful for statistical research and to provide sufficient protection for the individuals or businesses to whom the data refer. The major part of this book is concerned with how to define the disclosure problem and how to deal with it in practical circumstances.
Synthetic Datasets for Statistical Disclosure Control
Author: Jörg Drechsler
Publisher: Springer Science & Business Media
ISBN: 146140326X
Category: Social Science
Languages: en
Pages: 148
Book Description
The aim of this book is to give the reader a detailed introduction to the different approaches to generating multiply imputed synthetic datasets. It describes all approaches that have been developed so far, provides a brief history of synthetic datasets, and gives useful hints on how to deal with real data problems like nonresponse, skip patterns, or logical constraints. Each chapter is dedicated to one approach, first describing the general concept and then presenting a detailed application to a real dataset, with useful guidelines on how to implement the theory in practice. The multiple imputation approaches discussed include imputation for nonresponse, generating fully synthetic datasets, generating partially synthetic datasets, generating synthetic datasets when the original data are subject to nonresponse, and a two-stage imputation approach that helps to better address the omnipresent trade-off between analytical validity and the risk of disclosure. The book concludes with a glimpse into the future of synthetic datasets, discussing the potential benefits and possible obstacles of the approach and ways to address the concerns of data users and their understandable discomfort with using data that do not consist only of the originally collected values. The book is intended for researchers and practitioners alike. It helps the researcher find the state of the art in synthetic data summarized in one book, with full references to all relevant papers on the topic. But it is also useful for the practitioner at a statistical agency who is considering the synthetic data approach for future data dissemination and wants to become familiar with the topic.
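As a hedged sketch of the partially synthetic idea (not code from the book; the model, data and numbers are invented): fit a model for the sensitive variable on the confidential data, then release m datasets in which that variable is replaced by model-based draws. A faithful implementation would also draw the model parameters from their posterior and apply the appropriate combining rules for inference on the released replicates.

import numpy as np

rng = np.random.default_rng(0)
age = rng.uniform(20, 65, size=200)
income = 800 * age + rng.normal(0, 5000, size=200)  # the sensitive variable

# Fit income ~ age by least squares on the confidential data.
X = np.column_stack([np.ones_like(age), age])
beta, *_ = np.linalg.lstsq(X, income, rcond=None)
sigma = np.std(income - X @ beta, ddof=2)

# Release m replicates with the sensitive values replaced by synthetic draws.
m = 5
synthetic_incomes = [X @ beta + rng.normal(0, sigma, size=age.size) for _ in range(m)]
print([round(float(s.mean()), 1) for s in synthetic_incomes])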
Statistical Confidentiality
Author: George T. Duncan
Publisher: Springer Science & Business Media
ISBN: 144197802X
Category: Social Science
Languages: en
Pages: 205
Book Description
Because statistical confidentiality embraces the responsibility for both protecting data and ensuring its beneficial use for statistical purposes, those working with personal and proprietary data can benefit from the principles and practices this book presents. Researchers can understand why an agency holding statistical data does not respond well to the demand, “Just give me the data; I’m only going to do good things with it.” Statisticians can incorporate the requirements of statistical confidentiality into their methodologies for data collection and analysis. Data stewards, caught between those eager for data and those who worry about confidentiality, can use the tools of statistical confidentiality toward satisfying both groups. The eight chapters lay out the dilemma of data stewardship organizations (such as statistical agencies) in resolving the tension between protecting data from snoopers and providing data to legitimate users; explain disclosure risk and explore the types of attack that a data snooper might mount; present the methods of disclosure risk assessment; give techniques for statistical disclosure limitation of both tabular data and microdata; identify measures of the impact of disclosure limitation on data utility; present restricted access methods as administrative procedures for disclosure control; and finally explore the future of statistical confidentiality.
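The risk–utility tension the book formalizes can be seen in a toy simulation (all measures and numbers below are invented for illustration): the more noise added to a variable before release, the lower a snooper's chance of recovering an individual's value, but the further published statistics drift from the truth.

import numpy as np

rng = np.random.default_rng(1)
true_values = rng.normal(50, 10, size=1000)

for noise_sd in [0.0, 2.0, 5.0, 10.0]:
    released = true_values + rng.normal(0, noise_sd, size=true_values.size)
    # Risk proxy: share of released values within 1.0 of the true value.
    risk = float(np.mean(np.abs(released - true_values) < 1.0))
    # Utility proxy: drift of the released mean from the true mean.
    mean_shift = abs(float(released.mean() - true_values.mean()))
    print(f"noise_sd={noise_sd:4.1f}  risk={risk:.2f}  mean_shift={mean_shift:.3f}")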
Advances in Cryptology - CRYPTO 2008
Author: David Wagner
Publisher: Springer
ISBN: 3540851747
Category: Computers
Languages: en
Pages: 605
Book Description
This book constitutes the refereed proceedings of the 28th Annual International Cryptology Conference, CRYPTO 2008, held in Santa Barbara, CA, USA in August 2008. The 32 revised full papers presented were carefully reviewed and selected from 184 submissions. Addressing all current foundational, theoretical and research aspects of cryptology, cryptography, and cryptanalysis as well as advanced applications, the papers are organized in topical sections on random oracles, applications, public-key crypto, hash functions, cryptanalysis, multiparty computation, privacy, zero knowledge, and oblivious transfer.
Privacy in Statistical Databases
Author: Josep Domingo-Ferrer
Publisher: Springer
ISBN: 9783030575205
Category: Computers
Languages: en
Pages: 370
Book Description
This book constitutes the refereed proceedings of the International Conference on Privacy in Statistical Databases, PSD 2020, held in Tarragona, Spain, in September 2020 under the sponsorship of the UNESCO Chair in Data Privacy. The 25 revised full papers presented were carefully reviewed and selected from 49 submissions. The papers are organized into the following topics: privacy models; microdata protection; protection of statistical tables; protection of interactive and mobility databases; record linkage and alternative methods; synthetic data; data quality; and case studies. The chapter “Explaining recurrent machine learning models: integral privacy revisited” is available open access under a Creative Commons Attribution 4.0 International License via link.springer.com.
Total Survey Error in Practice
Author: Paul P. Biemer
Publisher: John Wiley & Sons
ISBN: 1119041678
Category: Social Science
Languages: en
Pages: 624
Book Description
Featuring a timely presentation of total survey error (TSE), this edited volume introduces valuable tools for understanding and improving survey data quality in the context of evolving large-scale data sets. This book provides an overview of the TSE framework and current TSE research as related to survey design, data collection, estimation, and analysis. It recognizes that survey data affect many public policy and business decisions and thus focuses on the framework for understanding and improving survey data quality. The book also addresses issues with data quality in official statistics and in social, opinion, and market research as these fields continue to evolve, leading to larger and messier data sets. This perspective challenges survey organizations to find ways to collect and process data more efficiently without sacrificing quality. The volume consists of the most up-to-date research and reporting from over 70 contributors representing the best academics and researchers from a range of fields. The chapters are broken out into five main sections: The Concept of TSE and the TSE Paradigm, Implications for Survey Design, Data Collection and Data Processing Applications, Evaluation and Improvement, and Estimation and Analysis. Each chapter introduces and examines multiple error sources, such as sampling error, measurement error, and nonresponse error, which often pose the greatest risks to data quality, while also encouraging readers not to lose sight of the less commonly studied error sources, such as coverage error, processing error, and specification error. The book also notes the relationships between errors and the ways in which efforts to reduce one type can increase another, resulting in an estimate with larger total error. This book:
- Features various error sources, and the complex relationships between them, in 25 high-quality chapters on the most up-to-date research in the field of TSE.
- Provides comprehensive reviews of the literature on error sources as well as data collection approaches and estimation methods to reduce their effects.
- Presents examples of recent international events that demonstrate the effects of data error, the importance of survey data quality, and the real-world issues that arise from these errors.
- Spans the four pillars of the total survey error paradigm (design, data collection, evaluation and analysis) to address key data quality issues in official statistics and survey research.
Total Survey Error in Practice is a reference for survey researchers and data scientists in research areas that include social science, public opinion, public policy, and business. It can also be used as a textbook or supplementary material for a graduate-level course in survey research methods.
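The point that reducing one error source can still leave a larger total error follows from the mean-squared-error decomposition MSE = bias² + variance. The simulation below (invented numbers, not from the book) compares a small unbiased design with a large design suffering a nonresponse bias: the larger sample has far less sampling variance yet a larger total error.

import numpy as np

rng = np.random.default_rng(2)
TRUE_MEAN = 100.0

def simulate_estimates(n, bias, pop_sd=20.0, reps=5000):
    """Draw repeated survey estimates of the mean: sampling error shrinks
    with sqrt(n), but a nonresponse bias does not."""
    return TRUE_MEAN + bias + rng.normal(0, pop_sd / np.sqrt(n), size=reps)

for n, bias in [(400, 0.0), (10000, 3.0)]:
    est = simulate_estimates(n, bias)
    mse = float(np.mean((est - TRUE_MEAN) ** 2))
    print(f"n={n:5d}  bias^2={bias**2:5.2f}  variance={est.var():5.2f}  MSE={mse:5.2f}")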
Federal Statistics, Multiple Data Sources, and Privacy Protection
Author: National Academies of Sciences, Engineering, and Medicine
Publisher: National Academies Press
ISBN: 0309465370
Category: Social Science
Languages: en
Pages: 195
Book Description
The environment for obtaining information and providing statistical data for policy makers and the public has changed significantly in the past decade, raising questions about the fundamental survey paradigm that underlies federal statistics. New data sources provide opportunities to develop a new paradigm that can improve timeliness, geographic or subpopulation detail, and statistical efficiency. It also has the potential to reduce the costs of producing federal statistics. The panel's first report described federal statistical agencies' current paradigm, which relies heavily on sample surveys for producing national statistics, and the challenges agencies are facing; the legal frameworks and mechanisms for protecting the privacy and confidentiality of statistical data and for providing researchers access to data, and challenges to those frameworks and mechanisms; and statistical agencies' access to alternative sources of data. The panel recommended a new approach for federal statistical programs that would combine diverse data from government and private-sector sources, and the creation of a new entity that would provide the foundational elements needed for this new approach, including the legal authority to access data and protect privacy. This second of the panel's two reports builds on the analysis, conclusions, and recommendations in the first one. It assesses alternative methods for implementing a new approach that would combine diverse data from government and private-sector sources, including describing statistical models for combining data from multiple sources; examining statistical and computer science approaches that foster privacy protections; evaluating frameworks for assessing the quality and utility of alternative data sources; and considering various models for implementing the recommended new entity. Together, the two reports offer ideas and recommendations to help federal statistical agencies examine and evaluate data from alternative sources and then combine them as appropriate to provide the country with more timely, actionable, and useful information for policy makers, businesses, and individuals.
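As one elementary, hedged illustration of combining estimates from multiple sources (the reports discuss far richer models; the function and numbers here are invented): inverse-variance weighting pools a noisy survey estimate with a more precise administrative one.

def pool(estimates):
    """Inverse-variance weighting of (estimate, variance) pairs."""
    weights = [1.0 / var for _, var in estimates]
    total = sum(weights)
    pooled = sum(w * est for w, (est, _) in zip(weights, estimates)) / total
    return pooled, 1.0 / total  # pooled estimate and its variance

sources = [
    (12.4, 0.50),  # sample survey: unbiased but noisy
    (12.9, 0.10),  # administrative source: broader coverage, lower variance
]
estimate, variance = pool(sources)
print(f"pooled estimate {estimate:.2f} (variance {variance:.3f})")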