Bibliometrics and Research Evaluation
Author: Yves Gingras
Publisher: MIT Press
ISBN: 026203512X
Category : Education
Languages : en
Pages : 133
Book Description
Why bibliometrics is useful for understanding the global dynamics of science but generates perverse effects when applied inappropriately in research evaluation and university rankings. The research evaluation market is booming. “Ranking,” “metrics,” “h-index,” and “impact factors” are reigning buzzwords. Government and research administrators want to evaluate everything—teachers, professors, training programs, universities—using quantitative indicators. Among the tools used to measure “research excellence,” bibliometrics—aggregate data on publications and citations—has become dominant. Bibliometrics is hailed as an “objective” measure of research quality, a quantitative measure more useful than “subjective” and intuitive evaluation methods such as peer review that have been used since scientific papers were first published in the seventeenth century. In this book, Yves Gingras offers a spirited argument against an unquestioning reliance on bibliometrics as an indicator of research quality. Gingras shows that bibliometric rankings have no real scientific validity, rarely measuring what they pretend to. Although the study of publication and citation patterns, at the proper scales, can yield insights on the global dynamics of science over time, ill-defined quantitative indicators often generate perverse and unintended effects on the direction of research. Moreover, abuse of bibliometrics occurs when data is manipulated to boost rankings. Gingras looks at the politics of evaluation and argues that using numbers can be a way to control scientists and diminish their autonomy in the evaluation process. Proposing precise criteria for establishing the validity of indicators at a given scale of analysis, Gingras questions why universities are so eager to let invalid indicators influence their research strategy.
Citation Analysis in Research Evaluation
Author: Henk F. Moed
Publisher: Springer Science & Business Media
ISBN: 1402037147
Category : Science
Languages : en
Pages : 334
Book Description
This book is written for members of the scholarly research community and for persons involved in research evaluation and research policy. More specifically, it is directed towards four main groups of readers:
– All scientists and scholars who have been or will be subjected to a quantitative assessment of research performance using citation analysis.
– Research policy makers and managers who wish to become conversant with the basic features of citation analysis and with its potentialities and limitations.
– Members of peer review committees and other evaluators who consider the use of citation analysis as a tool in their assessments.
– Practitioners and students in the fields of quantitative science and technology studies, informetrics, and library and information science.
Citation analysis involves the construction and application of a series of indicators of the ‘impact’, ‘influence’ or ‘quality’ of scholarly work, derived from citation data, i.e. data on references cited in footnotes or bibliographies of scholarly research publications. Such indicators are applied both in the study of scholarly communication and in the assessment of research performance. The term ‘scholarly’ comprises all domains of science and scholarship, including not only those fields normally denoted as science – the natural and life sciences, mathematical and technical sciences – but also the social sciences and humanities.
Beyond Bibliometrics
Author: Blaise Cronin
Publisher: MIT Press
ISBN: 0262026791
Category : Education
Languages : en
Pages : 475
Book Description
A comprehensive, state-of-the-art examination of the changing ways we measure scholarly performance and research impact.
Handbook Bibliometrics
Author: Rafael Ball
Publisher: Walter de Gruyter GmbH & Co KG
ISBN: 3110646617
Category : Language Arts & Disciplines
Languages : en
Pages : 542
Book Description
Bibliometrics and altmetrics are increasingly becoming the focus of interest in the context of research evaluation. The Handbook Bibliometrics provides a comprehensive introduction to quantifying scientific output, covering its historical development, individual indicators, institutions, application perspectives, and databases. It also considers application scenarios, training and qualification in bibliometrics, and their implications.
Handbook on the Theory and Practice of Program Evaluation
Author: Albert N. Link
Publisher: Edward Elgar Publishing
ISBN: 0857932403
Category : Political Science
Languages : en
Pages : 425
Book Description
'The economic crisis has simultaneously placed a strong emphasis on the role of R&D as an engine of economic growth and a demand that limited public resources are demonstrated to have had the maximum possible impact. Rigorous evaluation is the key to meeting these needs. This Handbook brings together highly experienced leaders in the field to provide a comprehensive and well-organised state-of-the-art overview of the range of methods available. It will prove invaluable to experienced practitioners, students in the field and more widely to those who want to increase their understanding of the complex and pervasive ways in which technological advance contributes to economic and social progress.' – Luke Georghiou, University of Manchester, UK 'Theoretical and empirical research on program evaluation has advanced rapidly in scope and quality. A concomitant trend is increasing pressure on policymakers to show that programs are "effective". Now is the time for a comprehensive status report on state-of-the-art research and methods by leading scholars in a variety of disciplines on program evaluation. This outstanding collection of contributions will serve as a valuable reference tool for academics, policymakers, and practitioners for many years to come.' – Donald S. Siegel, University at Albany, SUNY, US There has been a dramatic increase in expenditures on public goods over the past thirty years, particularly in the area of research and development. As governments explore the many opportunities for growth in this area, they – and the general public – are becoming increasingly concerned with the transparency, accountability and performance of public programs. This pioneering Handbook offers a collection of critical essays on the theory and practice of program evaluation, written by some of the most well-known experts in the field. 
As this volume demonstrates, a wide variety of methodologies exist to evaluate particularly the objectives and outcomes of research and development programs. These include surveys, statistical and econometric estimations, patent analyses, bibliometrics, scientometrics, network analyses, case studies, and historical tracings. Contributors divide these and other methods and applications into four categories – economic, non-economic, hybrid and data-driven – in order to discuss the many factors that affect the utility of each technique and how that impacts the technological, economic and societal forecasts of the programs in question. Scholars, practitioners and students with an interest in economics and innovation will all find this Handbook an invaluable resource.
Handbook of Bibliometric Indicators
Author: Roberto Todeschini
Publisher: John Wiley & Sons
ISBN: 3527337040
Category : Language Arts & Disciplines
Languages : en
Pages : 511
Book Description
At last, the first systematic guide to the growing jungle of citation indices and other bibliometric indicators. Written with the aim of providing a complete and unbiased overview of all available statistical measures for scientific productivity, the core of this reference is an alphabetical dictionary of indices and other algorithms used to evaluate the importance and impact of researchers and their institutions. In 150 major articles, the authors describe all indices in strictly mathematical terms without passing judgement on their relative merit. From widely used measures, such as the journal impact factor or the h-index, to highly specialized indices, all indicators currently in use in the sciences and humanities are described, and their application explained. The introductory section and the appendix contain a wealth of valuable supporting information on data sources, tools and techniques for bibliometric and scientometric analysis - for individual researchers as well as their funders and publishers.
Measuring Research
Author: Cassidy R. Sugimoto
Publisher: Oxford University Press
ISBN: 0190640111
Category : Computers
Languages : en
Pages : 169
Book Description
Policy makers, academic administrators, scholars, and members of the public are clamoring for indicators of the value and reach of research. The question of how to quantify the impact and importance of research and scholarly output, from the publication of books and journal articles to the indexing of citations and tweets, is a critical one in predicting innovation, and in deciding what sorts of research are supported and who is hired to carry it out. There is a wide set of data and tools available for measuring research, but they are often used in crude ways, and each has its own limitations and internal logic. Measuring Research: What Everyone Needs to Know® provides, for the first time, an accessible account of the methods used to gather and analyze data on research output and impact. Following a brief history of scholarly communication and its measurement – from traditional peer review to crowdsourced review on the social web – the book looks at the classification of knowledge and academic disciplines, the differences between citations and references, the role of peer review, national research evaluation exercises, the tools used to measure research, the many different types of measurement indicators, and how to measure interdisciplinarity. The book also addresses emerging issues within scholarly communication, including whether measurement promotes a “publish or perish” culture, fraud in research, and “citation cartels.” It also looks at the stakeholders behind these analytical tools, the adverse effects of these quantifications, and the future of research measurement.
Research Assessment in the Humanities
Author: Michael Ochsner
Publisher: Springer
ISBN: 3319290169
Category : Education
Languages : en
Pages : 249
Book Description
This book analyses and discusses recent developments in assessing research quality in the humanities and related fields in the social sciences. Research assessments in the humanities are highly controversial, and the evaluation of humanities research is delicate. While citation-based research performance indicators are widely used in the natural and life sciences, quantitative measures of research performance meet strong opposition in the humanities. This volume combines the presentation of state-of-the-art projects on research assessment in the humanities by humanities scholars themselves with a description of the evaluation of humanities research in practice presented by research funders. Bibliometric issues concerning humanities research complete this exhaustive analysis of humanities research assessment. The selection of authors is well balanced between humanities scholars, research funders, and researchers on higher education. Hence, the edited volume succeeds in painting a comprehensive picture of research evaluation in the humanities. This book is valuable to university and science policy makers, university administrators, research evaluators, and bibliometricians, as well as humanities scholars who seek expert knowledge in research evaluation in the humanities.
Best Practices in Bibliometrics & Bibliometric Services
Author: Juan Ignacio Gorraiz
Publisher: Frontiers Media SA
ISBN: 2889719693
Category : Science
Languages : en
Pages : 145