Information Retrieval Evaluation in a Changing World
Author: Nicola Ferro
Publisher: Springer
ISBN: 3030229483
Category : Computers
Languages : en
Pages : 597
Book Description
This volume celebrates the twentieth anniversary of CLEF (the Cross-Language Evaluation Forum for the first ten years, and the Conference and Labs of the Evaluation Forum since) and traces its evolution over these first two decades. CLEF’s main mission is to promote research, innovation and development of information retrieval (IR) systems by anticipating trends in information management in order to stimulate advances in the field of IR system experimentation and evaluation. The book is divided into six parts. Parts I and II provide background and context, with the first part explaining what is meant by experimental evaluation and the underlying theory, and describing how this has been interpreted in CLEF and in other internationally recognized evaluation initiatives. Part II presents research architectures and infrastructures that have been developed to manage experimental data and to provide evaluation services in CLEF and elsewhere. Parts III, IV and V represent the core of the book, presenting some of the most significant evaluation activities in CLEF, ranging from the early multilingual text processing exercises to the later, more sophisticated experiments on multimodal collections in diverse genres and media. In all cases, the focus is not only on describing “what has been achieved”, but above all on “what has been learnt”. The final part examines the impact CLEF has had on the research world and discusses current and future challenges, both academic and industrial, including the relevance of IR benchmarking in industrial settings. Mainly intended for researchers in academia and industry, it also offers useful insights and tips for practitioners in industry working on the evaluation and performance issues of IR tools, and graduate students specializing in information retrieval.
Introduction to Information Retrieval
Author: Christopher D. Manning
Publisher: Cambridge University Press
ISBN: 1139472100
Category : Computers
Languages : en
Pages :
Book Description
Class-tested and coherent, this textbook teaches classical and web information retrieval, including web search and the related areas of text classification and text clustering from basic concepts. It gives an up-to-date treatment of all aspects of the design and implementation of systems for gathering, indexing, and searching documents; methods for evaluating systems; and an introduction to the use of machine learning methods on text collections. All the important ideas are explained using examples and figures, making it perfect for introductory courses in information retrieval for advanced undergraduates and graduate students in computer science. Based on feedback from extensive classroom experience, the book has been carefully structured in order to make teaching more natural and effective. Slides and additional exercises (with solutions for lecturers) are also available through the book's supporting website to help course instructors prepare their lectures.
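As a rough illustration of the kind of system the book describes (gathering, indexing, and searching documents), here is a minimal sketch of an in-memory inverted index answering a Boolean AND query; the toy corpus and function names are illustrative and not taken from the book.

```python
from collections import defaultdict

# Toy corpus: document IDs mapped to text (purely illustrative).
docs = {
    1: "cross language evaluation forum",
    2: "evaluation of information retrieval systems",
    3: "language models for retrieval",
}

def build_index(docs):
    """Map each term to the sorted list of document IDs containing it."""
    index = defaultdict(set)
    for doc_id, text in docs.items():
        for term in text.lower().split():
            index[term].add(doc_id)
    return {term: sorted(ids) for term, ids in index.items()}

def boolean_and(index, terms):
    """Intersect postings lists to answer a conjunctive (AND) query."""
    postings = [set(index.get(term, [])) for term in terms]
    return sorted(set.intersection(*postings)) if postings else []

index = build_index(docs)
print(boolean_and(index, ["evaluation", "retrieval"]))  # -> [2]
```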
Evaluating Information Retrieval and Access Tasks
Author: Tetsuya Sakai
Publisher: Springer Nature
ISBN: 9811555540
Category : Information retrieval
Languages : en
Pages : 225
Book Description
This open access book summarizes the first two decades of the NII Testbeds and Community for Information access Research (NTCIR). NTCIR is a series of evaluation forums run by a global team of researchers and hosted by the National Institute of Informatics (NII), Japan. The book is unique in that it discusses not just what was done at NTCIR, but also how it was done and the impact it has achieved. For example, in some chapters the reader sees the early seeds of what eventually grew to be the search engines that provide access to content on the World Wide Web, today's smartphones that can tailor what they show to the needs of their owners, and the smart speakers that enrich our lives at home and on the move. We also get glimpses into how new search engines can be built for mathematical formulae, or for the digital record of a lived human life. Key to the success of the NTCIR endeavor was early recognition that information access research is an empirical discipline and that evaluation therefore lay at the core of the enterprise. Evaluation is thus at the heart of each chapter in this book. The chapters show, for example, how the recognition that some documents are more important than others has shaped thinking about evaluation design. The thirty-three contributors to this volume speak for the many hundreds of researchers from dozens of countries around the world who together shaped NTCIR as organizers and participants. This book is suitable for researchers, practitioners, and students: anyone who wants to learn about past and present evaluation efforts in information retrieval, information access, and natural language processing, as well as those who want to participate in an evaluation task or even to design and organize one.
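The observation that "some documents are more important than others" is the idea behind graded-relevance measures such as nDCG, which are closely associated with NTCIR-style evaluation. The sketch below is a minimal illustration, not code from the book; the graded judgements are assumed values, and the ideal ranking is computed from the retrieved list itself for simplicity.

```python
import math

def dcg(gains):
    """Discounted cumulative gain of a ranked list of graded relevance values."""
    return sum(g / math.log2(rank + 1) for rank, g in enumerate(gains, start=1))

def ndcg(ranked_gains):
    """Normalise DCG by the DCG of the ideal (descending) ordering of the same gains."""
    ideal = dcg(sorted(ranked_gains, reverse=True))
    return dcg(ranked_gains) / ideal if ideal > 0 else 0.0

# Hypothetical graded judgements for one topic (2 = highly relevant, 1 = partially, 0 = not).
system_ranking = [2, 0, 1, 1, 0]
print(round(ndcg(system_ranking), 3))  # ~0.936
```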
Experimental IR Meets Multilinguality, Multimodality, and Interaction
Author: Fabio Crestani
Publisher: Springer Nature
ISBN: 3030285774
Category : Computers
Languages : en
Pages : 442
Book Description
This book constitutes the refereed proceedings of the 10th International Conference of the CLEF Association, CLEF 2019, held in Lugano, Switzerland, in September 2019. The conference has a clear focus on experimental information retrieval, with special attention to the challenges of multimodality, multilinguality, and interactive search ranging from unstructured to semi-structured and structured data. The 7 full papers and 8 short papers presented in this volume were carefully reviewed and selected from 30 submissions. This year, many contributions tackle social networks, with the detection of stances or the early identification of signs of depression on Twitter in a cross-lingual context. Further, this volume presents 7 “best of the labs” papers, which were reviewed as full paper submissions with the same review criteria. The labs represented scientific challenges based on new data sets and real-world problems in multimodal and multilingual information access. In addition, 9 benchmarking labs reported results of their year-long activities in overview talks and lab sessions.
Information Retrieval Technology
Author: Fu Lee Wang
Publisher: Springer Nature
ISBN: 3030428354
Category : Computers
Languages : en
Pages : 207
Book Description
This book constitutes the refereed proceedings of the 15th Information Retrieval Technology Conference, AIRS 2019, held in Hong Kong, China, in November 2019. The 14 full papers, presented together with 3 short papers, were carefully reviewed and selected from 27 submissions. The scope of the conference covers applications, systems, technologies, and theoretical aspects of information retrieval in text, audio, image, video, and multimedia data.
Experimental IR Meets Multilinguality, Multimodality, and Interaction
Author: Avi Arampatzis
Publisher: Springer Nature
ISBN: 3030582191
Category : Computers
Languages : en
Pages : 409
Book Description
This book constitutes the refereed proceedings of the 11th International Conference of the CLEF Association, CLEF 2020, held in Thessaloniki, Greece, in September 2020.* The conference has a clear focus on experimental information retrieval, with special attention to the challenges of multimodality, multilinguality, and interactive search ranging from unstructured to semi-structured and structured data. The 5 full papers and 2 short papers presented in this volume were carefully reviewed and selected from 9 submissions. This year, the contributions addressed the following challenges: a large-scale evaluation of translation effects in academic search, advancement of assessor-driven aggregation methods for efficient relevance assessments, and development of a new test dataset. In addition, the volume presents 7 “best of the labs” papers which were reviewed as full paper submissions with the same review criteria. The 12 lab overview papers were accepted out of 15 submissions and represent scientific challenges based on new data sets and real-world problems in multimodal and multilingual information access. * The conference was held virtually due to the COVID-19 pandemic.
Modern Information Retrieval
Author: Yates
Publisher: Pearson Education India
ISBN: 9788131709771
Category :
Languages : en
Pages : 540
Book Description
Experimental IR Meets Multilinguality, Multimodality, and Interaction
Author: Cross-Language Evaluation Forum. Conference
Publisher: Springer Nature
ISBN: 3031717368
Category : Data mining
Languages : en
Pages : 287
Book Description
The two-volume set LNCS 14958 + 14959 constitutes the proceedings of the 15th International Conference of the CLEF Association, CLEF 2024, held in Grenoble, France, during September 9–12, 2024. The proceedings contain 11 conference papers, 6 best of CLEF 2023 Labs papers, and 14 lab overview papers accepted from 45 submissions. In addition, an overview paper on the CLEF activities over the last 25 years is included. The CLEF Conference and Labs of the Evaluation Forum deals with topics in information access from different perspectives, in any modality and language, focusing on experimental information retrieval (IR).
Information Retrieval Evaluation in a Changing World
Author:
Publisher:
ISBN: 9783030229498
Category : Information retrieval
Languages : en
Pages : 597
Book Description
This volume celebrates the twentieth anniversary of CLEF - the Cross-Language Evaluation Forum for the first ten years, and the Conference and Labs of the Evaluation Forum since - and traces its evolution over these first two decades. CLEF's main mission is to promote research, innovation and development of information retrieval (IR) systems by anticipating trends in information management in order to stimulate advances in the field of IR system experimentation and evaluation. The book is divided into six parts. Parts I and II provide background and context, with the first part explaining what is meant by experimental evaluation and the underlying theory, and describing how this has been interpreted in CLEF and in other internationally recognized evaluation initiatives. Part II presents research architectures and infrastructures that have been developed to manage experimental data and to provide evaluation services in CLEF and elsewhere. Parts III, IV and V represent the core of the book, presenting some of the most significant evaluation activities in CLEF, ranging from the early multilingual text processing exercises to the later, more sophisticated experiments on multimodal collections in diverse genres and media. In all cases, the focus is not only on describing "what has been achieved", but above all on "what has been learnt". The final part examines the impact CLEF has had on the research world and discusses current and future challenges, both academic and industrial, including the relevance of IR benchmarking in industrial settings. Mainly intended for researchers in academia and industry, it also offers useful insights and tips for practitioners in industry working on the evaluation and performance issues of IR tools, and graduate students specializing in information retrieval.
Laboratory Experiments in Information Retrieval
Author: Tetsuya Sakai
Publisher: Springer
ISBN: 9811311994
Category : Computers
Languages : en
Pages : 157
Book Description
Covering aspects from principles and limitations of statistical significance tests to topic set size design and power analysis, this book guides readers to statistically well-designed experiments. Although classical statistical significance tests are to some extent useful in information retrieval (IR) evaluation, they can harm research unless they are used appropriately with the right sample sizes and statistical power and unless the test results are reported properly. The first half of the book is mainly targeted at undergraduate students, and the second half is suitable for graduate students and researchers who regularly conduct laboratory experiments in IR, natural language processing, recommendations, and related fields. Chapters 1–5 review parametric significance tests for comparing system means, namely, t-tests and ANOVAs, and show how easily they can be conducted using Microsoft Excel or R. These chapters also discuss a few multiple comparison procedures for researchers who are interested in comparing every system pair, including a randomised version of Tukey's Honestly Significant Difference test. The chapters then deal with known limitations of classical significance testing and provide practical guidelines for reporting research results regarding comparison of means. Chapters 6 and 7 discuss statistical power. Chapter 6 introduces topic set size design to enable test collection builders to determine an appropriate number of topics to create. Readers can easily use the author’s Excel tools for topic set size design based on the paired and two-sample t-tests, one-way ANOVA, and confidence intervals. Chapter 7 describes power-analysis-based methods for determining an appropriate sample size for a new experiment based on a similar experiment done in the past, detailing how to utilize the author’s R tools for power analysis and how to interpret the results. Case studies from IR for both Excel-based topic set size design and R-based power analysis are also provided.
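The book carries out these procedures in Microsoft Excel and R; as a rough Python equivalent (not the author's own tools), the sketch below runs a paired t-test on made-up per-topic scores for two systems and then uses power analysis to estimate how many topics would be needed to detect an assumed effect size.

```python
import numpy as np
from scipy import stats
from statsmodels.stats.power import TTestPower

# Hypothetical per-topic effectiveness scores (e.g. nDCG) for two IR systems
# evaluated on the same ten topics; the values are made up for illustration.
system_a = np.array([0.42, 0.55, 0.38, 0.61, 0.47, 0.52, 0.49, 0.58, 0.44, 0.50])
system_b = np.array([0.40, 0.50, 0.35, 0.57, 0.45, 0.49, 0.46, 0.55, 0.41, 0.47])

# Paired t-test on the per-topic differences (the setting of Chapters 1-5).
t_stat, p_value = stats.ttest_rel(system_a, system_b)
print(f"paired t-test: t = {t_stat:.3f}, p = {p_value:.4f}")

# Power analysis (the setting of Chapters 6-7): topics needed to detect an
# assumed standardised effect size of 0.4 with 80% power at alpha = 0.05.
n_topics = TTestPower().solve_power(effect_size=0.4, power=0.8, alpha=0.05)
print(f"topics needed: {int(np.ceil(n_topics))}")
```

The effect size, power, and significance level here are assumptions an experimenter would have to choose; the book's topic set size design and power analysis tools answer the same question using variance estimates from pilot or past data.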