
Information Retrieval Evaluation in a Changing World : Lessons Learned from 20 Years of CLEF


Document Type : BL
Record Number : 862470
Title & Author : Information Retrieval Evaluation in a Changing World : Lessons Learned from 20 Years of CLEF / Nicola Ferro, Carol Peters, editors.
Publication Statement : Cham : Springer, 2019.
Series Statement : The Information Retrieval Ser. ; v. 41
Physical Description : 1 online resource (597 pages)
ISBN : 3030229475
: 3030229483
: 3030229491
: 3030229505
: 9783030229474
: 9783030229481
: 9783030229498
: 9783030229504
Notes : 1 Task Definition
Contents : Intro; Foreword; Preface; Contents; Acronyms; Editorial Board; Reviewers; Part I Experimental Evaluation and CLEF; From Multilingual to Multimodal: The Evolution of CLEF over Two Decades; 1 Introduction; 1.1 Experimental Evaluation; 1.2 International Evaluation Initiatives; 2 CLEF 1.0: Cross-Language Evaluation Forum (2000-2009); 2.1 Tracks and Tasks in CLEF 1.0; 2.1.1 Multilingual Text Retrieval (2000-2009); 2.1.2 The Domain-Specific Track (2001-2008); 2.1.3 Interactive Cross-Language Retrieval (2002-2009); 2.1.4 The Question-Answering Track (2003-2015)
: 2.1.5 Cross-Language Retrieval in Image Collections (2003-2019); 2.1.6 Spoken Document/Speech Retrieval (2003-2007); 2.1.7 Multilingual Web Retrieval (2005-2008); 2.1.8 Geographical Retrieval (2005-2008); 2.1.9 Multilingual Information Filtering (2008-2009); 2.1.10 Cross-Language Video Retrieval (2008-2009); 2.1.11 Component-Based Evaluation (2009); 3 CLEF 2.0: Conference and Labs of the Evaluation Forum (2010-2019); 3.1 Workshops and Labs in CLEF 2.0; 3.1.1 Web People Search (2010); 3.1.2 Cross-Lingual Expert Search (2010); 3.1.3 Music Information Retrieval (2011)
: 3.1.4 Entity Recognition (2013); 3.1.5 Multimodal Spatial Role Labeling (2017); 3.1.6 Extracting Protests from News (2019); 3.1.7 Question Answering (2003-2015); 3.1.8 Image Retrieval (2003-2019); 3.1.9 Log File Analysis (2009-2011); 3.1.10 Intellectual Property in the Patent Domain (2009-2013); 3.1.11 Digital Text Forensics (2010-2019); 3.1.12 Cultural Heritage in CLEF (2011-2013); 3.1.13 Retrieval on Structured Datasets (2012-2014); 3.1.14 Online Reputation Management (2012-2014); 3.1.15 eHealth (2012-2019); 3.1.16 Biodiversity Identification and Prediction (2014-2019)
: 3.1.17 News Recommendation Evaluation (2014-2017); 3.1.18 Living Labs (2015-2016); 3.1.19 Social Book Search (2015-2016); 3.1.20 Microblog Cultural Contextualization (2016-2018); 3.1.21 Dynamic Search for Complex Tasks (2017-2018); 3.1.22 Early Risk Prediction on the Internet (eRisk, 2017-2019); 3.1.23 Evaluation of Personalised Information Retrieval (2017-2019); 3.1.24 Automatic Identification and Verification of Political Claims (2018-2019); 3.1.25 Reproducibility (2018-2019); 4 IR Tools and Test Collections; 4.1 ELRA Catalogue; 4.2 Some Publicly Accessible CLEF Test Suites
: 5 The CLEF Association; 6 Impact; References; The Evolution of Cranfield; 1 Introduction; 2 Cranfield Pre-TREC; 3 TREC Ad Hoc Collections; 3.1 Size; 3.2 Evaluation Measures; 3.3 Reliability Tests; 3.3.1 Effect of Topic Set Size; 3.3.2 Effect of Evaluation Measure Used; 3.3.3 Significance Testing; 4 Moving On; 4.1 Cross-Language Test Collections; 4.2 Other Tasks; 4.2.1 Filtering Tasks; 4.2.2 Focused Retrieval Tasks; 4.2.3 Web Tasks; 4.3 Size Revisited; 4.3.1 Special Measures; 4.3.2 Constructing Large Collections; 4.4 User-Based Measures; 5 Conclusion; References; How to Run an Evaluation Task
Abstract : This volume celebrates the twentieth anniversary of CLEF - the Cross-Language Evaluation Forum for the first ten years, and the Conference and Labs of the Evaluation Forum since - and traces its evolution over these first two decades. CLEF's main mission is to promote research, innovation and development of information retrieval (IR) systems by anticipating trends in information management in order to stimulate advances in the field of IR system experimentation and evaluation. The book is divided into six parts. Parts I and II provide background and context, with the first part explaining what is meant by experimental evaluation and the underlying theory, and describing how this has been interpreted in CLEF and in other internationally recognized evaluation initiatives. Part II presents research architectures and infrastructures that have been developed to manage experimental data and to provide evaluation services in CLEF and elsewhere. Parts III, IV and V represent the core of the book, presenting some of the most significant evaluation activities in CLEF, ranging from the early multilingual text processing exercises to the later, more sophisticated experiments on multimodal collections in diverse genres and media. In all cases, the focus is not only on describing "what has been achieved", but above all on "what has been learnt". The final part examines the impact CLEF has had on the research world and discusses current and future challenges, both academic and industrial, including the relevance of IR benchmarking in industrial settings. Mainly intended for researchers in academia and industry, it also offers useful insights and tips for practitioners in industry working on the evaluation and performance issues of IR tools, and graduate students specializing in information retrieval.
Subject : Cross-Language Evaluation Forum.
Subject : Information retrieval.
Dewey Classification : 025.04
LC Classification : QA75.5-76.95
: QA76.9.N
: ZA3075
Added Entry : Ferro, Nicola.
: Peters, C. (Carol)