Foundations of IR Experimental Evaluation
Abstract: In this talk, I will introduce the basic notions of how to evaluate information retrieval systems. After discussing why we need evaluation and presenting the whole evaluation spectrum, I will focus on system-oriented evaluation and the Cranfield paradigm. In particular, the lecture will cover experimental collections, namely corpora of documents, topics, and relevance judgments, with some insights into the pooling process. The notions behind evaluation measures will then be introduced, covering both set-based and rank-based measures. Finally, some ideas about statistical significance testing will be presented in order to allow for a sound comparison of system performance.
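As a concrete illustration of the set-based vs. rank-based distinction mentioned above, the following sketch (not part of the talk materials; the toy data and function names are my own) computes precision, a set-based measure, and average precision (AP), a rank-based measure, for a small ranked list judged against a set of relevant documents:

```python
def precision(retrieved, relevant):
    """Set-based: fraction of retrieved documents that are relevant.

    Ignores the order in which documents were retrieved.
    """
    retrieved = list(retrieved)
    if not retrieved:
        return 0.0
    return sum(1 for d in retrieved if d in relevant) / len(retrieved)


def average_precision(ranking, relevant):
    """Rank-based: average of precision@k over the ranks k that hold a
    relevant document, divided by the total number of relevant documents.

    Rewards systems that place relevant documents near the top.
    """
    if not relevant:
        return 0.0
    hits, total = 0, 0.0
    for k, doc in enumerate(ranking, start=1):
        if doc in relevant:
            hits += 1
            total += hits / k  # precision at rank k
    return total / len(relevant)


# Toy example: five retrieved documents, three judged relevant overall.
ranking = ["d3", "d1", "d7", "d2", "d5"]
relevant = {"d1", "d2", "d9"}

print(round(precision(ranking, relevant), 3))          # 0.4
print(round(average_precision(ranking, relevant), 3))  # 0.333
```

Note how the two measures diverge: precision would not change if the relevant documents were moved to the bottom of the ranking, while AP would drop, which is exactly why rank-based measures are preferred for evaluating ranked retrieval output.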
Short Bio: Nicola Ferro (http://www.dei.unipd.it/~ferro/) is Associate Professor of computer science at the University of Padua, Italy. His research interests include information retrieval, its experimental evaluation, multilingual information access, and digital libraries. He is the coordinator of the CLEF evaluation initiative, which involves more than 200 research groups worldwide in large-scale IR evaluation activities. He was the coordinator of the EU Seventh Framework Programme Network of Excellence PROMISE on information retrieval evaluation. He is Associate Editor of ACM TOIS and was General Chair of ECIR 2016.