Information Retrieval System Evaluation
Prepared by ASIL SHAIKH (232266)
Standard Test Collections
Core Components
Document Corpus:
A large, diverse set of documents (e.g., news articles, research papers) used as the retrieval base.
Query Set:
Predefined user information needs, expressed as natural language queries or topic descriptions.
Relevance Judgments (Qrels):
Human-labeled assessments indicating which documents are relevant to each query (see the qrels sketch below).
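To show how corpus, queries, and judgments connect, here is a minimal sketch that parses a TREC-style qrels file (the conventional four-column layout: query ID, iteration, document ID, relevance label) into a lookup structure. The function name and the example file name are illustrative assumptions, not part of any specific collection.

```python
from collections import defaultdict

def load_qrels(path):
    """Parse a TREC-style qrels file into {query_id: {doc_id: relevance}}.

    Assumed line format (common TREC convention):
        query_id  iteration  doc_id  relevance
    e.g. "401 0 FBIS3-10082 1"
    """
    qrels = defaultdict(dict)
    with open(path) as f:
        for line in f:
            if not line.strip():
                continue
            query_id, _iteration, doc_id, relevance = line.split()
            qrels[query_id][doc_id] = int(relevance)
    return dict(qrels)

# Hypothetical usage:
# qrels = load_qrels("qrels.adhoc.txt")
```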
Benchmark Collections
TREC (Text Retrieval Conference):
Widely used benchmark datasets supporting various retrieval tasks (e.g., ad hoc, web, QA).
Cranfield Collection:
Pioneering test collection in retrieval evaluation and the basis for early empirical studies.
CLEF (Conference and Labs of the Evaluation Forum):
Focus on multilingual and cross-lingual retrieval.
FIRE (Forum for Information Retrieval Evaluation):
Collections supporting South Asian languages and regional research.
Purpose and Role
Comparative evaluation: lets different retrieval systems be scored on the same documents, queries, and judgments (see the Precision@k sketch below).
Reproducibility: allows published results to be re-run and verified by other researchers.
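To make comparative evaluation concrete, the sketch below scores two systems on the same queries and qrels with Precision@k. The run data and system names are invented for illustration only.

```python
def precision_at_k(ranked_docs, relevant_docs, k=10):
    """Fraction of the top-k retrieved documents that are relevant."""
    top_k = ranked_docs[:k]
    hits = sum(1 for doc_id in top_k if doc_id in relevant_docs)
    return hits / k

def mean_precision_at_k(run, qrels, k=10):
    """Average Precision@k over all queries shared by the run and the qrels."""
    scores = [
        precision_at_k(run[qid], {d for d, rel in qrels[qid].items() if rel > 0}, k)
        for qid in run if qid in qrels
    ]
    return sum(scores) / len(scores)

# Hypothetical runs: each maps query_id -> ranked list of doc_ids.
qrels = {"q1": {"d1": 1, "d2": 0, "d3": 1}}
system_a = {"q1": ["d1", "d3", "d2"]}
system_b = {"q1": ["d2", "d4", "d1"]}

for name, run in [("System A", system_a), ("System B", system_b)]:
    print(name, round(mean_precision_at_k(run, qrels, k=3), 3))
```

Because both systems are judged against the same qrels, the scores are directly comparable, which is exactly what a shared test collection buys.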
Concept of Relevance
Dimensions of Relevance
Topical Relevance
Cognitive Relevance
Situational Relevance
User (Personal) Relevance
Assessment Techniques
Explicit Judgments
Implicit Feedback
Crowdsourcing (see the majority-vote sketch below)
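A common way to turn noisy crowdsourced labels into usable judgments is majority voting per query-document pair. The sketch below is a minimal assumed aggregation, not a description of any particular platform's pipeline.

```python
from collections import Counter

def majority_vote(labels):
    """Aggregate one (query, document) pair's crowd labels by majority vote.

    `labels` is a list of binary relevance votes from different workers,
    e.g. [1, 1, 0]. Ties fall back to 'not relevant' (0) here; a real
    pipeline might instead collect an extra judgment to break the tie.
    """
    counts = Counter(labels)
    top_label, top_count = counts.most_common(1)[0]
    if list(counts.values()).count(top_count) > 1:  # tie between labels
        return 0
    return top_label

# Hypothetical worker votes for three query-document pairs.
votes = {
    ("q1", "d1"): [1, 1, 0],
    ("q1", "d2"): [0, 0, 1],
    ("q2", "d7"): [1, 0],      # tie -> treated as not relevant
}
aggregated = {pair: majority_vote(v) for pair, v in votes.items()}
print(aggregated)
```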
Justification and Challenges
Subjectivity
Temporal Sensitivity
Consistency and Agreement (see the kappa sketch below)
Improved by clear guidelines and precise task definitions
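Consistency between assessors is typically quantified with an agreement statistic such as Cohen's kappa. The sketch below computes it for two assessors' binary judgments; the judgment lists are invented for illustration.

```python
def cohens_kappa(labels_a, labels_b):
    """Cohen's kappa for two assessors' binary relevance judgments.

    kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed agreement
    and p_e is the agreement expected by chance from each assessor's
    label proportions.
    """
    assert len(labels_a) == len(labels_b)
    n = len(labels_a)
    p_o = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    p_e = sum(
        (labels_a.count(c) / n) * (labels_b.count(c) / n)
        for c in set(labels_a) | set(labels_b)
    )
    return (p_o - p_e) / (1 - p_e) if p_e != 1 else 1.0

# Hypothetical judgments from two assessors over the same ten documents.
assessor_1 = [1, 1, 0, 1, 0, 0, 1, 0, 1, 1]
assessor_2 = [1, 0, 0, 1, 0, 1, 1, 0, 1, 1]
print(round(cohens_kappa(assessor_1, assessor_2), 3))
```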