"Language modeling for information retrieval" edited by W. Bruce Croft and John Lafferty.

Document Type: BL
Record Number: 730973
Doc. No: b550757
Main Entry: edited by W. Bruce Croft and John Lafferty.
Title & Author: Language modeling for information retrieval / edited by W. Bruce Croft and John Lafferty.
Publication Statement: Dordrecht ; London : Springer, 2011
Series Statement: The Kluwer international series on information retrieval, v. 13
Page. NO: 1 volume ; 24 cm.
ISBN: 9048162637
ISBN: 9789048162635
Notes: Originally published: 2003. Includes index.
Contents:
Preface. Contributing Authors.
1: Probabilistic Relevance Models Based on Document and Query Generation; J. Lafferty, ChengXiang Zhai. 1. Introduction. 2. Generative Relevance Models. 3. Discussion. 4. Historical Notes.
2: Relevance Models in Information Retrieval; V. Lavrenko, W.B. Croft. 1. Introduction. 2. Relevance Models. 3. Estimating a Relevance Model. 4. Experimental Results. 5. Conclusions.
3: Language Modeling and Relevance; K. Sparck Jones, S. Robertson, D. Hiemstra, H. Zaragoza. 1. Introduction. 2. Relevance in LM. 3. A Possible LM Approach: Parsimonious Models. 4. Concluding Comment.
4: Contributions of Language Modeling to the Theory and Practice of IR; W.R. Greiff, W.T. Morgan. 1. Introduction. 2. What is Language Modeling in IR. 3. Simulation Studies of Variance Reduction. 4. Continued Exploration.
5: Language Models for Topic Tracking; W. Kraaij, M. Spitters. 1. Introduction. 2. Language Models for IR Tasks. 3. Experiments. 4. Discussion. 5. Conclusions.
6: A Probabilistic Approach to Term Translation for Cross-Lingual Retrieval; Jinxi Xu, R. Weischedel. 1. Introduction. 2. A Probabilistic Model for CLIR. 3. Estimating Term Translation Probabilities. 4. Related Work. 5. Test Collections. 6. Comparing CLIR with Monolingual Baseline. 7. Comparing Probabilistic and Structural Translations. 8. Comparing Probabilistic Translation and MT. 9. Measuring CLIR Performance as a Function of Resource Sizes. 10. Reducing the Translation Cost of Creating a Parallel Corpus. 11. Conclusions.
7: Using Compression-Based Language Models for Text Categorization; W.J. Teahan, D.J. Harper. 1. Background. 2. Compression Models. 3. Bayes Classifiers. 4. PPM-Based Language Models. 5. Experimental Results. 6. Discussion.
8: Applications of Score Distributions in Information Retrieval; R. Manmatha. 1. Introduction. 2. Related Work. 3. Modeling Score Distributions of Search Engines. 4. Combining Search Engines Indexing the Same Database. 5. Applications to Filtering and Topic Detection and Tracking. 6. Combining Engines Indexing Different Databases or Languages. 7. Conclusion.
9: An Unbiased Generative Model for Setting Dissemination Thresholds; Yi Zhang, J. Callan. 1. Introduction. 2. Generative Models of Dissemination Thresholds. 3. The Non-Random Sampling Problem & Solution. 4. Experimental Methodology. 5. Experimental Results. 6. Conclusion.
10: Language Modeling Experiments in Non-Extractive Summarization; V.O. Mittal, M.J. Witbrock. 1. Introduction. 2. Related Work. 3. Statistical Models of Gisting. 4. Training the Models. 5. Output and Evaluation. 6.
Subject: Information retrieval.
Subject: Information storage and retrieval systems.
LC Classification: Z699.3.E358 2011
Added Entry: John Lafferty
Added Entry: W. Bruce Croft