Abstract: This paper describes CMIC's submissions to the TREC'09 Relevance Feedback track. In the phase 1 runs we submitted, we experimented with two different techniques for producing the 5 documents to be judged by the user in the initial step, namely using knowledge bases and using clustering. Both attempt to topically diversify these documents as much as possible in an effort to maximize the probability that they contain at least one relevant document. The basic premise is that if a query has n diverse interpretations, then diversifying the results by picking the top n most likely interpretations would satisfy a user interested in any one of these interpretations. The phase 2 runs, which involved the use of the judgments attained from phase 1, attempted to use both positive and negative judgments in weighing the terms used for subsequent feedback.