This page contains instructions for running BM25 baselines on the MS MARCO document ranking task. Note that there is a separate MS MARCO passage ranking task.
We're going to use the repository's root directory as the working directory. First, we need to download and extract the MS MARCO document dataset:
```bash
mkdir collections/msmarco-doc

wget https://msmarco.blob.core.windows.net/msmarcoranking/msmarco-docs.trec.gz -P collections/msmarco-doc

# Alternative mirror:
# wget https://www.dropbox.com/s/w6caao3sfx9nluo/msmarco-docs.trec.gz -P collections/msmarco-doc
```
To confirm, `msmarco-docs.trec.gz` should have an MD5 checksum of `d4863e4f342982b51b9a8fc668b2d0c0`.
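One way to verify the checksum (assuming `md5sum` from GNU coreutils is available; on macOS, `md5 -q` is the rough equivalent):

```bash
# Should print d4863e4f342982b51b9a8fc668b2d0c0
md5sum collections/msmarco-doc/msmarco-docs.trec.gz
```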
There's no need to uncompress the file, as Anserini can directly index gzipped files. Build the index with the following command:
```bash
nohup sh target/appassembler/bin/IndexCollection -collection CleanTrecCollection \
 -generator DefaultLuceneDocumentGenerator -threads 1 -input collections/msmarco-doc \
 -index indexes/msmarco-doc/lucene-index.msmarco-doc.pos+docvectors+rawdocs \
 -storePositions -storeDocvectors -storeRaw >& logs/log.msmarco-doc.pos+docvectors+rawdocs &
```
On a modern desktop with an SSD, indexing takes around 40 minutes. There should be a total of 3,213,835 documents indexed.
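As a rough sanity check (the exact wording of the log message may differ across Anserini versions), the final document count can be pulled out of the indexing log written above:

```bash
# Look for the final "documents indexed" count in the indexing log.
grep -i "indexed" logs/log.msmarco-doc.pos+docvectors+rawdocs | tail -5
```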
After indexing finishes, we can do a retrieval run. The dev queries are already stored in our repo:
```bash
target/appassembler/bin/SearchCollection -topicreader TsvInt \
 -index indexes/msmarco-doc/lucene-index.msmarco-doc.pos+docvectors+rawdocs \
 -topics src/main/resources/topics-and-qrels/topics.msmarco-doc.dev.txt \
 -output runs/run.msmarco-doc.dev.bm25.txt -bm25
```
On a modern desktop with an SSD, the run takes around 12 minutes.
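The output is a standard TREC run file (query id, `Q0`, document id, rank, score, run tag). As an informal sanity check, a quick look at the first few lines:

```bash
head -3 runs/run.msmarco-doc.dev.bm25.txt
```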
After the run completes, we can evaluate with `trec_eval`:
```
$ tools/eval/trec_eval.9.0.4/trec_eval -c -mmap -mrecall.1000 src/main/resources/topics-and-qrels/qrels.msmarco-doc.dev.txt runs/run.msmarco-doc.dev.bm25.txt
map                     all     0.2310
recall_1000             all     0.8856
```
Let's compare to the baselines provided by Microsoft. First, download:
```bash
wget https://msmarco.blob.core.windows.net/msmarcoranking/msmarco-docdev-top100.gz -P runs
gunzip runs/msmarco-docdev-top100.gz
```
Then, run `trec_eval` to compare. Note that to be fair, we restrict evaluation to the top 100 hits per topic (which is what Microsoft provides):
```
$ tools/eval/trec_eval.9.0.4/trec_eval -c -mmap -M 100 src/main/resources/topics-and-qrels/qrels.msmarco-doc.dev.txt runs/msmarco-docdev-top100
map                     all     0.2219

$ tools/eval/trec_eval.9.0.4/trec_eval -c -mmap -M 100 src/main/resources/topics-and-qrels/qrels.msmarco-doc.dev.txt runs/run.msmarco-doc.dev.bm25.txt
map                     all     0.2303
```
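As an additional informal check (not part of the official evaluation), we can confirm that both run files cover the same set of dev queries:

```bash
# Count distinct query ids in each run file; the two numbers should match.
awk '{print $1}' runs/run.msmarco-doc.dev.bm25.txt | sort -u | wc -l
awk '{print $1}' runs/msmarco-docdev-top100 | sort -u | wc -l
```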
We see that "out of the box" Anserini is already better!
It is well known that BM25 parameter tuning is important.
The above instructions use the Anserini (system-wide) defaults of `k1=0.9`, `b=0.4`.
Let's try to do better!
We tuned BM25 using the queries found here: these are five different sets of 10k queries sampled from the training queries (using the `shuf` command).
Tuning was performed on each individual set (grid search, in tenth increments), and then we averaged the parameter values across all five sets (this has the effect of regularization).
Here, we optimized for average precision (AP).
The tuned parameters using this approach are `k1=3.44`, `b=0.87`.
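For concreteness, below is a minimal sketch of one grid-search pass over a single training sample. The file names `queries.train.sample1.txt` and `qrels.train.sample1.txt` are placeholders for one of the five 10k samples and its corresponding qrels (they are not files shipped in the repo), and the parameter ranges are illustrative, not the exact tuning script:

```bash
# Hedged sketch: sweep k1 and b in tenth increments on one training sample,
# score each setting by AP, then (outside this loop) average the best
# (k1, b) pairs across the five samples.
for k1 in $(seq 0.1 0.1 4.0); do
  for b in $(seq 0.1 0.1 1.0); do
    target/appassembler/bin/SearchCollection -topicreader TsvInt \
     -index indexes/msmarco-doc/lucene-index.msmarco-doc.pos+docvectors+rawdocs \
     -topics queries.train.sample1.txt \
     -output runs/run.train.sample1.k1_${k1}.b_${b}.txt -bm25 -bm25.k1 ${k1} -bm25.b ${b}
    echo -n "k1=${k1} b=${b} AP="
    tools/eval/trec_eval.9.0.4/trec_eval -m map qrels.train.sample1.txt \
      runs/run.train.sample1.k1_${k1}.b_${b}.txt | awk '{print $3}'
  done
done
```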
To perform a run with these parameters, issue the following command:
```bash
target/appassembler/bin/SearchCollection -topicreader TsvString \
 -index indexes/msmarco-doc/lucene-index.msmarco-doc.pos+docvectors+rawdocs \
 -topics src/main/resources/topics-and-qrels/topics.msmarco-doc.dev.txt \
 -output runs/run.msmarco-doc.dev.bm25.tuned.txt -bm25 -bm25.k1 3.44 -bm25.b 0.87
```
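Evaluating the tuned run uses the same `trec_eval` invocation as before, just pointed at the new run file (this reproduces the "Tuned" row in the table below):

```bash
tools/eval/trec_eval.9.0.4/trec_eval -c -mmap -mrecall.1000 \
  src/main/resources/topics-and-qrels/qrels.msmarco-doc.dev.txt \
  runs/run.msmarco-doc.dev.bm25.tuned.txt
```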
Here's the comparison between the Anserini default and tuned parameters:
| Setting | AP | Recall@1000 |
|:--------|:------:|:-----------:|
| Default (`k1=0.9`, `b=0.4`) | 0.2310 | 0.8856 |
| Tuned (`k1=3.44`, `b=0.87`) | 0.2788 | 0.9326 |
As expected, BM25 tuning makes a big difference!
- Results replicated by @edwinzhng on 2020-01-14 (commit `3964169`)
- Results replicated by @nikhilro on 2020-01-21 (commit `631589e`)
- Results replicated by @yuki617 on 2020-03-29 (commit `074723c`)
- Results replicated by @HangCui0510 on 2020-04-23 (commit `0ae567d`)
- Results replicated by @x65han on 2020-04-25 (commit `f5496b9`)
- Results replicated by @y276lin on 2020-04-26 (commit `8f48f8e`)
- Results replicated by @stephaniewhoo on 2020-04-26 (commit `8f48f8e`)
- Results replicated by @YimingDou on 2020-05-14 (commit `3b0a642`)
- Results replicated by @richard3983 on 2020-05-14 (commit `a65646f`)
- Results replicated by @MXueguang on 2020-05-20 (commit `3b2751e`)
- Results replicated by @shaneding on 2020-05-23 (commit `b6e0367`)
- Results replicated by @kelvin-jiang on 2020-05-24 (commit `b6e0367`)
- Results replicated by @adamyy on 2020-05-28 (commit `a1ecfa4`)
- Results replicated by @TianchengY on 2020-05-28 (commit `2947a16`)
- Results replicated by @stariqmi on 2020-05-28 (commit `4914305`)
- Results replicated by @justinborromeo on 2020-06-11 (commit `7954eab`)
- Results replicated by @yxzhu16 on 2020-07-03 (commit `68ace26`)
- Results replicated by @LizzyZhang-tutu on 2020-07-13 (commit `8c98d5b`)
- Results replicated by @estella98 on 2020-08-05 (commit `99092a8`)
- Results replicated by @tangsaidi on 2020-08-19 (commit `aba846`)
- Results replicated by @qguo96 on 2020-09-07 (commit `e16b3c1`)
- Results replicated by @yuxuan-ji on 2020-09-08 (commit `0f9a8ec`)
- Results replicated by @wiltan-uw on 2020-09-09 (commit `93d913f`)
- Results replicated by @JeffreyCA on 2020-09-13 (commit `bc2628b`)
- Results replicated by @jhuang265 on 2020-10-15 (commit `66711b9`)
- Results replicated by @rayyang29 on 2020-10-27 (commit `ad8cc5a`)
- Results replicated by @Dahlia-Chehata on 2020-11-12 (commit `22c0ad3`)