diff --git a/README.md b/README.md
index 3c7696c0eb..25101fc902 100644
--- a/README.md
+++ b/README.md
@@ -83,7 +83,6 @@ See individual pages for details!
| **Learned Sparse (other)** | | | |
| DeepImpact | [✓](docs/regressions/regressions-msmarco-passage-deepimpact.md) | | |
| SPLADEv2 | [✓](docs/regressions/regressions-msmarco-passage-distill-splade-max.md) | | |
-| SPLADE-distill CoCodenser-medium | [✓](docs/regressions/regressions-msmarco-passage-splade-distil-cocodenser-medium.md) | [✓](docs/regressions/regressions-dl19-passage-splade-distil-cocodenser-medium.md) | [✓](docs/regressions/regressions-dl20-passage-splade-distil-cocodenser-medium.md) |
| SPLADE++ CoCondenser-EnsembleDistil | [✓](docs/regressions/regressions-msmarco-passage-splade-pp-ed.md) | [✓](docs/regressions/regressions-dl19-passage-splade-pp-ed.md) | [✓](docs/regressions/regressions-dl20-passage-splade-pp-ed.md) |
| SPLADE++ CoCondenser-EnsembleDistil (ONNX) | [✓](docs/regressions/regressions-msmarco-passage-splade-pp-ed-onnx.md) | [✓](docs/regressions/regressions-dl19-passage-splade-pp-ed-onnx.md) | [✓](docs/regressions/regressions-dl20-passage-splade-pp-ed-onnx.md) |
| SPLADE++ CoCondenser-SelfDistil | [✓](docs/regressions/regressions-msmarco-passage-splade-pp-sd.md) | [✓](docs/regressions/regressions-dl19-passage-splade-pp-sd.md) | [✓](docs/regressions/regressions-dl20-passage-splade-pp-sd.md) |
@@ -111,7 +110,6 @@ See individual pages for details!
| [uniCOIL (TILDE)](https://rgw.cs.uwaterloo.ca/JIMMYLIN-bucket0/data/msmarco-passage-unicoil-tilde-expansion.tar) | 3.9 GB | `12a9c289d94e32fd63a7d39c9677d75c` |
| [DeepImpact](https://rgw.cs.uwaterloo.ca/JIMMYLIN-bucket0/data/msmarco-passage-deepimpact.tar) | 3.6 GB | `73843885b503af3c8b3ee62e5f5a9900` |
| [SPLADEv2](https://rgw.cs.uwaterloo.ca/JIMMYLIN-bucket0/data/msmarco-passage-distill-splade-max.tar) | 9.9 GB | `b5d126f5d9a8e1b3ef3f5cb0ba651725` |
-| [SPLADE-distill CoCodenser-medium](https://rgw.cs.uwaterloo.ca/JIMMYLIN-bucket0/data/msmarco-passage-splade_distil_cocodenser_medium.tar) | 4.9 GB | `f77239a26d08856e6491a34062893b0c` |
| [SPLADE++ CoCondenser-EnsembleDistil](https://rgw.cs.uwaterloo.ca/pyserini/data/msmarco-passage-splade-pp-ed.tar) | 4.2 GB | `e489133bdc54ee1e7c62a32aa582bc77` |
| [SPLADE++ CoCondenser-SelfDistil](https://rgw.cs.uwaterloo.ca/pyserini/data/msmarco-passage-splade-pp-sd.tar) | 4.8 GB | `cb7e264222f2bf2221dd2c9d28190be1` |
| [cosDPR-distil](https://rgw.cs.uwaterloo.ca/pyserini/data/msmarco-passage-cos-dpr-distil.tar) | 57 GB | `e20ffbc8b5e7f760af31298aefeaebbd` |
@@ -212,40 +210,39 @@ Key:
+ F2 = "flat" baseline (pre-tokenized with `bert-base-uncased` tokenizer)
+ MF = "multifield" baseline (Lucene analyzer)
+ U1 = uniCOIL (noexp)
-+ S1 = SPLADE-distill CoCodenser-medium
-+ S2 = SPLADE++ CoCondenser-EnsembleDistil
-
-| Corpus | F1 | F2 | MF | U1 | S1 | S2 |
-|-------------------------|:-----------------------------------------------------------------------------:|:--------------------------------------------------------------------------------:|:-----------------------------------------------------------------------------------:|:--------------------------------------------------------------------------------------:|:--------------------------------------------------------------------------------------------------------:|:-------------------------------------------------------------------------------------:|
-| TREC-COVID | [+](docs/regressions/regressions-beir-v1.0.0-trec-covid-flat.md) | [+](docs/regressions/regressions-beir-v1.0.0-trec-covid-flat-wp.md) | [+](docs/regressions/regressions-beir-v1.0.0-trec-covid-multifield.md) | [+](docs/regressions/regressions-beir-v1.0.0-trec-covid-unicoil-noexp.md) | [+](docs/regressions/regressions-beir-v1.0.0-trec-covid-splade-distil-cocodenser-medium.md) | [+](docs/regressions/regressions-beir-v1.0.0-trec-covid-splade-pp-ed.md) |
-| BioASQ | [+](docs/regressions/regressions-beir-v1.0.0-bioasq-flat.md) | [+](docs/regressions/regressions-beir-v1.0.0-bioasq-flat-wp.md) | [+](docs/regressions/regressions-beir-v1.0.0-bioasq-multifield.md) | [+](docs/regressions/regressions-beir-v1.0.0-bioasq-unicoil-noexp.md) | [+](docs/regressions/regressions-beir-v1.0.0-bioasq-splade-distil-cocodenser-medium.md) | [+](docs/regressions/regressions-beir-v1.0.0-bioasq-splade-pp-ed.md) |
-| NFCorpus | [+](docs/regressions/regressions-beir-v1.0.0-nfcorpus-flat.md) | [+](docs/regressions/regressions-beir-v1.0.0-nfcorpus-flat-wp.md) | [+](docs/regressions/regressions-beir-v1.0.0-nfcorpus-multifield.md) | [+](docs/regressions/regressions-beir-v1.0.0-nfcorpus-unicoil-noexp.md) | [+](docs/regressions/regressions-beir-v1.0.0-nfcorpus-splade-distil-cocodenser-medium.md) | [+](docs/regressions/regressions-beir-v1.0.0-nfcorpus-splade-pp-ed.md) |
-| NQ | [+](docs/regressions/regressions-beir-v1.0.0-nq-flat.md) | [+](docs/regressions/regressions-beir-v1.0.0-nq-flat-wp.md) | [+](docs/regressions/regressions-beir-v1.0.0-nq-multifield.md) | [+](docs/regressions/regressions-beir-v1.0.0-nq-unicoil-noexp.md) | [+](docs/regressions/regressions-beir-v1.0.0-nq-splade-distil-cocodenser-medium.md) | [+](docs/regressions/regressions-beir-v1.0.0-nq-splade-pp-ed.md) |
-| HotpotQA | [+](docs/regressions/regressions-beir-v1.0.0-hotpotqa-flat.md) | [+](docs/regressions/regressions-beir-v1.0.0-hotpotqa-flat-wp.md) | [+](docs/regressions/regressions-beir-v1.0.0-hotpotqa-multifield.md) | [+](docs/regressions/regressions-beir-v1.0.0-hotpotqa-unicoil-noexp.md) | [+](docs/regressions/regressions-beir-v1.0.0-hotpotqa-splade-distil-cocodenser-medium.md) | [+](docs/regressions/regressions-beir-v1.0.0-hotpotqa-splade-pp-ed.md) |
-| FiQA-2018 | [+](docs/regressions/regressions-beir-v1.0.0-fiqa-flat.md) | [+](docs/regressions/regressions-beir-v1.0.0-fiqa-flat-wp.md) | [+](docs/regressions/regressions-beir-v1.0.0-fiqa-multifield.md) | [+](docs/regressions/regressions-beir-v1.0.0-fiqa-unicoil-noexp.md) | [+](docs/regressions/regressions-beir-v1.0.0-fiqa-splade-distil-cocodenser-medium.md) | [+](docs/regressions/regressions-beir-v1.0.0-fiqa-splade-pp-ed.md) |
-| Signal-1M(RT) | [+](docs/regressions/regressions-beir-v1.0.0-signal1m-flat.md) | [+](docs/regressions/regressions-beir-v1.0.0-signal1m-flat-wp.md) | [+](docs/regressions/regressions-beir-v1.0.0-signal1m-multifield.md) | [+](docs/regressions/regressions-beir-v1.0.0-signal1m-unicoil-noexp.md) | [+](docs/regressions/regressions-beir-v1.0.0-signal1m-splade-distil-cocodenser-medium.md) | [+](docs/regressions/regressions-beir-v1.0.0-signal1m-splade-pp-ed.md) |
-| TREC-NEWS | [+](docs/regressions/regressions-beir-v1.0.0-trec-news-flat.md) | [+](docs/regressions/regressions-beir-v1.0.0-trec-news-flat-wp.md) | [+](docs/regressions/regressions-beir-v1.0.0-trec-news-multifield.md) | [+](docs/regressions/regressions-beir-v1.0.0-trec-news-unicoil-noexp.md) | [+](docs/regressions/regressions-beir-v1.0.0-trec-news-splade-distil-cocodenser-medium.md) | [+](docs/regressions/regressions-beir-v1.0.0-trec-news-splade-pp-ed.md) |
-| Robust04 | [+](docs/regressions/regressions-beir-v1.0.0-robust04-flat.md) | [+](docs/regressions/regressions-beir-v1.0.0-robust04-flat-wp.md) | [+](docs/regressions/regressions-beir-v1.0.0-robust04-multifield.md) | [+](docs/regressions/regressions-beir-v1.0.0-robust04-unicoil-noexp.md) | [+](docs/regressions/regressions-beir-v1.0.0-robust04-splade-distil-cocodenser-medium.md) | [+](docs/regressions/regressions-beir-v1.0.0-robust04-splade-pp-ed.md) |
-| ArguAna | [+](docs/regressions/regressions-beir-v1.0.0-arguana-flat.md) | [+](docs/regressions/regressions-beir-v1.0.0-arguana-flat-wp.md) | [+](docs/regressions/regressions-beir-v1.0.0-arguana-multifield.md) | [+](docs/regressions/regressions-beir-v1.0.0-arguana-unicoil-noexp.md) | [+](docs/regressions/regressions-beir-v1.0.0-arguana-splade-distil-cocodenser-medium.md) | [+](docs/regressions/regressions-beir-v1.0.0-arguana-splade-pp-ed.md) |
-| Touche2020 | [+](docs/regressions/regressions-beir-v1.0.0-webis-touche2020-flat.md) | [+](docs/regressions/regressions-beir-v1.0.0-webis-touche2020-flat-wp.md) | [+](docs/regressions/regressions-beir-v1.0.0-webis-touche2020-multifield.md) | [+](docs/regressions/regressions-beir-v1.0.0-webis-touche2020-unicoil-noexp.md) | [+](docs/regressions/regressions-beir-v1.0.0-webis-touche2020-splade-distil-cocodenser-medium.md) | [+](docs/regressions/regressions-beir-v1.0.0-webis-touche2020-splade-pp-ed.md) |
-| CQADupStack-Android | [+](docs/regressions/regressions-beir-v1.0.0-cqadupstack-android-flat.md) | [+](docs/regressions/regressions-beir-v1.0.0-cqadupstack-android-flat-wp.md) | [+](docs/regressions/regressions-beir-v1.0.0-cqadupstack-android-multifield.md) | [+](docs/regressions/regressions-beir-v1.0.0-cqadupstack-android-unicoil-noexp.md) | [+](docs/regressions/regressions-beir-v1.0.0-cqadupstack-android-splade-distil-cocodenser-medium.md) | [+](docs/regressions/regressions-beir-v1.0.0-cqadupstack-android-splade-pp-ed.md) |
-| CQADupStack-English | [+](docs/regressions/regressions-beir-v1.0.0-cqadupstack-english-flat.md) | [+](docs/regressions/regressions-beir-v1.0.0-cqadupstack-english-flat-wp.md) | [+](docs/regressions/regressions-beir-v1.0.0-cqadupstack-english-multifield.md) | [+](docs/regressions/regressions-beir-v1.0.0-cqadupstack-english-unicoil-noexp.md) | [+](docs/regressions/regressions-beir-v1.0.0-cqadupstack-english-splade-distil-cocodenser-medium.md) | [+](docs/regressions/regressions-beir-v1.0.0-cqadupstack-english-splade-pp-ed.md) |
-| CQADupStack-Gaming | [+](docs/regressions/regressions-beir-v1.0.0-cqadupstack-gaming-flat.md) | [+](docs/regressions/regressions-beir-v1.0.0-cqadupstack-gaming-flat-wp.md) | [+](docs/regressions/regressions-beir-v1.0.0-cqadupstack-gaming-multifield.md) | [+](docs/regressions/regressions-beir-v1.0.0-cqadupstack-gaming-unicoil-noexp.md) | [+](docs/regressions/regressions-beir-v1.0.0-cqadupstack-gaming-splade-distil-cocodenser-medium.md) | [+](docs/regressions/regressions-beir-v1.0.0-cqadupstack-gaming-splade-pp-ed.md) |
-| CQADupStack-Gis | [+](docs/regressions/regressions-beir-v1.0.0-cqadupstack-gis-flat.md) | [+](docs/regressions/regressions-beir-v1.0.0-cqadupstack-gis-flat-wp.md) | [+](docs/regressions/regressions-beir-v1.0.0-cqadupstack-gis-multifield.md) | [+](docs/regressions/regressions-beir-v1.0.0-cqadupstack-gis-unicoil-noexp.md) | [+](docs/regressions/regressions-beir-v1.0.0-cqadupstack-gis-splade-distil-cocodenser-medium.md) | [+](docs/regressions/regressions-beir-v1.0.0-cqadupstack-gis-splade-pp-ed.md) |
-| CQADupStack-Mathematica | [+](docs/regressions/regressions-beir-v1.0.0-cqadupstack-mathematica-flat.md) | [+](docs/regressions/regressions-beir-v1.0.0-cqadupstack-mathematica-flat-wp.md) | [+](docs/regressions/regressions-beir-v1.0.0-cqadupstack-mathematica-multifield.md) | [+](docs/regressions/regressions-beir-v1.0.0-cqadupstack-mathematica-unicoil-noexp.md) | [+](docs/regressions/regressions-beir-v1.0.0-cqadupstack-mathematica-splade-distil-cocodenser-medium.md) | [+](docs/regressions/regressions-beir-v1.0.0-cqadupstack-mathematica-splade-pp-ed.md) |
-| CQADupStack-Physics | [+](docs/regressions/regressions-beir-v1.0.0-cqadupstack-physics-flat.md) | [+](docs/regressions/regressions-beir-v1.0.0-cqadupstack-physics-flat-wp.md) | [+](docs/regressions/regressions-beir-v1.0.0-cqadupstack-physics-multifield.md) | [+](docs/regressions/regressions-beir-v1.0.0-cqadupstack-physics-unicoil-noexp.md) | [+](docs/regressions/regressions-beir-v1.0.0-cqadupstack-physics-splade-distil-cocodenser-medium.md) | [+](docs/regressions/regressions-beir-v1.0.0-cqadupstack-physics-splade-pp-ed.md) |
-| CQADupStack-Programmers | [+](docs/regressions/regressions-beir-v1.0.0-cqadupstack-programmers-flat.md) | [+](docs/regressions/regressions-beir-v1.0.0-cqadupstack-programmers-flat-wp.md) | [+](docs/regressions/regressions-beir-v1.0.0-cqadupstack-programmers-multifield.md) | [+](docs/regressions/regressions-beir-v1.0.0-cqadupstack-programmers-unicoil-noexp.md) | [+](docs/regressions/regressions-beir-v1.0.0-cqadupstack-programmers-splade-distil-cocodenser-medium.md) | [+](docs/regressions/regressions-beir-v1.0.0-cqadupstack-programmers-splade-pp-ed.md) |
-| CQADupStack-Stats | [+](docs/regressions/regressions-beir-v1.0.0-cqadupstack-stats-flat.md) | [+](docs/regressions/regressions-beir-v1.0.0-cqadupstack-stats-flat-wp.md) | [+](docs/regressions/regressions-beir-v1.0.0-cqadupstack-stats-multifield.md) | [+](docs/regressions/regressions-beir-v1.0.0-cqadupstack-stats-unicoil-noexp.md) | [+](docs/regressions/regressions-beir-v1.0.0-cqadupstack-stats-splade-distil-cocodenser-medium.md) | [+](docs/regressions/regressions-beir-v1.0.0-cqadupstack-stats-splade-pp-ed.md) |
-| CQADupStack-Tex | [+](docs/regressions/regressions-beir-v1.0.0-cqadupstack-tex-flat.md) | [+](docs/regressions/regressions-beir-v1.0.0-cqadupstack-tex-flat-wp.md) | [+](docs/regressions/regressions-beir-v1.0.0-cqadupstack-tex-multifield.md) | [+](docs/regressions/regressions-beir-v1.0.0-cqadupstack-tex-unicoil-noexp.md) | [+](docs/regressions/regressions-beir-v1.0.0-cqadupstack-tex-splade-distil-cocodenser-medium.md) | [+](docs/regressions/regressions-beir-v1.0.0-cqadupstack-tex-splade-pp-ed.md) |
-| CQADupStack-Unix | [+](docs/regressions/regressions-beir-v1.0.0-cqadupstack-unix-flat.md) | [+](docs/regressions/regressions-beir-v1.0.0-cqadupstack-unix-flat-wp.md) | [+](docs/regressions/regressions-beir-v1.0.0-cqadupstack-unix-multifield.md) | [+](docs/regressions/regressions-beir-v1.0.0-cqadupstack-unix-unicoil-noexp.md) | [+](docs/regressions/regressions-beir-v1.0.0-cqadupstack-unix-splade-distil-cocodenser-medium.md) | [+](docs/regressions/regressions-beir-v1.0.0-cqadupstack-unix-splade-pp-ed.md) |
-| CQADupStack-Webmasters | [+](docs/regressions/regressions-beir-v1.0.0-cqadupstack-webmasters-flat.md) | [+](docs/regressions/regressions-beir-v1.0.0-cqadupstack-webmasters-flat-wp.md) | [+](docs/regressions/regressions-beir-v1.0.0-cqadupstack-webmasters-multifield.md) | [+](docs/regressions/regressions-beir-v1.0.0-cqadupstack-webmasters-unicoil-noexp.md) | [+](docs/regressions/regressions-beir-v1.0.0-cqadupstack-webmasters-splade-distil-cocodenser-medium.md) | [+](docs/regressions/regressions-beir-v1.0.0-cqadupstack-webmasters-splade-pp-ed.md) |
-| CQADupStack-Wordpress | [+](docs/regressions/regressions-beir-v1.0.0-cqadupstack-wordpress-flat.md) | [+](docs/regressions/regressions-beir-v1.0.0-cqadupstack-wordpress-flat-wp.md) | [+](docs/regressions/regressions-beir-v1.0.0-cqadupstack-wordpress-multifield.md) | [+](docs/regressions/regressions-beir-v1.0.0-cqadupstack-wordpress-unicoil-noexp.md) | [+](docs/regressions/regressions-beir-v1.0.0-cqadupstack-wordpress-splade-distil-cocodenser-medium.md) | [+](docs/regressions/regressions-beir-v1.0.0-cqadupstack-wordpress-splade-pp-ed.md) |
-| Quora | [+](docs/regressions/regressions-beir-v1.0.0-quora-flat.md) | [+](docs/regressions/regressions-beir-v1.0.0-quora-flat-wp.md) | [+](docs/regressions/regressions-beir-v1.0.0-quora-multifield.md) | [+](docs/regressions/regressions-beir-v1.0.0-quora-unicoil-noexp.md) | [+](docs/regressions/regressions-beir-v1.0.0-quora-splade-distil-cocodenser-medium.md) | [+](docs/regressions/regressions-beir-v1.0.0-quora-splade-pp-ed.md) |
-| DBPedia | [+](docs/regressions/regressions-beir-v1.0.0-dbpedia-entity-flat.md) | [+](docs/regressions/regressions-beir-v1.0.0-dbpedia-entity-flat-wp.md) | [+](docs/regressions/regressions-beir-v1.0.0-dbpedia-entity-multifield.md) | [+](docs/regressions/regressions-beir-v1.0.0-dbpedia-entity-unicoil-noexp.md) | [+](docs/regressions/regressions-beir-v1.0.0-dbpedia-entity-splade-distil-cocodenser-medium.md) | [+](docs/regressions/regressions-beir-v1.0.0-dbpedia-entity-splade-pp-ed.md) |
-| SCIDOCS | [+](docs/regressions/regressions-beir-v1.0.0-scidocs-flat.md) | [+](docs/regressions/regressions-beir-v1.0.0-scidocs-flat-wp.md) | [+](docs/regressions/regressions-beir-v1.0.0-scidocs-multifield.md) | [+](docs/regressions/regressions-beir-v1.0.0-scidocs-unicoil-noexp.md) | [+](docs/regressions/regressions-beir-v1.0.0-scidocs-splade-distil-cocodenser-medium.md) | [+](docs/regressions/regressions-beir-v1.0.0-scidocs-splade-pp-ed.md) |
-| FEVER | [+](docs/regressions/regressions-beir-v1.0.0-fever-flat.md) | [+](docs/regressions/regressions-beir-v1.0.0-fever-flat-wp.md) | [+](docs/regressions/regressions-beir-v1.0.0-fever-multifield.md) | [+](docs/regressions/regressions-beir-v1.0.0-fever-unicoil-noexp.md) | [+](docs/regressions/regressions-beir-v1.0.0-fever-splade-distil-cocodenser-medium.md) | [+](docs/regressions/regressions-beir-v1.0.0-fever-splade-pp-ed.md) |
-| Climate-FEVER | [+](docs/regressions/regressions-beir-v1.0.0-climate-fever-flat.md) | [+](docs/regressions/regressions-beir-v1.0.0-climate-fever-flat-wp.md) | [+](docs/regressions/regressions-beir-v1.0.0-climate-fever-multifield.md) | [+](docs/regressions/regressions-beir-v1.0.0-climate-fever-unicoil-noexp.md) | [+](docs/regressions/regressions-beir-v1.0.0-climate-fever-splade-distil-cocodenser-medium.md) | [+](docs/regressions/regressions-beir-v1.0.0-climate-fever-splade-pp-ed.md) |
-| SciFact | [+](docs/regressions/regressions-beir-v1.0.0-scifact-flat.md) | [+](docs/regressions/regressions-beir-v1.0.0-scifact-flat-wp.md) | [+](docs/regressions/regressions-beir-v1.0.0-scifact-multifield.md) | [+](docs/regressions/regressions-beir-v1.0.0-scifact-unicoil-noexp.md) | [+](docs/regressions/regressions-beir-v1.0.0-scifact-splade-distil-cocodenser-medium.md) | [+](docs/regressions/regressions-beir-v1.0.0-scifact-splade-pp-ed.md) |
++ S1 = SPLADE++ CoCondenser-EnsembleDistil
+
+| Corpus | F1 | F2 | MF | U1 | S1 |
+|-------------------------|:-----------------------------------------------------------------------------:|:--------------------------------------------------------------------------------:|:-----------------------------------------------------------------------------------:|:--------------------------------------------------------------------------------------:|:-------------------------------------------------------------------------------------:|
+| TREC-COVID | [+](docs/regressions/regressions-beir-v1.0.0-trec-covid-flat.md) | [+](docs/regressions/regressions-beir-v1.0.0-trec-covid-flat-wp.md) | [+](docs/regressions/regressions-beir-v1.0.0-trec-covid-multifield.md) | [+](docs/regressions/regressions-beir-v1.0.0-trec-covid-unicoil-noexp.md) | [+](docs/regressions/regressions-beir-v1.0.0-trec-covid-splade-pp-ed.md) |
+| BioASQ | [+](docs/regressions/regressions-beir-v1.0.0-bioasq-flat.md) | [+](docs/regressions/regressions-beir-v1.0.0-bioasq-flat-wp.md) | [+](docs/regressions/regressions-beir-v1.0.0-bioasq-multifield.md) | [+](docs/regressions/regressions-beir-v1.0.0-bioasq-unicoil-noexp.md) | [+](docs/regressions/regressions-beir-v1.0.0-bioasq-splade-pp-ed.md) |
+| NFCorpus | [+](docs/regressions/regressions-beir-v1.0.0-nfcorpus-flat.md) | [+](docs/regressions/regressions-beir-v1.0.0-nfcorpus-flat-wp.md) | [+](docs/regressions/regressions-beir-v1.0.0-nfcorpus-multifield.md) | [+](docs/regressions/regressions-beir-v1.0.0-nfcorpus-unicoil-noexp.md) | [+](docs/regressions/regressions-beir-v1.0.0-nfcorpus-splade-pp-ed.md) |
+| NQ | [+](docs/regressions/regressions-beir-v1.0.0-nq-flat.md) | [+](docs/regressions/regressions-beir-v1.0.0-nq-flat-wp.md) | [+](docs/regressions/regressions-beir-v1.0.0-nq-multifield.md) | [+](docs/regressions/regressions-beir-v1.0.0-nq-unicoil-noexp.md) | [+](docs/regressions/regressions-beir-v1.0.0-nq-splade-pp-ed.md) |
+| HotpotQA | [+](docs/regressions/regressions-beir-v1.0.0-hotpotqa-flat.md) | [+](docs/regressions/regressions-beir-v1.0.0-hotpotqa-flat-wp.md) | [+](docs/regressions/regressions-beir-v1.0.0-hotpotqa-multifield.md) | [+](docs/regressions/regressions-beir-v1.0.0-hotpotqa-unicoil-noexp.md) | [+](docs/regressions/regressions-beir-v1.0.0-hotpotqa-splade-pp-ed.md) |
+| FiQA-2018 | [+](docs/regressions/regressions-beir-v1.0.0-fiqa-flat.md) | [+](docs/regressions/regressions-beir-v1.0.0-fiqa-flat-wp.md) | [+](docs/regressions/regressions-beir-v1.0.0-fiqa-multifield.md) | [+](docs/regressions/regressions-beir-v1.0.0-fiqa-unicoil-noexp.md) | [+](docs/regressions/regressions-beir-v1.0.0-fiqa-splade-pp-ed.md) |
+| Signal-1M(RT) | [+](docs/regressions/regressions-beir-v1.0.0-signal1m-flat.md) | [+](docs/regressions/regressions-beir-v1.0.0-signal1m-flat-wp.md) | [+](docs/regressions/regressions-beir-v1.0.0-signal1m-multifield.md) | [+](docs/regressions/regressions-beir-v1.0.0-signal1m-unicoil-noexp.md) | [+](docs/regressions/regressions-beir-v1.0.0-signal1m-splade-pp-ed.md) |
+| TREC-NEWS | [+](docs/regressions/regressions-beir-v1.0.0-trec-news-flat.md) | [+](docs/regressions/regressions-beir-v1.0.0-trec-news-flat-wp.md) | [+](docs/regressions/regressions-beir-v1.0.0-trec-news-multifield.md) | [+](docs/regressions/regressions-beir-v1.0.0-trec-news-unicoil-noexp.md) | [+](docs/regressions/regressions-beir-v1.0.0-trec-news-splade-pp-ed.md) |
+| Robust04 | [+](docs/regressions/regressions-beir-v1.0.0-robust04-flat.md) | [+](docs/regressions/regressions-beir-v1.0.0-robust04-flat-wp.md) | [+](docs/regressions/regressions-beir-v1.0.0-robust04-multifield.md) | [+](docs/regressions/regressions-beir-v1.0.0-robust04-unicoil-noexp.md) | [+](docs/regressions/regressions-beir-v1.0.0-robust04-splade-pp-ed.md) |
+| ArguAna | [+](docs/regressions/regressions-beir-v1.0.0-arguana-flat.md) | [+](docs/regressions/regressions-beir-v1.0.0-arguana-flat-wp.md) | [+](docs/regressions/regressions-beir-v1.0.0-arguana-multifield.md) | [+](docs/regressions/regressions-beir-v1.0.0-arguana-unicoil-noexp.md) | [+](docs/regressions/regressions-beir-v1.0.0-arguana-splade-pp-ed.md) |
+| Touche2020 | [+](docs/regressions/regressions-beir-v1.0.0-webis-touche2020-flat.md) | [+](docs/regressions/regressions-beir-v1.0.0-webis-touche2020-flat-wp.md) | [+](docs/regressions/regressions-beir-v1.0.0-webis-touche2020-multifield.md) | [+](docs/regressions/regressions-beir-v1.0.0-webis-touche2020-unicoil-noexp.md) | [+](docs/regressions/regressions-beir-v1.0.0-webis-touche2020-splade-pp-ed.md) |
+| CQADupStack-Android | [+](docs/regressions/regressions-beir-v1.0.0-cqadupstack-android-flat.md) | [+](docs/regressions/regressions-beir-v1.0.0-cqadupstack-android-flat-wp.md) | [+](docs/regressions/regressions-beir-v1.0.0-cqadupstack-android-multifield.md) | [+](docs/regressions/regressions-beir-v1.0.0-cqadupstack-android-unicoil-noexp.md) | [+](docs/regressions/regressions-beir-v1.0.0-cqadupstack-android-splade-pp-ed.md) |
+| CQADupStack-English | [+](docs/regressions/regressions-beir-v1.0.0-cqadupstack-english-flat.md) | [+](docs/regressions/regressions-beir-v1.0.0-cqadupstack-english-flat-wp.md) | [+](docs/regressions/regressions-beir-v1.0.0-cqadupstack-english-multifield.md) | [+](docs/regressions/regressions-beir-v1.0.0-cqadupstack-english-unicoil-noexp.md) | [+](docs/regressions/regressions-beir-v1.0.0-cqadupstack-english-splade-pp-ed.md) |
+| CQADupStack-Gaming | [+](docs/regressions/regressions-beir-v1.0.0-cqadupstack-gaming-flat.md) | [+](docs/regressions/regressions-beir-v1.0.0-cqadupstack-gaming-flat-wp.md) | [+](docs/regressions/regressions-beir-v1.0.0-cqadupstack-gaming-multifield.md) | [+](docs/regressions/regressions-beir-v1.0.0-cqadupstack-gaming-unicoil-noexp.md) | [+](docs/regressions/regressions-beir-v1.0.0-cqadupstack-gaming-splade-pp-ed.md) |
+| CQADupStack-Gis | [+](docs/regressions/regressions-beir-v1.0.0-cqadupstack-gis-flat.md) | [+](docs/regressions/regressions-beir-v1.0.0-cqadupstack-gis-flat-wp.md) | [+](docs/regressions/regressions-beir-v1.0.0-cqadupstack-gis-multifield.md) | [+](docs/regressions/regressions-beir-v1.0.0-cqadupstack-gis-unicoil-noexp.md) | [+](docs/regressions/regressions-beir-v1.0.0-cqadupstack-gis-splade-pp-ed.md) |
+| CQADupStack-Mathematica | [+](docs/regressions/regressions-beir-v1.0.0-cqadupstack-mathematica-flat.md) | [+](docs/regressions/regressions-beir-v1.0.0-cqadupstack-mathematica-flat-wp.md) | [+](docs/regressions/regressions-beir-v1.0.0-cqadupstack-mathematica-multifield.md) | [+](docs/regressions/regressions-beir-v1.0.0-cqadupstack-mathematica-unicoil-noexp.md) | [+](docs/regressions/regressions-beir-v1.0.0-cqadupstack-mathematica-splade-pp-ed.md) |
+| CQADupStack-Physics | [+](docs/regressions/regressions-beir-v1.0.0-cqadupstack-physics-flat.md) | [+](docs/regressions/regressions-beir-v1.0.0-cqadupstack-physics-flat-wp.md) | [+](docs/regressions/regressions-beir-v1.0.0-cqadupstack-physics-multifield.md) | [+](docs/regressions/regressions-beir-v1.0.0-cqadupstack-physics-unicoil-noexp.md) | [+](docs/regressions/regressions-beir-v1.0.0-cqadupstack-physics-splade-pp-ed.md) |
+| CQADupStack-Programmers | [+](docs/regressions/regressions-beir-v1.0.0-cqadupstack-programmers-flat.md) | [+](docs/regressions/regressions-beir-v1.0.0-cqadupstack-programmers-flat-wp.md) | [+](docs/regressions/regressions-beir-v1.0.0-cqadupstack-programmers-multifield.md) | [+](docs/regressions/regressions-beir-v1.0.0-cqadupstack-programmers-unicoil-noexp.md) | [+](docs/regressions/regressions-beir-v1.0.0-cqadupstack-programmers-splade-pp-ed.md) |
+| CQADupStack-Stats | [+](docs/regressions/regressions-beir-v1.0.0-cqadupstack-stats-flat.md) | [+](docs/regressions/regressions-beir-v1.0.0-cqadupstack-stats-flat-wp.md) | [+](docs/regressions/regressions-beir-v1.0.0-cqadupstack-stats-multifield.md) | [+](docs/regressions/regressions-beir-v1.0.0-cqadupstack-stats-unicoil-noexp.md) | [+](docs/regressions/regressions-beir-v1.0.0-cqadupstack-stats-splade-pp-ed.md) |
+| CQADupStack-Tex | [+](docs/regressions/regressions-beir-v1.0.0-cqadupstack-tex-flat.md) | [+](docs/regressions/regressions-beir-v1.0.0-cqadupstack-tex-flat-wp.md) | [+](docs/regressions/regressions-beir-v1.0.0-cqadupstack-tex-multifield.md) | [+](docs/regressions/regressions-beir-v1.0.0-cqadupstack-tex-unicoil-noexp.md) | [+](docs/regressions/regressions-beir-v1.0.0-cqadupstack-tex-splade-pp-ed.md) |
+| CQADupStack-Unix | [+](docs/regressions/regressions-beir-v1.0.0-cqadupstack-unix-flat.md) | [+](docs/regressions/regressions-beir-v1.0.0-cqadupstack-unix-flat-wp.md) | [+](docs/regressions/regressions-beir-v1.0.0-cqadupstack-unix-multifield.md) | [+](docs/regressions/regressions-beir-v1.0.0-cqadupstack-unix-unicoil-noexp.md) | [+](docs/regressions/regressions-beir-v1.0.0-cqadupstack-unix-splade-pp-ed.md) |
+| CQADupStack-Webmasters | [+](docs/regressions/regressions-beir-v1.0.0-cqadupstack-webmasters-flat.md) | [+](docs/regressions/regressions-beir-v1.0.0-cqadupstack-webmasters-flat-wp.md) | [+](docs/regressions/regressions-beir-v1.0.0-cqadupstack-webmasters-multifield.md) | [+](docs/regressions/regressions-beir-v1.0.0-cqadupstack-webmasters-unicoil-noexp.md) | [+](docs/regressions/regressions-beir-v1.0.0-cqadupstack-webmasters-splade-pp-ed.md) |
+| CQADupStack-Wordpress | [+](docs/regressions/regressions-beir-v1.0.0-cqadupstack-wordpress-flat.md) | [+](docs/regressions/regressions-beir-v1.0.0-cqadupstack-wordpress-flat-wp.md) | [+](docs/regressions/regressions-beir-v1.0.0-cqadupstack-wordpress-multifield.md) | [+](docs/regressions/regressions-beir-v1.0.0-cqadupstack-wordpress-unicoil-noexp.md) | [+](docs/regressions/regressions-beir-v1.0.0-cqadupstack-wordpress-splade-pp-ed.md) |
+| Quora | [+](docs/regressions/regressions-beir-v1.0.0-quora-flat.md) | [+](docs/regressions/regressions-beir-v1.0.0-quora-flat-wp.md) | [+](docs/regressions/regressions-beir-v1.0.0-quora-multifield.md) | [+](docs/regressions/regressions-beir-v1.0.0-quora-unicoil-noexp.md) | [+](docs/regressions/regressions-beir-v1.0.0-quora-splade-pp-ed.md) |
+| DBPedia | [+](docs/regressions/regressions-beir-v1.0.0-dbpedia-entity-flat.md) | [+](docs/regressions/regressions-beir-v1.0.0-dbpedia-entity-flat-wp.md) | [+](docs/regressions/regressions-beir-v1.0.0-dbpedia-entity-multifield.md) | [+](docs/regressions/regressions-beir-v1.0.0-dbpedia-entity-unicoil-noexp.md) | [+](docs/regressions/regressions-beir-v1.0.0-dbpedia-entity-splade-pp-ed.md) |
+| SCIDOCS | [+](docs/regressions/regressions-beir-v1.0.0-scidocs-flat.md) | [+](docs/regressions/regressions-beir-v1.0.0-scidocs-flat-wp.md) | [+](docs/regressions/regressions-beir-v1.0.0-scidocs-multifield.md) | [+](docs/regressions/regressions-beir-v1.0.0-scidocs-unicoil-noexp.md) | [+](docs/regressions/regressions-beir-v1.0.0-scidocs-splade-pp-ed.md) |
+| FEVER | [+](docs/regressions/regressions-beir-v1.0.0-fever-flat.md) | [+](docs/regressions/regressions-beir-v1.0.0-fever-flat-wp.md) | [+](docs/regressions/regressions-beir-v1.0.0-fever-multifield.md) | [+](docs/regressions/regressions-beir-v1.0.0-fever-unicoil-noexp.md) | [+](docs/regressions/regressions-beir-v1.0.0-fever-splade-pp-ed.md) |
+| Climate-FEVER | [+](docs/regressions/regressions-beir-v1.0.0-climate-fever-flat.md) | [+](docs/regressions/regressions-beir-v1.0.0-climate-fever-flat-wp.md) | [+](docs/regressions/regressions-beir-v1.0.0-climate-fever-multifield.md) | [+](docs/regressions/regressions-beir-v1.0.0-climate-fever-unicoil-noexp.md) | [+](docs/regressions/regressions-beir-v1.0.0-climate-fever-splade-pp-ed.md) |
+| SciFact | [+](docs/regressions/regressions-beir-v1.0.0-scifact-flat.md) | [+](docs/regressions/regressions-beir-v1.0.0-scifact-flat-wp.md) | [+](docs/regressions/regressions-beir-v1.0.0-scifact-multifield.md) | [+](docs/regressions/regressions-beir-v1.0.0-scifact-unicoil-noexp.md) | [+](docs/regressions/regressions-beir-v1.0.0-scifact-splade-pp-ed.md) |
diff --git a/docs/regressions.md b/docs/regressions.md
index b32b4db362..75feb8b9e6 100644
--- a/docs/regressions.md
+++ b/docs/regressions.md
@@ -47,7 +47,6 @@ nohup python src/main/python/run_regression.py --index --verify --search --regre
nohup python src/main/python/run_regression.py --index --verify --search --regression msmarco-passage-unicoil-noexp >& logs/log.msmarco-passage-unicoil-noexp &
nohup python src/main/python/run_regression.py --index --verify --search --regression msmarco-passage-unicoil-tilde-expansion >& logs/log.msmarco-passage-unicoil-tilde-expansion &
nohup python src/main/python/run_regression.py --index --verify --search --regression msmarco-passage-distill-splade-max >& logs/log.msmarco-passage-distill-splade-max &
-nohup python src/main/python/run_regression.py --index --verify --search --regression msmarco-passage-splade-distil-cocodenser-medium >& logs/log.msmarco-passage-splade-distil-cocodenser-medium &
nohup python src/main/python/run_regression.py --index --verify --search --regression msmarco-passage-splade-pp-ed >& logs/log.msmarco-passage-splade-pp-ed &
nohup python src/main/python/run_regression.py --index --verify --search --regression msmarco-passage-splade-pp-sd >& logs/log.msmarco-passage-splade-pp-sd &
nohup python src/main/python/run_regression.py --index --verify --search --regression msmarco-passage-cos-dpr-distil-hnsw >& logs/log.msmarco-passage-cos-dpr-distil-hnsw &
@@ -83,7 +82,6 @@ nohup python src/main/python/run_regression.py --index --verify --search --regre
nohup python src/main/python/run_regression.py --index --verify --search --regression dl19-passage-docTTTTTquery >& logs/log.dl19-passage-docTTTTTquery &
nohup python src/main/python/run_regression.py --index --verify --search --regression dl19-passage-unicoil >& logs/log.dl19-passage-unicoil &
nohup python src/main/python/run_regression.py --index --verify --search --regression dl19-passage-unicoil-noexp >& logs/log.dl19-passage-unicoil-noexp &
-nohup python src/main/python/run_regression.py --index --verify --search --regression dl19-passage-splade-distil-cocodenser-medium >& logs/log.dl19-passage-splade-distil-cocodenser-medium &
nohup python src/main/python/run_regression.py --index --verify --search --regression dl19-passage-splade-pp-ed >& logs/log.dl19-passage-splade-pp-ed &
nohup python src/main/python/run_regression.py --index --verify --search --regression dl19-passage-splade-pp-sd >& logs/log.dl19-passage-splade-pp-sd &
nohup python src/main/python/run_regression.py --index --verify --search --regression dl19-passage-cos-dpr-distil-hnsw >& logs/log.dl19-passage-cos-dpr-distil-hnsw &
@@ -119,7 +117,6 @@ nohup python src/main/python/run_regression.py --index --verify --search --regre
nohup python src/main/python/run_regression.py --index --verify --search --regression dl20-passage-docTTTTTquery >& logs/log.dl20-passage-docTTTTTquery &
nohup python src/main/python/run_regression.py --index --verify --search --regression dl20-passage-unicoil >& logs/log.dl20-passage-unicoil &
nohup python src/main/python/run_regression.py --index --verify --search --regression dl20-passage-unicoil-noexp >& logs/log.dl20-passage-unicoil-noexp &
-nohup python src/main/python/run_regression.py --index --verify --search --regression dl20-passage-splade-distil-cocodenser-medium >& logs/log.dl20-passage-splade-distil-cocodenser-medium &
nohup python src/main/python/run_regression.py --index --verify --search --regression dl20-passage-splade-pp-ed >& logs/log.dl20-passage-splade-pp-ed &
nohup python src/main/python/run_regression.py --index --verify --search --regression dl20-passage-splade-pp-sd >& logs/log.dl20-passage-splade-pp-sd &
nohup python src/main/python/run_regression.py --index --verify --search --regression dl20-passage-cos-dpr-distil-hnsw >& logs/log.dl20-passage-cos-dpr-distil-hnsw &
@@ -237,42 +234,6 @@ nohup python src/main/python/run_regression.py --index --verify --search --regre
```
-
-BEIR (v1.0.0): SPLADE-distill CoCodenser-medium
-
-```bash
-nohup python src/main/python/run_regression.py --index --verify --search --regression beir-v1.0.0-trec-covid-splade-distil-cocodenser-medium >& logs/log.beir-v1.0.0-trec-covid-splade-distil-cocodenser-medium &
-nohup python src/main/python/run_regression.py --index --verify --search --regression beir-v1.0.0-bioasq-splade-distil-cocodenser-medium >& logs/log.beir-v1.0.0-bioasq-splade-distil-cocodenser-medium &
-nohup python src/main/python/run_regression.py --index --verify --search --regression beir-v1.0.0-nfcorpus-splade-distil-cocodenser-medium >& logs/log.beir-v1.0.0-nfcorpus-splade-distil-cocodenser-medium &
-nohup python src/main/python/run_regression.py --index --verify --search --regression beir-v1.0.0-nq-splade-distil-cocodenser-medium >& logs/log.beir-v1.0.0-nq-splade-distil-cocodenser-medium &
-nohup python src/main/python/run_regression.py --index --verify --search --regression beir-v1.0.0-hotpotqa-splade-distil-cocodenser-medium >& logs/log.beir-v1.0.0-hotpotqa-splade-distil-cocodenser-medium &
-nohup python src/main/python/run_regression.py --index --verify --search --regression beir-v1.0.0-fiqa-splade-distil-cocodenser-medium >& logs/log.beir-v1.0.0-fiqa-splade-distil-cocodenser-medium &
-nohup python src/main/python/run_regression.py --index --verify --search --regression beir-v1.0.0-signal1m-splade-distil-cocodenser-medium >& logs/log.beir-v1.0.0-signal1m-splade-distil-cocodenser-medium &
-nohup python src/main/python/run_regression.py --index --verify --search --regression beir-v1.0.0-trec-news-splade-distil-cocodenser-medium >& logs/log.beir-v1.0.0-trec-news-splade-distil-cocodenser-medium &
-nohup python src/main/python/run_regression.py --index --verify --search --regression beir-v1.0.0-robust04-splade-distil-cocodenser-medium >& logs/log.beir-v1.0.0-robust04-splade-distil-cocodenser-medium &
-nohup python src/main/python/run_regression.py --index --verify --search --regression beir-v1.0.0-arguana-splade-distil-cocodenser-medium >& logs/log.beir-v1.0.0-arguana-splade-distil-cocodenser-medium &
-nohup python src/main/python/run_regression.py --index --verify --search --regression beir-v1.0.0-webis-touche2020-splade-distil-cocodenser-medium >& logs/log.beir-v1.0.0-webis-touche2020-splade-distil-cocodenser-medium &
-nohup python src/main/python/run_regression.py --index --verify --search --regression beir-v1.0.0-cqadupstack-android-splade-distil-cocodenser-medium >& logs/log.beir-v1.0.0-cqadupstack-android-splade-distil-cocodenser-medium &
-nohup python src/main/python/run_regression.py --index --verify --search --regression beir-v1.0.0-cqadupstack-english-splade-distil-cocodenser-medium >& logs/log.beir-v1.0.0-cqadupstack-english-splade-distil-cocodenser-medium &
-nohup python src/main/python/run_regression.py --index --verify --search --regression beir-v1.0.0-cqadupstack-gaming-splade-distil-cocodenser-medium >& logs/log.beir-v1.0.0-cqadupstack-gaming-splade-distil-cocodenser-medium &
-nohup python src/main/python/run_regression.py --index --verify --search --regression beir-v1.0.0-cqadupstack-gis-splade-distil-cocodenser-medium >& logs/log.beir-v1.0.0-cqadupstack-gis-splade-distil-cocodenser-medium &
-nohup python src/main/python/run_regression.py --index --verify --search --regression beir-v1.0.0-cqadupstack-mathematica-splade-distil-cocodenser-medium >& logs/log.beir-v1.0.0-cqadupstack-mathematica-splade-distil-cocodenser-medium &
-nohup python src/main/python/run_regression.py --index --verify --search --regression beir-v1.0.0-cqadupstack-physics-splade-distil-cocodenser-medium >& logs/log.beir-v1.0.0-cqadupstack-physics-splade-distil-cocodenser-medium &
-nohup python src/main/python/run_regression.py --index --verify --search --regression beir-v1.0.0-cqadupstack-programmers-splade-distil-cocodenser-medium >& logs/log.beir-v1.0.0-cqadupstack-programmers-splade-distil-cocodenser-medium &
-nohup python src/main/python/run_regression.py --index --verify --search --regression beir-v1.0.0-cqadupstack-stats-splade-distil-cocodenser-medium >& logs/log.beir-v1.0.0-cqadupstack-stats-splade-distil-cocodenser-medium &
-nohup python src/main/python/run_regression.py --index --verify --search --regression beir-v1.0.0-cqadupstack-tex-splade-distil-cocodenser-medium >& logs/log.beir-v1.0.0-cqadupstack-tex-splade-distil-cocodenser-medium &
-nohup python src/main/python/run_regression.py --index --verify --search --regression beir-v1.0.0-cqadupstack-unix-splade-distil-cocodenser-medium >& logs/log.beir-v1.0.0-cqadupstack-unix-splade-distil-cocodenser-medium &
-nohup python src/main/python/run_regression.py --index --verify --search --regression beir-v1.0.0-cqadupstack-webmasters-splade-distil-cocodenser-medium >& logs/log.beir-v1.0.0-cqadupstack-webmasters-splade-distil-cocodenser-medium &
-nohup python src/main/python/run_regression.py --index --verify --search --regression beir-v1.0.0-cqadupstack-wordpress-splade-distil-cocodenser-medium >& logs/log.beir-v1.0.0-cqadupstack-wordpress-splade-distil-cocodenser-medium &
-nohup python src/main/python/run_regression.py --index --verify --search --regression beir-v1.0.0-quora-splade-distil-cocodenser-medium >& logs/log.beir-v1.0.0-quora-splade-distil-cocodenser-medium &
-nohup python src/main/python/run_regression.py --index --verify --search --regression beir-v1.0.0-dbpedia-entity-splade-distil-cocodenser-medium >& logs/log.beir-v1.0.0-dbpedia-entity-splade-distil-cocodenser-medium &
-nohup python src/main/python/run_regression.py --index --verify --search --regression beir-v1.0.0-scidocs-splade-distil-cocodenser-medium >& logs/log.beir-v1.0.0-scidocs-splade-distil-cocodenser-medium &
-nohup python src/main/python/run_regression.py --index --verify --search --regression beir-v1.0.0-fever-splade-distil-cocodenser-medium >& logs/log.beir-v1.0.0-fever-splade-distil-cocodenser-medium &
-nohup python src/main/python/run_regression.py --index --verify --search --regression beir-v1.0.0-climate-fever-splade-distil-cocodenser-medium >& logs/log.beir-v1.0.0-climate-fever-splade-distil-cocodenser-medium &
-nohup python src/main/python/run_regression.py --index --verify --search --regression beir-v1.0.0-scifact-splade-distil-cocodenser-medium >& logs/log.beir-v1.0.0-scifact-splade-distil-cocodenser-medium &
-```
-
-
BEIR (v1.0.0): uniCOIL (noexp)
diff --git a/docs/regressions/regressions-beir-v1.0.0-arguana-splade-distil-cocodenser-medium.md b/docs/regressions/regressions-beir-v1.0.0-arguana-splade-distil-cocodenser-medium.md
deleted file mode 100644
index ea6c359c5b..0000000000
--- a/docs/regressions/regressions-beir-v1.0.0-arguana-splade-distil-cocodenser-medium.md
+++ /dev/null
@@ -1,102 +0,0 @@
-# Anserini Regressions: BEIR (v1.0.0) — ArguAna
-
-**Model**: SPLADE-distil CoCodenser Medium
-
-This page describes regression experiments, integrated into Anserini's regression testing framework, using SPLADE-distil CoCodenser Medium on [BEIR (v1.0.0) — ArguAna](http://beir.ai/).
-SPLADE-distil CoCodenser Medium is an intermediate model version between [SPLADEv2](https://arxiv.org/abs/2109.10086) and [SPLADE++](https://arxiv.org/abs/2205.04733), where the model used distillation (as in SPLADEv2), but started with the CoCondenser pre-trained model.
-See the [official SPLADE repo](https://github.com/naver/splade) for more details; the model itself can be download [here](http://download-de.europe.naverlabs.com/Splade_Release_Jan22/splade_distil_CoCodenser_medium.tar.gz).
-
-The exact configurations for these regressions are stored in [this YAML file](../../src/main/resources/regression/beir-v1.0.0-arguana-splade-distil-cocodenser-medium.yaml).
-Note that this page is automatically generated from [this template](../../src/main/resources/docgen/templates/beir-v1.0.0-arguana-splade-distil-cocodenser-medium.template) as part of Anserini's regression pipeline, so do not modify this page directly; modify the template instead and then run `bin/build.sh` to rebuild the documentation.
-
-From one of our Waterloo servers (e.g., `orca`), the following command will perform the complete regression, end to end:
-
-```
-python src/main/python/run_regression.py --index --verify --search \
- --regression beir-v1.0.0-arguana-splade-distil-cocodenser-medium
-```
-
-## Corpus
-
-We make available a version of the BEIR-v1.0.0 arguana corpus that has already been processed with SPLADE-distil CoCodenser Medium, i.e., gone through document expansion and term reweighting.
-Thus, no neural inference is involved.
-
-Download the corpus and unpack into `collections/`:
-
-```
-wget https://rgw.cs.uwaterloo.ca/JIMMYLIN-bucket0/data/beir-v1.0.0-splade_distil_cocodenser_medium-arguana.tar -P collections/
-tar xvf collections/beir-v1.0.0-splade_distil_cocodenser_medium-arguana.tar -C collections/
-```
-
-To confirm, the tarball is 8.9 MB and has MD5 checksum `9c5a181e03cbc7f13abd0e0e4bf9158e`.
-
-With the corpus downloaded, the following command will perform the complete regression, end to end, on any machine:
-
-```
-python src/main/python/run_regression.py --index --verify --search \
- --regression beir-v1.0.0-arguana-splade-distil-cocodenser-medium \
- --corpus-path collections/beir-v1.0.0-splade_distil_cocodenser_medium-arguana
-```
-
-Alternatively, you can simply copy/paste from the commands below and obtain the same results.
-
-## Indexing
-
-Sample indexing command:
-
-```
-target/appassembler/bin/IndexCollection \
- -collection JsonVectorCollection \
- -input /path/to/beir-v1.0.0-arguana-splade_distil_cocodenser_medium \
- -generator DefaultLuceneDocumentGenerator \
- -index indexes/lucene-index.beir-v1.0.0-arguana-splade_distil_cocodenser_medium/ \
- -threads 16 -impact -pretokenized \
- >& logs/log.beir-v1.0.0-arguana-splade_distil_cocodenser_medium &
-```
-
-The path `/path/to/beir-v1.0.0-arguana-splade_distil_cocodenser_medium/` should point to the corpus downloaded above.
-
-The important indexing options to note here are `-impact -pretokenized`: the first tells Anserini not to encode BM25 doclengths into Lucene's norms (which is the default) and the second option says not to apply any additional tokenization on the pre-encoded tokens.
-Upon completion, we should have an index with 8,674 documents.
-
-For additional details, see explanation of [common indexing options](../../docs/common-indexing-options.md).
-
-## Retrieval
-
-Topics and qrels are stored [here](https://github.com/castorini/anserini-tools/tree/master/topics-and-qrels), which is linked to the Anserini repo as a submodule.
-
-After indexing has completed, you should be able to perform retrieval as follows:
-
-```
-target/appassembler/bin/SearchCollection \
- -index indexes/lucene-index.beir-v1.0.0-arguana-splade_distil_cocodenser_medium/ \
- -topics tools/topics-and-qrels/topics.beir-v1.0.0-arguana.test.splade_distil_cocodenser_medium.tsv.gz \
- -topicReader TsvString \
- -output runs/run.beir-v1.0.0-arguana-splade_distil_cocodenser_medium.splade_distil_cocodenser_medium.topics.beir-v1.0.0-arguana.test.splade_distil_cocodenser_medium.txt \
- -impact -pretokenized -removeQuery -hits 1000 &
-```
-
-Evaluation can be performed using `trec_eval`:
-
-```
-target/appassembler/bin/trec_eval -c -m ndcg_cut.10 tools/topics-and-qrels/qrels.beir-v1.0.0-arguana.test.txt runs/run.beir-v1.0.0-arguana-splade_distil_cocodenser_medium.splade_distil_cocodenser_medium.topics.beir-v1.0.0-arguana.test.splade_distil_cocodenser_medium.txt
-target/appassembler/bin/trec_eval -c -m recall.100 tools/topics-and-qrels/qrels.beir-v1.0.0-arguana.test.txt runs/run.beir-v1.0.0-arguana-splade_distil_cocodenser_medium.splade_distil_cocodenser_medium.topics.beir-v1.0.0-arguana.test.splade_distil_cocodenser_medium.txt
-target/appassembler/bin/trec_eval -c -m recall.1000 tools/topics-and-qrels/qrels.beir-v1.0.0-arguana.test.txt runs/run.beir-v1.0.0-arguana-splade_distil_cocodenser_medium.splade_distil_cocodenser_medium.topics.beir-v1.0.0-arguana.test.splade_distil_cocodenser_medium.txt
-```
-
-## Effectiveness
-
-With the above commands, you should be able to reproduce the following results:
-
-| **nDCG@10** | **SPLADE-distill CoCodenser Medium**|
-|:-------------------------------------------------------------------------------------------------------------|-----------|
-| BEIR (v1.0.0): ArguAna | 0.5210 |
-| **R@100** | **SPLADE-distill CoCodenser Medium**|
-| BEIR (v1.0.0): ArguAna | 0.9822 |
-| **R@1000** | **SPLADE-distill CoCodenser Medium**|
-| BEIR (v1.0.0): ArguAna | 0.9950 |
-
-
-## Reproduction Log[*](../../docs/reproducibility.md)
-
-To add to this reproduction log, modify [this template](../../src/main/resources/docgen/templates/beir-v1.0.0-arguana-splade-distil-cocodenser-medium.template) and run `bin/build.sh` to rebuild the documentation.
diff --git a/docs/regressions/regressions-beir-v1.0.0-bioasq-splade-distil-cocodenser-medium.md b/docs/regressions/regressions-beir-v1.0.0-bioasq-splade-distil-cocodenser-medium.md
deleted file mode 100644
index e581a268e4..0000000000
--- a/docs/regressions/regressions-beir-v1.0.0-bioasq-splade-distil-cocodenser-medium.md
+++ /dev/null
@@ -1,103 +0,0 @@
-# Anserini Regressions: BEIR (v1.0.0) — BioASQ
-
-**Model**: SPLADE-distil CoCodenser Medium
-
-This page describes regression experiments, integrated into Anserini's regression testing framework, using SPLADE-distil CoCodenser Medium on [BEIR (v1.0.0) — BioASQ](http://beir.ai/).
-SPLADE-distil CoCodenser Medium is an intermediate model version between [SPLADEv2](https://arxiv.org/abs/2109.10086) and [SPLADE++](https://arxiv.org/abs/2205.04733), where the model used distillation (as in SPLADEv2), but started with the CoCondenser pre-trained model.
-See the [official SPLADE repo](https://github.com/naver/splade) for more details; the model itself can be download [here](http://download-de.europe.naverlabs.com/Splade_Release_Jan22/splade_distil_CoCodenser_medium.tar.gz).
-
-The exact configurations for these regressions are stored in [this YAML file](../../src/main/resources/regression/beir-v1.0.0-bioasq-splade-distil-cocodenser-medium.yaml).
-Note that this page is automatically generated from [this template](../../src/main/resources/docgen/templates/beir-v1.0.0-bioasq-splade-distil-cocodenser-medium.template) as part of Anserini's regression pipeline, so do not modify this page directly; modify the template instead and then run `bin/build.sh` to rebuild the documentation.
-
-From one of our Waterloo servers (e.g., `orca`), the following command will perform the complete regression, end to end:
-
-```
-python src/main/python/run_regression.py --index --verify --search \
- --regression beir-v1.0.0-bioasq-splade-distil-cocodenser-medium
-```
-
-## Corpus
-
-We make available a version of the BEIR-v1.0.0 bioasq corpus that has already been processed with SPLADE-distil CoCodenser Medium, i.e., gone through document expansion and term reweighting.
-Thus, no neural inference is involved.
-For details on how to train SPLADE-distil CoCodenser Medium and perform inference, please see [guide provided by Naver Labs Europe](https://github.com/naver/splade/tree/main/anserini_evaluation).
-
-Download the corpus and unpack into `collections/`:
-
-```
-wget https://rgw.cs.uwaterloo.ca/JIMMYLIN-bucket0/data/beir-v1.0.0-splade_distil_cocodenser_medium-bioasq.tar -P collections/
-tar xvf collections/beir-v1.0.0-splade_distil_cocodenser_medium-bioasq.tar -C collections/
-```
-
-To confirm, the tarball is 8.9 MB and has MD5 checksum `9c5a181e03cbc7f13abd0e0e4bf9158e`.
-
-With the corpus downloaded, the following command will perform the complete regression, end to end, on any machine:
-
-```
-python src/main/python/run_regression.py --index --verify --search \
- --regression beir-v1.0.0-bioasq-splade-distil-cocodenser-medium \
- --corpus-path collections/beir-v1.0.0-splade_distil_cocodenser_medium-bioasq
-```
-
-Alternatively, you can simply copy/paste from the commands below and obtain the same results.
-
-## Indexing
-
-Sample indexing command:
-
-```
-target/appassembler/bin/IndexCollection \
- -collection JsonVectorCollection \
- -input /path/to/beir-v1.0.0-bioasq-splade_distil_cocodenser_medium \
- -generator DefaultLuceneDocumentGenerator \
- -index indexes/lucene-index.beir-v1.0.0-bioasq-splade_distil_cocodenser_medium/ \
- -threads 16 -impact -pretokenized \
- >& logs/log.beir-v1.0.0-bioasq-splade_distil_cocodenser_medium &
-```
-
-The path `/path/to/beir-v1.0.0-bioasq-splade_distil_cocodenser_medium/` should point to the corpus downloaded above.
-
-The important indexing options to note here are `-impact -pretokenized`: the first tells Anserini not to encode BM25 doclengths into Lucene's norms (which is the default) and the second option says not to apply any additional tokenization on the pre-encoded tokens.
-Upon completion, we should have an index with 8,674 documents.
-
-For additional details, see explanation of [common indexing options](../../docs/common-indexing-options.md).
-
-## Retrieval
-
-Topics and qrels are stored [here](https://github.com/castorini/anserini-tools/tree/master/topics-and-qrels), which is linked to the Anserini repo as a submodule.
-
-After indexing has completed, you should be able to perform retrieval as follows:
-
-```
-target/appassembler/bin/SearchCollection \
- -index indexes/lucene-index.beir-v1.0.0-bioasq-splade_distil_cocodenser_medium/ \
- -topics tools/topics-and-qrels/topics.beir-v1.0.0-bioasq.test.splade_distil_cocodenser_medium.tsv.gz \
- -topicReader TsvString \
- -output runs/run.beir-v1.0.0-bioasq-splade_distil_cocodenser_medium.splade_distil_cocodenser_medium.topics.beir-v1.0.0-bioasq.test.splade_distil_cocodenser_medium.txt \
- -impact -pretokenized -removeQuery -hits 1000 &
-```
-
-Evaluation can be performed using `trec_eval`:
-
-```
-target/appassembler/bin/trec_eval -c -m ndcg_cut.10 tools/topics-and-qrels/qrels.beir-v1.0.0-bioasq.test.txt runs/run.beir-v1.0.0-bioasq-splade_distil_cocodenser_medium.splade_distil_cocodenser_medium.topics.beir-v1.0.0-bioasq.test.splade_distil_cocodenser_medium.txt
-target/appassembler/bin/trec_eval -c -m recall.100 tools/topics-and-qrels/qrels.beir-v1.0.0-bioasq.test.txt runs/run.beir-v1.0.0-bioasq-splade_distil_cocodenser_medium.splade_distil_cocodenser_medium.topics.beir-v1.0.0-bioasq.test.splade_distil_cocodenser_medium.txt
-target/appassembler/bin/trec_eval -c -m recall.1000 tools/topics-and-qrels/qrels.beir-v1.0.0-bioasq.test.txt runs/run.beir-v1.0.0-bioasq-splade_distil_cocodenser_medium.splade_distil_cocodenser_medium.topics.beir-v1.0.0-bioasq.test.splade_distil_cocodenser_medium.txt
-```
-
-## Effectiveness
-
-With the above commands, you should be able to reproduce the following results:
-
-| **nDCG@10** | **SPLADE-distill CoCodenser Medium**|
-|:-------------------------------------------------------------------------------------------------------------|-----------|
-| BEIR (v1.0.0): BioASQ | 0.5035 |
-| **R@100** | **SPLADE-distill CoCodenser Medium**|
-| BEIR (v1.0.0): BioASQ | 0.7422 |
-| **R@1000** | **SPLADE-distill CoCodenser Medium**|
-| BEIR (v1.0.0): BioASQ | 0.8904 |
-
-
-## Reproduction Log[*](../../docs/reproducibility.md)
-
-To add to this reproduction log, modify [this template](../../src/main/resources/docgen/templates/beir-v1.0.0-bioasq-splade-distil-cocodenser-medium.template) and run `bin/build.sh` to rebuild the documentation.
diff --git a/docs/regressions/regressions-beir-v1.0.0-climate-fever-splade-distil-cocodenser-medium.md b/docs/regressions/regressions-beir-v1.0.0-climate-fever-splade-distil-cocodenser-medium.md
deleted file mode 100644
index 8992bb8ab7..0000000000
--- a/docs/regressions/regressions-beir-v1.0.0-climate-fever-splade-distil-cocodenser-medium.md
+++ /dev/null
@@ -1,103 +0,0 @@
-# Anserini Regressions: BEIR (v1.0.0) — Climate-FEVER
-
-**Model**: SPLADE-distil CoCodenser Medium
-
-This page describes regression experiments, integrated into Anserini's regression testing framework, using SPLADE-distil CoCodenser Medium on [BEIR (v1.0.0) — Climate-FEVER](http://beir.ai/).
-SPLADE-distil CoCodenser Medium is an intermediate model version between [SPLADEv2](https://arxiv.org/abs/2109.10086) and [SPLADE++](https://arxiv.org/abs/2205.04733), where the model used distillation (as in SPLADEv2), but started with the CoCondenser pre-trained model.
-See the [official SPLADE repo](https://github.com/naver/splade) for more details; the model itself can be download [here](http://download-de.europe.naverlabs.com/Splade_Release_Jan22/splade_distil_CoCodenser_medium.tar.gz).
-
-The exact configurations for these regressions are stored in [this YAML file](../../src/main/resources/regression/beir-v1.0.0-climate-fever-splade-distil-cocodenser-medium.yaml).
-Note that this page is automatically generated from [this template](../../src/main/resources/docgen/templates/beir-v1.0.0-climate-fever-splade-distil-cocodenser-medium.template) as part of Anserini's regression pipeline, so do not modify this page directly; modify the template instead and then run `bin/build.sh` to rebuild the documentation.
-
-From one of our Waterloo servers (e.g., `orca`), the following command will perform the complete regression, end to end:
-
-```
-python src/main/python/run_regression.py --index --verify --search \
- --regression beir-v1.0.0-climate-fever-splade-distil-cocodenser-medium
-```
-
-## Corpus
-
-We make available a version of the BEIR-v1.0.0 climate-fever corpus that has already been processed with SPLADE-distil CoCodenser Medium, i.e., gone through document expansion and term reweighting.
-Thus, no neural inference is involved.
-For details on how to train SPLADE-distil CoCodenser Medium and perform inference, please see [guide provided by Naver Labs Europe](https://github.com/naver/splade/tree/main/anserini_evaluation).
-
-Download the corpus and unpack into `collections/`:
-
-```
-wget https://rgw.cs.uwaterloo.ca/JIMMYLIN-bucket0/data/beir-v1.0.0-splade_distil_cocodenser_medium-climate-fever.tar -P collections/
-tar xvf collections/beir-v1.0.0-splade_distil_cocodenser_medium-climate-fever.tar -C collections/
-```
-
-To confirm, the tarball is 3.2 GB and has MD5 checksum `07787a4b1236fad234a7fe6d89197b34`.
-
-With the corpus downloaded, the following command will perform the complete regression, end to end, on any machine:
-
-```
-python src/main/python/run_regression.py --index --verify --search \
- --regression beir-v1.0.0-climate-fever-splade-distil-cocodenser-medium \
- --corpus-path collections/beir-v1.0.0-splade_distil_cocodenser_medium-climate-fever
-```
-
-Alternatively, you can simply copy/paste from the commands below and obtain the same results.
-
-## Indexing
-
-Sample indexing command:
-
-```
-target/appassembler/bin/IndexCollection \
- -collection JsonVectorCollection \
- -input /path/to/beir-v1.0.0-climate-fever-splade_distil_cocodenser_medium \
- -generator DefaultLuceneDocumentGenerator \
- -index indexes/lucene-index.beir-v1.0.0-climate-fever-splade_distil_cocodenser_medium/ \
- -threads 16 -impact -pretokenized \
- >& logs/log.beir-v1.0.0-climate-fever-splade_distil_cocodenser_medium &
-```
-
-The path `/path/to/beir-v1.0.0-climate-fever-splade_distil_cocodenser_medium/` should point to the corpus downloaded above.
-
-The important indexing options to note here are `-impact -pretokenized`: the first tells Anserini not to encode BM25 doclengths into Lucene's norms (which is the default) and the second option says not to apply any additional tokenization on the pre-encoded tokens.
-Upon completion, we should have an index with 5,416,593 documents.
-
-For additional details, see explanation of [common indexing options](../../docs/common-indexing-options.md).
-
-## Retrieval
-
-Topics and qrels are stored [here](https://github.com/castorini/anserini-tools/tree/master/topics-and-qrels), which is linked to the Anserini repo as a submodule.
-
-After indexing has completed, you should be able to perform retrieval as follows:
-
-```
-target/appassembler/bin/SearchCollection \
- -index indexes/lucene-index.beir-v1.0.0-climate-fever-splade_distil_cocodenser_medium/ \
- -topics tools/topics-and-qrels/topics.beir-v1.0.0-climate-fever.test.splade_distil_cocodenser_medium.tsv.gz \
- -topicReader TsvString \
- -output runs/run.beir-v1.0.0-climate-fever-splade_distil_cocodenser_medium.splade_distil_cocodenser_medium.topics.beir-v1.0.0-climate-fever.test.splade_distil_cocodenser_medium.txt \
- -impact -pretokenized -removeQuery -hits 1000 &
-```
-
-Evaluation can be performed using `trec_eval`:
-
-```
-target/appassembler/bin/trec_eval -c -m ndcg_cut.10 tools/topics-and-qrels/qrels.beir-v1.0.0-climate-fever.test.txt runs/run.beir-v1.0.0-climate-fever-splade_distil_cocodenser_medium.splade_distil_cocodenser_medium.topics.beir-v1.0.0-climate-fever.test.splade_distil_cocodenser_medium.txt
-target/appassembler/bin/trec_eval -c -m recall.100 tools/topics-and-qrels/qrels.beir-v1.0.0-climate-fever.test.txt runs/run.beir-v1.0.0-climate-fever-splade_distil_cocodenser_medium.splade_distil_cocodenser_medium.topics.beir-v1.0.0-climate-fever.test.splade_distil_cocodenser_medium.txt
-target/appassembler/bin/trec_eval -c -m recall.1000 tools/topics-and-qrels/qrels.beir-v1.0.0-climate-fever.test.txt runs/run.beir-v1.0.0-climate-fever-splade_distil_cocodenser_medium.splade_distil_cocodenser_medium.topics.beir-v1.0.0-climate-fever.test.splade_distil_cocodenser_medium.txt
-```
-
-## Effectiveness
-
-With the above commands, you should be able to reproduce the following results:
-
-| **nDCG@10** | **SPLADE-distill CoCodenser Medium**|
-|:-------------------------------------------------------------------------------------------------------------|-----------|
-| BEIR (v1.0.0): Climate-FEVER | 0.2276 |
-| **R@100** | **SPLADE-distill CoCodenser Medium**|
-| BEIR (v1.0.0): Climate-FEVER | 0.5140 |
-| **R@1000** | **SPLADE-distill CoCodenser Medium**|
-| BEIR (v1.0.0): Climate-FEVER | 0.7084 |
-
-
-## Reproduction Log[*](../../docs/reproducibility.md)
-
-To add to this reproduction log, modify [this template](../../src/main/resources/docgen/templates/beir-v1.0.0-climate-fever-splade-distil-cocodenser-medium.template) and run `bin/build.sh` to rebuild the documentation.
diff --git a/docs/regressions/regressions-beir-v1.0.0-cqadupstack-android-splade-distil-cocodenser-medium.md b/docs/regressions/regressions-beir-v1.0.0-cqadupstack-android-splade-distil-cocodenser-medium.md
deleted file mode 100644
index 77c87293f3..0000000000
--- a/docs/regressions/regressions-beir-v1.0.0-cqadupstack-android-splade-distil-cocodenser-medium.md
+++ /dev/null
@@ -1,103 +0,0 @@
-# Anserini Regressions: BEIR (v1.0.0) — CQADupStack-android
-
-**Model**: SPLADE-distil CoCodenser Medium
-
-This page describes regression experiments, integrated into Anserini's regression testing framework, using SPLADE-distil CoCodenser Medium on [BEIR (v1.0.0) — CQADupStack-android](http://beir.ai/).
-SPLADE-distil CoCodenser Medium is an intermediate model version between [SPLADEv2](https://arxiv.org/abs/2109.10086) and [SPLADE++](https://arxiv.org/abs/2205.04733), where the model used distillation (as in SPLADEv2), but started with the CoCondenser pre-trained model.
-See the [official SPLADE repo](https://github.com/naver/splade) for more details; the model itself can be download [here](http://download-de.europe.naverlabs.com/Splade_Release_Jan22/splade_distil_CoCodenser_medium.tar.gz).
-
-The exact configurations for these regressions are stored in [this YAML file](../../src/main/resources/regression/beir-v1.0.0-cqadupstack-android-splade-distil-cocodenser-medium.yaml).
-Note that this page is automatically generated from [this template](../../src/main/resources/docgen/templates/beir-v1.0.0-cqadupstack-android-splade-distil-cocodenser-medium.template) as part of Anserini's regression pipeline, so do not modify this page directly; modify the template instead and then run `bin/build.sh` to rebuild the documentation.
-
-From one of our Waterloo servers (e.g., `orca`), the following command will perform the complete regression, end to end:
-
-```
-python src/main/python/run_regression.py --index --verify --search \
- --regression beir-v1.0.0-cqadupstack-android-splade-distil-cocodenser-medium
-```
-
-## Corpus
-
-We make available a version of the BEIR-v1.0.0 cqadupstack-android corpus that has already been processed with SPLADE-distil CoCodenser Medium, i.e., gone through document expansion and term reweighting.
-Thus, no neural inference is involved.
-For details on how to train SPLADE-distil CoCodenser Medium and perform inference, please see [guide provided by Naver Labs Europe](https://github.com/naver/splade/tree/main/anserini_evaluation).
-
-Download the corpus and unpack into `collections/`:
-
-```
-wget https://rgw.cs.uwaterloo.ca/JIMMYLIN-bucket0/data/beir-v1.0.0-splade_distil_cocodenser_medium-cqadupstack-android.tar -P collections/
-tar xvf collections/beir-v1.0.0-splade_distil_cocodenser_medium-cqadupstack-android.tar -C collections/
-```
-
-To confirm, the tarball is 8.9 MB and has MD5 checksum `9c5a181e03cbc7f13abd0e0e4bf9158e`.
-
-With the corpus downloaded, the following command will perform the complete regression, end to end, on any machine:
-
-```
-python src/main/python/run_regression.py --index --verify --search \
- --regression beir-v1.0.0-cqadupstack-android-splade-distil-cocodenser-medium \
- --corpus-path collections/beir-v1.0.0-splade_distil_cocodenser_medium-cqadupstack-android
-```
-
-Alternatively, you can simply copy/paste from the commands below and obtain the same results.
-
-## Indexing
-
-Sample indexing command:
-
-```
-target/appassembler/bin/IndexCollection \
- -collection JsonVectorCollection \
- -input /path/to/beir-v1.0.0-cqadupstack-android-splade_distil_cocodenser_medium \
- -generator DefaultLuceneDocumentGenerator \
- -index indexes/lucene-index.beir-v1.0.0-cqadupstack-android-splade_distil_cocodenser_medium/ \
- -threads 16 -impact -pretokenized \
- >& logs/log.beir-v1.0.0-cqadupstack-android-splade_distil_cocodenser_medium &
-```
-
-The path `/path/to/beir-v1.0.0-cqadupstack-android-splade_distil_cocodenser_medium/` should point to the corpus downloaded above.
-
-The important indexing options to note here are `-impact -pretokenized`: the first tells Anserini not to encode BM25 doclengths into Lucene's norms (which is the default) and the second option says not to apply any additional tokenization on the pre-encoded tokens.
-Upon completion, we should have an index with 8,674 documents.
-
-For additional details, see explanation of [common indexing options](../../docs/common-indexing-options.md).
-
-## Retrieval
-
-Topics and qrels are stored in [anserini-tools](https://github.com/castorini/anserini-tools/tree/master/topics-and-qrels), which is linked to the Anserini repo as a submodule.
-
-After indexing has completed, you should be able to perform retrieval as follows:
-
-```
-target/appassembler/bin/SearchCollection \
- -index indexes/lucene-index.beir-v1.0.0-cqadupstack-android-splade_distil_cocodenser_medium/ \
- -topics tools/topics-and-qrels/topics.beir-v1.0.0-cqadupstack-android.test.splade_distil_cocodenser_medium.tsv.gz \
- -topicReader TsvString \
- -output runs/run.beir-v1.0.0-cqadupstack-android-splade_distil_cocodenser_medium.splade_distil_cocodenser_medium.topics.beir-v1.0.0-cqadupstack-android.test.splade_distil_cocodenser_medium.txt \
- -impact -pretokenized -removeQuery -hits 1000 &
-```
-
-Evaluation can be performed using `trec_eval`:
-
-```
-target/appassembler/bin/trec_eval -c -m ndcg_cut.10 tools/topics-and-qrels/qrels.beir-v1.0.0-cqadupstack-android.test.txt runs/run.beir-v1.0.0-cqadupstack-android-splade_distil_cocodenser_medium.splade_distil_cocodenser_medium.topics.beir-v1.0.0-cqadupstack-android.test.splade_distil_cocodenser_medium.txt
-target/appassembler/bin/trec_eval -c -m recall.100 tools/topics-and-qrels/qrels.beir-v1.0.0-cqadupstack-android.test.txt runs/run.beir-v1.0.0-cqadupstack-android-splade_distil_cocodenser_medium.splade_distil_cocodenser_medium.topics.beir-v1.0.0-cqadupstack-android.test.splade_distil_cocodenser_medium.txt
-target/appassembler/bin/trec_eval -c -m recall.1000 tools/topics-and-qrels/qrels.beir-v1.0.0-cqadupstack-android.test.txt runs/run.beir-v1.0.0-cqadupstack-android-splade_distil_cocodenser_medium.splade_distil_cocodenser_medium.topics.beir-v1.0.0-cqadupstack-android.test.splade_distil_cocodenser_medium.txt
-```
-
-## Effectiveness
-
-With the above commands, you should be able to reproduce the following results:
-
-| **nDCG@10** | **SPLADE-distill CoCodenser Medium**|
-|:-------------------------------------------------------------------------------------------------------------|-----------|
-| BEIR (v1.0.0): CQADupStack-android | 0.3954 |
-| **R@100** | **SPLADE-distill CoCodenser Medium**|
-| BEIR (v1.0.0): CQADupStack-android | 0.7405 |
-| **R@1000** | **SPLADE-distill CoCodenser Medium**|
-| BEIR (v1.0.0): CQADupStack-android | 0.9035 |
-
-
-## Reproduction Log[*](../../docs/reproducibility.md)
-
-To add to this reproduction log, modify [this template](../../src/main/resources/docgen/templates/beir-v1.0.0-cqadupstack-android-splade-distil-cocodenser-medium.template) and run `bin/build.sh` to rebuild the documentation.
diff --git a/docs/regressions/regressions-beir-v1.0.0-cqadupstack-english-splade-distil-cocodenser-medium.md b/docs/regressions/regressions-beir-v1.0.0-cqadupstack-english-splade-distil-cocodenser-medium.md
deleted file mode 100644
index e184eaf5b1..0000000000
--- a/docs/regressions/regressions-beir-v1.0.0-cqadupstack-english-splade-distil-cocodenser-medium.md
+++ /dev/null
@@ -1,103 +0,0 @@
-# Anserini Regressions: BEIR (v1.0.0) — CQADupStack-english
-
-**Model**: SPLADE-distil CoCodenser Medium
-
-This page describes regression experiments, integrated into Anserini's regression testing framework, using SPLADE-distil CoCodenser Medium on [BEIR (v1.0.0) — CQADupStack-english](http://beir.ai/).
-SPLADE-distil CoCodenser Medium is an intermediate model version between [SPLADEv2](https://arxiv.org/abs/2109.10086) and [SPLADE++](https://arxiv.org/abs/2205.04733), where the model used distillation (as in SPLADEv2), but started with the CoCondenser pre-trained model.
-See the [official SPLADE repo](https://github.com/naver/splade) for more details; the model itself can be downloaded [here](http://download-de.europe.naverlabs.com/Splade_Release_Jan22/splade_distil_CoCodenser_medium.tar.gz).
-
-The exact configurations for these regressions are stored in [this YAML file](../../src/main/resources/regression/beir-v1.0.0-cqadupstack-english-splade-distil-cocodenser-medium.yaml).
-Note that this page is automatically generated from [this template](../../src/main/resources/docgen/templates/beir-v1.0.0-cqadupstack-english-splade-distil-cocodenser-medium.template) as part of Anserini's regression pipeline, so do not modify this page directly; modify the template instead and then run `bin/build.sh` to rebuild the documentation.
-
-From one of our Waterloo servers (e.g., `orca`), the following command will perform the complete regression, end to end:
-
-```
-python src/main/python/run_regression.py --index --verify --search \
- --regression beir-v1.0.0-cqadupstack-english-splade-distil-cocodenser-medium
-```
-
-## Corpus
-
-We make available a version of the BEIR-v1.0.0 cqadupstack-english corpus that has already been processed with SPLADE-distil CoCodenser Medium, i.e., gone through document expansion and term reweighting.
-Thus, no neural inference is involved.
-For details on how to train SPLADE-distil CoCodenser Medium and perform inference, please see the [guide provided by Naver Labs Europe](https://github.com/naver/splade/tree/main/anserini_evaluation).
-
-Download the corpus and unpack into `collections/`:
-
-```
-wget https://rgw.cs.uwaterloo.ca/JIMMYLIN-bucket0/data/beir-v1.0.0-splade_distil_cocodenser_medium-cqadupstack-english.tar -P collections/
-tar xvf collections/beir-v1.0.0-splade_distil_cocodenser_medium-cqadupstack-english.tar -C collections/
-```
-
-To confirm, the tarball is 8.9 MB and has MD5 checksum `9c5a181e03cbc7f13abd0e0e4bf9158e`.
-
-With the corpus downloaded, the following command will perform the complete regression, end to end, on any machine:
-
-```
-python src/main/python/run_regression.py --index --verify --search \
- --regression beir-v1.0.0-cqadupstack-english-splade-distil-cocodenser-medium \
- --corpus-path collections/beir-v1.0.0-splade_distil_cocodenser_medium-cqadupstack-english
-```
-
-Alternatively, you can simply copy/paste from the commands below and obtain the same results.
-
-## Indexing
-
-Sample indexing command:
-
-```
-target/appassembler/bin/IndexCollection \
- -collection JsonVectorCollection \
- -input /path/to/beir-v1.0.0-cqadupstack-english-splade_distil_cocodenser_medium \
- -generator DefaultLuceneDocumentGenerator \
- -index indexes/lucene-index.beir-v1.0.0-cqadupstack-english-splade_distil_cocodenser_medium/ \
- -threads 16 -impact -pretokenized \
- >& logs/log.beir-v1.0.0-cqadupstack-english-splade_distil_cocodenser_medium &
-```
-
-The path `/path/to/beir-v1.0.0-cqadupstack-english-splade_distil_cocodenser_medium/` should point to the corpus downloaded above.
-
-The important indexing options to note here are `-impact -pretokenized`: the first tells Anserini not to encode BM25 document lengths into Lucene's norms (which is the default behavior), and the second tells it not to apply any additional tokenization to the pre-encoded tokens.
-Upon completion, we should have an index with 8,674 documents.
-
-For additional details, see the explanation of [common indexing options](../../docs/common-indexing-options.md).
-
-## Retrieval
-
-Topics and qrels are stored in [anserini-tools](https://github.com/castorini/anserini-tools/tree/master/topics-and-qrels), which is linked to the Anserini repo as a submodule.
-
-After indexing has completed, you should be able to perform retrieval as follows:
-
-```
-target/appassembler/bin/SearchCollection \
- -index indexes/lucene-index.beir-v1.0.0-cqadupstack-english-splade_distil_cocodenser_medium/ \
- -topics tools/topics-and-qrels/topics.beir-v1.0.0-cqadupstack-english.test.splade_distil_cocodenser_medium.tsv.gz \
- -topicReader TsvString \
- -output runs/run.beir-v1.0.0-cqadupstack-english-splade_distil_cocodenser_medium.splade_distil_cocodenser_medium.topics.beir-v1.0.0-cqadupstack-english.test.splade_distil_cocodenser_medium.txt \
- -impact -pretokenized -removeQuery -hits 1000 &
-```
-
-Evaluation can be performed using `trec_eval`:
-
-```
-target/appassembler/bin/trec_eval -c -m ndcg_cut.10 tools/topics-and-qrels/qrels.beir-v1.0.0-cqadupstack-english.test.txt runs/run.beir-v1.0.0-cqadupstack-english-splade_distil_cocodenser_medium.splade_distil_cocodenser_medium.topics.beir-v1.0.0-cqadupstack-english.test.splade_distil_cocodenser_medium.txt
-target/appassembler/bin/trec_eval -c -m recall.100 tools/topics-and-qrels/qrels.beir-v1.0.0-cqadupstack-english.test.txt runs/run.beir-v1.0.0-cqadupstack-english-splade_distil_cocodenser_medium.splade_distil_cocodenser_medium.topics.beir-v1.0.0-cqadupstack-english.test.splade_distil_cocodenser_medium.txt
-target/appassembler/bin/trec_eval -c -m recall.1000 tools/topics-and-qrels/qrels.beir-v1.0.0-cqadupstack-english.test.txt runs/run.beir-v1.0.0-cqadupstack-english-splade_distil_cocodenser_medium.splade_distil_cocodenser_medium.topics.beir-v1.0.0-cqadupstack-english.test.splade_distil_cocodenser_medium.txt
-```
-
-## Effectiveness
-
-With the above commands, you should be able to reproduce the following results:
-
-| **nDCG@10** | **SPLADE-distill CoCodenser Medium**|
-|:-------------------------------------------------------------------------------------------------------------|-----------|
-| BEIR (v1.0.0): CQADupStack-english | 0.4026 |
-| **R@100** | **SPLADE-distill CoCodenser Medium**|
-| BEIR (v1.0.0): CQADupStack-english | 0.6768 |
-| **R@1000** | **SPLADE-distill CoCodenser Medium**|
-| BEIR (v1.0.0): CQADupStack-english | 0.8346 |
-
-
-## Reproduction Log[*](../../docs/reproducibility.md)
-
-To add to this reproduction log, modify [this template](../../src/main/resources/docgen/templates/beir-v1.0.0-cqadupstack-english-splade-distil-cocodenser-medium.template) and run `bin/build.sh` to rebuild the documentation.
diff --git a/docs/regressions/regressions-beir-v1.0.0-cqadupstack-gaming-splade-distil-cocodenser-medium.md b/docs/regressions/regressions-beir-v1.0.0-cqadupstack-gaming-splade-distil-cocodenser-medium.md
deleted file mode 100644
index d7adcdcde8..0000000000
--- a/docs/regressions/regressions-beir-v1.0.0-cqadupstack-gaming-splade-distil-cocodenser-medium.md
+++ /dev/null
@@ -1,103 +0,0 @@
-# Anserini Regressions: BEIR (v1.0.0) — CQADupStack-gaming
-
-**Model**: SPLADE-distil CoCodenser Medium
-
-This page describes regression experiments, integrated into Anserini's regression testing framework, using SPLADE-distil CoCodenser Medium on [BEIR (v1.0.0) — CQADupStack-gaming](http://beir.ai/).
-SPLADE-distil CoCodenser Medium is an intermediate model version between [SPLADEv2](https://arxiv.org/abs/2109.10086) and [SPLADE++](https://arxiv.org/abs/2205.04733), where the model used distillation (as in SPLADEv2), but started with the CoCondenser pre-trained model.
-See the [official SPLADE repo](https://github.com/naver/splade) for more details; the model itself can be downloaded [here](http://download-de.europe.naverlabs.com/Splade_Release_Jan22/splade_distil_CoCodenser_medium.tar.gz).
-
-The exact configurations for these regressions are stored in [this YAML file](../../src/main/resources/regression/beir-v1.0.0-cqadupstack-gaming-splade-distil-cocodenser-medium.yaml).
-Note that this page is automatically generated from [this template](../../src/main/resources/docgen/templates/beir-v1.0.0-cqadupstack-gaming-splade-distil-cocodenser-medium.template) as part of Anserini's regression pipeline, so do not modify this page directly; modify the template instead and then run `bin/build.sh` to rebuild the documentation.
-
-From one of our Waterloo servers (e.g., `orca`), the following command will perform the complete regression, end to end:
-
-```
-python src/main/python/run_regression.py --index --verify --search \
- --regression beir-v1.0.0-cqadupstack-gaming-splade-distil-cocodenser-medium
-```
-
-## Corpus
-
-We make available a version of the BEIR-v1.0.0 cqadupstack-gaming corpus that has already been processed with SPLADE-distil CoCodenser Medium, i.e., gone through document expansion and term reweighting.
-Thus, no neural inference is involved.
-For details on how to train SPLADE-distil CoCodenser Medium and perform inference, please see the [guide provided by Naver Labs Europe](https://github.com/naver/splade/tree/main/anserini_evaluation).
-
-Download the corpus and unpack into `collections/`:
-
-```
-wget https://rgw.cs.uwaterloo.ca/JIMMYLIN-bucket0/data/beir-v1.0.0-splade_distil_cocodenser_medium-cqadupstack-gaming.tar -P collections/
-tar xvf collections/beir-v1.0.0-splade_distil_cocodenser_medium-cqadupstack-gaming.tar -C collections/
-```
-
-To confirm, the tarball is 8.9 MB and has MD5 checksum `9c5a181e03cbc7f13abd0e0e4bf9158e`.
-
-With the corpus downloaded, the following command will perform the complete regression, end to end, on any machine:
-
-```
-python src/main/python/run_regression.py --index --verify --search \
- --regression beir-v1.0.0-cqadupstack-gaming-splade-distil-cocodenser-medium \
- --corpus-path collections/beir-v1.0.0-splade_distil_cocodenser_medium-cqadupstack-gaming
-```
-
-Alternatively, you can simply copy/paste from the commands below and obtain the same results.
-
-## Indexing
-
-Sample indexing command:
-
-```
-target/appassembler/bin/IndexCollection \
- -collection JsonVectorCollection \
- -input /path/to/beir-v1.0.0-cqadupstack-gaming-splade_distil_cocodenser_medium \
- -generator DefaultLuceneDocumentGenerator \
- -index indexes/lucene-index.beir-v1.0.0-cqadupstack-gaming-splade_distil_cocodenser_medium/ \
- -threads 16 -impact -pretokenized \
- >& logs/log.beir-v1.0.0-cqadupstack-gaming-splade_distil_cocodenser_medium &
-```
-
-The path `/path/to/beir-v1.0.0-cqadupstack-gaming-splade_distil_cocodenser_medium/` should point to the corpus downloaded above.
-
-The important indexing options to note here are `-impact -pretokenized`: the first tells Anserini not to encode BM25 document lengths into Lucene's norms (which is the default behavior), and the second tells it not to apply any additional tokenization to the pre-encoded tokens.
-Upon completion, we should have an index with 8,674 documents.
-
-For additional details, see the explanation of [common indexing options](../../docs/common-indexing-options.md).
-
-## Retrieval
-
-Topics and qrels are stored in [anserini-tools](https://github.com/castorini/anserini-tools/tree/master/topics-and-qrels), which is linked to the Anserini repo as a submodule.
-
-After indexing has completed, you should be able to perform retrieval as follows:
-
-```
-target/appassembler/bin/SearchCollection \
- -index indexes/lucene-index.beir-v1.0.0-cqadupstack-gaming-splade_distil_cocodenser_medium/ \
- -topics tools/topics-and-qrels/topics.beir-v1.0.0-cqadupstack-gaming.test.splade_distil_cocodenser_medium.tsv.gz \
- -topicReader TsvString \
- -output runs/run.beir-v1.0.0-cqadupstack-gaming-splade_distil_cocodenser_medium.splade_distil_cocodenser_medium.topics.beir-v1.0.0-cqadupstack-gaming.test.splade_distil_cocodenser_medium.txt \
- -impact -pretokenized -removeQuery -hits 1000 &
-```
-
-Evaluation can be performed using `trec_eval`:
-
-```
-target/appassembler/bin/trec_eval -c -m ndcg_cut.10 tools/topics-and-qrels/qrels.beir-v1.0.0-cqadupstack-gaming.test.txt runs/run.beir-v1.0.0-cqadupstack-gaming-splade_distil_cocodenser_medium.splade_distil_cocodenser_medium.topics.beir-v1.0.0-cqadupstack-gaming.test.splade_distil_cocodenser_medium.txt
-target/appassembler/bin/trec_eval -c -m recall.100 tools/topics-and-qrels/qrels.beir-v1.0.0-cqadupstack-gaming.test.txt runs/run.beir-v1.0.0-cqadupstack-gaming-splade_distil_cocodenser_medium.splade_distil_cocodenser_medium.topics.beir-v1.0.0-cqadupstack-gaming.test.splade_distil_cocodenser_medium.txt
-target/appassembler/bin/trec_eval -c -m recall.1000 tools/topics-and-qrels/qrels.beir-v1.0.0-cqadupstack-gaming.test.txt runs/run.beir-v1.0.0-cqadupstack-gaming-splade_distil_cocodenser_medium.splade_distil_cocodenser_medium.topics.beir-v1.0.0-cqadupstack-gaming.test.splade_distil_cocodenser_medium.txt
-```
-
-## Effectiveness
-
-With the above commands, you should be able to reproduce the following results:
-
-| **nDCG@10** | **SPLADE-distill CoCodenser Medium**|
-|:-------------------------------------------------------------------------------------------------------------|-----------|
-| BEIR (v1.0.0): CQADupStack-gaming | 0.5061 |
-| **R@100** | **SPLADE-distill CoCodenser Medium**|
-| BEIR (v1.0.0): CQADupStack-gaming | 0.8138 |
-| **R@1000** | **SPLADE-distill CoCodenser Medium**|
-| BEIR (v1.0.0): CQADupStack-gaming | 0.9253 |
-
-
-## Reproduction Log[*](../../docs/reproducibility.md)
-
-To add to this reproduction log, modify [this template](../../src/main/resources/docgen/templates/beir-v1.0.0-cqadupstack-gaming-splade-distil-cocodenser-medium.template) and run `bin/build.sh` to rebuild the documentation.
diff --git a/docs/regressions/regressions-beir-v1.0.0-cqadupstack-gis-splade-distil-cocodenser-medium.md b/docs/regressions/regressions-beir-v1.0.0-cqadupstack-gis-splade-distil-cocodenser-medium.md
deleted file mode 100644
index afcfb29d71..0000000000
--- a/docs/regressions/regressions-beir-v1.0.0-cqadupstack-gis-splade-distil-cocodenser-medium.md
+++ /dev/null
@@ -1,103 +0,0 @@
-# Anserini Regressions: BEIR (v1.0.0) — CQADupStack-gis
-
-**Model**: SPLADE-distil CoCodenser Medium
-
-This page describes regression experiments, integrated into Anserini's regression testing framework, using SPLADE-distil CoCodenser Medium on [BEIR (v1.0.0) — CQADupStack-gis](http://beir.ai/).
-SPLADE-distil CoCodenser Medium is an intermediate model version between [SPLADEv2](https://arxiv.org/abs/2109.10086) and [SPLADE++](https://arxiv.org/abs/2205.04733), where the model used distillation (as in SPLADEv2), but started with the CoCondenser pre-trained model.
-See the [official SPLADE repo](https://github.com/naver/splade) for more details; the model itself can be downloaded [here](http://download-de.europe.naverlabs.com/Splade_Release_Jan22/splade_distil_CoCodenser_medium.tar.gz).
-
-The exact configurations for these regressions are stored in [this YAML file](../../src/main/resources/regression/beir-v1.0.0-cqadupstack-gis-splade-distil-cocodenser-medium.yaml).
-Note that this page is automatically generated from [this template](../../src/main/resources/docgen/templates/beir-v1.0.0-cqadupstack-gis-splade-distil-cocodenser-medium.template) as part of Anserini's regression pipeline, so do not modify this page directly; modify the template instead and then run `bin/build.sh` to rebuild the documentation.
-
-From one of our Waterloo servers (e.g., `orca`), the following command will perform the complete regression, end to end:
-
-```
-python src/main/python/run_regression.py --index --verify --search \
- --regression beir-v1.0.0-cqadupstack-gis-splade-distil-cocodenser-medium
-```
-
-## Corpus
-
-We make available a version of the BEIR-v1.0.0 cqadupstack-gis corpus that has already been processed with SPLADE-distil CoCodenser Medium, i.e., gone through document expansion and term reweighting.
-Thus, no neural inference is involved.
-For details on how to train SPLADE-distil CoCodenser Medium and perform inference, please see the [guide provided by Naver Labs Europe](https://github.com/naver/splade/tree/main/anserini_evaluation).
-
-Download the corpus and unpack into `collections/`:
-
-```
-wget https://rgw.cs.uwaterloo.ca/JIMMYLIN-bucket0/data/beir-v1.0.0-splade_distil_cocodenser_medium-cqadupstack-gis.tar -P collections/
-tar xvf collections/beir-v1.0.0-splade_distil_cocodenser_medium-cqadupstack-gis.tar -C collections/
-```
-
-To confirm, the tarball is 8.9 MB and has MD5 checksum `9c5a181e03cbc7f13abd0e0e4bf9158e`.
-
-With the corpus downloaded, the following command will perform the complete regression, end to end, on any machine:
-
-```
-python src/main/python/run_regression.py --index --verify --search \
- --regression beir-v1.0.0-cqadupstack-gis-splade-distil-cocodenser-medium \
- --corpus-path collections/beir-v1.0.0-splade_distil_cocodenser_medium-cqadupstack-gis
-```
-
-Alternatively, you can simply copy/paste from the commands below and obtain the same results.
-
-## Indexing
-
-Sample indexing command:
-
-```
-target/appassembler/bin/IndexCollection \
- -collection JsonVectorCollection \
- -input /path/to/beir-v1.0.0-cqadupstack-gis-splade_distil_cocodenser_medium \
- -generator DefaultLuceneDocumentGenerator \
- -index indexes/lucene-index.beir-v1.0.0-cqadupstack-gis-splade_distil_cocodenser_medium/ \
- -threads 16 -impact -pretokenized \
- >& logs/log.beir-v1.0.0-cqadupstack-gis-splade_distil_cocodenser_medium &
-```
-
-The path `/path/to/beir-v1.0.0-cqadupstack-gis-splade_distil_cocodenser_medium/` should point to the corpus downloaded above.
-
-The important indexing options to note here are `-impact -pretokenized`: the first tells Anserini not to encode BM25 document lengths into Lucene's norms (which is the default behavior), and the second tells it not to apply any additional tokenization to the pre-encoded tokens.
-Upon completion, we should have an index with 8,674 documents.
-
-For additional details, see the explanation of [common indexing options](../../docs/common-indexing-options.md).
-
-## Retrieval
-
-Topics and qrels are stored in [anserini-tools](https://github.com/castorini/anserini-tools/tree/master/topics-and-qrels), which is linked to the Anserini repo as a submodule.
-
-After indexing has completed, you should be able to perform retrieval as follows:
-
-```
-target/appassembler/bin/SearchCollection \
- -index indexes/lucene-index.beir-v1.0.0-cqadupstack-gis-splade_distil_cocodenser_medium/ \
- -topics tools/topics-and-qrels/topics.beir-v1.0.0-cqadupstack-gis.test.splade_distil_cocodenser_medium.tsv.gz \
- -topicReader TsvString \
- -output runs/run.beir-v1.0.0-cqadupstack-gis-splade_distil_cocodenser_medium.splade_distil_cocodenser_medium.topics.beir-v1.0.0-cqadupstack-gis.test.splade_distil_cocodenser_medium.txt \
- -impact -pretokenized -removeQuery -hits 1000 &
-```
-
-Evaluation can be performed using `trec_eval`:
-
-```
-target/appassembler/bin/trec_eval -c -m ndcg_cut.10 tools/topics-and-qrels/qrels.beir-v1.0.0-cqadupstack-gis.test.txt runs/run.beir-v1.0.0-cqadupstack-gis-splade_distil_cocodenser_medium.splade_distil_cocodenser_medium.topics.beir-v1.0.0-cqadupstack-gis.test.splade_distil_cocodenser_medium.txt
-target/appassembler/bin/trec_eval -c -m recall.100 tools/topics-and-qrels/qrels.beir-v1.0.0-cqadupstack-gis.test.txt runs/run.beir-v1.0.0-cqadupstack-gis-splade_distil_cocodenser_medium.splade_distil_cocodenser_medium.topics.beir-v1.0.0-cqadupstack-gis.test.splade_distil_cocodenser_medium.txt
-target/appassembler/bin/trec_eval -c -m recall.1000 tools/topics-and-qrels/qrels.beir-v1.0.0-cqadupstack-gis.test.txt runs/run.beir-v1.0.0-cqadupstack-gis-splade_distil_cocodenser_medium.splade_distil_cocodenser_medium.topics.beir-v1.0.0-cqadupstack-gis.test.splade_distil_cocodenser_medium.txt
-```
-
-## Effectiveness
-
-With the above commands, you should be able to reproduce the following results:
-
-| **nDCG@10** | **SPLADE-distill CoCodenser Medium**|
-|:-------------------------------------------------------------------------------------------------------------|-----------|
-| BEIR (v1.0.0): CQADupStack-gis | 0.3223 |
-| **R@100** | **SPLADE-distill CoCodenser Medium**|
-| BEIR (v1.0.0): CQADupStack-gis | 0.6419 |
-| **R@1000** | **SPLADE-distill CoCodenser Medium**|
-| BEIR (v1.0.0): CQADupStack-gis | 0.8385 |
-
-
-## Reproduction Log[*](../../docs/reproducibility.md)
-
-To add to this reproduction log, modify [this template](../../src/main/resources/docgen/templates/beir-v1.0.0-cqadupstack-gis-splade-distil-cocodenser-medium.template) and run `bin/build.sh` to rebuild the documentation.
diff --git a/docs/regressions/regressions-beir-v1.0.0-cqadupstack-mathematica-splade-distil-cocodenser-medium.md b/docs/regressions/regressions-beir-v1.0.0-cqadupstack-mathematica-splade-distil-cocodenser-medium.md
deleted file mode 100644
index e1ed1b8138..0000000000
--- a/docs/regressions/regressions-beir-v1.0.0-cqadupstack-mathematica-splade-distil-cocodenser-medium.md
+++ /dev/null
@@ -1,103 +0,0 @@
-# Anserini Regressions: BEIR (v1.0.0) — CQADupStack-mathematica
-
-**Model**: SPLADE-distil CoCodenser Medium
-
-This page describes regression experiments, integrated into Anserini's regression testing framework, using SPLADE-distil CoCodenser Medium on [BEIR (v1.0.0) — CQADupStack-mathematica](http://beir.ai/).
-SPLADE-distil CoCodenser Medium is an intermediate model version between [SPLADEv2](https://arxiv.org/abs/2109.10086) and [SPLADE++](https://arxiv.org/abs/2205.04733), where the model used distillation (as in SPLADEv2), but started with the CoCondenser pre-trained model.
-See the [official SPLADE repo](https://github.com/naver/splade) for more details; the model itself can be downloaded [here](http://download-de.europe.naverlabs.com/Splade_Release_Jan22/splade_distil_CoCodenser_medium.tar.gz).
-
-The exact configurations for these regressions are stored in [this YAML file](../../src/main/resources/regression/beir-v1.0.0-cqadupstack-mathematica-splade-distil-cocodenser-medium.yaml).
-Note that this page is automatically generated from [this template](../../src/main/resources/docgen/templates/beir-v1.0.0-cqadupstack-mathematica-splade-distil-cocodenser-medium.template) as part of Anserini's regression pipeline, so do not modify this page directly; modify the template instead and then run `bin/build.sh` to rebuild the documentation.
-
-From one of our Waterloo servers (e.g., `orca`), the following command will perform the complete regression, end to end:
-
-```
-python src/main/python/run_regression.py --index --verify --search \
- --regression beir-v1.0.0-cqadupstack-mathematica-splade-distil-cocodenser-medium
-```
-
-## Corpus
-
-We make available a version of the BEIR-v1.0.0 cqadupstack-mathematica corpus that has already been processed with SPLADE-distil CoCodenser Medium, i.e., gone through document expansion and term reweighting.
-Thus, no neural inference is involved.
-For details on how to train SPLADE-distil CoCodenser Medium and perform inference, please see the [guide provided by Naver Labs Europe](https://github.com/naver/splade/tree/main/anserini_evaluation).
-
-Download the corpus and unpack into `collections/`:
-
-```
-wget https://rgw.cs.uwaterloo.ca/JIMMYLIN-bucket0/data/beir-v1.0.0-splade_distil_cocodenser_medium-cqadupstack-mathematica.tar -P collections/
-tar xvf collections/beir-v1.0.0-splade_distil_cocodenser_medium-cqadupstack-mathematica.tar -C collections/
-```
-
-To confirm, the tarball is 8.9 MB and has MD5 checksum `9c5a181e03cbc7f13abd0e0e4bf9158e`.
-
-With the corpus downloaded, the following command will perform the complete regression, end to end, on any machine:
-
-```
-python src/main/python/run_regression.py --index --verify --search \
- --regression beir-v1.0.0-cqadupstack-mathematica-splade-distil-cocodenser-medium \
- --corpus-path collections/beir-v1.0.0-splade_distil_cocodenser_medium-cqadupstack-mathematica
-```
-
-Alternatively, you can simply copy/paste from the commands below and obtain the same results.
-
-## Indexing
-
-Sample indexing command:
-
-```
-target/appassembler/bin/IndexCollection \
- -collection JsonVectorCollection \
- -input /path/to/beir-v1.0.0-cqadupstack-mathematica-splade_distil_cocodenser_medium \
- -generator DefaultLuceneDocumentGenerator \
- -index indexes/lucene-index.beir-v1.0.0-cqadupstack-mathematica-splade_distil_cocodenser_medium/ \
- -threads 16 -impact -pretokenized \
- >& logs/log.beir-v1.0.0-cqadupstack-mathematica-splade_distil_cocodenser_medium &
-```
-
-The path `/path/to/beir-v1.0.0-cqadupstack-mathematica-splade_distil_cocodenser_medium/` should point to the corpus downloaded above.
-
-The important indexing options to note here are `-impact -pretokenized`: the first tells Anserini not to encode BM25 document lengths into Lucene's norms (which is the default behavior), and the second tells it not to apply any additional tokenization to the pre-encoded tokens.
-Upon completion, we should have an index with 8,674 documents.
-
-For additional details, see the explanation of [common indexing options](../../docs/common-indexing-options.md).
-
-## Retrieval
-
-Topics and qrels are stored in [anserini-tools](https://github.com/castorini/anserini-tools/tree/master/topics-and-qrels), which is linked to the Anserini repo as a submodule.
-
-After indexing has completed, you should be able to perform retrieval as follows:
-
-```
-target/appassembler/bin/SearchCollection \
- -index indexes/lucene-index.beir-v1.0.0-cqadupstack-mathematica-splade_distil_cocodenser_medium/ \
- -topics tools/topics-and-qrels/topics.beir-v1.0.0-cqadupstack-mathematica.test.splade_distil_cocodenser_medium.tsv.gz \
- -topicReader TsvString \
- -output runs/run.beir-v1.0.0-cqadupstack-mathematica-splade_distil_cocodenser_medium.splade_distil_cocodenser_medium.topics.beir-v1.0.0-cqadupstack-mathematica.test.splade_distil_cocodenser_medium.txt \
- -impact -pretokenized -removeQuery -hits 1000 &
-```
-
-Evaluation can be performed using `trec_eval`:
-
-```
-target/appassembler/bin/trec_eval -c -m ndcg_cut.10 tools/topics-and-qrels/qrels.beir-v1.0.0-cqadupstack-mathematica.test.txt runs/run.beir-v1.0.0-cqadupstack-mathematica-splade_distil_cocodenser_medium.splade_distil_cocodenser_medium.topics.beir-v1.0.0-cqadupstack-mathematica.test.splade_distil_cocodenser_medium.txt
-target/appassembler/bin/trec_eval -c -m recall.100 tools/topics-and-qrels/qrels.beir-v1.0.0-cqadupstack-mathematica.test.txt runs/run.beir-v1.0.0-cqadupstack-mathematica-splade_distil_cocodenser_medium.splade_distil_cocodenser_medium.topics.beir-v1.0.0-cqadupstack-mathematica.test.splade_distil_cocodenser_medium.txt
-target/appassembler/bin/trec_eval -c -m recall.1000 tools/topics-and-qrels/qrels.beir-v1.0.0-cqadupstack-mathematica.test.txt runs/run.beir-v1.0.0-cqadupstack-mathematica-splade_distil_cocodenser_medium.splade_distil_cocodenser_medium.topics.beir-v1.0.0-cqadupstack-mathematica.test.splade_distil_cocodenser_medium.txt
-```
-
-## Effectiveness
-
-With the above commands, you should be able to reproduce the following results:
-
-| **nDCG@10** | **SPLADE-distill CoCodenser Medium**|
-|:-------------------------------------------------------------------------------------------------------------|-----------|
-| BEIR (v1.0.0): CQADupStack-mathematica | 0.2423 |
-| **R@100** | **SPLADE-distill CoCodenser Medium**|
-| BEIR (v1.0.0): CQADupStack-mathematica | 0.5732 |
-| **R@1000** | **SPLADE-distill CoCodenser Medium**|
-| BEIR (v1.0.0): CQADupStack-mathematica | 0.7848 |
-
-
-## Reproduction Log[*](../../docs/reproducibility.md)
-
-To add to this reproduction log, modify [this template](../../src/main/resources/docgen/templates/beir-v1.0.0-cqadupstack-mathematica-splade-distil-cocodenser-medium.template) and run `bin/build.sh` to rebuild the documentation.
diff --git a/docs/regressions/regressions-beir-v1.0.0-cqadupstack-physics-splade-distil-cocodenser-medium.md b/docs/regressions/regressions-beir-v1.0.0-cqadupstack-physics-splade-distil-cocodenser-medium.md
deleted file mode 100644
index 0190459b81..0000000000
--- a/docs/regressions/regressions-beir-v1.0.0-cqadupstack-physics-splade-distil-cocodenser-medium.md
+++ /dev/null
@@ -1,103 +0,0 @@
-# Anserini Regressions: BEIR (v1.0.0) — CQADupStack-physics
-
-**Model**: SPLADE-distil CoCodenser Medium
-
-This page describes regression experiments, integrated into Anserini's regression testing framework, using SPLADE-distil CoCodenser Medium on [BEIR (v1.0.0) — CQADupStack-physics](http://beir.ai/).
-SPLADE-distil CoCodenser Medium is an intermediate model version between [SPLADEv2](https://arxiv.org/abs/2109.10086) and [SPLADE++](https://arxiv.org/abs/2205.04733), where the model used distillation (as in SPLADEv2), but started with the CoCondenser pre-trained model.
-See the [official SPLADE repo](https://github.com/naver/splade) for more details; the model itself can be downloaded [here](http://download-de.europe.naverlabs.com/Splade_Release_Jan22/splade_distil_CoCodenser_medium.tar.gz).
-
-The exact configurations for these regressions are stored in [this YAML file](../../src/main/resources/regression/beir-v1.0.0-cqadupstack-physics-splade-distil-cocodenser-medium.yaml).
-Note that this page is automatically generated from [this template](../../src/main/resources/docgen/templates/beir-v1.0.0-cqadupstack-physics-splade-distil-cocodenser-medium.template) as part of Anserini's regression pipeline, so do not modify this page directly; modify the template instead and then run `bin/build.sh` to rebuild the documentation.
-
-From one of our Waterloo servers (e.g., `orca`), the following command will perform the complete regression, end to end:
-
-```
-python src/main/python/run_regression.py --index --verify --search \
- --regression beir-v1.0.0-cqadupstack-physics-splade-distil-cocodenser-medium
-```
-
-## Corpus
-
-We make available a version of the BEIR-v1.0.0 cqadupstack-physics corpus that has already been processed with SPLADE-distil CoCodenser Medium, i.e., gone through document expansion and term reweighting.
-Thus, no neural inference is involved.
-For details on how to train SPLADE-distil CoCodenser Medium and perform inference, please see the [guide provided by Naver Labs Europe](https://github.com/naver/splade/tree/main/anserini_evaluation).
-
-Download the corpus and unpack into `collections/`:
-
-```
-wget https://rgw.cs.uwaterloo.ca/JIMMYLIN-bucket0/data/beir-v1.0.0-splade_distil_cocodenser_medium-cqadupstack-physics.tar -P collections/
-tar xvf collections/beir-v1.0.0-splade_distil_cocodenser_medium-cqadupstack-physics.tar -C collections/
-```
-
-To confirm, the tarball is 8.9 MB and has MD5 checksum `9c5a181e03cbc7f13abd0e0e4bf9158e`.
-
-With the corpus downloaded, the following command will perform the complete regression, end to end, on any machine:
-
-```
-python src/main/python/run_regression.py --index --verify --search \
- --regression beir-v1.0.0-cqadupstack-physics-splade-distil-cocodenser-medium \
- --corpus-path collections/beir-v1.0.0-splade_distil_cocodenser_medium-cqadupstack-physics
-```
-
-Alternatively, you can simply copy/paste from the commands below and obtain the same results.
-
-## Indexing
-
-Sample indexing command:
-
-```
-target/appassembler/bin/IndexCollection \
- -collection JsonVectorCollection \
- -input /path/to/beir-v1.0.0-cqadupstack-physics-splade_distil_cocodenser_medium \
- -generator DefaultLuceneDocumentGenerator \
- -index indexes/lucene-index.beir-v1.0.0-cqadupstack-physics-splade_distil_cocodenser_medium/ \
- -threads 16 -impact -pretokenized \
- >& logs/log.beir-v1.0.0-cqadupstack-physics-splade_distil_cocodenser_medium &
-```
-
-The path `/path/to/beir-v1.0.0-cqadupstack-physics-splade_distil_cocodenser_medium/` should point to the corpus downloaded above.
-
-The important indexing options to note here are `-impact -pretokenized`: the first tells Anserini not to encode BM25 document lengths into Lucene's norms (which is the default behavior), and the second tells it not to apply any additional tokenization to the pre-encoded tokens.
-Upon completion, we should have an index with 8,674 documents.
-
-For additional details, see the explanation of [common indexing options](../../docs/common-indexing-options.md).
-
-## Retrieval
-
-Topics and qrels are stored [here](https://github.com/castorini/anserini-tools/tree/master/topics-and-qrels), which is linked to the Anserini repo as a submodule.
-
-After indexing has completed, you should be able to perform retrieval as follows:
-
-```
-target/appassembler/bin/SearchCollection \
- -index indexes/lucene-index.beir-v1.0.0-cqadupstack-physics-splade_distil_cocodenser_medium/ \
- -topics tools/topics-and-qrels/topics.beir-v1.0.0-cqadupstack-physics.test.splade_distil_cocodenser_medium.tsv.gz \
- -topicReader TsvString \
- -output runs/run.beir-v1.0.0-cqadupstack-physics-splade_distil_cocodenser_medium.splade_distil_cocodenser_medium.topics.beir-v1.0.0-cqadupstack-physics.test.splade_distil_cocodenser_medium.txt \
- -impact -pretokenized -removeQuery -hits 1000 &
-```
-
-Evaluation can be performed using `trec_eval`:
-
-```
-target/appassembler/bin/trec_eval -c -m ndcg_cut.10 tools/topics-and-qrels/qrels.beir-v1.0.0-cqadupstack-physics.test.txt runs/run.beir-v1.0.0-cqadupstack-physics-splade_distil_cocodenser_medium.splade_distil_cocodenser_medium.topics.beir-v1.0.0-cqadupstack-physics.test.splade_distil_cocodenser_medium.txt
-target/appassembler/bin/trec_eval -c -m recall.100 tools/topics-and-qrels/qrels.beir-v1.0.0-cqadupstack-physics.test.txt runs/run.beir-v1.0.0-cqadupstack-physics-splade_distil_cocodenser_medium.splade_distil_cocodenser_medium.topics.beir-v1.0.0-cqadupstack-physics.test.splade_distil_cocodenser_medium.txt
-target/appassembler/bin/trec_eval -c -m recall.1000 tools/topics-and-qrels/qrels.beir-v1.0.0-cqadupstack-physics.test.txt runs/run.beir-v1.0.0-cqadupstack-physics-splade_distil_cocodenser_medium.splade_distil_cocodenser_medium.topics.beir-v1.0.0-cqadupstack-physics.test.splade_distil_cocodenser_medium.txt
-```
-
-## Effectiveness
-
-With the above commands, you should be able to reproduce the following results:
-
-| **nDCG@10** | **SPLADE-distill CoCodenser Medium**|
-|:-------------------------------------------------------------------------------------------------------------|-----------|
-| BEIR (v1.0.0): CQADupStack-physics | 0.3668 |
-| **R@100** | **SPLADE-distill CoCodenser Medium**|
-| BEIR (v1.0.0): CQADupStack-physics | 0.7286 |
-| **R@1000** | **SPLADE-distill CoCodenser Medium**|
-| BEIR (v1.0.0): CQADupStack-physics | 0.8931 |
-
-
-## Reproduction Log[*](../../docs/reproducibility.md)
-
-To add to this reproduction log, modify [this template](../../src/main/resources/docgen/templates/beir-v1.0.0-cqadupstack-physics-splade-distil-cocodenser-medium.template) and run `bin/build.sh` to rebuild the documentation.
diff --git a/docs/regressions/regressions-beir-v1.0.0-cqadupstack-programmers-splade-distil-cocodenser-medium.md b/docs/regressions/regressions-beir-v1.0.0-cqadupstack-programmers-splade-distil-cocodenser-medium.md
deleted file mode 100644
index 427fd17c02..0000000000
--- a/docs/regressions/regressions-beir-v1.0.0-cqadupstack-programmers-splade-distil-cocodenser-medium.md
+++ /dev/null
@@ -1,103 +0,0 @@
-# Anserini Regressions: BEIR (v1.0.0) — CQADupStack-programmers
-
-**Model**: SPLADE-distil CoCodenser Medium
-
-This page describes regression experiments, integrated into Anserini's regression testing framework, using SPLADE-distil CoCodenser Medium on [BEIR (v1.0.0) — CQADupStack-programmers](http://beir.ai/).
-SPLADE-distil CoCodenser Medium is an intermediate model version between [SPLADEv2](https://arxiv.org/abs/2109.10086) and [SPLADE++](https://arxiv.org/abs/2205.04733), where the model used distillation (as in SPLADEv2), but started with the CoCondenser pre-trained model.
-See the [official SPLADE repo](https://github.com/naver/splade) for more details; the model itself can be downloaded [here](http://download-de.europe.naverlabs.com/Splade_Release_Jan22/splade_distil_CoCodenser_medium.tar.gz).
-
-The exact configurations for these regressions are stored in [this YAML file](../../src/main/resources/regression/beir-v1.0.0-cqadupstack-programmers-splade-distil-cocodenser-medium.yaml).
-Note that this page is automatically generated from [this template](../../src/main/resources/docgen/templates/beir-v1.0.0-cqadupstack-programmers-splade-distil-cocodenser-medium.template) as part of Anserini's regression pipeline, so do not modify this page directly; modify the template instead and then run `bin/build.sh` to rebuild the documentation.
-
-From one of our Waterloo servers (e.g., `orca`), the following command will perform the complete regression, end to end:
-
-```
-python src/main/python/run_regression.py --index --verify --search \
- --regression beir-v1.0.0-cqadupstack-programmers-splade-distil-cocodenser-medium
-```
-
-## Corpus
-
-We make available a version of the BEIR-v1.0.0 cqadupstack-programmers corpus that has already been processed with SPLADE-distil CoCodenser Medium, i.e., gone through document expansion and term reweighting.
-Thus, no neural inference is involved.
-For details on how to train SPLADE-distil CoCodenser Medium and perform inference, please see the [guide provided by Naver Labs Europe](https://github.com/naver/splade/tree/main/anserini_evaluation).
-
-Download the corpus and unpack into `collections/`:
-
-```
-wget https://rgw.cs.uwaterloo.ca/JIMMYLIN-bucket0/data/beir-v1.0.0-splade_distil_cocodenser_medium-cqadupstack-programmers.tar -P collections/
-tar xvf collections/beir-v1.0.0-splade_distil_cocodenser_medium-cqadupstack-programmers.tar -C collections/
-```
-
-To confirm, the tarball is 8.9 MB and has MD5 checksum `9c5a181e03cbc7f13abd0e0e4bf9158e`.
-
-With the corpus downloaded, the following command will perform the complete regression, end to end, on any machine:
-
-```
-python src/main/python/run_regression.py --index --verify --search \
- --regression beir-v1.0.0-cqadupstack-programmers-splade-distil-cocodenser-medium \
- --corpus-path collections/beir-v1.0.0-splade_distil_cocodenser_medium-cqadupstack-programmers
-```
-
-Alternatively, you can simply copy/paste from the commands below and obtain the same results.
-
-## Indexing
-
-Sample indexing command:
-
-```
-target/appassembler/bin/IndexCollection \
- -collection JsonVectorCollection \
- -input /path/to/beir-v1.0.0-cqadupstack-programmers-splade_distil_cocodenser_medium \
- -generator DefaultLuceneDocumentGenerator \
- -index indexes/lucene-index.beir-v1.0.0-cqadupstack-programmers-splade_distil_cocodenser_medium/ \
- -threads 16 -impact -pretokenized \
- >& logs/log.beir-v1.0.0-cqadupstack-programmers-splade_distil_cocodenser_medium &
-```
-
-The path `/path/to/beir-v1.0.0-cqadupstack-programmers-splade_distil_cocodenser_medium/` should point to the corpus downloaded above.
-
-The important indexing options to note here are `-impact -pretokenized`: the first tells Anserini not to encode BM25 document lengths into Lucene's norms (which is the default behavior), and the second tells it not to apply any additional tokenization to the pre-encoded tokens.
-Upon completion, we should have an index with 8,674 documents.
-
-For additional details, see the explanation of [common indexing options](../../docs/common-indexing-options.md).
-
-## Retrieval
-
-Topics and qrels are stored [here](https://github.com/castorini/anserini-tools/tree/master/topics-and-qrels), which is linked to the Anserini repo as a submodule.
-
-After indexing has completed, you should be able to perform retrieval as follows:
-
-```
-target/appassembler/bin/SearchCollection \
- -index indexes/lucene-index.beir-v1.0.0-cqadupstack-programmers-splade_distil_cocodenser_medium/ \
- -topics tools/topics-and-qrels/topics.beir-v1.0.0-cqadupstack-programmers.test.splade_distil_cocodenser_medium.tsv.gz \
- -topicReader TsvString \
- -output runs/run.beir-v1.0.0-cqadupstack-programmers-splade_distil_cocodenser_medium.splade_distil_cocodenser_medium.topics.beir-v1.0.0-cqadupstack-programmers.test.splade_distil_cocodenser_medium.txt \
- -impact -pretokenized -removeQuery -hits 1000 &
-```
-
-Evaluation can be performed using `trec_eval`:
-
-```
-target/appassembler/bin/trec_eval -c -m ndcg_cut.10 tools/topics-and-qrels/qrels.beir-v1.0.0-cqadupstack-programmers.test.txt runs/run.beir-v1.0.0-cqadupstack-programmers-splade_distil_cocodenser_medium.splade_distil_cocodenser_medium.topics.beir-v1.0.0-cqadupstack-programmers.test.splade_distil_cocodenser_medium.txt
-target/appassembler/bin/trec_eval -c -m recall.100 tools/topics-and-qrels/qrels.beir-v1.0.0-cqadupstack-programmers.test.txt runs/run.beir-v1.0.0-cqadupstack-programmers-splade_distil_cocodenser_medium.splade_distil_cocodenser_medium.topics.beir-v1.0.0-cqadupstack-programmers.test.splade_distil_cocodenser_medium.txt
-target/appassembler/bin/trec_eval -c -m recall.1000 tools/topics-and-qrels/qrels.beir-v1.0.0-cqadupstack-programmers.test.txt runs/run.beir-v1.0.0-cqadupstack-programmers-splade_distil_cocodenser_medium.splade_distil_cocodenser_medium.topics.beir-v1.0.0-cqadupstack-programmers.test.splade_distil_cocodenser_medium.txt
-```
-
-## Effectiveness
-
-With the above commands, you should be able to reproduce the following results:
-
-| **nDCG@10** | **SPLADE-distill CoCodenser Medium**|
-|:-------------------------------------------------------------------------------------------------------------|-----------|
-| BEIR (v1.0.0): CQADupStack-programmers | 0.3412 |
-| **R@100** | **SPLADE-distill CoCodenser Medium**|
-| BEIR (v1.0.0): CQADupStack-programmers | 0.6653 |
-| **R@1000** | **SPLADE-distill CoCodenser Medium**|
-| BEIR (v1.0.0): CQADupStack-programmers | 0.8451 |
-
-
-## Reproduction Log[*](../../docs/reproducibility.md)
-
-To add to this reproduction log, modify [this template](../../src/main/resources/docgen/templates/beir-v1.0.0-cqadupstack-programmers-splade-distil-cocodenser-medium.template) and run `bin/build.sh` to rebuild the documentation.
diff --git a/docs/regressions/regressions-beir-v1.0.0-cqadupstack-stats-splade-distil-cocodenser-medium.md b/docs/regressions/regressions-beir-v1.0.0-cqadupstack-stats-splade-distil-cocodenser-medium.md
deleted file mode 100644
index 78baf0c401..0000000000
--- a/docs/regressions/regressions-beir-v1.0.0-cqadupstack-stats-splade-distil-cocodenser-medium.md
+++ /dev/null
@@ -1,103 +0,0 @@
-# Anserini Regressions: BEIR (v1.0.0) — CQADupStack-stats
-
-**Model**: SPLADE-distil CoCodenser Medium
-
-This page describes regression experiments, integrated into Anserini's regression testing framework, using SPLADE-distil CoCodenser Medium on [BEIR (v1.0.0) — CQADupStack-stats](http://beir.ai/).
-SPLADE-distil CoCodenser Medium is an intermediate model version between [SPLADEv2](https://arxiv.org/abs/2109.10086) and [SPLADE++](https://arxiv.org/abs/2205.04733), where the model used distillation (as in SPLADEv2), but started with the CoCondenser pre-trained model.
-See the [official SPLADE repo](https://github.com/naver/splade) for more details; the model itself can be downloaded [here](http://download-de.europe.naverlabs.com/Splade_Release_Jan22/splade_distil_CoCodenser_medium.tar.gz).
-
-The exact configurations for these regressions are stored in [this YAML file](../../src/main/resources/regression/beir-v1.0.0-cqadupstack-stats-splade-distil-cocodenser-medium.yaml).
-Note that this page is automatically generated from [this template](../../src/main/resources/docgen/templates/beir-v1.0.0-cqadupstack-stats-splade-distil-cocodenser-medium.template) as part of Anserini's regression pipeline, so do not modify this page directly; modify the template instead and then run `bin/build.sh` to rebuild the documentation.
-
-From one of our Waterloo servers (e.g., `orca`), the following command will perform the complete regression, end to end:
-
-```
-python src/main/python/run_regression.py --index --verify --search \
- --regression beir-v1.0.0-cqadupstack-stats-splade-distil-cocodenser-medium
-```
-
-## Corpus
-
-We make available a version of the BEIR-v1.0.0 cqadupstack-stats corpus that has already been processed with SPLADE-distil CoCodenser Medium, i.e., gone through document expansion and term reweighting.
-Thus, no neural inference is involved.
-For details on how to train SPLADE-distil CoCodenser Medium and perform inference, please see the [guide provided by Naver Labs Europe](https://github.com/naver/splade/tree/main/anserini_evaluation).
-
-Download the corpus and unpack into `collections/`:
-
-```
-wget https://rgw.cs.uwaterloo.ca/JIMMYLIN-bucket0/data/beir-v1.0.0-splade_distil_cocodenser_medium-cqadupstack-stats.tar -P collections/
-tar xvf collections/beir-v1.0.0-splade_distil_cocodenser_medium-cqadupstack-stats.tar -C collections/
-```
-
-To confirm, the tarball is 8.9 MB and has MD5 checksum `9c5a181e03cbc7f13abd0e0e4bf9158e`.
-
-With the corpus downloaded, the following command will perform the complete regression, end to end, on any machine:
-
-```
-python src/main/python/run_regression.py --index --verify --search \
- --regression beir-v1.0.0-cqadupstack-stats-splade-distil-cocodenser-medium \
- --corpus-path collections/beir-v1.0.0-splade_distil_cocodenser_medium-cqadupstack-stats
-```
-
-Alternatively, you can simply copy/paste from the commands below and obtain the same results.
-
-## Indexing
-
-Sample indexing command:
-
-```
-target/appassembler/bin/IndexCollection \
- -collection JsonVectorCollection \
- -input /path/to/beir-v1.0.0-cqadupstack-stats-splade_distil_cocodenser_medium \
- -generator DefaultLuceneDocumentGenerator \
- -index indexes/lucene-index.beir-v1.0.0-cqadupstack-stats-splade_distil_cocodenser_medium/ \
- -threads 16 -impact -pretokenized \
- >& logs/log.beir-v1.0.0-cqadupstack-stats-splade_distil_cocodenser_medium &
-```
-
-The path `/path/to/beir-v1.0.0-cqadupstack-stats-splade_distil_cocodenser_medium/` should point to the corpus downloaded above.
-
-The important indexing options to note here are `-impact -pretokenized`: the first tells Anserini not to encode BM25 document lengths into Lucene's norms (which is the default behavior), and the second tells it not to apply any additional tokenization to the pre-encoded tokens.
-Upon completion, we should have an index with 8,674 documents.
-
-For additional details, see the explanation of [common indexing options](../../docs/common-indexing-options.md).
-
-## Retrieval
-
-Topics and qrels are stored [here](https://github.com/castorini/anserini-tools/tree/master/topics-and-qrels), which is linked to the Anserini repo as a submodule.
-
-After indexing has completed, you should be able to perform retrieval as follows:
-
-```
-target/appassembler/bin/SearchCollection \
- -index indexes/lucene-index.beir-v1.0.0-cqadupstack-stats-splade_distil_cocodenser_medium/ \
- -topics tools/topics-and-qrels/topics.beir-v1.0.0-cqadupstack-stats.test.splade_distil_cocodenser_medium.tsv.gz \
- -topicReader TsvString \
- -output runs/run.beir-v1.0.0-cqadupstack-stats-splade_distil_cocodenser_medium.splade_distil_cocodenser_medium.topics.beir-v1.0.0-cqadupstack-stats.test.splade_distil_cocodenser_medium.txt \
- -impact -pretokenized -removeQuery -hits 1000 &
-```
-
-Evaluation can be performed using `trec_eval`:
-
-```
-target/appassembler/bin/trec_eval -c -m ndcg_cut.10 tools/topics-and-qrels/qrels.beir-v1.0.0-cqadupstack-stats.test.txt runs/run.beir-v1.0.0-cqadupstack-stats-splade_distil_cocodenser_medium.splade_distil_cocodenser_medium.topics.beir-v1.0.0-cqadupstack-stats.test.splade_distil_cocodenser_medium.txt
-target/appassembler/bin/trec_eval -c -m recall.100 tools/topics-and-qrels/qrels.beir-v1.0.0-cqadupstack-stats.test.txt runs/run.beir-v1.0.0-cqadupstack-stats-splade_distil_cocodenser_medium.splade_distil_cocodenser_medium.topics.beir-v1.0.0-cqadupstack-stats.test.splade_distil_cocodenser_medium.txt
-target/appassembler/bin/trec_eval -c -m recall.1000 tools/topics-and-qrels/qrels.beir-v1.0.0-cqadupstack-stats.test.txt runs/run.beir-v1.0.0-cqadupstack-stats-splade_distil_cocodenser_medium.splade_distil_cocodenser_medium.topics.beir-v1.0.0-cqadupstack-stats.test.splade_distil_cocodenser_medium.txt
-```
-
-## Effectiveness
-
-With the above commands, you should be able to reproduce the following results:
-
-| **nDCG@10** | **SPLADE-distill CoCodenser Medium**|
-|:-------------------------------------------------------------------------------------------------------------|-----------|
-| BEIR (v1.0.0): CQADupStack-stats | 0.3142 |
-| **R@100** | **SPLADE-distill CoCodenser Medium**|
-| BEIR (v1.0.0): CQADupStack-stats | 0.5889 |
-| **R@1000** | **SPLADE-distill CoCodenser Medium**|
-| BEIR (v1.0.0): CQADupStack-stats | 0.7823 |
-
-
-## Reproduction Log[*](../../docs/reproducibility.md)
-
-To add to this reproduction log, modify [this template](../../src/main/resources/docgen/templates/beir-v1.0.0-cqadupstack-stats-splade-distil-cocodenser-medium.template) and run `bin/build.sh` to rebuild the documentation.
diff --git a/docs/regressions/regressions-beir-v1.0.0-cqadupstack-tex-splade-distil-cocodenser-medium.md b/docs/regressions/regressions-beir-v1.0.0-cqadupstack-tex-splade-distil-cocodenser-medium.md
deleted file mode 100644
index eab2680b5f..0000000000
--- a/docs/regressions/regressions-beir-v1.0.0-cqadupstack-tex-splade-distil-cocodenser-medium.md
+++ /dev/null
@@ -1,103 +0,0 @@
-# Anserini Regressions: BEIR (v1.0.0) — CQADupStack-tex
-
-**Model**: SPLADE-distil CoCodenser Medium
-
-This page describes regression experiments, integrated into Anserini's regression testing framework, using SPLADE-distil CoCodenser Medium on [BEIR (v1.0.0) — CQADupStack-tex](http://beir.ai/).
-SPLADE-distil CoCodenser Medium is an intermediate model version between [SPLADEv2](https://arxiv.org/abs/2109.10086) and [SPLADE++](https://arxiv.org/abs/2205.04733), where the model used distillation (as in SPLADEv2), but started with the CoCondenser pre-trained model.
-See the [official SPLADE repo](https://github.com/naver/splade) for more details; the model itself can be downloaded [here](http://download-de.europe.naverlabs.com/Splade_Release_Jan22/splade_distil_CoCodenser_medium.tar.gz).
-
-The exact configurations for these regressions are stored in [this YAML file](../../src/main/resources/regression/beir-v1.0.0-cqadupstack-tex-splade-distil-cocodenser-medium.yaml).
-Note that this page is automatically generated from [this template](../../src/main/resources/docgen/templates/beir-v1.0.0-cqadupstack-tex-splade-distil-cocodenser-medium.template) as part of Anserini's regression pipeline, so do not modify this page directly; modify the template instead and then run `bin/build.sh` to rebuild the documentation.
-
-From one of our Waterloo servers (e.g., `orca`), the following command will perform the complete regression, end to end:
-
-```
-python src/main/python/run_regression.py --index --verify --search \
- --regression beir-v1.0.0-cqadupstack-tex-splade-distil-cocodenser-medium
-```
-
-## Corpus
-
-We make available a version of the BEIR-v1.0.0 cqadupstack-tex corpus that has already been processed with SPLADE-distil CoCodenser Medium, i.e., gone through document expansion and term reweighting.
-Thus, no neural inference is involved.
-For details on how to train SPLADE-distil CoCodenser Medium and perform inference, please see the [guide provided by Naver Labs Europe](https://github.com/naver/splade/tree/main/anserini_evaluation).
-
-Download the corpus and unpack into `collections/`:
-
-```
-wget https://rgw.cs.uwaterloo.ca/JIMMYLIN-bucket0/data/beir-v1.0.0-splade_distil_cocodenser_medium-cqadupstack-tex.tar -P collections/
-tar xvf collections/beir-v1.0.0-splade_distil_cocodenser_medium-cqadupstack-tex.tar -C collections/
-```
-
-To confirm, the tarball is 8.9 MB and has MD5 checksum `9c5a181e03cbc7f13abd0e0e4bf9158e`.
-
-With the corpus downloaded, the following command will perform the complete regression, end to end, on any machine:
-
-```
-python src/main/python/run_regression.py --index --verify --search \
- --regression beir-v1.0.0-cqadupstack-tex-splade-distil-cocodenser-medium \
- --corpus-path collections/beir-v1.0.0-splade_distil_cocodenser_medium-cqadupstack-tex
-```
-
-Alternatively, you can simply copy/paste from the commands below and obtain the same results.
-
-## Indexing
-
-Sample indexing command:
-
-```
-target/appassembler/bin/IndexCollection \
- -collection JsonVectorCollection \
- -input /path/to/beir-v1.0.0-cqadupstack-tex-splade_distil_cocodenser_medium \
- -generator DefaultLuceneDocumentGenerator \
- -index indexes/lucene-index.beir-v1.0.0-cqadupstack-tex-splade_distil_cocodenser_medium/ \
- -threads 16 -impact -pretokenized \
- >& logs/log.beir-v1.0.0-cqadupstack-tex-splade_distil_cocodenser_medium &
-```
-
-The path `/path/to/beir-v1.0.0-cqadupstack-tex-splade_distil_cocodenser_medium/` should point to the corpus downloaded above.
-
-The important indexing options to note here are `-impact -pretokenized`: the first tells Anserini not to encode BM25 document lengths into Lucene's norms (which is the default behavior), and the second tells it not to apply any additional tokenization to the pre-encoded tokens.
-Upon completion, we should have an index with 8,674 documents.
-
-For additional details, see the explanation of [common indexing options](../../docs/common-indexing-options.md).
-
-## Retrieval
-
-Topics and qrels are stored [here](https://github.com/castorini/anserini-tools/tree/master/topics-and-qrels), which is linked to the Anserini repo as a submodule.
-
-After indexing has completed, you should be able to perform retrieval as follows:
-
-```
-target/appassembler/bin/SearchCollection \
- -index indexes/lucene-index.beir-v1.0.0-cqadupstack-tex-splade_distil_cocodenser_medium/ \
- -topics tools/topics-and-qrels/topics.beir-v1.0.0-cqadupstack-tex.test.splade_distil_cocodenser_medium.tsv.gz \
- -topicReader TsvString \
- -output runs/run.beir-v1.0.0-cqadupstack-tex-splade_distil_cocodenser_medium.splade_distil_cocodenser_medium.topics.beir-v1.0.0-cqadupstack-tex.test.splade_distil_cocodenser_medium.txt \
- -impact -pretokenized -removeQuery -hits 1000 &
-```
-
-Evaluation can be performed using `trec_eval`:
-
-```
-target/appassembler/bin/trec_eval -c -m ndcg_cut.10 tools/topics-and-qrels/qrels.beir-v1.0.0-cqadupstack-tex.test.txt runs/run.beir-v1.0.0-cqadupstack-tex-splade_distil_cocodenser_medium.splade_distil_cocodenser_medium.topics.beir-v1.0.0-cqadupstack-tex.test.splade_distil_cocodenser_medium.txt
-target/appassembler/bin/trec_eval -c -m recall.100 tools/topics-and-qrels/qrels.beir-v1.0.0-cqadupstack-tex.test.txt runs/run.beir-v1.0.0-cqadupstack-tex-splade_distil_cocodenser_medium.splade_distil_cocodenser_medium.topics.beir-v1.0.0-cqadupstack-tex.test.splade_distil_cocodenser_medium.txt
-target/appassembler/bin/trec_eval -c -m recall.1000 tools/topics-and-qrels/qrels.beir-v1.0.0-cqadupstack-tex.test.txt runs/run.beir-v1.0.0-cqadupstack-tex-splade_distil_cocodenser_medium.splade_distil_cocodenser_medium.topics.beir-v1.0.0-cqadupstack-tex.test.splade_distil_cocodenser_medium.txt
-```
-
-## Effectiveness
-
-With the above commands, you should be able to reproduce the following results:
-
-| **nDCG@10** | **SPLADE-distill CoCodenser Medium**|
-|:-------------------------------------------------------------------------------------------------------------|-----------|
-| BEIR (v1.0.0): CQADupStack-tex | 0.2575 |
-| **R@100** | **SPLADE-distill CoCodenser Medium**|
-| BEIR (v1.0.0): CQADupStack-tex | 0.5231 |
-| **R@1000** | **SPLADE-distill CoCodenser Medium**|
-| BEIR (v1.0.0): CQADupStack-tex | 0.7372 |
-
-
-## Reproduction Log[*](../../docs/reproducibility.md)
-
-To add to this reproduction log, modify [this template](../../src/main/resources/docgen/templates/beir-v1.0.0-cqadupstack-tex-splade-distil-cocodenser-medium.template) and run `bin/build.sh` to rebuild the documentation.
diff --git a/docs/regressions/regressions-beir-v1.0.0-cqadupstack-unix-splade-distil-cocodenser-medium.md b/docs/regressions/regressions-beir-v1.0.0-cqadupstack-unix-splade-distil-cocodenser-medium.md
deleted file mode 100644
index 82426356f4..0000000000
--- a/docs/regressions/regressions-beir-v1.0.0-cqadupstack-unix-splade-distil-cocodenser-medium.md
+++ /dev/null
@@ -1,103 +0,0 @@
-# Anserini Regressions: BEIR (v1.0.0) — CQADupStack-unix
-
-**Model**: SPLADE-distil CoCodenser Medium
-
-This page describes regression experiments, integrated into Anserini's regression testing framework, using SPLADE-distil CoCodenser Medium on [BEIR (v1.0.0) — CQADupStack-unix](http://beir.ai/).
-SPLADE-distil CoCodenser Medium is an intermediate model version between [SPLADEv2](https://arxiv.org/abs/2109.10086) and [SPLADE++](https://arxiv.org/abs/2205.04733), where the model used distillation (as in SPLADEv2), but started with the CoCondenser pre-trained model.
-See the [official SPLADE repo](https://github.com/naver/splade) for more details; the model itself can be downloaded [here](http://download-de.europe.naverlabs.com/Splade_Release_Jan22/splade_distil_CoCodenser_medium.tar.gz).
-
-The exact configurations for these regressions are stored in [this YAML file](../../src/main/resources/regression/beir-v1.0.0-cqadupstack-unix-splade-distil-cocodenser-medium.yaml).
-Note that this page is automatically generated from [this template](../../src/main/resources/docgen/templates/beir-v1.0.0-cqadupstack-unix-splade-distil-cocodenser-medium.template) as part of Anserini's regression pipeline, so do not modify this page directly; modify the template instead and then run `bin/build.sh` to rebuild the documentation.
-
-From one of our Waterloo servers (e.g., `orca`), the following command will perform the complete regression, end to end:
-
-```
-python src/main/python/run_regression.py --index --verify --search \
- --regression beir-v1.0.0-cqadupstack-unix-splade-distil-cocodenser-medium
-```
-
-## Corpus
-
-We make available a version of the BEIR-v1.0.0 cqadupstack-unix corpus that has already been processed with SPLADE-distil CoCodenser Medium, i.e., gone through document expansion and term reweighting.
-Thus, no neural inference is involved.
-For details on how to train SPLADE-distil CoCodenser Medium and perform inference, please see the [guide provided by Naver Labs Europe](https://github.com/naver/splade/tree/main/anserini_evaluation).
-
-Download the corpus and unpack into `collections/`:
-
-```
-wget https://rgw.cs.uwaterloo.ca/JIMMYLIN-bucket0/data/beir-v1.0.0-splade_distil_cocodenser_medium-cqadupstack-unix.tar -P collections/
-tar xvf collections/beir-v1.0.0-splade_distil_cocodenser_medium-cqadupstack-unix.tar -C collections/
-```
-
-To confirm, the tarball is 8.9 MB and has MD5 checksum `9c5a181e03cbc7f13abd0e0e4bf9158e`.
-
-With the corpus downloaded, the following command will perform the complete regression, end to end, on any machine:
-
-```
-python src/main/python/run_regression.py --index --verify --search \
- --regression beir-v1.0.0-cqadupstack-unix-splade-distil-cocodenser-medium \
- --corpus-path collections/beir-v1.0.0-splade_distil_cocodenser_medium-cqadupstack-unix
-```
-
-Alternatively, you can simply copy/paste from the commands below and obtain the same results.
-
-## Indexing
-
-Sample indexing command:
-
-```
-target/appassembler/bin/IndexCollection \
- -collection JsonVectorCollection \
- -input /path/to/beir-v1.0.0-cqadupstack-unix-splade_distil_cocodenser_medium \
- -generator DefaultLuceneDocumentGenerator \
- -index indexes/lucene-index.beir-v1.0.0-cqadupstack-unix-splade_distil_cocodenser_medium/ \
- -threads 16 -impact -pretokenized \
- >& logs/log.beir-v1.0.0-cqadupstack-unix-splade_distil_cocodenser_medium &
-```
-
-The path `/path/to/beir-v1.0.0-cqadupstack-unix-splade_distil_cocodenser_medium/` should point to the corpus downloaded above.
-
-The important indexing options to note here are `-impact -pretokenized`: the first tells Anserini not to encode BM25 document lengths into Lucene's norms (which is the default behavior), and the second tells it not to apply any additional tokenization to the pre-encoded tokens.
-Upon completion, we should have an index with 8,674 documents.
-
-For additional details, see the explanation of [common indexing options](../../docs/common-indexing-options.md).
-
-## Retrieval
-
-Topics and qrels are stored [here](https://github.com/castorini/anserini-tools/tree/master/topics-and-qrels), which is linked to the Anserini repo as a submodule.
-
-After indexing has completed, you should be able to perform retrieval as follows:
-
-```
-target/appassembler/bin/SearchCollection \
- -index indexes/lucene-index.beir-v1.0.0-cqadupstack-unix-splade_distil_cocodenser_medium/ \
- -topics tools/topics-and-qrels/topics.beir-v1.0.0-cqadupstack-unix.test.splade_distil_cocodenser_medium.tsv.gz \
- -topicReader TsvString \
- -output runs/run.beir-v1.0.0-cqadupstack-unix-splade_distil_cocodenser_medium.splade_distil_cocodenser_medium.topics.beir-v1.0.0-cqadupstack-unix.test.splade_distil_cocodenser_medium.txt \
- -impact -pretokenized -removeQuery -hits 1000 &
-```
-
-Evaluation can be performed using `trec_eval`:
-
-```
-target/appassembler/bin/trec_eval -c -m ndcg_cut.10 tools/topics-and-qrels/qrels.beir-v1.0.0-cqadupstack-unix.test.txt runs/run.beir-v1.0.0-cqadupstack-unix-splade_distil_cocodenser_medium.splade_distil_cocodenser_medium.topics.beir-v1.0.0-cqadupstack-unix.test.splade_distil_cocodenser_medium.txt
-target/appassembler/bin/trec_eval -c -m recall.100 tools/topics-and-qrels/qrels.beir-v1.0.0-cqadupstack-unix.test.txt runs/run.beir-v1.0.0-cqadupstack-unix-splade_distil_cocodenser_medium.splade_distil_cocodenser_medium.topics.beir-v1.0.0-cqadupstack-unix.test.splade_distil_cocodenser_medium.txt
-target/appassembler/bin/trec_eval -c -m recall.1000 tools/topics-and-qrels/qrels.beir-v1.0.0-cqadupstack-unix.test.txt runs/run.beir-v1.0.0-cqadupstack-unix-splade_distil_cocodenser_medium.splade_distil_cocodenser_medium.topics.beir-v1.0.0-cqadupstack-unix.test.splade_distil_cocodenser_medium.txt
-```
-
-## Effectiveness
-
-With the above commands, you should be able to reproduce the following results:
-
-| **nDCG@10** | **SPLADE-distil CoCodenser Medium** |
-|:-------------------------------------------------------------------------------------------------------------|-----------|
-| BEIR (v1.0.0): CQADupStack-unix | 0.3292 |
-| **R@100** | **SPLADE-distil CoCodenser Medium** |
-| BEIR (v1.0.0): CQADupStack-unix | 0.6192 |
-| **R@1000** | **SPLADE-distil CoCodenser Medium** |
-| BEIR (v1.0.0): CQADupStack-unix | 0.8225 |
-
-
-## Reproduction Log[*](../../docs/reproducibility.md)
-
-To add to this reproduction log, modify [this template](../../src/main/resources/docgen/templates/beir-v1.0.0-cqadupstack-unix-splade-distil-cocodenser-medium.template) and run `bin/build.sh` to rebuild the documentation.
diff --git a/docs/regressions/regressions-beir-v1.0.0-cqadupstack-webmasters-splade-distil-cocodenser-medium.md b/docs/regressions/regressions-beir-v1.0.0-cqadupstack-webmasters-splade-distil-cocodenser-medium.md
deleted file mode 100644
index d74b48a3da..0000000000
--- a/docs/regressions/regressions-beir-v1.0.0-cqadupstack-webmasters-splade-distil-cocodenser-medium.md
+++ /dev/null
@@ -1,103 +0,0 @@
-# Anserini Regressions: BEIR (v1.0.0) — CQADupStack-webmasters
-
-**Model**: SPLADE-distil CoCodenser Medium
-
-This page describes regression experiments, integrated into Anserini's regression testing framework, using SPLADE-distil CoCodenser Medium on [BEIR (v1.0.0) — CQADupStack-webmasters](http://beir.ai/).
-SPLADE-distil CoCodenser Medium is an intermediate model version between [SPLADEv2](https://arxiv.org/abs/2109.10086) and [SPLADE++](https://arxiv.org/abs/2205.04733), where the model used distillation (as in SPLADEv2), but started with the CoCondenser pre-trained model.
-See the [official SPLADE repo](https://github.com/naver/splade) for more details; the model itself can be downloaded [here](http://download-de.europe.naverlabs.com/Splade_Release_Jan22/splade_distil_CoCodenser_medium.tar.gz).
-
-The exact configurations for these regressions are stored in [this YAML file](../../src/main/resources/regression/beir-v1.0.0-cqadupstack-webmasters-splade-distil-cocodenser-medium.yaml).
-Note that this page is automatically generated from [this template](../../src/main/resources/docgen/templates/beir-v1.0.0-cqadupstack-webmasters-splade-distil-cocodenser-medium.template) as part of Anserini's regression pipeline, so do not modify this page directly; modify the template instead and then run `bin/build.sh` to rebuild the documentation.
-
-From one of our Waterloo servers (e.g., `orca`), the following command will perform the complete regression, end to end:
-
-```
-python src/main/python/run_regression.py --index --verify --search \
- --regression beir-v1.0.0-cqadupstack-webmasters-splade-distil-cocodenser-medium
-```
-
-## Corpus
-
-We make available a version of the BEIR-v1.0.0 cqadupstack-webmasters corpus that has already been processed with SPLADE-distil CoCodenser Medium, i.e., gone through document expansion and term reweighting.
-Thus, no neural inference is involved.
-For details on how to train SPLADE-distil CoCodenser Medium and perform inference, please see the [guide provided by Naver Labs Europe](https://github.com/naver/splade/tree/main/anserini_evaluation).
-
-Download the corpus and unpack into `collections/`:
-
-```
-wget https://rgw.cs.uwaterloo.ca/JIMMYLIN-bucket0/data/beir-v1.0.0-splade_distil_cocodenser_medium-cqadupstack-webmasters.tar -P collections/
-tar xvf collections/beir-v1.0.0-splade_distil_cocodenser_medium-cqadupstack-webmasters.tar -C collections/
-```
-
-To confirm, the tarball is 8.9 MB and has MD5 checksum `9c5a181e03cbc7f13abd0e0e4bf9158e`.
-
-With the corpus downloaded, the following command will perform the complete regression, end to end, on any machine:
-
-```
-python src/main/python/run_regression.py --index --verify --search \
- --regression beir-v1.0.0-cqadupstack-webmasters-splade-distil-cocodenser-medium \
- --corpus-path collections/beir-v1.0.0-splade_distil_cocodenser_medium-cqadupstack-webmasters
-```
-
-Alternatively, you can simply copy/paste from the commands below and obtain the same results.
-
-## Indexing
-
-Sample indexing command:
-
-```
-target/appassembler/bin/IndexCollection \
- -collection JsonVectorCollection \
- -input /path/to/beir-v1.0.0-cqadupstack-webmasters-splade_distil_cocodenser_medium \
- -generator DefaultLuceneDocumentGenerator \
- -index indexes/lucene-index.beir-v1.0.0-cqadupstack-webmasters-splade_distil_cocodenser_medium/ \
- -threads 16 -impact -pretokenized \
- >& logs/log.beir-v1.0.0-cqadupstack-webmasters-splade_distil_cocodenser_medium &
-```
-
-The path `/path/to/beir-v1.0.0-cqadupstack-webmasters-splade_distil_cocodenser_medium/` should point to the corpus downloaded above.
-
-The important indexing options to note here are `-impact -pretokenized`: the first tells Anserini not to encode BM25 document lengths into Lucene's norms (the default behavior), and the second tells it not to apply any additional tokenization to the pre-encoded tokens.
-Upon completion, we should have an index with 8,674 documents.
-
-For additional details, see the explanation of [common indexing options](../../docs/common-indexing-options.md).
-
-## Retrieval
-
-Topics and qrels are stored [here](https://github.com/castorini/anserini-tools/tree/master/topics-and-qrels), which is linked to the Anserini repo as a submodule.
-
-After indexing has completed, you should be able to perform retrieval as follows:
-
-```
-target/appassembler/bin/SearchCollection \
- -index indexes/lucene-index.beir-v1.0.0-cqadupstack-webmasters-splade_distil_cocodenser_medium/ \
- -topics tools/topics-and-qrels/topics.beir-v1.0.0-cqadupstack-webmasters.test.splade_distil_cocodenser_medium.tsv.gz \
- -topicReader TsvString \
- -output runs/run.beir-v1.0.0-cqadupstack-webmasters-splade_distil_cocodenser_medium.splade_distil_cocodenser_medium.topics.beir-v1.0.0-cqadupstack-webmasters.test.splade_distil_cocodenser_medium.txt \
- -impact -pretokenized -removeQuery -hits 1000 &
-```
-
-Evaluation can be performed using `trec_eval`:
-
-```
-target/appassembler/bin/trec_eval -c -m ndcg_cut.10 tools/topics-and-qrels/qrels.beir-v1.0.0-cqadupstack-webmasters.test.txt runs/run.beir-v1.0.0-cqadupstack-webmasters-splade_distil_cocodenser_medium.splade_distil_cocodenser_medium.topics.beir-v1.0.0-cqadupstack-webmasters.test.splade_distil_cocodenser_medium.txt
-target/appassembler/bin/trec_eval -c -m recall.100 tools/topics-and-qrels/qrels.beir-v1.0.0-cqadupstack-webmasters.test.txt runs/run.beir-v1.0.0-cqadupstack-webmasters-splade_distil_cocodenser_medium.splade_distil_cocodenser_medium.topics.beir-v1.0.0-cqadupstack-webmasters.test.splade_distil_cocodenser_medium.txt
-target/appassembler/bin/trec_eval -c -m recall.1000 tools/topics-and-qrels/qrels.beir-v1.0.0-cqadupstack-webmasters.test.txt runs/run.beir-v1.0.0-cqadupstack-webmasters-splade_distil_cocodenser_medium.splade_distil_cocodenser_medium.topics.beir-v1.0.0-cqadupstack-webmasters.test.splade_distil_cocodenser_medium.txt
-```
-
-## Effectiveness
-
-With the above commands, you should be able to reproduce the following results:
-
-| **nDCG@10** | **SPLADE-distil CoCodenser Medium** |
-|:-------------------------------------------------------------------------------------------------------------|-----------|
-| BEIR (v1.0.0): CQADupStack-webmasters | 0.3343 |
-| **R@100** | **SPLADE-distil CoCodenser Medium** |
-| BEIR (v1.0.0): CQADupStack-webmasters | 0.6404 |
-| **R@1000** | **SPLADE-distil CoCodenser Medium** |
-| BEIR (v1.0.0): CQADupStack-webmasters | 0.8767 |
-
-
-## Reproduction Log[*](../../docs/reproducibility.md)
-
-To add to this reproduction log, modify [this template](../../src/main/resources/docgen/templates/beir-v1.0.0-cqadupstack-webmasters-splade-distil-cocodenser-medium.template) and run `bin/build.sh` to rebuild the documentation.
diff --git a/docs/regressions/regressions-beir-v1.0.0-cqadupstack-wordpress-splade-distil-cocodenser-medium.md b/docs/regressions/regressions-beir-v1.0.0-cqadupstack-wordpress-splade-distil-cocodenser-medium.md
deleted file mode 100644
index 4d950e95be..0000000000
--- a/docs/regressions/regressions-beir-v1.0.0-cqadupstack-wordpress-splade-distil-cocodenser-medium.md
+++ /dev/null
@@ -1,103 +0,0 @@
-# Anserini Regressions: BEIR (v1.0.0) — CQADupStack-wordpress
-
-**Model**: SPLADE-distil CoCodenser Medium
-
-This page describes regression experiments, integrated into Anserini's regression testing framework, using SPLADE-distil CoCodenser Medium on [BEIR (v1.0.0) — CQADupStack-wordpress](http://beir.ai/).
-SPLADE-distil CoCodenser Medium is an intermediate model version between [SPLADEv2](https://arxiv.org/abs/2109.10086) and [SPLADE++](https://arxiv.org/abs/2205.04733), where the model used distillation (as in SPLADEv2), but started with the CoCondenser pre-trained model.
-See the [official SPLADE repo](https://github.com/naver/splade) for more details; the model itself can be downloaded [here](http://download-de.europe.naverlabs.com/Splade_Release_Jan22/splade_distil_CoCodenser_medium.tar.gz).
-
-The exact configurations for these regressions are stored in [this YAML file](../../src/main/resources/regression/beir-v1.0.0-cqadupstack-wordpress-splade-distil-cocodenser-medium.yaml).
-Note that this page is automatically generated from [this template](../../src/main/resources/docgen/templates/beir-v1.0.0-cqadupstack-wordpress-splade-distil-cocodenser-medium.template) as part of Anserini's regression pipeline, so do not modify this page directly; modify the template instead and then run `bin/build.sh` to rebuild the documentation.
-
-From one of our Waterloo servers (e.g., `orca`), the following command will perform the complete regression, end to end:
-
-```
-python src/main/python/run_regression.py --index --verify --search \
- --regression beir-v1.0.0-cqadupstack-wordpress-splade-distil-cocodenser-medium
-```
-
-## Corpus
-
-We make available a version of the BEIR-v1.0.0 cqadupstack-wordpress corpus that has already been processed with SPLADE-distil CoCodenser Medium, i.e., gone through document expansion and term reweighting.
-Thus, no neural inference is involved.
-For details on how to train SPLADE-distil CoCodenser Medium and perform inference, please see the [guide provided by Naver Labs Europe](https://github.com/naver/splade/tree/main/anserini_evaluation).
-
-Download the corpus and unpack into `collections/`:
-
-```
-wget https://rgw.cs.uwaterloo.ca/JIMMYLIN-bucket0/data/beir-v1.0.0-splade_distil_cocodenser_medium-cqadupstack-wordpress.tar -P collections/
-tar xvf collections/beir-v1.0.0-splade_distil_cocodenser_medium-cqadupstack-wordpress.tar -C collections/
-```
-
-To confirm, the tarball is 8.9 MB and has MD5 checksum `9c5a181e03cbc7f13abd0e0e4bf9158e`.
-
-With the corpus downloaded, the following command will perform the complete regression, end to end, on any machine:
-
-```
-python src/main/python/run_regression.py --index --verify --search \
- --regression beir-v1.0.0-cqadupstack-wordpress-splade-distil-cocodenser-medium \
- --corpus-path collections/beir-v1.0.0-splade_distil_cocodenser_medium-cqadupstack-wordpress
-```
-
-Alternatively, you can simply copy/paste from the commands below and obtain the same results.
-
-## Indexing
-
-Sample indexing command:
-
-```
-target/appassembler/bin/IndexCollection \
- -collection JsonVectorCollection \
- -input /path/to/beir-v1.0.0-cqadupstack-wordpress-splade_distil_cocodenser_medium \
- -generator DefaultLuceneDocumentGenerator \
- -index indexes/lucene-index.beir-v1.0.0-cqadupstack-wordpress-splade_distil_cocodenser_medium/ \
- -threads 16 -impact -pretokenized \
- >& logs/log.beir-v1.0.0-cqadupstack-wordpress-splade_distil_cocodenser_medium &
-```
-
-The path `/path/to/beir-v1.0.0-cqadupstack-wordpress-splade_distil_cocodenser_medium/` should point to the corpus downloaded above.
-
-The important indexing options to note here are `-impact -pretokenized`: the first tells Anserini not to encode BM25 document lengths into Lucene's norms (the default behavior), and the second tells it not to apply any additional tokenization to the pre-encoded tokens.
-Upon completion, we should have an index with 8,674 documents.
-
-For additional details, see the explanation of [common indexing options](../../docs/common-indexing-options.md).
-
-## Retrieval
-
-Topics and qrels are stored [here](https://github.com/castorini/anserini-tools/tree/master/topics-and-qrels), which is linked to the Anserini repo as a submodule.
-
-After indexing has completed, you should be able to perform retrieval as follows:
-
-```
-target/appassembler/bin/SearchCollection \
- -index indexes/lucene-index.beir-v1.0.0-cqadupstack-wordpress-splade_distil_cocodenser_medium/ \
- -topics tools/topics-and-qrels/topics.beir-v1.0.0-cqadupstack-wordpress.test.splade_distil_cocodenser_medium.tsv.gz \
- -topicReader TsvString \
- -output runs/run.beir-v1.0.0-cqadupstack-wordpress-splade_distil_cocodenser_medium.splade_distil_cocodenser_medium.topics.beir-v1.0.0-cqadupstack-wordpress.test.splade_distil_cocodenser_medium.txt \
- -impact -pretokenized -removeQuery -hits 1000 &
-```
-
-Evaluation can be performed using `trec_eval`:
-
-```
-target/appassembler/bin/trec_eval -c -m ndcg_cut.10 tools/topics-and-qrels/qrels.beir-v1.0.0-cqadupstack-wordpress.test.txt runs/run.beir-v1.0.0-cqadupstack-wordpress-splade_distil_cocodenser_medium.splade_distil_cocodenser_medium.topics.beir-v1.0.0-cqadupstack-wordpress.test.splade_distil_cocodenser_medium.txt
-target/appassembler/bin/trec_eval -c -m recall.100 tools/topics-and-qrels/qrels.beir-v1.0.0-cqadupstack-wordpress.test.txt runs/run.beir-v1.0.0-cqadupstack-wordpress-splade_distil_cocodenser_medium.splade_distil_cocodenser_medium.topics.beir-v1.0.0-cqadupstack-wordpress.test.splade_distil_cocodenser_medium.txt
-target/appassembler/bin/trec_eval -c -m recall.1000 tools/topics-and-qrels/qrels.beir-v1.0.0-cqadupstack-wordpress.test.txt runs/run.beir-v1.0.0-cqadupstack-wordpress-splade_distil_cocodenser_medium.splade_distil_cocodenser_medium.topics.beir-v1.0.0-cqadupstack-wordpress.test.splade_distil_cocodenser_medium.txt
-```
-
-## Effectiveness
-
-With the above commands, you should be able to reproduce the following results:
-
-| **nDCG@10** | **SPLADE-distil CoCodenser Medium** |
-|:-------------------------------------------------------------------------------------------------------------|-----------|
-| BEIR (v1.0.0): CQADupStack-wordpress | 0.2839 |
-| **R@100** | **SPLADE-distil CoCodenser Medium** |
-| BEIR (v1.0.0): CQADupStack-wordpress | 0.5974 |
-| **R@1000** | **SPLADE-distil CoCodenser Medium** |
-| BEIR (v1.0.0): CQADupStack-wordpress | 0.8036 |
-
-
-## Reproduction Log[*](../../docs/reproducibility.md)
-
-To add to this reproduction log, modify [this template](../../src/main/resources/docgen/templates/beir-v1.0.0-cqadupstack-wordpress-splade-distil-cocodenser-medium.template) and run `bin/build.sh` to rebuild the documentation.
diff --git a/docs/regressions/regressions-beir-v1.0.0-dbpedia-entity-splade-distil-cocodenser-medium.md b/docs/regressions/regressions-beir-v1.0.0-dbpedia-entity-splade-distil-cocodenser-medium.md
deleted file mode 100644
index c9e79e66bd..0000000000
--- a/docs/regressions/regressions-beir-v1.0.0-dbpedia-entity-splade-distil-cocodenser-medium.md
+++ /dev/null
@@ -1,103 +0,0 @@
-# Anserini Regressions: BEIR (v1.0.0) — DBPedia
-
-**Model**: SPLADE-distil CoCodenser Medium
-
-This page describes regression experiments, integrated into Anserini's regression testing framework, using SPLADE-distil CoCodenser Medium on [BEIR (v1.0.0) — DBPedia](http://beir.ai/).
-SPLADE-distil CoCodenser Medium is an intermediate model version between [SPLADEv2](https://arxiv.org/abs/2109.10086) and [SPLADE++](https://arxiv.org/abs/2205.04733), where the model used distillation (as in SPLADEv2), but started with the CoCondenser pre-trained model.
-See the [official SPLADE repo](https://github.com/naver/splade) for more details; the model itself can be downloaded [here](http://download-de.europe.naverlabs.com/Splade_Release_Jan22/splade_distil_CoCodenser_medium.tar.gz).
-
-The exact configurations for these regressions are stored in [this YAML file](../../src/main/resources/regression/beir-v1.0.0-dbpedia-entity-splade-distil-cocodenser-medium.yaml).
-Note that this page is automatically generated from [this template](../../src/main/resources/docgen/templates/beir-v1.0.0-dbpedia-entity-splade-distil-cocodenser-medium.template) as part of Anserini's regression pipeline, so do not modify this page directly; modify the template instead and then run `bin/build.sh` to rebuild the documentation.
-
-From one of our Waterloo servers (e.g., `orca`), the following command will perform the complete regression, end to end:
-
-```
-python src/main/python/run_regression.py --index --verify --search \
- --regression beir-v1.0.0-dbpedia-entity-splade-distil-cocodenser-medium
-```
-
-## Corpus
-
-We make available a version of the BEIR-v1.0.0 dbpedia-entity corpus that has already been processed with SPLADE-distil CoCodenser Medium, i.e., gone through document expansion and term reweighting.
-Thus, no neural inference is involved.
-For details on how to train SPLADE-distil CoCodenser Medium and perform inference, please see the [guide provided by Naver Labs Europe](https://github.com/naver/splade/tree/main/anserini_evaluation).
-
-Download the corpus and unpack into `collections/`:
-
-```
-wget https://rgw.cs.uwaterloo.ca/JIMMYLIN-bucket0/data/beir-v1.0.0-splade_distil_cocodenser_medium-dbpedia-entity.tar -P collections/
-tar xvf collections/beir-v1.0.0-splade_distil_cocodenser_medium-dbpedia-entity.tar -C collections/
-```
-
-To confirm, the tarball is 2.5 GB and has MD5 checksum `fdd1467eae4fbe6c53b75428492c776f`.
-
-With the corpus downloaded, the following command will perform the complete regression, end to end, on any machine:
-
-```
-python src/main/python/run_regression.py --index --verify --search \
- --regression beir-v1.0.0-dbpedia-entity-splade-distil-cocodenser-medium \
- --corpus-path collections/beir-v1.0.0-splade_distil_cocodenser_medium-dbpedia-entity
-```
-
-Alternatively, you can simply copy/paste from the commands below and obtain the same results.
-
-## Indexing
-
-Sample indexing command:
-
-```
-target/appassembler/bin/IndexCollection \
- -collection JsonVectorCollection \
- -input /path/to/beir-v1.0.0-dbpedia-entity-splade_distil_cocodenser_medium \
- -generator DefaultLuceneDocumentGenerator \
- -index indexes/lucene-index.beir-v1.0.0-dbpedia-entity-splade_distil_cocodenser_medium/ \
- -threads 16 -impact -pretokenized \
- >& logs/log.beir-v1.0.0-dbpedia-entity-splade_distil_cocodenser_medium &
-```
-
-The path `/path/to/beir-v1.0.0-dbpedia-entity-splade_distil_cocodenser_medium/` should point to the corpus downloaded above.
-
-The important indexing options to note here are `-impact -pretokenized`: the first tells Anserini not to encode BM25 document lengths into Lucene's norms (the default behavior), and the second tells it not to apply any additional tokenization to the pre-encoded tokens.
-Upon completion, we should have an index with 4,635,922 documents.
-
-For additional details, see the explanation of [common indexing options](../../docs/common-indexing-options.md).
-
-## Retrieval
-
-Topics and qrels are stored [here](https://github.com/castorini/anserini-tools/tree/master/topics-and-qrels), which is linked to the Anserini repo as a submodule.
-
-After indexing has completed, you should be able to perform retrieval as follows:
-
-```
-target/appassembler/bin/SearchCollection \
- -index indexes/lucene-index.beir-v1.0.0-dbpedia-entity-splade_distil_cocodenser_medium/ \
- -topics tools/topics-and-qrels/topics.beir-v1.0.0-dbpedia-entity.test.splade_distil_cocodenser_medium.tsv.gz \
- -topicReader TsvString \
- -output runs/run.beir-v1.0.0-dbpedia-entity-splade_distil_cocodenser_medium.splade_distil_cocodenser_medium.topics.beir-v1.0.0-dbpedia-entity.test.splade_distil_cocodenser_medium.txt \
- -impact -pretokenized -removeQuery -hits 1000 &
-```
-
-Evaluation can be performed using `trec_eval`:
-
-```
-target/appassembler/bin/trec_eval -c -m ndcg_cut.10 tools/topics-and-qrels/qrels.beir-v1.0.0-dbpedia-entity.test.txt runs/run.beir-v1.0.0-dbpedia-entity-splade_distil_cocodenser_medium.splade_distil_cocodenser_medium.topics.beir-v1.0.0-dbpedia-entity.test.splade_distil_cocodenser_medium.txt
-target/appassembler/bin/trec_eval -c -m recall.100 tools/topics-and-qrels/qrels.beir-v1.0.0-dbpedia-entity.test.txt runs/run.beir-v1.0.0-dbpedia-entity-splade_distil_cocodenser_medium.splade_distil_cocodenser_medium.topics.beir-v1.0.0-dbpedia-entity.test.splade_distil_cocodenser_medium.txt
-target/appassembler/bin/trec_eval -c -m recall.1000 tools/topics-and-qrels/qrels.beir-v1.0.0-dbpedia-entity.test.txt runs/run.beir-v1.0.0-dbpedia-entity-splade_distil_cocodenser_medium.splade_distil_cocodenser_medium.topics.beir-v1.0.0-dbpedia-entity.test.splade_distil_cocodenser_medium.txt
-```
-
-## Effectiveness
-
-With the above commands, you should be able to reproduce the following results:
-
-| **nDCG@10** | **SPLADE-distil CoCodenser Medium** |
-|:-------------------------------------------------------------------------------------------------------------|-----------|
-| BEIR (v1.0.0): DBPedia | 0.4416 |
-| **R@100** | **SPLADE-distil CoCodenser Medium** |
-| BEIR (v1.0.0): DBPedia | 0.5636 |
-| **R@1000** | **SPLADE-distil CoCodenser Medium** |
-| BEIR (v1.0.0): DBPedia | 0.7774 |
-
-
-## Reproduction Log[*](../../docs/reproducibility.md)
-
-To add to this reproduction log, modify [this template](../../src/main/resources/docgen/templates/beir-v1.0.0-dbpedia-entity-splade-distil-cocodenser-medium.template) and run `bin/build.sh` to rebuild the documentation.
diff --git a/docs/regressions/regressions-beir-v1.0.0-fever-splade-distil-cocodenser-medium.md b/docs/regressions/regressions-beir-v1.0.0-fever-splade-distil-cocodenser-medium.md
deleted file mode 100644
index 66b1c19ce6..0000000000
--- a/docs/regressions/regressions-beir-v1.0.0-fever-splade-distil-cocodenser-medium.md
+++ /dev/null
@@ -1,103 +0,0 @@
-# Anserini Regressions: BEIR (v1.0.0) — FEVER
-
-**Model**: SPLADE-distil CoCodenser Medium
-
-This page describes regression experiments, integrated into Anserini's regression testing framework, using SPLADE-distil CoCodenser Medium on [BEIR (v1.0.0) — FEVER](http://beir.ai/).
-SPLADE-distil CoCodenser Medium is an intermediate model version between [SPLADEv2](https://arxiv.org/abs/2109.10086) and [SPLADE++](https://arxiv.org/abs/2205.04733), where the model used distillation (as in SPLADEv2), but started with the CoCondenser pre-trained model.
-See the [official SPLADE repo](https://github.com/naver/splade) for more details; the model itself can be downloaded [here](http://download-de.europe.naverlabs.com/Splade_Release_Jan22/splade_distil_CoCodenser_medium.tar.gz).
-
-The exact configurations for these regressions are stored in [this YAML file](../../src/main/resources/regression/beir-v1.0.0-fever-splade-distil-cocodenser-medium.yaml).
-Note that this page is automatically generated from [this template](../../src/main/resources/docgen/templates/beir-v1.0.0-fever-splade-distil-cocodenser-medium.template) as part of Anserini's regression pipeline, so do not modify this page directly; modify the template instead and then run `bin/build.sh` to rebuild the documentation.
-
-From one of our Waterloo servers (e.g., `orca`), the following command will perform the complete regression, end to end:
-
-```
-python src/main/python/run_regression.py --index --verify --search \
- --regression beir-v1.0.0-fever-splade-distil-cocodenser-medium
-```
-
-## Corpus
-
-We make available a version of the BEIR-v1.0.0 fever corpus that has already been processed with SPLADE-distil CoCodenser Medium, i.e., gone through document expansion and term reweighting.
-Thus, no neural inference is involved.
-For details on how to train SPLADE-distil CoCodenser Medium and perform inference, please see the [guide provided by Naver Labs Europe](https://github.com/naver/splade/tree/main/anserini_evaluation).
-
-Download the corpus and unpack into `collections/`:
-
-```
-wget https://rgw.cs.uwaterloo.ca/JIMMYLIN-bucket0/data/beir-v1.0.0-splade_distil_cocodenser_medium-fever.tar -P collections/
-tar xvf collections/beir-v1.0.0-splade_distil_cocodenser_medium-fever.tar -C collections/
-```
-
-To confirm, the tarball is 3.2 GB and has MD5 checksum `d5bd33563877667a64d37acbf16f5c5d`.
-
-With the corpus downloaded, the following command will perform the complete regression, end to end, on any machine:
-
-```
-python src/main/python/run_regression.py --index --verify --search \
- --regression beir-v1.0.0-fever-splade-distil-cocodenser-medium \
- --corpus-path collections/beir-v1.0.0-splade_distil_cocodenser_medium-fever
-```
-
-Alternatively, you can simply copy/paste from the commands below and obtain the same results.
-
-## Indexing
-
-Sample indexing command:
-
-```
-target/appassembler/bin/IndexCollection \
- -collection JsonVectorCollection \
- -input /path/to/beir-v1.0.0-fever-splade_distil_cocodenser_medium \
- -generator DefaultLuceneDocumentGenerator \
- -index indexes/lucene-index.beir-v1.0.0-fever-splade_distil_cocodenser_medium/ \
- -threads 16 -impact -pretokenized \
- >& logs/log.beir-v1.0.0-fever-splade_distil_cocodenser_medium &
-```
-
-The path `/path/to/beir-v1.0.0-fever-splade_distil_cocodenser_medium/` should point to the corpus downloaded above.
-
-The important indexing options to note here are `-impact -pretokenized`: the first tells Anserini not to encode BM25 document lengths into Lucene's norms (the default behavior), and the second tells it not to apply any additional tokenization to the pre-encoded tokens.
-Upon completion, we should have an index with 5,416,568 documents.
-
-For additional details, see the explanation of [common indexing options](../../docs/common-indexing-options.md).
-
-## Retrieval
-
-Topics and qrels are stored [here](https://github.com/castorini/anserini-tools/tree/master/topics-and-qrels), which is linked to the Anserini repo as a submodule.
-
-After indexing has completed, you should be able to perform retrieval as follows:
-
-```
-target/appassembler/bin/SearchCollection \
- -index indexes/lucene-index.beir-v1.0.0-fever-splade_distil_cocodenser_medium/ \
- -topics tools/topics-and-qrels/topics.beir-v1.0.0-fever.test.splade_distil_cocodenser_medium.tsv.gz \
- -topicReader TsvString \
- -output runs/run.beir-v1.0.0-fever-splade_distil_cocodenser_medium.splade_distil_cocodenser_medium.topics.beir-v1.0.0-fever.test.splade_distil_cocodenser_medium.txt \
- -impact -pretokenized -removeQuery -hits 1000 &
-```
-
-Evaluation can be performed using `trec_eval`:
-
-```
-target/appassembler/bin/trec_eval -c -m ndcg_cut.10 tools/topics-and-qrels/qrels.beir-v1.0.0-fever.test.txt runs/run.beir-v1.0.0-fever-splade_distil_cocodenser_medium.splade_distil_cocodenser_medium.topics.beir-v1.0.0-fever.test.splade_distil_cocodenser_medium.txt
-target/appassembler/bin/trec_eval -c -m recall.100 tools/topics-and-qrels/qrels.beir-v1.0.0-fever.test.txt runs/run.beir-v1.0.0-fever-splade_distil_cocodenser_medium.splade_distil_cocodenser_medium.topics.beir-v1.0.0-fever.test.splade_distil_cocodenser_medium.txt
-target/appassembler/bin/trec_eval -c -m recall.1000 tools/topics-and-qrels/qrels.beir-v1.0.0-fever.test.txt runs/run.beir-v1.0.0-fever-splade_distil_cocodenser_medium.splade_distil_cocodenser_medium.topics.beir-v1.0.0-fever.test.splade_distil_cocodenser_medium.txt
-```
-
-## Effectiveness
-
-With the above commands, you should be able to reproduce the following results:
-
-| **nDCG@10** | **SPLADE-distil CoCodenser Medium** |
-|:-------------------------------------------------------------------------------------------------------------|-----------|
-| BEIR (v1.0.0): FEVER | 0.7962 |
-| **R@100** | **SPLADE-distil CoCodenser Medium** |
-| BEIR (v1.0.0): FEVER | 0.9550 |
-| **R@1000** | **SPLADE-distil CoCodenser Medium** |
-| BEIR (v1.0.0): FEVER | 0.9751 |
-
-
-## Reproduction Log[*](../../docs/reproducibility.md)
-
-To add to this reproduction log, modify [this template](../../src/main/resources/docgen/templates/beir-v1.0.0-fever-splade-distil-cocodenser-medium.template) and run `bin/build.sh` to rebuild the documentation.
diff --git a/docs/regressions/regressions-beir-v1.0.0-fiqa-splade-distil-cocodenser-medium.md b/docs/regressions/regressions-beir-v1.0.0-fiqa-splade-distil-cocodenser-medium.md
deleted file mode 100644
index 95fbb36c19..0000000000
--- a/docs/regressions/regressions-beir-v1.0.0-fiqa-splade-distil-cocodenser-medium.md
+++ /dev/null
@@ -1,103 +0,0 @@
-# Anserini Regressions: BEIR (v1.0.0) — FiQA-2018
-
-**Model**: SPLADE-distil CoCodenser Medium
-
-This page describes regression experiments, integrated into Anserini's regression testing framework, using SPLADE-distil CoCodenser Medium on [BEIR (v1.0.0) — FiQA-2018](http://beir.ai/).
-SPLADE-distil CoCodenser Medium is an intermediate model version between [SPLADEv2](https://arxiv.org/abs/2109.10086) and [SPLADE++](https://arxiv.org/abs/2205.04733), where the model used distillation (as in SPLADEv2), but started with the CoCondenser pre-trained model.
-See the [official SPLADE repo](https://github.com/naver/splade) for more details; the model itself can be downloaded [here](http://download-de.europe.naverlabs.com/Splade_Release_Jan22/splade_distil_CoCodenser_medium.tar.gz).
-
-The exact configurations for these regressions are stored in [this YAML file](../../src/main/resources/regression/beir-v1.0.0-fiqa-splade-distil-cocodenser-medium.yaml).
-Note that this page is automatically generated from [this template](../../src/main/resources/docgen/templates/beir-v1.0.0-fiqa-splade-distil-cocodenser-medium.template) as part of Anserini's regression pipeline, so do not modify this page directly; modify the template instead and then run `bin/build.sh` to rebuild the documentation.
-
-From one of our Waterloo servers (e.g., `orca`), the following command will perform the complete regression, end to end:
-
-```
-python src/main/python/run_regression.py --index --verify --search \
- --regression beir-v1.0.0-fiqa-splade-distil-cocodenser-medium
-```
-
-## Corpus
-
-We make available a version of the BEIR-v1.0.0 fiqa corpus that has already been processed with SPLADE-distil CoCodenser Medium, i.e., gone through document expansion and term reweighting.
-Thus, no neural inference is involved.
-For details on how to train SPLADE-distil CoCodenser Medium and perform inference, please see the [guide provided by Naver Labs Europe](https://github.com/naver/splade/tree/main/anserini_evaluation).
-
-Download the corpus and unpack into `collections/`:
-
-```
-wget https://rgw.cs.uwaterloo.ca/JIMMYLIN-bucket0/data/beir-v1.0.0-splade_distil_cocodenser_medium-fiqa.tar -P collections/
-tar xvf collections/beir-v1.0.0-splade_distil_cocodenser_medium-fiqa.tar -C collections/
-```
-
-To confirm, the tarball is 48 MB and has MD5 checksum `781f7683b6e73971afd01df1650756bf`.
-
-With the corpus downloaded, the following command will perform the complete regression, end to end, on any machine:
-
-```
-python src/main/python/run_regression.py --index --verify --search \
- --regression beir-v1.0.0-fiqa-splade-distil-cocodenser-medium \
- --corpus-path collections/beir-v1.0.0-splade_distil_cocodenser_medium-fiqa
-```
-
-Alternatively, you can simply copy and paste the commands below to obtain the same results.
-
-## Indexing
-
-Sample indexing command:
-
-```
-target/appassembler/bin/IndexCollection \
- -collection JsonVectorCollection \
- -input /path/to/beir-v1.0.0-fiqa-splade_distil_cocodenser_medium \
- -generator DefaultLuceneDocumentGenerator \
- -index indexes/lucene-index.beir-v1.0.0-fiqa-splade_distil_cocodenser_medium/ \
- -threads 16 -impact -pretokenized \
- >& logs/log.beir-v1.0.0-fiqa-splade_distil_cocodenser_medium &
-```
-
-The path `/path/to/beir-v1.0.0-fiqa-splade_distil_cocodenser_medium/` should point to the corpus downloaded above.
-
-The important indexing options to note here are `-impact -pretokenized`: the first tells Anserini not to encode BM25 document lengths into Lucene's norms (which is the default behavior), and the second tells it not to apply any additional tokenization to the pre-encoded tokens.
-Upon completion, we should have an index with 57,638 documents.
-
-For additional details, see the explanation of [common indexing options](../../docs/common-indexing-options.md).
-
-## Retrieval
-
-Topics and qrels are stored [here](https://github.com/castorini/anserini-tools/tree/master/topics-and-qrels), which is linked to the Anserini repo as a submodule.
-
-After indexing has completed, you should be able to perform retrieval as follows:
-
-```
-target/appassembler/bin/SearchCollection \
- -index indexes/lucene-index.beir-v1.0.0-fiqa-splade_distil_cocodenser_medium/ \
- -topics tools/topics-and-qrels/topics.beir-v1.0.0-fiqa.test.splade_distil_cocodenser_medium.tsv.gz \
- -topicReader TsvString \
- -output runs/run.beir-v1.0.0-fiqa-splade_distil_cocodenser_medium.splade_distil_cocodenser_medium.topics.beir-v1.0.0-fiqa.test.splade_distil_cocodenser_medium.txt \
- -impact -pretokenized -removeQuery -hits 1000 &
-```
-
-Evaluation can be performed using `trec_eval`:
-
-```
-target/appassembler/bin/trec_eval -c -m ndcg_cut.10 tools/topics-and-qrels/qrels.beir-v1.0.0-fiqa.test.txt runs/run.beir-v1.0.0-fiqa-splade_distil_cocodenser_medium.splade_distil_cocodenser_medium.topics.beir-v1.0.0-fiqa.test.splade_distil_cocodenser_medium.txt
-target/appassembler/bin/trec_eval -c -m recall.100 tools/topics-and-qrels/qrels.beir-v1.0.0-fiqa.test.txt runs/run.beir-v1.0.0-fiqa-splade_distil_cocodenser_medium.splade_distil_cocodenser_medium.topics.beir-v1.0.0-fiqa.test.splade_distil_cocodenser_medium.txt
-target/appassembler/bin/trec_eval -c -m recall.1000 tools/topics-and-qrels/qrels.beir-v1.0.0-fiqa.test.txt runs/run.beir-v1.0.0-fiqa-splade_distil_cocodenser_medium.splade_distil_cocodenser_medium.topics.beir-v1.0.0-fiqa.test.splade_distil_cocodenser_medium.txt
-```
-
-## Effectiveness
-
-With the above commands, you should be able to reproduce the following results:
-
-| **nDCG@10** | **SPLADE-distil CoCodenser Medium** |
-|:-------------|-------------------------------------|
-| BEIR (v1.0.0): FiQA-2018 | 0.3514 |
-| **R@100** | **SPLADE-distil CoCodenser Medium** |
-| BEIR (v1.0.0): FiQA-2018 | 0.6298 |
-| **R@1000** | **SPLADE-distil CoCodenser Medium** |
-| BEIR (v1.0.0): FiQA-2018 | 0.8323 |
-
-
-## Reproduction Log[*](../../docs/reproducibility.md)
-
-To add to this reproduction log, modify [this template](../../src/main/resources/docgen/templates/beir-v1.0.0-fiqa-splade-distil-cocodenser-medium.template) and run `bin/build.sh` to rebuild the documentation.
diff --git a/docs/regressions/regressions-beir-v1.0.0-hotpotqa-splade-distil-cocodenser-medium.md b/docs/regressions/regressions-beir-v1.0.0-hotpotqa-splade-distil-cocodenser-medium.md
deleted file mode 100644
index 99aad0b287..0000000000
--- a/docs/regressions/regressions-beir-v1.0.0-hotpotqa-splade-distil-cocodenser-medium.md
+++ /dev/null
@@ -1,103 +0,0 @@
-# Anserini Regressions: BEIR (v1.0.0) — HotpotQA
-
-**Model**: SPLADE-distil CoCodenser Medium
-
-This page describes regression experiments, integrated into Anserini's regression testing framework, using SPLADE-distil CoCodenser Medium on [BEIR (v1.0.0) — HotpotQA](http://beir.ai/).
-SPLADE-distil CoCodenser Medium is an intermediate model version between [SPLADEv2](https://arxiv.org/abs/2109.10086) and [SPLADE++](https://arxiv.org/abs/2205.04733), where the model used distillation (as in SPLADEv2), but started with the CoCondenser pre-trained model.
-See the [official SPLADE repo](https://github.com/naver/splade) for more details; the model itself can be downloaded [here](http://download-de.europe.naverlabs.com/Splade_Release_Jan22/splade_distil_CoCodenser_medium.tar.gz).
-
-The exact configurations for these regressions are stored in [this YAML file](../../src/main/resources/regression/beir-v1.0.0-hotpotqa-splade-distil-cocodenser-medium.yaml).
-Note that this page is automatically generated from [this template](../../src/main/resources/docgen/templates/beir-v1.0.0-hotpotqa-splade-distil-cocodenser-medium.template) as part of Anserini's regression pipeline, so do not modify this page directly; modify the template instead and then run `bin/build.sh` to rebuild the documentation.
-
-From one of our Waterloo servers (e.g., `orca`), the following command will perform the complete regression, end to end:
-
-```
-python src/main/python/run_regression.py --index --verify --search \
- --regression beir-v1.0.0-hotpotqa-splade-distil-cocodenser-medium
-```
-
-## Corpus
-
-We make available a version of the BEIR-v1.0.0 hotpotqa corpus that has already been processed with SPLADE-distil CoCodenser Medium, i.e., it has already gone through document expansion and term reweighting.
-Thus, no neural inference is involved at indexing or retrieval time.
-For details on how to train SPLADE-distil CoCodenser Medium and perform inference, please see the [guide provided by Naver Labs Europe](https://github.com/naver/splade/tree/main/anserini_evaluation).
-
-Download the corpus and unpack into `collections/`:
-
-```
-wget https://rgw.cs.uwaterloo.ca/JIMMYLIN-bucket0/data/beir-v1.0.0-splade_distil_cocodenser_medium-hotpotqa.tar -P collections/
-tar xvf collections/beir-v1.0.0-splade_distil_cocodenser_medium-hotpotqa.tar -C collections/
-```
-
-To confirm, the tarball is 2.6 GB and has MD5 checksum `b607eb3e1cb0ba105a75ae8db4356e90`.
-
-With the corpus downloaded, the following command will perform the complete regression, end to end, on any machine:
-
-```
-python src/main/python/run_regression.py --index --verify --search \
- --regression beir-v1.0.0-hotpotqa-splade-distil-cocodenser-medium \
- --corpus-path collections/beir-v1.0.0-splade_distil_cocodenser_medium-hotpotqa
-```
-
-Alternatively, you can simply copy and paste the commands below to obtain the same results.
-
-## Indexing
-
-Sample indexing command:
-
-```
-target/appassembler/bin/IndexCollection \
- -collection JsonVectorCollection \
- -input /path/to/beir-v1.0.0-hotpotqa-splade_distil_cocodenser_medium \
- -generator DefaultLuceneDocumentGenerator \
- -index indexes/lucene-index.beir-v1.0.0-hotpotqa-splade_distil_cocodenser_medium/ \
- -threads 16 -impact -pretokenized \
- >& logs/log.beir-v1.0.0-hotpotqa-splade_distil_cocodenser_medium &
-```
-
-The path `/path/to/beir-v1.0.0-hotpotqa-splade_distil_cocodenser_medium/` should point to the corpus downloaded above.
-
-The important indexing options to note here are `-impact -pretokenized`: the first tells Anserini not to encode BM25 document lengths into Lucene's norms (which is the default behavior), and the second tells it not to apply any additional tokenization to the pre-encoded tokens.
-Upon completion, we should have an index with 5,233,329 documents.
-
-For additional details, see the explanation of [common indexing options](../../docs/common-indexing-options.md).
-
-## Retrieval
-
-Topics and qrels are stored [here](https://github.com/castorini/anserini-tools/tree/master/topics-and-qrels), which is linked to the Anserini repo as a submodule.
-
-After indexing has completed, you should be able to perform retrieval as follows:
-
-```
-target/appassembler/bin/SearchCollection \
- -index indexes/lucene-index.beir-v1.0.0-hotpotqa-splade_distil_cocodenser_medium/ \
- -topics tools/topics-and-qrels/topics.beir-v1.0.0-hotpotqa.test.splade_distil_cocodenser_medium.tsv.gz \
- -topicReader TsvString \
- -output runs/run.beir-v1.0.0-hotpotqa-splade_distil_cocodenser_medium.splade_distil_cocodenser_medium.topics.beir-v1.0.0-hotpotqa.test.splade_distil_cocodenser_medium.txt \
- -impact -pretokenized -removeQuery -hits 1000 &
-```
-
-Evaluation can be performed using `trec_eval`:
-
-```
-target/appassembler/bin/trec_eval -c -m ndcg_cut.10 tools/topics-and-qrels/qrels.beir-v1.0.0-hotpotqa.test.txt runs/run.beir-v1.0.0-hotpotqa-splade_distil_cocodenser_medium.splade_distil_cocodenser_medium.topics.beir-v1.0.0-hotpotqa.test.splade_distil_cocodenser_medium.txt
-target/appassembler/bin/trec_eval -c -m recall.100 tools/topics-and-qrels/qrels.beir-v1.0.0-hotpotqa.test.txt runs/run.beir-v1.0.0-hotpotqa-splade_distil_cocodenser_medium.splade_distil_cocodenser_medium.topics.beir-v1.0.0-hotpotqa.test.splade_distil_cocodenser_medium.txt
-target/appassembler/bin/trec_eval -c -m recall.1000 tools/topics-and-qrels/qrels.beir-v1.0.0-hotpotqa.test.txt runs/run.beir-v1.0.0-hotpotqa-splade_distil_cocodenser_medium.splade_distil_cocodenser_medium.topics.beir-v1.0.0-hotpotqa.test.splade_distil_cocodenser_medium.txt
-```
-
-## Effectiveness
-
-With the above commands, you should be able to reproduce the following results:
-
-| **nDCG@10** | **SPLADE-distil CoCodenser Medium** |
-|:-------------|-------------------------------------|
-| BEIR (v1.0.0): HotpotQA | 0.6860 |
-| **R@100** | **SPLADE-distil CoCodenser Medium** |
-| BEIR (v1.0.0): HotpotQA | 0.8144 |
-| **R@1000** | **SPLADE-distil CoCodenser Medium** |
-| BEIR (v1.0.0): HotpotQA | 0.8945 |
-
-
-## Reproduction Log[*](../../docs/reproducibility.md)
-
-To add to this reproduction log, modify [this template](../../src/main/resources/docgen/templates/beir-v1.0.0-hotpotqa-splade-distil-cocodenser-medium.template) and run `bin/build.sh` to rebuild the documentation.
diff --git a/docs/regressions/regressions-beir-v1.0.0-nfcorpus-splade-distil-cocodenser-medium.md b/docs/regressions/regressions-beir-v1.0.0-nfcorpus-splade-distil-cocodenser-medium.md
deleted file mode 100644
index 4a6ae16b87..0000000000
--- a/docs/regressions/regressions-beir-v1.0.0-nfcorpus-splade-distil-cocodenser-medium.md
+++ /dev/null
@@ -1,103 +0,0 @@
-# Anserini Regressions: BEIR (v1.0.0) — NFCorpus
-
-**Model**: SPLADE-distil CoCodenser Medium
-
-This page describes regression experiments, integrated into Anserini's regression testing framework, using SPLADE-distil CoCodenser Medium on [BEIR (v1.0.0) — NFCorpus](http://beir.ai/).
-SPLADE-distil CoCodenser Medium is an intermediate model version between [SPLADEv2](https://arxiv.org/abs/2109.10086) and [SPLADE++](https://arxiv.org/abs/2205.04733), where the model used distillation (as in SPLADEv2), but started with the CoCondenser pre-trained model.
-See the [official SPLADE repo](https://github.com/naver/splade) for more details; the model itself can be downloaded [here](http://download-de.europe.naverlabs.com/Splade_Release_Jan22/splade_distil_CoCodenser_medium.tar.gz).
-
-The exact configurations for these regressions are stored in [this YAML file](../../src/main/resources/regression/beir-v1.0.0-nfcorpus-splade-distil-cocodenser-medium.yaml).
-Note that this page is automatically generated from [this template](../../src/main/resources/docgen/templates/beir-v1.0.0-nfcorpus-splade-distil-cocodenser-medium.template) as part of Anserini's regression pipeline, so do not modify this page directly; modify the template instead and then run `bin/build.sh` to rebuild the documentation.
-
-From one of our Waterloo servers (e.g., `orca`), the following command will perform the complete regression, end to end:
-
-```
-python src/main/python/run_regression.py --index --verify --search \
- --regression beir-v1.0.0-nfcorpus-splade-distil-cocodenser-medium
-```
-
-## Corpus
-
-We make available a version of the BEIR-v1.0.0 nfcorpus corpus that has already been processed with SPLADE-distil CoCodenser Medium, i.e., it has already gone through document expansion and term reweighting.
-Thus, no neural inference is involved at indexing or retrieval time.
-For details on how to train SPLADE-distil CoCodenser Medium and perform inference, please see the [guide provided by Naver Labs Europe](https://github.com/naver/splade/tree/main/anserini_evaluation).
-
-Download the corpus and unpack into `collections/`:
-
-```
-wget https://rgw.cs.uwaterloo.ca/JIMMYLIN-bucket0/data/beir-v1.0.0-splade_distil_cocodenser_medium-nfcorpus.tar -P collections/
-tar xvf collections/beir-v1.0.0-splade_distil_cocodenser_medium-nfcorpus.tar -C collections/
-```
-
-To confirm, the tarball is 3.2 MB and has MD5 checksum `81215d1fd44c378b44c5b1f4ab555098`.
-
-With the corpus downloaded, the following command will perform the complete regression, end to end, on any machine:
-
-```
-python src/main/python/run_regression.py --index --verify --search \
- --regression beir-v1.0.0-nfcorpus-splade-distil-cocodenser-medium \
- --corpus-path collections/beir-v1.0.0-splade_distil_cocodenser_medium-nfcorpus
-```
-
-Alternatively, you can simply copy and paste the commands below to obtain the same results.
-
-## Indexing
-
-Sample indexing command:
-
-```
-target/appassembler/bin/IndexCollection \
- -collection JsonVectorCollection \
- -input /path/to/beir-v1.0.0-nfcorpus-splade_distil_cocodenser_medium \
- -generator DefaultLuceneDocumentGenerator \
- -index indexes/lucene-index.beir-v1.0.0-nfcorpus-splade_distil_cocodenser_medium/ \
- -threads 16 -impact -pretokenized \
- >& logs/log.beir-v1.0.0-nfcorpus-splade_distil_cocodenser_medium &
-```
-
-The path `/path/to/beir-v1.0.0-nfcorpus-splade_distil_cocodenser_medium/` should point to the corpus downloaded above.
-
-The important indexing options to note here are `-impact -pretokenized`: the first tells Anserini not to encode BM25 document lengths into Lucene's norms (which is the default behavior), and the second tells it not to apply any additional tokenization to the pre-encoded tokens.
-Upon completion, we should have an index with 3,633 documents.
-
-For additional details, see the explanation of [common indexing options](../../docs/common-indexing-options.md).
-
-## Retrieval
-
-Topics and qrels are stored [here](https://github.com/castorini/anserini-tools/tree/master/topics-and-qrels), which is linked to the Anserini repo as a submodule.
-
-After indexing has completed, you should be able to perform retrieval as follows:
-
-```
-target/appassembler/bin/SearchCollection \
- -index indexes/lucene-index.beir-v1.0.0-nfcorpus-splade_distil_cocodenser_medium/ \
- -topics tools/topics-and-qrels/topics.beir-v1.0.0-nfcorpus.test.splade_distil_cocodenser_medium.tsv.gz \
- -topicReader TsvString \
- -output runs/run.beir-v1.0.0-nfcorpus-splade_distil_cocodenser_medium.splade_distil_cocodenser_medium.topics.beir-v1.0.0-nfcorpus.test.splade_distil_cocodenser_medium.txt \
- -impact -pretokenized -removeQuery -hits 1000 &
-```
-
-Evaluation can be performed using `trec_eval`:
-
-```
-target/appassembler/bin/trec_eval -c -m ndcg_cut.10 tools/topics-and-qrels/qrels.beir-v1.0.0-nfcorpus.test.txt runs/run.beir-v1.0.0-nfcorpus-splade_distil_cocodenser_medium.splade_distil_cocodenser_medium.topics.beir-v1.0.0-nfcorpus.test.splade_distil_cocodenser_medium.txt
-target/appassembler/bin/trec_eval -c -m recall.100 tools/topics-and-qrels/qrels.beir-v1.0.0-nfcorpus.test.txt runs/run.beir-v1.0.0-nfcorpus-splade_distil_cocodenser_medium.splade_distil_cocodenser_medium.topics.beir-v1.0.0-nfcorpus.test.splade_distil_cocodenser_medium.txt
-target/appassembler/bin/trec_eval -c -m recall.1000 tools/topics-and-qrels/qrels.beir-v1.0.0-nfcorpus.test.txt runs/run.beir-v1.0.0-nfcorpus-splade_distil_cocodenser_medium.splade_distil_cocodenser_medium.topics.beir-v1.0.0-nfcorpus.test.splade_distil_cocodenser_medium.txt
-```
-
-## Effectiveness
-
-With the above commands, you should be able to reproduce the following results:
-
-| **nDCG@10** | **SPLADE-distil CoCodenser Medium** |
-|:-------------|-------------------------------------|
-| BEIR (v1.0.0): NFCorpus | 0.3454 |
-| **R@100** | **SPLADE-distil CoCodenser Medium** |
-| BEIR (v1.0.0): NFCorpus | 0.2891 |
-| **R@1000** | **SPLADE-distil CoCodenser Medium** |
-| BEIR (v1.0.0): NFCorpus | 0.5694 |
-
-
-## Reproduction Log[*](../../docs/reproducibility.md)
-
-To add to this reproduction log, modify [this template](../../src/main/resources/docgen/templates/beir-v1.0.0-nfcorpus-splade-distil-cocodenser-medium.template) and run `bin/build.sh` to rebuild the documentation.
diff --git a/docs/regressions/regressions-beir-v1.0.0-nq-splade-distil-cocodenser-medium.md b/docs/regressions/regressions-beir-v1.0.0-nq-splade-distil-cocodenser-medium.md
deleted file mode 100644
index dbf34ba32d..0000000000
--- a/docs/regressions/regressions-beir-v1.0.0-nq-splade-distil-cocodenser-medium.md
+++ /dev/null
@@ -1,103 +0,0 @@
-# Anserini Regressions: BEIR (v1.0.0) — NQ
-
-**Model**: SPLADE-distil CoCodenser Medium
-
-This page describes regression experiments, integrated into Anserini's regression testing framework, using SPLADE-distil CoCodenser Medium on [BEIR (v1.0.0) — NQ](http://beir.ai/).
-SPLADE-distil CoCodenser Medium is an intermediate model version between [SPLADEv2](https://arxiv.org/abs/2109.10086) and [SPLADE++](https://arxiv.org/abs/2205.04733), where the model used distillation (as in SPLADEv2), but started with the CoCondenser pre-trained model.
-See the [official SPLADE repo](https://github.com/naver/splade) for more details; the model itself can be downloaded [here](http://download-de.europe.naverlabs.com/Splade_Release_Jan22/splade_distil_CoCodenser_medium.tar.gz).
-
-The exact configurations for these regressions are stored in [this YAML file](../../src/main/resources/regression/beir-v1.0.0-nq-splade-distil-cocodenser-medium.yaml).
-Note that this page is automatically generated from [this template](../../src/main/resources/docgen/templates/beir-v1.0.0-nq-splade-distil-cocodenser-medium.template) as part of Anserini's regression pipeline, so do not modify this page directly; modify the template instead and then run `bin/build.sh` to rebuild the documentation.
-
-From one of our Waterloo servers (e.g., `orca`), the following command will perform the complete regression, end to end:
-
-```
-python src/main/python/run_regression.py --index --verify --search \
- --regression beir-v1.0.0-nq-splade-distil-cocodenser-medium
-```
-
-## Corpus
-
-We make available a version of the BEIR-v1.0.0 nq corpus that has already been processed with SPLADE-distil CoCodenser Medium, i.e., it has already gone through document expansion and term reweighting.
-Thus, no neural inference is involved at indexing or retrieval time.
-For details on how to train SPLADE-distil CoCodenser Medium and perform inference, please see the [guide provided by Naver Labs Europe](https://github.com/naver/splade/tree/main/anserini_evaluation).
-
-Download the corpus and unpack into `collections/`:
-
-```
-wget https://rgw.cs.uwaterloo.ca/JIMMYLIN-bucket0/data/beir-v1.0.0-splade_distil_cocodenser_medium-nq.tar -P collections/
-tar xvf collections/beir-v1.0.0-splade_distil_cocodenser_medium-nq.tar -C collections/
-```
-
-To confirm, the tarball is 1.9 GB and has MD5 checksum `ec9f0d3245c7200209c4dd9ec19055a9`.
-
-With the corpus downloaded, the following command will perform the complete regression, end to end, on any machine:
-
-```
-python src/main/python/run_regression.py --index --verify --search \
- --regression beir-v1.0.0-nq-splade-distil-cocodenser-medium \
- --corpus-path collections/beir-v1.0.0-splade_distil_cocodenser_medium-nq
-```
-
-Alternatively, you can simply copy and paste the commands below to obtain the same results.
-
-## Indexing
-
-Sample indexing command:
-
-```
-target/appassembler/bin/IndexCollection \
- -collection JsonVectorCollection \
- -input /path/to/beir-v1.0.0-nq-splade_distil_cocodenser_medium \
- -generator DefaultLuceneDocumentGenerator \
- -index indexes/lucene-index.beir-v1.0.0-nq-splade_distil_cocodenser_medium/ \
- -threads 16 -impact -pretokenized \
- >& logs/log.beir-v1.0.0-nq-splade_distil_cocodenser_medium &
-```
-
-The path `/path/to/beir-v1.0.0-nq-splade_distil_cocodenser_medium/` should point to the corpus downloaded above.
-
-The important indexing options to note here are `-impact -pretokenized`: the first tells Anserini not to encode BM25 document lengths into Lucene's norms (which is the default behavior), and the second tells it not to apply any additional tokenization to the pre-encoded tokens.
-Upon completion, we should have an index with 2,681,468 documents.
-
-For additional details, see the explanation of [common indexing options](../../docs/common-indexing-options.md).
-
-## Retrieval
-
-Topics and qrels are stored [here](https://github.com/castorini/anserini-tools/tree/master/topics-and-qrels), which is linked to the Anserini repo as a submodule.
-
-After indexing has completed, you should be able to perform retrieval as follows:
-
-```
-target/appassembler/bin/SearchCollection \
- -index indexes/lucene-index.beir-v1.0.0-nq-splade_distil_cocodenser_medium/ \
- -topics tools/topics-and-qrels/topics.beir-v1.0.0-nq.test.splade_distil_cocodenser_medium.tsv.gz \
- -topicReader TsvString \
- -output runs/run.beir-v1.0.0-nq-splade_distil_cocodenser_medium.splade_distil_cocodenser_medium.topics.beir-v1.0.0-nq.test.splade_distil_cocodenser_medium.txt \
- -impact -pretokenized -removeQuery -hits 1000 &
-```
-
-Evaluation can be performed using `trec_eval`:
-
-```
-target/appassembler/bin/trec_eval -c -m ndcg_cut.10 tools/topics-and-qrels/qrels.beir-v1.0.0-nq.test.txt runs/run.beir-v1.0.0-nq-splade_distil_cocodenser_medium.splade_distil_cocodenser_medium.topics.beir-v1.0.0-nq.test.splade_distil_cocodenser_medium.txt
-target/appassembler/bin/trec_eval -c -m recall.100 tools/topics-and-qrels/qrels.beir-v1.0.0-nq.test.txt runs/run.beir-v1.0.0-nq-splade_distil_cocodenser_medium.splade_distil_cocodenser_medium.topics.beir-v1.0.0-nq.test.splade_distil_cocodenser_medium.txt
-target/appassembler/bin/trec_eval -c -m recall.1000 tools/topics-and-qrels/qrels.beir-v1.0.0-nq.test.txt runs/run.beir-v1.0.0-nq-splade_distil_cocodenser_medium.splade_distil_cocodenser_medium.topics.beir-v1.0.0-nq.test.splade_distil_cocodenser_medium.txt
-```
-
-## Effectiveness
-
-With the above commands, you should be able to reproduce the following results:
-
-| **nDCG@10** | **SPLADE-distil CoCodenser Medium** |
-|:-------------|-------------------------------------|
-| BEIR (v1.0.0): NQ | 0.5442 |
-| **R@100** | **SPLADE-distil CoCodenser Medium** |
-| BEIR (v1.0.0): NQ | 0.9285 |
-| **R@1000** | **SPLADE-distil CoCodenser Medium** |
-| BEIR (v1.0.0): NQ | 0.9812 |
-
-
-## Reproduction Log[*](../../docs/reproducibility.md)
-
-To add to this reproduction log, modify [this template](../../src/main/resources/docgen/templates/beir-v1.0.0-nq-splade-distil-cocodenser-medium.template) and run `bin/build.sh` to rebuild the documentation.
diff --git a/docs/regressions/regressions-beir-v1.0.0-quora-splade-distil-cocodenser-medium.md b/docs/regressions/regressions-beir-v1.0.0-quora-splade-distil-cocodenser-medium.md
deleted file mode 100644
index 1d6267bf56..0000000000
--- a/docs/regressions/regressions-beir-v1.0.0-quora-splade-distil-cocodenser-medium.md
+++ /dev/null
@@ -1,103 +0,0 @@
-# Anserini Regressions: BEIR (v1.0.0) — Quora
-
-**Model**: SPLADE-distil CoCodenser Medium
-
-This page describes regression experiments, integrated into Anserini's regression testing framework, using SPLADE-distil CoCodenser Medium on [BEIR (v1.0.0) — Quora](http://beir.ai/).
-SPLADE-distil CoCodenser Medium is an intermediate model version between [SPLADEv2](https://arxiv.org/abs/2109.10086) and [SPLADE++](https://arxiv.org/abs/2205.04733), where the model used distillation (as in SPLADEv2), but started with the CoCondenser pre-trained model.
-See the [official SPLADE repo](https://github.com/naver/splade) for more details; the model itself can be downloaded [here](http://download-de.europe.naverlabs.com/Splade_Release_Jan22/splade_distil_CoCodenser_medium.tar.gz).
-
-The exact configurations for these regressions are stored in [this YAML file](../../src/main/resources/regression/beir-v1.0.0-quora-splade-distil-cocodenser-medium.yaml).
-Note that this page is automatically generated from [this template](../../src/main/resources/docgen/templates/beir-v1.0.0-quora-splade-distil-cocodenser-medium.template) as part of Anserini's regression pipeline, so do not modify this page directly; modify the template instead and then run `bin/build.sh` to rebuild the documentation.
-
-From one of our Waterloo servers (e.g., `orca`), the following command will perform the complete regression, end to end:
-
-```
-python src/main/python/run_regression.py --index --verify --search \
- --regression beir-v1.0.0-quora-splade-distil-cocodenser-medium
-```
-
-## Corpus
-
-We make available a version of the BEIR-v1.0.0 quora corpus that has already been processed with SPLADE-distil CoCodenser Medium, i.e., it has already gone through document expansion and term reweighting.
-Thus, no neural inference is involved at indexing or retrieval time.
-For details on how to train SPLADE-distil CoCodenser Medium and perform inference, please see the [guide provided by Naver Labs Europe](https://github.com/naver/splade/tree/main/anserini_evaluation).
-
-Download the corpus and unpack into `collections/`:
-
-```
-wget https://rgw.cs.uwaterloo.ca/JIMMYLIN-bucket0/data/beir-v1.0.0-splade_distil_cocodenser_medium-quora.tar -P collections/
-tar xvf collections/beir-v1.0.0-splade_distil_cocodenser_medium-quora.tar -C collections/
-```
-
-To confirm, the tarball is 112 MB and has MD5 checksum `fa86bbfa8195c6f45a0b0435ee268b0e`.
-
-With the corpus downloaded, the following command will perform the complete regression, end to end, on any machine:
-
-```
-python src/main/python/run_regression.py --index --verify --search \
- --regression beir-v1.0.0-quora-splade-distil-cocodenser-medium \
- --corpus-path collections/beir-v1.0.0-splade_distil_cocodenser_medium-quora
-```
-
-Alternatively, you can simply copy and paste the commands below to obtain the same results.
-
-## Indexing
-
-Sample indexing command:
-
-```
-target/appassembler/bin/IndexCollection \
- -collection JsonVectorCollection \
- -input /path/to/beir-v1.0.0-quora-splade_distil_cocodenser_medium \
- -generator DefaultLuceneDocumentGenerator \
- -index indexes/lucene-index.beir-v1.0.0-quora-splade_distil_cocodenser_medium/ \
- -threads 16 -impact -pretokenized \
- >& logs/log.beir-v1.0.0-quora-splade_distil_cocodenser_medium &
-```
-
-The path `/path/to/beir-v1.0.0-quora-splade_distil_cocodenser_medium/` should point to the corpus downloaded above.
-
-The important indexing options to note here are `-impact -pretokenized`: the first tells Anserini not to encode BM25 document lengths into Lucene's norms (which is the default behavior), and the second tells it not to apply any additional tokenization to the pre-encoded tokens.
-Upon completion, we should have an index with 522,931 documents.
-
-For additional details, see the explanation of [common indexing options](../../docs/common-indexing-options.md).
-
-## Retrieval
-
-Topics and qrels are stored [here](https://github.com/castorini/anserini-tools/tree/master/topics-and-qrels), which is linked to the Anserini repo as a submodule.
-
-After indexing has completed, you should be able to perform retrieval as follows:
-
-```
-target/appassembler/bin/SearchCollection \
- -index indexes/lucene-index.beir-v1.0.0-quora-splade_distil_cocodenser_medium/ \
- -topics tools/topics-and-qrels/topics.beir-v1.0.0-quora.test.splade_distil_cocodenser_medium.tsv.gz \
- -topicReader TsvString \
- -output runs/run.beir-v1.0.0-quora-splade_distil_cocodenser_medium.splade_distil_cocodenser_medium.topics.beir-v1.0.0-quora.test.splade_distil_cocodenser_medium.txt \
- -impact -pretokenized -removeQuery -hits 1000 &
-```
-
-Evaluation can be performed using `trec_eval`:
-
-```
-target/appassembler/bin/trec_eval -c -m ndcg_cut.10 tools/topics-and-qrels/qrels.beir-v1.0.0-quora.test.txt runs/run.beir-v1.0.0-quora-splade_distil_cocodenser_medium.splade_distil_cocodenser_medium.topics.beir-v1.0.0-quora.test.splade_distil_cocodenser_medium.txt
-target/appassembler/bin/trec_eval -c -m recall.100 tools/topics-and-qrels/qrels.beir-v1.0.0-quora.test.txt runs/run.beir-v1.0.0-quora-splade_distil_cocodenser_medium.splade_distil_cocodenser_medium.topics.beir-v1.0.0-quora.test.splade_distil_cocodenser_medium.txt
-target/appassembler/bin/trec_eval -c -m recall.1000 tools/topics-and-qrels/qrels.beir-v1.0.0-quora.test.txt runs/run.beir-v1.0.0-quora-splade_distil_cocodenser_medium.splade_distil_cocodenser_medium.topics.beir-v1.0.0-quora.test.splade_distil_cocodenser_medium.txt
-```
-
-## Effectiveness
-
-With the above commands, you should be able to reproduce the following results:
-
-| **nDCG@10** | **SPLADE-distil CoCodenser Medium** |
-|:-------------|-------------------------------------|
-| BEIR (v1.0.0): Quora | 0.8136 |
-| **R@100** | **SPLADE-distil CoCodenser Medium** |
-| BEIR (v1.0.0): Quora | 0.9817 |
-| **R@1000** | **SPLADE-distil CoCodenser Medium** |
-| BEIR (v1.0.0): Quora | 0.9979 |
-
-
-## Reproduction Log[*](../../docs/reproducibility.md)
-
-To add to this reproduction log, modify [this template](../../src/main/resources/docgen/templates/beir-v1.0.0-quora-splade-distil-cocodenser-medium.template) and run `bin/build.sh` to rebuild the documentation.
diff --git a/docs/regressions/regressions-beir-v1.0.0-robust04-splade-distil-cocodenser-medium.md b/docs/regressions/regressions-beir-v1.0.0-robust04-splade-distil-cocodenser-medium.md
deleted file mode 100644
index 0cee5a99ff..0000000000
--- a/docs/regressions/regressions-beir-v1.0.0-robust04-splade-distil-cocodenser-medium.md
+++ /dev/null
@@ -1,103 +0,0 @@
-# Anserini Regressions: BEIR (v1.0.0) — Robust04
-
-**Model**: SPLADE-distil CoCodenser Medium
-
-This page describes regression experiments, integrated into Anserini's regression testing framework, using SPLADE-distil CoCodenser Medium on [BEIR (v1.0.0) — Robust04](http://beir.ai/).
-SPLADE-distil CoCodenser Medium is an intermediate model version between [SPLADEv2](https://arxiv.org/abs/2109.10086) and [SPLADE++](https://arxiv.org/abs/2205.04733), where the model used distillation (as in SPLADEv2), but started with the CoCondenser pre-trained model.
-See the [official SPLADE repo](https://github.com/naver/splade) for more details; the model itself can be downloaded [here](http://download-de.europe.naverlabs.com/Splade_Release_Jan22/splade_distil_CoCodenser_medium.tar.gz).
-
-The exact configurations for these regressions are stored in [this YAML file](../../src/main/resources/regression/beir-v1.0.0-robust04-splade-distil-cocodenser-medium.yaml).
-Note that this page is automatically generated from [this template](../../src/main/resources/docgen/templates/beir-v1.0.0-robust04-splade-distil-cocodenser-medium.template) as part of Anserini's regression pipeline, so do not modify this page directly; modify the template instead and then run `bin/build.sh` to rebuild the documentation.
-
-From one of our Waterloo servers (e.g., `orca`), the following command will perform the complete regression, end to end:
-
-```
-python src/main/python/run_regression.py --index --verify --search \
- --regression beir-v1.0.0-robust04-splade-distil-cocodenser-medium
-```
-
-## Corpus
-
-We make available a version of the BEIR-v1.0.0 robust04 corpus that has already been processed with SPLADE-distil CoCodenser Medium, i.e., gone through document expansion and term reweighting.
-Thus, no neural inference is involved.
-For details on how to train SPLADE-distil CoCodenser Medium and perform inference, please see the [guide provided by Naver Labs Europe](https://github.com/naver/splade/tree/main/anserini_evaluation).
-
-Download the corpus and unpack into `collections/`:
-
-```
-wget https://rgw.cs.uwaterloo.ca/JIMMYLIN-bucket0/data/beir-v1.0.0-splade_distil_cocodenser_medium-robust04.tar -P collections/
-tar xvf collections/beir-v1.0.0-splade_distil_cocodenser_medium-robust04.tar -C collections/
-```
-
-To confirm, the tarball is 8.9 MB and has MD5 checksum `9c5a181e03cbc7f13abd0e0e4bf9158e`.
-
-With the corpus downloaded, the following command will perform the complete regression, end to end, on any machine:
-
-```
-python src/main/python/run_regression.py --index --verify --search \
- --regression beir-v1.0.0-robust04-splade-distil-cocodenser-medium \
- --corpus-path collections/beir-v1.0.0-splade_distil_cocodenser_medium-robust04
-```
-
-Alternatively, you can simply copy/paste from the commands below and obtain the same results.
-
-## Indexing
-
-Sample indexing command:
-
-```
-target/appassembler/bin/IndexCollection \
- -collection JsonVectorCollection \
- -input /path/to/beir-v1.0.0-robust04-splade_distil_cocodenser_medium \
- -generator DefaultLuceneDocumentGenerator \
- -index indexes/lucene-index.beir-v1.0.0-robust04-splade_distil_cocodenser_medium/ \
- -threads 16 -impact -pretokenized \
- >& logs/log.beir-v1.0.0-robust04-splade_distil_cocodenser_medium &
-```
-
-The path `/path/to/beir-v1.0.0-robust04-splade_distil_cocodenser_medium/` should point to the corpus downloaded above.
-
-The important indexing options to note here are `-impact -pretokenized`: the first tells Anserini not to encode BM25 document lengths into Lucene's norms (the default behavior), and the second tells Anserini not to apply any additional tokenization to the pre-encoded tokens.
-Upon completion, we should have an index with 8,674 documents.
-
-For additional details, see the explanation of [common indexing options](../../docs/common-indexing-options.md).
-
-## Retrieval
-
-Topics and qrels are stored in [anserini-tools](https://github.com/castorini/anserini-tools/tree/master/topics-and-qrels), which is linked to the Anserini repo as a submodule.
-
-After indexing has completed, you should be able to perform retrieval as follows:
-
-```
-target/appassembler/bin/SearchCollection \
- -index indexes/lucene-index.beir-v1.0.0-robust04-splade_distil_cocodenser_medium/ \
- -topics tools/topics-and-qrels/topics.beir-v1.0.0-robust04.test.splade_distil_cocodenser_medium.tsv.gz \
- -topicReader TsvString \
- -output runs/run.beir-v1.0.0-robust04-splade_distil_cocodenser_medium.splade_distil_cocodenser_medium.topics.beir-v1.0.0-robust04.test.splade_distil_cocodenser_medium.txt \
- -impact -pretokenized -removeQuery -hits 1000 &
-```
-
-Evaluation can be performed using `trec_eval`:
-
-```
-target/appassembler/bin/trec_eval -c -m ndcg_cut.10 tools/topics-and-qrels/qrels.beir-v1.0.0-robust04.test.txt runs/run.beir-v1.0.0-robust04-splade_distil_cocodenser_medium.splade_distil_cocodenser_medium.topics.beir-v1.0.0-robust04.test.splade_distil_cocodenser_medium.txt
-target/appassembler/bin/trec_eval -c -m recall.100 tools/topics-and-qrels/qrels.beir-v1.0.0-robust04.test.txt runs/run.beir-v1.0.0-robust04-splade_distil_cocodenser_medium.splade_distil_cocodenser_medium.topics.beir-v1.0.0-robust04.test.splade_distil_cocodenser_medium.txt
-target/appassembler/bin/trec_eval -c -m recall.1000 tools/topics-and-qrels/qrels.beir-v1.0.0-robust04.test.txt runs/run.beir-v1.0.0-robust04-splade_distil_cocodenser_medium.splade_distil_cocodenser_medium.topics.beir-v1.0.0-robust04.test.splade_distil_cocodenser_medium.txt
-```
-
-## Effectiveness
-
-With the above commands, you should be able to reproduce the following results:
-
-| **nDCG@10**                                                                                                   | **SPLADE-distil CoCodenser Medium**|
-|:-------------------------------------------------------------------------------------------------------------|-----------|
-| BEIR (v1.0.0): Robust04 | 0.4581 |
-| **R@100**                                                                                                     | **SPLADE-distil CoCodenser Medium**|
-| BEIR (v1.0.0): Robust04 | 0.3773 |
-| **R@1000**                                                                                                    | **SPLADE-distil CoCodenser Medium**|
-| BEIR (v1.0.0): Robust04 | 0.6099 |
-
-
-## Reproduction Log[*](../../docs/reproducibility.md)
-
-To add to this reproduction log, modify [this template](../../src/main/resources/docgen/templates/beir-v1.0.0-robust04-splade-distil-cocodenser-medium.template) and run `bin/build.sh` to rebuild the documentation.
diff --git a/docs/regressions/regressions-beir-v1.0.0-scidocs-splade-distil-cocodenser-medium.md b/docs/regressions/regressions-beir-v1.0.0-scidocs-splade-distil-cocodenser-medium.md
deleted file mode 100644
index 44df0f40c1..0000000000
--- a/docs/regressions/regressions-beir-v1.0.0-scidocs-splade-distil-cocodenser-medium.md
+++ /dev/null
@@ -1,103 +0,0 @@
-# Anserini Regressions: BEIR (v1.0.0) — SCIDOCS
-
-**Model**: SPLADE-distil CoCodenser Medium
-
-This page describes regression experiments, integrated into Anserini's regression testing framework, using SPLADE-distil CoCodenser Medium on [BEIR (v1.0.0) — SCIDOCS](http://beir.ai/).
-SPLADE-distil CoCodenser Medium is an intermediate model version between [SPLADEv2](https://arxiv.org/abs/2109.10086) and [SPLADE++](https://arxiv.org/abs/2205.04733), where the model used distillation (as in SPLADEv2), but started with the CoCondenser pre-trained model.
-See the [official SPLADE repo](https://github.com/naver/splade) for more details; the model itself can be downloaded [here](http://download-de.europe.naverlabs.com/Splade_Release_Jan22/splade_distil_CoCodenser_medium.tar.gz).
-
-The exact configurations for these regressions are stored in [this YAML file](../../src/main/resources/regression/beir-v1.0.0-scidocs-splade-distil-cocodenser-medium.yaml).
-Note that this page is automatically generated from [this template](../../src/main/resources/docgen/templates/beir-v1.0.0-scidocs-splade-distil-cocodenser-medium.template) as part of Anserini's regression pipeline, so do not modify this page directly; modify the template instead and then run `bin/build.sh` to rebuild the documentation.
-
-From one of our Waterloo servers (e.g., `orca`), the following command will perform the complete regression, end to end:
-
-```
-python src/main/python/run_regression.py --index --verify --search \
- --regression beir-v1.0.0-scidocs-splade-distil-cocodenser-medium
-```
-
-## Corpus
-
-We make available a version of the BEIR-v1.0.0 scidocs corpus that has already been processed with SPLADE-distil CoCodenser Medium, i.e., gone through document expansion and term reweighting.
-Thus, no neural inference is involved.
-For details on how to train SPLADE-distil CoCodenser Medium and perform inference, please see the [guide provided by Naver Labs Europe](https://github.com/naver/splade/tree/main/anserini_evaluation).
-
-Download the corpus and unpack into `collections/`:
-
-```
-wget https://rgw.cs.uwaterloo.ca/JIMMYLIN-bucket0/data/beir-v1.0.0-splade_distil_cocodenser_medium-scidocs.tar -P collections/
-tar xvf collections/beir-v1.0.0-splade_distil_cocodenser_medium-scidocs.tar -C collections/
-```
-
-To confirm, the tarball is 24 MB and has MD5 checksum `535c9dcb9698bec4345c07d96e5b1e75`.
-
-With the corpus downloaded, the following command will perform the complete regression, end to end, on any machine:
-
-```
-python src/main/python/run_regression.py --index --verify --search \
- --regression beir-v1.0.0-scidocs-splade-distil-cocodenser-medium \
- --corpus-path collections/beir-v1.0.0-splade_distil_cocodenser_medium-scidocs
-```
-
-Alternatively, you can simply copy/paste from the commands below and obtain the same results.
-
-## Indexing
-
-Sample indexing command:
-
-```
-target/appassembler/bin/IndexCollection \
- -collection JsonVectorCollection \
- -input /path/to/beir-v1.0.0-scidocs-splade_distil_cocodenser_medium \
- -generator DefaultLuceneDocumentGenerator \
- -index indexes/lucene-index.beir-v1.0.0-scidocs-splade_distil_cocodenser_medium/ \
- -threads 16 -impact -pretokenized \
- >& logs/log.beir-v1.0.0-scidocs-splade_distil_cocodenser_medium &
-```
-
-The path `/path/to/beir-v1.0.0-scidocs-splade_distil_cocodenser_medium/` should point to the corpus downloaded above.
-
-The important indexing options to note here are `-impact -pretokenized`: the first tells Anserini not to encode BM25 document lengths into Lucene's norms (the default behavior), and the second tells Anserini not to apply any additional tokenization to the pre-encoded tokens.
-Upon completion, we should have an index with 25,657 documents.
-
-For additional details, see the explanation of [common indexing options](../../docs/common-indexing-options.md).
-
-## Retrieval
-
-Topics and qrels are stored in [anserini-tools](https://github.com/castorini/anserini-tools/tree/master/topics-and-qrels), which is linked to the Anserini repo as a submodule.
-
-After indexing has completed, you should be able to perform retrieval as follows:
-
-```
-target/appassembler/bin/SearchCollection \
- -index indexes/lucene-index.beir-v1.0.0-scidocs-splade_distil_cocodenser_medium/ \
- -topics tools/topics-and-qrels/topics.beir-v1.0.0-scidocs.test.splade_distil_cocodenser_medium.tsv.gz \
- -topicReader TsvString \
- -output runs/run.beir-v1.0.0-scidocs-splade_distil_cocodenser_medium.splade_distil_cocodenser_medium.topics.beir-v1.0.0-scidocs.test.splade_distil_cocodenser_medium.txt \
- -impact -pretokenized -removeQuery -hits 1000 &
-```
-
-Evaluation can be performed using `trec_eval`:
-
-```
-target/appassembler/bin/trec_eval -c -m ndcg_cut.10 tools/topics-and-qrels/qrels.beir-v1.0.0-scidocs.test.txt runs/run.beir-v1.0.0-scidocs-splade_distil_cocodenser_medium.splade_distil_cocodenser_medium.topics.beir-v1.0.0-scidocs.test.splade_distil_cocodenser_medium.txt
-target/appassembler/bin/trec_eval -c -m recall.100 tools/topics-and-qrels/qrels.beir-v1.0.0-scidocs.test.txt runs/run.beir-v1.0.0-scidocs-splade_distil_cocodenser_medium.splade_distil_cocodenser_medium.topics.beir-v1.0.0-scidocs.test.splade_distil_cocodenser_medium.txt
-target/appassembler/bin/trec_eval -c -m recall.1000 tools/topics-and-qrels/qrels.beir-v1.0.0-scidocs.test.txt runs/run.beir-v1.0.0-scidocs-splade_distil_cocodenser_medium.splade_distil_cocodenser_medium.topics.beir-v1.0.0-scidocs.test.splade_distil_cocodenser_medium.txt
-```
-
-## Effectiveness
-
-With the above commands, you should be able to reproduce the following results:
-
-| **nDCG@10**                                                                                                   | **SPLADE-distil CoCodenser Medium**|
-|:-------------------------------------------------------------------------------------------------------------|-----------|
-| BEIR (v1.0.0): SCIDOCS | 0.1590 |
-| **R@100**                                                                                                     | **SPLADE-distil CoCodenser Medium**|
-| BEIR (v1.0.0): SCIDOCS | 0.3671 |
-| **R@1000**                                                                                                    | **SPLADE-distil CoCodenser Medium**|
-| BEIR (v1.0.0): SCIDOCS | 0.5891 |
-
-
-## Reproduction Log[*](../../docs/reproducibility.md)
-
-To add to this reproduction log, modify [this template](../../src/main/resources/docgen/templates/beir-v1.0.0-scidocs-splade-distil-cocodenser-medium.template) and run `bin/build.sh` to rebuild the documentation.
diff --git a/docs/regressions/regressions-beir-v1.0.0-scifact-splade-distil-cocodenser-medium.md b/docs/regressions/regressions-beir-v1.0.0-scifact-splade-distil-cocodenser-medium.md
deleted file mode 100644
index d4c40d9e09..0000000000
--- a/docs/regressions/regressions-beir-v1.0.0-scifact-splade-distil-cocodenser-medium.md
+++ /dev/null
@@ -1,103 +0,0 @@
-# Anserini Regressions: BEIR (v1.0.0) — SciFact
-
-**Model**: SPLADE-distil CoCodenser Medium
-
-This page describes regression experiments, integrated into Anserini's regression testing framework, using SPLADE-distil CoCodenser Medium on [BEIR (v1.0.0) — SciFact](http://beir.ai/).
-SPLADE-distil CoCodenser Medium is an intermediate model version between [SPLADEv2](https://arxiv.org/abs/2109.10086) and [SPLADE++](https://arxiv.org/abs/2205.04733), where the model used distillation (as in SPLADEv2), but started with the CoCondenser pre-trained model.
-See the [official SPLADE repo](https://github.com/naver/splade) for more details; the model itself can be downloaded [here](http://download-de.europe.naverlabs.com/Splade_Release_Jan22/splade_distil_CoCodenser_medium.tar.gz).
-
-The exact configurations for these regressions are stored in [this YAML file](../../src/main/resources/regression/beir-v1.0.0-scifact-splade-distil-cocodenser-medium.yaml).
-Note that this page is automatically generated from [this template](../../src/main/resources/docgen/templates/beir-v1.0.0-scifact-splade-distil-cocodenser-medium.template) as part of Anserini's regression pipeline, so do not modify this page directly; modify the template instead and then run `bin/build.sh` to rebuild the documentation.
-
-From one of our Waterloo servers (e.g., `orca`), the following command will perform the complete regression, end to end:
-
-```
-python src/main/python/run_regression.py --index --verify --search \
- --regression beir-v1.0.0-scifact-splade-distil-cocodenser-medium
-```
-
-## Corpus
-
-We make available a version of the BEIR-v1.0.0 scifact corpus that has already been processed with SPLADE-distil CoCodenser Medium, i.e., gone through document expansion and term reweighting.
-Thus, no neural inference is involved.
-For details on how to train SPLADE-distil CoCodenser Medium and perform inference, please see the [guide provided by Naver Labs Europe](https://github.com/naver/splade/tree/main/anserini_evaluation).
-
-Download the corpus and unpack into `collections/`:
-
-```
-wget https://rgw.cs.uwaterloo.ca/JIMMYLIN-bucket0/data/beir-v1.0.0-splade_distil_cocodenser_medium-scifact.tar -P collections/
-tar xvf collections/beir-v1.0.0-splade_distil_cocodenser_medium-scifact.tar -C collections/
-```
-
-To confirm, the tarball is 4.8 MB and has MD5 checksum `8b82c0fcbf0d1287fb1c5044e8422902`.
-
-With the corpus downloaded, the following command will perform the complete regression, end to end, on any machine:
-
-```
-python src/main/python/run_regression.py --index --verify --search \
- --regression beir-v1.0.0-scifact-splade-distil-cocodenser-medium \
- --corpus-path collections/beir-v1.0.0-splade_distil_cocodenser_medium-scifact
-```
-
-Alternatively, you can simply copy/paste from the commands below and obtain the same results.
-
-## Indexing
-
-Sample indexing command:
-
-```
-target/appassembler/bin/IndexCollection \
- -collection JsonVectorCollection \
- -input /path/to/beir-v1.0.0-scifact-splade_distil_cocodenser_medium \
- -generator DefaultLuceneDocumentGenerator \
- -index indexes/lucene-index.beir-v1.0.0-scifact-splade_distil_cocodenser_medium/ \
- -threads 16 -impact -pretokenized \
- >& logs/log.beir-v1.0.0-scifact-splade_distil_cocodenser_medium &
-```
-
-The path `/path/to/beir-v1.0.0-scifact-splade_distil_cocodenser_medium/` should point to the corpus downloaded above.
-
-The important indexing options to note here are `-impact -pretokenized`: the first tells Anserini not to encode BM25 document lengths into Lucene's norms (the default behavior), and the second tells Anserini not to apply any additional tokenization to the pre-encoded tokens.
-Upon completion, we should have an index with 5,183 documents.
-
-For additional details, see the explanation of [common indexing options](../../docs/common-indexing-options.md).
-
-## Retrieval
-
-Topics and qrels are stored in [anserini-tools](https://github.com/castorini/anserini-tools/tree/master/topics-and-qrels), which is linked to the Anserini repo as a submodule.
-
-After indexing has completed, you should be able to perform retrieval as follows:
-
-```
-target/appassembler/bin/SearchCollection \
- -index indexes/lucene-index.beir-v1.0.0-scifact-splade_distil_cocodenser_medium/ \
- -topics tools/topics-and-qrels/topics.beir-v1.0.0-scifact.test.splade_distil_cocodenser_medium.tsv.gz \
- -topicReader TsvString \
- -output runs/run.beir-v1.0.0-scifact-splade_distil_cocodenser_medium.splade_distil_cocodenser_medium.topics.beir-v1.0.0-scifact.test.splade_distil_cocodenser_medium.txt \
- -impact -pretokenized -removeQuery -hits 1000 &
-```
-
-Evaluation can be performed using `trec_eval`:
-
-```
-target/appassembler/bin/trec_eval -c -m ndcg_cut.10 tools/topics-and-qrels/qrels.beir-v1.0.0-scifact.test.txt runs/run.beir-v1.0.0-scifact-splade_distil_cocodenser_medium.splade_distil_cocodenser_medium.topics.beir-v1.0.0-scifact.test.splade_distil_cocodenser_medium.txt
-target/appassembler/bin/trec_eval -c -m recall.100 tools/topics-and-qrels/qrels.beir-v1.0.0-scifact.test.txt runs/run.beir-v1.0.0-scifact-splade_distil_cocodenser_medium.splade_distil_cocodenser_medium.topics.beir-v1.0.0-scifact.test.splade_distil_cocodenser_medium.txt
-target/appassembler/bin/trec_eval -c -m recall.1000 tools/topics-and-qrels/qrels.beir-v1.0.0-scifact.test.txt runs/run.beir-v1.0.0-scifact-splade_distil_cocodenser_medium.splade_distil_cocodenser_medium.topics.beir-v1.0.0-scifact.test.splade_distil_cocodenser_medium.txt
-```
-
-## Effectiveness
-
-With the above commands, you should be able to reproduce the following results:
-
-| **nDCG@10**                                                                                                   | **SPLADE-distil CoCodenser Medium**|
-|:-------------------------------------------------------------------------------------------------------------|-----------|
-| BEIR (v1.0.0): SciFact | 0.6992 |
-| **R@100**                                                                                                     | **SPLADE-distil CoCodenser Medium**|
-| BEIR (v1.0.0): SciFact | 0.9270 |
-| **R@1000**                                                                                                    | **SPLADE-distil CoCodenser Medium**|
-| BEIR (v1.0.0): SciFact | 0.9767 |
-
-
-## Reproduction Log[*](../../docs/reproducibility.md)
-
-To add to this reproduction log, modify [this template](../../src/main/resources/docgen/templates/beir-v1.0.0-scifact-splade-distil-cocodenser-medium.template) and run `bin/build.sh` to rebuild the documentation.
diff --git a/docs/regressions/regressions-beir-v1.0.0-signal1m-splade-distil-cocodenser-medium.md b/docs/regressions/regressions-beir-v1.0.0-signal1m-splade-distil-cocodenser-medium.md
deleted file mode 100644
index 4a46105ab1..0000000000
--- a/docs/regressions/regressions-beir-v1.0.0-signal1m-splade-distil-cocodenser-medium.md
+++ /dev/null
@@ -1,103 +0,0 @@
-# Anserini Regressions: BEIR (v1.0.0) — Signal-1M
-
-**Model**: SPLADE-distil CoCodenser Medium
-
-This page describes regression experiments, integrated into Anserini's regression testing framework, using SPLADE-distil CoCodenser Medium on [BEIR (v1.0.0) — Signal-1M](http://beir.ai/).
-SPLADE-distil CoCodenser Medium is an intermediate model version between [SPLADEv2](https://arxiv.org/abs/2109.10086) and [SPLADE++](https://arxiv.org/abs/2205.04733), where the model used distillation (as in SPLADEv2), but started with the CoCondenser pre-trained model.
-See the [official SPLADE repo](https://github.com/naver/splade) for more details; the model itself can be downloaded [here](http://download-de.europe.naverlabs.com/Splade_Release_Jan22/splade_distil_CoCodenser_medium.tar.gz).
-
-The exact configurations for these regressions are stored in [this YAML file](../../src/main/resources/regression/beir-v1.0.0-signal1m-splade-distil-cocodenser-medium.yaml).
-Note that this page is automatically generated from [this template](../../src/main/resources/docgen/templates/beir-v1.0.0-signal1m-splade-distil-cocodenser-medium.template) as part of Anserini's regression pipeline, so do not modify this page directly; modify the template instead and then run `bin/build.sh` to rebuild the documentation.
-
-From one of our Waterloo servers (e.g., `orca`), the following command will perform the complete regression, end to end:
-
-```
-python src/main/python/run_regression.py --index --verify --search \
- --regression beir-v1.0.0-signal1m-splade-distil-cocodenser-medium
-```
-
-## Corpus
-
-We make available a version of the BEIR-v1.0.0 signal1m corpus that has already been processed with SPLADE-distil CoCodenser Medium, i.e., gone through document expansion and term reweighting.
-Thus, no neural inference is involved.
-For details on how to train SPLADE-distil CoCodenser Medium and perform inference, please see the [guide provided by Naver Labs Europe](https://github.com/naver/splade/tree/main/anserini_evaluation).
-
-Download the corpus and unpack into `collections/`:
-
-```
-wget https://rgw.cs.uwaterloo.ca/JIMMYLIN-bucket0/data/beir-v1.0.0-splade_distil_cocodenser_medium-signal1m.tar -P collections/
-tar xvf collections/beir-v1.0.0-splade_distil_cocodenser_medium-signal1m.tar -C collections/
-```
-
-To confirm, the tarball is 8.9 MB and has MD5 checksum `9c5a181e03cbc7f13abd0e0e4bf9158e`.
-
-With the corpus downloaded, the following command will perform the complete regression, end to end, on any machine:
-
-```
-python src/main/python/run_regression.py --index --verify --search \
- --regression beir-v1.0.0-signal1m-splade-distil-cocodenser-medium \
- --corpus-path collections/beir-v1.0.0-splade_distil_cocodenser_medium-signal1m
-```
-
-Alternatively, you can simply copy/paste from the commands below and obtain the same results.
-
-## Indexing
-
-Sample indexing command:
-
-```
-target/appassembler/bin/IndexCollection \
- -collection JsonVectorCollection \
- -input /path/to/beir-v1.0.0-signal1m-splade_distil_cocodenser_medium \
- -generator DefaultLuceneDocumentGenerator \
- -index indexes/lucene-index.beir-v1.0.0-signal1m-splade_distil_cocodenser_medium/ \
- -threads 16 -impact -pretokenized \
- >& logs/log.beir-v1.0.0-signal1m-splade_distil_cocodenser_medium &
-```
-
-The path `/path/to/beir-v1.0.0-signal1m-splade_distil_cocodenser_medium/` should point to the corpus downloaded above.
-
-The important indexing options to note here are `-impact -pretokenized`: the first tells Anserini not to encode BM25 document lengths into Lucene's norms (the default behavior), and the second tells Anserini not to apply any additional tokenization to the pre-encoded tokens.
-Upon completion, we should have an index with 8,674 documents.
-
-For additional details, see the explanation of [common indexing options](../../docs/common-indexing-options.md).
-
-## Retrieval
-
-Topics and qrels are stored in [anserini-tools](https://github.com/castorini/anserini-tools/tree/master/topics-and-qrels), which is linked to the Anserini repo as a submodule.
-
-After indexing has completed, you should be able to perform retrieval as follows:
-
-```
-target/appassembler/bin/SearchCollection \
- -index indexes/lucene-index.beir-v1.0.0-signal1m-splade_distil_cocodenser_medium/ \
- -topics tools/topics-and-qrels/topics.beir-v1.0.0-signal1m.test.splade_distil_cocodenser_medium.tsv.gz \
- -topicReader TsvString \
- -output runs/run.beir-v1.0.0-signal1m-splade_distil_cocodenser_medium.splade_distil_cocodenser_medium.topics.beir-v1.0.0-signal1m.test.splade_distil_cocodenser_medium.txt \
- -impact -pretokenized -removeQuery -hits 1000 &
-```
-
-Evaluation can be performed using `trec_eval`:
-
-```
-target/appassembler/bin/trec_eval -c -m ndcg_cut.10 tools/topics-and-qrels/qrels.beir-v1.0.0-signal1m.test.txt runs/run.beir-v1.0.0-signal1m-splade_distil_cocodenser_medium.splade_distil_cocodenser_medium.topics.beir-v1.0.0-signal1m.test.splade_distil_cocodenser_medium.txt
-target/appassembler/bin/trec_eval -c -m recall.100 tools/topics-and-qrels/qrels.beir-v1.0.0-signal1m.test.txt runs/run.beir-v1.0.0-signal1m-splade_distil_cocodenser_medium.splade_distil_cocodenser_medium.topics.beir-v1.0.0-signal1m.test.splade_distil_cocodenser_medium.txt
-target/appassembler/bin/trec_eval -c -m recall.1000 tools/topics-and-qrels/qrels.beir-v1.0.0-signal1m.test.txt runs/run.beir-v1.0.0-signal1m-splade_distil_cocodenser_medium.splade_distil_cocodenser_medium.topics.beir-v1.0.0-signal1m.test.splade_distil_cocodenser_medium.txt
-```
-
-## Effectiveness
-
-With the above commands, you should be able to reproduce the following results:
-
-| **nDCG@10**                                                                                                   | **SPLADE-distil CoCodenser Medium**|
-|:-------------------------------------------------------------------------------------------------------------|-----------|
-| BEIR (v1.0.0): Signal-1M | 0.2957 |
-| **R@100**                                                                                                     | **SPLADE-distil CoCodenser Medium**|
-| BEIR (v1.0.0): Signal-1M | 0.3311 |
-| **R@1000**                                                                                                    | **SPLADE-distil CoCodenser Medium**|
-| BEIR (v1.0.0): Signal-1M | 0.5514 |
-
-
-## Reproduction Log[*](../../docs/reproducibility.md)
-
-To add to this reproduction log, modify [this template](../../src/main/resources/docgen/templates/beir-v1.0.0-signal1m-splade-distil-cocodenser-medium.template) and run `bin/build.sh` to rebuild the documentation.
diff --git a/docs/regressions/regressions-beir-v1.0.0-trec-covid-splade-distil-cocodenser-medium.md b/docs/regressions/regressions-beir-v1.0.0-trec-covid-splade-distil-cocodenser-medium.md
deleted file mode 100644
index 6ba51e975a..0000000000
--- a/docs/regressions/regressions-beir-v1.0.0-trec-covid-splade-distil-cocodenser-medium.md
+++ /dev/null
@@ -1,103 +0,0 @@
-# Anserini Regressions: BEIR (v1.0.0) — TREC-COVID
-
-**Model**: SPLADE-distil CoCodenser Medium
-
-This page describes regression experiments, integrated into Anserini's regression testing framework, using SPLADE-distil CoCodenser Medium on [BEIR (v1.0.0) — TREC-COVID](http://beir.ai/).
-SPLADE-distil CoCodenser Medium is an intermediate model version between [SPLADEv2](https://arxiv.org/abs/2109.10086) and [SPLADE++](https://arxiv.org/abs/2205.04733), where the model used distillation (as in SPLADEv2), but started with the CoCondenser pre-trained model.
-See the [official SPLADE repo](https://github.com/naver/splade) for more details; the model itself can be downloaded [here](http://download-de.europe.naverlabs.com/Splade_Release_Jan22/splade_distil_CoCodenser_medium.tar.gz).
-
-The exact configurations for these regressions are stored in [this YAML file](../../src/main/resources/regression/beir-v1.0.0-trec-covid-splade-distil-cocodenser-medium.yaml).
-Note that this page is automatically generated from [this template](../../src/main/resources/docgen/templates/beir-v1.0.0-trec-covid-splade-distil-cocodenser-medium.template) as part of Anserini's regression pipeline, so do not modify this page directly; modify the template instead and then run `bin/build.sh` to rebuild the documentation.
-
-From one of our Waterloo servers (e.g., `orca`), the following command will perform the complete regression, end to end:
-
-```
-python src/main/python/run_regression.py --index --verify --search \
- --regression beir-v1.0.0-trec-covid-splade-distil-cocodenser-medium
-```
-
-## Corpus
-
-We make available a version of the BEIR-v1.0.0 trec-covid corpus that has already been processed with SPLADE-distil CoCodenser Medium, i.e., gone through document expansion and term reweighting.
-Thus, no neural inference is involved.
-For details on how to train SPLADE-distil CoCodenser Medium and perform inference, please see the [guide provided by Naver Labs Europe](https://github.com/naver/splade/tree/main/anserini_evaluation).
-
-Download the corpus and unpack into `collections/`:
-
-```
-wget https://rgw.cs.uwaterloo.ca/JIMMYLIN-bucket0/data/beir-v1.0.0-splade_distil_cocodenser_medium-trec-covid.tar -P collections/
-tar xvf collections/beir-v1.0.0-splade_distil_cocodenser_medium-trec-covid.tar -C collections/
-```
-
-To confirm, the tarball is 133 MB and has MD5 checksum `84b9b090fa7ad7f08e2208a59837c216`.
-
-With the corpus downloaded, the following command will perform the complete regression, end to end, on any machine:
-
-```
-python src/main/python/run_regression.py --index --verify --search \
- --regression beir-v1.0.0-trec-covid-splade-distil-cocodenser-medium \
- --corpus-path collections/beir-v1.0.0-splade_distil_cocodenser_medium-trec-covid
-```
-
-Alternatively, you can simply copy/paste from the commands below and obtain the same results.
-
-## Indexing
-
-Sample indexing command:
-
-```
-target/appassembler/bin/IndexCollection \
- -collection JsonVectorCollection \
- -input /path/to/beir-v1.0.0-trec-covid-splade_distil_cocodenser_medium \
- -generator DefaultLuceneDocumentGenerator \
- -index indexes/lucene-index.beir-v1.0.0-trec-covid-splade_distil_cocodenser_medium/ \
- -threads 16 -impact -pretokenized \
- >& logs/log.beir-v1.0.0-trec-covid-splade_distil_cocodenser_medium &
-```
-
-The path `/path/to/beir-v1.0.0-trec-covid-splade_distil_cocodenser_medium/` should point to the corpus downloaded above.
-
-The important indexing options to note here are `-impact -pretokenized`: the first tells Anserini not to encode BM25 document lengths into Lucene's norms (the default behavior), and the second tells Anserini not to apply any additional tokenization to the pre-encoded tokens.
-Upon completion, we should have an index with 171,332 documents.
-
-For additional details, see the explanation of [common indexing options](../../docs/common-indexing-options.md).
-
-## Retrieval
-
-Topics and qrels are stored in [anserini-tools](https://github.com/castorini/anserini-tools/tree/master/topics-and-qrels), which is linked to the Anserini repo as a submodule.
-
-After indexing has completed, you should be able to perform retrieval as follows:
-
-```
-target/appassembler/bin/SearchCollection \
- -index indexes/lucene-index.beir-v1.0.0-trec-covid-splade_distil_cocodenser_medium/ \
- -topics tools/topics-and-qrels/topics.beir-v1.0.0-trec-covid.test.splade_distil_cocodenser_medium.tsv.gz \
- -topicReader TsvString \
- -output runs/run.beir-v1.0.0-trec-covid-splade_distil_cocodenser_medium.splade_distil_cocodenser_medium.topics.beir-v1.0.0-trec-covid.test.splade_distil_cocodenser_medium.txt \
- -impact -pretokenized -removeQuery -hits 1000 &
-```
-
-Evaluation can be performed using `trec_eval`:
-
-```
-target/appassembler/bin/trec_eval -c -m ndcg_cut.10 tools/topics-and-qrels/qrels.beir-v1.0.0-trec-covid.test.txt runs/run.beir-v1.0.0-trec-covid-splade_distil_cocodenser_medium.splade_distil_cocodenser_medium.topics.beir-v1.0.0-trec-covid.test.splade_distil_cocodenser_medium.txt
-target/appassembler/bin/trec_eval -c -m recall.100 tools/topics-and-qrels/qrels.beir-v1.0.0-trec-covid.test.txt runs/run.beir-v1.0.0-trec-covid-splade_distil_cocodenser_medium.splade_distil_cocodenser_medium.topics.beir-v1.0.0-trec-covid.test.splade_distil_cocodenser_medium.txt
-target/appassembler/bin/trec_eval -c -m recall.1000 tools/topics-and-qrels/qrels.beir-v1.0.0-trec-covid.test.txt runs/run.beir-v1.0.0-trec-covid-splade_distil_cocodenser_medium.splade_distil_cocodenser_medium.topics.beir-v1.0.0-trec-covid.test.splade_distil_cocodenser_medium.txt
-```
-
-## Effectiveness
-
-With the above commands, you should be able to reproduce the following results:
-
-| **nDCG@10**                                                                                                   | **SPLADE-distil CoCodenser Medium**|
-|:-------------------------------------------------------------------------------------------------------------|-----------|
-| BEIR (v1.0.0): TREC-COVID | 0.7109 |
-| **R@100**                                                                                                     | **SPLADE-distil CoCodenser Medium**|
-| BEIR (v1.0.0): TREC-COVID | 0.1308 |
-| **R@1000**                                                                                                    | **SPLADE-distil CoCodenser Medium**|
-| BEIR (v1.0.0): TREC-COVID | 0.4433 |
-
-
-## Reproduction Log[*](../../docs/reproducibility.md)
-
-To add to this reproduction log, modify [this template](../../src/main/resources/docgen/templates/beir-v1.0.0-trec-covid-splade-distil-cocodenser-medium.template) and run `bin/build.sh` to rebuild the documentation.
diff --git a/docs/regressions/regressions-beir-v1.0.0-trec-news-splade-distil-cocodenser-medium.md b/docs/regressions/regressions-beir-v1.0.0-trec-news-splade-distil-cocodenser-medium.md
deleted file mode 100644
index e660e656d7..0000000000
--- a/docs/regressions/regressions-beir-v1.0.0-trec-news-splade-distil-cocodenser-medium.md
+++ /dev/null
@@ -1,103 +0,0 @@
-# Anserini Regressions: BEIR (v1.0.0) — TREC-NEWS
-
-**Model**: SPLADE-distil CoCodenser Medium
-
-This page describes regression experiments, integrated into Anserini's regression testing framework, using SPLADE-distil CoCodenser Medium on [BEIR (v1.0.0) — TREC-NEWS](http://beir.ai/).
-SPLADE-distil CoCodenser Medium is an intermediate model version between [SPLADEv2](https://arxiv.org/abs/2109.10086) and [SPLADE++](https://arxiv.org/abs/2205.04733), where the model used distillation (as in SPLADEv2), but started with the CoCondenser pre-trained model.
-See the [official SPLADE repo](https://github.com/naver/splade) for more details; the model itself can be downloaded [here](http://download-de.europe.naverlabs.com/Splade_Release_Jan22/splade_distil_CoCodenser_medium.tar.gz).
-
-The exact configurations for these regressions are stored in [this YAML file](../../src/main/resources/regression/beir-v1.0.0-trec-news-splade-distil-cocodenser-medium.yaml).
-Note that this page is automatically generated from [this template](../../src/main/resources/docgen/templates/beir-v1.0.0-trec-news-splade-distil-cocodenser-medium.template) as part of Anserini's regression pipeline, so do not modify this page directly; modify the template instead and then run `bin/build.sh` to rebuild the documentation.
-
-From one of our Waterloo servers (e.g., `orca`), the following command will perform the complete regression, end to end:
-
-```
-python src/main/python/run_regression.py --index --verify --search \
- --regression beir-v1.0.0-trec-news-splade-distil-cocodenser-medium
-```
-
-## Corpus
-
-We make available a version of the BEIR-v1.0.0 trec-news corpus that has already been processed with SPLADE-distil CoCodenser Medium, i.e., gone through document expansion and term reweighting.
-Thus, no neural inference is involved.
-For details on how to train SPLADE-distil CoCodenser Medium and perform inference, please see the [guide provided by Naver Labs Europe](https://github.com/naver/splade/tree/main/anserini_evaluation).
-
-Download the corpus and unpack into `collections/`:
-
-```
-wget https://rgw.cs.uwaterloo.ca/JIMMYLIN-bucket0/data/beir-v1.0.0-splade_distil_cocodenser_medium-trec-news.tar -P collections/
-tar xvf collections/beir-v1.0.0-splade_distil_cocodenser_medium-trec-news.tar -C collections/
-```
-
-To confirm, the tarball is 8.9 MB and has MD5 checksum `9c5a181e03cbc7f13abd0e0e4bf9158e`.
-
-With the corpus downloaded, the following command will perform the complete regression, end to end, on any machine:
-
-```
-python src/main/python/run_regression.py --index --verify --search \
- --regression beir-v1.0.0-trec-news-splade-distil-cocodenser-medium \
- --corpus-path collections/beir-v1.0.0-splade_distil_cocodenser_medium-trec-news
-```
-
-Alternatively, you can simply copy/paste from the commands below and obtain the same results.
-
-## Indexing
-
-Sample indexing command:
-
-```
-target/appassembler/bin/IndexCollection \
- -collection JsonVectorCollection \
- -input /path/to/beir-v1.0.0-trec-news-splade_distil_cocodenser_medium \
- -generator DefaultLuceneDocumentGenerator \
- -index indexes/lucene-index.beir-v1.0.0-trec-news-splade_distil_cocodenser_medium/ \
- -threads 16 -impact -pretokenized \
- >& logs/log.beir-v1.0.0-trec-news-splade_distil_cocodenser_medium &
-```
-
-The path `/path/to/beir-v1.0.0-trec-news-splade_distil_cocodenser_medium/` should point to the corpus downloaded above.
-
-The important indexing options to note here are `-impact -pretokenized`: the first tells Anserini not to encode BM25 doclengths into Lucene's norms (encoding doclengths is the default) and the second tells it not to apply any additional tokenization to the pre-encoded tokens.
-Upon completion, we should have an index with 8,674 documents.
-
-For additional details, see explanation of [common indexing options](../../docs/common-indexing-options.md).
-
-## Retrieval
-
-Topics and qrels are stored [here](https://github.com/castorini/anserini-tools/tree/master/topics-and-qrels), which is linked to the Anserini repo as a submodule.
-
-After indexing has completed, you should be able to perform retrieval as follows:
-
-```
-target/appassembler/bin/SearchCollection \
- -index indexes/lucene-index.beir-v1.0.0-trec-news-splade_distil_cocodenser_medium/ \
- -topics tools/topics-and-qrels/topics.beir-v1.0.0-trec-news.test.splade_distil_cocodenser_medium.tsv.gz \
- -topicReader TsvString \
- -output runs/run.beir-v1.0.0-trec-news-splade_distil_cocodenser_medium.splade_distil_cocodenser_medium.topics.beir-v1.0.0-trec-news.test.splade_distil_cocodenser_medium.txt \
- -impact -pretokenized -removeQuery -hits 1000 &
-```
-
-Evaluation can be performed using `trec_eval`:
-
-```
-target/appassembler/bin/trec_eval -c -m ndcg_cut.10 tools/topics-and-qrels/qrels.beir-v1.0.0-trec-news.test.txt runs/run.beir-v1.0.0-trec-news-splade_distil_cocodenser_medium.splade_distil_cocodenser_medium.topics.beir-v1.0.0-trec-news.test.splade_distil_cocodenser_medium.txt
-target/appassembler/bin/trec_eval -c -m recall.100 tools/topics-and-qrels/qrels.beir-v1.0.0-trec-news.test.txt runs/run.beir-v1.0.0-trec-news-splade_distil_cocodenser_medium.splade_distil_cocodenser_medium.topics.beir-v1.0.0-trec-news.test.splade_distil_cocodenser_medium.txt
-target/appassembler/bin/trec_eval -c -m recall.1000 tools/topics-and-qrels/qrels.beir-v1.0.0-trec-news.test.txt runs/run.beir-v1.0.0-trec-news-splade_distil_cocodenser_medium.splade_distil_cocodenser_medium.topics.beir-v1.0.0-trec-news.test.splade_distil_cocodenser_medium.txt
-```
-
-## Effectiveness
-
-With the above commands, you should be able to reproduce the following results:
-
-| **nDCG@10** | **SPLADE-distill CoCodenser Medium**|
-|:-------------------------------------------------------------------------------------------------------------|-----------|
-| BEIR (v1.0.0): TREC-NEWS | 0.3936 |
-| **R@100** | **SPLADE-distill CoCodenser Medium**|
-| BEIR (v1.0.0): TREC-NEWS | 0.4323 |
-| **R@1000** | **SPLADE-distill CoCodenser Medium**|
-| BEIR (v1.0.0): TREC-NEWS | 0.6977 |
-
-
-## Reproduction Log[*](../../docs/reproducibility.md)
-
-To add to this reproduction log, modify [this template](../../src/main/resources/docgen/templates/beir-v1.0.0-trec-news-splade-distil-cocodenser-medium.template) and run `bin/build.sh` to rebuild the documentation.
diff --git a/docs/regressions/regressions-beir-v1.0.0-webis-touche2020-splade-distil-cocodenser-medium.md b/docs/regressions/regressions-beir-v1.0.0-webis-touche2020-splade-distil-cocodenser-medium.md
deleted file mode 100644
index c9d1d50892..0000000000
--- a/docs/regressions/regressions-beir-v1.0.0-webis-touche2020-splade-distil-cocodenser-medium.md
+++ /dev/null
@@ -1,103 +0,0 @@
-# Anserini Regressions: BEIR (v1.0.0) — Webis-Touche2020
-
-**Model**: SPLADE-distil CoCodenser Medium
-
-This page describes regression experiments, integrated into Anserini's regression testing framework, using SPLADE-distil CoCodenser Medium on [BEIR (v1.0.0) — Webis-Touche2020](http://beir.ai/).
-SPLADE-distil CoCodenser Medium is an intermediate model version between [SPLADEv2](https://arxiv.org/abs/2109.10086) and [SPLADE++](https://arxiv.org/abs/2205.04733), where the model used distillation (as in SPLADEv2), but started with the CoCondenser pre-trained model.
-See the [official SPLADE repo](https://github.com/naver/splade) for more details; the model itself can be downloaded [here](http://download-de.europe.naverlabs.com/Splade_Release_Jan22/splade_distil_CoCodenser_medium.tar.gz).
-
-The exact configurations for these regressions are stored in [this YAML file](../../src/main/resources/regression/beir-v1.0.0-webis-touche2020-splade-distil-cocodenser-medium.yaml).
-Note that this page is automatically generated from [this template](../../src/main/resources/docgen/templates/beir-v1.0.0-webis-touche2020-splade-distil-cocodenser-medium.template) as part of Anserini's regression pipeline, so do not modify this page directly; modify the template instead and then run `bin/build.sh` to rebuild the documentation.
-
-From one of our Waterloo servers (e.g., `orca`), the following command will perform the complete regression, end to end:
-
-```
-python src/main/python/run_regression.py --index --verify --search \
- --regression beir-v1.0.0-webis-touche2020-splade-distil-cocodenser-medium
-```
-
-## Corpus
-
-We make available a version of the BEIR-v1.0.0 webis-touche2020 corpus that has already been processed with SPLADE-distil CoCodenser Medium, i.e., gone through document expansion and term reweighting.
-Thus, no neural inference is involved.
-For details on how to train SPLADE-distil CoCodenser Medium and perform inference, please see the [guide provided by Naver Labs Europe](https://github.com/naver/splade/tree/main/anserini_evaluation).
-
-Download the corpus and unpack into `collections/`:
-
-```
-wget https://rgw.cs.uwaterloo.ca/JIMMYLIN-bucket0/data/beir-v1.0.0-splade_distil_cocodenser_medium-webis-touche2020.tar -P collections/
-tar xvf collections/beir-v1.0.0-splade_distil_cocodenser_medium-webis-touche2020.tar -C collections/
-```
-
-To confirm, the tarball is 293 MB and has MD5 checksum `cb8486c7b1bf9b8ff7a14aedf9074c58`.
-
-With the corpus downloaded, the following command will perform the complete regression, end to end, on any machine:
-
-```
-python src/main/python/run_regression.py --index --verify --search \
- --regression beir-v1.0.0-webis-touche2020-splade-distil-cocodenser-medium \
- --corpus-path collections/beir-v1.0.0-splade_distil_cocodenser_medium-webis-touche2020
-```
-
-Alternatively, you can simply copy/paste from the commands below and obtain the same results.
-
-## Indexing
-
-Sample indexing command:
-
-```
-target/appassembler/bin/IndexCollection \
- -collection JsonVectorCollection \
- -input /path/to/beir-v1.0.0-webis-touche2020-splade_distil_cocodenser_medium \
- -generator DefaultLuceneDocumentGenerator \
- -index indexes/lucene-index.beir-v1.0.0-webis-touche2020-splade_distil_cocodenser_medium/ \
- -threads 16 -impact -pretokenized \
- >& logs/log.beir-v1.0.0-webis-touche2020-splade_distil_cocodenser_medium &
-```
-
-The path `/path/to/beir-v1.0.0-webis-touche2020-splade_distil_cocodenser_medium/` should point to the corpus downloaded above.
-
-The important indexing options to note here are `-impact -pretokenized`: the first tells Anserini not to encode BM25 doclengths into Lucene's norms (encoding doclengths is the default) and the second tells it not to apply any additional tokenization to the pre-encoded tokens.
-Upon completion, we should have an index with 382,545 documents.
-
-For additional details, see explanation of [common indexing options](../../docs/common-indexing-options.md).
-
-## Retrieval
-
-Topics and qrels are stored [here](https://github.com/castorini/anserini-tools/tree/master/topics-and-qrels), which is linked to the Anserini repo as a submodule.
-
-After indexing has completed, you should be able to perform retrieval as follows:
-
-```
-target/appassembler/bin/SearchCollection \
- -index indexes/lucene-index.beir-v1.0.0-webis-touche2020-splade_distil_cocodenser_medium/ \
- -topics tools/topics-and-qrels/topics.beir-v1.0.0-webis-touche2020.test.splade_distil_cocodenser_medium.tsv.gz \
- -topicReader TsvString \
- -output runs/run.beir-v1.0.0-webis-touche2020-splade_distil_cocodenser_medium.splade_distil_cocodenser_medium.topics.beir-v1.0.0-webis-touche2020.test.splade_distil_cocodenser_medium.txt \
- -impact -pretokenized -removeQuery -hits 1000 &
-```
-
-Evaluation can be performed using `trec_eval`:
-
-```
-target/appassembler/bin/trec_eval -c -m ndcg_cut.10 tools/topics-and-qrels/qrels.beir-v1.0.0-webis-touche2020.test.txt runs/run.beir-v1.0.0-webis-touche2020-splade_distil_cocodenser_medium.splade_distil_cocodenser_medium.topics.beir-v1.0.0-webis-touche2020.test.splade_distil_cocodenser_medium.txt
-target/appassembler/bin/trec_eval -c -m recall.100 tools/topics-and-qrels/qrels.beir-v1.0.0-webis-touche2020.test.txt runs/run.beir-v1.0.0-webis-touche2020-splade_distil_cocodenser_medium.splade_distil_cocodenser_medium.topics.beir-v1.0.0-webis-touche2020.test.splade_distil_cocodenser_medium.txt
-target/appassembler/bin/trec_eval -c -m recall.1000 tools/topics-and-qrels/qrels.beir-v1.0.0-webis-touche2020.test.txt runs/run.beir-v1.0.0-webis-touche2020-splade_distil_cocodenser_medium.splade_distil_cocodenser_medium.topics.beir-v1.0.0-webis-touche2020.test.splade_distil_cocodenser_medium.txt
-```
-
-## Effectiveness
-
-With the above commands, you should be able to reproduce the following results:
-
-| **nDCG@10** | **SPLADE-distill CoCodenser Medium**|
-|:-------------------------------------------------------------------------------------------------------------|-----------|
-| BEIR (v1.0.0): Webis-Touche2020 | 0.2435 |
-| **R@100** | **SPLADE-distill CoCodenser Medium**|
-| BEIR (v1.0.0): Webis-Touche2020 | 0.4723 |
-| **R@1000** | **SPLADE-distill CoCodenser Medium**|
-| BEIR (v1.0.0): Webis-Touche2020 | 0.8116 |
-
-
-## Reproduction Log[*](../../docs/reproducibility.md)
-
-To add to this reproduction log, modify [this template](../../src/main/resources/docgen/templates/beir-v1.0.0-webis-touche2020-splade-distil-cocodenser-medium.template) and run `bin/build.sh` to rebuild the documentation.
diff --git a/docs/regressions/regressions-dl19-passage-splade-distil-cocodenser-medium.md b/docs/regressions/regressions-dl19-passage-splade-distil-cocodenser-medium.md
deleted file mode 100644
index beb27055b3..0000000000
--- a/docs/regressions/regressions-dl19-passage-splade-distil-cocodenser-medium.md
+++ /dev/null
@@ -1,142 +0,0 @@
-# Anserini Regressions: TREC 2019 Deep Learning Track (Passage)
-
-**Model**: SPLADE-distil CoCodenser Medium
-
-This page describes regression experiments, integrated into Anserini's regression testing framework, using SPLADE-distil CoCodenser Medium on the [TREC 2019 Deep Learning Track passage ranking task](https://trec.nist.gov/data/deep2019.html).
-SPLADE-distil CoCodenser Medium is an intermediate model version between [SPLADEv2](https://arxiv.org/abs/2109.10086) and [SPLADE++](https://arxiv.org/abs/2205.04733), where the model used distillation (as in SPLADEv2), but started with the CoCondenser pre-trained model.
-See the [official SPLADE repo](https://github.com/naver/splade) for more details; the model itself can be downloaded [here](http://download-de.europe.naverlabs.com/Splade_Release_Jan22/splade_distil_CoCodenser_medium.tar.gz).
-
-Note that the NIST relevance judgments provide far more relevant passages per topic than the "sparse" judgments provided by Microsoft (the NIST judgments are sometimes called "dense" judgments to emphasize this contrast).
-For additional instructions on working with the MS MARCO passage collection, refer to [this page](../../docs/experiments-msmarco-passage.md).
-
-The exact configurations for these regressions are stored in [this YAML file](../../src/main/resources/regression/dl19-passage-splade-distil-cocodenser-medium.yaml).
-Note that this page is automatically generated from [this template](../../src/main/resources/docgen/templates/dl19-passage-splade-distil-cocodenser-medium.template) as part of Anserini's regression pipeline, so do not modify this page directly; modify the template instead and then run `bin/build.sh` to rebuild the documentation.
-
-From one of our Waterloo servers (e.g., `orca`), the following command will perform the complete regression, end to end:
-
-```bash
-python src/main/python/run_regression.py --index --verify --search --regression dl19-passage-splade-distil-cocodenser-medium
-```
-
-We make available a version of the MS MARCO Passage Corpus that has already been processed with SPLADE-distil CoCodenser Medium, i.e., performed model inference on every document and stored the output sparse vectors.
-Thus, no neural inference is involved.
-
-From any machine, the following command will download the corpus and perform the complete regression, end to end:
-
-```bash
-python src/main/python/run_regression.py --download --index --verify --search --regression dl19-passage-splade-distil-cocodenser-medium
-```
-
-The `run_regression.py` script automates the following steps, but if you want to perform each step manually, simply copy/paste from the commands below and you'll obtain the same regression results.
-
-## Corpus Download
-
-Download the corpus and unpack into `collections/`:
-
-```bash
-wget https://rgw.cs.uwaterloo.ca/JIMMYLIN-bucket0/data/msmarco-passage-splade_distil_cocodenser_medium.tar -P collections/
-tar xvf collections/msmarco-passage-splade_distil_cocodenser_medium.tar -C collections/
-```
-
-To confirm, `msmarco-passage-splade_distil_cocodenser_medium.tar` is 4.9 GB and has MD5 checksum `f77239a26d08856e6491a34062893b0c`.
-With the corpus downloaded, the following command will perform the remaining steps below:
-
-```bash
-python src/main/python/run_regression.py --index --verify --search --regression dl19-passage-splade-distil-cocodenser-medium \
- --corpus-path collections/msmarco-passage-splade_distil_cocodenser_medium
-```
-
-## Indexing
-
-Sample indexing command:
-
-```bash
-target/appassembler/bin/IndexCollection \
- -collection JsonVectorCollection \
- -input /path/to/msmarco-passage-splade_distil_cocodenser_medium \
- -generator DefaultLuceneDocumentGenerator \
- -index indexes/lucene-index.msmarco-passage-splade_distil_cocodenser_medium/ \
- -threads 16 -impact -pretokenized -storeDocvectors \
- >& logs/log.msmarco-passage-splade_distil_cocodenser_medium &
-```
-
-The path `/path/to/msmarco-passage-splade_distil_cocodenser_medium/` should point to the corpus downloaded above.
-
-The important indexing options to note here are `-impact -pretokenized`: the first tells Anserini not to encode BM25 doclengths into Lucene's norms (encoding doclengths is the default) and the second tells it not to apply any additional tokenization to the SPLADE-distil CoCodenser Medium tokens.
-Upon completion, we should have an index with 8,841,823 documents.
-
-For additional details, see explanation of [common indexing options](../../docs/common-indexing-options.md).
-
-## Retrieval
-
-Topics and qrels are stored [here](https://github.com/castorini/anserini-tools/tree/master/topics-and-qrels), which is linked to the Anserini repo as a submodule.
-The regression experiments here evaluate on the 43 topics for which NIST has provided judgments as part of the TREC 2019 Deep Learning Track.
-The original data can be found [here](https://trec.nist.gov/data/deep2019.html).
-
-After indexing has completed, you should be able to perform retrieval as follows:
-
-```bash
-target/appassembler/bin/SearchCollection \
- -index indexes/lucene-index.msmarco-passage-splade_distil_cocodenser_medium/ \
- -topics tools/topics-and-qrels/topics.dl19-passage.splade_distil_cocodenser_medium.tsv.gz \
- -topicReader TsvInt \
- -output runs/run.msmarco-passage-splade_distil_cocodenser_medium.splade_distil_cocodenser_medium.topics.dl19-passage.splade_distil_cocodenser_medium.txt \
- -impact -pretokenized &
-
-target/appassembler/bin/SearchCollection \
- -index indexes/lucene-index.msmarco-passage-splade_distil_cocodenser_medium/ \
- -topics tools/topics-and-qrels/topics.dl19-passage.splade_distil_cocodenser_medium.tsv.gz \
- -topicReader TsvInt \
- -output runs/run.msmarco-passage-splade_distil_cocodenser_medium.rm3.topics.dl19-passage.splade_distil_cocodenser_medium.txt \
- -impact -pretokenized -rm3 &
-
-target/appassembler/bin/SearchCollection \
- -index indexes/lucene-index.msmarco-passage-splade_distil_cocodenser_medium/ \
- -topics tools/topics-and-qrels/topics.dl19-passage.splade_distil_cocodenser_medium.tsv.gz \
- -topicReader TsvInt \
- -output runs/run.msmarco-passage-splade_distil_cocodenser_medium.rocchio.topics.dl19-passage.splade_distil_cocodenser_medium.txt \
- -impact -pretokenized -rocchio &
-```
-
-Evaluation can be performed using `trec_eval`:
-
-```bash
-target/appassembler/bin/trec_eval -m map -c -l 2 tools/topics-and-qrels/qrels.dl19-passage.txt runs/run.msmarco-passage-splade_distil_cocodenser_medium.splade_distil_cocodenser_medium.topics.dl19-passage.splade_distil_cocodenser_medium.txt
-target/appassembler/bin/trec_eval -m ndcg_cut.10 -c tools/topics-and-qrels/qrels.dl19-passage.txt runs/run.msmarco-passage-splade_distil_cocodenser_medium.splade_distil_cocodenser_medium.topics.dl19-passage.splade_distil_cocodenser_medium.txt
-target/appassembler/bin/trec_eval -m recall.100 -c -l 2 tools/topics-and-qrels/qrels.dl19-passage.txt runs/run.msmarco-passage-splade_distil_cocodenser_medium.splade_distil_cocodenser_medium.topics.dl19-passage.splade_distil_cocodenser_medium.txt
-target/appassembler/bin/trec_eval -m recall.1000 -c -l 2 tools/topics-and-qrels/qrels.dl19-passage.txt runs/run.msmarco-passage-splade_distil_cocodenser_medium.splade_distil_cocodenser_medium.topics.dl19-passage.splade_distil_cocodenser_medium.txt
-
-target/appassembler/bin/trec_eval -m map -c -l 2 tools/topics-and-qrels/qrels.dl19-passage.txt runs/run.msmarco-passage-splade_distil_cocodenser_medium.rm3.topics.dl19-passage.splade_distil_cocodenser_medium.txt
-target/appassembler/bin/trec_eval -m ndcg_cut.10 -c tools/topics-and-qrels/qrels.dl19-passage.txt runs/run.msmarco-passage-splade_distil_cocodenser_medium.rm3.topics.dl19-passage.splade_distil_cocodenser_medium.txt
-target/appassembler/bin/trec_eval -m recall.100 -c -l 2 tools/topics-and-qrels/qrels.dl19-passage.txt runs/run.msmarco-passage-splade_distil_cocodenser_medium.rm3.topics.dl19-passage.splade_distil_cocodenser_medium.txt
-target/appassembler/bin/trec_eval -m recall.1000 -c -l 2 tools/topics-and-qrels/qrels.dl19-passage.txt runs/run.msmarco-passage-splade_distil_cocodenser_medium.rm3.topics.dl19-passage.splade_distil_cocodenser_medium.txt
-
-target/appassembler/bin/trec_eval -m map -c -l 2 tools/topics-and-qrels/qrels.dl19-passage.txt runs/run.msmarco-passage-splade_distil_cocodenser_medium.rocchio.topics.dl19-passage.splade_distil_cocodenser_medium.txt
-target/appassembler/bin/trec_eval -m ndcg_cut.10 -c tools/topics-and-qrels/qrels.dl19-passage.txt runs/run.msmarco-passage-splade_distil_cocodenser_medium.rocchio.topics.dl19-passage.splade_distil_cocodenser_medium.txt
-target/appassembler/bin/trec_eval -m recall.100 -c -l 2 tools/topics-and-qrels/qrels.dl19-passage.txt runs/run.msmarco-passage-splade_distil_cocodenser_medium.rocchio.topics.dl19-passage.splade_distil_cocodenser_medium.txt
-target/appassembler/bin/trec_eval -m recall.1000 -c -l 2 tools/topics-and-qrels/qrels.dl19-passage.txt runs/run.msmarco-passage-splade_distil_cocodenser_medium.rocchio.topics.dl19-passage.splade_distil_cocodenser_medium.txt
-```
-
-## Effectiveness
-
-With the above commands, you should be able to reproduce the following results:
-
-| **AP@1000** | **SPLADE-distill CoCodenser Medium**| **+RM3** | **+Rocchio**|
-|:-------------------------------------------------------------------------------------------------------------|-----------|-----------|-----------|
-| [DL19 (Passage)](https://trec.nist.gov/data/deep2019.html) | 0.4970 | 0.5194 | 0.5224 |
-| **nDCG@10** | **SPLADE-distill CoCodenser Medium**| **+RM3** | **+Rocchio**|
-| [DL19 (Passage)](https://trec.nist.gov/data/deep2019.html) | 0.7425 | 0.7261 | 0.7316 |
-| **R@100** | **SPLADE-distill CoCodenser Medium**| **+RM3** | **+Rocchio**|
-| [DL19 (Passage)](https://trec.nist.gov/data/deep2019.html) | 0.6344 | 0.6485 | 0.6533 |
-| **R@1000** | **SPLADE-distill CoCodenser Medium**| **+RM3** | **+Rocchio**|
-| [DL19 (Passage)](https://trec.nist.gov/data/deep2019.html) | 0.8756 | 0.8736 | 0.8774 |
-
-Note that retrieval metrics are computed to depth 1000 hits per query (as opposed to 100 hits per query for document ranking).
-Also, for computing nDCG, remember that we keep qrels of _all_ relevance grades, whereas for other metrics (e.g., AP), relevance grade 1 is considered not relevant (i.e., use the `-l 2` option in `trec_eval`).
-The experimental results reported here are directly comparable to the results reported in the [track overview paper](https://arxiv.org/abs/2003.07820).
-
-## Reproduction Log[*](../../docs/reproducibility.md)
-
-To add to this reproduction log, modify [this template](../../src/main/resources/docgen/templates/dl19-passage-splade-distil-cocodenser-medium.template) and run `bin/build.sh` to rebuild the documentation.
-
-+ Results reproduced by [@lintool](https://github.com/lintool) on 2022-06-14 (commit [`dc07344`](https://github.com/castorini/anserini/commit/dc073447c8a0c07b53d979c49bf1e2e018200508))
diff --git a/docs/regressions/regressions-dl20-passage-splade-distil-cocodenser-medium.md b/docs/regressions/regressions-dl20-passage-splade-distil-cocodenser-medium.md
deleted file mode 100644
index 28703f8496..0000000000
--- a/docs/regressions/regressions-dl20-passage-splade-distil-cocodenser-medium.md
+++ /dev/null
@@ -1,142 +0,0 @@
-# Anserini Regressions: TREC 2020 Deep Learning Track (Passage)
-
-**Model**: SPLADE-distil CoCodenser Medium
-
-This page describes regression experiments, integrated into Anserini's regression testing framework, using SPLADE-distil CoCodenser Medium on the [TREC 2020 Deep Learning Track passage ranking task](https://trec.nist.gov/data/deep2019.html).
-SPLADE-distil CoCodenser Medium is an intermediate model version between [SPLADEv2](https://arxiv.org/abs/2109.10086) and [SPLADE++](https://arxiv.org/abs/2205.04733), where the model used distillation (as in SPLADEv2), but started with the CoCondenser pre-trained model.
-See the [official SPLADE repo](https://github.com/naver/splade) for more details; the model itself can be downloaded [here](http://download-de.europe.naverlabs.com/Splade_Release_Jan22/splade_distil_CoCodenser_medium.tar.gz).
-
-Note that the NIST relevance judgments provide far more relevant passages per topic than the "sparse" judgments provided by Microsoft (the NIST judgments are sometimes called "dense" judgments to emphasize this contrast).
-For additional instructions on working with the MS MARCO passage collection, refer to [this page](../../docs/experiments-msmarco-passage.md).
-
-The exact configurations for these regressions are stored in [this YAML file](../../src/main/resources/regression/dl20-passage-splade-distil-cocodenser-medium.yaml).
-Note that this page is automatically generated from [this template](../../src/main/resources/docgen/templates/dl20-passage-splade-distil-cocodenser-medium.template) as part of Anserini's regression pipeline, so do not modify this page directly; modify the template instead and then run `bin/build.sh` to rebuild the documentation.
-
-From one of our Waterloo servers (e.g., `orca`), the following command will perform the complete regression, end to end:
-
-```bash
-python src/main/python/run_regression.py --index --verify --search --regression dl20-passage-splade-distil-cocodenser-medium
-```
-
-We make available a version of the MS MARCO Passage Corpus that has already been processed with SPLADE-distil CoCodenser Medium, i.e., performed model inference on every document and stored the output sparse vectors.
-Thus, no neural inference is involved.
-
-From any machine, the following command will download the corpus and perform the complete regression, end to end:
-
-```bash
-python src/main/python/run_regression.py --download --index --verify --search --regression dl20-passage-splade-distil-cocodenser-medium
-```
-
-The `run_regression.py` script automates the following steps, but if you want to perform each step manually, simply copy/paste from the commands below and you'll obtain the same regression results.
-
-## Corpus Download
-
-Download the corpus and unpack into `collections/`:
-
-```bash
-wget https://rgw.cs.uwaterloo.ca/JIMMYLIN-bucket0/data/msmarco-passage-splade_distil_cocodenser_medium.tar -P collections/
-tar xvf collections/msmarco-passage-splade_distil_cocodenser_medium.tar -C collections/
-```
-
-To confirm, `msmarco-passage-splade_distil_cocodenser_medium.tar` is 4.9 GB and has MD5 checksum `f77239a26d08856e6491a34062893b0c`.
-With the corpus downloaded, the following command will perform the remaining steps below:
-
-```bash
-python src/main/python/run_regression.py --index --verify --search --regression dl20-passage-splade-distil-cocodenser-medium \
- --corpus-path collections/msmarco-passage-splade_distil_cocodenser_medium
-```
-
-## Indexing
-
-Sample indexing command:
-
-```bash
-target/appassembler/bin/IndexCollection \
- -collection JsonVectorCollection \
- -input /path/to/msmarco-passage-splade_distil_cocodenser_medium \
- -generator DefaultLuceneDocumentGenerator \
- -index indexes/lucene-index.msmarco-passage-splade_distil_cocodenser_medium/ \
- -threads 16 -impact -pretokenized -storeDocvectors \
- >& logs/log.msmarco-passage-splade_distil_cocodenser_medium &
-```
-
-The path `/path/to/msmarco-passage-splade_distil_cocodenser_medium/` should point to the corpus downloaded above.
-
-The important indexing options to note here are `-impact -pretokenized`: the first tells Anserini not to encode BM25 doclengths into Lucene's norms (encoding doclengths is the default) and the second tells it not to apply any additional tokenization to the SPLADE-distil CoCodenser Medium tokens.
-Upon completion, we should have an index with 8,841,823 documents.
-
-For additional details, see explanation of [common indexing options](../../docs/common-indexing-options.md).
-
-## Retrieval
-
-Topics and qrels are stored [here](https://github.com/castorini/anserini-tools/tree/master/topics-and-qrels), which is linked to the Anserini repo as a submodule.
-The regression experiments here evaluate on the 54 topics for which NIST has provided judgments as part of the TREC 2020 Deep Learning Track.
-The original data can be found [here](https://trec.nist.gov/data/deep2020.html).
-
-After indexing has completed, you should be able to perform retrieval as follows:
-
-```bash
-target/appassembler/bin/SearchCollection \
- -index indexes/lucene-index.msmarco-passage-splade_distil_cocodenser_medium/ \
- -topics tools/topics-and-qrels/topics.dl20.splade_distil_cocodenser_medium.tsv.gz \
- -topicReader TsvInt \
- -output runs/run.msmarco-passage-splade_distil_cocodenser_medium.splade_distil_cocodenser_medium.topics.dl20.splade_distil_cocodenser_medium.txt \
- -impact -pretokenized &
-
-target/appassembler/bin/SearchCollection \
- -index indexes/lucene-index.msmarco-passage-splade_distil_cocodenser_medium/ \
- -topics tools/topics-and-qrels/topics.dl20.splade_distil_cocodenser_medium.tsv.gz \
- -topicReader TsvInt \
- -output runs/run.msmarco-passage-splade_distil_cocodenser_medium.rm3.topics.dl20.splade_distil_cocodenser_medium.txt \
- -impact -pretokenized -rm3 &
-
-target/appassembler/bin/SearchCollection \
- -index indexes/lucene-index.msmarco-passage-splade_distil_cocodenser_medium/ \
- -topics tools/topics-and-qrels/topics.dl20.splade_distil_cocodenser_medium.tsv.gz \
- -topicReader TsvInt \
- -output runs/run.msmarco-passage-splade_distil_cocodenser_medium.rocchio.topics.dl20.splade_distil_cocodenser_medium.txt \
- -impact -pretokenized -rocchio &
-```
-
-Evaluation can be performed using `trec_eval`:
-
-```bash
-target/appassembler/bin/trec_eval -m map -c -l 2 tools/topics-and-qrels/qrels.dl20-passage.txt runs/run.msmarco-passage-splade_distil_cocodenser_medium.splade_distil_cocodenser_medium.topics.dl20.splade_distil_cocodenser_medium.txt
-target/appassembler/bin/trec_eval -m ndcg_cut.10 -c tools/topics-and-qrels/qrels.dl20-passage.txt runs/run.msmarco-passage-splade_distil_cocodenser_medium.splade_distil_cocodenser_medium.topics.dl20.splade_distil_cocodenser_medium.txt
-target/appassembler/bin/trec_eval -m recall.100 -c -l 2 tools/topics-and-qrels/qrels.dl20-passage.txt runs/run.msmarco-passage-splade_distil_cocodenser_medium.splade_distil_cocodenser_medium.topics.dl20.splade_distil_cocodenser_medium.txt
-target/appassembler/bin/trec_eval -m recall.1000 -c -l 2 tools/topics-and-qrels/qrels.dl20-passage.txt runs/run.msmarco-passage-splade_distil_cocodenser_medium.splade_distil_cocodenser_medium.topics.dl20.splade_distil_cocodenser_medium.txt
-
-target/appassembler/bin/trec_eval -m map -c -l 2 tools/topics-and-qrels/qrels.dl20-passage.txt runs/run.msmarco-passage-splade_distil_cocodenser_medium.rm3.topics.dl20.splade_distil_cocodenser_medium.txt
-target/appassembler/bin/trec_eval -m ndcg_cut.10 -c tools/topics-and-qrels/qrels.dl20-passage.txt runs/run.msmarco-passage-splade_distil_cocodenser_medium.rm3.topics.dl20.splade_distil_cocodenser_medium.txt
-target/appassembler/bin/trec_eval -m recall.100 -c -l 2 tools/topics-and-qrels/qrels.dl20-passage.txt runs/run.msmarco-passage-splade_distil_cocodenser_medium.rm3.topics.dl20.splade_distil_cocodenser_medium.txt
-target/appassembler/bin/trec_eval -m recall.1000 -c -l 2 tools/topics-and-qrels/qrels.dl20-passage.txt runs/run.msmarco-passage-splade_distil_cocodenser_medium.rm3.topics.dl20.splade_distil_cocodenser_medium.txt
-
-target/appassembler/bin/trec_eval -m map -c -l 2 tools/topics-and-qrels/qrels.dl20-passage.txt runs/run.msmarco-passage-splade_distil_cocodenser_medium.rocchio.topics.dl20.splade_distil_cocodenser_medium.txt
-target/appassembler/bin/trec_eval -m ndcg_cut.10 -c tools/topics-and-qrels/qrels.dl20-passage.txt runs/run.msmarco-passage-splade_distil_cocodenser_medium.rocchio.topics.dl20.splade_distil_cocodenser_medium.txt
-target/appassembler/bin/trec_eval -m recall.100 -c -l 2 tools/topics-and-qrels/qrels.dl20-passage.txt runs/run.msmarco-passage-splade_distil_cocodenser_medium.rocchio.topics.dl20.splade_distil_cocodenser_medium.txt
-target/appassembler/bin/trec_eval -m recall.1000 -c -l 2 tools/topics-and-qrels/qrels.dl20-passage.txt runs/run.msmarco-passage-splade_distil_cocodenser_medium.rocchio.topics.dl20.splade_distil_cocodenser_medium.txt
-```
-
-## Effectiveness
-
-With the above commands, you should be able to reproduce the following results:
-
-| **AP@1000** | **SPLADE-distil CoCodenser Medium**| **+RM3** | **+Rocchio**|
-|:-------------------------------------------------------------------------------------------------------------|-----------|-----------|-----------|
-| [DL20 (Passage)](https://trec.nist.gov/data/deep2020.html) | 0.5019 | 0.5155 | 0.5133 |
-| **nDCG@10** | **SPLADE-distil CoCodenser Medium**| **+RM3** | **+Rocchio**|
-| [DL20 (Passage)](https://trec.nist.gov/data/deep2020.html) | 0.7179 | 0.7132 | 0.7033 |
-| **R@100** | **SPLADE-distil CoCodenser Medium**| **+RM3** | **+Rocchio**|
-| [DL20 (Passage)](https://trec.nist.gov/data/deep2020.html) | 0.7619 | 0.7553 | 0.7575 |
-| **R@1000** | **SPLADE-distil CoCodenser Medium**| **+RM3** | **+Rocchio**|
-| [DL20 (Passage)](https://trec.nist.gov/data/deep2020.html) | 0.8901 | 0.9080 | 0.8937 |
-
-Note that retrieval metrics are computed to depth 1000 hits per query (as opposed to 100 hits per query for document ranking).
-Also, for computing nDCG, remember that we keep qrels of _all_ relevance grades, whereas for other metrics (e.g., AP), relevance grade 1 is considered not relevant (i.e., use the `-l 2` option in `trec_eval`).
-The experimental results reported here are directly comparable to the results reported in the [track overview paper](https://arxiv.org/abs/2003.07820).
-
-## Reproduction Log[*](../../docs/reproducibility.md)
-
-To add to this reproduction log, modify [this template](../../src/main/resources/docgen/templates/dl20-passage-splade-distil-cocodenser-medium.template) and run `bin/build.sh` to rebuild the documentation.
-
-+ Results reproduced by [@lintool](https://github.com/lintool) on 2022-06-14 (commit [`dc07344`](https://github.com/castorini/anserini/commit/dc073447c8a0c07b53d979c49bf1e2e018200508))
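The `-impact -pretokenized` retrieval above scores documents by summing stored quantized term weights rather than computing BM25 at query time. As a toy sketch only (not Anserini's actual Java implementation), impact scoring over pre-encoded sparse vectors might look like this:

```python
from collections import Counter

def impact_score(query_vec, doc_vec):
    # Score = sum over shared terms of (query weight * stored doc weight).
    # With impact indexes the query weight is typically a term frequency,
    # so repeated query tokens contribute proportionally more.
    return sum(w * doc_vec.get(t, 0) for t, w in query_vec.items())

# Pre-encoded (already tokenized and weighted) document vector, as one
# would find in a JsonVectorCollection document.
doc = {"covid": 97, "vaccine": 54, "flu": 12}
query = Counter(["covid", "covid", "vaccine"])  # tf-style query weights

print(impact_score(query, doc))  # 2*97 + 1*54 = 248
```

The terms and weights here are hypothetical; in the regression, the SPLADE model supplies the vocabulary and weights at encoding time.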
diff --git a/docs/regressions/regressions-msmarco-passage-splade-distil-cocodenser-medium.md b/docs/regressions/regressions-msmarco-passage-splade-distil-cocodenser-medium.md
deleted file mode 100644
index a57a968a27..0000000000
--- a/docs/regressions/regressions-msmarco-passage-splade-distil-cocodenser-medium.md
+++ /dev/null
@@ -1,110 +0,0 @@
-# Anserini Regressions: MS MARCO Passage Ranking
-
-**Model**: SPLADE-distil CoCodenser Medium
-
-This page describes regression experiments, integrated into Anserini's regression testing framework, using the SPLADE-distil CoCodenser Medium model on the [MS MARCO passage ranking task](https://github.com/microsoft/MSMARCO-Passage-Ranking).
-SPLADE-distil CoCodenser Medium is an intermediate model version between [SPLADEv2](https://arxiv.org/abs/2109.10086) and [SPLADE++](https://arxiv.org/abs/2205.04733), where the model used distillation (as in SPLADEv2), but started with the CoCondenser pre-trained model.
-See the [official SPLADE repo](https://github.com/naver/splade) for more details; the model itself can be downloaded [here](http://download-de.europe.naverlabs.com/Splade_Release_Jan22/splade_distil_CoCodenser_medium.tar.gz).
-
-The exact configurations for these regressions are stored in [this YAML file](../../src/main/resources/regression/msmarco-passage-splade-distil-cocodenser-medium.yaml).
-Note that this page is automatically generated from [this template](../../src/main/resources/docgen/templates/msmarco-passage-splade-distil-cocodenser-medium.template) as part of Anserini's regression pipeline, so do not modify this page directly; modify the template instead and then run `bin/build.sh` to rebuild the documentation.
-
-From one of our Waterloo servers (e.g., `orca`), the following command will perform the complete regression, end to end:
-
-```bash
-python src/main/python/run_regression.py --index --verify --search --regression msmarco-passage-splade-distil-cocodenser-medium
-```
-
-We make available a version of the MS MARCO Passage Corpus that has already been processed with SPLADE-distil CoCodenser Medium, i.e., we have already performed model inference on every document and stored the output sparse vectors.
-Thus, no neural inference is involved.
-
-From any machine, the following command will download the corpus and perform the complete regression, end to end:
-
-```bash
-python src/main/python/run_regression.py --download --index --verify --search --regression msmarco-passage-splade-distil-cocodenser-medium
-```
-
-The `run_regression.py` script automates the following steps, but if you want to perform each step manually, simply copy/paste from the commands below and you'll obtain the same regression results.
-
-## Corpus Download
-
-Download the corpus and unpack into `collections/`:
-
-```bash
-wget https://rgw.cs.uwaterloo.ca/JIMMYLIN-bucket0/data/msmarco-passage-splade_distil_cocodenser_medium.tar -P collections/
-tar xvf collections/msmarco-passage-splade_distil_cocodenser_medium.tar -C collections/
-```
-
-To confirm, `msmarco-passage-splade_distil_cocodenser_medium.tar` is 4.9 GB and has MD5 checksum `f77239a26d08856e6491a34062893b0c`.
-With the corpus downloaded, the following command will perform the remaining steps below:
-
-```bash
-python src/main/python/run_regression.py --index --verify --search --regression msmarco-passage-splade-distil-cocodenser-medium \
- --corpus-path collections/msmarco-passage-splade_distil_cocodenser_medium
-```
-
-## Indexing
-
-Sample indexing command:
-
-```bash
-target/appassembler/bin/IndexCollection \
- -collection JsonVectorCollection \
- -input /path/to/msmarco-passage-splade_distil_cocodenser_medium \
- -generator DefaultLuceneDocumentGenerator \
- -index indexes/lucene-index.msmarco-passage-splade_distil_cocodenser_medium/ \
- -threads 16 -impact -pretokenized -storeDocvectors \
- >& logs/log.msmarco-passage-splade_distil_cocodenser_medium &
-```
-
-The path `/path/to/msmarco-passage-splade_distil_cocodenser_medium/` should point to the corpus downloaded above.
-
-The important indexing options to note here are `-impact -pretokenized`: the first tells Anserini not to encode BM25 doc lengths into Lucene's norms (which is the default) and the second option says not to apply any additional tokenization on the pre-encoded tokens.
-Upon completion, we should have an index with 8,841,823 documents.
-
-For additional details, see explanation of [common indexing options](../../docs/common-indexing-options.md).
-
-## Retrieval
-
-Topics and qrels are stored [here](https://github.com/castorini/anserini-tools/tree/master/topics-and-qrels), which is linked to the Anserini repo as a submodule.
-The regression experiments here evaluate on the 6980 dev set questions; see [this page](../../docs/experiments-msmarco-passage.md) for more details.
-
-After indexing has completed, you should be able to perform retrieval as follows:
-
-```bash
-target/appassembler/bin/SearchCollection \
- -index indexes/lucene-index.msmarco-passage-splade_distil_cocodenser_medium/ \
- -topics tools/topics-and-qrels/topics.msmarco-passage.dev-subset.splade_distil_cocodenser_medium.tsv.gz \
- -topicReader TsvInt \
- -output runs/run.msmarco-passage-splade_distil_cocodenser_medium.splade_distil_cocodenser_medium.topics.msmarco-passage.dev-subset.splade_distil_cocodenser_medium.txt \
- -impact -pretokenized &
-```
-
-Evaluation can be performed using `trec_eval`:
-
-```bash
-target/appassembler/bin/trec_eval -c -m map tools/topics-and-qrels/qrels.msmarco-passage.dev-subset.txt runs/run.msmarco-passage-splade_distil_cocodenser_medium.splade_distil_cocodenser_medium.topics.msmarco-passage.dev-subset.splade_distil_cocodenser_medium.txt
-target/appassembler/bin/trec_eval -c -M 10 -m recip_rank tools/topics-and-qrels/qrels.msmarco-passage.dev-subset.txt runs/run.msmarco-passage-splade_distil_cocodenser_medium.splade_distil_cocodenser_medium.topics.msmarco-passage.dev-subset.splade_distil_cocodenser_medium.txt
-target/appassembler/bin/trec_eval -c -m recall.100 tools/topics-and-qrels/qrels.msmarco-passage.dev-subset.txt runs/run.msmarco-passage-splade_distil_cocodenser_medium.splade_distil_cocodenser_medium.topics.msmarco-passage.dev-subset.splade_distil_cocodenser_medium.txt
-target/appassembler/bin/trec_eval -c -m recall.1000 tools/topics-and-qrels/qrels.msmarco-passage.dev-subset.txt runs/run.msmarco-passage-splade_distil_cocodenser_medium.splade_distil_cocodenser_medium.topics.msmarco-passage.dev-subset.splade_distil_cocodenser_medium.txt
-```
-
-## Effectiveness
-
-With the above commands, you should be able to reproduce the following results:
-
-| **AP@1000** | **SPLADE-distil CoCodenser Medium**|
-|:-------------------------------------------------------------------------------------------------------------|-----------|
-| [MS MARCO Passage: Dev](https://github.com/microsoft/MSMARCO-Passage-Ranking) | 0.3943 |
-| **RR@10** | **SPLADE-distil CoCodenser Medium**|
-| [MS MARCO Passage: Dev](https://github.com/microsoft/MSMARCO-Passage-Ranking) | 0.3892 |
-| **R@100** | **SPLADE-distil CoCodenser Medium**|
-| [MS MARCO Passage: Dev](https://github.com/microsoft/MSMARCO-Passage-Ranking) | 0.9111 |
-| **R@1000** | **SPLADE-distil CoCodenser Medium**|
-| [MS MARCO Passage: Dev](https://github.com/microsoft/MSMARCO-Passage-Ranking) | 0.9817 |
-
-## Reproduction Log[*](../../docs/reproducibility.md)
-
-To add to this reproduction log, modify [this template](../../src/main/resources/docgen/templates/msmarco-passage-splade-distil-cocodenser-medium.template) and run `bin/build.sh` to rebuild the documentation.
-
-+ Results reproduced by [@lintool](https://github.com/lintool) on 2022-06-14 (commit [`dc07344`](https://github.com/castorini/anserini/commit/dc073447c8a0c07b53d979c49bf1e2e018200508))
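The `trec_eval` invocations in the deleted page compute metrics such as `recip_rank` with `-M 10` (RR@10). As an illustration only, with hypothetical query and document IDs, reciprocal rank at cutoff 10 can be sketched as:

```python
def rr_at_k(ranked_docids, relevant, k=10):
    # Reciprocal rank of the first relevant document within the top-k
    # results; 0 if none is found (what trec_eval -M 10 -m recip_rank reports).
    for rank, docid in enumerate(ranked_docids[:k], start=1):
        if docid in relevant:
            return 1.0 / rank
    return 0.0

# Hypothetical run and qrels for two queries.
run = {
    "q1": ["d3", "d7", "d1"],   # first relevant hit at rank 2
    "q2": ["d9", "d4", "d5"],   # no relevant hit in the top 10
}
qrels = {"q1": {"d7"}, "q2": {"d2"}}

mean_rr = sum(rr_at_k(run[q], qrels[q]) for q in run) / len(run)
print(mean_rr)  # (0.5 + 0.0) / 2 = 0.25
```

In practice, the run files produced by `SearchCollection` are in the standard six-column TREC format and should be scored with `trec_eval` itself, as the commands above show.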
diff --git a/src/main/python/regressions-batch03.txt b/src/main/python/regressions-batch03.txt
index 41c0ab2e12..ac8f3f9e3f 100644
--- a/src/main/python/regressions-batch03.txt
+++ b/src/main/python/regressions-batch03.txt
@@ -22,7 +22,6 @@ python src/main/python/run_regression.py --index --verify --search --regression
python src/main/python/run_regression.py --index --verify --search --regression msmarco-passage-unicoil-noexp > logs/log.msmarco-passage-unicoil-noexp 2>&1
python src/main/python/run_regression.py --index --verify --search --regression msmarco-passage-unicoil-tilde-expansion > logs/log.msmarco-passage-unicoil-tilde-expansion 2>&1
python src/main/python/run_regression.py --index --verify --search --regression msmarco-passage-distill-splade-max > logs/log.msmarco-passage-distill-splade-max 2>&1
-python src/main/python/run_regression.py --index --verify --search --regression msmarco-passage-splade-distil-cocodenser-medium > logs/log.msmarco-passage-splade-distil-cocodenser-medium 2>&1
# HNSW search-only
python src/main/python/run_regression.py --search --regression msmarco-passage-cos-dpr-distil-hnsw-onnx > logs/log.msmarco-passage-cos-dpr-distil-hnsw-onnx 2>&1
@@ -148,7 +147,6 @@ python src/main/python/run_regression.py --search --regression dl19-passage-open
python src/main/python/run_regression.py --search --regression dl19-passage-unicoil > logs/log.dl19-passage-unicoil 2>&1
python src/main/python/run_regression.py --search --regression dl19-passage-unicoil-noexp > logs/log.dl19-passage-unicoil-noexp 2>&1
-python src/main/python/run_regression.py --search --regression dl19-passage-splade-distil-cocodenser-medium > logs/log.dl19-passage-splade-distil-cocodenser-medium 2>&1
python src/main/python/run_regression.py --search --regression dl19-passage-splade-pp-ed > logs/log.dl19-passage-splade-pp-ed 2>&1
python src/main/python/run_regression.py --search --regression dl19-passage-splade-pp-sd > logs/log.dl19-passage-splade-pp-sd 2>&1
@@ -185,7 +183,6 @@ python src/main/python/run_regression.py --search --regression dl20-passage-open
python src/main/python/run_regression.py --search --regression dl20-passage-unicoil > logs/log.dl20-passage-unicoil 2>&1
python src/main/python/run_regression.py --search --regression dl20-passage-unicoil-noexp > logs/log.dl20-passage-unicoil-noexp 2>&1
-python src/main/python/run_regression.py --search --regression dl20-passage-splade-distil-cocodenser-medium > logs/log.dl20-passage-splade-distil-cocodenser-medium 2>&1
python src/main/python/run_regression.py --search --regression dl20-passage-splade-pp-ed > logs/log.dl20-passage-splade-pp-ed 2>&1
python src/main/python/run_regression.py --search --regression dl20-passage-splade-pp-sd > logs/log.dl20-passage-splade-pp-sd 2>&1
diff --git a/src/main/python/regressions-batch04.txt b/src/main/python/regressions-batch04.txt
index fda36474e7..b319c8de39 100644
--- a/src/main/python/regressions-batch04.txt
+++ b/src/main/python/regressions-batch04.txt
@@ -105,36 +105,6 @@ python src/main/python/run_regression.py --index --verify --search --regression
python src/main/python/run_regression.py --index --verify --search --regression cw12b13 > logs/log.cw12b13 2>&1
# BEIR
-python src/main/python/run_regression.py --index --verify --search --regression beir-v1.0.0-trec-covid-splade-distil-cocodenser-medium > logs/log.beir-v1.0.0-trec-covid-splade-distil-cocodenser-medium 2>&1
-python src/main/python/run_regression.py --index --verify --search --regression beir-v1.0.0-bioasq-splade-distil-cocodenser-medium > logs/log.beir-v1.0.0-bioasq-splade-distil-cocodenser-medium 2>&1
-python src/main/python/run_regression.py --index --verify --search --regression beir-v1.0.0-nfcorpus-splade-distil-cocodenser-medium > logs/log.beir-v1.0.0-nfcorpus-splade-distil-cocodenser-medium 2>&1
-python src/main/python/run_regression.py --index --verify --search --regression beir-v1.0.0-nq-splade-distil-cocodenser-medium > logs/log.beir-v1.0.0-nq-splade-distil-cocodenser-medium 2>&1
-python src/main/python/run_regression.py --index --verify --search --regression beir-v1.0.0-hotpotqa-splade-distil-cocodenser-medium > logs/log.beir-v1.0.0-hotpotqa-splade-distil-cocodenser-medium 2>&1
-python src/main/python/run_regression.py --index --verify --search --regression beir-v1.0.0-fiqa-splade-distil-cocodenser-medium > logs/log.beir-v1.0.0-fiqa-splade-distil-cocodenser-medium 2>&1
-python src/main/python/run_regression.py --index --verify --search --regression beir-v1.0.0-signal1m-splade-distil-cocodenser-medium > logs/log.beir-v1.0.0-signal1m-splade-distil-cocodenser-medium 2>&1
-python src/main/python/run_regression.py --index --verify --search --regression beir-v1.0.0-trec-news-splade-distil-cocodenser-medium > logs/log.beir-v1.0.0-trec-news-splade-distil-cocodenser-medium 2>&1
-python src/main/python/run_regression.py --index --verify --search --regression beir-v1.0.0-robust04-splade-distil-cocodenser-medium > logs/log.beir-v1.0.0-robust04-splade-distil-cocodenser-medium 2>&1
-python src/main/python/run_regression.py --index --verify --search --regression beir-v1.0.0-arguana-splade-distil-cocodenser-medium > logs/log.beir-v1.0.0-arguana-splade-distil-cocodenser-medium 2>&1
-python src/main/python/run_regression.py --index --verify --search --regression beir-v1.0.0-webis-touche2020-splade-distil-cocodenser-medium > logs/log.beir-v1.0.0-webis-touche2020-splade-distil-cocodenser-medium 2>&1
-python src/main/python/run_regression.py --index --verify --search --regression beir-v1.0.0-cqadupstack-android-splade-distil-cocodenser-medium > logs/log.beir-v1.0.0-cqadupstack-android-splade-distil-cocodenser-medium 2>&1
-python src/main/python/run_regression.py --index --verify --search --regression beir-v1.0.0-cqadupstack-english-splade-distil-cocodenser-medium > logs/log.beir-v1.0.0-cqadupstack-english-splade-distil-cocodenser-medium 2>&1
-python src/main/python/run_regression.py --index --verify --search --regression beir-v1.0.0-cqadupstack-gaming-splade-distil-cocodenser-medium > logs/log.beir-v1.0.0-cqadupstack-gaming-splade-distil-cocodenser-medium 2>&1
-python src/main/python/run_regression.py --index --verify --search --regression beir-v1.0.0-cqadupstack-gis-splade-distil-cocodenser-medium > logs/log.beir-v1.0.0-cqadupstack-gis-splade-distil-cocodenser-medium 2>&1
-python src/main/python/run_regression.py --index --verify --search --regression beir-v1.0.0-cqadupstack-mathematica-splade-distil-cocodenser-medium > logs/log.beir-v1.0.0-cqadupstack-mathematica-splade-distil-cocodenser-medium 2>&1
-python src/main/python/run_regression.py --index --verify --search --regression beir-v1.0.0-cqadupstack-physics-splade-distil-cocodenser-medium > logs/log.beir-v1.0.0-cqadupstack-physics-splade-distil-cocodenser-medium 2>&1
-python src/main/python/run_regression.py --index --verify --search --regression beir-v1.0.0-cqadupstack-programmers-splade-distil-cocodenser-medium > logs/log.beir-v1.0.0-cqadupstack-programmers-splade-distil-cocodenser-medium 2>&1
-python src/main/python/run_regression.py --index --verify --search --regression beir-v1.0.0-cqadupstack-stats-splade-distil-cocodenser-medium > logs/log.beir-v1.0.0-cqadupstack-stats-splade-distil-cocodenser-medium 2>&1
-python src/main/python/run_regression.py --index --verify --search --regression beir-v1.0.0-cqadupstack-tex-splade-distil-cocodenser-medium > logs/log.beir-v1.0.0-cqadupstack-tex-splade-distil-cocodenser-medium 2>&1
-python src/main/python/run_regression.py --index --verify --search --regression beir-v1.0.0-cqadupstack-unix-splade-distil-cocodenser-medium > logs/log.beir-v1.0.0-cqadupstack-unix-splade-distil-cocodenser-medium 2>&1
-python src/main/python/run_regression.py --index --verify --search --regression beir-v1.0.0-cqadupstack-webmasters-splade-distil-cocodenser-medium > logs/log.beir-v1.0.0-cqadupstack-webmasters-splade-distil-cocodenser-medium 2>&1
-python src/main/python/run_regression.py --index --verify --search --regression beir-v1.0.0-cqadupstack-wordpress-splade-distil-cocodenser-medium > logs/log.beir-v1.0.0-cqadupstack-wordpress-splade-distil-cocodenser-medium 2>&1
-python src/main/python/run_regression.py --index --verify --search --regression beir-v1.0.0-quora-splade-distil-cocodenser-medium > logs/log.beir-v1.0.0-quora-splade-distil-cocodenser-medium 2>&1
-python src/main/python/run_regression.py --index --verify --search --regression beir-v1.0.0-dbpedia-entity-splade-distil-cocodenser-medium > logs/log.beir-v1.0.0-dbpedia-entity-splade-distil-cocodenser-medium 2>&1
-python src/main/python/run_regression.py --index --verify --search --regression beir-v1.0.0-scidocs-splade-distil-cocodenser-medium > logs/log.beir-v1.0.0-scidocs-splade-distil-cocodenser-medium 2>&1
-python src/main/python/run_regression.py --index --verify --search --regression beir-v1.0.0-fever-splade-distil-cocodenser-medium > logs/log.beir-v1.0.0-fever-splade-distil-cocodenser-medium 2>&1
-python src/main/python/run_regression.py --index --verify --search --regression beir-v1.0.0-climate-fever-splade-distil-cocodenser-medium > logs/log.beir-v1.0.0-climate-fever-splade-distil-cocodenser-medium 2>&1
-python src/main/python/run_regression.py --index --verify --search --regression beir-v1.0.0-scifact-splade-distil-cocodenser-medium > logs/log.beir-v1.0.0-scifact-splade-distil-cocodenser-medium 2>&1
-
python src/main/python/run_regression.py --index --verify --search --regression beir-v1.0.0-trec-covid-unicoil-noexp > logs/log.beir-v1.0.0-trec-covid-unicoil-noexp 2>&1
python src/main/python/run_regression.py --index --verify --search --regression beir-v1.0.0-bioasq-unicoil-noexp > logs/log.beir-v1.0.0-bioasq-unicoil-noexp 2>&1
python src/main/python/run_regression.py --index --verify --search --regression beir-v1.0.0-nfcorpus-unicoil-noexp > logs/log.beir-v1.0.0-nfcorpus-unicoil-noexp 2>&1
diff --git a/src/main/resources/docgen/templates/beir-v1.0.0-arguana-splade-distil-cocodenser-medium.template b/src/main/resources/docgen/templates/beir-v1.0.0-arguana-splade-distil-cocodenser-medium.template
deleted file mode 100644
index cd97611556..0000000000
--- a/src/main/resources/docgen/templates/beir-v1.0.0-arguana-splade-distil-cocodenser-medium.template
+++ /dev/null
@@ -1,83 +0,0 @@
-# Anserini Regressions: BEIR (v1.0.0) — ArguAna
-
-**Model**: SPLADE-distil CoCodenser Medium
-
-This page describes regression experiments, integrated into Anserini's regression testing framework, using SPLADE-distil CoCodenser Medium on [BEIR (v1.0.0) — ArguAna](http://beir.ai/).
-SPLADE-distil CoCodenser Medium is an intermediate model version between [SPLADEv2](https://arxiv.org/abs/2109.10086) and [SPLADE++](https://arxiv.org/abs/2205.04733), where the model used distillation (as in SPLADEv2), but started with the CoCondenser pre-trained model.
-See the [official SPLADE repo](https://github.com/naver/splade) for more details; the model itself can be downloaded [here](http://download-de.europe.naverlabs.com/Splade_Release_Jan22/splade_distil_CoCodenser_medium.tar.gz).
-
-The exact configurations for these regressions are stored in [this YAML file](${yaml}).
-Note that this page is automatically generated from [this template](${template}) as part of Anserini's regression pipeline, so do not modify this page directly; modify the template instead and then run `bin/build.sh` to rebuild the documentation.
-
-From one of our Waterloo servers (e.g., `orca`), the following command will perform the complete regression, end to end:
-
-```
-python src/main/python/run_regression.py --index --verify --search \
- --regression ${test_name}
-```
-
-## Corpus
-
-We make available a version of the BEIR-v1.0.0 arguana corpus that has already been processed with SPLADE-distil CoCodenser Medium, i.e., gone through document expansion and term reweighting.
-Thus, no neural inference is involved.
-
-Download the corpus and unpack into `collections/`:
-
-```
-wget https://rgw.cs.uwaterloo.ca/JIMMYLIN-bucket0/data/beir-v1.0.0-splade_distil_cocodenser_medium-arguana.tar -P collections/
-tar xvf collections/beir-v1.0.0-splade_distil_cocodenser_medium-arguana.tar -C collections/
-```
-
-To confirm, the tarball is 8.9 MB and has MD5 checksum `9c5a181e03cbc7f13abd0e0e4bf9158e`.
-
-With the corpus downloaded, the following command will perform the complete regression, end to end, on any machine:
-
-```
-python src/main/python/run_regression.py --index --verify --search \
- --regression ${test_name} \
- --corpus-path collections/beir-v1.0.0-splade_distil_cocodenser_medium-arguana
-```
-
-Alternatively, you can simply copy/paste from the commands below and obtain the same results.
-
-## Indexing
-
-Sample indexing command:
-
-```
-${index_cmds}
-```
-
-The path `/path/to/${corpus}/` should point to the corpus downloaded above.
-
-The important indexing options to note here are `-impact -pretokenized`: the first tells Anserini not to encode BM25 doc lengths into Lucene's norms (which is the default) and the second option says not to apply any additional tokenization on the pre-encoded tokens.
-Upon completion, we should have an index with 8,674 documents.
-
-For additional details, see explanation of [common indexing options](${root_path}/docs/common-indexing-options.md).
-
-## Retrieval
-
-Topics and qrels are stored [here](https://github.com/castorini/anserini-tools/tree/master/topics-and-qrels), which is linked to the Anserini repo as a submodule.
-
-After indexing has completed, you should be able to perform retrieval as follows:
-
-```
-${ranking_cmds}
-```
-
-Evaluation can be performed using `trec_eval`:
-
-```
-${eval_cmds}
-```
-
-## Effectiveness
-
-With the above commands, you should be able to reproduce the following results:
-
-${effectiveness}
-
-
-## Reproduction Log[*](${root_path}/docs/reproducibility.md)
-
-To add to this reproduction log, modify [this template](${template}) and run `bin/build.sh` to rebuild the documentation.
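Each deleted page's "To confirm" step checks the tarball size and MD5 checksum by hand; that check can also be scripted. A minimal sketch using Python's standard `hashlib` (the file path in the usage comment is illustrative):

```python
import hashlib

def md5sum(path, chunk_size=1 << 20):
    # Stream the file in 1 MB chunks so multi-gigabyte tarballs
    # don't have to fit in memory.
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Usage (path and expected digest taken from the corpus-download step):
# assert md5sum("collections/msmarco-passage-splade_distil_cocodenser_medium.tar") \
#     == "f77239a26d08856e6491a34062893b0c"
```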
diff --git a/src/main/resources/docgen/templates/beir-v1.0.0-bioasq-splade-distil-cocodenser-medium.template b/src/main/resources/docgen/templates/beir-v1.0.0-bioasq-splade-distil-cocodenser-medium.template
deleted file mode 100644
index ed4fed1d15..0000000000
--- a/src/main/resources/docgen/templates/beir-v1.0.0-bioasq-splade-distil-cocodenser-medium.template
+++ /dev/null
@@ -1,84 +0,0 @@
-# Anserini Regressions: BEIR (v1.0.0) — BioASQ
-
-**Model**: SPLADE-distil CoCodenser Medium
-
-This page describes regression experiments, integrated into Anserini's regression testing framework, using SPLADE-distil CoCodenser Medium on [BEIR (v1.0.0) — BioASQ](http://beir.ai/).
-SPLADE-distil CoCodenser Medium is an intermediate model version between [SPLADEv2](https://arxiv.org/abs/2109.10086) and [SPLADE++](https://arxiv.org/abs/2205.04733), where the model used distillation (as in SPLADEv2), but started with the CoCondenser pre-trained model.
-See the [official SPLADE repo](https://github.com/naver/splade) for more details; the model itself can be downloaded [here](http://download-de.europe.naverlabs.com/Splade_Release_Jan22/splade_distil_CoCodenser_medium.tar.gz).
-
-The exact configurations for these regressions are stored in [this YAML file](${yaml}).
-Note that this page is automatically generated from [this template](${template}) as part of Anserini's regression pipeline, so do not modify this page directly; modify the template instead and then run `bin/build.sh` to rebuild the documentation.
-
-From one of our Waterloo servers (e.g., `orca`), the following command will perform the complete regression, end to end:
-
-```
-python src/main/python/run_regression.py --index --verify --search \
- --regression ${test_name}
-```
-
-## Corpus
-
-We make available a version of the BEIR-v1.0.0 bioasq corpus that has already been processed with SPLADE-distil CoCodenser Medium, i.e., gone through document expansion and term reweighting.
-Thus, no neural inference is involved.
-For details on how to train SPLADE-distil CoCodenser Medium and perform inference, please see [guide provided by Naver Labs Europe](https://github.com/naver/splade/tree/main/anserini_evaluation).
-
-Download the corpus and unpack into `collections/`:
-
-```
-wget https://rgw.cs.uwaterloo.ca/JIMMYLIN-bucket0/data/beir-v1.0.0-splade_distil_cocodenser_medium-bioasq.tar -P collections/
-tar xvf collections/beir-v1.0.0-splade_distil_cocodenser_medium-bioasq.tar -C collections/
-```
-
-To confirm, the tarball is 8.9 MB and has MD5 checksum `9c5a181e03cbc7f13abd0e0e4bf9158e`.
-
-With the corpus downloaded, the following command will perform the complete regression, end to end, on any machine:
-
-```
-python src/main/python/run_regression.py --index --verify --search \
- --regression ${test_name} \
- --corpus-path collections/beir-v1.0.0-splade_distil_cocodenser_medium-bioasq
-```
-
-Alternatively, you can simply copy/paste from the commands below and obtain the same results.
-
-## Indexing
-
-Sample indexing command:
-
-```
-${index_cmds}
-```
-
-The path `/path/to/${corpus}/` should point to the corpus downloaded above.
-
-The important indexing options to note here are `-impact -pretokenized`: the first tells Anserini not to encode BM25 doc lengths into Lucene's norms (which is the default) and the second option says not to apply any additional tokenization on the pre-encoded tokens.
-Upon completion, we should have an index with 8,674 documents.
-
-For additional details, see explanation of [common indexing options](${root_path}/docs/common-indexing-options.md).
-
-## Retrieval
-
-Topics and qrels are stored [here](https://github.com/castorini/anserini-tools/tree/master/topics-and-qrels), which is linked to the Anserini repo as a submodule.
-
-After indexing has completed, you should be able to perform retrieval as follows:
-
-```
-${ranking_cmds}
-```
-
-Evaluation can be performed using `trec_eval`:
-
-```
-${eval_cmds}
-```
-
-## Effectiveness
-
-With the above commands, you should be able to reproduce the following results:
-
-${effectiveness}
-
-
-## Reproduction Log[*](${root_path}/docs/reproducibility.md)
-
-To add to this reproduction log, modify [this template](${template}) and run `bin/build.sh` to rebuild the documentation.
diff --git a/src/main/resources/docgen/templates/beir-v1.0.0-climate-fever-splade-distil-cocodenser-medium.template b/src/main/resources/docgen/templates/beir-v1.0.0-climate-fever-splade-distil-cocodenser-medium.template
deleted file mode 100644
index 14e51cb572..0000000000
--- a/src/main/resources/docgen/templates/beir-v1.0.0-climate-fever-splade-distil-cocodenser-medium.template
+++ /dev/null
@@ -1,84 +0,0 @@
-# Anserini Regressions: BEIR (v1.0.0) — Climate-FEVER
-
-**Model**: SPLADE-distil CoCodenser Medium
-
-This page describes regression experiments, integrated into Anserini's regression testing framework, using SPLADE-distil CoCodenser Medium on [BEIR (v1.0.0) — Climate-FEVER](http://beir.ai/).
-SPLADE-distil CoCodenser Medium is an intermediate model version between [SPLADEv2](https://arxiv.org/abs/2109.10086) and [SPLADE++](https://arxiv.org/abs/2205.04733), where the model used distillation (as in SPLADEv2), but started with the CoCondenser pre-trained model.
-See the [official SPLADE repo](https://github.com/naver/splade) for more details; the model itself can be downloaded [here](http://download-de.europe.naverlabs.com/Splade_Release_Jan22/splade_distil_CoCodenser_medium.tar.gz).
-
-The exact configurations for these regressions are stored in [this YAML file](${yaml}).
-Note that this page is automatically generated from [this template](${template}) as part of Anserini's regression pipeline, so do not modify this page directly; modify the template instead and then run `bin/build.sh` to rebuild the documentation.
-
-From one of our Waterloo servers (e.g., `orca`), the following command will perform the complete regression, end to end:
-
-```
-python src/main/python/run_regression.py --index --verify --search \
- --regression ${test_name}
-```
-
-## Corpus
-
-We make available a version of the BEIR-v1.0.0 climate-fever corpus that has already been processed with SPLADE-distil CoCodenser Medium, i.e., gone through document expansion and term reweighting.
-Thus, no neural inference is involved.
-For details on how to train SPLADE-distil CoCodenser Medium and perform inference, please see [guide provided by Naver Labs Europe](https://github.com/naver/splade/tree/main/anserini_evaluation).
-
-Download the corpus and unpack into `collections/`:
-
-```
-wget https://rgw.cs.uwaterloo.ca/JIMMYLIN-bucket0/data/beir-v1.0.0-splade_distil_cocodenser_medium-climate-fever.tar -P collections/
-tar xvf collections/beir-v1.0.0-splade_distil_cocodenser_medium-climate-fever.tar -C collections/
-```
-
-To confirm, the tarball is 3.2 GB and has MD5 checksum `07787a4b1236fad234a7fe6d89197b34`.
-
-With the corpus downloaded, the following command will perform the complete regression, end to end, on any machine:
-
-```
-python src/main/python/run_regression.py --index --verify --search \
- --regression ${test_name} \
- --corpus-path collections/beir-v1.0.0-splade_distil_cocodenser_medium-climate-fever
-```
-
-Alternatively, you can simply copy/paste from the commands below and obtain the same results.
-
-## Indexing
-
-Sample indexing command:
-
-```
-${index_cmds}
-```
-
-The path `/path/to/${corpus}/` should point to the corpus downloaded above.
-
-The important indexing options to note here are `-impact -pretokenized`: the first tells Anserini not to encode BM25 document lengths into Lucene's norms (the default behavior), and the second tells it not to apply any additional tokenization to the pre-encoded tokens.
-Upon completion, we should have an index with 5,416,593 documents.
-
-For additional details, see the explanation of [common indexing options](${root_path}/docs/common-indexing-options.md).
-
-## Retrieval
-
-Topics and qrels are stored in [anserini-tools](https://github.com/castorini/anserini-tools/tree/master/topics-and-qrels), which is linked to the Anserini repo as a submodule.
-
-After indexing has completed, you should be able to perform retrieval as follows:
-
-```
-${ranking_cmds}
-```
-
-Evaluation can be performed using `trec_eval`:
-
-```
-${eval_cmds}
-```
-
-## Effectiveness
-
-With the above commands, you should be able to reproduce the following results:
-
-${effectiveness}
-
-
-## Reproduction Log[*](${root_path}/docs/reproducibility.md)
-
-To add to this reproduction log, modify [this template](${template}) and run `bin/build.sh` to rebuild the documentation.
diff --git a/src/main/resources/docgen/templates/beir-v1.0.0-cqadupstack-android-splade-distil-cocodenser-medium.template b/src/main/resources/docgen/templates/beir-v1.0.0-cqadupstack-android-splade-distil-cocodenser-medium.template
deleted file mode 100644
index 8fa530574d..0000000000
--- a/src/main/resources/docgen/templates/beir-v1.0.0-cqadupstack-android-splade-distil-cocodenser-medium.template
+++ /dev/null
@@ -1,84 +0,0 @@
-# Anserini Regressions: BEIR (v1.0.0) — CQADupStack-android
-
-**Model**: SPLADE-distil CoCodenser Medium
-
-This page describes regression experiments, integrated into Anserini's regression testing framework, using SPLADE-distil CoCodenser Medium on [BEIR (v1.0.0) — CQADupStack-android](http://beir.ai/).
-SPLADE-distil CoCodenser Medium is an intermediate model version between [SPLADEv2](https://arxiv.org/abs/2109.10086) and [SPLADE++](https://arxiv.org/abs/2205.04733), where the model used distillation (as in SPLADEv2), but started with the CoCondenser pre-trained model.
-See the [official SPLADE repo](https://github.com/naver/splade) for more details; the model itself can be downloaded [here](http://download-de.europe.naverlabs.com/Splade_Release_Jan22/splade_distil_CoCodenser_medium.tar.gz).
-
-The exact configurations for these regressions are stored in [this YAML file](${yaml}).
-Note that this page is automatically generated from [this template](${template}) as part of Anserini's regression pipeline, so do not modify this page directly; modify the template instead and then run `bin/build.sh` to rebuild the documentation.
-
-From one of our Waterloo servers (e.g., `orca`), the following command will perform the complete regression, end to end:
-
-```
-python src/main/python/run_regression.py --index --verify --search \
- --regression ${test_name}
-```
-
-## Corpus
-
-We make available a version of the BEIR-v1.0.0 cqadupstack-android corpus that has already been processed with SPLADE-distil CoCodenser Medium, i.e., gone through document expansion and term reweighting.
-Thus, no neural inference is involved.
-For details on how to train SPLADE-distil CoCodenser Medium and perform inference, please see the [guide provided by Naver Labs Europe](https://github.com/naver/splade/tree/main/anserini_evaluation).
-
-Download the corpus and unpack into `collections/`:
-
-```
-wget https://rgw.cs.uwaterloo.ca/JIMMYLIN-bucket0/data/beir-v1.0.0-splade_distil_cocodenser_medium-cqadupstack-android.tar -P collections/
-tar xvf collections/beir-v1.0.0-splade_distil_cocodenser_medium-cqadupstack-android.tar -C collections/
-```
-
-To confirm, the tarball is 8.9 MB and has MD5 checksum `9c5a181e03cbc7f13abd0e0e4bf9158e`.
-
-With the corpus downloaded, the following command will perform the complete regression, end to end, on any machine:
-
-```
-python src/main/python/run_regression.py --index --verify --search \
- --regression ${test_name} \
- --corpus-path collections/beir-v1.0.0-splade_distil_cocodenser_medium-cqadupstack-android
-```
-
-Alternatively, you can simply copy/paste from the commands below and obtain the same results.
-
-## Indexing
-
-Sample indexing command:
-
-```
-${index_cmds}
-```
-
-The path `/path/to/${corpus}/` should point to the corpus downloaded above.
-
-The important indexing options to note here are `-impact -pretokenized`: the first tells Anserini not to encode BM25 document lengths into Lucene's norms (the default behavior), and the second tells it not to apply any additional tokenization to the pre-encoded tokens.
-Upon completion, we should have an index with 8,674 documents.
-
-For additional details, see the explanation of [common indexing options](${root_path}/docs/common-indexing-options.md).
-
-## Retrieval
-
-Topics and qrels are stored in [anserini-tools](https://github.com/castorini/anserini-tools/tree/master/topics-and-qrels), which is linked to the Anserini repo as a submodule.
-
-After indexing has completed, you should be able to perform retrieval as follows:
-
-```
-${ranking_cmds}
-```
-
-Evaluation can be performed using `trec_eval`:
-
-```
-${eval_cmds}
-```
-
-## Effectiveness
-
-With the above commands, you should be able to reproduce the following results:
-
-${effectiveness}
-
-
-## Reproduction Log[*](${root_path}/docs/reproducibility.md)
-
-To add to this reproduction log, modify [this template](${template}) and run `bin/build.sh` to rebuild the documentation.
diff --git a/src/main/resources/docgen/templates/beir-v1.0.0-cqadupstack-english-splade-distil-cocodenser-medium.template b/src/main/resources/docgen/templates/beir-v1.0.0-cqadupstack-english-splade-distil-cocodenser-medium.template
deleted file mode 100644
index 4ffbdfb407..0000000000
--- a/src/main/resources/docgen/templates/beir-v1.0.0-cqadupstack-english-splade-distil-cocodenser-medium.template
+++ /dev/null
@@ -1,84 +0,0 @@
-# Anserini Regressions: BEIR (v1.0.0) — CQADupStack-english
-
-**Model**: SPLADE-distil CoCodenser Medium
-
-This page describes regression experiments, integrated into Anserini's regression testing framework, using SPLADE-distil CoCodenser Medium on [BEIR (v1.0.0) — CQADupStack-english](http://beir.ai/).
-SPLADE-distil CoCodenser Medium is an intermediate model version between [SPLADEv2](https://arxiv.org/abs/2109.10086) and [SPLADE++](https://arxiv.org/abs/2205.04733), where the model used distillation (as in SPLADEv2), but started with the CoCondenser pre-trained model.
-See the [official SPLADE repo](https://github.com/naver/splade) for more details; the model itself can be downloaded [here](http://download-de.europe.naverlabs.com/Splade_Release_Jan22/splade_distil_CoCodenser_medium.tar.gz).
-
-The exact configurations for these regressions are stored in [this YAML file](${yaml}).
-Note that this page is automatically generated from [this template](${template}) as part of Anserini's regression pipeline, so do not modify this page directly; modify the template instead and then run `bin/build.sh` to rebuild the documentation.
-
-From one of our Waterloo servers (e.g., `orca`), the following command will perform the complete regression, end to end:
-
-```
-python src/main/python/run_regression.py --index --verify --search \
- --regression ${test_name}
-```
-
-## Corpus
-
-We make available a version of the BEIR-v1.0.0 cqadupstack-english corpus that has already been processed with SPLADE-distil CoCodenser Medium, i.e., gone through document expansion and term reweighting.
-Thus, no neural inference is involved.
-For details on how to train SPLADE-distil CoCodenser Medium and perform inference, please see the [guide provided by Naver Labs Europe](https://github.com/naver/splade/tree/main/anserini_evaluation).
-
-Download the corpus and unpack into `collections/`:
-
-```
-wget https://rgw.cs.uwaterloo.ca/JIMMYLIN-bucket0/data/beir-v1.0.0-splade_distil_cocodenser_medium-cqadupstack-english.tar -P collections/
-tar xvf collections/beir-v1.0.0-splade_distil_cocodenser_medium-cqadupstack-english.tar -C collections/
-```
-
-To confirm, the tarball is 8.9 MB and has MD5 checksum `9c5a181e03cbc7f13abd0e0e4bf9158e`.
-
-With the corpus downloaded, the following command will perform the complete regression, end to end, on any machine:
-
-```
-python src/main/python/run_regression.py --index --verify --search \
- --regression ${test_name} \
- --corpus-path collections/beir-v1.0.0-splade_distil_cocodenser_medium-cqadupstack-english
-```
-
-Alternatively, you can simply copy/paste from the commands below and obtain the same results.
-
-## Indexing
-
-Sample indexing command:
-
-```
-${index_cmds}
-```
-
-The path `/path/to/${corpus}/` should point to the corpus downloaded above.
-
-The important indexing options to note here are `-impact -pretokenized`: the first tells Anserini not to encode BM25 document lengths into Lucene's norms (the default behavior), and the second tells it not to apply any additional tokenization to the pre-encoded tokens.
-Upon completion, we should have an index with 8,674 documents.
-
-For additional details, see the explanation of [common indexing options](${root_path}/docs/common-indexing-options.md).
-
-## Retrieval
-
-Topics and qrels are stored in [anserini-tools](https://github.com/castorini/anserini-tools/tree/master/topics-and-qrels), which is linked to the Anserini repo as a submodule.
-
-After indexing has completed, you should be able to perform retrieval as follows:
-
-```
-${ranking_cmds}
-```
-
-Evaluation can be performed using `trec_eval`:
-
-```
-${eval_cmds}
-```
-
-## Effectiveness
-
-With the above commands, you should be able to reproduce the following results:
-
-${effectiveness}
-
-
-## Reproduction Log[*](${root_path}/docs/reproducibility.md)
-
-To add to this reproduction log, modify [this template](${template}) and run `bin/build.sh` to rebuild the documentation.
diff --git a/src/main/resources/docgen/templates/beir-v1.0.0-cqadupstack-gaming-splade-distil-cocodenser-medium.template b/src/main/resources/docgen/templates/beir-v1.0.0-cqadupstack-gaming-splade-distil-cocodenser-medium.template
deleted file mode 100644
index fe7ac567b7..0000000000
--- a/src/main/resources/docgen/templates/beir-v1.0.0-cqadupstack-gaming-splade-distil-cocodenser-medium.template
+++ /dev/null
@@ -1,84 +0,0 @@
-# Anserini Regressions: BEIR (v1.0.0) — CQADupStack-gaming
-
-**Model**: SPLADE-distil CoCodenser Medium
-
-This page describes regression experiments, integrated into Anserini's regression testing framework, using SPLADE-distil CoCodenser Medium on [BEIR (v1.0.0) — CQADupStack-gaming](http://beir.ai/).
-SPLADE-distil CoCodenser Medium is an intermediate model version between [SPLADEv2](https://arxiv.org/abs/2109.10086) and [SPLADE++](https://arxiv.org/abs/2205.04733), where the model used distillation (as in SPLADEv2), but started with the CoCondenser pre-trained model.
-See the [official SPLADE repo](https://github.com/naver/splade) for more details; the model itself can be downloaded [here](http://download-de.europe.naverlabs.com/Splade_Release_Jan22/splade_distil_CoCodenser_medium.tar.gz).
-
-The exact configurations for these regressions are stored in [this YAML file](${yaml}).
-Note that this page is automatically generated from [this template](${template}) as part of Anserini's regression pipeline, so do not modify this page directly; modify the template instead and then run `bin/build.sh` to rebuild the documentation.
-
-From one of our Waterloo servers (e.g., `orca`), the following command will perform the complete regression, end to end:
-
-```
-python src/main/python/run_regression.py --index --verify --search \
- --regression ${test_name}
-```
-
-## Corpus
-
-We make available a version of the BEIR-v1.0.0 cqadupstack-gaming corpus that has already been processed with SPLADE-distil CoCodenser Medium, i.e., gone through document expansion and term reweighting.
-Thus, no neural inference is involved.
-For details on how to train SPLADE-distil CoCodenser Medium and perform inference, please see the [guide provided by Naver Labs Europe](https://github.com/naver/splade/tree/main/anserini_evaluation).
-
-Download the corpus and unpack into `collections/`:
-
-```
-wget https://rgw.cs.uwaterloo.ca/JIMMYLIN-bucket0/data/beir-v1.0.0-splade_distil_cocodenser_medium-cqadupstack-gaming.tar -P collections/
-tar xvf collections/beir-v1.0.0-splade_distil_cocodenser_medium-cqadupstack-gaming.tar -C collections/
-```
-
-To confirm, the tarball is 8.9 MB and has MD5 checksum `9c5a181e03cbc7f13abd0e0e4bf9158e`.
-
-With the corpus downloaded, the following command will perform the complete regression, end to end, on any machine:
-
-```
-python src/main/python/run_regression.py --index --verify --search \
- --regression ${test_name} \
- --corpus-path collections/beir-v1.0.0-splade_distil_cocodenser_medium-cqadupstack-gaming
-```
-
-Alternatively, you can simply copy/paste from the commands below and obtain the same results.
-
-## Indexing
-
-Sample indexing command:
-
-```
-${index_cmds}
-```
-
-The path `/path/to/${corpus}/` should point to the corpus downloaded above.
-
-The important indexing options to note here are `-impact -pretokenized`: the first tells Anserini not to encode BM25 document lengths into Lucene's norms (the default behavior), and the second tells it not to apply any additional tokenization to the pre-encoded tokens.
-Upon completion, we should have an index with 8,674 documents.
-
-For additional details, see the explanation of [common indexing options](${root_path}/docs/common-indexing-options.md).
-
-## Retrieval
-
-Topics and qrels are stored in [anserini-tools](https://github.com/castorini/anserini-tools/tree/master/topics-and-qrels), which is linked to the Anserini repo as a submodule.
-
-After indexing has completed, you should be able to perform retrieval as follows:
-
-```
-${ranking_cmds}
-```
-
-Evaluation can be performed using `trec_eval`:
-
-```
-${eval_cmds}
-```
-
-## Effectiveness
-
-With the above commands, you should be able to reproduce the following results:
-
-${effectiveness}
-
-
-## Reproduction Log[*](${root_path}/docs/reproducibility.md)
-
-To add to this reproduction log, modify [this template](${template}) and run `bin/build.sh` to rebuild the documentation.
diff --git a/src/main/resources/docgen/templates/beir-v1.0.0-cqadupstack-gis-splade-distil-cocodenser-medium.template b/src/main/resources/docgen/templates/beir-v1.0.0-cqadupstack-gis-splade-distil-cocodenser-medium.template
deleted file mode 100644
index d72475ff5f..0000000000
--- a/src/main/resources/docgen/templates/beir-v1.0.0-cqadupstack-gis-splade-distil-cocodenser-medium.template
+++ /dev/null
@@ -1,84 +0,0 @@
-# Anserini Regressions: BEIR (v1.0.0) — CQADupStack-gis
-
-**Model**: SPLADE-distil CoCodenser Medium
-
-This page describes regression experiments, integrated into Anserini's regression testing framework, using SPLADE-distil CoCodenser Medium on [BEIR (v1.0.0) — CQADupStack-gis](http://beir.ai/).
-SPLADE-distil CoCodenser Medium is an intermediate model version between [SPLADEv2](https://arxiv.org/abs/2109.10086) and [SPLADE++](https://arxiv.org/abs/2205.04733), where the model used distillation (as in SPLADEv2), but started with the CoCondenser pre-trained model.
-See the [official SPLADE repo](https://github.com/naver/splade) for more details; the model itself can be downloaded [here](http://download-de.europe.naverlabs.com/Splade_Release_Jan22/splade_distil_CoCodenser_medium.tar.gz).
-
-The exact configurations for these regressions are stored in [this YAML file](${yaml}).
-Note that this page is automatically generated from [this template](${template}) as part of Anserini's regression pipeline, so do not modify this page directly; modify the template instead and then run `bin/build.sh` to rebuild the documentation.
-
-From one of our Waterloo servers (e.g., `orca`), the following command will perform the complete regression, end to end:
-
-```
-python src/main/python/run_regression.py --index --verify --search \
- --regression ${test_name}
-```
-
-## Corpus
-
-We make available a version of the BEIR-v1.0.0 cqadupstack-gis corpus that has already been processed with SPLADE-distil CoCodenser Medium, i.e., gone through document expansion and term reweighting.
-Thus, no neural inference is involved.
-For details on how to train SPLADE-distil CoCodenser Medium and perform inference, please see the [guide provided by Naver Labs Europe](https://github.com/naver/splade/tree/main/anserini_evaluation).
-
-Download the corpus and unpack into `collections/`:
-
-```
-wget https://rgw.cs.uwaterloo.ca/JIMMYLIN-bucket0/data/beir-v1.0.0-splade_distil_cocodenser_medium-cqadupstack-gis.tar -P collections/
-tar xvf collections/beir-v1.0.0-splade_distil_cocodenser_medium-cqadupstack-gis.tar -C collections/
-```
-
-To confirm, the tarball is 8.9 MB and has MD5 checksum `9c5a181e03cbc7f13abd0e0e4bf9158e`.
-
-With the corpus downloaded, the following command will perform the complete regression, end to end, on any machine:
-
-```
-python src/main/python/run_regression.py --index --verify --search \
- --regression ${test_name} \
- --corpus-path collections/beir-v1.0.0-splade_distil_cocodenser_medium-cqadupstack-gis
-```
-
-Alternatively, you can simply copy/paste from the commands below and obtain the same results.
-
-## Indexing
-
-Sample indexing command:
-
-```
-${index_cmds}
-```
-
-The path `/path/to/${corpus}/` should point to the corpus downloaded above.
-
-The important indexing options to note here are `-impact -pretokenized`: the first tells Anserini not to encode BM25 document lengths into Lucene's norms (the default behavior), and the second tells it not to apply any additional tokenization to the pre-encoded tokens.
-Upon completion, we should have an index with 8,674 documents.
-
-For additional details, see the explanation of [common indexing options](${root_path}/docs/common-indexing-options.md).
-
-## Retrieval
-
-Topics and qrels are stored in [anserini-tools](https://github.com/castorini/anserini-tools/tree/master/topics-and-qrels), which is linked to the Anserini repo as a submodule.
-
-After indexing has completed, you should be able to perform retrieval as follows:
-
-```
-${ranking_cmds}
-```
-
-Evaluation can be performed using `trec_eval`:
-
-```
-${eval_cmds}
-```
-
-## Effectiveness
-
-With the above commands, you should be able to reproduce the following results:
-
-${effectiveness}
-
-
-## Reproduction Log[*](${root_path}/docs/reproducibility.md)
-
-To add to this reproduction log, modify [this template](${template}) and run `bin/build.sh` to rebuild the documentation.
diff --git a/src/main/resources/docgen/templates/beir-v1.0.0-cqadupstack-mathematica-splade-distil-cocodenser-medium.template b/src/main/resources/docgen/templates/beir-v1.0.0-cqadupstack-mathematica-splade-distil-cocodenser-medium.template
deleted file mode 100644
index 38a1092588..0000000000
--- a/src/main/resources/docgen/templates/beir-v1.0.0-cqadupstack-mathematica-splade-distil-cocodenser-medium.template
+++ /dev/null
@@ -1,84 +0,0 @@
-# Anserini Regressions: BEIR (v1.0.0) — CQADupStack-mathematica
-
-**Model**: SPLADE-distil CoCodenser Medium
-
-This page describes regression experiments, integrated into Anserini's regression testing framework, using SPLADE-distil CoCodenser Medium on [BEIR (v1.0.0) — CQADupStack-mathematica](http://beir.ai/).
-SPLADE-distil CoCodenser Medium is an intermediate model version between [SPLADEv2](https://arxiv.org/abs/2109.10086) and [SPLADE++](https://arxiv.org/abs/2205.04733), where the model used distillation (as in SPLADEv2), but started with the CoCondenser pre-trained model.
-See the [official SPLADE repo](https://github.com/naver/splade) for more details; the model itself can be downloaded [here](http://download-de.europe.naverlabs.com/Splade_Release_Jan22/splade_distil_CoCodenser_medium.tar.gz).
-
-The exact configurations for these regressions are stored in [this YAML file](${yaml}).
-Note that this page is automatically generated from [this template](${template}) as part of Anserini's regression pipeline, so do not modify this page directly; modify the template instead and then run `bin/build.sh` to rebuild the documentation.
-
-From one of our Waterloo servers (e.g., `orca`), the following command will perform the complete regression, end to end:
-
-```
-python src/main/python/run_regression.py --index --verify --search \
- --regression ${test_name}
-```
-
-## Corpus
-
-We make available a version of the BEIR-v1.0.0 cqadupstack-mathematica corpus that has already been processed with SPLADE-distil CoCodenser Medium, i.e., gone through document expansion and term reweighting.
-Thus, no neural inference is involved.
-For details on how to train SPLADE-distil CoCodenser Medium and perform inference, please see the [guide provided by Naver Labs Europe](https://github.com/naver/splade/tree/main/anserini_evaluation).
-
-Download the corpus and unpack into `collections/`:
-
-```
-wget https://rgw.cs.uwaterloo.ca/JIMMYLIN-bucket0/data/beir-v1.0.0-splade_distil_cocodenser_medium-cqadupstack-mathematica.tar -P collections/
-tar xvf collections/beir-v1.0.0-splade_distil_cocodenser_medium-cqadupstack-mathematica.tar -C collections/
-```
-
-To confirm, the tarball is 8.9 MB and has MD5 checksum `9c5a181e03cbc7f13abd0e0e4bf9158e`.
-
-With the corpus downloaded, the following command will perform the complete regression, end to end, on any machine:
-
-```
-python src/main/python/run_regression.py --index --verify --search \
- --regression ${test_name} \
- --corpus-path collections/beir-v1.0.0-splade_distil_cocodenser_medium-cqadupstack-mathematica
-```
-
-Alternatively, you can simply copy/paste from the commands below and obtain the same results.
-
-## Indexing
-
-Sample indexing command:
-
-```
-${index_cmds}
-```
-
-The path `/path/to/${corpus}/` should point to the corpus downloaded above.
-
-The important indexing options to note here are `-impact -pretokenized`: the first tells Anserini not to encode BM25 document lengths into Lucene's norms (the default behavior), and the second tells it not to apply any additional tokenization to the pre-encoded tokens.
-Upon completion, we should have an index with 8,674 documents.
-
-For additional details, see the explanation of [common indexing options](${root_path}/docs/common-indexing-options.md).
-
-## Retrieval
-
-Topics and qrels are stored in [anserini-tools](https://github.com/castorini/anserini-tools/tree/master/topics-and-qrels), which is linked to the Anserini repo as a submodule.
-
-After indexing has completed, you should be able to perform retrieval as follows:
-
-```
-${ranking_cmds}
-```
-
-Evaluation can be performed using `trec_eval`:
-
-```
-${eval_cmds}
-```
-
-## Effectiveness
-
-With the above commands, you should be able to reproduce the following results:
-
-${effectiveness}
-
-
-## Reproduction Log[*](${root_path}/docs/reproducibility.md)
-
-To add to this reproduction log, modify [this template](${template}) and run `bin/build.sh` to rebuild the documentation.
diff --git a/src/main/resources/docgen/templates/beir-v1.0.0-cqadupstack-physics-splade-distil-cocodenser-medium.template b/src/main/resources/docgen/templates/beir-v1.0.0-cqadupstack-physics-splade-distil-cocodenser-medium.template
deleted file mode 100644
index 406c876676..0000000000
--- a/src/main/resources/docgen/templates/beir-v1.0.0-cqadupstack-physics-splade-distil-cocodenser-medium.template
+++ /dev/null
@@ -1,84 +0,0 @@
-# Anserini Regressions: BEIR (v1.0.0) — CQADupStack-physics
-
-**Model**: SPLADE-distil CoCodenser Medium
-
-This page describes regression experiments, integrated into Anserini's regression testing framework, using SPLADE-distil CoCodenser Medium on [BEIR (v1.0.0) — CQADupStack-physics](http://beir.ai/).
-SPLADE-distil CoCodenser Medium is an intermediate model version between [SPLADEv2](https://arxiv.org/abs/2109.10086) and [SPLADE++](https://arxiv.org/abs/2205.04733), where the model used distillation (as in SPLADEv2), but started with the CoCondenser pre-trained model.
-See the [official SPLADE repo](https://github.com/naver/splade) for more details; the model itself can be downloaded [here](http://download-de.europe.naverlabs.com/Splade_Release_Jan22/splade_distil_CoCodenser_medium.tar.gz).
-
-The exact configurations for these regressions are stored in [this YAML file](${yaml}).
-Note that this page is automatically generated from [this template](${template}) as part of Anserini's regression pipeline, so do not modify this page directly; modify the template instead and then run `bin/build.sh` to rebuild the documentation.
-
-From one of our Waterloo servers (e.g., `orca`), the following command will perform the complete regression, end to end:
-
-```
-python src/main/python/run_regression.py --index --verify --search \
- --regression ${test_name}
-```
-
-## Corpus
-
-We make available a version of the BEIR-v1.0.0 cqadupstack-physics corpus that has already been processed with SPLADE-distil CoCodenser Medium, i.e., gone through document expansion and term reweighting.
-Thus, no neural inference is involved.
-For details on how to train SPLADE-distil CoCodenser Medium and perform inference, please see the [guide provided by Naver Labs Europe](https://github.com/naver/splade/tree/main/anserini_evaluation).
-
-Download the corpus and unpack into `collections/`:
-
-```
-wget https://rgw.cs.uwaterloo.ca/JIMMYLIN-bucket0/data/beir-v1.0.0-splade_distil_cocodenser_medium-cqadupstack-physics.tar -P collections/
-tar xvf collections/beir-v1.0.0-splade_distil_cocodenser_medium-cqadupstack-physics.tar -C collections/
-```
-
-To confirm, the tarball is 8.9 MB and has MD5 checksum `9c5a181e03cbc7f13abd0e0e4bf9158e`.
-
-With the corpus downloaded, the following command will perform the complete regression, end to end, on any machine:
-
-```
-python src/main/python/run_regression.py --index --verify --search \
- --regression ${test_name} \
- --corpus-path collections/beir-v1.0.0-splade_distil_cocodenser_medium-cqadupstack-physics
-```
-
-Alternatively, you can simply copy/paste from the commands below and obtain the same results.
-
-## Indexing
-
-Sample indexing command:
-
-```
-${index_cmds}
-```
-
-The path `/path/to/${corpus}/` should point to the corpus downloaded above.
-
-The important indexing options to note here are `-impact -pretokenized`: the first tells Anserini not to encode BM25 document lengths into Lucene's norms (the default behavior), and the second tells it not to apply any additional tokenization to the pre-encoded tokens.
-Upon completion, we should have an index with 8,674 documents.
-
-For additional details, see the explanation of [common indexing options](${root_path}/docs/common-indexing-options.md).
-
-## Retrieval
-
-Topics and qrels are stored in [anserini-tools](https://github.com/castorini/anserini-tools/tree/master/topics-and-qrels), which is linked to the Anserini repo as a submodule.
-
-After indexing has completed, you should be able to perform retrieval as follows:
-
-```
-${ranking_cmds}
-```
-
-Evaluation can be performed using `trec_eval`:
-
-```
-${eval_cmds}
-```
-
-## Effectiveness
-
-With the above commands, you should be able to reproduce the following results:
-
-${effectiveness}
-
-
-## Reproduction Log[*](${root_path}/docs/reproducibility.md)
-
-To add to this reproduction log, modify [this template](${template}) and run `bin/build.sh` to rebuild the documentation.
diff --git a/src/main/resources/docgen/templates/beir-v1.0.0-cqadupstack-programmers-splade-distil-cocodenser-medium.template b/src/main/resources/docgen/templates/beir-v1.0.0-cqadupstack-programmers-splade-distil-cocodenser-medium.template
deleted file mode 100644
index 06ce7712ae..0000000000
--- a/src/main/resources/docgen/templates/beir-v1.0.0-cqadupstack-programmers-splade-distil-cocodenser-medium.template
+++ /dev/null
@@ -1,84 +0,0 @@
-# Anserini Regressions: BEIR (v1.0.0) — CQADupStack-programmers
-
-**Model**: SPLADE-distil CoCodenser Medium
-
-This page describes regression experiments, integrated into Anserini's regression testing framework, using SPLADE-distil CoCodenser Medium on [BEIR (v1.0.0) — CQADupStack-programmers](http://beir.ai/).
-SPLADE-distil CoCodenser Medium is an intermediate model version between [SPLADEv2](https://arxiv.org/abs/2109.10086) and [SPLADE++](https://arxiv.org/abs/2205.04733), where the model used distillation (as in SPLADEv2), but started with the CoCondenser pre-trained model.
-See the [official SPLADE repo](https://github.com/naver/splade) for more details; the model itself can be downloaded [here](http://download-de.europe.naverlabs.com/Splade_Release_Jan22/splade_distil_CoCodenser_medium.tar.gz).
-
-The exact configurations for these regressions are stored in [this YAML file](${yaml}).
-Note that this page is automatically generated from [this template](${template}) as part of Anserini's regression pipeline, so do not modify this page directly; modify the template instead and then run `bin/build.sh` to rebuild the documentation.
-
-From one of our Waterloo servers (e.g., `orca`), the following command will perform the complete regression, end to end:
-
-```
-python src/main/python/run_regression.py --index --verify --search \
- --regression ${test_name}
-```
-
-## Corpus
-
-We make available a version of the BEIR-v1.0.0 cqadupstack-programmers corpus that has already been processed with SPLADE-distil CoCodenser Medium, i.e., gone through document expansion and term reweighting.
-Thus, no neural inference is involved.
-For details on how to train SPLADE-distil CoCodenser Medium and perform inference, please see the [guide provided by Naver Labs Europe](https://github.com/naver/splade/tree/main/anserini_evaluation).
-
-Download the corpus and unpack into `collections/`:
-
-```
-wget https://rgw.cs.uwaterloo.ca/JIMMYLIN-bucket0/data/beir-v1.0.0-splade_distil_cocodenser_medium-cqadupstack-programmers.tar -P collections/
-tar xvf collections/beir-v1.0.0-splade_distil_cocodenser_medium-cqadupstack-programmers.tar -C collections/
-```
-
-To confirm, the tarball is 8.9 MB and has MD5 checksum `9c5a181e03cbc7f13abd0e0e4bf9158e`.
-
-With the corpus downloaded, the following command will perform the complete regression, end to end, on any machine:
-
-```
-python src/main/python/run_regression.py --index --verify --search \
- --regression ${test_name} \
- --corpus-path collections/beir-v1.0.0-splade_distil_cocodenser_medium-cqadupstack-programmers
-```
-
-Alternatively, you can simply copy/paste from the commands below and obtain the same results.
-
-## Indexing
-
-Sample indexing command:
-
-```
-${index_cmds}
-```
-
-The path `/path/to/${corpus}/` should point to the corpus downloaded above.
-
-The important indexing options to note here are `-impact -pretokenized`: the first tells Anserini not to encode BM25 document lengths into Lucene's norms (the default behavior), and the second tells it not to apply any additional tokenization to the pre-encoded tokens.
-Upon completion, we should have an index with 8,674 documents.
-
-For additional details, see the explanation of [common indexing options](${root_path}/docs/common-indexing-options.md).
-
-## Retrieval
-
-Topics and qrels are stored in [anserini-tools](https://github.com/castorini/anserini-tools/tree/master/topics-and-qrels), which is linked to the Anserini repo as a submodule.
-
-After indexing has completed, you should be able to perform retrieval as follows:
-
-```
-${ranking_cmds}
-```
-
-Evaluation can be performed using `trec_eval`:
-
-```
-${eval_cmds}
-```
-
-## Effectiveness
-
-With the above commands, you should be able to reproduce the following results:
-
-${effectiveness}
-
-
-## Reproduction Log[*](${root_path}/docs/reproducibility.md)
-
-To add to this reproduction log, modify [this template](${template}) and run `bin/build.sh` to rebuild the documentation.
diff --git a/src/main/resources/docgen/templates/beir-v1.0.0-cqadupstack-stats-splade-distil-cocodenser-medium.template b/src/main/resources/docgen/templates/beir-v1.0.0-cqadupstack-stats-splade-distil-cocodenser-medium.template
deleted file mode 100644
index baadf25305..0000000000
--- a/src/main/resources/docgen/templates/beir-v1.0.0-cqadupstack-stats-splade-distil-cocodenser-medium.template
+++ /dev/null
@@ -1,84 +0,0 @@
-# Anserini Regressions: BEIR (v1.0.0) — CQADupStack-stats
-
-**Model**: SPLADE-distil CoCodenser Medium
-
-This page describes regression experiments, integrated into Anserini's regression testing framework, using SPLADE-distil CoCodenser Medium on [BEIR (v1.0.0) — CQADupStack-stats](http://beir.ai/).
-SPLADE-distil CoCodenser Medium is an intermediate model version between [SPLADEv2](https://arxiv.org/abs/2109.10086) and [SPLADE++](https://arxiv.org/abs/2205.04733), where the model used distillation (as in SPLADEv2), but started with the CoCondenser pre-trained model.
-See the [official SPLADE repo](https://github.com/naver/splade) for more details; the model itself can be downloaded [here](http://download-de.europe.naverlabs.com/Splade_Release_Jan22/splade_distil_CoCodenser_medium.tar.gz).
-
-The exact configurations for these regressions are stored in [this YAML file](${yaml}).
-Note that this page is automatically generated from [this template](${template}) as part of Anserini's regression pipeline, so do not modify this page directly; modify the template instead and then run `bin/build.sh` to rebuild the documentation.
-
-From one of our Waterloo servers (e.g., `orca`), the following command will perform the complete regression, end to end:
-
-```
-python src/main/python/run_regression.py --index --verify --search \
- --regression ${test_name}
-```
-
-## Corpus
-
-We make available a version of the BEIR-v1.0.0 cqadupstack-stats corpus that has already been processed with SPLADE-distil CoCodenser Medium, i.e., gone through document expansion and term reweighting.
-Thus, no neural inference is involved.
-For details on how to train SPLADE-distil CoCodenser Medium and perform inference, please see the [guide provided by Naver Labs Europe](https://github.com/naver/splade/tree/main/anserini_evaluation).
-
-Download the corpus and unpack into `collections/`:
-
-```
-wget https://rgw.cs.uwaterloo.ca/JIMMYLIN-bucket0/data/beir-v1.0.0-splade_distil_cocodenser_medium-cqadupstack-stats.tar -P collections/
-tar xvf collections/beir-v1.0.0-splade_distil_cocodenser_medium-cqadupstack-stats.tar -C collections/
-```
-
-To confirm, the tarball is 8.9 MB and has MD5 checksum `9c5a181e03cbc7f13abd0e0e4bf9158e`.
-
-With the corpus downloaded, the following command will perform the complete regression, end to end, on any machine:
-
-```
-python src/main/python/run_regression.py --index --verify --search \
- --regression ${test_name} \
- --corpus-path collections/beir-v1.0.0-splade_distil_cocodenser_medium-cqadupstack-stats
-```
-
-Alternatively, you can simply copy/paste from the commands below and obtain the same results.
-
-## Indexing
-
-Sample indexing command:
-
-```
-${index_cmds}
-```
-
-The path `/path/to/${corpus}/` should point to the corpus downloaded above.
-
-The important indexing options to note here are `-impact -pretokenized`: the first tells Anserini not to encode BM25 doclengths into Lucene's norms (which is the default) and the second option says not to apply any additional tokenization on the pre-encoded tokens.
-Upon completion, we should have an index with 8,674 documents.
-
-For additional details, see the explanation of [common indexing options](${root_path}/docs/common-indexing-options.md).
-
-## Retrieval
-
-Topics and qrels are stored [here](https://github.com/castorini/anserini-tools/tree/master/topics-and-qrels), which is linked to the Anserini repo as a submodule.
-
-After indexing has completed, you should be able to perform retrieval as follows:
-
-```
-${ranking_cmds}
-```
-
-Evaluation can be performed using `trec_eval`:
-
-```
-${eval_cmds}
-```
-
-## Effectiveness
-
-With the above commands, you should be able to reproduce the following results:
-
-${effectiveness}
-
-
-## Reproduction Log[*](${root_path}/docs/reproducibility.md)
-
-To add to this reproduction log, modify [this template](${template}) and run `bin/build.sh` to rebuild the documentation.
diff --git a/src/main/resources/docgen/templates/beir-v1.0.0-cqadupstack-tex-splade-distil-cocodenser-medium.template b/src/main/resources/docgen/templates/beir-v1.0.0-cqadupstack-tex-splade-distil-cocodenser-medium.template
deleted file mode 100644
index f7385e348b..0000000000
--- a/src/main/resources/docgen/templates/beir-v1.0.0-cqadupstack-tex-splade-distil-cocodenser-medium.template
+++ /dev/null
@@ -1,84 +0,0 @@
-# Anserini Regressions: BEIR (v1.0.0) — CQADupStack-tex
-
-**Model**: SPLADE-distil CoCodenser Medium
-
-This page describes regression experiments, integrated into Anserini's regression testing framework, using SPLADE-distil CoCodenser Medium on [BEIR (v1.0.0) — CQADupStack-tex](http://beir.ai/).
-SPLADE-distil CoCodenser Medium is an intermediate model version between [SPLADEv2](https://arxiv.org/abs/2109.10086) and [SPLADE++](https://arxiv.org/abs/2205.04733), where the model used distillation (as in SPLADEv2), but started with the CoCondenser pre-trained model.
-See the [official SPLADE repo](https://github.com/naver/splade) for more details; the model itself can be downloaded [here](http://download-de.europe.naverlabs.com/Splade_Release_Jan22/splade_distil_CoCodenser_medium.tar.gz).
-
-The exact configurations for these regressions are stored in [this YAML file](${yaml}).
-Note that this page is automatically generated from [this template](${template}) as part of Anserini's regression pipeline, so do not modify this page directly; modify the template instead and then run `bin/build.sh` to rebuild the documentation.
-
-From one of our Waterloo servers (e.g., `orca`), the following command will perform the complete regression, end to end:
-
-```
-python src/main/python/run_regression.py --index --verify --search \
- --regression ${test_name}
-```
-
-## Corpus
-
-We make available a version of the BEIR-v1.0.0 cqadupstack-tex corpus that has already been processed with SPLADE-distil CoCodenser Medium, i.e., gone through document expansion and term reweighting.
-Thus, no neural inference is involved.
-For details on how to train SPLADE-distil CoCodenser Medium and perform inference, please see the [guide provided by Naver Labs Europe](https://github.com/naver/splade/tree/main/anserini_evaluation).
-
-Download the corpus and unpack into `collections/`:
-
-```
-wget https://rgw.cs.uwaterloo.ca/JIMMYLIN-bucket0/data/beir-v1.0.0-splade_distil_cocodenser_medium-cqadupstack-tex.tar -P collections/
-tar xvf collections/beir-v1.0.0-splade_distil_cocodenser_medium-cqadupstack-tex.tar -C collections/
-```
-
-To confirm, the tarball is 8.9 MB and has MD5 checksum `9c5a181e03cbc7f13abd0e0e4bf9158e`.
-
-With the corpus downloaded, the following command will perform the complete regression, end to end, on any machine:
-
-```
-python src/main/python/run_regression.py --index --verify --search \
- --regression ${test_name} \
- --corpus-path collections/beir-v1.0.0-splade_distil_cocodenser_medium-cqadupstack-tex
-```
-
-Alternatively, you can simply copy/paste from the commands below and obtain the same results.
-
-## Indexing
-
-Sample indexing command:
-
-```
-${index_cmds}
-```
-
-The path `/path/to/${corpus}/` should point to the corpus downloaded above.
-
-The important indexing options to note here are `-impact -pretokenized`: the first tells Anserini not to encode BM25 doclengths into Lucene's norms (which is the default) and the second option says not to apply any additional tokenization on the pre-encoded tokens.
-Upon completion, we should have an index with 8,674 documents.
-
-For additional details, see the explanation of [common indexing options](${root_path}/docs/common-indexing-options.md).
-
-## Retrieval
-
-Topics and qrels are stored [here](https://github.com/castorini/anserini-tools/tree/master/topics-and-qrels), which is linked to the Anserini repo as a submodule.
-
-After indexing has completed, you should be able to perform retrieval as follows:
-
-```
-${ranking_cmds}
-```
-
-Evaluation can be performed using `trec_eval`:
-
-```
-${eval_cmds}
-```
-
-## Effectiveness
-
-With the above commands, you should be able to reproduce the following results:
-
-${effectiveness}
-
-
-## Reproduction Log[*](${root_path}/docs/reproducibility.md)
-
-To add to this reproduction log, modify [this template](${template}) and run `bin/build.sh` to rebuild the documentation.
diff --git a/src/main/resources/docgen/templates/beir-v1.0.0-cqadupstack-unix-splade-distil-cocodenser-medium.template b/src/main/resources/docgen/templates/beir-v1.0.0-cqadupstack-unix-splade-distil-cocodenser-medium.template
deleted file mode 100644
index bcded125b7..0000000000
--- a/src/main/resources/docgen/templates/beir-v1.0.0-cqadupstack-unix-splade-distil-cocodenser-medium.template
+++ /dev/null
@@ -1,84 +0,0 @@
-# Anserini Regressions: BEIR (v1.0.0) — CQADupStack-unix
-
-**Model**: SPLADE-distil CoCodenser Medium
-
-This page describes regression experiments, integrated into Anserini's regression testing framework, using SPLADE-distil CoCodenser Medium on [BEIR (v1.0.0) — CQADupStack-unix](http://beir.ai/).
-SPLADE-distil CoCodenser Medium is an intermediate model version between [SPLADEv2](https://arxiv.org/abs/2109.10086) and [SPLADE++](https://arxiv.org/abs/2205.04733), where the model used distillation (as in SPLADEv2), but started with the CoCondenser pre-trained model.
-See the [official SPLADE repo](https://github.com/naver/splade) for more details; the model itself can be downloaded [here](http://download-de.europe.naverlabs.com/Splade_Release_Jan22/splade_distil_CoCodenser_medium.tar.gz).
-
-The exact configurations for these regressions are stored in [this YAML file](${yaml}).
-Note that this page is automatically generated from [this template](${template}) as part of Anserini's regression pipeline, so do not modify this page directly; modify the template instead and then run `bin/build.sh` to rebuild the documentation.
-
-From one of our Waterloo servers (e.g., `orca`), the following command will perform the complete regression, end to end:
-
-```
-python src/main/python/run_regression.py --index --verify --search \
- --regression ${test_name}
-```
-
-## Corpus
-
-We make available a version of the BEIR-v1.0.0 cqadupstack-unix corpus that has already been processed with SPLADE-distil CoCodenser Medium, i.e., gone through document expansion and term reweighting.
-Thus, no neural inference is involved.
-For details on how to train SPLADE-distil CoCodenser Medium and perform inference, please see the [guide provided by Naver Labs Europe](https://github.com/naver/splade/tree/main/anserini_evaluation).
-
-Download the corpus and unpack into `collections/`:
-
-```
-wget https://rgw.cs.uwaterloo.ca/JIMMYLIN-bucket0/data/beir-v1.0.0-splade_distil_cocodenser_medium-cqadupstack-unix.tar -P collections/
-tar xvf collections/beir-v1.0.0-splade_distil_cocodenser_medium-cqadupstack-unix.tar -C collections/
-```
-
-To confirm, the tarball is 8.9 MB and has MD5 checksum `9c5a181e03cbc7f13abd0e0e4bf9158e`.
-
-With the corpus downloaded, the following command will perform the complete regression, end to end, on any machine:
-
-```
-python src/main/python/run_regression.py --index --verify --search \
- --regression ${test_name} \
- --corpus-path collections/beir-v1.0.0-splade_distil_cocodenser_medium-cqadupstack-unix
-```
-
-Alternatively, you can simply copy/paste from the commands below and obtain the same results.
-
-## Indexing
-
-Sample indexing command:
-
-```
-${index_cmds}
-```
-
-The path `/path/to/${corpus}/` should point to the corpus downloaded above.
-
-The important indexing options to note here are `-impact -pretokenized`: the first tells Anserini not to encode BM25 doclengths into Lucene's norms (which is the default) and the second option says not to apply any additional tokenization on the pre-encoded tokens.
-Upon completion, we should have an index with 8,674 documents.
-
-For additional details, see the explanation of [common indexing options](${root_path}/docs/common-indexing-options.md).
-
-## Retrieval
-
-Topics and qrels are stored [here](https://github.com/castorini/anserini-tools/tree/master/topics-and-qrels), which is linked to the Anserini repo as a submodule.
-
-After indexing has completed, you should be able to perform retrieval as follows:
-
-```
-${ranking_cmds}
-```
-
-Evaluation can be performed using `trec_eval`:
-
-```
-${eval_cmds}
-```
-
-## Effectiveness
-
-With the above commands, you should be able to reproduce the following results:
-
-${effectiveness}
-
-
-## Reproduction Log[*](${root_path}/docs/reproducibility.md)
-
-To add to this reproduction log, modify [this template](${template}) and run `bin/build.sh` to rebuild the documentation.
diff --git a/src/main/resources/docgen/templates/beir-v1.0.0-cqadupstack-webmasters-splade-distil-cocodenser-medium.template b/src/main/resources/docgen/templates/beir-v1.0.0-cqadupstack-webmasters-splade-distil-cocodenser-medium.template
deleted file mode 100644
index 4affe51746..0000000000
--- a/src/main/resources/docgen/templates/beir-v1.0.0-cqadupstack-webmasters-splade-distil-cocodenser-medium.template
+++ /dev/null
@@ -1,84 +0,0 @@
-# Anserini Regressions: BEIR (v1.0.0) — CQADupStack-webmasters
-
-**Model**: SPLADE-distil CoCodenser Medium
-
-This page describes regression experiments, integrated into Anserini's regression testing framework, using SPLADE-distil CoCodenser Medium on [BEIR (v1.0.0) — CQADupStack-webmasters](http://beir.ai/).
-SPLADE-distil CoCodenser Medium is an intermediate model version between [SPLADEv2](https://arxiv.org/abs/2109.10086) and [SPLADE++](https://arxiv.org/abs/2205.04733), where the model used distillation (as in SPLADEv2), but started with the CoCondenser pre-trained model.
-See the [official SPLADE repo](https://github.com/naver/splade) for more details; the model itself can be downloaded [here](http://download-de.europe.naverlabs.com/Splade_Release_Jan22/splade_distil_CoCodenser_medium.tar.gz).
-
-The exact configurations for these regressions are stored in [this YAML file](${yaml}).
-Note that this page is automatically generated from [this template](${template}) as part of Anserini's regression pipeline, so do not modify this page directly; modify the template instead and then run `bin/build.sh` to rebuild the documentation.
-
-From one of our Waterloo servers (e.g., `orca`), the following command will perform the complete regression, end to end:
-
-```
-python src/main/python/run_regression.py --index --verify --search \
- --regression ${test_name}
-```
-
-## Corpus
-
-We make available a version of the BEIR-v1.0.0 cqadupstack-webmasters corpus that has already been processed with SPLADE-distil CoCodenser Medium, i.e., gone through document expansion and term reweighting.
-Thus, no neural inference is involved.
-For details on how to train SPLADE-distil CoCodenser Medium and perform inference, please see the [guide provided by Naver Labs Europe](https://github.com/naver/splade/tree/main/anserini_evaluation).
-
-Download the corpus and unpack into `collections/`:
-
-```
-wget https://rgw.cs.uwaterloo.ca/JIMMYLIN-bucket0/data/beir-v1.0.0-splade_distil_cocodenser_medium-cqadupstack-webmasters.tar -P collections/
-tar xvf collections/beir-v1.0.0-splade_distil_cocodenser_medium-cqadupstack-webmasters.tar -C collections/
-```
-
-To confirm, the tarball is 8.9 MB and has MD5 checksum `9c5a181e03cbc7f13abd0e0e4bf9158e`.
-
-With the corpus downloaded, the following command will perform the complete regression, end to end, on any machine:
-
-```
-python src/main/python/run_regression.py --index --verify --search \
- --regression ${test_name} \
- --corpus-path collections/beir-v1.0.0-splade_distil_cocodenser_medium-cqadupstack-webmasters
-```
-
-Alternatively, you can simply copy/paste from the commands below and obtain the same results.
-
-## Indexing
-
-Sample indexing command:
-
-```
-${index_cmds}
-```
-
-The path `/path/to/${corpus}/` should point to the corpus downloaded above.
-
-The important indexing options to note here are `-impact -pretokenized`: the first tells Anserini not to encode BM25 doclengths into Lucene's norms (which is the default) and the second option says not to apply any additional tokenization on the pre-encoded tokens.
-Upon completion, we should have an index with 8,674 documents.
-
-For additional details, see the explanation of [common indexing options](${root_path}/docs/common-indexing-options.md).
-
-## Retrieval
-
-Topics and qrels are stored [here](https://github.com/castorini/anserini-tools/tree/master/topics-and-qrels), which is linked to the Anserini repo as a submodule.
-
-After indexing has completed, you should be able to perform retrieval as follows:
-
-```
-${ranking_cmds}
-```
-
-Evaluation can be performed using `trec_eval`:
-
-```
-${eval_cmds}
-```
-
-## Effectiveness
-
-With the above commands, you should be able to reproduce the following results:
-
-${effectiveness}
-
-
-## Reproduction Log[*](${root_path}/docs/reproducibility.md)
-
-To add to this reproduction log, modify [this template](${template}) and run `bin/build.sh` to rebuild the documentation.
diff --git a/src/main/resources/docgen/templates/beir-v1.0.0-cqadupstack-wordpress-splade-distil-cocodenser-medium.template b/src/main/resources/docgen/templates/beir-v1.0.0-cqadupstack-wordpress-splade-distil-cocodenser-medium.template
deleted file mode 100644
index cc22532ebc..0000000000
--- a/src/main/resources/docgen/templates/beir-v1.0.0-cqadupstack-wordpress-splade-distil-cocodenser-medium.template
+++ /dev/null
@@ -1,84 +0,0 @@
-# Anserini Regressions: BEIR (v1.0.0) — CQADupStack-wordpress
-
-**Model**: SPLADE-distil CoCodenser Medium
-
-This page describes regression experiments, integrated into Anserini's regression testing framework, using SPLADE-distil CoCodenser Medium on [BEIR (v1.0.0) — CQADupStack-wordpress](http://beir.ai/).
-SPLADE-distil CoCodenser Medium is an intermediate model version between [SPLADEv2](https://arxiv.org/abs/2109.10086) and [SPLADE++](https://arxiv.org/abs/2205.04733), where the model used distillation (as in SPLADEv2), but started with the CoCondenser pre-trained model.
-See the [official SPLADE repo](https://github.com/naver/splade) for more details; the model itself can be downloaded [here](http://download-de.europe.naverlabs.com/Splade_Release_Jan22/splade_distil_CoCodenser_medium.tar.gz).
-
-The exact configurations for these regressions are stored in [this YAML file](${yaml}).
-Note that this page is automatically generated from [this template](${template}) as part of Anserini's regression pipeline, so do not modify this page directly; modify the template instead and then run `bin/build.sh` to rebuild the documentation.
-
-From one of our Waterloo servers (e.g., `orca`), the following command will perform the complete regression, end to end:
-
-```
-python src/main/python/run_regression.py --index --verify --search \
- --regression ${test_name}
-```
-
-## Corpus
-
-We make available a version of the BEIR-v1.0.0 cqadupstack-wordpress corpus that has already been processed with SPLADE-distil CoCodenser Medium, i.e., gone through document expansion and term reweighting.
-Thus, no neural inference is involved.
-For details on how to train SPLADE-distil CoCodenser Medium and perform inference, please see the [guide provided by Naver Labs Europe](https://github.com/naver/splade/tree/main/anserini_evaluation).
-
-Download the corpus and unpack into `collections/`:
-
-```
-wget https://rgw.cs.uwaterloo.ca/JIMMYLIN-bucket0/data/beir-v1.0.0-splade_distil_cocodenser_medium-cqadupstack-wordpress.tar -P collections/
-tar xvf collections/beir-v1.0.0-splade_distil_cocodenser_medium-cqadupstack-wordpress.tar -C collections/
-```
-
-To confirm, the tarball is 8.9 MB and has MD5 checksum `9c5a181e03cbc7f13abd0e0e4bf9158e`.
-
-With the corpus downloaded, the following command will perform the complete regression, end to end, on any machine:
-
-```
-python src/main/python/run_regression.py --index --verify --search \
- --regression ${test_name} \
- --corpus-path collections/beir-v1.0.0-splade_distil_cocodenser_medium-cqadupstack-wordpress
-```
-
-Alternatively, you can simply copy/paste from the commands below and obtain the same results.
-
-## Indexing
-
-Sample indexing command:
-
-```
-${index_cmds}
-```
-
-The path `/path/to/${corpus}/` should point to the corpus downloaded above.
-
-The important indexing options to note here are `-impact -pretokenized`: the first tells Anserini not to encode BM25 doclengths into Lucene's norms (which is the default) and the second option says not to apply any additional tokenization on the pre-encoded tokens.
-Upon completion, we should have an index with 8,674 documents.
-
-For additional details, see the explanation of [common indexing options](${root_path}/docs/common-indexing-options.md).
-
-## Retrieval
-
-Topics and qrels are stored [here](https://github.com/castorini/anserini-tools/tree/master/topics-and-qrels), which is linked to the Anserini repo as a submodule.
-
-After indexing has completed, you should be able to perform retrieval as follows:
-
-```
-${ranking_cmds}
-```
-
-Evaluation can be performed using `trec_eval`:
-
-```
-${eval_cmds}
-```
-
-## Effectiveness
-
-With the above commands, you should be able to reproduce the following results:
-
-${effectiveness}
-
-
-## Reproduction Log[*](${root_path}/docs/reproducibility.md)
-
-To add to this reproduction log, modify [this template](${template}) and run `bin/build.sh` to rebuild the documentation.
diff --git a/src/main/resources/docgen/templates/beir-v1.0.0-dbpedia-entity-splade-distil-cocodenser-medium.template b/src/main/resources/docgen/templates/beir-v1.0.0-dbpedia-entity-splade-distil-cocodenser-medium.template
deleted file mode 100644
index 6264a8a98f..0000000000
--- a/src/main/resources/docgen/templates/beir-v1.0.0-dbpedia-entity-splade-distil-cocodenser-medium.template
+++ /dev/null
@@ -1,84 +0,0 @@
-# Anserini Regressions: BEIR (v1.0.0) — DBPedia
-
-**Model**: SPLADE-distil CoCodenser Medium
-
-This page describes regression experiments, integrated into Anserini's regression testing framework, using SPLADE-distil CoCodenser Medium on [BEIR (v1.0.0) — DBPedia](http://beir.ai/).
-SPLADE-distil CoCodenser Medium is an intermediate model version between [SPLADEv2](https://arxiv.org/abs/2109.10086) and [SPLADE++](https://arxiv.org/abs/2205.04733), where the model used distillation (as in SPLADEv2), but started with the CoCondenser pre-trained model.
-See the [official SPLADE repo](https://github.com/naver/splade) for more details; the model itself can be downloaded [here](http://download-de.europe.naverlabs.com/Splade_Release_Jan22/splade_distil_CoCodenser_medium.tar.gz).
-
-The exact configurations for these regressions are stored in [this YAML file](${yaml}).
-Note that this page is automatically generated from [this template](${template}) as part of Anserini's regression pipeline, so do not modify this page directly; modify the template instead and then run `bin/build.sh` to rebuild the documentation.
-
-From one of our Waterloo servers (e.g., `orca`), the following command will perform the complete regression, end to end:
-
-```
-python src/main/python/run_regression.py --index --verify --search \
- --regression ${test_name}
-```
-
-## Corpus
-
-We make available a version of the BEIR-v1.0.0 dbpedia-entity corpus that has already been processed with SPLADE-distil CoCodenser Medium, i.e., gone through document expansion and term reweighting.
-Thus, no neural inference is involved.
-For details on how to train SPLADE-distil CoCodenser Medium and perform inference, please see the [guide provided by Naver Labs Europe](https://github.com/naver/splade/tree/main/anserini_evaluation).
-
-Download the corpus and unpack into `collections/`:
-
-```
-wget https://rgw.cs.uwaterloo.ca/JIMMYLIN-bucket0/data/beir-v1.0.0-splade_distil_cocodenser_medium-dbpedia-entity.tar -P collections/
-tar xvf collections/beir-v1.0.0-splade_distil_cocodenser_medium-dbpedia-entity.tar -C collections/
-```
-
-To confirm, the tarball is 2.5 GB and has MD5 checksum `fdd1467eae4fbe6c53b75428492c776f`.
-
-With the corpus downloaded, the following command will perform the complete regression, end to end, on any machine:
-
-```
-python src/main/python/run_regression.py --index --verify --search \
- --regression ${test_name} \
- --corpus-path collections/beir-v1.0.0-splade_distil_cocodenser_medium-dbpedia-entity
-```
-
-Alternatively, you can simply copy/paste from the commands below and obtain the same results.
-
-## Indexing
-
-Sample indexing command:
-
-```
-${index_cmds}
-```
-
-The path `/path/to/${corpus}/` should point to the corpus downloaded above.
-
-The important indexing options to note here are `-impact -pretokenized`: the first tells Anserini not to encode BM25 doclengths into Lucene's norms (which is the default) and the second option says not to apply any additional tokenization on the pre-encoded tokens.
-Upon completion, we should have an index with 4,635,922 documents.
-
-For additional details, see the explanation of [common indexing options](${root_path}/docs/common-indexing-options.md).
-
-## Retrieval
-
-Topics and qrels are stored [here](https://github.com/castorini/anserini-tools/tree/master/topics-and-qrels), which is linked to the Anserini repo as a submodule.
-
-After indexing has completed, you should be able to perform retrieval as follows:
-
-```
-${ranking_cmds}
-```
-
-Evaluation can be performed using `trec_eval`:
-
-```
-${eval_cmds}
-```
-
-## Effectiveness
-
-With the above commands, you should be able to reproduce the following results:
-
-${effectiveness}
-
-
-## Reproduction Log[*](${root_path}/docs/reproducibility.md)
-
-To add to this reproduction log, modify [this template](${template}) and run `bin/build.sh` to rebuild the documentation.
diff --git a/src/main/resources/docgen/templates/beir-v1.0.0-fever-splade-distil-cocodenser-medium.template b/src/main/resources/docgen/templates/beir-v1.0.0-fever-splade-distil-cocodenser-medium.template
deleted file mode 100644
index 7346c23167..0000000000
--- a/src/main/resources/docgen/templates/beir-v1.0.0-fever-splade-distil-cocodenser-medium.template
+++ /dev/null
@@ -1,84 +0,0 @@
-# Anserini Regressions: BEIR (v1.0.0) — FEVER
-
-**Model**: SPLADE-distil CoCodenser Medium
-
-This page describes regression experiments, integrated into Anserini's regression testing framework, using SPLADE-distil CoCodenser Medium on [BEIR (v1.0.0) — FEVER](http://beir.ai/).
-SPLADE-distil CoCodenser Medium is an intermediate model version between [SPLADEv2](https://arxiv.org/abs/2109.10086) and [SPLADE++](https://arxiv.org/abs/2205.04733), where the model used distillation (as in SPLADEv2), but started with the CoCondenser pre-trained model.
-See the [official SPLADE repo](https://github.com/naver/splade) for more details; the model itself can be downloaded [here](http://download-de.europe.naverlabs.com/Splade_Release_Jan22/splade_distil_CoCodenser_medium.tar.gz).
-
-The exact configurations for these regressions are stored in [this YAML file](${yaml}).
-Note that this page is automatically generated from [this template](${template}) as part of Anserini's regression pipeline, so do not modify this page directly; modify the template instead and then run `bin/build.sh` to rebuild the documentation.
-
-From one of our Waterloo servers (e.g., `orca`), the following command will perform the complete regression, end to end:
-
-```
-python src/main/python/run_regression.py --index --verify --search \
- --regression ${test_name}
-```
-
-## Corpus
-
-We make available a version of the BEIR-v1.0.0 fever corpus that has already been processed with SPLADE-distil CoCodenser Medium, i.e., gone through document expansion and term reweighting.
-Thus, no neural inference is involved.
-For details on how to train SPLADE-distil CoCodenser Medium and perform inference, please see the [guide provided by Naver Labs Europe](https://github.com/naver/splade/tree/main/anserini_evaluation).
-
-Download the corpus and unpack into `collections/`:
-
-```
-wget https://rgw.cs.uwaterloo.ca/JIMMYLIN-bucket0/data/beir-v1.0.0-splade_distil_cocodenser_medium-fever.tar -P collections/
-tar xvf collections/beir-v1.0.0-splade_distil_cocodenser_medium-fever.tar -C collections/
-```
-
-To confirm, the tarball is 3.2 GB and has MD5 checksum `d5bd33563877667a64d37acbf16f5c5d`.
-
-With the corpus downloaded, the following command will perform the complete regression, end to end, on any machine:
-
-```
-python src/main/python/run_regression.py --index --verify --search \
- --regression ${test_name} \
- --corpus-path collections/beir-v1.0.0-splade_distil_cocodenser_medium-fever
-```
-
-Alternatively, you can simply copy/paste from the commands below and obtain the same results.
-
-## Indexing
-
-Sample indexing command:
-
-```
-${index_cmds}
-```
-
-The path `/path/to/${corpus}/` should point to the corpus downloaded above.
-
-The important indexing options to note here are `-impact -pretokenized`: the first tells Anserini not to encode BM25 doclengths into Lucene's norms (which is the default) and the second option says not to apply any additional tokenization on the pre-encoded tokens.
-Upon completion, we should have an index with 5,416,568 documents.
-
-For additional details, see the explanation of [common indexing options](${root_path}/docs/common-indexing-options.md).
-
-## Retrieval
-
-Topics and qrels are stored [here](https://github.com/castorini/anserini-tools/tree/master/topics-and-qrels), which is linked to the Anserini repo as a submodule.
-
-After indexing has completed, you should be able to perform retrieval as follows:
-
-```
-${ranking_cmds}
-```
-
-Evaluation can be performed using `trec_eval`:
-
-```
-${eval_cmds}
-```
-
-## Effectiveness
-
-With the above commands, you should be able to reproduce the following results:
-
-${effectiveness}
-
-
-## Reproduction Log[*](${root_path}/docs/reproducibility.md)
-
-To add to this reproduction log, modify [this template](${template}) and run `bin/build.sh` to rebuild the documentation.
diff --git a/src/main/resources/docgen/templates/beir-v1.0.0-fiqa-splade-distil-cocodenser-medium.template b/src/main/resources/docgen/templates/beir-v1.0.0-fiqa-splade-distil-cocodenser-medium.template
deleted file mode 100644
index 12f2f1fbba..0000000000
--- a/src/main/resources/docgen/templates/beir-v1.0.0-fiqa-splade-distil-cocodenser-medium.template
+++ /dev/null
@@ -1,84 +0,0 @@
-# Anserini Regressions: BEIR (v1.0.0) — FiQA-2018
-
-**Model**: SPLADE-distil CoCodenser Medium
-
-This page describes regression experiments, integrated into Anserini's regression testing framework, using SPLADE-distil CoCodenser Medium on [BEIR (v1.0.0) — FiQA-2018](http://beir.ai/).
-SPLADE-distil CoCodenser Medium is an intermediate model version between [SPLADEv2](https://arxiv.org/abs/2109.10086) and [SPLADE++](https://arxiv.org/abs/2205.04733), where the model used distillation (as in SPLADEv2), but started with the CoCondenser pre-trained model.
-See the [official SPLADE repo](https://github.com/naver/splade) for more details; the model itself can be downloaded [here](http://download-de.europe.naverlabs.com/Splade_Release_Jan22/splade_distil_CoCodenser_medium.tar.gz).
-
-The exact configurations for these regressions are stored in [this YAML file](${yaml}).
-Note that this page is automatically generated from [this template](${template}) as part of Anserini's regression pipeline, so do not modify this page directly; modify the template instead and then run `bin/build.sh` to rebuild the documentation.
-
-From one of our Waterloo servers (e.g., `orca`), the following command will perform the complete regression, end to end:
-
-```
-python src/main/python/run_regression.py --index --verify --search \
- --regression ${test_name}
-```
-
-## Corpus
-
-We make available a version of the BEIR-v1.0.0 fiqa corpus that has already been processed with SPLADE-distil CoCodenser Medium, i.e., gone through document expansion and term reweighting.
-Thus, no neural inference is involved.
-For details on how to train SPLADE-distil CoCodenser Medium and perform inference, please see the [guide provided by Naver Labs Europe](https://github.com/naver/splade/tree/main/anserini_evaluation).
-
-Download the corpus and unpack into `collections/`:
-
-```
-wget https://rgw.cs.uwaterloo.ca/JIMMYLIN-bucket0/data/beir-v1.0.0-splade_distil_cocodenser_medium-fiqa.tar -P collections/
-tar xvf collections/beir-v1.0.0-splade_distil_cocodenser_medium-fiqa.tar -C collections/
-```
-
-To confirm, the tarball is 48 MB and has MD5 checksum `781f7683b6e73971afd01df1650756bf`.
-
-With the corpus downloaded, the following command will perform the complete regression, end to end, on any machine:
-
-```
-python src/main/python/run_regression.py --index --verify --search \
- --regression ${test_name} \
- --corpus-path collections/beir-v1.0.0-splade_distil_cocodenser_medium-fiqa
-```
-
-Alternatively, you can simply copy/paste from the commands below and obtain the same results.
-
-## Indexing
-
-Sample indexing command:
-
-```
-${index_cmds}
-```
-
-The path `/path/to/${corpus}/` should point to the corpus downloaded above.
-
-The important indexing options to note here are `-impact -pretokenized`: the first tells Anserini not to encode BM25 document lengths into Lucene's norms (the default behavior), and the second tells it not to apply any additional tokenization to the pre-encoded tokens.
-Upon completion, we should have an index with 57,638 documents.
-
-For additional details, see the explanation of [common indexing options](${root_path}/docs/common-indexing-options.md).
-
-## Retrieval
-
-Topics and qrels are stored [here](https://github.com/castorini/anserini-tools/tree/master/topics-and-qrels), which is linked to the Anserini repo as a submodule.
-
-After indexing has completed, you should be able to perform retrieval as follows:
-
-```
-${ranking_cmds}
-```
-
-Evaluation can be performed using `trec_eval`:
-
-```
-${eval_cmds}
-```
-
-## Effectiveness
-
-With the above commands, you should be able to reproduce the following results:
-
-${effectiveness}
-
-
-## Reproduction Log[*](${root_path}/docs/reproducibility.md)
-
-To add to this reproduction log, modify [this template](${template}) and run `bin/build.sh` to rebuild the documentation.
diff --git a/src/main/resources/docgen/templates/beir-v1.0.0-hotpotqa-splade-distil-cocodenser-medium.template b/src/main/resources/docgen/templates/beir-v1.0.0-hotpotqa-splade-distil-cocodenser-medium.template
deleted file mode 100644
index ab3f819af9..0000000000
--- a/src/main/resources/docgen/templates/beir-v1.0.0-hotpotqa-splade-distil-cocodenser-medium.template
+++ /dev/null
@@ -1,84 +0,0 @@
-# Anserini Regressions: BEIR (v1.0.0) — HotpotQA
-
-**Model**: SPLADE-distil CoCodenser Medium
-
-This page describes regression experiments, integrated into Anserini's regression testing framework, using SPLADE-distil CoCodenser Medium on [BEIR (v1.0.0) — HotpotQA](http://beir.ai/).
-SPLADE-distil CoCodenser Medium is an intermediate model between [SPLADEv2](https://arxiv.org/abs/2109.10086) and [SPLADE++](https://arxiv.org/abs/2205.04733): it was trained with distillation (as in SPLADEv2) but initialized from the CoCondenser pre-trained model.
-See the [official SPLADE repo](https://github.com/naver/splade) for more details; the model itself can be downloaded [here](http://download-de.europe.naverlabs.com/Splade_Release_Jan22/splade_distil_CoCodenser_medium.tar.gz).
-
-The exact configurations for these regressions are stored in [this YAML file](${yaml}).
-Note that this page is automatically generated from [this template](${template}) as part of Anserini's regression pipeline, so do not modify this page directly; modify the template instead and then run `bin/build.sh` to rebuild the documentation.
-
-From one of our Waterloo servers (e.g., `orca`), the following command will perform the complete regression, end to end:
-
-```
-python src/main/python/run_regression.py --index --verify --search \
- --regression ${test_name}
-```
-
-## Corpus
-
-We make available a version of the BEIR-v1.0.0 hotpotqa corpus that has already been processed with SPLADE-distil CoCodenser Medium, i.e., gone through document expansion and term reweighting.
-Thus, no neural inference is involved.
-For details on how to train SPLADE-distil CoCodenser Medium and perform inference, please see the [guide provided by Naver Labs Europe](https://github.com/naver/splade/tree/main/anserini_evaluation).
-
-Download the corpus and unpack into `collections/`:
-
-```
-wget https://rgw.cs.uwaterloo.ca/JIMMYLIN-bucket0/data/beir-v1.0.0-splade_distil_cocodenser_medium-hotpotqa.tar -P collections/
-tar xvf collections/beir-v1.0.0-splade_distil_cocodenser_medium-hotpotqa.tar -C collections/
-```
-
-To confirm, the tarball is 2.6 GB and has MD5 checksum `b607eb3e1cb0ba105a75ae8db4356e90`.
-
-With the corpus downloaded, the following command will perform the complete regression, end to end, on any machine:
-
-```
-python src/main/python/run_regression.py --index --verify --search \
- --regression ${test_name} \
- --corpus-path collections/beir-v1.0.0-splade_distil_cocodenser_medium-hotpotqa
-```
-
-Alternatively, you can simply copy/paste from the commands below and obtain the same results.
-
-## Indexing
-
-Sample indexing command:
-
-```
-${index_cmds}
-```
-
-The path `/path/to/${corpus}/` should point to the corpus downloaded above.
-
-The important indexing options to note here are `-impact -pretokenized`: the first tells Anserini not to encode BM25 document lengths into Lucene's norms (the default behavior), and the second tells it not to apply any additional tokenization to the pre-encoded tokens.
-Upon completion, we should have an index with 5,233,329 documents.
-
-For additional details, see the explanation of [common indexing options](${root_path}/docs/common-indexing-options.md).
-
-## Retrieval
-
-Topics and qrels are stored [here](https://github.com/castorini/anserini-tools/tree/master/topics-and-qrels), which is linked to the Anserini repo as a submodule.
-
-After indexing has completed, you should be able to perform retrieval as follows:
-
-```
-${ranking_cmds}
-```
-
-Evaluation can be performed using `trec_eval`:
-
-```
-${eval_cmds}
-```
-
-## Effectiveness
-
-With the above commands, you should be able to reproduce the following results:
-
-${effectiveness}
-
-
-## Reproduction Log[*](${root_path}/docs/reproducibility.md)
-
-To add to this reproduction log, modify [this template](${template}) and run `bin/build.sh` to rebuild the documentation.
diff --git a/src/main/resources/docgen/templates/beir-v1.0.0-nfcorpus-splade-distil-cocodenser-medium.template b/src/main/resources/docgen/templates/beir-v1.0.0-nfcorpus-splade-distil-cocodenser-medium.template
deleted file mode 100644
index a51cc637fd..0000000000
--- a/src/main/resources/docgen/templates/beir-v1.0.0-nfcorpus-splade-distil-cocodenser-medium.template
+++ /dev/null
@@ -1,84 +0,0 @@
-# Anserini Regressions: BEIR (v1.0.0) — NFCorpus
-
-**Model**: SPLADE-distil CoCodenser Medium
-
-This page describes regression experiments, integrated into Anserini's regression testing framework, using SPLADE-distil CoCodenser Medium on [BEIR (v1.0.0) — NFCorpus](http://beir.ai/).
-SPLADE-distil CoCodenser Medium is an intermediate model between [SPLADEv2](https://arxiv.org/abs/2109.10086) and [SPLADE++](https://arxiv.org/abs/2205.04733): it was trained with distillation (as in SPLADEv2) but initialized from the CoCondenser pre-trained model.
-See the [official SPLADE repo](https://github.com/naver/splade) for more details; the model itself can be downloaded [here](http://download-de.europe.naverlabs.com/Splade_Release_Jan22/splade_distil_CoCodenser_medium.tar.gz).
-
-The exact configurations for these regressions are stored in [this YAML file](${yaml}).
-Note that this page is automatically generated from [this template](${template}) as part of Anserini's regression pipeline, so do not modify this page directly; modify the template instead and then run `bin/build.sh` to rebuild the documentation.
-
-From one of our Waterloo servers (e.g., `orca`), the following command will perform the complete regression, end to end:
-
-```
-python src/main/python/run_regression.py --index --verify --search \
- --regression ${test_name}
-```
-
-## Corpus
-
-We make available a version of the BEIR-v1.0.0 nfcorpus corpus that has already been processed with SPLADE-distil CoCodenser Medium, i.e., gone through document expansion and term reweighting.
-Thus, no neural inference is involved.
-For details on how to train SPLADE-distil CoCodenser Medium and perform inference, please see the [guide provided by Naver Labs Europe](https://github.com/naver/splade/tree/main/anserini_evaluation).
-
-Download the corpus and unpack into `collections/`:
-
-```
-wget https://rgw.cs.uwaterloo.ca/JIMMYLIN-bucket0/data/beir-v1.0.0-splade_distil_cocodenser_medium-nfcorpus.tar -P collections/
-tar xvf collections/beir-v1.0.0-splade_distil_cocodenser_medium-nfcorpus.tar -C collections/
-```
-
-To confirm, the tarball is 3.2 MB and has MD5 checksum `81215d1fd44c378b44c5b1f4ab555098`.
-
-With the corpus downloaded, the following command will perform the complete regression, end to end, on any machine:
-
-```
-python src/main/python/run_regression.py --index --verify --search \
- --regression ${test_name} \
- --corpus-path collections/beir-v1.0.0-splade_distil_cocodenser_medium-nfcorpus
-```
-
-Alternatively, you can simply copy/paste from the commands below and obtain the same results.
-
-## Indexing
-
-Sample indexing command:
-
-```
-${index_cmds}
-```
-
-The path `/path/to/${corpus}/` should point to the corpus downloaded above.
-
-The important indexing options to note here are `-impact -pretokenized`: the first tells Anserini not to encode BM25 document lengths into Lucene's norms (the default behavior), and the second tells it not to apply any additional tokenization to the pre-encoded tokens.
-Upon completion, we should have an index with 3,633 documents.
-
-For additional details, see the explanation of [common indexing options](${root_path}/docs/common-indexing-options.md).
-
-## Retrieval
-
-Topics and qrels are stored [here](https://github.com/castorini/anserini-tools/tree/master/topics-and-qrels), which is linked to the Anserini repo as a submodule.
-
-After indexing has completed, you should be able to perform retrieval as follows:
-
-```
-${ranking_cmds}
-```
-
-Evaluation can be performed using `trec_eval`:
-
-```
-${eval_cmds}
-```
-
-## Effectiveness
-
-With the above commands, you should be able to reproduce the following results:
-
-${effectiveness}
-
-
-## Reproduction Log[*](${root_path}/docs/reproducibility.md)
-
-To add to this reproduction log, modify [this template](${template}) and run `bin/build.sh` to rebuild the documentation.
diff --git a/src/main/resources/docgen/templates/beir-v1.0.0-nq-splade-distil-cocodenser-medium.template b/src/main/resources/docgen/templates/beir-v1.0.0-nq-splade-distil-cocodenser-medium.template
deleted file mode 100644
index cf6fd36ba3..0000000000
--- a/src/main/resources/docgen/templates/beir-v1.0.0-nq-splade-distil-cocodenser-medium.template
+++ /dev/null
@@ -1,84 +0,0 @@
-# Anserini Regressions: BEIR (v1.0.0) — NQ
-
-**Model**: SPLADE-distil CoCodenser Medium
-
-This page describes regression experiments, integrated into Anserini's regression testing framework, using SPLADE-distil CoCodenser Medium on [BEIR (v1.0.0) — NQ](http://beir.ai/).
-SPLADE-distil CoCodenser Medium is an intermediate model between [SPLADEv2](https://arxiv.org/abs/2109.10086) and [SPLADE++](https://arxiv.org/abs/2205.04733): it was trained with distillation (as in SPLADEv2) but initialized from the CoCondenser pre-trained model.
-See the [official SPLADE repo](https://github.com/naver/splade) for more details; the model itself can be downloaded [here](http://download-de.europe.naverlabs.com/Splade_Release_Jan22/splade_distil_CoCodenser_medium.tar.gz).
-
-The exact configurations for these regressions are stored in [this YAML file](${yaml}).
-Note that this page is automatically generated from [this template](${template}) as part of Anserini's regression pipeline, so do not modify this page directly; modify the template instead and then run `bin/build.sh` to rebuild the documentation.
-
-From one of our Waterloo servers (e.g., `orca`), the following command will perform the complete regression, end to end:
-
-```
-python src/main/python/run_regression.py --index --verify --search \
- --regression ${test_name}
-```
-
-## Corpus
-
-We make available a version of the BEIR-v1.0.0 nq corpus that has already been processed with SPLADE-distil CoCodenser Medium, i.e., gone through document expansion and term reweighting.
-Thus, no neural inference is involved.
-For details on how to train SPLADE-distil CoCodenser Medium and perform inference, please see the [guide provided by Naver Labs Europe](https://github.com/naver/splade/tree/main/anserini_evaluation).
-
-Download the corpus and unpack into `collections/`:
-
-```
-wget https://rgw.cs.uwaterloo.ca/JIMMYLIN-bucket0/data/beir-v1.0.0-splade_distil_cocodenser_medium-nq.tar -P collections/
-tar xvf collections/beir-v1.0.0-splade_distil_cocodenser_medium-nq.tar -C collections/
-```
-
-To confirm, the tarball is 1.9 GB and has MD5 checksum `ec9f0d3245c7200209c4dd9ec19055a9`.
-
-With the corpus downloaded, the following command will perform the complete regression, end to end, on any machine:
-
-```
-python src/main/python/run_regression.py --index --verify --search \
- --regression ${test_name} \
- --corpus-path collections/beir-v1.0.0-splade_distil_cocodenser_medium-nq
-```
-
-Alternatively, you can simply copy/paste from the commands below and obtain the same results.
-
-## Indexing
-
-Sample indexing command:
-
-```
-${index_cmds}
-```
-
-The path `/path/to/${corpus}/` should point to the corpus downloaded above.
-
-The important indexing options to note here are `-impact -pretokenized`: the first tells Anserini not to encode BM25 document lengths into Lucene's norms (the default behavior), and the second tells it not to apply any additional tokenization to the pre-encoded tokens.
-Upon completion, we should have an index with 2,681,468 documents.
-
-For additional details, see the explanation of [common indexing options](${root_path}/docs/common-indexing-options.md).
-
-## Retrieval
-
-Topics and qrels are stored [here](https://github.com/castorini/anserini-tools/tree/master/topics-and-qrels), which is linked to the Anserini repo as a submodule.
-
-After indexing has completed, you should be able to perform retrieval as follows:
-
-```
-${ranking_cmds}
-```
-
-Evaluation can be performed using `trec_eval`:
-
-```
-${eval_cmds}
-```
-
-## Effectiveness
-
-With the above commands, you should be able to reproduce the following results:
-
-${effectiveness}
-
-
-## Reproduction Log[*](${root_path}/docs/reproducibility.md)
-
-To add to this reproduction log, modify [this template](${template}) and run `bin/build.sh` to rebuild the documentation.
diff --git a/src/main/resources/docgen/templates/beir-v1.0.0-quora-splade-distil-cocodenser-medium.template b/src/main/resources/docgen/templates/beir-v1.0.0-quora-splade-distil-cocodenser-medium.template
deleted file mode 100644
index 382cab8430..0000000000
--- a/src/main/resources/docgen/templates/beir-v1.0.0-quora-splade-distil-cocodenser-medium.template
+++ /dev/null
@@ -1,84 +0,0 @@
-# Anserini Regressions: BEIR (v1.0.0) — Quora
-
-**Model**: SPLADE-distil CoCodenser Medium
-
-This page describes regression experiments, integrated into Anserini's regression testing framework, using SPLADE-distil CoCodenser Medium on [BEIR (v1.0.0) — Quora](http://beir.ai/).
-SPLADE-distil CoCodenser Medium is an intermediate model between [SPLADEv2](https://arxiv.org/abs/2109.10086) and [SPLADE++](https://arxiv.org/abs/2205.04733): it was trained with distillation (as in SPLADEv2) but initialized from the CoCondenser pre-trained model.
-See the [official SPLADE repo](https://github.com/naver/splade) for more details; the model itself can be downloaded [here](http://download-de.europe.naverlabs.com/Splade_Release_Jan22/splade_distil_CoCodenser_medium.tar.gz).
-
-The exact configurations for these regressions are stored in [this YAML file](${yaml}).
-Note that this page is automatically generated from [this template](${template}) as part of Anserini's regression pipeline, so do not modify this page directly; modify the template instead and then run `bin/build.sh` to rebuild the documentation.
-
-From one of our Waterloo servers (e.g., `orca`), the following command will perform the complete regression, end to end:
-
-```
-python src/main/python/run_regression.py --index --verify --search \
- --regression ${test_name}
-```
-
-## Corpus
-
-We make available a version of the BEIR-v1.0.0 quora corpus that has already been processed with SPLADE-distil CoCodenser Medium, i.e., gone through document expansion and term reweighting.
-Thus, no neural inference is involved.
-For details on how to train SPLADE-distil CoCodenser Medium and perform inference, please see the [guide provided by Naver Labs Europe](https://github.com/naver/splade/tree/main/anserini_evaluation).
-
-Download the corpus and unpack into `collections/`:
-
-```
-wget https://rgw.cs.uwaterloo.ca/JIMMYLIN-bucket0/data/beir-v1.0.0-splade_distil_cocodenser_medium-quora.tar -P collections/
-tar xvf collections/beir-v1.0.0-splade_distil_cocodenser_medium-quora.tar -C collections/
-```
-
-To confirm, the tarball is 112 MB and has MD5 checksum `fa86bbfa8195c6f45a0b0435ee268b0e`.
-
-With the corpus downloaded, the following command will perform the complete regression, end to end, on any machine:
-
-```
-python src/main/python/run_regression.py --index --verify --search \
- --regression ${test_name} \
- --corpus-path collections/beir-v1.0.0-splade_distil_cocodenser_medium-quora
-```
-
-Alternatively, you can simply copy/paste from the commands below and obtain the same results.
-
-## Indexing
-
-Sample indexing command:
-
-```
-${index_cmds}
-```
-
-The path `/path/to/${corpus}/` should point to the corpus downloaded above.
-
-The important indexing options to note here are `-impact -pretokenized`: the first tells Anserini not to encode BM25 document lengths into Lucene's norms (the default behavior), and the second tells it not to apply any additional tokenization to the pre-encoded tokens.
-Upon completion, we should have an index with 522,931 documents.
-
-For additional details, see the explanation of [common indexing options](${root_path}/docs/common-indexing-options.md).
-
-## Retrieval
-
-Topics and qrels are stored [here](https://github.com/castorini/anserini-tools/tree/master/topics-and-qrels), which is linked to the Anserini repo as a submodule.
-
-After indexing has completed, you should be able to perform retrieval as follows:
-
-```
-${ranking_cmds}
-```
-
-Evaluation can be performed using `trec_eval`:
-
-```
-${eval_cmds}
-```
-
-## Effectiveness
-
-With the above commands, you should be able to reproduce the following results:
-
-${effectiveness}
-
-
-## Reproduction Log[*](${root_path}/docs/reproducibility.md)
-
-To add to this reproduction log, modify [this template](${template}) and run `bin/build.sh` to rebuild the documentation.
diff --git a/src/main/resources/docgen/templates/beir-v1.0.0-robust04-splade-distil-cocodenser-medium.template b/src/main/resources/docgen/templates/beir-v1.0.0-robust04-splade-distil-cocodenser-medium.template
deleted file mode 100644
index 2f9e45661b..0000000000
--- a/src/main/resources/docgen/templates/beir-v1.0.0-robust04-splade-distil-cocodenser-medium.template
+++ /dev/null
@@ -1,84 +0,0 @@
-# Anserini Regressions: BEIR (v1.0.0) — Robust04
-
-**Model**: SPLADE-distil CoCodenser Medium
-
-This page describes regression experiments, integrated into Anserini's regression testing framework, using SPLADE-distil CoCodenser Medium on [BEIR (v1.0.0) — Robust04](http://beir.ai/).
-SPLADE-distil CoCodenser Medium is an intermediate model between [SPLADEv2](https://arxiv.org/abs/2109.10086) and [SPLADE++](https://arxiv.org/abs/2205.04733): it was trained with distillation (as in SPLADEv2) but initialized from the CoCondenser pre-trained model.
-See the [official SPLADE repo](https://github.com/naver/splade) for more details; the model itself can be downloaded [here](http://download-de.europe.naverlabs.com/Splade_Release_Jan22/splade_distil_CoCodenser_medium.tar.gz).
-
-The exact configurations for these regressions are stored in [this YAML file](${yaml}).
-Note that this page is automatically generated from [this template](${template}) as part of Anserini's regression pipeline, so do not modify this page directly; modify the template instead and then run `bin/build.sh` to rebuild the documentation.
-
-From one of our Waterloo servers (e.g., `orca`), the following command will perform the complete regression, end to end:
-
-```
-python src/main/python/run_regression.py --index --verify --search \
- --regression ${test_name}
-```
-
-## Corpus
-
-We make available a version of the BEIR-v1.0.0 robust04 corpus that has already been processed with SPLADE-distil CoCodenser Medium, i.e., gone through document expansion and term reweighting.
-Thus, no neural inference is involved.
-For details on how to train SPLADE-distil CoCodenser Medium and perform inference, please see the [guide provided by Naver Labs Europe](https://github.com/naver/splade/tree/main/anserini_evaluation).
-
-Download the corpus and unpack into `collections/`:
-
-```
-wget https://rgw.cs.uwaterloo.ca/JIMMYLIN-bucket0/data/beir-v1.0.0-splade_distil_cocodenser_medium-robust04.tar -P collections/
-tar xvf collections/beir-v1.0.0-splade_distil_cocodenser_medium-robust04.tar -C collections/
-```
-
-To confirm, the tarball is 8.9 MB and has MD5 checksum `9c5a181e03cbc7f13abd0e0e4bf9158e`.
-
-With the corpus downloaded, the following command will perform the complete regression, end to end, on any machine:
-
-```
-python src/main/python/run_regression.py --index --verify --search \
- --regression ${test_name} \
- --corpus-path collections/beir-v1.0.0-splade_distil_cocodenser_medium-robust04
-```
-
-Alternatively, you can simply copy/paste from the commands below and obtain the same results.
-
-## Indexing
-
-Sample indexing command:
-
-```
-${index_cmds}
-```
-
-The path `/path/to/${corpus}/` should point to the corpus downloaded above.
-
-The important indexing options to note here are `-impact -pretokenized`: the first tells Anserini not to encode BM25 document lengths into Lucene's norms (the default behavior), and the second tells it not to apply any additional tokenization to the pre-encoded tokens.
-Upon completion, we should have an index with 8,674 documents.
-
-For additional details, see the explanation of [common indexing options](${root_path}/docs/common-indexing-options.md).
-
-## Retrieval
-
-Topics and qrels are stored [here](https://github.com/castorini/anserini-tools/tree/master/topics-and-qrels), which is linked to the Anserini repo as a submodule.
-
-After indexing has completed, you should be able to perform retrieval as follows:
-
-```
-${ranking_cmds}
-```
-
-Evaluation can be performed using `trec_eval`:
-
-```
-${eval_cmds}
-```
-
-## Effectiveness
-
-With the above commands, you should be able to reproduce the following results:
-
-${effectiveness}
-
-
-## Reproduction Log[*](${root_path}/docs/reproducibility.md)
-
-To add to this reproduction log, modify [this template](${template}) and run `bin/build.sh` to rebuild the documentation.
diff --git a/src/main/resources/docgen/templates/beir-v1.0.0-scidocs-splade-distil-cocodenser-medium.template b/src/main/resources/docgen/templates/beir-v1.0.0-scidocs-splade-distil-cocodenser-medium.template
deleted file mode 100644
index 034e3dc093..0000000000
--- a/src/main/resources/docgen/templates/beir-v1.0.0-scidocs-splade-distil-cocodenser-medium.template
+++ /dev/null
@@ -1,84 +0,0 @@
-# Anserini Regressions: BEIR (v1.0.0) — SCIDOCS
-
-**Model**: SPLADE-distil CoCodenser Medium
-
-This page describes regression experiments, integrated into Anserini's regression testing framework, using SPLADE-distil CoCodenser Medium on [BEIR (v1.0.0) — SCIDOCS](http://beir.ai/).
-SPLADE-distil CoCodenser Medium is an intermediate model between [SPLADEv2](https://arxiv.org/abs/2109.10086) and [SPLADE++](https://arxiv.org/abs/2205.04733): it was trained with distillation (as in SPLADEv2) but initialized from the CoCondenser pre-trained model.
-See the [official SPLADE repo](https://github.com/naver/splade) for more details; the model itself can be downloaded [here](http://download-de.europe.naverlabs.com/Splade_Release_Jan22/splade_distil_CoCodenser_medium.tar.gz).
-
-The exact configurations for these regressions are stored in [this YAML file](${yaml}).
-Note that this page is automatically generated from [this template](${template}) as part of Anserini's regression pipeline, so do not modify this page directly; modify the template instead and then run `bin/build.sh` to rebuild the documentation.
-
-From one of our Waterloo servers (e.g., `orca`), the following command will perform the complete regression, end to end:
-
-```
-python src/main/python/run_regression.py --index --verify --search \
- --regression ${test_name}
-```
-
-## Corpus
-
-We make available a version of the BEIR-v1.0.0 scidocs corpus that has already been processed with SPLADE-distil CoCodenser Medium, i.e., gone through document expansion and term reweighting.
-Thus, no neural inference is involved.
-For details on how to train SPLADE-distil CoCodenser Medium and perform inference, please see the [guide provided by Naver Labs Europe](https://github.com/naver/splade/tree/main/anserini_evaluation).
-
-Download the corpus and unpack into `collections/`:
-
-```
-wget https://rgw.cs.uwaterloo.ca/JIMMYLIN-bucket0/data/beir-v1.0.0-splade_distil_cocodenser_medium-scidocs.tar -P collections/
-tar xvf collections/beir-v1.0.0-splade_distil_cocodenser_medium-scidocs.tar -C collections/
-```
-
-To confirm, the tarball is 24 MB and has MD5 checksum `535c9dcb9698bec4345c07d96e5b1e75`.
-
-With the corpus downloaded, the following command will perform the complete regression, end to end, on any machine:
-
-```
-python src/main/python/run_regression.py --index --verify --search \
- --regression ${test_name} \
- --corpus-path collections/beir-v1.0.0-splade_distil_cocodenser_medium-scidocs
-```
-
-Alternatively, you can simply copy/paste from the commands below and obtain the same results.
-
-## Indexing
-
-Sample indexing command:
-
-```
-${index_cmds}
-```
-
-The path `/path/to/${corpus}/` should point to the corpus downloaded above.
-
-The important indexing options to note here are `-impact -pretokenized`: the first tells Anserini not to encode BM25 document lengths into Lucene's norms (the default behavior), and the second tells it not to apply any additional tokenization to the pre-encoded tokens.
-Upon completion, we should have an index with 25,657 documents.
-
-For additional details, see the explanation of [common indexing options](${root_path}/docs/common-indexing-options.md).
-
-## Retrieval
-
-Topics and qrels are stored [here](https://github.com/castorini/anserini-tools/tree/master/topics-and-qrels), which is linked to the Anserini repo as a submodule.
-
-After indexing has completed, you should be able to perform retrieval as follows:
-
-```
-${ranking_cmds}
-```
-
-Evaluation can be performed using `trec_eval`:
-
-```
-${eval_cmds}
-```
-
-## Effectiveness
-
-With the above commands, you should be able to reproduce the following results:
-
-${effectiveness}
-
-
-## Reproduction Log[*](${root_path}/docs/reproducibility.md)
-
-To add to this reproduction log, modify [this template](${template}) and run `bin/build.sh` to rebuild the documentation.
diff --git a/src/main/resources/docgen/templates/beir-v1.0.0-scifact-splade-distil-cocodenser-medium.template b/src/main/resources/docgen/templates/beir-v1.0.0-scifact-splade-distil-cocodenser-medium.template
deleted file mode 100644
index 9da17165f5..0000000000
--- a/src/main/resources/docgen/templates/beir-v1.0.0-scifact-splade-distil-cocodenser-medium.template
+++ /dev/null
@@ -1,84 +0,0 @@
-# Anserini Regressions: BEIR (v1.0.0) — SciFact
-
-**Model**: SPLADE-distil CoCodenser Medium
-
-This page describes regression experiments, integrated into Anserini's regression testing framework, using SPLADE-distil CoCodenser Medium on [BEIR (v1.0.0) — SciFact](http://beir.ai/).
-SPLADE-distil CoCodenser Medium is an intermediate model between [SPLADEv2](https://arxiv.org/abs/2109.10086) and [SPLADE++](https://arxiv.org/abs/2205.04733): it was trained with distillation (as in SPLADEv2) but initialized from the CoCondenser pre-trained model.
-See the [official SPLADE repo](https://github.com/naver/splade) for more details; the model itself can be downloaded [here](http://download-de.europe.naverlabs.com/Splade_Release_Jan22/splade_distil_CoCodenser_medium.tar.gz).
-
-The exact configurations for these regressions are stored in [this YAML file](${yaml}).
-Note that this page is automatically generated from [this template](${template}) as part of Anserini's regression pipeline, so do not modify this page directly; modify the template instead and then run `bin/build.sh` to rebuild the documentation.
-
-From one of our Waterloo servers (e.g., `orca`), the following command will perform the complete regression, end to end:
-
-```
-python src/main/python/run_regression.py --index --verify --search \
- --regression ${test_name}
-```
-
-## Corpus
-
-We make available a version of the BEIR-v1.0.0 scifact corpus that has already been processed with SPLADE-distil CoCodenser Medium, i.e., gone through document expansion and term reweighting.
-Thus, no neural inference is involved.
-For details on how to train SPLADE-distil CoCodenser Medium and perform inference, please see the [guide provided by Naver Labs Europe](https://github.com/naver/splade/tree/main/anserini_evaluation).
-
-Download the corpus and unpack into `collections/`:
-
-```
-wget https://rgw.cs.uwaterloo.ca/JIMMYLIN-bucket0/data/beir-v1.0.0-splade_distil_cocodenser_medium-scifact.tar -P collections/
-tar xvf collections/beir-v1.0.0-splade_distil_cocodenser_medium-scifact.tar -C collections/
-```
-
-To confirm, the tarball is 4.8 MB and has MD5 checksum `8b82c0fcbf0d1287fb1c5044e8422902`.
-
-With the corpus downloaded, the following command will perform the complete regression, end to end, on any machine:
-
-```
-python src/main/python/run_regression.py --index --verify --search \
- --regression ${test_name} \
- --corpus-path collections/beir-v1.0.0-splade_distil_cocodenser_medium-scifact
-```
-
-Alternatively, you can simply copy/paste from the commands below and obtain the same results.
-
-## Indexing
-
-Sample indexing command:
-
-```
-${index_cmds}
-```
-
-The path `/path/to/${corpus}/` should point to the corpus downloaded above.
-
-The important indexing options to note here are `-impact -pretokenized`: the first tells Anserini not to encode BM25 document lengths into Lucene's norms (the default behavior), and the second tells it not to apply any additional tokenization to the pre-encoded tokens.
-Upon completion, we should have an index with 5,183 documents.
-
-For additional details, see the explanation of [common indexing options](${root_path}/docs/common-indexing-options.md).
-
-## Retrieval
-
-Topics and qrels are stored [here](https://github.com/castorini/anserini-tools/tree/master/topics-and-qrels), which is linked to the Anserini repo as a submodule.
-
-After indexing has completed, you should be able to perform retrieval as follows:
-
-```
-${ranking_cmds}
-```
-
-Evaluation can be performed using `trec_eval`:
-
-```
-${eval_cmds}
-```
-
-## Effectiveness
-
-With the above commands, you should be able to reproduce the following results:
-
-${effectiveness}
-
-
-## Reproduction Log[*](${root_path}/docs/reproducibility.md)
-
-To add to this reproduction log, modify [this template](${template}) and run `bin/build.sh` to rebuild the documentation.
diff --git a/src/main/resources/docgen/templates/beir-v1.0.0-signal1m-splade-distil-cocodenser-medium.template b/src/main/resources/docgen/templates/beir-v1.0.0-signal1m-splade-distil-cocodenser-medium.template
deleted file mode 100644
index 7e6b6e8fc0..0000000000
--- a/src/main/resources/docgen/templates/beir-v1.0.0-signal1m-splade-distil-cocodenser-medium.template
+++ /dev/null
@@ -1,84 +0,0 @@
-# Anserini Regressions: BEIR (v1.0.0) — Signal-1M
-
-**Model**: SPLADE-distil CoCodenser Medium
-
-This page describes regression experiments, integrated into Anserini's regression testing framework, using SPLADE-distil CoCodenser Medium on [BEIR (v1.0.0) — Signal-1M](http://beir.ai/).
-SPLADE-distil CoCodenser Medium is an intermediate model version between [SPLADEv2](https://arxiv.org/abs/2109.10086) and [SPLADE++](https://arxiv.org/abs/2205.04733), where the model used distillation (as in SPLADEv2), but started with the CoCondenser pre-trained model.
-See the [official SPLADE repo](https://github.com/naver/splade) for more details; the model itself can be downloaded [here](http://download-de.europe.naverlabs.com/Splade_Release_Jan22/splade_distil_CoCodenser_medium.tar.gz).
-
-The exact configurations for these regressions are stored in [this YAML file](${yaml}).
-Note that this page is automatically generated from [this template](${template}) as part of Anserini's regression pipeline, so do not modify this page directly; modify the template instead and then run `bin/build.sh` to rebuild the documentation.
-
-From one of our Waterloo servers (e.g., `orca`), the following command will perform the complete regression, end to end:
-
-```
-python src/main/python/run_regression.py --index --verify --search \
- --regression ${test_name}
-```
-
-## Corpus
-
-We make available a version of the BEIR-v1.0.0 signal1m corpus that has already been processed with SPLADE-distil CoCodenser Medium, i.e., gone through document expansion and term reweighting.
-Thus, no neural inference is involved.
-For details on how to train SPLADE-distil CoCodenser Medium and perform inference, please see [guide provided by Naver Labs Europe](https://github.com/naver/splade/tree/main/anserini_evaluation).
-
-Download the corpus and unpack into `collections/`:
-
-```
-wget https://rgw.cs.uwaterloo.ca/JIMMYLIN-bucket0/data/beir-v1.0.0-splade_distil_cocodenser_medium-signal1m.tar -P collections/
-tar xvf collections/beir-v1.0.0-splade_distil_cocodenser_medium-signal1m.tar -C collections/
-```
-
-To confirm, the tarball is 8.9 MB and has MD5 checksum `9c5a181e03cbc7f13abd0e0e4bf9158e`.
-
-With the corpus downloaded, the following command will perform the complete regression, end to end, on any machine:
-
-```
-python src/main/python/run_regression.py --index --verify --search \
- --regression ${test_name} \
- --corpus-path collections/beir-v1.0.0-splade_distil_cocodenser_medium-signal1m
-```
-
-Alternatively, you can simply copy/paste from the commands below and obtain the same results.
-
-## Indexing
-
-Sample indexing command:
-
-```
-${index_cmds}
-```
-
-The path `/path/to/${corpus}/` should point to the corpus downloaded above.
-
-The important indexing options to note here are `-impact -pretokenized`: the first tells Anserini not to encode BM25 doclengths into Lucene's norms (which is the default) and the second option says not to apply any additional tokenization on the pre-encoded tokens.
-Upon completion, we should have an index with 8,674 documents.
-
-For additional details, see the explanation of [common indexing options](${root_path}/docs/common-indexing-options.md).
-
-## Retrieval
-
-Topics and qrels are stored [here](https://github.com/castorini/anserini-tools/tree/master/topics-and-qrels), which is linked to the Anserini repo as a submodule.
-
-After indexing has completed, you should be able to perform retrieval as follows:
-
-```
-${ranking_cmds}
-```
-
-Evaluation can be performed using `trec_eval`:
-
-```
-${eval_cmds}
-```
-
-## Effectiveness
-
-With the above commands, you should be able to reproduce the following results:
-
-${effectiveness}
-
-
-## Reproduction Log[*](${root_path}/docs/reproducibility.md)
-
-To add to this reproduction log, modify [this template](${template}) and run `bin/build.sh` to rebuild the documentation.
diff --git a/src/main/resources/docgen/templates/beir-v1.0.0-trec-covid-splade-distil-cocodenser-medium.template b/src/main/resources/docgen/templates/beir-v1.0.0-trec-covid-splade-distil-cocodenser-medium.template
deleted file mode 100644
index bcfa9277d1..0000000000
--- a/src/main/resources/docgen/templates/beir-v1.0.0-trec-covid-splade-distil-cocodenser-medium.template
+++ /dev/null
@@ -1,84 +0,0 @@
-# Anserini Regressions: BEIR (v1.0.0) — TREC-COVID
-
-**Model**: SPLADE-distil CoCodenser Medium
-
-This page describes regression experiments, integrated into Anserini's regression testing framework, using SPLADE-distil CoCodenser Medium on [BEIR (v1.0.0) — TREC-COVID](http://beir.ai/).
-SPLADE-distil CoCodenser Medium is an intermediate model version between [SPLADEv2](https://arxiv.org/abs/2109.10086) and [SPLADE++](https://arxiv.org/abs/2205.04733), where the model used distillation (as in SPLADEv2), but started with the CoCondenser pre-trained model.
-See the [official SPLADE repo](https://github.com/naver/splade) for more details; the model itself can be downloaded [here](http://download-de.europe.naverlabs.com/Splade_Release_Jan22/splade_distil_CoCodenser_medium.tar.gz).
-
-The exact configurations for these regressions are stored in [this YAML file](${yaml}).
-Note that this page is automatically generated from [this template](${template}) as part of Anserini's regression pipeline, so do not modify this page directly; modify the template instead and then run `bin/build.sh` to rebuild the documentation.
-
-From one of our Waterloo servers (e.g., `orca`), the following command will perform the complete regression, end to end:
-
-```
-python src/main/python/run_regression.py --index --verify --search \
- --regression ${test_name}
-```
-
-## Corpus
-
-We make available a version of the BEIR-v1.0.0 trec-covid corpus that has already been processed with SPLADE-distil CoCodenser Medium, i.e., gone through document expansion and term reweighting.
-Thus, no neural inference is involved.
-For details on how to train SPLADE-distil CoCodenser Medium and perform inference, please see [guide provided by Naver Labs Europe](https://github.com/naver/splade/tree/main/anserini_evaluation).
-
-Download the corpus and unpack into `collections/`:
-
-```
-wget https://rgw.cs.uwaterloo.ca/JIMMYLIN-bucket0/data/beir-v1.0.0-splade_distil_cocodenser_medium-trec-covid.tar -P collections/
-tar xvf collections/beir-v1.0.0-splade_distil_cocodenser_medium-trec-covid.tar -C collections/
-```
-
-To confirm, the tarball is 133 MB and has MD5 checksum `84b9b090fa7ad7f08e2208a59837c216`.
-
-With the corpus downloaded, the following command will perform the complete regression, end to end, on any machine:
-
-```
-python src/main/python/run_regression.py --index --verify --search \
- --regression ${test_name} \
- --corpus-path collections/beir-v1.0.0-splade_distil_cocodenser_medium-trec-covid
-```
-
-Alternatively, you can simply copy/paste from the commands below and obtain the same results.
-
-## Indexing
-
-Sample indexing command:
-
-```
-${index_cmds}
-```
-
-The path `/path/to/${corpus}/` should point to the corpus downloaded above.
-
-The important indexing options to note here are `-impact -pretokenized`: the first tells Anserini not to encode BM25 doclengths into Lucene's norms (which is the default) and the second option says not to apply any additional tokenization on the pre-encoded tokens.
-Upon completion, we should have an index with 171,332 documents.
-
-For additional details, see the explanation of [common indexing options](${root_path}/docs/common-indexing-options.md).
-
-## Retrieval
-
-Topics and qrels are stored [here](https://github.com/castorini/anserini-tools/tree/master/topics-and-qrels), which is linked to the Anserini repo as a submodule.
-
-After indexing has completed, you should be able to perform retrieval as follows:
-
-```
-${ranking_cmds}
-```
-
-Evaluation can be performed using `trec_eval`:
-
-```
-${eval_cmds}
-```
-
-## Effectiveness
-
-With the above commands, you should be able to reproduce the following results:
-
-${effectiveness}
-
-
-## Reproduction Log[*](${root_path}/docs/reproducibility.md)
-
-To add to this reproduction log, modify [this template](${template}) and run `bin/build.sh` to rebuild the documentation.
diff --git a/src/main/resources/docgen/templates/beir-v1.0.0-trec-news-splade-distil-cocodenser-medium.template b/src/main/resources/docgen/templates/beir-v1.0.0-trec-news-splade-distil-cocodenser-medium.template
deleted file mode 100644
index 831537cc20..0000000000
--- a/src/main/resources/docgen/templates/beir-v1.0.0-trec-news-splade-distil-cocodenser-medium.template
+++ /dev/null
@@ -1,84 +0,0 @@
-# Anserini Regressions: BEIR (v1.0.0) — TREC-NEWS
-
-**Model**: SPLADE-distil CoCodenser Medium
-
-This page describes regression experiments, integrated into Anserini's regression testing framework, using SPLADE-distil CoCodenser Medium on [BEIR (v1.0.0) — TREC-NEWS](http://beir.ai/).
-SPLADE-distil CoCodenser Medium is an intermediate model version between [SPLADEv2](https://arxiv.org/abs/2109.10086) and [SPLADE++](https://arxiv.org/abs/2205.04733), where the model used distillation (as in SPLADEv2), but started with the CoCondenser pre-trained model.
-See the [official SPLADE repo](https://github.com/naver/splade) for more details; the model itself can be downloaded [here](http://download-de.europe.naverlabs.com/Splade_Release_Jan22/splade_distil_CoCodenser_medium.tar.gz).
-
-The exact configurations for these regressions are stored in [this YAML file](${yaml}).
-Note that this page is automatically generated from [this template](${template}) as part of Anserini's regression pipeline, so do not modify this page directly; modify the template instead and then run `bin/build.sh` to rebuild the documentation.
-
-From one of our Waterloo servers (e.g., `orca`), the following command will perform the complete regression, end to end:
-
-```
-python src/main/python/run_regression.py --index --verify --search \
- --regression ${test_name}
-```
-
-## Corpus
-
-We make available a version of the BEIR-v1.0.0 trec-news corpus that has already been processed with SPLADE-distil CoCodenser Medium, i.e., gone through document expansion and term reweighting.
-Thus, no neural inference is involved.
-For details on how to train SPLADE-distil CoCodenser Medium and perform inference, please see [guide provided by Naver Labs Europe](https://github.com/naver/splade/tree/main/anserini_evaluation).
-
-Download the corpus and unpack into `collections/`:
-
-```
-wget https://rgw.cs.uwaterloo.ca/JIMMYLIN-bucket0/data/beir-v1.0.0-splade_distil_cocodenser_medium-trec-news.tar -P collections/
-tar xvf collections/beir-v1.0.0-splade_distil_cocodenser_medium-trec-news.tar -C collections/
-```
-
-To confirm, the tarball is 8.9 MB and has MD5 checksum `9c5a181e03cbc7f13abd0e0e4bf9158e`.
-
-With the corpus downloaded, the following command will perform the complete regression, end to end, on any machine:
-
-```
-python src/main/python/run_regression.py --index --verify --search \
- --regression ${test_name} \
- --corpus-path collections/beir-v1.0.0-splade_distil_cocodenser_medium-trec-news
-```
-
-Alternatively, you can simply copy/paste from the commands below and obtain the same results.
-
-## Indexing
-
-Sample indexing command:
-
-```
-${index_cmds}
-```
-
-The path `/path/to/${corpus}/` should point to the corpus downloaded above.
-
-The important indexing options to note here are `-impact -pretokenized`: the first tells Anserini not to encode BM25 doclengths into Lucene's norms (which is the default) and the second option says not to apply any additional tokenization on the pre-encoded tokens.
-Upon completion, we should have an index with 8,674 documents.
-
-For additional details, see the explanation of [common indexing options](${root_path}/docs/common-indexing-options.md).
-
-## Retrieval
-
-Topics and qrels are stored [here](https://github.com/castorini/anserini-tools/tree/master/topics-and-qrels), which is linked to the Anserini repo as a submodule.
-
-After indexing has completed, you should be able to perform retrieval as follows:
-
-```
-${ranking_cmds}
-```
-
-Evaluation can be performed using `trec_eval`:
-
-```
-${eval_cmds}
-```
-
-## Effectiveness
-
-With the above commands, you should be able to reproduce the following results:
-
-${effectiveness}
-
-
-## Reproduction Log[*](${root_path}/docs/reproducibility.md)
-
-To add to this reproduction log, modify [this template](${template}) and run `bin/build.sh` to rebuild the documentation.
diff --git a/src/main/resources/docgen/templates/beir-v1.0.0-webis-touche2020-splade-distil-cocodenser-medium.template b/src/main/resources/docgen/templates/beir-v1.0.0-webis-touche2020-splade-distil-cocodenser-medium.template
deleted file mode 100644
index fb2a06a493..0000000000
--- a/src/main/resources/docgen/templates/beir-v1.0.0-webis-touche2020-splade-distil-cocodenser-medium.template
+++ /dev/null
@@ -1,84 +0,0 @@
-# Anserini Regressions: BEIR (v1.0.0) — Webis-Touche2020
-
-**Model**: SPLADE-distil CoCodenser Medium
-
-This page describes regression experiments, integrated into Anserini's regression testing framework, using SPLADE-distil CoCodenser Medium on [BEIR (v1.0.0) — Webis-Touche2020](http://beir.ai/).
-SPLADE-distil CoCodenser Medium is an intermediate model version between [SPLADEv2](https://arxiv.org/abs/2109.10086) and [SPLADE++](https://arxiv.org/abs/2205.04733), where the model used distillation (as in SPLADEv2), but started with the CoCondenser pre-trained model.
-See the [official SPLADE repo](https://github.com/naver/splade) for more details; the model itself can be downloaded [here](http://download-de.europe.naverlabs.com/Splade_Release_Jan22/splade_distil_CoCodenser_medium.tar.gz).
-
-The exact configurations for these regressions are stored in [this YAML file](${yaml}).
-Note that this page is automatically generated from [this template](${template}) as part of Anserini's regression pipeline, so do not modify this page directly; modify the template instead and then run `bin/build.sh` to rebuild the documentation.
-
-From one of our Waterloo servers (e.g., `orca`), the following command will perform the complete regression, end to end:
-
-```
-python src/main/python/run_regression.py --index --verify --search \
- --regression ${test_name}
-```
-
-## Corpus
-
-We make available a version of the BEIR-v1.0.0 webis-touche2020 corpus that has already been processed with SPLADE-distil CoCodenser Medium, i.e., gone through document expansion and term reweighting.
-Thus, no neural inference is involved.
-For details on how to train SPLADE-distil CoCodenser Medium and perform inference, please see [guide provided by Naver Labs Europe](https://github.com/naver/splade/tree/main/anserini_evaluation).
-
-Download the corpus and unpack into `collections/`:
-
-```
-wget https://rgw.cs.uwaterloo.ca/JIMMYLIN-bucket0/data/beir-v1.0.0-splade_distil_cocodenser_medium-webis-touche2020.tar -P collections/
-tar xvf collections/beir-v1.0.0-splade_distil_cocodenser_medium-webis-touche2020.tar -C collections/
-```
-
-To confirm, the tarball is 293 MB and has MD5 checksum `cb8486c7b1bf9b8ff7a14aedf9074c58`.
-
-With the corpus downloaded, the following command will perform the complete regression, end to end, on any machine:
-
-```
-python src/main/python/run_regression.py --index --verify --search \
- --regression ${test_name} \
- --corpus-path collections/beir-v1.0.0-splade_distil_cocodenser_medium-webis-touche2020
-```
-
-Alternatively, you can simply copy/paste from the commands below and obtain the same results.
-
-## Indexing
-
-Sample indexing command:
-
-```
-${index_cmds}
-```
-
-The path `/path/to/${corpus}/` should point to the corpus downloaded above.
-
-The important indexing options to note here are `-impact -pretokenized`: the first tells Anserini not to encode BM25 doclengths into Lucene's norms (which is the default) and the second option says not to apply any additional tokenization on the pre-encoded tokens.
-Upon completion, we should have an index with 382,545 documents.
-
-For additional details, see the explanation of [common indexing options](${root_path}/docs/common-indexing-options.md).
-
-## Retrieval
-
-Topics and qrels are stored [here](https://github.com/castorini/anserini-tools/tree/master/topics-and-qrels), which is linked to the Anserini repo as a submodule.
-
-After indexing has completed, you should be able to perform retrieval as follows:
-
-```
-${ranking_cmds}
-```
-
-Evaluation can be performed using `trec_eval`:
-
-```
-${eval_cmds}
-```
-
-## Effectiveness
-
-With the above commands, you should be able to reproduce the following results:
-
-${effectiveness}
-
-
-## Reproduction Log[*](${root_path}/docs/reproducibility.md)
-
-To add to this reproduction log, modify [this template](${template}) and run `bin/build.sh` to rebuild the documentation.
diff --git a/src/main/resources/docgen/templates/dl19-passage-splade-distil-cocodenser-medium.template b/src/main/resources/docgen/templates/dl19-passage-splade-distil-cocodenser-medium.template
deleted file mode 100644
index 501086d4ea..0000000000
--- a/src/main/resources/docgen/templates/dl19-passage-splade-distil-cocodenser-medium.template
+++ /dev/null
@@ -1,96 +0,0 @@
-# Anserini Regressions: TREC 2019 Deep Learning Track (Passage)
-
-**Model**: SPLADE-distil CoCodenser Medium
-
-This page describes regression experiments, integrated into Anserini's regression testing framework, using SPLADE-distil CoCodenser Medium on the [TREC 2019 Deep Learning Track passage ranking task](https://trec.nist.gov/data/deep2019.html).
-SPLADE-distil CoCodenser Medium is an intermediate model version between [SPLADEv2](https://arxiv.org/abs/2109.10086) and [SPLADE++](https://arxiv.org/abs/2205.04733), where the model used distillation (as in SPLADEv2), but started with the CoCondenser pre-trained model.
-See the [official SPLADE repo](https://github.com/naver/splade) for more details; the model itself can be downloaded [here](http://download-de.europe.naverlabs.com/Splade_Release_Jan22/splade_distil_CoCodenser_medium.tar.gz).
-
-Note that the NIST relevance judgments provide far more relevant passages per topic than the "sparse" judgments provided by Microsoft (the NIST judgments are sometimes called "dense" judgments to emphasize this contrast).
-For additional instructions on working with the MS MARCO passage collection, refer to [this page](${root_path}/docs/experiments-msmarco-passage.md).
-
-The exact configurations for these regressions are stored in [this YAML file](${yaml}).
-Note that this page is automatically generated from [this template](${template}) as part of Anserini's regression pipeline, so do not modify this page directly; modify the template instead and then run `bin/build.sh` to rebuild the documentation.
-
-From one of our Waterloo servers (e.g., `orca`), the following command will perform the complete regression, end to end:
-
-```bash
-python src/main/python/run_regression.py --index --verify --search --regression ${test_name}
-```
-
-We make available a version of the MS MARCO Passage Corpus that has already been processed with SPLADE-distil CoCodenser Medium, i.e., model inference has already been performed on every passage and the output sparse vectors stored.
-Thus, no neural inference is involved.
-
-From any machine, the following command will download the corpus and perform the complete regression, end to end:
-
-```bash
-python src/main/python/run_regression.py --download --index --verify --search --regression ${test_name}
-```
-
-The `run_regression.py` script automates the following steps, but if you want to perform each step manually, simply copy/paste from the commands below and you'll obtain the same regression results.
-
-## Corpus Download
-
-Download the corpus and unpack into `collections/`:
-
-```bash
-wget ${download_url} -P collections/
-tar xvf collections/${corpus}.tar -C collections/
-```
-
-To confirm, `${corpus}.tar` is 4.9 GB and has MD5 checksum `${download_checksum}`.
-With the corpus downloaded, the following command will perform the remaining steps below:
-
-```bash
-python src/main/python/run_regression.py --index --verify --search --regression ${test_name} \
- --corpus-path collections/${corpus}
-```
-
-## Indexing
-
-Sample indexing command:
-
-```bash
-${index_cmds}
-```
-
-The path `/path/to/${corpus}/` should point to the corpus downloaded above.
-
-The important indexing options to note here are `-impact -pretokenized`: the first tells Anserini not to encode BM25 doclengths into Lucene's norms (which is the default) and the second option says not to apply any additional tokenization on the SPLADE-distil CoCodenser Medium tokens.
-Upon completion, we should have an index with 8,841,823 documents.
-
-For additional details, see the explanation of [common indexing options](${root_path}/docs/common-indexing-options.md).
-
-## Retrieval
-
-Topics and qrels are stored [here](https://github.com/castorini/anserini-tools/tree/master/topics-and-qrels), which is linked to the Anserini repo as a submodule.
-The regression experiments here evaluate on the 43 topics for which NIST has provided judgments as part of the TREC 2019 Deep Learning Track.
-The original data can be found [here](https://trec.nist.gov/data/deep2019.html).
-
-After indexing has completed, you should be able to perform retrieval as follows:
-
-```bash
-${ranking_cmds}
-```
-
-Evaluation can be performed using `trec_eval`:
-
-```bash
-${eval_cmds}
-```
-
-## Effectiveness
-
-With the above commands, you should be able to reproduce the following results:
-
-${effectiveness}
-
-Note that retrieval metrics are computed to depth 1000 hits per query (as opposed to 100 hits per query for document ranking).
-Also, for computing nDCG, remember that we keep qrels of _all_ relevance grades, whereas for other metrics (e.g., AP), relevance grade 1 is considered not relevant (i.e., use the `-l 2` option in `trec_eval`).
-The experimental results reported here are directly comparable to the results reported in the [track overview paper](https://arxiv.org/abs/2003.07820).
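The effect of the `-l 2` option can be sketched as follows: nDCG is computed on the graded qrels directly, while binary metrics such as AP first collapse grades below the threshold to non-relevant. The topic and document ids below are hypothetical.

```python
def binarize(qrels, min_grade=2):
    """Mimic trec_eval's -l option: grades below min_grade count as non-relevant."""
    return {topic: {doc: int(grade >= min_grade) for doc, grade in docs.items()}
            for topic, docs in qrels.items()}

# Hypothetical graded qrels (topic -> docid -> grade in 0..3):
graded = {"19335": {"d1": 3, "d2": 1, "d3": 0}}
binary = binarize(graded)
# binary == {"19335": {"d1": 1, "d2": 0, "d3": 0}}
```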
-
-## Reproduction Log[*](${root_path}/docs/reproducibility.md)
-
-To add to this reproduction log, modify [this template](${template}) and run `bin/build.sh` to rebuild the documentation.
-
-+ Results reproduced by [@lintool](https://github.com/lintool) on 2022-06-14 (commit [`dc07344`](https://github.com/castorini/anserini/commit/dc073447c8a0c07b53d979c49bf1e2e018200508))
diff --git a/src/main/resources/docgen/templates/dl20-passage-splade-distil-cocodenser-medium.template b/src/main/resources/docgen/templates/dl20-passage-splade-distil-cocodenser-medium.template
deleted file mode 100644
index 728610541a..0000000000
--- a/src/main/resources/docgen/templates/dl20-passage-splade-distil-cocodenser-medium.template
+++ /dev/null
@@ -1,96 +0,0 @@
-# Anserini Regressions: TREC 2020 Deep Learning Track (Passage)
-
-**Model**: SPLADE-distil CoCodenser Medium
-
-This page describes regression experiments, integrated into Anserini's regression testing framework, using SPLADE-distil CoCodenser Medium on the [TREC 2020 Deep Learning Track passage ranking task](https://trec.nist.gov/data/deep2020.html).
-SPLADE-distil CoCodenser Medium is an intermediate model version between [SPLADEv2](https://arxiv.org/abs/2109.10086) and [SPLADE++](https://arxiv.org/abs/2205.04733), where the model used distillation (as in SPLADEv2), but started with the CoCondenser pre-trained model.
-See the [official SPLADE repo](https://github.com/naver/splade) for more details; the model itself can be downloaded [here](http://download-de.europe.naverlabs.com/Splade_Release_Jan22/splade_distil_CoCodenser_medium.tar.gz).
-
-Note that the NIST relevance judgments provide far more relevant passages per topic than the "sparse" judgments provided by Microsoft (the NIST judgments are sometimes called "dense" judgments to emphasize this contrast).
-For additional instructions on working with the MS MARCO passage collection, refer to [this page](${root_path}/docs/experiments-msmarco-passage.md).
-
-The exact configurations for these regressions are stored in [this YAML file](${yaml}).
-Note that this page is automatically generated from [this template](${template}) as part of Anserini's regression pipeline, so do not modify this page directly; modify the template instead and then run `bin/build.sh` to rebuild the documentation.
-
-From one of our Waterloo servers (e.g., `orca`), the following command will perform the complete regression, end to end:
-
-```bash
-python src/main/python/run_regression.py --index --verify --search --regression ${test_name}
-```
-
-We make available a version of the MS MARCO Passage Corpus that has already been processed with SPLADE-distil CoCodenser Medium, i.e., model inference has already been performed on every passage and the output sparse vectors stored.
-Thus, no neural inference is involved.
-
-From any machine, the following command will download the corpus and perform the complete regression, end to end:
-
-```bash
-python src/main/python/run_regression.py --download --index --verify --search --regression ${test_name}
-```
-
-The `run_regression.py` script automates the following steps, but if you want to perform each step manually, simply copy/paste from the commands below and you'll obtain the same regression results.
-
-## Corpus Download
-
-Download the corpus and unpack into `collections/`:
-
-```bash
-wget ${download_url} -P collections/
-tar xvf collections/${corpus}.tar -C collections/
-```
-
-To confirm, `${corpus}.tar` is 4.9 GB and has MD5 checksum `${download_checksum}`.
-With the corpus downloaded, the following command will perform the remaining steps below:
-
-```bash
-python src/main/python/run_regression.py --index --verify --search --regression ${test_name} \
- --corpus-path collections/${corpus}
-```
-
-## Indexing
-
-Sample indexing command:
-
-```bash
-${index_cmds}
-```
-
-The path `/path/to/${corpus}/` should point to the corpus downloaded above.
-
-The important indexing options to note here are `-impact -pretokenized`: the first tells Anserini not to encode BM25 doclengths into Lucene's norms (which is the default) and the second option says not to apply any additional tokenization on the SPLADE-distil CoCodenser Medium tokens.
-Upon completion, we should have an index with 8,841,823 documents.
-
-For additional details, see the explanation of [common indexing options](${root_path}/docs/common-indexing-options.md).
-
-## Retrieval
-
-Topics and qrels are stored [here](https://github.com/castorini/anserini-tools/tree/master/topics-and-qrels), which is linked to the Anserini repo as a submodule.
-The regression experiments here evaluate on the 54 topics for which NIST has provided judgments as part of the TREC 2020 Deep Learning Track.
-The original data can be found [here](https://trec.nist.gov/data/deep2020.html).
-
-After indexing has completed, you should be able to perform retrieval as follows:
-
-```bash
-${ranking_cmds}
-```
-
-Evaluation can be performed using `trec_eval`:
-
-```bash
-${eval_cmds}
-```
-
-## Effectiveness
-
-With the above commands, you should be able to reproduce the following results:
-
-${effectiveness}
-
-Note that retrieval metrics are computed to depth 1000 hits per query (as opposed to 100 hits per query for document ranking).
-Also, for computing nDCG, remember that we keep qrels of _all_ relevance grades, whereas for other metrics (e.g., AP), relevance grade 1 is considered not relevant (i.e., use the `-l 2` option in `trec_eval`).
-The experimental results reported here are directly comparable to the results reported in the [track overview paper](https://arxiv.org/abs/2102.07662).
-
-## Reproduction Log[*](${root_path}/docs/reproducibility.md)
-
-To add to this reproduction log, modify [this template](${template}) and run `bin/build.sh` to rebuild the documentation.
-
-+ Results reproduced by [@lintool](https://github.com/lintool) on 2022-06-14 (commit [`dc07344`](https://github.com/castorini/anserini/commit/dc073447c8a0c07b53d979c49bf1e2e018200508))
diff --git a/src/main/resources/docgen/templates/msmarco-passage-splade-distil-cocodenser-medium.template b/src/main/resources/docgen/templates/msmarco-passage-splade-distil-cocodenser-medium.template
deleted file mode 100644
index 77cf08caab..0000000000
--- a/src/main/resources/docgen/templates/msmarco-passage-splade-distil-cocodenser-medium.template
+++ /dev/null
@@ -1,88 +0,0 @@
-# Anserini Regressions: MS MARCO Passage Ranking
-
-**Model**: SPLADE-distil CoCodenser Medium
-
-This page describes regression experiments, integrated into Anserini's regression testing framework, using the SPLADE-distil CoCodenser Medium model on the [MS MARCO passage ranking task](https://github.com/microsoft/MSMARCO-Passage-Ranking).
-SPLADE-distil CoCodenser Medium is an intermediate model version between [SPLADEv2](https://arxiv.org/abs/2109.10086) and [SPLADE++](https://arxiv.org/abs/2205.04733), where the model used distillation (as in SPLADEv2), but started with the CoCondenser pre-trained model.
-See the [official SPLADE repo](https://github.com/naver/splade) for more details; the model itself can be downloaded [here](http://download-de.europe.naverlabs.com/Splade_Release_Jan22/splade_distil_CoCodenser_medium.tar.gz).
-
-The exact configurations for these regressions are stored in [this YAML file](${yaml}).
-Note that this page is automatically generated from [this template](${template}) as part of Anserini's regression pipeline, so do not modify this page directly; modify the template instead and then run `bin/build.sh` to rebuild the documentation.
-
-From one of our Waterloo servers (e.g., `orca`), the following command will perform the complete regression, end to end:
-
-```bash
-python src/main/python/run_regression.py --index --verify --search --regression ${test_name}
-```
-
-We make available a version of the MS MARCO Passage Corpus that has already been processed with SPLADE-distil CoCodenser Medium, i.e., model inference has already been performed on every passage and the output sparse vectors stored.
-Thus, no neural inference is involved.
-
-From any machine, the following command will download the corpus and perform the complete regression, end to end:
-
-```bash
-python src/main/python/run_regression.py --download --index --verify --search --regression ${test_name}
-```
-
-The `run_regression.py` script automates the following steps, but if you want to perform each step manually, simply copy/paste from the commands below and you'll obtain the same regression results.
-
-## Corpus Download
-
-Download the corpus and unpack into `collections/`:
-
-```bash
-wget ${download_url} -P collections/
-tar xvf collections/${corpus}.tar -C collections/
-```
-
-To confirm, `${corpus}.tar` is 4.9 GB and has MD5 checksum `${download_checksum}`.
-With the corpus downloaded, the following command will perform the remaining steps below:
-
-```bash
-python src/main/python/run_regression.py --index --verify --search --regression ${test_name} \
- --corpus-path collections/${corpus}
-```
-
-## Indexing
-
-Sample indexing command:
-
-```bash
-${index_cmds}
-```
-
-The path `/path/to/${corpus}/` should point to the corpus downloaded above.
-
-The important indexing options to note here are `-impact -pretokenized`: the first tells Anserini not to encode BM25 doc lengths into Lucene's norms (which is the default) and the second option says not to apply any additional tokenization on the pre-encoded tokens.
-Upon completion, we should have an index with 8,841,823 documents.
-
-For additional details, see explanation of [common indexing options](${root_path}/docs/common-indexing-options.md).
-
-## Retrieval
-
-Topics and qrels are stored [here](https://github.com/castorini/anserini-tools/tree/master/topics-and-qrels), which is linked to the Anserini repo as a submodule.
-The regression experiments here evaluate on the 6980 dev set questions; see [this page](${root_path}/docs/experiments-msmarco-passage.md) for more details.
-
-After indexing has completed, you should be able to perform retrieval as follows:
-
-```bash
-${ranking_cmds}
-```
-
-Evaluation can be performed using `trec_eval`:
-
-```bash
-${eval_cmds}
-```
-
-## Effectiveness
-
-With the above commands, you should be able to reproduce the following results:
-
-${effectiveness}
-
-## Reproduction Log[*](${root_path}/docs/reproducibility.md)
-
-To add to this reproduction log, modify [this template](${template}) and run `bin/build.sh` to rebuild the documentation.
-
-+ Results reproduced by [@lintool](https://github.com/lintool) on 2022-06-14 (commit [`dc07344`](https://github.com/castorini/anserini/commit/dc073447c8a0c07b53d979c49bf1e2e018200508))
diff --git a/src/main/resources/regression/beir-v1.0.0-arguana-splade-distil-cocodenser-medium.yaml b/src/main/resources/regression/beir-v1.0.0-arguana-splade-distil-cocodenser-medium.yaml
deleted file mode 100644
index 790bfd02f5..0000000000
--- a/src/main/resources/regression/beir-v1.0.0-arguana-splade-distil-cocodenser-medium.yaml
+++ /dev/null
@@ -1,55 +0,0 @@
----
-corpus: beir-v1.0.0-arguana-splade_distil_cocodenser_medium
-corpus_path: collections/beir-v1.0.0/splade_distil_cocodenser_medium/arguana
-
-index_path: indexes/lucene-index.beir-v1.0.0-arguana-splade_distil_cocodenser_medium/
-collection_class: JsonVectorCollection
-generator_class: DefaultLuceneDocumentGenerator
-index_threads: 16
-index_options: -impact -pretokenized
-index_stats:
- documents: 8674
- documents (non-empty): 8674
- total terms: 96421121
-
-metrics:
- - metric: nDCG@10
- command: target/appassembler/bin/trec_eval
- params: -c -m ndcg_cut.10
- separator: "\t"
- parse_index: 2
- metric_precision: 4
- can_combine: false
- - metric: R@100
- command: target/appassembler/bin/trec_eval
- params: -c -m recall.100
- separator: "\t"
- parse_index: 2
- metric_precision: 4
- can_combine: false
- - metric: R@1000
- command: target/appassembler/bin/trec_eval
- params: -c -m recall.1000
- separator: "\t"
- parse_index: 2
- metric_precision: 4
- can_combine: false
-
-topic_reader: TsvString
-topics:
- - name: "BEIR (v1.0.0): ArguAna"
- id: test
- path: topics.beir-v1.0.0-arguana.test.splade_distil_cocodenser_medium.tsv.gz
- qrel: qrels.beir-v1.0.0-arguana.test.txt
-
-models:
- - name: splade_distil_cocodenser_medium
- display: SPLADE-distill CoCodenser Medium
- params: -impact -pretokenized -removeQuery -hits 1000
- results:
- nDCG@10:
- - 0.5210
- R@100:
- - 0.9822
- R@1000:
- - 0.9950
diff --git a/src/main/resources/regression/beir-v1.0.0-bioasq-splade-distil-cocodenser-medium.yaml b/src/main/resources/regression/beir-v1.0.0-bioasq-splade-distil-cocodenser-medium.yaml
deleted file mode 100644
index 9906e00663..0000000000
--- a/src/main/resources/regression/beir-v1.0.0-bioasq-splade-distil-cocodenser-medium.yaml
+++ /dev/null
@@ -1,55 +0,0 @@
----
-corpus: beir-v1.0.0-bioasq-splade_distil_cocodenser_medium
-corpus_path: collections/beir-v1.0.0/splade_distil_cocodenser_medium/bioasq
-
-index_path: indexes/lucene-index.beir-v1.0.0-bioasq-splade_distil_cocodenser_medium/
-collection_class: JsonVectorCollection
-generator_class: DefaultLuceneDocumentGenerator
-index_threads: 16
-index_options: -impact -pretokenized
-index_stats:
- documents: 14914603
- documents (non-empty): 14914603
- total terms: 181960155708
-
-metrics:
- - metric: nDCG@10
- command: target/appassembler/bin/trec_eval
- params: -c -m ndcg_cut.10
- separator: "\t"
- parse_index: 2
- metric_precision: 4
- can_combine: false
- - metric: R@100
- command: target/appassembler/bin/trec_eval
- params: -c -m recall.100
- separator: "\t"
- parse_index: 2
- metric_precision: 4
- can_combine: false
- - metric: R@1000
- command: target/appassembler/bin/trec_eval
- params: -c -m recall.1000
- separator: "\t"
- parse_index: 2
- metric_precision: 4
- can_combine: false
-
-topic_reader: TsvString
-topics:
- - name: "BEIR (v1.0.0): BioASQ"
- id: test
- path: topics.beir-v1.0.0-bioasq.test.splade_distil_cocodenser_medium.tsv.gz
- qrel: qrels.beir-v1.0.0-bioasq.test.txt
-
-models:
- - name: splade_distil_cocodenser_medium
- display: SPLADE-distill CoCodenser Medium
- params: -impact -pretokenized -removeQuery -hits 1000
- results:
- nDCG@10:
- - 0.5035
- R@100:
- - 0.7422
- R@1000:
- - 0.8904
diff --git a/src/main/resources/regression/beir-v1.0.0-climate-fever-splade-distil-cocodenser-medium.yaml b/src/main/resources/regression/beir-v1.0.0-climate-fever-splade-distil-cocodenser-medium.yaml
deleted file mode 100644
index 5bee8412b1..0000000000
--- a/src/main/resources/regression/beir-v1.0.0-climate-fever-splade-distil-cocodenser-medium.yaml
+++ /dev/null
@@ -1,55 +0,0 @@
----
-corpus: beir-v1.0.0-climate-fever-splade_distil_cocodenser_medium
-corpus_path: collections/beir-v1.0.0/splade_distil_cocodenser_medium/climate-fever
-
-index_path: indexes/lucene-index.beir-v1.0.0-climate-fever-splade_distil_cocodenser_medium/
-collection_class: JsonVectorCollection
-generator_class: DefaultLuceneDocumentGenerator
-index_threads: 16
-index_options: -impact -pretokenized
-index_stats:
- documents: 5416593
- documents (non-empty): 5416593
- total terms: 38845226073
-
-metrics:
- - metric: nDCG@10
- command: target/appassembler/bin/trec_eval
- params: -c -m ndcg_cut.10
- separator: "\t"
- parse_index: 2
- metric_precision: 4
- can_combine: false
- - metric: R@100
- command: target/appassembler/bin/trec_eval
- params: -c -m recall.100
- separator: "\t"
- parse_index: 2
- metric_precision: 4
- can_combine: false
- - metric: R@1000
- command: target/appassembler/bin/trec_eval
- params: -c -m recall.1000
- separator: "\t"
- parse_index: 2
- metric_precision: 4
- can_combine: false
-
-topic_reader: TsvString
-topics:
- - name: "BEIR (v1.0.0): Climate-FEVER"
- id: test
- path: topics.beir-v1.0.0-climate-fever.test.splade_distil_cocodenser_medium.tsv.gz
- qrel: qrels.beir-v1.0.0-climate-fever.test.txt
-
-models:
- - name: splade_distil_cocodenser_medium
- display: SPLADE-distill CoCodenser Medium
- params: -impact -pretokenized -removeQuery -hits 1000
- results:
- nDCG@10:
- - 0.2276
- R@100:
- - 0.5140
- R@1000:
- - 0.7084
diff --git a/src/main/resources/regression/beir-v1.0.0-cqadupstack-android-splade-distil-cocodenser-medium.yaml b/src/main/resources/regression/beir-v1.0.0-cqadupstack-android-splade-distil-cocodenser-medium.yaml
deleted file mode 100644
index 8c4a55cc23..0000000000
--- a/src/main/resources/regression/beir-v1.0.0-cqadupstack-android-splade-distil-cocodenser-medium.yaml
+++ /dev/null
@@ -1,55 +0,0 @@
----
-corpus: beir-v1.0.0-cqadupstack-android-splade_distil_cocodenser_medium
-corpus_path: collections/beir-v1.0.0/splade_distil_cocodenser_medium/cqadupstack-android
-
-index_path: indexes/lucene-index.beir-v1.0.0-cqadupstack-android-splade_distil_cocodenser_medium/
-collection_class: JsonVectorCollection
-generator_class: DefaultLuceneDocumentGenerator
-index_threads: 16
-index_options: -impact -pretokenized
-index_stats:
- documents: 22998
- documents (non-empty): 22998
- total terms: 157949889
-
-metrics:
- - metric: nDCG@10
- command: target/appassembler/bin/trec_eval
- params: -c -m ndcg_cut.10
- separator: "\t"
- parse_index: 2
- metric_precision: 4
- can_combine: false
- - metric: R@100
- command: target/appassembler/bin/trec_eval
- params: -c -m recall.100
- separator: "\t"
- parse_index: 2
- metric_precision: 4
- can_combine: false
- - metric: R@1000
- command: target/appassembler/bin/trec_eval
- params: -c -m recall.1000
- separator: "\t"
- parse_index: 2
- metric_precision: 4
- can_combine: false
-
-topic_reader: TsvString
-topics:
- - name: "BEIR (v1.0.0): CQADupStack-android"
- id: test
- path: topics.beir-v1.0.0-cqadupstack-android.test.splade_distil_cocodenser_medium.tsv.gz
- qrel: qrels.beir-v1.0.0-cqadupstack-android.test.txt
-
-models:
- - name: splade_distil_cocodenser_medium
- display: SPLADE-distill CoCodenser Medium
- params: -impact -pretokenized -removeQuery -hits 1000
- results:
- nDCG@10:
- - 0.3954
- R@100:
- - 0.7405
- R@1000:
- - 0.9035
diff --git a/src/main/resources/regression/beir-v1.0.0-cqadupstack-english-splade-distil-cocodenser-medium.yaml b/src/main/resources/regression/beir-v1.0.0-cqadupstack-english-splade-distil-cocodenser-medium.yaml
deleted file mode 100644
index 42f34b4ee6..0000000000
--- a/src/main/resources/regression/beir-v1.0.0-cqadupstack-english-splade-distil-cocodenser-medium.yaml
+++ /dev/null
@@ -1,55 +0,0 @@
----
-corpus: beir-v1.0.0-cqadupstack-english-splade_distil_cocodenser_medium
-corpus_path: collections/beir-v1.0.0/splade_distil_cocodenser_medium/cqadupstack-english
-
-index_path: indexes/lucene-index.beir-v1.0.0-cqadupstack-english-splade_distil_cocodenser_medium/
-collection_class: JsonVectorCollection
-generator_class: DefaultLuceneDocumentGenerator
-index_threads: 16
-index_options: -impact -pretokenized
-index_stats:
- documents: 40221
- documents (non-empty): 40221
- total terms: 218761119
-
-metrics:
- - metric: nDCG@10
- command: target/appassembler/bin/trec_eval
- params: -c -m ndcg_cut.10
- separator: "\t"
- parse_index: 2
- metric_precision: 4
- can_combine: false
- - metric: R@100
- command: target/appassembler/bin/trec_eval
- params: -c -m recall.100
- separator: "\t"
- parse_index: 2
- metric_precision: 4
- can_combine: false
- - metric: R@1000
- command: target/appassembler/bin/trec_eval
- params: -c -m recall.1000
- separator: "\t"
- parse_index: 2
- metric_precision: 4
- can_combine: false
-
-topic_reader: TsvString
-topics:
- - name: "BEIR (v1.0.0): CQADupStack-english"
- id: test
- path: topics.beir-v1.0.0-cqadupstack-english.test.splade_distil_cocodenser_medium.tsv.gz
- qrel: qrels.beir-v1.0.0-cqadupstack-english.test.txt
-
-models:
- - name: splade_distil_cocodenser_medium
- display: SPLADE-distill CoCodenser Medium
- params: -impact -pretokenized -removeQuery -hits 1000
- results:
- nDCG@10:
- - 0.4026
- R@100:
- - 0.6768
- R@1000:
- - 0.8346
diff --git a/src/main/resources/regression/beir-v1.0.0-cqadupstack-gaming-splade-distil-cocodenser-medium.yaml b/src/main/resources/regression/beir-v1.0.0-cqadupstack-gaming-splade-distil-cocodenser-medium.yaml
deleted file mode 100644
index 94dfee52c5..0000000000
--- a/src/main/resources/regression/beir-v1.0.0-cqadupstack-gaming-splade-distil-cocodenser-medium.yaml
+++ /dev/null
@@ -1,55 +0,0 @@
----
-corpus: beir-v1.0.0-cqadupstack-gaming-splade_distil_cocodenser_medium
-corpus_path: collections/beir-v1.0.0/splade_distil_cocodenser_medium/cqadupstack-gaming
-
-index_path: indexes/lucene-index.beir-v1.0.0-cqadupstack-gaming-splade_distil_cocodenser_medium/
-collection_class: JsonVectorCollection
-generator_class: DefaultLuceneDocumentGenerator
-index_threads: 16
-index_options: -impact -pretokenized
-index_stats:
- documents: 45301
- documents (non-empty): 45301
- total terms: 296073202
-
-metrics:
- - metric: nDCG@10
- command: target/appassembler/bin/trec_eval
- params: -c -m ndcg_cut.10
- separator: "\t"
- parse_index: 2
- metric_precision: 4
- can_combine: false
- - metric: R@100
- command: target/appassembler/bin/trec_eval
- params: -c -m recall.100
- separator: "\t"
- parse_index: 2
- metric_precision: 4
- can_combine: false
- - metric: R@1000
- command: target/appassembler/bin/trec_eval
- params: -c -m recall.1000
- separator: "\t"
- parse_index: 2
- metric_precision: 4
- can_combine: false
-
-topic_reader: TsvString
-topics:
- - name: "BEIR (v1.0.0): CQADupStack-gaming"
- id: test
- path: topics.beir-v1.0.0-cqadupstack-gaming.test.splade_distil_cocodenser_medium.tsv.gz
- qrel: qrels.beir-v1.0.0-cqadupstack-gaming.test.txt
-
-models:
- - name: splade_distil_cocodenser_medium
- display: SPLADE-distill CoCodenser Medium
- params: -impact -pretokenized -removeQuery -hits 1000
- results:
- nDCG@10:
- - 0.5061
- R@100:
- - 0.8138
- R@1000:
- - 0.9253
diff --git a/src/main/resources/regression/beir-v1.0.0-cqadupstack-gis-splade-distil-cocodenser-medium.yaml b/src/main/resources/regression/beir-v1.0.0-cqadupstack-gis-splade-distil-cocodenser-medium.yaml
deleted file mode 100644
index 3b1a28cd6e..0000000000
--- a/src/main/resources/regression/beir-v1.0.0-cqadupstack-gis-splade-distil-cocodenser-medium.yaml
+++ /dev/null
@@ -1,55 +0,0 @@
----
-corpus: beir-v1.0.0-cqadupstack-gis-splade_distil_cocodenser_medium
-corpus_path: collections/beir-v1.0.0/splade_distil_cocodenser_medium/cqadupstack-gis
-
-index_path: indexes/lucene-index.beir-v1.0.0-cqadupstack-gis-splade_distil_cocodenser_medium/
-collection_class: JsonVectorCollection
-generator_class: DefaultLuceneDocumentGenerator
-index_threads: 16
-index_options: -impact -pretokenized
-index_stats:
- documents: 37637
- documents (non-empty): 37637
- total terms: 296967034
-
-metrics:
- - metric: nDCG@10
- command: target/appassembler/bin/trec_eval
- params: -c -m ndcg_cut.10
- separator: "\t"
- parse_index: 2
- metric_precision: 4
- can_combine: false
- - metric: R@100
- command: target/appassembler/bin/trec_eval
- params: -c -m recall.100
- separator: "\t"
- parse_index: 2
- metric_precision: 4
- can_combine: false
- - metric: R@1000
- command: target/appassembler/bin/trec_eval
- params: -c -m recall.1000
- separator: "\t"
- parse_index: 2
- metric_precision: 4
- can_combine: false
-
-topic_reader: TsvString
-topics:
- - name: "BEIR (v1.0.0): CQADupStack-gis"
- id: test
- path: topics.beir-v1.0.0-cqadupstack-gis.test.splade_distil_cocodenser_medium.tsv.gz
- qrel: qrels.beir-v1.0.0-cqadupstack-gis.test.txt
-
-models:
- - name: splade_distil_cocodenser_medium
- display: SPLADE-distill CoCodenser Medium
- params: -impact -pretokenized -removeQuery -hits 1000
- results:
- nDCG@10:
- - 0.3223
- R@100:
- - 0.6419
- R@1000:
- - 0.8385
diff --git a/src/main/resources/regression/beir-v1.0.0-cqadupstack-mathematica-splade-distil-cocodenser-medium.yaml b/src/main/resources/regression/beir-v1.0.0-cqadupstack-mathematica-splade-distil-cocodenser-medium.yaml
deleted file mode 100644
index 3abbdd462a..0000000000
--- a/src/main/resources/regression/beir-v1.0.0-cqadupstack-mathematica-splade-distil-cocodenser-medium.yaml
+++ /dev/null
@@ -1,55 +0,0 @@
----
-corpus: beir-v1.0.0-cqadupstack-mathematica-splade_distil_cocodenser_medium
-corpus_path: collections/beir-v1.0.0/splade_distil_cocodenser_medium/cqadupstack-mathematica
-
-index_path: indexes/lucene-index.beir-v1.0.0-cqadupstack-mathematica-splade_distil_cocodenser_medium/
-collection_class: JsonVectorCollection
-generator_class: DefaultLuceneDocumentGenerator
-index_threads: 16
-index_options: -impact -pretokenized
-index_stats:
- documents: 16705
- documents (non-empty): 16705
- total terms: 132796971
-
-metrics:
- - metric: nDCG@10
- command: target/appassembler/bin/trec_eval
- params: -c -m ndcg_cut.10
- separator: "\t"
- parse_index: 2
- metric_precision: 4
- can_combine: false
- - metric: R@100
- command: target/appassembler/bin/trec_eval
- params: -c -m recall.100
- separator: "\t"
- parse_index: 2
- metric_precision: 4
- can_combine: false
- - metric: R@1000
- command: target/appassembler/bin/trec_eval
- params: -c -m recall.1000
- separator: "\t"
- parse_index: 2
- metric_precision: 4
- can_combine: false
-
-topic_reader: TsvString
-topics:
- - name: "BEIR (v1.0.0): CQADupStack-mathematica"
- id: test
- path: topics.beir-v1.0.0-cqadupstack-mathematica.test.splade_distil_cocodenser_medium.tsv.gz
- qrel: qrels.beir-v1.0.0-cqadupstack-mathematica.test.txt
-
-models:
- - name: splade_distil_cocodenser_medium
- display: SPLADE-distill CoCodenser Medium
- params: -impact -pretokenized -removeQuery -hits 1000
- results:
- nDCG@10:
- - 0.2423
- R@100:
- - 0.5732
- R@1000:
- - 0.7848
diff --git a/src/main/resources/regression/beir-v1.0.0-cqadupstack-physics-splade-distil-cocodenser-medium.yaml b/src/main/resources/regression/beir-v1.0.0-cqadupstack-physics-splade-distil-cocodenser-medium.yaml
deleted file mode 100644
index 794d506176..0000000000
--- a/src/main/resources/regression/beir-v1.0.0-cqadupstack-physics-splade-distil-cocodenser-medium.yaml
+++ /dev/null
@@ -1,55 +0,0 @@
----
-corpus: beir-v1.0.0-cqadupstack-physics-splade_distil_cocodenser_medium
-corpus_path: collections/beir-v1.0.0/splade_distil_cocodenser_medium/cqadupstack-physics
-
-index_path: indexes/lucene-index.beir-v1.0.0-cqadupstack-physics-splade_distil_cocodenser_medium/
-collection_class: JsonVectorCollection
-generator_class: DefaultLuceneDocumentGenerator
-index_threads: 16
-index_options: -impact -pretokenized
-index_stats:
- documents: 38316
- documents (non-empty): 38316
- total terms: 284896455
-
-metrics:
- - metric: nDCG@10
- command: target/appassembler/bin/trec_eval
- params: -c -m ndcg_cut.10
- separator: "\t"
- parse_index: 2
- metric_precision: 4
- can_combine: false
- - metric: R@100
- command: target/appassembler/bin/trec_eval
- params: -c -m recall.100
- separator: "\t"
- parse_index: 2
- metric_precision: 4
- can_combine: false
- - metric: R@1000
- command: target/appassembler/bin/trec_eval
- params: -c -m recall.1000
- separator: "\t"
- parse_index: 2
- metric_precision: 4
- can_combine: false
-
-topic_reader: TsvString
-topics:
- - name: "BEIR (v1.0.0): CQADupStack-physics"
- id: test
- path: topics.beir-v1.0.0-cqadupstack-physics.test.splade_distil_cocodenser_medium.tsv.gz
- qrel: qrels.beir-v1.0.0-cqadupstack-physics.test.txt
-
-models:
- - name: splade_distil_cocodenser_medium
- display: SPLADE-distill CoCodenser Medium
- params: -impact -pretokenized -removeQuery -hits 1000
- results:
- nDCG@10:
- - 0.3668
- R@100:
- - 0.7286
- R@1000:
- - 0.8931
diff --git a/src/main/resources/regression/beir-v1.0.0-cqadupstack-programmers-splade-distil-cocodenser-medium.yaml b/src/main/resources/regression/beir-v1.0.0-cqadupstack-programmers-splade-distil-cocodenser-medium.yaml
deleted file mode 100644
index aa571ae12e..0000000000
--- a/src/main/resources/regression/beir-v1.0.0-cqadupstack-programmers-splade-distil-cocodenser-medium.yaml
+++ /dev/null
@@ -1,55 +0,0 @@
----
-corpus: beir-v1.0.0-cqadupstack-programmers-splade_distil_cocodenser_medium
-corpus_path: collections/beir-v1.0.0/splade_distil_cocodenser_medium/cqadupstack-programmers
-
-index_path: indexes/lucene-index.beir-v1.0.0-cqadupstack-programmers-splade_distil_cocodenser_medium/
-collection_class: JsonVectorCollection
-generator_class: DefaultLuceneDocumentGenerator
-index_threads: 16
-index_options: -impact -pretokenized
-index_stats:
- documents: 32176
- documents (non-empty): 32176
- total terms: 258856106
-
-metrics:
- - metric: nDCG@10
- command: target/appassembler/bin/trec_eval
- params: -c -m ndcg_cut.10
- separator: "\t"
- parse_index: 2
- metric_precision: 4
- can_combine: false
- - metric: R@100
- command: target/appassembler/bin/trec_eval
- params: -c -m recall.100
- separator: "\t"
- parse_index: 2
- metric_precision: 4
- can_combine: false
- - metric: R@1000
- command: target/appassembler/bin/trec_eval
- params: -c -m recall.1000
- separator: "\t"
- parse_index: 2
- metric_precision: 4
- can_combine: false
-
-topic_reader: TsvString
-topics:
- - name: "BEIR (v1.0.0): CQADupStack-programmers"
- id: test
- path: topics.beir-v1.0.0-cqadupstack-programmers.test.splade_distil_cocodenser_medium.tsv.gz
- qrel: qrels.beir-v1.0.0-cqadupstack-programmers.test.txt
-
-models:
- - name: splade_distil_cocodenser_medium
- display: SPLADE-distill CoCodenser Medium
- params: -impact -pretokenized -removeQuery -hits 1000
- results:
- nDCG@10:
- - 0.3412
- R@100:
- - 0.6653
- R@1000:
- - 0.8451
diff --git a/src/main/resources/regression/beir-v1.0.0-cqadupstack-stats-splade-distil-cocodenser-medium.yaml b/src/main/resources/regression/beir-v1.0.0-cqadupstack-stats-splade-distil-cocodenser-medium.yaml
deleted file mode 100644
index 671bd75b89..0000000000
--- a/src/main/resources/regression/beir-v1.0.0-cqadupstack-stats-splade-distil-cocodenser-medium.yaml
+++ /dev/null
@@ -1,55 +0,0 @@
----
-corpus: beir-v1.0.0-cqadupstack-stats-splade_distil_cocodenser_medium
-corpus_path: collections/beir-v1.0.0/splade_distil_cocodenser_medium/cqadupstack-stats
-
-index_path: indexes/lucene-index.beir-v1.0.0-cqadupstack-stats-splade_distil_cocodenser_medium/
-collection_class: JsonVectorCollection
-generator_class: DefaultLuceneDocumentGenerator
-index_threads: 16
-index_options: -impact -pretokenized
-index_stats:
- documents: 42269
- documents (non-empty): 42269
- total terms: 333590386
-
-metrics:
- - metric: nDCG@10
- command: target/appassembler/bin/trec_eval
- params: -c -m ndcg_cut.10
- separator: "\t"
- parse_index: 2
- metric_precision: 4
- can_combine: false
- - metric: R@100
- command: target/appassembler/bin/trec_eval
- params: -c -m recall.100
- separator: "\t"
- parse_index: 2
- metric_precision: 4
- can_combine: false
- - metric: R@1000
- command: target/appassembler/bin/trec_eval
- params: -c -m recall.1000
- separator: "\t"
- parse_index: 2
- metric_precision: 4
- can_combine: false
-
-topic_reader: TsvString
-topics:
- - name: "BEIR (v1.0.0): CQADupStack-stats"
- id: test
- path: topics.beir-v1.0.0-cqadupstack-stats.test.splade_distil_cocodenser_medium.tsv.gz
- qrel: qrels.beir-v1.0.0-cqadupstack-stats.test.txt
-
-models:
- - name: splade_distil_cocodenser_medium
- display: SPLADE-distill CoCodenser Medium
- params: -impact -pretokenized -removeQuery -hits 1000
- results:
- nDCG@10:
- - 0.3142
- R@100:
- - 0.5889
- R@1000:
- - 0.7823
diff --git a/src/main/resources/regression/beir-v1.0.0-cqadupstack-tex-splade-distil-cocodenser-medium.yaml b/src/main/resources/regression/beir-v1.0.0-cqadupstack-tex-splade-distil-cocodenser-medium.yaml
deleted file mode 100644
index 1d78d6dddf..0000000000
--- a/src/main/resources/regression/beir-v1.0.0-cqadupstack-tex-splade-distil-cocodenser-medium.yaml
+++ /dev/null
@@ -1,55 +0,0 @@
----
-corpus: beir-v1.0.0-cqadupstack-tex-splade_distil_cocodenser_medium
-corpus_path: collections/beir-v1.0.0/splade_distil_cocodenser_medium/cqadupstack-tex
-
-index_path: indexes/lucene-index.beir-v1.0.0-cqadupstack-tex-splade_distil_cocodenser_medium/
-collection_class: JsonVectorCollection
-generator_class: DefaultLuceneDocumentGenerator
-index_threads: 16
-index_options: -impact -pretokenized
-index_stats:
- documents: 68184
- documents (non-empty): 68184
- total terms: 604604076
-
-metrics:
- - metric: nDCG@10
- command: target/appassembler/bin/trec_eval
- params: -c -m ndcg_cut.10
- separator: "\t"
- parse_index: 2
- metric_precision: 4
- can_combine: false
- - metric: R@100
- command: target/appassembler/bin/trec_eval
- params: -c -m recall.100
- separator: "\t"
- parse_index: 2
- metric_precision: 4
- can_combine: false
- - metric: R@1000
- command: target/appassembler/bin/trec_eval
- params: -c -m recall.1000
- separator: "\t"
- parse_index: 2
- metric_precision: 4
- can_combine: false
-
-topic_reader: TsvString
-topics:
- - name: "BEIR (v1.0.0): CQADupStack-tex"
- id: test
- path: topics.beir-v1.0.0-cqadupstack-tex.test.splade_distil_cocodenser_medium.tsv.gz
- qrel: qrels.beir-v1.0.0-cqadupstack-tex.test.txt
-
-models:
- - name: splade_distil_cocodenser_medium
- display: SPLADE-distill CoCodenser Medium
- params: -impact -pretokenized -removeQuery -hits 1000
- results:
- nDCG@10:
- - 0.2575
- R@100:
- - 0.5231
- R@1000:
- - 0.7372
diff --git a/src/main/resources/regression/beir-v1.0.0-cqadupstack-unix-splade-distil-cocodenser-medium.yaml b/src/main/resources/regression/beir-v1.0.0-cqadupstack-unix-splade-distil-cocodenser-medium.yaml
deleted file mode 100644
index 119c9d7961..0000000000
--- a/src/main/resources/regression/beir-v1.0.0-cqadupstack-unix-splade-distil-cocodenser-medium.yaml
+++ /dev/null
@@ -1,55 +0,0 @@
----
-corpus: beir-v1.0.0-cqadupstack-unix-splade_distil_cocodenser_medium
-corpus_path: collections/beir-v1.0.0/splade_distil_cocodenser_medium/cqadupstack-unix
-
-index_path: indexes/lucene-index.beir-v1.0.0-cqadupstack-unix-splade_distil_cocodenser_medium/
-collection_class: JsonVectorCollection
-generator_class: DefaultLuceneDocumentGenerator
-index_threads: 16
-index_options: -impact -pretokenized
-index_stats:
- documents: 47382
- documents (non-empty): 47382
- total terms: 369576280
-
-metrics:
- - metric: nDCG@10
- command: target/appassembler/bin/trec_eval
- params: -c -m ndcg_cut.10
- separator: "\t"
- parse_index: 2
- metric_precision: 4
- can_combine: false
- - metric: R@100
- command: target/appassembler/bin/trec_eval
- params: -c -m recall.100
- separator: "\t"
- parse_index: 2
- metric_precision: 4
- can_combine: false
- - metric: R@1000
- command: target/appassembler/bin/trec_eval
- params: -c -m recall.1000
- separator: "\t"
- parse_index: 2
- metric_precision: 4
- can_combine: false
-
-topic_reader: TsvString
-topics:
- - name: "BEIR (v1.0.0): CQADupStack-unix"
- id: test
- path: topics.beir-v1.0.0-cqadupstack-unix.test.splade_distil_cocodenser_medium.tsv.gz
- qrel: qrels.beir-v1.0.0-cqadupstack-unix.test.txt
-
-models:
- - name: splade_distil_cocodenser_medium
- display: SPLADE-distill CoCodenser Medium
- params: -impact -pretokenized -removeQuery -hits 1000
- results:
- nDCG@10:
- - 0.3292
- R@100:
- - 0.6192
- R@1000:
- - 0.8225
diff --git a/src/main/resources/regression/beir-v1.0.0-cqadupstack-webmasters-splade-distil-cocodenser-medium.yaml b/src/main/resources/regression/beir-v1.0.0-cqadupstack-webmasters-splade-distil-cocodenser-medium.yaml
deleted file mode 100644
index 30169676b6..0000000000
--- a/src/main/resources/regression/beir-v1.0.0-cqadupstack-webmasters-splade-distil-cocodenser-medium.yaml
+++ /dev/null
@@ -1,55 +0,0 @@
----
-corpus: beir-v1.0.0-cqadupstack-webmasters-splade_distil_cocodenser_medium
-corpus_path: collections/beir-v1.0.0/splade_distil_cocodenser_medium/cqadupstack-webmasters
-
-index_path: indexes/lucene-index.beir-v1.0.0-cqadupstack-webmasters-splade_distil_cocodenser_medium/
-collection_class: JsonVectorCollection
-generator_class: DefaultLuceneDocumentGenerator
-index_threads: 16
-index_options: -impact -pretokenized
-index_stats:
- documents: 17405
- documents (non-empty): 17405
- total terms: 127823828
-
-metrics:
- - metric: nDCG@10
- command: target/appassembler/bin/trec_eval
- params: -c -m ndcg_cut.10
- separator: "\t"
- parse_index: 2
- metric_precision: 4
- can_combine: false
- - metric: R@100
- command: target/appassembler/bin/trec_eval
- params: -c -m recall.100
- separator: "\t"
- parse_index: 2
- metric_precision: 4
- can_combine: false
- - metric: R@1000
- command: target/appassembler/bin/trec_eval
- params: -c -m recall.1000
- separator: "\t"
- parse_index: 2
- metric_precision: 4
- can_combine: false
-
-topic_reader: TsvString
-topics:
- - name: "BEIR (v1.0.0): CQADupStack-webmasters"
- id: test
- path: topics.beir-v1.0.0-cqadupstack-webmasters.test.splade_distil_cocodenser_medium.tsv.gz
- qrel: qrels.beir-v1.0.0-cqadupstack-webmasters.test.txt
-
-models:
- - name: splade_distil_cocodenser_medium
- display: SPLADE-distill CoCodenser Medium
- params: -impact -pretokenized -removeQuery -hits 1000
- results:
- nDCG@10:
- - 0.3343
- R@100:
- - 0.6404
- R@1000:
- - 0.8767
diff --git a/src/main/resources/regression/beir-v1.0.0-cqadupstack-wordpress-splade-distil-cocodenser-medium.yaml b/src/main/resources/regression/beir-v1.0.0-cqadupstack-wordpress-splade-distil-cocodenser-medium.yaml
deleted file mode 100644
index 8564352d8b..0000000000
--- a/src/main/resources/regression/beir-v1.0.0-cqadupstack-wordpress-splade-distil-cocodenser-medium.yaml
+++ /dev/null
@@ -1,55 +0,0 @@
----
-corpus: beir-v1.0.0-cqadupstack-wordpress-splade_distil_cocodenser_medium
-corpus_path: collections/beir-v1.0.0/splade_distil_cocodenser_medium/cqadupstack-wordpress
-
-index_path: indexes/lucene-index.beir-v1.0.0-cqadupstack-wordpress-splade_distil_cocodenser_medium/
-collection_class: JsonVectorCollection
-generator_class: DefaultLuceneDocumentGenerator
-index_threads: 16
-index_options: -impact -pretokenized
-index_stats:
- documents: 48605
- documents (non-empty): 48605
- total terms: 362488001
-
-metrics:
- - metric: nDCG@10
- command: target/appassembler/bin/trec_eval
- params: -c -m ndcg_cut.10
- separator: "\t"
- parse_index: 2
- metric_precision: 4
- can_combine: false
- - metric: R@100
- command: target/appassembler/bin/trec_eval
- params: -c -m recall.100
- separator: "\t"
- parse_index: 2
- metric_precision: 4
- can_combine: false
- - metric: R@1000
- command: target/appassembler/bin/trec_eval
- params: -c -m recall.1000
- separator: "\t"
- parse_index: 2
- metric_precision: 4
- can_combine: false
-
-topic_reader: TsvString
-topics:
- - name: "BEIR (v1.0.0): CQADupStack-wordpress"
- id: test
- path: topics.beir-v1.0.0-cqadupstack-wordpress.test.splade_distil_cocodenser_medium.tsv.gz
- qrel: qrels.beir-v1.0.0-cqadupstack-wordpress.test.txt
-
-models:
- - name: splade_distil_cocodenser_medium
- display: SPLADE-distill CoCodenser Medium
- params: -impact -pretokenized -removeQuery -hits 1000
- results:
- nDCG@10:
- - 0.2839
- R@100:
- - 0.5974
- R@1000:
- - 0.8036
diff --git a/src/main/resources/regression/beir-v1.0.0-dbpedia-entity-splade-distil-cocodenser-medium.yaml b/src/main/resources/regression/beir-v1.0.0-dbpedia-entity-splade-distil-cocodenser-medium.yaml
deleted file mode 100644
index 13ad8712b4..0000000000
--- a/src/main/resources/regression/beir-v1.0.0-dbpedia-entity-splade-distil-cocodenser-medium.yaml
+++ /dev/null
@@ -1,55 +0,0 @@
----
-corpus: beir-v1.0.0-dbpedia-entity-splade_distil_cocodenser_medium
-corpus_path: collections/beir-v1.0.0/splade_distil_cocodenser_medium/dbpedia-entity
-
-index_path: indexes/lucene-index.beir-v1.0.0-dbpedia-entity-splade_distil_cocodenser_medium/
-collection_class: JsonVectorCollection
-generator_class: DefaultLuceneDocumentGenerator
-index_threads: 16
-index_options: -impact -pretokenized
-index_stats:
- documents: 4635922
- documents (non-empty): 4635922
- total terms: 30490098411
-
-metrics:
- - metric: nDCG@10
- command: target/appassembler/bin/trec_eval
- params: -c -m ndcg_cut.10
- separator: "\t"
- parse_index: 2
- metric_precision: 4
- can_combine: false
- - metric: R@100
- command: target/appassembler/bin/trec_eval
- params: -c -m recall.100
- separator: "\t"
- parse_index: 2
- metric_precision: 4
- can_combine: false
- - metric: R@1000
- command: target/appassembler/bin/trec_eval
- params: -c -m recall.1000
- separator: "\t"
- parse_index: 2
- metric_precision: 4
- can_combine: false
-
-topic_reader: TsvString
-topics:
- - name: "BEIR (v1.0.0): DBPedia"
- id: test
- path: topics.beir-v1.0.0-dbpedia-entity.test.splade_distil_cocodenser_medium.tsv.gz
- qrel: qrels.beir-v1.0.0-dbpedia-entity.test.txt
-
-models:
- - name: splade_distil_cocodenser_medium
- display: SPLADE-distill CoCodenser Medium
- params: -impact -pretokenized -removeQuery -hits 1000
- results:
- nDCG@10:
- - 0.4416
- R@100:
- - 0.5636
- R@1000:
- - 0.7774
diff --git a/src/main/resources/regression/beir-v1.0.0-fever-splade-distil-cocodenser-medium.yaml b/src/main/resources/regression/beir-v1.0.0-fever-splade-distil-cocodenser-medium.yaml
deleted file mode 100644
index d6fea7d9b4..0000000000
--- a/src/main/resources/regression/beir-v1.0.0-fever-splade-distil-cocodenser-medium.yaml
+++ /dev/null
@@ -1,55 +0,0 @@
----
-corpus: beir-v1.0.0-fever-splade_distil_cocodenser_medium
-corpus_path: collections/beir-v1.0.0/splade_distil_cocodenser_medium/fever
-
-index_path: indexes/lucene-index.beir-v1.0.0-fever-splade_distil_cocodenser_medium/
-collection_class: JsonVectorCollection
-generator_class: DefaultLuceneDocumentGenerator
-index_threads: 16
-index_options: -impact -pretokenized
-index_stats:
- documents: 5416568
- documents (non-empty): 5416568
- total terms: 38844967407
-
-metrics:
- - metric: nDCG@10
- command: target/appassembler/bin/trec_eval
- params: -c -m ndcg_cut.10
- separator: "\t"
- parse_index: 2
- metric_precision: 4
- can_combine: false
- - metric: R@100
- command: target/appassembler/bin/trec_eval
- params: -c -m recall.100
- separator: "\t"
- parse_index: 2
- metric_precision: 4
- can_combine: false
- - metric: R@1000
- command: target/appassembler/bin/trec_eval
- params: -c -m recall.1000
- separator: "\t"
- parse_index: 2
- metric_precision: 4
- can_combine: false
-
-topic_reader: TsvString
-topics:
- - name: "BEIR (v1.0.0): FEVER"
- id: test
- path: topics.beir-v1.0.0-fever.test.splade_distil_cocodenser_medium.tsv.gz
- qrel: qrels.beir-v1.0.0-fever.test.txt
-
-models:
- - name: splade_distil_cocodenser_medium
- display: SPLADE-distill CoCodenser Medium
- params: -impact -pretokenized -removeQuery -hits 1000
- results:
- nDCG@10:
- - 0.7962
- R@100:
- - 0.9550
- R@1000:
- - 0.9751
diff --git a/src/main/resources/regression/beir-v1.0.0-fiqa-splade-distil-cocodenser-medium.yaml b/src/main/resources/regression/beir-v1.0.0-fiqa-splade-distil-cocodenser-medium.yaml
deleted file mode 100644
index a6abe997b2..0000000000
--- a/src/main/resources/regression/beir-v1.0.0-fiqa-splade-distil-cocodenser-medium.yaml
+++ /dev/null
@@ -1,55 +0,0 @@
----
-corpus: beir-v1.0.0-fiqa-splade_distil_cocodenser_medium
-corpus_path: collections/beir-v1.0.0/splade_distil_cocodenser_medium/fiqa
-
-index_path: indexes/lucene-index.beir-v1.0.0-fiqa-splade_distil_cocodenser_medium/
-collection_class: JsonVectorCollection
-generator_class: DefaultLuceneDocumentGenerator
-index_threads: 16
-index_options: -impact -pretokenized
-index_stats:
- documents: 57638
- documents (non-empty): 57638
- total terms: 487502241
-
-metrics:
- - metric: nDCG@10
- command: target/appassembler/bin/trec_eval
- params: -c -m ndcg_cut.10
- separator: "\t"
- parse_index: 2
- metric_precision: 4
- can_combine: false
- - metric: R@100
- command: target/appassembler/bin/trec_eval
- params: -c -m recall.100
- separator: "\t"
- parse_index: 2
- metric_precision: 4
- can_combine: false
- - metric: R@1000
- command: target/appassembler/bin/trec_eval
- params: -c -m recall.1000
- separator: "\t"
- parse_index: 2
- metric_precision: 4
- can_combine: false
-
-topic_reader: TsvString
-topics:
- - name: "BEIR (v1.0.0): FiQA-2018"
- id: test
- path: topics.beir-v1.0.0-fiqa.test.splade_distil_cocodenser_medium.tsv.gz
- qrel: qrels.beir-v1.0.0-fiqa.test.txt
-
-models:
- - name: splade_distil_cocodenser_medium
- display: SPLADE-distill CoCodenser Medium
- params: -impact -pretokenized -removeQuery -hits 1000
- results:
- nDCG@10:
- - 0.3514
- R@100:
- - 0.6298
- R@1000:
- - 0.8323
diff --git a/src/main/resources/regression/beir-v1.0.0-hotpotqa-splade-distil-cocodenser-medium.yaml b/src/main/resources/regression/beir-v1.0.0-hotpotqa-splade-distil-cocodenser-medium.yaml
deleted file mode 100644
index bfd6508935..0000000000
--- a/src/main/resources/regression/beir-v1.0.0-hotpotqa-splade-distil-cocodenser-medium.yaml
+++ /dev/null
@@ -1,55 +0,0 @@
----
-corpus: beir-v1.0.0-hotpotqa-splade_distil_cocodenser_medium
-corpus_path: collections/beir-v1.0.0/splade_distil_cocodenser_medium/hotpotqa
-
-index_path: indexes/lucene-index.beir-v1.0.0-hotpotqa-splade_distil_cocodenser_medium/
-collection_class: JsonVectorCollection
-generator_class: DefaultLuceneDocumentGenerator
-index_threads: 16
-index_options: -impact -pretokenized
-index_stats:
- documents: 5233329
- documents (non-empty): 5233329
- total terms: 32565190895
-
-metrics:
- - metric: nDCG@10
- command: target/appassembler/bin/trec_eval
- params: -c -m ndcg_cut.10
- separator: "\t"
- parse_index: 2
- metric_precision: 4
- can_combine: false
- - metric: R@100
- command: target/appassembler/bin/trec_eval
- params: -c -m recall.100
- separator: "\t"
- parse_index: 2
- metric_precision: 4
- can_combine: false
- - metric: R@1000
- command: target/appassembler/bin/trec_eval
- params: -c -m recall.1000
- separator: "\t"
- parse_index: 2
- metric_precision: 4
- can_combine: false
-
-topic_reader: TsvString
-topics:
- - name: "BEIR (v1.0.0): HotpotQA"
- id: test
- path: topics.beir-v1.0.0-hotpotqa.test.splade_distil_cocodenser_medium.tsv.gz
- qrel: qrels.beir-v1.0.0-hotpotqa.test.txt
-
-models:
- - name: splade_distil_cocodenser_medium
- display: SPLADE-distill CoCodenser Medium
- params: -impact -pretokenized -removeQuery -hits 1000
- results:
- nDCG@10:
- - 0.6860
- R@100:
- - 0.8144
- R@1000:
- - 0.8945
diff --git a/src/main/resources/regression/beir-v1.0.0-nfcorpus-splade-distil-cocodenser-medium.yaml b/src/main/resources/regression/beir-v1.0.0-nfcorpus-splade-distil-cocodenser-medium.yaml
deleted file mode 100644
index b72c936df6..0000000000
--- a/src/main/resources/regression/beir-v1.0.0-nfcorpus-splade-distil-cocodenser-medium.yaml
+++ /dev/null
@@ -1,55 +0,0 @@
----
-corpus: beir-v1.0.0-nfcorpus-splade_distil_cocodenser_medium
-corpus_path: collections/beir-v1.0.0/splade_distil_cocodenser_medium/nfcorpus
-
-index_path: indexes/lucene-index.beir-v1.0.0-nfcorpus-splade_distil_cocodenser_medium/
-collection_class: JsonVectorCollection
-generator_class: DefaultLuceneDocumentGenerator
-index_threads: 16
-index_options: -impact -pretokenized
-index_stats:
- documents: 3633
- documents (non-empty): 3633
- total terms: 41582222
-
-metrics:
- - metric: nDCG@10
- command: target/appassembler/bin/trec_eval
- params: -c -m ndcg_cut.10
- separator: "\t"
- parse_index: 2
- metric_precision: 4
- can_combine: false
- - metric: R@100
- command: target/appassembler/bin/trec_eval
- params: -c -m recall.100
- separator: "\t"
- parse_index: 2
- metric_precision: 4
- can_combine: false
- - metric: R@1000
- command: target/appassembler/bin/trec_eval
- params: -c -m recall.1000
- separator: "\t"
- parse_index: 2
- metric_precision: 4
- can_combine: false
-
-topic_reader: TsvString
-topics:
- - name: "BEIR (v1.0.0): NFCorpus"
- id: test
- path: topics.beir-v1.0.0-nfcorpus.test.splade_distil_cocodenser_medium.tsv.gz
- qrel: qrels.beir-v1.0.0-nfcorpus.test.txt
-
-models:
- - name: splade_distil_cocodenser_medium
- display: SPLADE-distill CoCodenser Medium
- params: -impact -pretokenized -removeQuery -hits 1000
- results:
- nDCG@10:
- - 0.3454
- R@100:
- - 0.2891
- R@1000:
- - 0.5694
diff --git a/src/main/resources/regression/beir-v1.0.0-nq-splade-distil-cocodenser-medium.yaml b/src/main/resources/regression/beir-v1.0.0-nq-splade-distil-cocodenser-medium.yaml
deleted file mode 100644
index ce35769bd7..0000000000
--- a/src/main/resources/regression/beir-v1.0.0-nq-splade-distil-cocodenser-medium.yaml
+++ /dev/null
@@ -1,55 +0,0 @@
----
-corpus: beir-v1.0.0-nq-splade_distil_cocodenser_medium
-corpus_path: collections/beir-v1.0.0/splade_distil_cocodenser_medium/nq
-
-index_path: indexes/lucene-index.beir-v1.0.0-nq-splade_distil_cocodenser_medium/
-collection_class: JsonVectorCollection
-generator_class: DefaultLuceneDocumentGenerator
-index_threads: 16
-index_options: -impact -pretokenized
-index_stats:
- documents: 2681468
- documents (non-empty): 2681468
- total terms: 21901570532
-
-metrics:
- - metric: nDCG@10
- command: target/appassembler/bin/trec_eval
- params: -c -m ndcg_cut.10
- separator: "\t"
- parse_index: 2
- metric_precision: 4
- can_combine: false
- - metric: R@100
- command: target/appassembler/bin/trec_eval
- params: -c -m recall.100
- separator: "\t"
- parse_index: 2
- metric_precision: 4
- can_combine: false
- - metric: R@1000
- command: target/appassembler/bin/trec_eval
- params: -c -m recall.1000
- separator: "\t"
- parse_index: 2
- metric_precision: 4
- can_combine: false
-
-topic_reader: TsvString
-topics:
- - name: "BEIR (v1.0.0): NQ"
- id: test
- path: topics.beir-v1.0.0-nq.test.splade_distil_cocodenser_medium.tsv.gz
- qrel: qrels.beir-v1.0.0-nq.test.txt
-
-models:
- - name: splade_distil_cocodenser_medium
- display: SPLADE-distill CoCodenser Medium
- params: -impact -pretokenized -removeQuery -hits 1000
- results:
- nDCG@10:
- - 0.5442
- R@100:
- - 0.9285
- R@1000:
- - 0.9812
diff --git a/src/main/resources/regression/beir-v1.0.0-quora-splade-distil-cocodenser-medium.yaml b/src/main/resources/regression/beir-v1.0.0-quora-splade-distil-cocodenser-medium.yaml
deleted file mode 100644
index 0d876c437f..0000000000
--- a/src/main/resources/regression/beir-v1.0.0-quora-splade-distil-cocodenser-medium.yaml
+++ /dev/null
@@ -1,55 +0,0 @@
----
-corpus: beir-v1.0.0-quora-splade_distil_cocodenser_medium
-corpus_path: collections/beir-v1.0.0/splade_distil_cocodenser_medium/quora
-
-index_path: indexes/lucene-index.beir-v1.0.0-quora-splade_distil_cocodenser_medium/
-collection_class: JsonVectorCollection
-generator_class: DefaultLuceneDocumentGenerator
-index_threads: 16
-index_options: -impact -pretokenized
-index_stats:
- documents: 522931
- documents (non-empty): 522931
- total terms: 1322737004
-
-metrics:
- - metric: nDCG@10
- command: target/appassembler/bin/trec_eval
- params: -c -m ndcg_cut.10
- separator: "\t"
- parse_index: 2
- metric_precision: 4
- can_combine: false
- - metric: R@100
- command: target/appassembler/bin/trec_eval
- params: -c -m recall.100
- separator: "\t"
- parse_index: 2
- metric_precision: 4
- can_combine: false
- - metric: R@1000
- command: target/appassembler/bin/trec_eval
- params: -c -m recall.1000
- separator: "\t"
- parse_index: 2
- metric_precision: 4
- can_combine: false
-
-topic_reader: TsvString
-topics:
- - name: "BEIR (v1.0.0): Quora"
- id: test
- path: topics.beir-v1.0.0-quora.test.splade_distil_cocodenser_medium.tsv.gz
- qrel: qrels.beir-v1.0.0-quora.test.txt
-
-models:
- - name: splade_distil_cocodenser_medium
- display: SPLADE-distill CoCodenser Medium
- params: -impact -pretokenized -removeQuery -hits 1000
- results:
- nDCG@10:
- - 0.8136
- R@100:
- - 0.9817
- R@1000:
- - 0.9979
diff --git a/src/main/resources/regression/beir-v1.0.0-robust04-splade-distil-cocodenser-medium.yaml b/src/main/resources/regression/beir-v1.0.0-robust04-splade-distil-cocodenser-medium.yaml
deleted file mode 100644
index 5bc2c89eb7..0000000000
--- a/src/main/resources/regression/beir-v1.0.0-robust04-splade-distil-cocodenser-medium.yaml
+++ /dev/null
@@ -1,55 +0,0 @@
----
-corpus: beir-v1.0.0-robust04-splade_distil_cocodenser_medium
-corpus_path: collections/beir-v1.0.0/splade_distil_cocodenser_medium/robust04
-
-index_path: indexes/lucene-index.beir-v1.0.0-robust04-splade_distil_cocodenser_medium/
-collection_class: JsonVectorCollection
-generator_class: DefaultLuceneDocumentGenerator
-index_threads: 16
-index_options: -impact -pretokenized
-index_stats:
- documents: 528155
- documents (non-empty): 528155
- total terms: 6718533167
-
-metrics:
- - metric: nDCG@10
- command: target/appassembler/bin/trec_eval
- params: -c -m ndcg_cut.10
- separator: "\t"
- parse_index: 2
- metric_precision: 4
- can_combine: false
- - metric: R@100
- command: target/appassembler/bin/trec_eval
- params: -c -m recall.100
- separator: "\t"
- parse_index: 2
- metric_precision: 4
- can_combine: false
- - metric: R@1000
- command: target/appassembler/bin/trec_eval
- params: -c -m recall.1000
- separator: "\t"
- parse_index: 2
- metric_precision: 4
- can_combine: false
-
-topic_reader: TsvString
-topics:
- - name: "BEIR (v1.0.0): Robust04"
- id: test
- path: topics.beir-v1.0.0-robust04.test.splade_distil_cocodenser_medium.tsv.gz
- qrel: qrels.beir-v1.0.0-robust04.test.txt
-
-models:
- - name: splade_distil_cocodenser_medium
- display: SPLADE-distill CoCodenser Medium
- params: -impact -pretokenized -removeQuery -hits 1000
- results:
- nDCG@10:
- - 0.4581
- R@100:
- - 0.3773
- R@1000:
- - 0.6099
diff --git a/src/main/resources/regression/beir-v1.0.0-scidocs-splade-distil-cocodenser-medium.yaml b/src/main/resources/regression/beir-v1.0.0-scidocs-splade-distil-cocodenser-medium.yaml
deleted file mode 100644
index fd1e9c5298..0000000000
--- a/src/main/resources/regression/beir-v1.0.0-scidocs-splade-distil-cocodenser-medium.yaml
+++ /dev/null
@@ -1,55 +0,0 @@
----
-corpus: beir-v1.0.0-scidocs-splade_distil_cocodenser_medium
-corpus_path: collections/beir-v1.0.0/splade_distil_cocodenser_medium/scidocs
-
-index_path: indexes/lucene-index.beir-v1.0.0-scidocs-splade_distil_cocodenser_medium/
-collection_class: JsonVectorCollection
-generator_class: DefaultLuceneDocumentGenerator
-index_threads: 16
-index_options: -impact -pretokenized
-index_stats:
- documents: 25657
- documents (non-empty): 25657
- total terms: 273175826
-
-metrics:
- - metric: nDCG@10
- command: target/appassembler/bin/trec_eval
- params: -c -m ndcg_cut.10
- separator: "\t"
- parse_index: 2
- metric_precision: 4
- can_combine: false
- - metric: R@100
- command: target/appassembler/bin/trec_eval
- params: -c -m recall.100
- separator: "\t"
- parse_index: 2
- metric_precision: 4
- can_combine: false
- - metric: R@1000
- command: target/appassembler/bin/trec_eval
- params: -c -m recall.1000
- separator: "\t"
- parse_index: 2
- metric_precision: 4
- can_combine: false
-
-topic_reader: TsvString
-topics:
- - name: "BEIR (v1.0.0): SCIDOCS"
- id: test
- path: topics.beir-v1.0.0-scidocs.test.splade_distil_cocodenser_medium.tsv.gz
- qrel: qrels.beir-v1.0.0-scidocs.test.txt
-
-models:
- - name: splade_distil_cocodenser_medium
- display: SPLADE-distill CoCodenser Medium
- params: -impact -pretokenized -removeQuery -hits 1000
- results:
- nDCG@10:
- - 0.1590
- R@100:
- - 0.3671
- R@1000:
- - 0.5891
diff --git a/src/main/resources/regression/beir-v1.0.0-scifact-splade-distil-cocodenser-medium.yaml b/src/main/resources/regression/beir-v1.0.0-scifact-splade-distil-cocodenser-medium.yaml
deleted file mode 100644
index 377f9d3093..0000000000
--- a/src/main/resources/regression/beir-v1.0.0-scifact-splade-distil-cocodenser-medium.yaml
+++ /dev/null
@@ -1,55 +0,0 @@
----
-corpus: beir-v1.0.0-scifact-splade_distil_cocodenser_medium
-corpus_path: collections/beir-v1.0.0/splade_distil_cocodenser_medium/scifact
-
-index_path: indexes/lucene-index.beir-v1.0.0-scifact-splade_distil_cocodenser_medium/
-collection_class: JsonVectorCollection
-generator_class: DefaultLuceneDocumentGenerator
-index_threads: 16
-index_options: -impact -pretokenized
-index_stats:
- documents: 5183
- documents (non-empty): 5183
- total terms: 65836037
-
-metrics:
- - metric: nDCG@10
- command: target/appassembler/bin/trec_eval
- params: -c -m ndcg_cut.10
- separator: "\t"
- parse_index: 2
- metric_precision: 4
- can_combine: false
- - metric: R@100
- command: target/appassembler/bin/trec_eval
- params: -c -m recall.100
- separator: "\t"
- parse_index: 2
- metric_precision: 4
- can_combine: false
- - metric: R@1000
- command: target/appassembler/bin/trec_eval
- params: -c -m recall.1000
- separator: "\t"
- parse_index: 2
- metric_precision: 4
- can_combine: false
-
-topic_reader: TsvString
-topics:
- - name: "BEIR (v1.0.0): SciFact"
- id: test
- path: topics.beir-v1.0.0-scifact.test.splade_distil_cocodenser_medium.tsv.gz
- qrel: qrels.beir-v1.0.0-scifact.test.txt
-
-models:
- - name: splade_distil_cocodenser_medium
- display: SPLADE-distill CoCodenser Medium
- params: -impact -pretokenized -removeQuery -hits 1000
- results:
- nDCG@10:
- - 0.6992
- R@100:
- - 0.9270
- R@1000:
- - 0.9767
diff --git a/src/main/resources/regression/beir-v1.0.0-signal1m-splade-distil-cocodenser-medium.yaml b/src/main/resources/regression/beir-v1.0.0-signal1m-splade-distil-cocodenser-medium.yaml
deleted file mode 100644
index 220e583cd3..0000000000
--- a/src/main/resources/regression/beir-v1.0.0-signal1m-splade-distil-cocodenser-medium.yaml
+++ /dev/null
@@ -1,55 +0,0 @@
----
-corpus: beir-v1.0.0-signal1m-splade_distil_cocodenser_medium
-corpus_path: collections/beir-v1.0.0/splade_distil_cocodenser_medium/signal1m
-
-index_path: indexes/lucene-index.beir-v1.0.0-signal1m-splade_distil_cocodenser_medium/
-collection_class: JsonVectorCollection
-generator_class: DefaultLuceneDocumentGenerator
-index_threads: 16
-index_options: -impact -pretokenized
-index_stats:
- documents: 2866316
- documents (non-empty): 2866316
- total terms: 13103073741
-
-metrics:
- - metric: nDCG@10
- command: target/appassembler/bin/trec_eval
- params: -c -m ndcg_cut.10
- separator: "\t"
- parse_index: 2
- metric_precision: 4
- can_combine: false
- - metric: R@100
- command: target/appassembler/bin/trec_eval
- params: -c -m recall.100
- separator: "\t"
- parse_index: 2
- metric_precision: 4
- can_combine: false
- - metric: R@1000
- command: target/appassembler/bin/trec_eval
- params: -c -m recall.1000
- separator: "\t"
- parse_index: 2
- metric_precision: 4
- can_combine: false
-
-topic_reader: TsvString
-topics:
- - name: "BEIR (v1.0.0): Signal-1M"
- id: test
- path: topics.beir-v1.0.0-signal1m.test.splade_distil_cocodenser_medium.tsv.gz
- qrel: qrels.beir-v1.0.0-signal1m.test.txt
-
-models:
- - name: splade_distil_cocodenser_medium
- display: SPLADE-distill CoCodenser Medium
- params: -impact -pretokenized -removeQuery -hits 1000
- results:
- nDCG@10:
- - 0.2957
- R@100:
- - 0.3311
- R@1000:
- - 0.5514
diff --git a/src/main/resources/regression/beir-v1.0.0-trec-covid-splade-distil-cocodenser-medium.yaml b/src/main/resources/regression/beir-v1.0.0-trec-covid-splade-distil-cocodenser-medium.yaml
deleted file mode 100644
index 16cb94226a..0000000000
--- a/src/main/resources/regression/beir-v1.0.0-trec-covid-splade-distil-cocodenser-medium.yaml
+++ /dev/null
@@ -1,55 +0,0 @@
----
-corpus: beir-v1.0.0-trec-covid-splade_distil_cocodenser_medium
-corpus_path: collections/beir-v1.0.0/splade_distil_cocodenser_medium/trec-covid
-
-index_path: indexes/lucene-index.beir-v1.0.0-trec-covid-splade_distil_cocodenser_medium/
-collection_class: JsonVectorCollection
-generator_class: DefaultLuceneDocumentGenerator
-index_threads: 16
-index_options: -impact -pretokenized
-index_stats:
- documents: 171332
- documents (non-empty): 171332
- total terms: 1697942549
-
-metrics:
- - metric: nDCG@10
- command: target/appassembler/bin/trec_eval
- params: -c -m ndcg_cut.10
- separator: "\t"
- parse_index: 2
- metric_precision: 4
- can_combine: false
- - metric: R@100
- command: target/appassembler/bin/trec_eval
- params: -c -m recall.100
- separator: "\t"
- parse_index: 2
- metric_precision: 4
- can_combine: false
- - metric: R@1000
- command: target/appassembler/bin/trec_eval
- params: -c -m recall.1000
- separator: "\t"
- parse_index: 2
- metric_precision: 4
- can_combine: false
-
-topic_reader: TsvString
-topics:
- - name: "BEIR (v1.0.0): TREC-COVID"
- id: test
- path: topics.beir-v1.0.0-trec-covid.test.splade_distil_cocodenser_medium.tsv.gz
- qrel: qrels.beir-v1.0.0-trec-covid.test.txt
-
-models:
- - name: splade_distil_cocodenser_medium
- display: SPLADE-distill CoCodenser Medium
- params: -impact -pretokenized -removeQuery -hits 1000
- results:
- nDCG@10:
- - 0.7109
- R@100:
- - 0.1308
- R@1000:
- - 0.4433
diff --git a/src/main/resources/regression/beir-v1.0.0-trec-news-splade-distil-cocodenser-medium.yaml b/src/main/resources/regression/beir-v1.0.0-trec-news-splade-distil-cocodenser-medium.yaml
deleted file mode 100644
index 7b45773840..0000000000
--- a/src/main/resources/regression/beir-v1.0.0-trec-news-splade-distil-cocodenser-medium.yaml
+++ /dev/null
@@ -1,55 +0,0 @@
----
-corpus: beir-v1.0.0-trec-news-splade_distil_cocodenser_medium
-corpus_path: collections/beir-v1.0.0/splade_distil_cocodenser_medium/trec-news
-
-index_path: indexes/lucene-index.beir-v1.0.0-trec-news-splade_distil_cocodenser_medium/
-collection_class: JsonVectorCollection
-generator_class: DefaultLuceneDocumentGenerator
-index_threads: 16
-index_options: -impact -pretokenized
-index_stats:
- documents: 594977
- documents (non-empty): 594977
- total terms: 7519025445
-
-metrics:
- - metric: nDCG@10
- command: target/appassembler/bin/trec_eval
- params: -c -m ndcg_cut.10
- separator: "\t"
- parse_index: 2
- metric_precision: 4
- can_combine: false
- - metric: R@100
- command: target/appassembler/bin/trec_eval
- params: -c -m recall.100
- separator: "\t"
- parse_index: 2
- metric_precision: 4
- can_combine: false
- - metric: R@1000
- command: target/appassembler/bin/trec_eval
- params: -c -m recall.1000
- separator: "\t"
- parse_index: 2
- metric_precision: 4
- can_combine: false
-
-topic_reader: TsvString
-topics:
- - name: "BEIR (v1.0.0): TREC-NEWS"
- id: test
- path: topics.beir-v1.0.0-trec-news.test.splade_distil_cocodenser_medium.tsv.gz
- qrel: qrels.beir-v1.0.0-trec-news.test.txt
-
-models:
- - name: splade_distil_cocodenser_medium
- display: SPLADE-distill CoCodenser Medium
- params: -impact -pretokenized -removeQuery -hits 1000
- results:
- nDCG@10:
- - 0.3936
- R@100:
- - 0.4323
- R@1000:
- - 0.6977
diff --git a/src/main/resources/regression/beir-v1.0.0-webis-touche2020-splade-distil-cocodenser-medium.yaml b/src/main/resources/regression/beir-v1.0.0-webis-touche2020-splade-distil-cocodenser-medium.yaml
deleted file mode 100644
index dcd6f03edb..0000000000
--- a/src/main/resources/regression/beir-v1.0.0-webis-touche2020-splade-distil-cocodenser-medium.yaml
+++ /dev/null
@@ -1,55 +0,0 @@
----
-corpus: beir-v1.0.0-webis-touche2020-splade_distil_cocodenser_medium
-corpus_path: collections/beir-v1.0.0/splade_distil_cocodenser_medium/webis-touche2020
-
-index_path: indexes/lucene-index.beir-v1.0.0-webis-touche2020-splade_distil_cocodenser_medium/
-collection_class: JsonVectorCollection
-generator_class: DefaultLuceneDocumentGenerator
-index_threads: 16
-index_options: -impact -pretokenized
-index_stats:
- documents: 382545
- documents (non-empty): 382545
- total terms: 3229042324
-
-metrics:
- - metric: nDCG@10
- command: target/appassembler/bin/trec_eval
- params: -c -m ndcg_cut.10
- separator: "\t"
- parse_index: 2
- metric_precision: 4
- can_combine: false
- - metric: R@100
- command: target/appassembler/bin/trec_eval
- params: -c -m recall.100
- separator: "\t"
- parse_index: 2
- metric_precision: 4
- can_combine: false
- - metric: R@1000
- command: target/appassembler/bin/trec_eval
- params: -c -m recall.1000
- separator: "\t"
- parse_index: 2
- metric_precision: 4
- can_combine: false
-
-topic_reader: TsvString
-topics:
- - name: "BEIR (v1.0.0): Webis-Touche2020"
- id: test
- path: topics.beir-v1.0.0-webis-touche2020.test.splade_distil_cocodenser_medium.tsv.gz
- qrel: qrels.beir-v1.0.0-webis-touche2020.test.txt
-
-models:
- - name: splade_distil_cocodenser_medium
- display: SPLADE-distill CoCodenser Medium
- params: -impact -pretokenized -removeQuery -hits 1000
- results:
- nDCG@10:
- - 0.2435
- R@100:
- - 0.4723
- R@1000:
- - 0.8116
diff --git a/src/main/resources/regression/dl19-passage-splade-distil-cocodenser-medium.yaml b/src/main/resources/regression/dl19-passage-splade-distil-cocodenser-medium.yaml
deleted file mode 100644
index 2834e1ae16..0000000000
--- a/src/main/resources/regression/dl19-passage-splade-distil-cocodenser-medium.yaml
+++ /dev/null
@@ -1,91 +0,0 @@
----
-corpus: msmarco-passage-splade_distil_cocodenser_medium
-corpus_path: collections/msmarco/msmarco-passage-splade_distil_cocodenser_medium
-
-download_url: https://rgw.cs.uwaterloo.ca/JIMMYLIN-bucket0/data/msmarco-passage-splade_distil_cocodenser_medium.tar
-download_checksum: f77239a26d08856e6491a34062893b0c
-
-index_path: indexes/lucene-index.msmarco-passage-splade_distil_cocodenser_medium/
-collection_class: JsonVectorCollection
-generator_class: DefaultLuceneDocumentGenerator
-index_threads: 16
-index_options: -impact -pretokenized -storeDocvectors
-index_stats:
- documents: 8841823
- documents (non-empty): 8841823
- total terms: 54967294608
-
-metrics:
- - metric: AP@1000
- command: target/appassembler/bin/trec_eval
- params: -m map -c -l 2
- separator: "\t"
- parse_index: 2
- metric_precision: 4
- can_combine: false
- - metric: nDCG@10
- command: target/appassembler/bin/trec_eval
- params: -m ndcg_cut.10 -c
- separator: "\t"
- parse_index: 2
- metric_precision: 4
- can_combine: false
- - metric: R@100
- command: target/appassembler/bin/trec_eval
- params: -m recall.100 -c -l 2
- separator: "\t"
- parse_index: 2
- metric_precision: 4
- can_combine: false
- - metric: R@1000
- command: target/appassembler/bin/trec_eval
- params: -m recall.1000 -c -l 2
- separator: "\t"
- parse_index: 2
- metric_precision: 4
- can_combine: false
-
-topic_reader: TsvInt
-topics:
- - name: "[DL19 (Passage)](https://trec.nist.gov/data/deep2020.html)"
- id: dl19
- path: topics.dl19-passage.splade_distil_cocodenser_medium.tsv.gz
- qrel: qrels.dl19-passage.txt
-
-models:
- - name: splade_distil_cocodenser_medium
- display: SPLADE-distill CoCodenser Medium
- params: -impact -pretokenized
- results:
- AP@1000:
- - 0.4970
- nDCG@10:
- - 0.7425
- R@100:
- - 0.6344
- R@1000:
- - 0.8756
- - name: rm3
- display: +RM3
- params: -impact -pretokenized -rm3
- results:
- AP@1000:
- - 0.5194
- nDCG@10:
- - 0.7261
- R@100:
- - 0.6485
- R@1000:
- - 0.8736
- - name: rocchio
- display: +Rocchio
- params: -impact -pretokenized -rocchio
- results:
- AP@1000:
- - 0.5224
- nDCG@10:
- - 0.7316
- R@100:
- - 0.6533
- R@1000:
- - 0.8774
diff --git a/src/main/resources/regression/dl20-passage-splade-distil-cocodenser-medium.yaml b/src/main/resources/regression/dl20-passage-splade-distil-cocodenser-medium.yaml
deleted file mode 100644
index 904e441c92..0000000000
--- a/src/main/resources/regression/dl20-passage-splade-distil-cocodenser-medium.yaml
+++ /dev/null
@@ -1,91 +0,0 @@
----
-corpus: msmarco-passage-splade_distil_cocodenser_medium
-corpus_path: collections/msmarco/msmarco-passage-splade_distil_cocodenser_medium
-
-download_url: https://rgw.cs.uwaterloo.ca/JIMMYLIN-bucket0/data/msmarco-passage-splade_distil_cocodenser_medium.tar
-download_checksum: f77239a26d08856e6491a34062893b0c
-
-index_path: indexes/lucene-index.msmarco-passage-splade_distil_cocodenser_medium/
-collection_class: JsonVectorCollection
-generator_class: DefaultLuceneDocumentGenerator
-index_threads: 16
-index_options: -impact -pretokenized -storeDocvectors
-index_stats:
- documents: 8841823
- documents (non-empty): 8841823
- total terms: 54967294608
-
-metrics:
- - metric: AP@1000
- command: target/appassembler/bin/trec_eval
- params: -m map -c -l 2
- separator: "\t"
- parse_index: 2
- metric_precision: 4
- can_combine: false
- - metric: nDCG@10
- command: target/appassembler/bin/trec_eval
- params: -m ndcg_cut.10 -c
- separator: "\t"
- parse_index: 2
- metric_precision: 4
- can_combine: false
- - metric: R@100
- command: target/appassembler/bin/trec_eval
- params: -m recall.100 -c -l 2
- separator: "\t"
- parse_index: 2
- metric_precision: 4
- can_combine: false
- - metric: R@1000
- command: target/appassembler/bin/trec_eval
- params: -m recall.1000 -c -l 2
- separator: "\t"
- parse_index: 2
- metric_precision: 4
- can_combine: false
-
-topic_reader: TsvInt
-topics:
- - name: "[DL20 (Passage)](https://trec.nist.gov/data/deep2020.html)"
- id: dl20
- path: topics.dl20.splade_distil_cocodenser_medium.tsv.gz
- qrel: qrels.dl20-passage.txt
-
-models:
- - name: splade_distil_cocodenser_medium
- display: SPLADE-distill CoCodenser Medium
- params: -impact -pretokenized
- results:
- AP@1000:
- - 0.5019
- nDCG@10:
- - 0.7179
- R@100:
- - 0.7619
- R@1000:
- - 0.8901
- - name: rm3
- display: +RM3
- params: -impact -pretokenized -rm3
- results:
- AP@1000:
- - 0.5155
- nDCG@10:
- - 0.7132
- R@100:
- - 0.7553
- R@1000:
- - 0.9080
- - name: rocchio
- display: +Rocchio
- params: -impact -pretokenized -rocchio
- results:
- AP@1000:
- - 0.5133
- nDCG@10:
- - 0.7033
- R@100:
- - 0.7575
- R@1000:
- - 0.8937
diff --git a/src/main/resources/regression/msmarco-passage-splade-distil-cocodenser-medium.yaml b/src/main/resources/regression/msmarco-passage-splade-distil-cocodenser-medium.yaml
deleted file mode 100644
index 04140adef4..0000000000
--- a/src/main/resources/regression/msmarco-passage-splade-distil-cocodenser-medium.yaml
+++ /dev/null
@@ -1,94 +0,0 @@
----
-corpus: msmarco-passage-splade_distil_cocodenser_medium
-corpus_path: collections/msmarco/msmarco-passage-splade_distil_cocodenser_medium
-
-download_url: https://rgw.cs.uwaterloo.ca/JIMMYLIN-bucket0/data/msmarco-passage-splade_distil_cocodenser_medium.tar
-download_checksum: f77239a26d08856e6491a34062893b0c
-
-index_path: indexes/lucene-index.msmarco-passage-splade_distil_cocodenser_medium/
-collection_class: JsonVectorCollection
-generator_class: DefaultLuceneDocumentGenerator
-index_threads: 16
-index_options: -impact -pretokenized -storeDocvectors
-index_stats:
- documents: 8841823
- documents (non-empty): 8841823
- total terms: 54967294608
-
-metrics:
- - metric: AP@1000
- command: target/appassembler/bin/trec_eval
- params: -c -m map
- separator: "\t"
- parse_index: 2
- metric_precision: 4
- can_combine: false
- - metric: RR@10
- command: target/appassembler/bin/trec_eval
- params: -c -M 10 -m recip_rank
- separator: "\t"
- parse_index: 2
- metric_precision: 4
- can_combine: false
- - metric: R@100
- command: target/appassembler/bin/trec_eval
- params: -c -m recall.100
- separator: "\t"
- parse_index: 2
- metric_precision: 4
- can_combine: false
- - metric: R@1000
- command: target/appassembler/bin/trec_eval
- params: -c -m recall.1000
- separator: "\t"
- parse_index: 2
- metric_precision: 4
- can_combine: false
-
-topic_reader: TsvInt
-topics:
- - name: "[MS MARCO Passage: Dev](https://github.com/microsoft/MSMARCO-Passage-Ranking)"
- id: dev
- path: topics.msmarco-passage.dev-subset.splade_distil_cocodenser_medium.tsv.gz
- qrel: qrels.msmarco-passage.dev-subset.txt
-
-models:
- - name: splade_distil_cocodenser_medium
- display: SPLADE-distill CoCodenser Medium
- params: -impact -pretokenized
- results:
- AP@1000:
- - 0.3943
- RR@10:
- - 0.3892
- R@100:
- - 0.9111
- R@1000:
- - 0.9817
-# PRF regressions are no longer maintained for sparse judgments to reduce running times.
-# (commenting out instead of removing; in case these numbers are needed, just uncomment and rerun.)
-#
-# - name: rm3
-# display: +RM3
-# params: -impact -pretokenized -rm3
-# results:
-# AP@1000:
-# - 0.3020
-# RR@10:
-# - 0.2936
-# R@100:
-# - 0.8750
-# R@1000:
-# - 0.9750
-# - name: rocchio
-# display: +Rocchio
-# params: -impact -pretokenized -rocchio
-# results:
-# AP@1000:
-# - 0.3345
-# RR@10:
-# - 0.3279
-# R@100:
-# - 0.8911
-# R@1000:
-# - 0.9804