Program
ReNeuIR takes place on July 27, 2023. All times in the table below are in the local time zone.
| Time | Agenda |
| --- | --- |
| 09:00 - 09:30 | Opening and Welcome |
| 09:30 - 10:30 | Keynote by Omar Khattab |
| 10:30 - 11:00 | Coffee Break |
| 11:00 - 12:30 | Joint Poster Session w/ REML and GenIR |
| 12:30 - 13:30 | Lunch Break |
| 13:30 - 14:30 | Paper Presentations |
| 14:30 - 15:00 | TREC Track Proposal |
| 15:00 - 15:30 | Coffee Break |
| 15:30 - 17:00 | Break-out discussion on TREC Track Proposal |
Paper Session Detailed Schedule
Each paper is allotted 20 minutes for presentation, including Q&A.
- “Injecting Domain Adaptation with Learning-to-hash for Effective and Efficient Zero-shot Dense Retrieval.”
Nandan Thakur, Nils Reimers, and Jimmy Lin. (paper)
- “Towards Consistency Filtering-Free Unsupervised Learning for Dense Retrieval.”
Haoxiang Shi, Sumio Fujita, and Tetsuya Sakai. (paper)
- “The Information Retrieval Experiment Platform.”
Maik Fröbe, Jan Heinrich Reimer, Sean MacAvaney, Niklas Deckers, Simon Reich, Janek Bevendorff, Benno Stein, Matthias Hagen, and Martin Potthast. (paper)
ReNeuIR Posters
- “Attention over pre-trained Sentence Embeddings for Long Document Classification.” Amine Abdaoui and Sourav Dutta. (paper)
- “Retrieval for Extremely Long Queries and Documents with RPRS: a Highly Efficient and Effective Transformer-based Re-Ranker.” Arian Askari, Suzan Verberne, Amin Abolghasemi, Wessel Kraaij, and Gabriella Pasi. (paper, TOIS Submission)
- “Data Augmentation for Sample Efficient and Robust Document Ranking.” Abhijit Anand, Jurek Leonhardt, Jaspreet Singh, Koustav Rudra, Avishek Anand. (TOIS Submission)
- “A Static Pruning Study on Sparse Neural Retrievers.” Carlos Lassance, Simon Lupart, Hervé Dejean, Stéphane Clinchant, Nicola Tonellotto. (paper)
- “Adapting Learned Sparse Retrieval for Long Documents.” Thong Nguyen, Sean MacAvaney, Andrew Yates. (paper)
- “Efficient Neural Ranking using Forward Indexes and Lightweight Encoders.” Jurek Leonhardt, Koustav Rudra, Megha Khosla, Abhijit Anand, Avishek Anand. (paper)
TREC Track Proposal: An Efficiency-first Benchmark
We present our initiative to create an efficiency-first benchmark
for (neural) Information Retrieval research. We seek the community’s input on
key details of the framework in an ideation and discussion session at the event.
We invite you to read this
short problem description prior to the workshop, and we kindly ask you
to fill out this questionnaire to share your thoughts with us.