Keynote Speakers

We are thrilled to have Hamed Zamani and Bhaskar Mitra as keynote speakers at the event. Below you will find the title and abstract of each talk, as well as a short bio for each speaker.


Efficient Neural Models for Representing, Indexing, and Retrieving Documents

(slides)

Abstract: Deep learning has transformed information retrieval research in the last few years. Many effective neural information retrieval models contain millions or billions of parameters and are computationally expensive. In this talk, I will present some solutions for improving the efficiency of neural information retrieval models, focusing on efficient representation of long documents and efficient indexing and retrieval of documents using neural models. At the end, I will go beyond the typical problems addressed by the information retrieval community and discuss the potential impact of efficient neural retrieval models on machine learning research.

Speaker: Hamed Zamani is an Assistant Professor in the Manning College of Information and Computer Sciences (CICS) at the University of Massachusetts Amherst (UMass), where he also serves as the Associate Director of the Center for Intelligent Information Retrieval (CIIR). Prior to UMass, he was a Researcher at Microsoft. In 2019, he received his Ph.D. from UMass under the supervision of W. Bruce Croft and received the CICS Outstanding Dissertation Award. His research focuses on developing and evaluating statistical and machine learning models with applications to (interactive) information access systems, including search engines, recommender systems, and question answering. He is an active member of the information retrieval community and has published over 75 peer-reviewed articles. He is best known for his work on neural information retrieval and conversational search. He is a recipient of the NSF CAREER Award, and his papers have received awards at ICTIR 2019 and CIKM 2020. He has organized multiple workshops at SIGIR, WSDM, and RecSys and has served as the Program Committee Co-Chair for the SIGIR 2022 Short Paper Track.


Efficient Machine Learning and Machine Learning for Efficiency in Information Retrieval

(slides)

Abstract: Emerging machine learning approaches for information retrieval (IR), including deep learning methods, have recently demonstrated significant improvements in the accuracy of relevance estimation at the cost of increased model complexity and a corresponding rise in the computational and environmental costs of training and inference. In web search, these costs are further compounded by the need to train on large-scale datasets, consume long documents as inputs, and retrieve relevant documents from web-scale collections within milliseconds in response to high-volume query traffic. A typical playbook for developing deep learning models for IR involves largely ignoring efficiency concerns during model development and then later scaling these methods by either finding faster approximations of the same models or employing heuristics to reduce the input space over which they operate. Domain knowledge about the specific IR task, together with a deeper understanding of the system design and data structures in whose context these models are deployed, can help significantly not only with model simplification but also with informing data-structure-specific machine learning model design. Alternatively, predictive machine learning can be employed specifically to improve efficiency in large-scale IR settings. In this talk, I will cover several case studies of both improving the efficiency of machine learning models for IR and directly applying machine learning to improve retrieval efficiency, and conclude with a brief discussion of potential future directions for efficiency-sensitive benchmarking of machine learning models for IR.

Speaker: Bhaskar Mitra is a Principal Researcher at Microsoft Research based in Montreal, Canada. He received a Ph.D. in Computer Science from University College London under the supervision of Dr. Emine Yilmaz. His research interests lie at the intersection of information retrieval, deep learning, and FATE (Fairness, Accountability, Transparency, and Ethics). He joined Microsoft in 2006, and in his 15+ years there he has shipped several search quality improvements for Bing and conducted research with a strong focus on both academic and product impact. He co-organized the Neural IR (Neu-IR) Workshop in 2016 and 2017, the first attempt to bring together a community of IR researchers interested in deep learning methods, and has since influenced the research vision of the community through the development of the MS MARCO benchmark, co-founding the TREC Deep Learning Track, co-authoring a book on neural information retrieval, serving as a guest editor for a special issue of the Information Retrieval Journal, co-organizing multiple tutorials on the same topic, and through his own research.