While search systems today are very efficient for simple look-up tasks (fact-finding search), they are unable to guide users engaged in exploratory, multi-step and highly cognitive search tasks (e.g., diagnosis, human learning). Hence, paradoxically, while we consider information search nowadays to be 'natural' and 'easy', search systems are not yet able to provide adequate support for a wide range of complex real-life work search tasks. In the CoST project (funded by ANR, 2019-2022), we envision a shift from search engines to task completion engines that dynamically assist users in making optimal decisions, empowering them to achieve multi-step complex search tasks. While most previous work relies on query-aware models and techniques to structure the session context and model search satisfaction [2,3,4] at the query level, we instead aim to design task-aware IR models that make task-level satisfaction predictions.
This PhD will focus on neural approaches for task-based information retrieval. Building on previous findings about the effectiveness of seq2seq models in capturing reformulation patterns for the next-query prediction task [4,5], we envision new end-to-end network architectures that can account for sequences of sub-tasks. We will also explore end-to-end learning for task satisfaction prediction based on deep reinforcement learning, going beyond query-level relevance. The candidate will investigate the modelling, deployment and evaluation of search assistance techniques (e.g., query suggestion) and ranking models using deep neural network architectures. The resulting systems will be evaluated using both public benchmarks (e.g., TREC Tasks, TREC Session, AOL dataset) and datasets built within the CoST project.
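To make the next-query prediction task concrete: given the queries a user has issued so far in a session, the system predicts the most likely follow-up query. The project envisions neural seq2seq architectures for this; purely as an illustration of the task (not the proposed method), the sketch below implements a toy bigram (Markov) baseline over hypothetical session logs, where the predicted next query is simply the most frequent observed follow-up.

```python
from collections import Counter, defaultdict

def train_bigram(sessions):
    """Count query-to-query transitions across search sessions.
    `sessions` is a list of query sequences (hypothetical log format)."""
    transitions = defaultdict(Counter)
    for session in sessions:
        for current_q, next_q in zip(session, session[1:]):
            transitions[current_q][next_q] += 1
    return transitions

def predict_next(transitions, query):
    """Return the most frequent follow-up query, or None if unseen."""
    followers = transitions.get(query)
    if not followers:
        return None
    return followers.most_common(1)[0][0]

# Toy sessions: each inner list is one user's query sequence (invented data).
sessions = [
    ["flu symptoms", "flu treatment", "flu treatment at home"],
    ["flu symptoms", "flu treatment", "antiviral drugs"],
    ["flu symptoms", "flu vs cold"],
]
model = train_bigram(sessions)
print(predict_next(model, "flu symptoms"))  # "flu treatment" (seen in 2 of 3 sessions)
```

A seq2seq model replaces the frequency table with an encoder over the session history and a decoder that generates the next query token by token, which is what allows generalisation to unseen reformulations.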
Université Paul Sabatier
Skills: the successful candidate is expected to have a background in information retrieval, machine learning and deep learning. Experience in reinforcement learning would be greatly appreciated.
Starting date and duration: September 2019, 36 months