Using machine learning models to improve search has been an immensely active area for several years now. Some promises were kept, many others were broken. With the rise of Transformer models like BERT, we seem to finally be entering a chapter where models not only perform well in the research lab but actually make their way into the production stack.

Now that almost every English Google search query is powered by a Transformer [1], it is clear that these models improve the search experience and can do so at scale. Since Transformers rely only on text, the transition from web search to custom enterprise search seems more tempting than ever.

In this talk, we will dive into some of the most promising methods and show how to …

  • … improve document retrieval via dense passage retrieval 
  • … return more granular search results by showing direct answers to users' questions
  • … scale these pipelines for production workloads via DAGs and approximate nearest neighbour (ANN) search
  • … avoid common pitfalls when moving to production

All methods will be illustrated with code examples based on the open-source framework Haystack [2], so that participants can easily reproduce them at home and let the Transformers into their production stack, one by one and carefully selected!
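
As a small preview of the format, the sketch below wires a dense passage retriever and an extractive reader into a question-answering pipeline. It is a minimal sketch assuming the Haystack 1.x pipeline API; the document content is made up for illustration, and the DPR and RoBERTa checkpoints named here are public models, not the only possible choices:

    # Minimal sketch of an extractive QA pipeline, assuming the Haystack 1.x API.
    from haystack.document_stores import InMemoryDocumentStore
    from haystack.nodes import DensePassageRetriever, FARMReader
    from haystack.pipelines import ExtractiveQAPipeline

    # For production-scale ANN search, an index-backed store such as
    # FAISSDocumentStore(faiss_index_factory_str="HNSW") would replace this.
    document_store = InMemoryDocumentStore()
    document_store.write_documents([
        {"content": "Haystack is an open-source framework for building search pipelines."},
    ])

    # Dense passage retrieval: two BERT-style encoders, one for queries, one for passages.
    retriever = DensePassageRetriever(
        document_store=document_store,
        query_embedding_model="facebook/dpr-question_encoder-single-nq-base",
        passage_embedding_model="facebook/dpr-ctx_encoder-single-nq-base",
    )
    document_store.update_embeddings(retriever)  # pre-compute passage embeddings

    # The reader extracts direct answer spans instead of returning whole documents.
    reader = FARMReader(model_name_or_path="deepset/roberta-base-squad2")

    # ExtractiveQAPipeline is a ready-made DAG: the retriever feeds the reader.
    pipeline = ExtractiveQAPipeline(reader=reader, retriever=retriever)
    result = pipeline.run(
        query="What is Haystack?",
        params={"Retriever": {"top_k": 10}, "Reader": {"top_k": 3}},
    )
    print(result["answers"][0].answer)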

[1] https://searchengineland.com/google-bert-used-on-almost-every-english-query-342193

[2] https://github.com/deepset-ai/haystack/

Frannz Salon
17.06.2021 18:10 – 18:40
Talk
Intermediate
