How do shoppers pick a single product out of the vast number presented to them? One part of their decision-making is comparing different products, weighing the pros and cons of their features at a given price: for shops with a huge inventory, this is no doubt a challenging task.
Explicit comparison features (e.g. “click on products X and Y to see them side-by-side”) are a classical way of easing shoppers' cognitive load, and eCommerce giants have recently started incorporating this concept into a new type of recommendation. However, scaling this approach to huge inventories and a variety of verticals is a daunting task for traditional retailers: explicit comparisons are limited to manual 1:1 interfaces, and detailed comparison tables require a lot of manual work and often presuppose a well-structured product catalogue.
In this talk, we present our pipeline to generate comparisons-as-recs at scale in a multi-tenant setting, with minimal assumptions about catalog size and web traffic. Our approach leverages both product metadata (images, text) and behavioural data, combining neural inference with decision-making principles. In particular, we show how to break the problem down into two main steps. First, for a given product we use dense representations to perform substitute identification, which determines a group of alternative products of the same category. Then, based on how their features and price vary, we select the final set of products and determine which features to display for comparison. Compared to the existing, single-tenant literature, our experiments highlight the need for better handling of noisy data and for the adoption of data augmentation techniques: we conclude by sharing practical tips for practitioners and outlining our testing and product roadmap.
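To make the two steps concrete, here is a minimal sketch in Python: step one retrieves substitute candidates by cosine similarity over dense product representations, and step two keeps the attributes that vary most across the candidates. The embedding size, the variance-style feature picker, and the toy product attributes are illustrative assumptions, not the actual production pipeline.

```python
# Minimal sketch of the two-step comparison pipeline (assumptions, not the
# production system): dense-retrieval substitute identification, then
# selection of the most discriminative features to display.
import numpy as np

def find_substitutes(query_vec, catalog_vecs, k=5):
    """Step 1: substitute identification via cosine similarity over dense
    product representations (e.g. pooled image/text embeddings)."""
    q = query_vec / np.linalg.norm(query_vec)
    c = catalog_vecs / np.linalg.norm(catalog_vecs, axis=1, keepdims=True)
    scores = c @ q
    return np.argsort(-scores)[:k]

def pick_comparison_features(products, feature_names, max_features=3):
    """Step 2: keep the features that differ most across the candidate set,
    since identical attributes add no information to a comparison."""
    diversity = []
    for name in feature_names:
        values = [p.get(name) for p in products]
        diversity.append((len(set(values)), name))
    diversity.sort(reverse=True)
    return [name for _, name in diversity[:max_features]]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    catalog_vecs = rng.normal(size=(100, 64))            # toy dense representations
    query_vec = catalog_vecs[0] + 0.05 * rng.normal(size=64)
    substitutes = find_substitutes(query_vec, catalog_vecs, k=3)

    # Toy structured attributes for the retrieved substitutes.
    products = [
        {"price": 199, "battery_hours": 10, "colour": "black", "brand": "A"},
        {"price": 249, "battery_hours": 14, "colour": "black", "brand": "B"},
        {"price": 229, "battery_hours": 10, "colour": "white", "brand": "A"},
    ]
    features = pick_comparison_features(
        products, ["price", "battery_hours", "colour", "brand"]
    )
    print("substitute indices:", substitutes)
    print("features to display:", features)
```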
Slides can be found here: https://drive.google.com/file/d/1E8GEc6DA3H7lxSkVRuwKCJVVioJ3prQB/view