In this workshop, you will follow the journey from raw data (historical financial transactions) through feature engineering with Apache Spark, to model training with TensorFlow, to model serving with KFServing, Kubeflow's model serving framework. You will learn how to use a feature store to build a single feature pipeline that supplies features for both training and serving, and how to productionize the feature engineering and training pipelines with Airflow. Finally, you will see how to scale this entire framework from your laptop to a cluster of hundreds of servers.
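The core idea behind a single feature pipeline can be illustrated with a minimal sketch. This is not the Hopsworks or Spark API; the function and field names are hypothetical, chosen only to show how applying one shared transformation in both the training and serving paths avoids training/serving skew:

```python
# Illustrative sketch (hypothetical names, not the Hopsworks API):
# one feature transformation shared by the training and serving paths.

def engineer_features(txn):
    """Toy feature pipeline for a financial transaction."""
    return {
        # Coarse magnitude bucket derived from the transaction amount.
        "amount_log_bucket": min(int(txn["amount"]).bit_length(), 20),
        # Flag transactions made outside the account's home country.
        "is_foreign": int(txn["country"] != txn["home_country"]),
    }

# Training path: featurize historical transactions into a feature table.
history = [
    {"amount": 120, "country": "SE", "home_country": "SE"},
    {"amount": 9800, "country": "US", "home_country": "SE"},
]
train_features = [engineer_features(t) for t in history]

# Serving path: the same function featurizes a live transaction, so the
# model sees identical feature definitions at inference time.
live = {"amount": 9800, "country": "US", "home_country": "SE"}
assert engineer_features(live) == train_features[1]
```

A feature store generalizes this pattern: the pipeline writes computed features once, and both the training job and the online serving endpoint read from the same definitions.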
We will work with Hopsworks, an open-source data science platform that includes the industry's first open-source feature store for machine learning. Hopsworks runs on anything from a laptop to a cluster of hundreds of servers. To enable participants to learn by doing, we will provide web-based access to a managed version of Hopsworks at www.hopsworks.ai. All you need to join in is a laptop, web access, and a working knowledge of Python.