The next item prediction AI workflow shows how to use NVIDIA Merlin, an end-to-end framework for building high-performing recommender systems at scale. Session-based recommendation is a next-generation AI method that predicts a user's next action: it infers preferences from contextual interactions within the current session, making it effective even for first-time, early, or anonymous online users.
The next item prediction workflow includes cloud-native Kubernetes services: an example recommender system deployment based on NVIDIA Merlin, MLflow for model storage, Prometheus for monitoring, and Grafana dashboards. This reference solution provides a sample dataset, Python code, and pre-built pipelines for data preparation, training, and inference, packaged for deployment via Helm charts, along with step-by-step instructions to help organizations quickly start building a recommendation system.
Key benefits of the next item prediction AI workflow:
The Merlin Transformers4Rec library brings state-of-the-art session-based recommendation models to recommender systems.
The Hugging Face Transformers NLP library makes it easy to use cutting-edge implementations of the latest Transformer architectures in your recommendation systems.
Merlin Systems simplifies the deployment of recommender systems to NVIDIA Triton™ Inference Server.
Triton Inference Server maximizes inference performance with standardized and optimized model deployment and execution.
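The core idea behind session-based next item prediction can be illustrated with a toy sketch. Note that this is conceptual Python, not the Merlin or Transformers4Rec API: it predicts the next item from simple item-to-item co-occurrence counts, whereas Transformers4Rec replaces these counts with a Transformer model over the full interaction sequence. All item names and session data below are hypothetical.

```python
from collections import Counter, defaultdict

def build_transitions(sessions):
    """Count item-to-item transitions across all training sessions."""
    transitions = defaultdict(Counter)
    for session in sessions:
        for current_item, next_item in zip(session, session[1:]):
            transitions[current_item][next_item] += 1
    return transitions

def predict_next(transitions, session, k=3):
    """Return the k most likely next items given the session's last item."""
    last_item = session[-1]
    return [item for item, _ in transitions[last_item].most_common(k)]

# Hypothetical sessions of item IDs for an anonymous user
sessions = [
    ["shoes", "socks", "laces"],
    ["shoes", "socks", "insoles"],
    ["shirt", "shoes", "socks"],
]
transitions = build_transitions(sessions)
print(predict_next(transitions, ["shirt", "shoes"], k=2))  # → ['socks']
```

Because the prediction depends only on in-session behavior, no long-term user profile is required, which is what makes the approach suitable for first-time or anonymous visitors.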
To get started, review the documentation linked below for more information on what the workflow includes and how to deploy and run it.
Leverage the NVIDIA Merlin SDK to build your own AI-based session-based recommendation solutions.
By pulling and using the containers or Helm Charts, you accept the terms and conditions of this End User License Agreement.