Your company manages an ecommerce website. You developed an ML model that recommends additional products to users in near real time based on items currently in the user's cart. The workflow will include the following processes:
1. The website will send a Pub/Sub message with the relevant data and then receive a message with the prediction from Pub/Sub.
2. Predictions will be stored in BigQuery.
3. The model will be stored in a Cloud Storage bucket and will be updated frequently.
You want to minimize prediction latency and the effort required to update the model. How should you reconfigure the architecture?
Correct Answer: D
The RunInference API [1] is a feature of Apache Beam that lets you run models as part of your pipeline in a way that is optimized for machine learning inference. It supports batching, caching, and model reloading, and works with frameworks such as TensorFlow, PyTorch, scikit-learn, XGBoost, ONNX, and TensorRT. Dataflow [2] is a fully managed service for running Apache Beam pipelines on Google Cloud: it handles provisioning and management of the compute resources as well as optimization and execution of the pipelines.
Option D is therefore the best way to reconfigure the architecture for this use case: use the RunInference API with WatchFilePattern in a Dataflow job that wraps around the model and serves predictions. This minimizes both prediction latency and the effort required to update the model, because RunInference automatically reloads the model from the Cloud Storage bucket whenever the model file changes [1]. The other options are not relevant or optimal for this scenario.
References:
* [1] RunInference API
* [2] Dataflow
* Google Professional Machine Learning Certification Exam 2023
* Latest Google Professional Machine Learning Engineer Actual Free Exam Questions
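For illustration, below is a minimal sketch of this pattern as an Apache Beam Python streaming pipeline, assuming a Beam release that includes WatchFilePattern and TFModelHandlerTensor (2.46+). The bucket path, Pub/Sub subscription and topic, BigQuery table, feature encoding, and helper functions are hypothetical placeholders, not details taken from the question.

```python
import json

import apache_beam as beam
import numpy as np
import tensorflow as tf
from apache_beam.ml.inference.base import RunInference
from apache_beam.ml.inference.tensorflow_inference import TFModelHandlerTensor
from apache_beam.ml.inference.utils import WatchFilePattern
from apache_beam.options.pipeline_options import PipelineOptions

# Hypothetical resource names -- replace with your own.
MODEL_PATTERN = "gs://example-bucket/models/recommender/*"  # updated models land here
INITIAL_MODEL = "gs://example-bucket/models/recommender/v1"
INPUT_SUBSCRIPTION = "projects/example-project/subscriptions/cart-events"
OUTPUT_TOPIC = "projects/example-project/topics/cart-predictions"
BQ_TABLE = "example-project:ecommerce.predictions"


def to_tensor(msg: dict) -> tf.Tensor:
    # Assumed feature encoding: a fixed-length vector of item IDs from the cart.
    return tf.constant(msg["item_ids"], dtype=tf.int64)


def format_result(result) -> dict:
    # result is a PredictionResult (example, inference, model_id).
    scores = np.asarray(result.inference).tolist()
    return {"prediction": json.dumps(scores)}


def run():
    options = PipelineOptions(streaming=True, save_main_session=True)

    # Handler for a TensorFlow SavedModel; pick the handler that matches your framework.
    model_handler = TFModelHandlerTensor(model_uri=INITIAL_MODEL)

    with beam.Pipeline(options=options) as p:
        # Side input that watches the Cloud Storage pattern and emits new model metadata,
        # so RunInference hot-swaps the model without redeploying the pipeline.
        model_updates = p | "WatchModel" >> WatchFilePattern(file_pattern=MODEL_PATTERN)

        predictions = (
            p
            | "ReadCartEvents" >> beam.io.ReadFromPubSub(subscription=INPUT_SUBSCRIPTION)
            | "ParseJson" >> beam.Map(json.loads)
            | "ToTensor" >> beam.Map(to_tensor)
            | "Predict" >> RunInference(model_handler, model_metadata_pcoll=model_updates)
            | "FormatResult" >> beam.Map(format_result)
        )

        # Return the prediction to the website via Pub/Sub ...
        (predictions
            | "ToBytes" >> beam.Map(lambda row: json.dumps(row).encode("utf-8"))
            | "PublishPrediction" >> beam.io.WriteToPubSub(topic=OUTPUT_TOPIC))

        # ... and store it in BigQuery.
        (predictions
            | "WriteToBigQuery" >> beam.io.WriteToBigQuery(
                BQ_TABLE,
                schema="prediction:STRING",
                write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND))


if __name__ == "__main__":
    run()
```

In this sketch the WatchFilePattern side input is what removes the update effort: publishing a new model file that matches the Cloud Storage pattern is enough for RunInference to reload it, while the streaming Dataflow job keeps serving low-latency predictions from Pub/Sub to Pub/Sub and BigQuery.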