You are designing a Dataflow pipeline for a batch processing job. You want to mitigate multiple zonal failures at job submission time. What should you do?
Correct Answer: B
By specifying a worker region (option B), you can run your Dataflow pipeline in a multi-zone configuration, which provides higher availability and resilience in case of zonal failures [1]. The `--region` flag specifies the regional endpoint for your pipeline, which determines the location of the Dataflow service and the default location of its Compute Engine resources [1]. If you do not pin a zone with the `--zone` flag, Dataflow automatically selects a zone within the region for your job workers [1].
This option is preferable to submitting duplicate pipelines in two different zones, which would incur additional cost and operational complexity. Setting the pipeline staging location to a regional Cloud Storage bucket does not affect the availability of your pipeline, because the staging location only stores the pipeline code and dependencies [2]. Creating an Eventarc trigger to resubmit the job after a zonal failure is not a reliable solution, as it depends on the availability of the Eventarc service and of the zonal resources at resubmission time.
Reference:
1: Pipeline troubleshooting and debugging | Cloud Dataflow | Google Cloud
3: Regional endpoints | Cloud Dataflow | Google Cloud
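As a minimal sketch, the submission options for the recommended approach might look like the following. The project id, bucket name, and region here are hypothetical placeholders; the key point is that `--region` is set while `--zone` is deliberately omitted, so Dataflow is free to place workers in any healthy zone within the region:

```python
# Hypothetical Dataflow job-submission arguments (option B):
# specify only a worker region and let Dataflow choose the zone.
args = [
    "--runner=DataflowRunner",
    "--project=my-project",              # placeholder project id
    "--region=us-central1",              # regional endpoint; mitigates zonal failure
    "--temp_location=gs://my-bucket/tmp",  # placeholder staging bucket
]

# Omitting --zone is intentional: pinning a zone would reintroduce
# a single zonal point of failure.
assert not any(a.startswith("--zone") for a in args)
```

These same flags would typically be passed to the pipeline's option parser (for example, Apache Beam's `PipelineOptions`) at job submission time.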