
Question 24/60

An upstream system is emitting change data capture (CDC) logs that are being written to a cloud object storage directory. Each record in the log indicates the change type (insert, update, or delete) and the values for each field after the change. The source table has a primary key identified by the field pk_id.
For auditing purposes, the data governance team wishes to maintain a full record of all values that have ever been valid in the source system. For analytical purposes, only the most recent value for each record needs to be recorded. The Databricks job to ingest these records occurs once per hour, but each individual record may have changed multiple times over the course of an hour.
Which solution meets these requirements?

Recent Comments (The most recent comments are at the top.)

test - Mar 09, 2025

A. Create a separate history table for each pk_id; resolve the current state of the table by running a union all and filtering the history tables for the most recent state.

B. Use merge into to insert, update, or delete the most recent entry for each pk_id into a bronze table, then propagate all changes throughout the system.

C. Iterate through an ordered set of changes to the table, applying each in turn; rely on Delta Lake's versioning ability to create an audit log.

D. Use Delta Lake's change data feed to automatically process CDC data from an external system, propagating all changes to all dependent tables in the Lakehouse.

E. Ingest all log information into a bronze table; use merge into to insert, update, or delete the most recent entry for each pk_id into a silver table to recreate the current table state.


The answer shown on the website seems wrong; the correct answer seems to be E, not B.
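For reference, here is a minimal PySpark sketch of what option E could look like. The table names cdc_bronze and current_silver, the landing path, and the change_timestamp and change_type columns are illustrative assumptions, not part of the question.

```python
from pyspark.sql import functions as F, Window
from delta.tables import DeltaTable

# 1) Land every raw CDC record in the bronze table (append-only, full audit history).
raw_cdc = spark.read.format("json").load("/mnt/cdc_landing/")  # path is hypothetical
raw_cdc.write.format("delta").mode("append").saveAsTable("cdc_bronze")

# 2) Keep only the latest change per pk_id from this hourly batch,
#    since a record may have changed several times within the hour.
latest = (
    raw_cdc
    .withColumn("rn", F.row_number().over(
        Window.partitionBy("pk_id").orderBy(F.col("change_timestamp").desc())))
    .filter("rn = 1")
    .drop("rn")
)

# 3) MERGE the deduplicated changes into the silver table to recreate current state.
silver = DeltaTable.forName(spark, "current_silver")
(
    silver.alias("t")
    .merge(latest.alias("s"), "t.pk_id = s.pk_id")
    .whenMatchedDelete(condition="s.change_type = 'delete'")
    .whenMatchedUpdateAll(condition="s.change_type != 'delete'")
    .whenNotMatchedInsertAll(condition="s.change_type != 'delete'")
    .execute()
)
```

This keeps every raw change in bronze to satisfy the auditing requirement, while silver holds only the latest value per pk_id for analytics. Option B, by contrast, merges directly into bronze and so discards the intermediate changes needed for the audit history.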

