You are designing an Azure Data Factory solution that will download up to 5 TB of data from several REST APIs.
The solution must meet the following staging requirements:
* Ensure that the data can be landed quickly and in parallel to a staging area.
* Minimize the need to return to the API sources to retrieve the data again should a later activity in the pipeline fail.
The solution must meet the following analysis requirements:
* Ensure that the data can be loaded in parallel.
* Ensure that users and applications can query the data without requiring an additional compute engine.
What should you include in the solution to meet the requirements? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.

Correct Answer: Box 1: Azure Blob storage; Box 2: Azure Synapse Analytics

Explanation

Box 1: Azure Blob storage
When you activate the staging feature, the data is first copied from the source data store to the staging storage (bring your own Azure Blob storage or Azure Data Lake Storage Gen2 account). Because the staged files persist in Blob storage, a later activity that fails can be rerun from the staged copy instead of returning to the REST API sources.
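As a sketch of how the landing step could be authored, the snippet below assembles a Copy activity definition in Python (so it can carry comments) that pulls from a REST source into Blob staging with parallelism. The activity, dataset names (RestApiDataset, StagingBlobDataset), and tuning numbers are illustrative assumptions, not taken from the question; Parquet is chosen as the staging format on the assumption that the REST payloads map to a tabular schema.

```python
import json

# Hypothetical Copy activity definition: lands REST API data in Blob staging.
# Dataset names and tuning values are illustrative only.
land_to_staging = {
    "name": "LandRestDataToStaging",           # hypothetical activity name
    "type": "Copy",
    "typeProperties": {
        "source": {
            "type": "RestSource",              # built-in REST connector
            "httpRequestTimeout": "00:05:00"   # generous timeout for large responses
        },
        "sink": {
            "type": "ParquetSink",             # Parquet stages cleanly for Synapse loads
            "storeSettings": {"type": "AzureBlobStorageWriteSettings"}
        },
        # Up to 5 TB of data, so fan the copy out: these knobs control parallelism.
        "parallelCopies": 16,
        "dataIntegrationUnits": 32
    },
    "inputs": [{"referenceName": "RestApiDataset", "type": "DatasetReference"}],
    "outputs": [{"referenceName": "StagingBlobDataset", "type": "DatasetReference"}]
}

print(json.dumps(land_to_staging, indent=2))
```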
Box 2: Azure Synapse Analytics
The Azure Synapse Analytics connector in the copy activity provides built-in data partitioning to copy data in parallel, and a dedicated SQL pool lets users and applications query the loaded data directly with T-SQL, with no additional compute engine required.
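A comparable sketch for the load into Synapse follows, again with illustrative names. It assumes the staged files are Parquet and uses the SqlDWSink option allowCopyCommand, which bulk-loads via the Synapse COPY statement so the staged files load in parallel; once loaded, the dedicated SQL pool answers queries directly.

```python
import json

# Hypothetical Copy activity: loads the staged Parquet files into a
# Synapse dedicated SQL pool table. Names are illustrative only.
load_to_synapse = {
    "name": "LoadStagingToSynapse",
    "type": "Copy",
    "typeProperties": {
        "source": {
            "type": "ParquetSource",
            "storeSettings": {
                "type": "AzureBlobStorageReadSettings",
                "recursive": True,
                "wildcardFileName": "*.parquet"   # pick up all staged files
            }
        },
        "sink": {
            "type": "SqlDWSink",
            "allowCopyCommand": True   # bulk-load via the Synapse COPY statement
        },
        "parallelCopies": 16
    },
    "inputs": [{"referenceName": "StagingBlobDataset", "type": "DatasetReference"}],
    "outputs": [{"referenceName": "SynapseTableDataset", "type": "DatasetReference"}]
}

print(json.dumps(load_to_synapse, indent=2))
```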
Reference:
https://docs.microsoft.com/en-us/azure/data-factory/copy-activity-performance-features
https://docs.microsoft.com/en-us/azure/data-factory/connector-azure-sql-data-warehouse