Explanation/Reference:
Anyone using TLS must be mindful of how certificates are validated. The first thing an attacker is likely to try against any TLS implementation is to conduct a man-in-the-middle attack that presents self-signed or otherwise forged certificates to TLS clients (and servers, if client certificates are in use). To its credit, Microsoft's implementation of TDS is safe in the sense that it enables certificate validation by default, which prevents this attack.
From Scenario: Common security issues such as SQL injection and XSS must be prevented.
Database-related security issues must not result in customers' data being exposed.
Note:
TDS depends on Transport Layer Security (TLS)/Secure Socket Layer (SSL) for network channel encryption.
The Tabular Data Stream (TDS) Protocol is an application-level protocol used for the transfer of requests and responses between clients and database server systems. In such systems, the client will typically establish a long-lived connection with the server. Once the connection is established using a transport-level protocol, TDS messages are used to communicate between the client and the server. A database server can also act as the client if needed, in which case a separate TDS connection has to be established.
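As a concrete illustration, a TDS client can require an encrypted channel and refuse unvalidated certificates. The following is a minimal sketch using the pyodbc driver; the server, database, and credentials are placeholders, not values from the scenario.

import pyodbc

# Placeholder connection details. Encrypt=yes forces TLS on the TDS channel, and
# TrustServerCertificate=no keeps certificate validation enabled, which is what
# defeats the man-in-the-middle attack described above.
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 18 for SQL Server};"
    "SERVER=example-server.database.windows.net;"
    "DATABASE=ExampleDb;"
    "UID=app_user;PWD=example-password;"
    "Encrypt=yes;"
    "TrustServerCertificate=no;"
)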
References:
https://summitinfosec.com/2017/12/19/advanced-sql-server-mitm-attacks/
https://msdn.microsoft.com/en-us/library/dd304492.aspx
Testlet 1
This is a case study. Case studies are not timed separately. You can use as much exam time as you would like to complete each case. However, there may be additional case studies and sections on this exam. You must manage your time to ensure that you are able to complete all questions included on this exam in the time provided.
To answer the questions included in a case study, you will need to reference information that is provided in the case study. Case studies might contain exhibits and other resources that provide more information about the scenario that is described in the case study. Each question is independent of the other questions in this case study.
At the end of this case study, a review screen will appear. This screen allows you to review your answers and to make changes before you move to the next section of the exam. After you begin a new section, you cannot return to this section.
To start the case study
To display the first question in this case study, click the Next button. Use the buttons in the left pane to explore the content of the case study before you answer the questions. Clicking these buttons displays information such as business requirements, existing environment, and problem statements. If the case study has an All Information tab, note that the information displayed is identical to the information displayed on the subsequent tabs. When you are ready to answer a question, click the Question button to return to the question.
Background
You are a software architect for Trey Research Inc., a Software-as-a-Service (SaaS) company that provides text analysis services. Trey Research Inc. has a service that scans text documents and analyzes the content to determine content similarities. These similarities are referred to as categories, and indicate groupings based on authorship, opinion, and group affiliation.
The document scanning solution has an Azure Web App that provides the user interface. The web app includes the following pages:
Document Uploads: This page allows customers to upload documents manually.
Document Inventory: This page shows a list of all processed documents provided by a customer. The page can be configured to show documents for a selected category.
Document Upload Sources: This page shows a map and information about the geographic distribution of uploaded documents. This page allows users to filter the map based on assigned categories.
The web application is instrumented with Azure Application Insights. The solution uses Cosmos DB for data storage.
Changes to the web application and data storage are not permitted.
The solution contains an endpoint where customers can directly upload documents from external systems.
Document processing
Source Documents
Documents must be in a specific format before they are uploaded to the system. The first four lines of the document must contain the following information. If any of the first four lines are missing or invalid, the document must not be processed.
the customer account number
the user who uploaded the document
the IP address of the person who created the document
the date and time the document was created
The remaining portion of the document contains the content that must be analyzed. Prior to processing by the Azure Data Factory pipeline, the document text must be normalized so that words have spaces between them.
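As an illustration, the header check and normalization step could look like the sketch below. The four-line header layout comes from the requirements above; the ISO-8601 date format and the camel-case normalization rule are assumptions made for the example.

import re
from datetime import datetime
from ipaddress import ip_address

def parse_header(lines):
    """Validate the four required header lines; return None if any is missing or invalid."""
    if len(lines) < 4 or any(not line.strip() for line in lines[:4]):
        return None
    account, user, ip, created = (line.strip() for line in lines[:4])
    try:
        ip_address(ip)                                # must be a valid IP address
        created_at = datetime.fromisoformat(created)  # assumed ISO-8601 timestamp
    except ValueError:
        return None
    return {"account": account, "user": user, "ip": ip, "created": created_at}

def normalize(text):
    """Insert spaces between run-together words, e.g. 'HelloWorld' -> 'Hello World'."""
    return re.sub(r"(?<=[a-z])(?=[A-Z])", " ", text)

def process_document(raw):
    lines = raw.splitlines()
    header = parse_header(lines)
    if header is None:
        return None  # a missing or invalid header means the document is not processed
    return header, normalize("\n".join(lines[4:]))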
Document Uploads
During the document upload process, the solution must capture information about the geographic location where documents originate. Processing of documents must be automatically triggered when documents are uploaded. Customers must be notified when analysis of their uploaded documents begins.
Uploaded documents must be processed using Azure Machine Learning Studio in an Azure Data Factory pipeline. The machine learning portion of the pipeline is updated once a quarter.
When document processing is complete, the documents and the results of the analysis process must be visible.
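One way to meet the automatic-trigger and notification requirements, assuming uploaded documents land in Azure Blob Storage, is an Azure Function with a blob trigger. The sketch below uses the Azure Functions Python v2 programming model; the container name and the two helper functions are hypothetical placeholders.

import azure.functions as func

app = func.FunctionApp()

def start_pipeline_run(blob_name: str) -> None:
    """Placeholder: start the Azure Data Factory pipeline run for this document."""

def notify_customer(blob_name: str) -> None:
    """Placeholder: tell the customer that analysis of their document has begun."""

@app.blob_trigger(arg_name="doc", path="uploads/{name}",
                  connection="AzureWebJobsStorage")
def on_document_uploaded(doc: func.InputStream):
    # Fires automatically when a document is uploaded to the (assumed) container.
    start_pipeline_run(doc.name)
    notify_customer(doc.name)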
Other requirements
Business Analysts
Trey Research Inc. business analysts must be able to review processed documents, and analyze data by using Microsoft Excel. Business analysts must be able to discover data across the enterprise regardless of where the data resides.
Data Science
Data scientists must be able to analyze results without changing the deployed application. The data scientists must be able to analyze results without being connected to the Internet.
Security and Personally Identifiable Information (PII)
Access to the analysis results must be limited to the specific customer account of the user that originally uploaded the documents.
All access and usage of analysis results must be logged. Any unusual activity must be detected.
Documents must not be retained for more than 100 hours.
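One common way to enforce such a retention window, assuming documents are stored as Cosmos DB items, is the container-level time-to-live setting; 100 hours is 360,000 seconds. The account URI, key, and container names below are placeholders.

from azure.cosmos import CosmosClient, PartitionKey

client = CosmosClient(
    "https://example-account.documents.azure.com:443/",
    credential="example-account-key",
)
database = client.get_database_client("TreyResearch")

# default_ttl removes items automatically after 100 hours (100 * 3600 = 360000 s).
container = database.create_container_if_not_exists(
    id="Documents",
    partition_key=PartitionKey(path="/customerId"),
    default_ttl=360000,
)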

Operations
All application logs, diagnostic data, and system monitoring must be available in a single location.
Logging and diagnostic information must be reliably processed.
The document upload time must be tracked and monitored.
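Because the web application is already instrumented with Azure Application Insights, upload time can be recorded there as a custom metric. A minimal sketch using the applicationinsights Python package follows; the instrumentation key, metric name, and upload callable are placeholders.

import time
from applicationinsights import TelemetryClient

tc = TelemetryClient("00000000-0000-0000-0000-000000000000")  # placeholder key

def timed_upload(upload_fn, *args, **kwargs):
    """Run an upload callable and record its duration as a custom metric."""
    start = time.monotonic()
    result = upload_fn(*args, **kwargs)
    tc.track_metric("DocumentUploadSeconds", time.monotonic() - start)
    tc.flush()  # ensure the telemetry is sent
    return result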
