DP-420: Which two actions should you perform?
You have an Azure Cosmos DB Core (SQL) API account that is used by 10 web apps.
You need to analyze the data stored in the account by using Apache Spark to create machine learning models. The solution must NOT affect the performance of the web apps.
Which two actions should you perform? Each correct answer presents part of the solution.
NOTE: Each correct selection is worth one point.
A. In an Apache Spark pool in Azure Synapse, create a table that uses cosmos.olap as the data source.
B. Create a private endpoint connection to the account.
C. In an Azure Synapse Analytics serverless SQL pool, create a view that uses OPENROWSET and the CosmosDB provider.
D. Enable Azure Synapse Link for the account and Analytical store on the container.
E. In an Apache Spark pool in Azure Synapse, create a table that uses cosmos.oltp as the data source.
ANSWER: A, D
Explanation:
Explore the analytical store with Apache Spark:
1. Navigate to the Data hub.
2. Select the Linked tab (1), expand the Azure Cosmos DB group (if you don’t see this, select the Refresh button above), then expand the WoodgroveCosmosDb account (2). Right-click on the transactions container (3), select New notebook (4), then select Load to DataFrame (5).
3. In the generated code within Cell 1, notice that the spark.read format is set to cosmos.olap. This instructs Synapse Link to use the container’s analytical store. To connect to the transactional store instead, for example to read from the change feed or write to the container, you would use cosmos.oltp.
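The generated notebook cell looks roughly like the sketch below (names such as the linked service `WoodgroveCosmosDb` and the `transactions` container come from the lab; substitute your own). Reading via `cosmos.olap` hits only the analytical store, so it consumes no request units from the transactional workload serving the web apps, which is why this combination of actions (A and D) satisfies the "must NOT affect performance" requirement.

```python
# Runs inside an Azure Synapse Apache Spark pool notebook, where a
# SparkSession named `spark` is provided and the Cosmos DB account is
# attached as a linked service. Not runnable outside Synapse.

df = (
    spark.read
    .format("cosmos.olap")                                # analytical store (Synapse Link)
    .option("spark.synapse.linkedService", "WoodgroveCosmosDb")  # linked service name (from the lab)
    .option("spark.cosmos.container", "transactions")     # container with analytical store enabled
    .load()
)

df.printSchema()
display(df.limit(10))
```

Swapping the format string to `cosmos.oltp` would route the same read through the transactional store, consuming RUs and potentially competing with the 10 web apps, which is why option E is incorrect.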
Reference: https://github.com/microsoft/MCW-Cosmos-DB-Real-Time-Advanced-Analytics/blob/main/Handson%20lab/HOL%20step-by%20step%20-%20Cosmos%20DB%20real-time%20advanced%20analytics.md
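For completeness, option D (enabling Synapse Link on the account and the analytical store on the container) can be done with the Azure CLI. A minimal sketch, assuming placeholder names for the resource group, account, database, and container:

```shell
# Enable Azure Synapse Link (analytical storage) on an existing
# Cosmos DB Core (SQL) API account. Placeholder names throughout.
az cosmosdb update \
  --resource-group my-resource-group \
  --name my-cosmos-account \
  --enable-analytical-storage true

# Create a container with the analytical store enabled.
# --analytical-storage-ttl -1 retains analytical data indefinitely.
az cosmosdb sql container create \
  --resource-group my-resource-group \
  --account-name my-cosmos-account \
  --database-name my-database \
  --name transactions \
  --partition-key-path "/id" \
  --analytical-storage-ttl -1
```

Note that on existing containers the analytical store can only be enabled if the account supports it; depending on the account's configuration this may require creating a new container, so it is worth checking the current Azure Cosmos DB documentation for your account type.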