You have an Azure data factory named ADF1.
You currently publish all pipeline authoring changes directly to ADF1.
You need to implement version control for the changes made to pipeline artifacts. The solution must ensure that you can apply version control to the resources currently defined in Azure Data Factory Studio for ADF1.
Which two actions should you perform? Each correct answer presents part of the solution. NOTE: Each correct selection is worth one point.
Answer : D, F
You have an Azure subscription that contains the resources shown in the following table.
Diagnostic logs from ADF1 are sent to LA1. ADF1 contains a pipeline named Pipeline that copies data from DB1 to DW1. You need to perform the following actions:
* Create an action group named AG1.
* Configure an alert in ADF1 to use AG1.
In which resource group should you create AG1?
Answer : C
You use Azure Stream Analytics to receive Twitter data from Azure Event Hubs and to output the data to an Azure Blob storage account.
You need to output the count of tweets during the last five minutes every five minutes. Each tweet must only be counted once.
Which windowing function should you use?
Answer : A
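The scenario describes a tumbling window: fixed-size, non-overlapping, contiguous intervals, so each tweet falls into exactly one window and is counted exactly once. As a hedged illustration only (the input alias TwitterStream, the output alias BlobOutput, and the CreatedAt timestamp column are assumptions, not part of the question), the query would look similar to the following.
Example query:
    -- Count tweets in fixed, non-overlapping five-minute windows.
    -- Each event belongs to exactly one window, so each tweet is counted once.
    SELECT
        COUNT(*) AS TweetCount,
        System.Timestamp() AS WindowEnd
    INTO
        BlobOutput
    FROM
        TwitterStream TIMESTAMP BY CreatedAt
    GROUP BY
        TumblingWindow(minute, 5)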
You have an Azure subscription that contains an Azure Synapse Analytics dedicated SQL pool named Pool1. You have the queries shown in the following table.
You are evaluating whether to enable result set caching for Pool1. Which query results will be cached if result set caching is enabled?
Answer : C
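For context, result set caching is enabled with standard T-SQL; a minimal sketch follows (Pool1 comes from the question, the rest is generic). Results are cached only for deterministic queries, so a query that calls a non-deterministic function such as GETDATE() typically will not have its results cached, which is usually the distinction being tested.
Example code:
    -- Enable result set caching for the dedicated SQL pool (run while connected to master).
    ALTER DATABASE [Pool1] SET RESULT_SET_CACHING ON;

    -- Or turn it on for the current session only.
    SET RESULT_SET_CACHING ON;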
You have an Azure subscription that contains an Azure data factory named ADF1.
From Azure Data Factory Studio, you build a complex data pipeline in ADF1.
You discover that the Save button is unavailable and there are validation errors that prevent the pipeline from being published.
You need to ensure that you can save the logic of the pipeline.
Solution: You enable Git integration for ADF1.
Does this meet the goal?
Answer : B
You have an Azure Data Factory pipeline named pipeline1 that is invoked by a tumbling window trigger named Trigger1. Trigger1 has a recurrence of 60 minutes.
You need to ensure that pipeline1 will execute only if the previous execution completes successfully.
How should you configure the self-dependency for Trigger1?
Answer : D
Tumbling window self-dependency properties
In scenarios where the trigger shouldn't proceed to the next window until the preceding window has completed successfully, build a self-dependency. A self-dependency trigger that depends on the success of earlier runs of itself within the preceding hour has the properties indicated in the following code. Here the dependency's offset of -01:00:00 points at the immediately preceding window, and its size of 01:00:00 matches the trigger frequency, so each hourly run waits for the prior window to succeed.
Example code:
{
    "name": "DemoSelfDependency",
    "properties": {
        "runtimeState": "Started",
        "pipeline": {
            "pipelineReference": {
                "referenceName": "Demo",
                "type": "PipelineReference"
            }
        },
        "type": "TumblingWindowTrigger",
        "typeProperties": {
            "frequency": "Hour",
            "interval": 1,
            "startTime": "2018-10-04T00:00:00Z",
            "delay": "00:01:00",
            "maxConcurrency": 50,
            "retryPolicy": {
                "intervalInSeconds": 30
            },
            "dependsOn": [
                {
                    "type": "SelfDependencyTumblingWindowTriggerReference",
                    "size": "01:00:00",
                    "offset": "-01:00:00"
                }
            ]
        }
    }
}
You have an Azure subscription that contains an Azure Synapse Analytics dedicated SQL pool named Pool1. Pool1 receives new data once every 24 hours.
You have the following function.
You have the following query.
The query is executed once every 15 minutes and the @parameter value is set to the current date.
You need to minimize the time it takes for the query to return results.
Which two actions should you perform? Each correct answer presents part of the solution.
NOTE: Each correct selection is worth one point.