
Saturday, 7 December 2024

Synapse snack: mount storage container in notebook

Case
I want to mount a Storage Account container in my notebook to write some files to it. Preferably without storage account keys.
Mount Storage Account in notebook

Solution
You can use the Synapse Spark Utils to mount a storage account container within a notebook. It requires a Linked Service that points to your Storage Account and does not use a Self-Hosted Integration Runtime.

For this we use the default Linked Service, which ends with [workspacename]-WorkspaceDefaultStorage. By default this Linked Service uses the System-assigned managed identity to authenticate to the Storage Account connected to Synapse.

Since we might be using this in a DTAP environment we don't want to hardcode the Linked Service name. Instead we will retrieve the Synapse workspace name with code and concatenate "-WorkspaceDefaultStorage" to it.
Default Workspace Linked Service

Now when you want to debug your code in Synapse Studio, it's best to make sure your Spark pool is also running as the managed identity, otherwise it will use your personal account. Also make sure that the Storage Blob Data Contributor role is assigned to that identity on the Storage Account.
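
As a rough sketch (not part of the original post), assigning that role with the Az PowerShell module could look something like this; the object id and the scope are placeholders you need to replace with the managed identity of your Synapse workspace and the resource id of your Storage Account:
# Sketch: assign Storage Blob Data Contributor to the Synapse managed identity
New-AzRoleAssignment `
    -ObjectId "<object-id-of-synapse-managed-identity>" `
    -RoleDefinitionName "Storage Blob Data Contributor" `
    -Scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Storage/storageAccounts/<storageaccountname>"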

Now the Python code. There is a lot of display output in the code to show what is happening, but there are no checks and no error handling. You might want to add that before going to production to make your code more robust and the error messages easier to read (see the sketch after the code).
# Import necessary modules
from notebookutils import mssparkutils  # Utilities for Synapse notebooks
import re  # Module for regular expressions
import os  # Module for interacting with the operating system

# Parameters
container_name = "mycontainer"  # Name of the storage container
display("Container: " + container_name)

# Retrieve Synapse Workspace Name to get the Linked Service
workspace_name = mssparkutils.env.getWorkspaceName()
display("Synapse Workspace: " + workspace_name)

# Determine the default Linked Service name using the Workspace Name
linked_service_name = f"{workspace_name}-WorkspaceDefaultStorage"
display("Default Linked Service: " + linked_service_name)

# Retrieve the full connection string from the Linked Service to retrieve storage account
connection_string = mssparkutils.credentials.getFullConnectionString(linked_service_name)
display("Connection String: " + connection_string)

# Extract storage account name from the connection string using regex
# Expected format: url=https://{storage_account_name}.dfs.core.windows.net
storage_account_pattern = r'//([^\.]+)\.'
storage_account_name = re.search(storage_account_pattern, connection_string).group(1)
display("Storage Account: " + storage_account_name)

# Mount the Storage Account container via the Linked Service
m = mssparkutils.fs.mount(
    f"abfss://{container_name}@{storage_account_name}.dfs.core.windows.net",
    "/mymount",
    {"linkedService": linked_service_name}
)

# Get the local path of the mounted container folder
# Example path format: '/synfs/notebook/{job_id}/{mount_name}'
root_folder = mssparkutils.fs.getMountPath("/mymount")
display("Root Folder: " + root_folder)

# Create a file path for a new dummy file within the mounted folder
# Note that the subfolder should already exist, or add code to create it.
file_name = os.path.join(root_folder, "myfolder/dummy_file.txt")
display("Creating File: " + file_name)

# Content to be written to the new dummy file
file_content = "This is an example file created and saved using Python."

# Open the file in write mode and write the content to it
with open(file_name, 'w') as file:
    file.write(file_content)
display("File created")

# Clean up mount 
um = mssparkutils.fs.unmount("/mymount") 
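
As mentioned above the code has no checks or error handling. A minimal sketch of what that could look like, wrapping the mount and the file write in a try/finally block and creating the subfolder if needed (it reuses the example names mycontainer, /mymount and myfolder from the code above):
# Sketch: mount, write and always unmount, with some basic checks
try:
    mounted = mssparkutils.fs.mount(
        f"abfss://{container_name}@{storage_account_name}.dfs.core.windows.net",
        "/mymount",
        {"linkedService": linked_service_name}
    )
    if not mounted:
        raise Exception("Mount failed, check the Linked Service and its permissions")

    root_folder = mssparkutils.fs.getMountPath("/mymount")

    # Create the subfolder if it doesn't exist yet
    target_folder = os.path.join(root_folder, "myfolder")
    os.makedirs(target_folder, exist_ok=True)

    # Write the dummy file
    with open(os.path.join(target_folder, "dummy_file.txt"), "w") as file:
        file.write("This is an example file created and saved using Python.")
finally:
    # Always clean up the mount, even when something failed
    mssparkutils.fs.unmount("/mymount")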

Now you will see the file appear in your storage account.
dummy file created in storage account

Conclusion
In this post we showed you how to use the managed identity from a Linked Service to create a mount to a storage account container within a notebook. With this you can easily create or edit files in your Storage Account when the pipeline doesn't provide the options you need.

Note: this code uses 'regular' Python and not PySpark, which in some cases is unavoidable. However, it also means that in Synapse this code only runs on the head node, so all worker nodes of your cluster are sitting idle while still costing you money. For short processes that doesn't hurt much, but for longer running jobs you should also consider alternatives like Azure Function Apps.

Thanks to Jan Remko Yntema


Sunday, 22 September 2024

Synapse - Run all notebooks in a foreach loop

Case
I have several notebooks to create/fill Silver and Gold tables which I want to execute. Is there a way to execute several notebooks in a foreach loop? That would save me from having to create and maintain a pipeline with just lots of Notebook activities.

Notebook activity in FOREACH

Solution
The properties of the Notebook activity can be overridden with an expression in the Dynamic Content section. This means you can use a collection of notebook names and execute them within a Foreach activity. 

The smart part of the solution is creating the collection of notebooks so that you can iterate over it while it also stays up to date. This solution uses the Synapse REST API to get all notebooks from Synapse and then uses a filter to keep only a selection of those notebooks.

The hard part is that the REST API will return multiple pages of notebooks if you have a lot of them. The entire solution looks like this. Only the last part is for looping over and executing the notebooks; the first few activities are all for retrieving the collection of notebooks. It perhaps looks a bit complex, but looks can be deceiving.
The solution

1) Parameter
This solution uses one string pipeline parameter to provide a folder path that we can use to filter the notebooks down to a selection. The name of the parameter is NotebookFolderPath.
String parameter to get a selection of notebooks

2) Variables
The UNTIL loop to retrieve all notebooks uses three variables. The first is a String variable containing the URL of the REST API: SynapseRestAPI. The second and third are Array variables to store the response of the REST API: Notebooks and Notebooks_Temp.
The pipeline variables to retrieve and store the notebooks

3) Determine Rest API URL
The REST API will return multiple pages with details of all notebooks if you exceed the max number of notebooks per page. Therefore we will first determine the initial URL for the first page before the UNTIL loop. Later on, the URL of the successive pages is retrieved within the UNTIL loop by reading the response of the REST API.

For the initial URL we need the name of the Synapse workspace. Since this can be retrieved via an expression, we can make the URL dynamic:
@concat(
    'https://'
    ,pipeline().DataFactory
    ,'.dev.azuresynapse.net/notebooks?api-version=2021-04-01'
    )
For this we will start with a Set Variable activity in front of the UNTIL loop.
Set Variable task to determine Rest API URL

4) Until loop
Within the UNTIL loop we use a WEB activity for the first REST API call. The UNTIL loop uses its output to check whether it contains a property nextLink. If that property is present in the output then there is a next page and this property will contain the REST API URL for it.

The name of the WEB activity is web_GetNotebooks and therefore the expression of the UNTIL is:
@not(
    contains(
            activity('web_GetNotebooks').output
            ,'nextLink'
            )
    )
UNTIL loop

5) Web activity
The first activity in the UNTIL is the WEB activity. It's easier to add this activity to the UNTIL before writing the expression of the step above.
As mentioned above its name is web_GetNotebooks. The important settings are:
  • URL - @variables('SynapseRestAPI')
  • Method - GET
  • Authentication - System-assigned managed identity
  • Resource - https://dev.azuresynapse.net/
And make sure to select your SHIR in the advanced settings if you use one. Also make sure to give the Synapse workspace at least the Reader role on itself, otherwise it can't call its own REST API.
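
To make the expressions in the next steps easier to follow, below is an abbreviated, hypothetical sketch of the response of this call. Only the properties that the expressions rely on (value, nextLink, name and properties.folder.name) are shown and all values are made-up examples:
{
  "value": [
    {
      "name": "NB_FILL_SILVER_TABLE1",
      "properties": { "folder": { "name": "Silver" } }
    }
  ],
  "nextLink": "https://myworkspace.dev.azuresynapse.net/notebooks?api-version=2021-04-01&continuationToken=..."
}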
WEB activity for REST API

6) Union output to temp
Next we need two Set Variable activities in the UNTIL. With the first Set Variable activity we union the output of the REST API with the value of the Notebooks array variable and store it in the Notebooks_Temp array variable. This has to be a two-step task, hence the temp variable. The expression looks like:
@union(
    activity('WEB_GetNotebooks').output.value
    ,variables('Notebooks')
    )

In the first iteration the Notebooks variable will still be empty, but in all subsequent iterations it will have been filled by the next activity.
Union output to temp array variable

7) Use temp to fill variable
In the previous activity we filled the Notebooks_Temp variable. In this next step we store the value of Notebooks_Temp in the main array variable Notebooks. Then we can use this value to union it in the next iteration.
Store temp value in main variable

8) Determine URL of next Rest API call
The last activity in the UNTIL is to check whether there is a next page. If there is, we fill the string variable SynapseRestAPI with that URL; if not, we fill it with an empty string and the UNTIL loop will stop.
@if(
    contains(
            activity('WEB_GetNotebooks').output
            , 'nextLink'
            )
    ,activity('WEB_GetNotebooks').output.nextLink
    ,''
    )
Retrieve URL of next page

9) Filter notebook array
The notebook array is now filled with all published notebooks. If you only want certain notebooks from a specific folder then you need to add a FILTER activity. The path for filtering is retrieved from the pipeline parameter. The startswith will also retrieve notebooks from all subfolders; replace it with equals if you don't want that:
@if(
    not(
        empty(item().properties.folder)
        )
    , startswith(
        item().properties.folder.name
        , pipeline().parameters.NotebookFolderPath
        )
    , false
    )
Filter notebooks by folder path

10) Foreach notebook
Now you can use the output of the FILTER activity in the FOREACH loop. You should also tune the FOREACH settings to the available Spark nodes. If you use a small node then you probably won't be able to run 20 notebooks at a time.
Foreach Notebook

11) Execute Notebook
The last step of the solution is executing all notebooks via the Notebook activity. The notebook name property should be filled with an expression: @item().name
Execute all notebooks

The loop will of course only work if the parameters and settings for each notebook are all equal. Otherwise you will have a lot of expression work to do, which probably makes the solution too difficult to maintain.

Tip
In the User properties tab of the Notebook activity you can add a few properties which you can show in the output, making them very handy for debugging. In this case there is a NotebookName property with the expression @{item().name} and a FolderPath property with the expression @{item().properties.folder.name}.
Adding User Properties

Now when debugging your pipeline you can add those properties as columns. Click on the icon in the User Property column to see the available columns. This is useful for loops like UNTIL and FOREACH. Now you can for example instantly see which notebook fails and which one succeeds without checking the input of each Notebook activity.
Showing user properties in Output window

Conclusion
This solution will make it easy to iterate through a whole bunch of notebooks without adding them one by one to a pipeline. You will have to use folders for your notebooks (or a certain notebook naming convention) if you don't want them all to be executed. In development this only works if you first publish the notebooks, because the REST API only returns published/live notebooks. Don't forget to use the User Properties tip to make debugging a lot easier.


Saturday, 21 September 2024

Synapse - ReferenceError: Response is not defined

Case
My Synapse CICD deployment pipeline is failing with some kind of error pointing to a JavaScript file main.js: ReferenceError: Response is not defined
ReferenceError: Response is not defined

When using the Synapse Workspace Deployment add-on for Azure DevOps, it gives me an error (and it did work before) when trying to deploy a Synapse Analytics Workspace:
ReferenceError: Response is not defined
    at Module../node_modules/@trident/data-access-artifact/src/lib/models/http.js (D:\a\1\s\downloads\main.js:360446:40)
    at __webpack_require__ (D:\a\1\s\downloads\main.js:668203:42)
    at Module../node_modules/@trident/data-access-artifact/src/lib/models/index.js (D:\a\1\s\downloads\main.js:360536:63)
    at __webpack_require__ (D:\a\1\s\downloads\main.js:668203:42)
    at Module../node_modules/@trident/data-access-artifact/src/lib/clients/artifact-client.js (D:\a\1\s\downloads\main.js:359333:65)
    at __webpack_require__ (D:\a\1\s\downloads\main.js:668203:42)
    at Module../node_modules/@trident/data-access-artifact/src/lib/clients/index.js (D:\a\1\s\downloads\main.js:359814:74)
    at __webpack_require__ (D:\a\1\s\downloads\main.js:668203:42)
    at Module../node_modules/@trident/data-access-artifact/src/index.js (D:\a\1\s\downloads\main.js:359310:70)
    at __webpack_require__ (D:\a\1\s\downloads\main.js:668203:42)

Entire error

Solution
This Synapse add-on for DevOps requires Node.js on your Azure DevOps agent. If you installed an older (or wrong) version of Node.js then you will get this error. If you install Node.js in your YAML pipeline then you just need to raise the version. At the moment version 18 is the minimum version that works, but 22 also works. However, it has shifted a lot over the past years; I think it started with version 10.

###################################
# Installs Node.js on agent
###################################
- task: NodeTool@0
  displayName: 'Install Node.js'
  inputs:
    versionSpec: '22.x'
    checkLatest: true  

Conclusion
Just make sure to regularly check the available Node.js versions and then test whether they work with your Synapse release process. In the past the most recent version didn't always work for me.



Monday, 1 January 2024

Toggle Azure Synapse Triggers during deployment

Case
To turn off the Synapse triggers before deployment and turn them back on after deployment, we used a PowerShell script for ADF that we rewrote for Synapse. However, there is now a new task available that avoids the use of scripting, but it is a bit hidden if you mainly use YAML pipelines.
Azure Synapse Toggle Triggers Dev (Preview)

Solution
If you go to the release pipelines (not YAML, but the visual variant) and search for Synapse under add task, then you will find the Synapse workspace deployment task, but also the Azure Synapse Toggle Triggers Dev task. It has been out there for a couple of months but is still in preview (Microsoft is probably too busy with Fabric at the moment). For these tasks you need to add the Synapse add-on to Azure DevOps, but if you are already using the deployment task then you already have it. The postfix of the task name is Dev, so it is to be expected that they will remove it once it is Generally Available.
Synapse Tasks

Since we have a preference for YAML we need to know what the task name is and which properties are available. There is no YAML documentation available, but if you fill in the form in the release pipeline then you can view the YAML code. And by clicking on the circled i icon you will get some more info about the properties.
View YAML code

First the YAML code for toggling off all triggers which you need to do before deploying a new version of Synapse. With and without parameters.
steps:
- task: AzureSynapseWorkspace.synapsecicd-deploy.toggle-trigger.toggle-triggers-dev@2
  displayName: 'Toggle Azure Synapse Triggers'
  inputs:
    azureSubscription: SP-synw-d
    ResourceGroupName: 'dwh-synw-d-we-001'
    WorkspaceName: 'dwh-synw-d-we-001'
    ToggleOn: false
    Triggers: '*'
    
    
- task: AzureSynapseWorkspace.synapsecicd-deploy.toggle-trigger.toggle-triggers-dev@2
  displayName: 'Toggle Azure Synapse Triggers'
  inputs:
    azureSubscription: ${{ parameters.SerCon }}
    ResourceGroupName: ${{ parameters.Synapse_ResourceGroupName }}
    WorkspaceName: ${{ parameters.Synapse_WorkspaceName }}
    ToggleOn: false
    Triggers: '*'
Toggle all triggers OFF

Then the same code but now for enabling certain triggers. Again with and without parameters.
steps:
- task: toggle-triggers-dev@2
  displayName: 'Toggle Azure Synapse Triggers'
  inputs:
    azureSubscription: SP-synw-d
    ResourceGroupName: 'dwh-synw-d-we-001'
    WorkspaceName: 'dwh-synw-d-we-001'
    ToggleOn: true
    Triggers: 'TR_DAILY_RUN_DEV,TR_DAILY_DOWNSCALE_GEN'


- task: toggle-triggers-dev@2
  displayName: 'Toggle Azure Synapse Triggers'
  inputs:
    azureSubscription: ${{ parameters.SerCon }}
    ResourceGroupName: ${{ parameters.Synapse_ResourceGroupName }}
    WorkspaceName: ${{ parameters.Synapse_WorkspaceName }}
    ToggleOn: true
    Triggers: '${{ parameters.Synapse_EnableTriggers }}'
Toggle specific triggers ON

You can remove the long prefix in the task name and just keep toggle-triggers-dev@2. Property ToggleOn set to 'false' means stop the triggers and 'true' means start the mentioned triggers. The Triggers property should contain a '*' to stop/start everything, but you probably only want to enable certain triggers on each environment. In that case you can use 'trigger1,trigger2,trigger3' to do that, but without spaces around each comma. With extra spaces you will get an error that the trigger is not found. In the error message you can see the extra space in front of the trigger name.
##[error]Refer to above logs for more details:
The Trigger TR_DAILY_DOWNSCALE_GEN was not found

Another issue is that the Triggers property must be filled. If you have one environment where you don't want to start any of the triggers then you will get an error saying that the property is required. I would rather see a warning instead, or another user-friendly solution that works around some of the YAML shortcomings.
##[error]Unhandled: Input required: Triggers

You can solve this by adding a YAML condition to the task where you check whether the list of triggers is empty. If it is, the task will be skipped. Unfortunately you cannot use a parameter in a condition. The workaround is to read directly from a variable group:
- task: toggle-triggers-dev@2
  displayName: 'Toggle Azure Synapse Triggers'
  condition: ne(replace(variables.Synapse_EnableTriggers, ' ', ''), '')
  inputs:
    azureSubscription: ${{ parameters.SerCon }}
    ResourceGroupName: ${{ parameters.Synapse_ResourceGroupName }}
    WorkspaceName: ${{ parameters.Synapse_WorkspaceName }}
    ToggleOn: true
    Triggers: '${{ parameters.Synapse_EnableTriggers }}'

Alternatives
If you don't like that workaround there are two alternatives until Microsoft fixes the issue. You could use the OverrideParameters option to override the endTime property of a trigger during the Synapse deployment as shown here for Data Factory, or use a PowerShell activity to enable the triggers just like for ADF. For this PowerShell option you need to create a PowerShell file in the repository under \CICD\PowerShell with the name Set_Triggers.ps1 (the name referenced in the YAML below) and the following code (Synapse version):

param
(
    [parameter(Mandatory = $true)] [String] $WorkspaceName,
    [parameter(Mandatory = $true)] [String] $ResourceGroupName,
    [parameter(Mandatory = $true)] [string] $EnableTriggers,
    [parameter(Mandatory = $false)] [Bool] $DisableAllTriggers = $true
)

Write-Host "Checking existance Resource Group [$($ResourceGroupName)]..."
Get-AzResourceGroup -Name $ResourceGroupName > $null
Write-Host "- Resource Group [$($ResourceGroupName)] found."

Write-Host "Checking existance Synapse Workspace [$($WorkspaceName)]..."
Get-AzSynapseWorkspace -ResourceGroupName $ResourceGroupName `
              -Name $WorkspaceName > $null
Write-Host "- Synapse Workspace [$($WorkspaceName)] found."

#Getting triggers
Write-Host "Looking for triggers..."
$Triggers = Get-AzSynapseTrigger -WorkspaceName $WorkspaceName
Write-Host "Found [$($Triggers.Count)] trigger(s)"

# Checking provided triggernames, first split into array
$EnabledTriggersArray = $EnableTriggers.Split(",")
Write-Host "Checking existance of ($($EnabledTriggersArray.Count)) provided triggernames."
# Loop through all provided triggernames
foreach ($EnabledTrigger in $EnabledTriggersArray)
{ 
    # Get Trigger by name
    $CheckTrigger = Get-AzSynapseTrigger    -WorkspaceName $WorkspaceName `
                                            -Name $EnabledTrigger `
                                            -ErrorAction Ignore # To be able to provide more detailed error

    # Check if trigger was found
    if (!$CheckTrigger)
    {
        throw "Trigger $($EnabledTrigger) not found in Synapse Workspace $($WorkspaceName) within resource group $($ResourceGroupName)"
    }
}
Write-Host "- All ($($EnabledTriggersArray.Count)) provided triggernames found."

##############################################
# Disable triggers
##############################################
# Check if all trigger should be disabled
if ($DisableAllTriggers)
{
   # Get all enabled triggers and stop them (unless they should be enabled)
   Write-Host "Getting all enabled triggers that should be disabled."
   $CurrentTriggers = Get-AzSynapseTrigger    -WorkspaceName $WorkspaceName `
                      | Where-Object {$_.RuntimeState -ne 'Stopped'} `
                      | Where-Object {$EnabledTriggersArray.Contains($_.Name) -eq $false}

   # Loop through all found triggers
   Write-Host "- Number of triggers to disable: $($CurrentTriggers.Count)."
   foreach ($CurrentTrigger in $CurrentTriggers)
   {
        # Stop trigger
        Write-Host "- Stopping trigger [$($CurrentTrigger.Name)]."
        try {
            Stop-AzSynapseTrigger -WorkspaceName $WorkspaceName -Name $CurrentTrigger.Name > $null
        } catch {
            Write-Host "error code 1, but disabling trigger that already is disabled"
        }
   }
}

##############################################
# Enable triggers
##############################################
# Loop through provided triggernames and enable them
Write-Host "Enable all ($($EnabledTriggersArray.Count)) provided triggers."

foreach ($EnabledTrigger in $EnabledTriggersArray)
{                   
    # Get trigger details
    $CheckTrigger = Get-AzSynapseTrigger     -WorkspaceName $WorkspaceName `
                                               -Name $EnabledTrigger

    # Check status of trigger
    if ($CheckTrigger.RuntimeState -ne "Started")
    {
        try {
            Write-Host "- Trigger [$($EnabledTrigger)] starting. This will take only a few seconds..."
            Start-AzSynapseTrigger      -WorkspaceName $WorkspaceName `
                                        -Name $EnabledTrigger
        } catch {
            Throw "Error enabling trigger '$EnabledTrigger': $Error[0].Message"
            exit 1
        }
    }
    else
    {
        Write-Host "- Trigger [$($EnabledTrigger)] already started"
    }
}
Then the YAML Code for after the deployment step:
- task: AzurePowerShell@5
  displayName: 'Enabling triggers per environment'
  inputs:
    azureSubscription: ${{ parameters.SerCon }}
    scriptType: filePath
    scriptPath: $(Pipeline.Workspace)\s\CICD\PowerShell\Set_Triggers.ps1
    scriptArguments:
      -WorkspaceName ${{ parameters.Synapse_WorkspaceName }} `
      -ResourceGroupName ${{ parameters.Synapse_ResourceGroupName }} `
      -EnableTriggers "${{ parameters.Synapse_EnableTriggers }}"
    azurePowerShellVersion: latestVersion
    pwsh: true
Conclusion
Microsoft is introducing a very promising new DevOps task for Synapse that will make your CICD process much easier. Hopefully it will be G.A. soon and the small bugs will be solved. In the meantime you could use one of the alternatives if you don't want to use a preview version of the task.

Another solution for Microsoft could be to just add these options to the current deployment task, since you always need to stop all triggers first and then start some of the triggers once the new version has been deployed. Is there a good reason to have these as two separate tasks in DevOps? Let us know in the comments below.


ADDITION
The new task toggle-triggers-dev@2 does not yet work with the new type of Azure Service Connection with Workload Identity federation. This will result in the following error: {"code":"U002","message":"The service connection authScheme WorkloadIdentityFederation is not supported by this task"}.
The service connection authScheme WorkloadIdentityFederation is not supported by this task

Tuesday, 26 December 2023

Give Power BI access to Synapse Serverless SQL Pool

Case
I have a Service Account for my data source in the Power BI Gateway, but how do I give this account access to the (External) Tables located in the Synapse Serverless SQL Pool database?
Give Power BI Access to Delta Tables

Solution
If you create a Data Lakehouse with Synapse then at the moment you still need to put the datamart as a set of external tables in the Synapse Serverless SQL Pool to make it easily accessible for Power BI. Giving access to it is very similar to the solution we showed to give read-only access to an Azure SQL Database when you have a Data Warehouse Architecture. For a Data Lakehouse Architecture we use External Tables that point to a Delta Table instead of regular tables. This requires an extra step with a Database Scoped Credential.

The starting position is that you have a database in the Synapse Serverless SQL Pool with some external tables pointing to your Delta Tables within your 'golden' container. Below is the basic T-SQL code for setting up those External Tables.
  1. The first step is to create a Master Key. We need this for the second step. You can make it more secure by adding a strong password for encryption.
  2. The second step is to create a Database Scoped Credential with the Managed Service Identity of the underlying Synapse Workspace. This allows us to give the underlying Synapse Workspace access to the 'golden' container instead of giving all the separate users access. There are alternatives, so study the options to see which solution fits the security policy of your organization.
  3. The third step is creating an External Data Source that points to your 'golden' container where the Delta Tables are stored. Notice that it uses the credential from the previous step. The location is the URI pointing to a container in a Gen2 Storage Account. ABFSS is short for Azure Blob File System Secure. The format of the URI is: abfss://[containername]@[storageaccountname].dfs.core.windows.net/
  4. The fourth step is creating an External File Format where you can specify options for your source type (CSV/Parquet/JSON/Delta/etc). In this example the format for the Delta Table is very basic.
  5. The fifth and last step is creating an External Table that points to the Delta Table in your data lake. This means the data stays in the data lake and only the table structure is stored in the database. Notice the reference to your data source and file format from the previous steps. The location in the table is a folder path that starts in the root of your container and points to the Delta Table folder.
-- 1. Create Master Key for using Managed Service Identity access
CREATE MASTER KEY;

-- 2. Create Credential to use Managed Service Identity of Synapse
CREATE DATABASE SCOPED CREDENTIAL [SynapseIdentity]
WITH
    IDENTITY = 'Managed Service Identity'
;

-- 3. Create a Data source pointing to the container of your Delta tables
CREATE EXTERNAL DATA SOURCE [DeltaLocation]
	WITH (
	LOCATION = N'abfss://gold@mydevstorage.dfs.core.windows.net/',
    CREDENTIAL = [SynapseIdentity]
	);

-- 4. Create a File format for Delta tables
CREATE EXTERNAL FILE FORMAT [DeltaFormat]
    WITH (
    FORMAT_TYPE = DELTA,
    DATA_COMPRESSION = N'org.apache.hadoop.io.compress.SnappyCodec'
    );

-- 5. Create an external table
CREATE EXTERNAL TABLE [dbo].[dim_abcdef] (
    [integerfield] INT NULL,
    [stringfield] NVARCHAR (400) NULL,
    [datefield] DATETIME2 (7) NULL
)
    WITH (
    LOCATION = N'/delta/dim_abcdef',
    DATA_SOURCE = [DeltaLocation],
    FILE_FORMAT = [DeltaFormat]
    );

When you have created a whole set of those External Tables for your datamart, then you can add a user to the Serverless SQL Pool database which can be used by Power BI to create the Data Source in the Gateway.
  1. First you need to create a user. For this example we used a user from Microsoft Entra ID (Azure Active Directory). Therefore you see the FROM EXTERNAL PROVIDER. This is because our Synapse Workspace is created with the option 'Microsoft Entra authentication only', which means SQL users with passwords or Shared Access Signatures are not allowed.
  2. The second step is to give this newly added user some access. In this simplified example we give the new user db_datareader access to the entire database. If you have a larger or more complex database then you can make this much more sophisticated, for example by giving reader access to only a specific schema or table (see the sketch after the code below).
  3. The last step is giving the user permission to use the Database Scoped Credential that was used in the External Data Source of the External Table. This is done with GRANT and the REFERENCES permission. Without this step you will not be able to query the External Tables.
-- 6. Register the Service Account on the datamart database
CREATE USER [sa-pbi-workspace-d@xyz.com] FROM EXTERNAL PROVIDER;

-- 7. Give the Service Account read access to the tables
ALTER ROLE [db_datareader] ADD MEMBER [sa-pbi-workspace-d@xyz.com]

-- 8. Give the Service Account REFERENCES permission on the Synapse credential
GRANT REFERENCES ON DATABASE SCOPED CREDENTIAL::SynapseIdentity TO [sa-pbi-workspace-d@xyz.com]
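
As mentioned in step 2, you can also scope the read access down instead of using db_datareader. A minimal sketch, reusing the example user from above and assuming your external tables live in the dbo schema:
-- Alternative for step 7: give read access on a single schema only
GRANT SELECT ON SCHEMA::dbo TO [sa-pbi-workspace-d@xyz.com];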
Conclusion
In this post we showed you how to give a user that was created for Power BI read-only access to the tables in your Azure Synapse Serverless SQL Pool database. It is very similar to giving access to a regular database. Now you can use this user for your Power BI gateway data source.

In the future we can hopefully use the Power BI Direct Lake mode on regular Azure Storage Accounts or write with Synapse to the Fabric OneLake.

Sunday, 24 December 2023

Deploying Synapse Serverless SQL pool with DevOps

Case
The external tables and other objects in my Synapse Serverless SQL pool are not stored in the Git repository of Synapse. How do I deploy those objects through the DTAP street?
Release Synapse Serverless SQL Pools with DevOps

Solution
You can use the good old database project in Visual Studio with the SQL Server Data Tools (SSDT) add-on to store the External Tables, File Formats and Data Sources in a Git repository and then use Azure DevOps with YAML pipelines to release the database objects through the DTAP street. It is very similar to deploying a regular Azure SQL Database, with some minor differences.


1) Download Visual Studio
First download and install Visual Studio 2022, then make sure to add the SQL Server Data Tools (SSDT) extension which will add the Database project option. If you already have Visual Studio 2022 then make sure to update it, because versions before 17.7 don't support SSDT for Serverless SQL pools.
Visual Studio 2022 with SQL Server Data Tools

2) Create repository
This example is for Azure DevOps, but feel free to use GitHub instead. Create a new repository. In the root we have two folders:
  • CICD: to store all release-related files such as the YAML files. In fact there is a subfolder called YAML to store the .yml files
  • SQL: to store the database project from Visual Studio.
The repository folder structure

After you have created the initial folder structure you need to clone the repository to Visual Studio by hitting the clone button in the upper right corner.
Clone repository to Visual Studio

3) Create new database project
Once in Visual Studio you now have a cloned repository folder. In the SQL folder we will create a new database project.
Cloned DevOps repository in VS2022

Create a new project via the File menu and search for SQL in the upper search textbox.
Create new project

Create the new SQL Server Database Project in the SQL folder from your repository. Since there will be only one project in the solution, the solution and project are stored in the same folder. Otherwise you will get an additional subfolder level.
Create new SQL Server Database Project

4) Create Azure DevOps Service Connection
Within your Azure DevOps project click on the Project settings and under Service connections create a new Service connection of the type Azure Resource Manager. You will need a Microsoft Entra Service Principal for this. Depending on the organization, project size and number of environments, create one or more Service Connections. Ideally, one per DTAP environment.
Create Service Connection

5) Give Service Principal access to database
Go to Synapse and open a new SQL Script. Then either create a new database in your Serverless SQL pool to store your external tables or use an existing one. Our example database is called datamart since it will host external tables from our gold layer a.k.a. the datamart with facts and dimensions for Power BI.

Then switch to your datamart database, either via the USE command or via the database selectbox. Once you are in your datamart database, create an (external) user for the Service Principal (SP) that you used in the Azure DevOps Service connection from the previous step. After that we need to give the SP enough rights to deploy all objects to this database: db_owner.
Give Service Principal access to Serverless SQL pool DB

-- First create a new database (if you don't have one)
CREATE DATABASE datamart;

-- Once created switch to your (new) database
USE datamart;

-- Create a new database user for the Service Principal
-- used in the DevOps Service Connection
CREATE USER [myserviceprincipal] FROM EXTERNAL PROVIDER;

-- Give the Service Principal enough rights to create
-- external resources and a master key
ALTER ROLE [db_owner] ADD MEMBER [myserviceprincipal];
You have to repeat this for all Serverless SQL pool databases in your DTAP environment. Note that if you do the SQL development within Synapse Studio, and not in Visual Studio, then you don't need to deploy to the Development environment and therefore also don't need to give the SP access to your development environment. In that case only do this for Test, Acceptance and Production.

Tip: you can also give the SP access within Synapse by making it either Synapse Administrator or Synapse SQL Administrator. However, that means access to everything within Synapse or to all Serverless SQL Pool databases within Synapse. If you are already using the same SP to roll out Synapse with Infrastructure as Code (with Bicep or Terraform) then it already has the Synapse Administrator role.

6) Master Key and External Resources
For this example we will use a basic example with the following code. Note that the DeltaLocation is pointing to the Development environment of our data platform. During the deployment we will need to override this, since each DTAP environment has its own Azure Storage Account.
Create external table on delta table (for Power BI)

-- Create Master Key for using Managed Service Identity access
CREATE MASTER KEY;

-- Create Credential to use Managed Service Identity of Synapse
CREATE DATABASE SCOPED CREDENTIAL [SynapseIdentity]
WITH
    IDENTITY = 'Managed Service Identity'
;

-- Create a File format for Delta tables
CREATE EXTERNAL FILE FORMAT [DeltaFormat]
    WITH (
    FORMAT_TYPE = DELTA,
    DATA_COMPRESSION = N'org.apache.hadoop.io.compress.SnappyCodec'
    );

-- Create a Data source pointing to the container of your Delta tables
CREATE EXTERNAL DATA SOURCE [DeltaLocation]
	WITH (
	LOCATION = N'abfss://gold@mydevstorage.dfs.core.windows.net/',
    CREDENTIAL = [SynapseIdentity]
	);

-- Create an external table
CREATE EXTERNAL TABLE [dbo].[dim_abcdef] (
    [integerfield] INT NULL,
    [stringfield] NVARCHAR (400) NULL,
    [datefield] DATETIME2 (7) NULL
)
    WITH (
    LOCATION = N'/delta/dim_abcdef',
    DATA_SOURCE = [DeltaLocation],
    FILE_FORMAT = [DeltaFormat]
    );
You can encrypt the master key with an additional password. The credential with the Managed Service Identity (MSI) makes sure that this specific Synapse identity is used to connect to the data source (the Delta table in your Azure Storage Account a.k.a. data lake a.k.a. Delta Lake). This means that if you have access to the Serverless SQL Pool and its credentials, you also have access to the underlying storage account.
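
A minimal sketch of that optional hardening; the password value is of course just a placeholder:
-- Optional: create the master key with a strong password for encryption
CREATE MASTER KEY ENCRYPTION BY PASSWORD = '<a-strong-password-here>';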

7) Schema Compare in Visual Studio
The datamart database located in the Serverless SQL pool of Synapse in the development environment is filled with all the above SQL objects, like the external table. Now we want to get those objects into our database project. For this we need to do a schema compare in the database project by right-clicking the project in the Solution Explorer. The schema compare is between the development database and the database project.
Schema Compare

In the Schema Compare window click on options to exclude Role Memberships and Users, because these are different for each environment in your DTAP street. This will prevent them from showing up in the list to compare; otherwise you have to uncheck them each time you do a schema compare.
Application-scoped

You also want to add Database Scoped Credentials and Master Key in the Non-Application-scoped section. These are the first two items in the script from the previous step. Without these your external tables won't work in the test/acceptance/production database. Click OK to confirm the settings.
Non-Application-scoped

Now make sure your dev database is on the left side in the schema compare and the database project is on the right side. You can find the URL of your Serverless SQL pool in the Azure portal on the overview page of your Synapse. It's called Serverless SQL endpoint and looks like [synapsename]-ondemand.sql.azuresynapse.net.
Azure Synapse Workspace overview page

Then hit the Compare button to see all the new objects that are not yet in your project. Tip: save this schema compare (including the changed options) in your database project. Then you can reuse it.
Result of schema compare

Verify the objects and hit the Update button to include them in your database project. 
Result of the update

8) Override location of External Data Source
If you open the external data source, you will notice the hardcoded URL of the gold container in the development environment. Replace that URL with $(DeltaLocation).
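
A sketch of what the script of the external data source then looks like, with the SQLCMD variable in place of the hardcoded URL:
-- External data source with a SQLCMD variable instead of a hardcoded URL
CREATE EXTERNAL DATA SOURCE [DeltaLocation]
    WITH (
    LOCATION = N'$(DeltaLocation)',
    CREDENTIAL = [SynapseIdentity]
    );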
Replace hardcoded URL by variable

Go to the properties of your project (not the solution) and then go to the SQLCMD Variables. Enter a new variable called $(DeltaLocation) and enter the URL of your Development environment (the URL you just replaced). You only need to fill in the Default column. Save it.
Add variable to SQLCMD Variables

Now we need to create a Publish Profile file, which we can override during deployment in the YAML pipeline from Azure DevOps. Right-click the project (not the solution) and click Publish. In this window fill in the values from your development environment (we will replace them later on): the target database connection and the value of the SQLCMD variable. Then hit the Create Profile button, which will add the Publish Profile file to your database project. After that, cancel the window because we will not publish via Visual Studio.
Publish Profile

9) Stage, Commit and Sync repository
The changes in Visual Studio need to go back to the repository in Azure DevOps. Branch strategy and branch policies are ignored in the explanation to keep things short and simple.
Commit and sync changes to Azure DevOps repos

Now all changes are stored in the Azure repository. The next step is setting up the CICD process.
Azure DevOps repository

10) Pipeline Library - Variable Groups
To make the YAML scripts reusable for multiple Serverless SQL pools of your DTAP street we need to create some Variable Groups in Azure DevOps. You can find them under Pipelines - Library. You need one for each of your DTAP environments and optionally one for general variables that don't change between the environments.
DevOps Variable Groups

In this case we will store the database project name and the database name in the general variable group. You could just use one variable because they probably often have the same value.
  • SqlDatabaseName - Name of the database within the Serverless SQL Pool
  • SqlProjectName - Name of the database project within Visual Studio
For the environment-specific variable groups we have the SQL server name, which is equal to the Synapse workspace name. So if you are also deploying the Synapse workspace then you could reuse that one. The other one is the storage account location.
  • ADLSLocation - For replacing the storage account URL between environments
  • SqlServerName - For storing the name of the server (equal to the Synapse workspace name)
11) YAML pipelines
The last development step is setting up the YAML files. This example has two YAML files located in the CICD\YAML folder of the repository. Tip: Visual Studio Code has some nice YAML editors, but you can also just use the Azure DevOps website to create and edit the YAML files.

BuildSqlServerless.yml
The YAML starts with some general steps, like reading the variable group from the previous step. Then it shows that a pipeline run is triggered when a change happens in the SQL folder in one of those four branches.
Step 1, checkout, gets the repository content to the agent.
Step 2, treeview, is just for debugging and shows you all files on the agent. Useful for step 3.
Step 3, build, builds the database project that was just retrieved from the repository.
Step 4, copy, copies the files required for deployment to an artifact staging folder.
Step 5, publish, publishes the artifact so that it can be used in the next YAML file.
###################################
# General Variables
###################################
variables:
  - group: SQLServerlessParamsGen
  
###################################
# When to create a pipeline run
###################################
trigger:
  branches:
    include:
    - development
    - test
    - acceptance
    - main
  paths:
    include:
    - SQL/*
 
stages:
###################################
# Create DacPac Artifact
###################################
- stage: CreateSQLArtifact
  displayName: Create SQL Artifact (dacpac)
 
  jobs:
  - job: Job
    displayName: 'Build DacPac'
    workspace:
      clean: all
    pool:
      vmImage: 'windows-latest'
      # name: my-agentpool
    steps:
 
    ###################################
    # 1 Retrieve Repository
    ###################################
    - checkout: self
      displayName: '1 Retrieve Repository'
      clean: true
 
    ###################################
    # 2 Show treeview of agent
    ###################################
    - powershell: |
        Write-Output "This is the folder structure within Pipeline.Workspace"
        tree "$(Pipeline.Workspace)" /F
      displayName: '2 Treeview Workspace'
 
    ###################################
    # 3 Build Visual Studio project
    ###################################
    - task: MSBuild@1
      displayName: '3. Creating Artifact'
      inputs:
        solution: '$(System.DefaultWorkingDirectory)/SQL/$(SqlProjectName)/$(SqlProjectName).sqlproj'
        msbuildArchitecture: x86
        msbuildVersion: latest
 
    ###################################
    # 4 Stage dacpac and publish.xml
    ###################################
    - task: CopyFiles@2
      displayName: '4. Copy Artifact'
      inputs:
        contents: |
          **\*.dacpac
          **\*.publish.xml
        TargetFolder: '$(build.artifactstagingdirectory)'
 
    ###################################
    # 5 Publish dacpac and xml artifact
    ###################################
    - task: PublishPipelineArtifact@1
      displayName: '5. Publish Artifact'
      inputs:
        targetPath: '$(Build.ArtifactStagingDirectory)'
        artifact: 'SQL_Dacpac'
        publishLocation: 'pipeline'
After that it calls the second YAML file that does the actual deployment. Because we want to reuse it for all environments you need to pass some parameter values that are coming from the variable groups from the previous step. The hardcoded parameter is the name of the Azure DevOps Service Connection that uses the SP to connect to the database. The env parameter is just for some logging/debugging purposes to show to which environment you are deploying.
###################################
# Deploy Test environment
###################################
- stage: DeployTst
  displayName: Deploy TST
  variables:
    - group: SQLServerlessParamsTst
  pool:
    vmImage: 'windows-latest'
    # name: my-agentpool
  condition: and(succeeded(), eq(variables['Build.SourceBranchName'], 'test'))
  dependsOn: CreateSQLArtifact
  jobs:
    - template: DeploySqlServerless.yml
      parameters:
        env: TST
        ServiceConnection: SC-Synapse-T
        SqlServerName: $(SqlServerName)
        SqlDatabaseName: $(SqlDatabaseName)
        SqlProjectName: $(SqlProjectName)
        ADLSLocation: $(ADLSLocation)

###################################
# Deploy Acceptance environment
###################################
- stage: DeployAcc
  displayName: Deploy ACC
  variables:
    - group: SQLServerlessParamsAcc
  pool:
    vmImage: 'windows-latest'
    # name: my-agentpool
  condition: and(succeeded(), eq(variables['Build.SourceBranchName'], 'acceptance'))
  dependsOn: CreateSQLArtifact
  jobs:
    - template: DeploySqlServerless.yml
      parameters:
        env: ACC
        ServiceConnection: SC-Synapse-A
        SqlServerName: $(SqlServerName)
        SqlDatabaseName: $(SqlDatabaseName)
        SqlProjectName: $(SqlProjectName)
        ADLSLocation: $(ADLSLocation)

###################################
# Deploy Production environment
###################################
- stage: DeployPrd
  displayName: Deploy PRD
  variables:
    - group: SQLServerlessParamsPrd
  pool:
    vmImage: 'windows-latest'
    # name: my-agentpool
  condition: and(succeeded(), eq(variables['Build.SourceBranchName'], 'main'))
  dependsOn: CreateSQLArtifact
  jobs:
    - template: DeploySqlServerless.yml
      parameters:
        env: PRD
        ServiceConnection: SC-Synapse-P
        SqlServerName: $(SqlServerName)
        SqlDatabaseName: $(SqlDatabaseName)
        SqlProjectName: $(SqlProjectName)
        ADLSLocation: $(ADLSLocation)

Build and publish artifact

DeploySqlServerless.yml
The second YAML file starts with the parameters that are required to call this script, then the environment name in the job that you can use to set some approvals. The first step is just for debugging: it shows the parameter values and a treeview of the agent. You should be able to see the artifact folder, which is useful for setting up the deployment task where you need the paths of the dacpac and publish profile. In the AdditionalArguments property you can override the value of the Storage Account location. If you have multiple SQLCMD variables, just repeat the entire /Variables argument with a space in between (see the example after the code).
parameters:
  - name: env
    displayName: Environment
    type: string
    values:
    - DEV
    - TST
    - ACC
    - PRD
  - name: ServiceConnection
    displayName: Service Connection
    type: string
  - name: SqlServerName
    displayName: Sql Server Name
    type: string
  - name: SqlDatabaseName
    displayName: Sql Database Name
    type: string
  - name: SqlProjectName
    displayName: Sql Project Name
    type: string
  - name: ADLSLocation
    displayName: Azure Data Lake Location
    type: string
 
 
jobs:
    - deployment: deploymentjob${{ parameters.Env }}
      displayName: Deployment Job ${{ parameters.Env }}
      environment: deploy-to-${{ parameters.Env }}
      strategy:
        runOnce:
          deploy:
            steps:
            ###################################
            # 1 Show environment and treeview
            ###################################
            - powershell: |
                Write-Output "Deploying ${{ parameters.SqlProjectName }} to DB ${{ parameters.SqlDatabaseName }} and server ${{ parameters.SqlServerName }} in the ${{ parameters.env }} environment"
                Write-Output "Changing SQLCMD variabele DeltaLocation to value ${{ parameters.ADLSLocation }}"
                tree "$(Pipeline.Workspace)" /F
              displayName: '1 Show environment and treeview Pipeline_Workspace'
 
            ###################################
            # 2 Deploy DacPac
            ###################################            
            - task: SqlAzureDacpacDeployment@1
              displayName: '2 Deploy DacPac'
              inputs:
                azureSubscription: '${{ parameters.ServiceConnection }}'
                AuthenticationType: 'servicePrincipal'
                ServerName: '${{ parameters.SqlServerName }}-ondemand.sql.azuresynapse.net'
                DatabaseName: '${{ parameters.SqlDatabaseName }}'
                deployType: 'DacpacTask'
                DeploymentAction: 'Publish'
                DacpacFile: '$(Pipeline.Workspace)/SQL_Dacpac/SQL/${{ parameters.SqlProjectName }}/bin/debug/${{ parameters.SqlProjectName }}.dacpac'
                PublishProfile: '$(Pipeline.Workspace)/SQL_Dacpac/SQL/${{ parameters.SqlProjectName }}/${{ parameters.SqlProjectName }}.publish.xml'
                AdditionalArguments: /Variables:DeltaLocation=${{ parameters.ADLSLocation }}
                IpDetectionMethod: 'AutoDetect'
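
As mentioned above, multiple SQLCMD variables can be passed by repeating the /Variables argument with a space in between. A sketch with a hypothetical second variable called OtherLocation:
                AdditionalArguments: /Variables:DeltaLocation=${{ parameters.ADLSLocation }} /Variables:OtherLocation=${{ parameters.OtherLocation }}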

Deploy Serverless SQL pool database

Conclusion
In this post you learned to apply an 'old-fashioned' solution to a relatively new service. This solution doesn't only work for the Synapse Serverless SQL Pool, but also for the Synapse Dedicated SQL Pool.

There is still one bug/problem. For the Serverless SQL Pool it seems that updating the External Data Source is not possible. The first deployment is no problem, but altering statements are ignored, probably because there are already external tables referencing this object. You are probably not updating this external data source often enough to make it a big issue, but if you want to do it you can use a pre-deployment script in the database project to first drop those objects (see the sketch below).
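
A minimal sketch of such a pre-deployment script, assuming the example object names from this post (dim_abcdef and DeltaLocation); it drops the dependent external table first and then the data source, so the deployment can recreate them:
-- Pre-deployment sketch: drop objects that block altering the external data source
IF EXISTS (SELECT * FROM sys.external_tables WHERE name = 'dim_abcdef')
    DROP EXTERNAL TABLE [dbo].[dim_abcdef];

IF EXISTS (SELECT * FROM sys.external_data_sources WHERE name = 'DeltaLocation')
    DROP EXTERNAL DATA SOURCE [DeltaLocation];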

In a next post we will show how to give Power BI access to this Synapse Serverless SQL Pool database.

Thanks to colleagues Ralph Koumans and Bart van Es for helping to set this up.