Friday, 27 January 2023

Deploy a Power BI dataset via a DevOps pipeline - Publish

Case
I want to develop my Power BI dataset separately from my Power BI reports. So we started using Tabular Editor. This also allows us to work on the model with multiple people at the same time because every object is a separate JSON file. But how can we automatically deploy these files using Azure DevOps?
Deploy your Power BI datasets with DevOps

Solution
In this series of posts we will deploy the Tabular Editor project as a Power BI dataset via DevOps. We will deploy it to the development workspace. For the deployment from Dev to Test and from Test to Prod we will use Power BI Deployment pipelines.


In this third post we will create the YAML files that do the actual publishing of the datasets. The first file uses Tabular Editor to publish to the Development environment and the second file calls the Power BI deployment pipeline to publish to test/acceptance and to production. Both YAML files are stored in the repository under CICD\YAML\*****.yml.

Repository folder structure

1) DeployPBIModelDev.yml
The first YAML file is DeployPBIModelDev.yml, which takes the deployable *.bim files (the datasets) from the pipeline artifact and publishes them to the Development workspace with Tabular Editor. It is called as a template by the BuildModel.yml file from the previous post.

First we add the parameters to this YAML file. The values are passed in by the YAML file from the previous post.
parameters:
  - name: env
    displayName: Environment
    type: string
    values:
    - DEV
  - name: SpApplicationId
    displayName: Service Principal Application Id
    type: string
  - name: SpClientSecret
    displayName: Service Principal Client Secret
    type: string
  - name: SpTenantId
    displayName: Service Principal Tenant Id
    type: string
  - name: DownloadLinkTabularEditor
    displayName: Download Link Tabular Editor
    type: string

This part connects to an environment, to which you can add approval steps in DevOps under Pipelines - Environments.
jobs:
  - deployment: deploymentjob${{ parameters.env }}
    displayName: Deployment Job ${{ parameters.env }} 
    environment: Deploy to ${{ parameters.env }}
    strategy:
      runOnce:
        deploy:
          steps:
The first step is a checkout to get the files from the repository onto the agent. For this simple example it is not required, but later on you may want to execute PowerShell scripts that are stored in the repository, for example to connect the dataset to a gateway (see the sketch after this step).
          ###################################
          # 1 Check out repository to agent
          ###################################
          - checkout: self
            displayName: '1 Retrieve Repository'
            clean: true 
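As an illustration of such a script, here is a minimal sketch that binds a dataset to a gateway via the Power BI REST API. It assumes the MicrosoftPowerBIMgmt PowerShell module is installed on the agent; all ids are placeholders that you would pass in, for example as script arguments.

# Minimal sketch: bind a dataset to a gateway with the Power BI REST API
# (assumption: the MicrosoftPowerBIMgmt module is installed on the agent)
param($SpApplicationId, $SpClientSecret, $SpTenantId, $WorkspaceId, $DatasetId, $GatewayId)

# Log in to Power BI with the Service Principal
$SecureSecret = ConvertTo-SecureString $SpClientSecret -AsPlainText -Force
$Credential = New-Object System.Management.Automation.PSCredential($SpApplicationId, $SecureSecret)
Connect-PowerBIServiceAccount -ServicePrincipal -Credential $Credential -Tenant $SpTenantId

# Bind the dataset to the gateway (BindToGateway endpoint of the REST API)
$Body = '{ "gatewayObjectId": "' + $GatewayId + '" }'
Invoke-PowerBIRestMethod -Method Post -Url "groups/$WorkspaceId/datasets/$DatasetId/Default.BindToGateway" -Body $Body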
In the second step we will download and unzip Tabular Editor. If you have a self-hosted agent instead of a Microsoft-hosted agent, you could also install this software on the VM manually. However, this step allows you to easily upgrade to a newer version of the editor by just changing the URL in the variable group.
          ###################################
          # 2 Download Tabular Editor
          ###################################
          - powershell: |
              # Download URL for Tabular Editor portable:
              
              # Create Download folder
              $DownloadFolder = join-path (get-location) "TabularEditor"
              new-item -type directory -path $DownloadFolder -Force
              Write-Host "Create download location: $($DownloadFolder)" 

              # Download destination (root of PowerShell script execution path):
              $DownloadFile = join-path $DownloadFolder "TabularEditor.zip"
              Write-Host "Save download as: $($DownloadFile)"
              
              # Download from GitHub:
              Write-Host "Downloading Tabular Editor from: $(DownloadLinkTabularEditor)" 
              Invoke-WebRequest -Uri $(DownloadLinkTabularEditor) -OutFile $DownloadFile
              
              # Unzip Tabular Editor portable, and then delete the zip file:
              Write-host "Extracting downloaded zip file"
              Expand-Archive -Path $DownloadFile -DestinationPath $DownloadFolder
              Write-host "Clean up download file"
              Remove-Item $DownloadFile
            displayName: '2 Download Tabular Editor'
The third step is an optional debug step showing you the artifact and repository files on your agent, as well as the exact location of the Tabular Editor executable which we need in the next step. Once everything is running fine you can remove or comment out this step.
          ###################################
          # 3 Show treeview of agent
          ###################################
          - powershell: |
              tree "$(Pipeline.Workspace)" /F
            displayName: '3 Treeview of Pipeline.Workspace'
The fourth step is the only mandatory step. It executes Tabular Editor for each model on the agent. Check out all the arguments after TabularEditor.exe, where you will see the service principal credentials as well as several command-line switches of the tool itself.
          ###################################
          # 4 Publish data models
          ###################################
          - powershell: |
              # Get Artifact folder with model files
              $ModelsFolder = Join-Path -Path $(Pipeline.Workspace) -ChildPath "\PBIModelsArtifact\"

              # Loop through local artifact folder to get all datamodels
              $AllDatamodels = Get-ChildItem -Path $ModelsFolder -Recurse -Filter *.bim
              Write-Host "$($AllDatamodels.Count) model$(If ($AllDatamodels.Count -ne 1) {"s"}) found in $($ModelsFolder)"
              Write-Host "================================"
              foreach ($DataModel in $AllDatamodels)
              {
                $Path = "$($DataModel.Fullname)"
                Write-Host "Processing model $($Path)"

                # Extract datasetname and workspacename from folder path of model
                $DatasetName = $Path | Split-Path -Parent | Split-Path -Leaf
                $WorkspaceName = $Path | Split-Path -Parent | Split-Path -Parent | Split-Path -Leaf
                # Concat workspace name with the suffix used for the deployment workspace
                $WorkspaceName = "$($WorkspaceName)$(PowerBiWorkspaceSuffix)"
                  
                Write-Host "Publishing model [$($DatasetName)] in [$($WorkspaceName)]"
                # Publish model.bim to Power BI Workspace
              # The -P argument was removed from the command line, because we don't want to overwrite the table partitions in the model
                $(Build.SourcesDirectory)\TabularEditor\TabularEditor.exe "$($DataModel.Fullname)" -D "Provider=MSOLAP;Data Source=powerbi://api.powerbi.com/v1.0/myorg/$($WorkspaceName);User ID=app:${{ parameters.SpApplicationId }}@${{ parameters.SpTenantId }};Password=${{ parameters.SpClientSecret }};" "$($DatasetName)" -O -C -R -M -E -V
                Wait-Process -Name "TabularEditor"

                Write-Host "================================"
              }
            displayName: '4 Deploy data models'

This is all you need to deploy your model to a Power BI workspace. You could reuse this for other environments like test/acceptance and production as well. However, we will use the Power BI Deployment Pipelines for the next environments, which also allow you to change parameters and connections without writing code.

2) DeployPBIModelAccPrd.yml
By default the Power BI deployment pipeline deploys everything from the source workspace to the destination workspace. We only want to deploy datasets and leave all the other items to other pipelines. To be even more specific, we only want the datasets from the repository and want to leave any manually deployed datasets untouched. In the GUI you can make that selection manually; in the YAML code you can provide a semicolon-separated string with the names of the datasets you want to promote. For this we will create a variable, fill it with some PowerShell and then provide it as a parameter for the deployment task.

First we add the parameters to this YAML file. The values are passed in by the YAML file from the previous post. Note that the parameters differ from the YAML above. This is mainly because we can use a DevOps Service Connection instead of a Service Principal.
parameters:
  - name: env
    displayName: Environment
    type: string
    values:
    - ACC
    - PRD
  - name: DeployStagePowerBI
    displayName: DeployStagePowerBI
    type: string
    values:
    - Test
    - Production
  - name: ServiceConnection
    displayName: Service Connection
    type: string
  - name: ReposWorkspaceFolder
    displayName: ReposWorkspaceFolder
    type: string
  - name: DeploymentPipelinePBI
    displayName: Deployment Pipeline PBI
    type: string
This part connects to an environment, to which you can add approval steps in DevOps under Pipelines - Environments. Note that we created an empty string variable named Datamodels. We will use it for the list of datasets that we want to promote.
jobs:
  - deployment: deploymentjob${{ parameters.env }}
    displayName: Deployment Job ${{ parameters.env }} 
    environment: Deploy to ${{ parameters.env }}
    variables:
    - name: Datamodels
      value: ""

    strategy:
      runOnce:
        deploy:
          steps:
The first step is a checkout to get the files from the repository onto the agent. For this simple example it is not required, but it could be handy for a more complex pipeline where you want to use PowerShell scripts from your repository.
          ###################################
          # 1 Check out repository to agent
          ###################################
          - checkout: self
            displayName: '1 Retrieve Repository'
            clean: true 
The second step is also optional, but it shows the artifact (and repository) files on your agent, which we need in the next step. You can remove or comment out the code once everything works.
          ###################################
          # 2 Show environment and treeview
          ###################################
          - powershell: |
              Write-Output "Deploying PBI models in the ${{ parameters.env }} environment"
              tree "$(Pipeline.Workspace)" /F
            displayName: '2 Show environment and treeview Pipeline_Workspace'
In this step we loop through the artifact folder to get a list of all datasets. The code first creates an array of subfolders (each subfolder has the name of a dataset). Then the array is used to create a semicolon-separated string of dataset names. The last Write-Host line looks a bit strange, but it populates the YAML variable that we created above.
          ###################################
          # 3 Fill variable with list of models
          ###################################
          - powershell: |
              # Get Artifact folder with model files
              $WorkspacesFolder = Join-Path -Path $(Pipeline.Workspace) -ChildPath "\PBIModelsArtifact\Models\${{ parameters.ReposWorkspaceFolder }}"

              # Loop through local artifact folder to get all datamodels
              [array]$ModelsArray = Get-ChildItem -Path $WorkspacesFolder | ? {$_.PSIsContainer} | Select-Object Name
              Write-Host "$($ModelsArray.Count) model$(If ($ModelsArray.Count -ne 1) {"s"}) found in $($WorkspacesFolder)"

              [string]$ModelsList = $null
              $ModelsList = $ModelsArray.name -join ";"
              Write-Host "Value $($ModelsList)"
              Write-Host "##vso[task.setvariable variable=Datamodels;]$ModelsList"
            displayName: '3 Fill variable with list of Models'
For debugging purposes we show the variable with the semicolon-separated string of dataset names. You can remove this once everything works, or leave it and use it as log information for your pipeline.
          ###################################
          # 4 Show environment and treeview
          ###################################
          - powershell: |
              Write-Output "Variable value of Datamodels $(Datamodels)"
            displayName: '4 Show variable value'
The last step calls the deployment task from the installed DevOps add-on for Power BI. We use Selective for the deployType, which allows us to provide a list of datasets. You can use a similar construction for the reports, which we will show in a later blog post.
          ###################################
          # 5 Deploy models
          ###################################
          - task: ms-pbi-api.pbi-automation-tools.Pipelines_Deploy.DeploymentPipelines-Deploy@1
            displayName: '5 Deploy models to ${{ parameters.DeployStagePowerBI }} stage'
            inputs:
                pbiConnection: ${{ parameters.ServiceConnection }}
                pipeline: ${{ parameters.DeploymentPipelinePBI }}
                stageOrder: ${{ parameters.DeployStagePowerBI }}
                deployType: Selective
                datasets: $(Datamodels)
                createNewWS: false

Running it once for dev and once for test(/acc) will look like this. Note that some of the stages are skipped. This is due to the branch strategy and the conditions we used between the stages.
The result of 2 runs

In the runs overview it will look like this.

Conclusion
In this post we showed two different ways to deploy your Power BI datasets. The first, with Tabular Editor, can be used without a Power BI Premium license. A disadvantage is that changing parameters and connections takes a lot of work.

The second method, with the Power BI Deployment pipelines, requires Power BI Premium, but those pipelines allow you to change parameters and connections without code. A big disadvantage is that your datasets already need to be present in the development workspace of Power BI. This can be solved with a manual deployment or by combining both methods to get the best of both worlds, plus versioning of your work in a repository.



Thursday, 26 January 2023

Deploy a Power BI dataset via a DevOps pipeline - Build

Case
I want to develop my Power BI dataset separately from my Power BI reports. So we started using Tabular Editor. This also allows us to work on the model with multiple people at the same time because every object is a separate JSON file. But how can we automatically deploy these files using Azure DevOps?
Deploy your Power BI datasets with DevOps

Solution
In this series of posts we will deploy the Tabular Editor project as a Power BI dataset via DevOps. We will deploy it to the development workspace. For the deployment from Dev to Test and from Test to Prod we will use Power BI Deployment pipelines.


In this second post we will create the YAML file that builds the Tabular Editor project stored in the repository. The filename we used is BuildModel.yml and it is stored in the repository under CICD\YAML\BuildModel.yml. There are two additional YAML files in that same folder; those will be explained in the next post, but they are already referenced in the deployment stages below.

Repository folder structure

1) BuildModel.yml
The first YAML file is BuildModel.yml, which builds the JSON files of the Tabular Editor project into a deployable *.bim file (the dataset) and then calls the deployment YAML files to do the actual deployment.

First we add the general variable group (mentioned in the previous post) to the YAML file. This allows us to use those variables in the YAML and PowerShell code later on.
###################################
# General Variables
###################################
variables:
  - group: PBIModelParamsGen


Secondly we need to determine when to trigger this pipeline. This part of course depends heavily on your branch strategy. In this example it triggers on changes in multiple branches, but only for models in a certain Power BI workspace folder. Note that the path is hardcoded and that it can't use variables from your variable groups.
###################################
# When to create a pipeline run
###################################
trigger:
  branches:
    include:
    - Development
    - Acceptance
    - main
  paths:
    include:
    - Models/myPowerBIWorkspace/*


The next part describes the first stage of the pipeline, where we build the model and make it ready for publishing. The important property is pool, where you either configure a self-hosted agent (via name) or a Microsoft-hosted agent (via vmImage). If you are using a self-hosted agent then the clean: all setting helps you to clean up your agent before you start.
stages:
###################################
# Create Artifact of datamodels
###################################
- stage: CreateModelArtifact
  displayName: Create PBI Model Artifact

  jobs:
  - job: CreateModel
    displayName: 'Create Model'
    workspace:
      clean: all
    pool:
      name: mySelfHostedAgent
      # vmImage: 'windows-latest' #'ubuntu-latest'
    steps:


The first real task is retrieving all the files from the repository. After this step your datasets/models will be available on the agent itself. This is necessary for the build step.
    ###################################
    # 1 Retrieve Repository
    ###################################
    - checkout: self
      displayName: '1 Retrieve Repository'
      clean: true


The next task is downloading and unzipping the tool Tabular Editor. If you have a self-hosted agent then you could also choose to download/install this on your Virtual Machine. However, installing it via YAML makes it very easy to use a newer version just by changing the 'DownloadLinkTabularEditor' variable in the general variable group (see previous post). That variable should be filled with the download URL of the portable/zip file: https://github.com/TabularEditor/TabularEditor/releases/
    ###################################
    # 2 Download Tabular Editor
    ###################################
    - powershell: |
        # Download URL for Tabular Editor portable:
        
        # Create Download folder
        $DownloadFolder = join-path (get-location) "TabularEditor"
        new-item -type directory -path $DownloadFolder -Force
        Write-Host "Create download location: $($DownloadFolder)" 

        # Download destination (root of PowerShell script execution path):
        $DownloadFile = join-path $DownloadFolder "TabularEditor.zip"
        Write-Host "Save download as: $($DownloadFile)"
        
        # Download from GitHub:
        Write-Host "Downloading Tabular Editor from: $(DownloadLinkTabularEditor)" 
        Invoke-WebRequest -Uri $(DownloadLinkTabularEditor) -OutFile $DownloadFile
        
        # Unzip Tabular Editor portable, and then delete the zip file:
        Write-host "Extracting downloaded zip file"
        Expand-Archive -Path $DownloadFile -DestinationPath $DownloadFolder
        Write-host "Clean up download file"
        Remove-Item $DownloadFile
      displayName: '2 Download Tabular Editor'


To check whether the checkout step and the download/unzip step were successful, we have an optional step that shows a treeview of the agent. Once everything is working you can remove or comment out this step.
    ###################################
    # 3 Show treeview of agent
    ###################################
    - powershell: |
        tree "$(Pipeline.Workspace)" /F
      displayName: '3 Treeview of Pipeline.Workspace'


Now that we have successfully copied the models to the agent and downloaded Tabular Editor, we can build the JSON files into a BIM file which we can later on release as a dataset to a Power BI workspace.

This PowerShell step loops through all database.json files (the main project file) on the agent. It retrieves the subfolder and parent folder names so that they can be used as the dataset and workspace names, and then uses TabularEditor.exe to build the project. All BIM files are stored in the artifact staging folder.

    ###################################
    # 4 Build data models
    ###################################
    - powershell: |
        # Get repos folder with model files
        $ModelsFolder = Join-Path -Path $(Build.SourcesDirectory) -ChildPath "\Models\$(ReposWorkspaceFolder)\"
      
        # Loop through local repos folder to get all datamodels
        $AllDatamodels = Get-ChildItem -Path $ModelsFolder -Recurse -Filter database.json
        Write-Host "$($AllDatamodels.Count) model$(If ($AllDatamodels.Count -ne 1) {"s"}) found in $($ModelsFolder)`n"
        foreach ($DataModel in $AllDatamodels)
        {
          $Path = "$($DataModel.Fullname)"
          Write-Host "Building model $($Path)"

          # extract datasetname and workspacename from folder path of model
          $DatasetName = $Path | Split-Path -Parent | Split-Path -Leaf
          $WorkspaceName = $Path | Split-Path -Parent | Split-Path -Parent | Split-Path -Leaf

          # Create target path for staging folder
          $TargetFolder = Join-Path -Path $(Build.ArtifactStagingDirectory) -ChildPath "\Models\"
          $TargetFolder = Join-Path -Path $TargetFolder -ChildPath $WorkspaceName
          $TargetFolder = Join-Path -Path $TargetFolder -ChildPath $DatasetName

          Write-Host "Saving build model in $($TargetFolder)"
          # Build model into bim file by executing tabular editor
          $(Build.SourcesDirectory)\TabularEditor\TabularEditor.exe "$($DataModel.Fullname)" -B "$($TargetFolder)\Model.bim"
          Wait-Process -Name "TabularEditor"
        }
      displayName: '4 Build data models'


This is again the same treeview step, to check the result of the build. In this case it shows all files and subfolders of the artifact staging folder. You can remove or comment out this step once everything works.
    ###################################
    # 5 Show treeview of agent
    ###################################
    - powershell: |
        tree "$(Build.ArtifactStagingDirectory)" /F
      displayName: '5 Treeview of Build.ArtifactStagingDirectory'


The last step of the build stage is creating an artifact of all (BIM) files in the artifact staging folder. This artifact will automatically be downloaded to the agent in the next stage.
    ###################################
    # 6 Publish artifact
    ###################################
    - task: PublishPipelineArtifact@1
      displayName: '6 Publish PBI models as artifact'
      inputs:
        targetPath: $(Build.ArtifactStagingDirectory)
        artifact: 'PBIModelsArtifact'
        publishLocation: 'pipeline'


Now it's time to do the actual deployment. We have three extra stages (Deploy to Dev, Acc and Prd), but we use a condition on each of them. If a change happened in the Acceptance branch then we only execute the Deploy to Acc stage and skip the rest of the stages. In the pipeline runs overview it will look like this:

Stages of pipeline runs

In this example the deployment method for Development is a little PowerShell script, while for Acceptance and Production we use the Power BI Deployment Pipeline. Therefore those stages have different parameters. The details of those YAML files will be explained in the next blog post.
###################################
# Deploy Dev environment
###################################
- stage: DeployDev
  displayName: Deploy DEV
  variables:
    - group: PBIModelParamsDev
  pool:
    name: mySelfHostedAgent
    #vmImage: 'windows-latest'
  condition: eq(variables['Build.SourceBranchName'], 'Development')
  jobs:
    - template: DeployPBIModelDev.yml
      parameters:
        env: DEV
        SpApplicationId: $(SpApplicationId)
        SpClientSecret: $(SpClientSecret)
        SpTenantId: $(SpTenantId)
        DownloadLinkTabularEditor: $(DownloadLinkTabularEditor)
        
###################################
# Deploy Acc environment
###################################
- stage: DeployAcc
  displayName: Deploy ACC
  variables:
    - group: PBIModelParamsAcc
  pool:
    name: mySelfHostedAgent
    #vmImage: 'windows-latest'
  condition: eq(variables['Build.SourceBranchName'], 'Acceptance')
  jobs:
    - template: DeployPBIModelAccPrd.yml
      parameters:
        env: ACC
        DeployStagePowerBI: Test # PBI doesn't use Acceptance
        ServiceConnection: 'PowerBI'
        ReposWorkspaceFolder: $(ReposWorkspaceFolder)
        DeploymentPipelinePBI: $(DeploymentPipelinePBI)


###################################
# Deploy Prd environment
###################################
- stage: DeployPrd
  displayName: Deploy PRD
  variables:
    - group: PBIModelParamsPrd
  pool:
    name: mySelfHostedAgent
    #vmImage: 'windows-latest'
  condition: eq(variables['Build.SourceBranchName'], 'main')
  dependsOn: CreateModelArtifact
  jobs:
    - template: DeployPBIModelAccPrd.yml
      parameters:
        env: PRD
        DeployStagePowerBI: Production
        ServiceConnection: 'PowerBI'
        ReposWorkspaceFolder: $(ReposWorkspaceFolder)
        DeploymentPipelinePBI: $(DeploymentPipelinePBI)
Conclusion
In this post we showed how to build a Tabular Editor project with JSON files into one BIM file, which we can deploy to the Development workspace in Power BI. You could do the same for the other environments, but in this example we will use the Power BI Deployment pipeline for Test and Production, as you will see in the next post.

Deploy a Power BI dataset via a DevOps pipeline - Setup

Case
I want to develop my Power BI dataset separately from my Power BI reports. So we started using Tabular Editor. This also allows us to work on the model with multiple people at the same time because every object is a separate JSON file. But how can we automatically deploy these files using Azure DevOps?
Deploy your Power BI datasets with DevOps

Solution
In this series of posts we will deploy the Tabular Editor project as a Power BI dataset via DevOps. We will deploy it to the development workspace. For the deployment from Dev to Test and from Test to Prod we will use Power BI Deployment pipelines.


In this first post we are setting up the project in Azure DevOps as a preparation for publishing Power BI datasets. We will show you
  • the folder structure of the repository
  • which add-on is handy in combination with Power BI Premium
  • how to authorize the Service Principal and create a Service Connection
  • which variables we use in the variable groups


1) Repository
You need to store the JSON files from your Tabular Editor project in a DevOps (or GitHub) repository. For this example we created the following folder structure, where we separated the models from the reports. Within the Models folder we have one or more subfolders for each PBI workspace we want to target, and then another subfolder level for each dataset/project (see the sketch below).
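In plain text the structure looks roughly like this. Note that this is an illustrative sketch: the workspace folder "Cloud Team" comes from the example used in this series, while the dataset names are made up.

CICD
└── YAML
    ├── BuildModel.yml
    ├── DeployPBIModelDev.yml
    └── DeployPBIModelAccPrd.yml
Models
└── Cloud Team
    ├── Sales
    │   └── database.json  (plus a separate JSON file per object)
    └── Finance
        └── database.json
Reports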
Repository folder structure

2) Power BI Automation tools add-on
We will use Tabular Editor to deploy from the repository to the Dev workspace in Power BI, but for this example we will use Power BI Deployment pipelines to deploy from Dev to Tst(/Acc) and then to Prd. This is optional, because you need Power BI Premium for this, but it saves you a lot of time writing and maintaining code.

To call the Power BI Deployment pipelines from within Azure DevOps we need a (free) add-on from Microsoft called Power BI automation tools. Requesting to add the extension is just a few clicks, but for approving the installation of the extension you need sufficient rights at the organization level in DevOps.
Power BI automation tools add-on for DevOps

3) Service Principal
To do the release of the dataset to the Power BI workspace we need an Azure Active Directory Service Principal (SP).
  1. This SP needs to be in an Azure Active Directory group that will be used within the Tenant Settings of Power BI to allow the usage of the Power BI REST APIs.
  2. This SP needs to get Contributor access in the workspace to do the deployment of the dataset.
  3. (optional) This SP needs to get access to a Power BI Deployment pipeline (= Power BI Premium).
3a) Allow service principal to use Power BI APIs
For this first step you need access to the Power BI admin portal to give the Service Principal access to the REST APIs of Power BI.
  • Log in to Power BI online and click on the gear icon in the upper right corner
  • Click on Admin portal in the menu to open the portal
  • Then click on Tenant settings (if you don't see this, see the note further below)
  • Scroll down to the Developer settings and locate 'Allow service principal to use Power BI APIs'
Either allow the entire organization to use the REST APIs (not recommended) or specify an Azure Active Directory group and make sure your Service Principal is a member of that group.
PBI Admin portal - Tenant Settings

If you couldn't find the tenant settings in the Admin portal of Power BI, then you need a Power BI administrator role. Your Office 365 administrator can either give you the role or do the steps above for you. If you are able to get this role, then you should probably use Azure AD Privileged Identity Management (PIM) to first get approval from a colleague to temporarily activate this role, and also use the PBI audit log to record all logins to the admin portal and all changes in the tenant settings.
Not a PBI admin

3b) Give permissions in Power BI Workspace
The next step is to give your Service Principal Contributor rights in the PBI workspaces where you want to deploy the datasets (if you prefer scripting, see the sketch after the list of steps).
  • Go to your Power BI workspace and click on Access in the upper right corner
  • Search for your Service Principal by its name (ours is called PowerBI)
  • Select the right permission (contributor) and click on the Add button
  • Verify the changed list of permissions and then click on the Close button
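These GUI steps can also be scripted. Below is a minimal sketch using the MicrosoftPowerBIMgmt PowerShell module, assuming that module is installed and that you log in as a workspace admin; the workspace name and the Service Principal identifier are placeholders.

# Minimal sketch: grant the Service Principal Contributor rights in a workspace
# (assumption: the MicrosoftPowerBIMgmt module is installed)
Connect-PowerBIServiceAccount   # interactive login as a workspace admin

# Look up the workspace by name and add the Service Principal as Contributor
$Workspace = Get-PowerBIWorkspace -Name "Cloud Team - dev"
Add-PowerBIWorkspaceUser -Id $Workspace.Id -PrincipalType App -Identifier "<ServicePrincipalId>" -AccessRight Contributor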

3c) Give permissions to Deployment pipeline
The Power BI Deployment Pipelines are a Premium feature to release Power BI components from a Development workspace to a Test workspace and then from Test to Production. These pipelines save you from writing (and maintaining) a lot of PowerShell code for Azure DevOps pipelines, but they also have two major disadvantages: they don't use a repository to store versions of your work and they can't deploy to the Development workspace. This is where the combination of DevOps and Power BI can help.

For this you need to create a Power BI Deployment Pipeline, connect it to your PBI workspaces (dev, tst & prd) and finally give the Service Principal from above access to your newly created Deployment pipeline. At the moment the only available role is Admin (this can also be scripted, see the sketch below).
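A minimal sketch of that script, using the Update Pipeline User call of the Power BI REST API. It assumes the MicrosoftPowerBIMgmt module is installed and you log in with admin access to the pipeline; the pipeline id and the Service Principal identifier are placeholders.

# Minimal sketch: give the Service Principal Admin access to a deployment pipeline
# (assumption: the MicrosoftPowerBIMgmt module is installed)
Connect-PowerBIServiceAccount   # interactive login as a pipeline admin

# Call the Update Pipeline User endpoint of the Power BI REST API
$Body = '{ "identifier": "<ServicePrincipalId>", "principalType": "App", "accessRight": "Admin" }'
Invoke-PowerBIRestMethod -Method Post -Url "pipelines/<PipelineId>/users" -Body $Body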


4) Service Connection Power BI
Create an Azure DevOps Service Connection. The type should be a Power BI Service Connection, which is only available once the add-on from step 2 is installed. Use the Service Principal from step 3 for this.
  • Go to the Project Settings of your DevOps project (bottom left)
  • Find Service connections under Pipelines
  • Click New service connection (upper right)
  • Choose Power BI and click next
  • Fill in the details of your Service Principal of step 3
Power BI Service Connection (via add-on)

5) Library - Variable groups
Before we start writing the YAML (and some PowerShell) code we need to create some variable groups under Pipelines - Library. We have a general variable group with a couple of variables that don't vary per environment, like the download URL for Tabular Editor or the SP properties. We also have three variable groups (one for each environment) that contain variables that do have different values per environment.
Create Variable groups (1 general and 3 for environments)

General Variable Group
  • DeploymentPipelinePBI
    The name of your Power BI Deployment Pipeline for publishing from Dev to Tst and from Tst to Prd
  • DownloadLinkTabularEditor
    The URL of the zipped portable version of Tabular Editor. See all versions at https://github.com/TabularEditor/TabularEditor/releases/
  • ServiceConnection
    The name of the Azure DevOps Service Connection created in step 4
  • SpApplicationId
    The application id of the SP from step 3
  • SpClientSecret
    The client secret of the SP from step 3 (change type to secret)
  • SpTenantId
    The tenant id of your Azure Active Directory. You can also find this in Power BI online: open the ?-menu, click About Power BI, and in the popup you will see the Tenant URL with the ID at the end
  • ReposWorkspaceFolder
    This contains the name of the workspace folder in the repository ("Cloud Team" in the repository picture) and it should also be the (start of the) workspace name in Power BI. We will add a suffix behind it to distinguish dev from tst and prd. So in the repository picture you see "Cloud Team" and for Development the actual workspace name will be "Cloud Team - dev" (see the next variable group)
Development Variable Group
  • PowerBiWorkspaceSuffix
    The repository contains the name of the workspace, but for this example we added a suffix after it to distinguish dev from tst and prd
If you don't want to use the Power BI Deployment Pipelines then just clone this variable group for test/acceptance and production. If you do want to use the Power BI Deployment Pipelines then you don't need variable groups for test/acceptance and production.

Conclusion
In this first post we did all the preparation work for DevOps and for the Service Principal. In the next blog post we will build the project and make it available as an artifact in DevOps to release it to the Development workspace in Power BI. In the third post we will show two different deployment methods.