
Monday 25 September 2017

Power BI Snack : Drillthrough in Power BI

Case
In SQL Server Reporting Services (SSRS) we are used to working with subreports and drillthrough, but in Power BI this was not possible until the September release. How does it work?

Drillthrough in Power BI

Solution
As said, we use this functionality a lot in SSRS, for example when a report serves multiple audiences. The managing board or senior management is interested in total sales per year/month, while an operational manager is probably interested in the same numbers, but per week/day, per location and with more details about the products sold.

To build a drillthrough report, we created a Power BI report based on Microsoft's sample database. You can download the WideWorldImporters database and more here. Our first report is the main report, where we show the total sales and profit per month and per state. You can also filter this data per year.

1) Configure the drillthrough filter
After creating the main report, we want to show more details about the sales and profit per state. For this we have built another report (subreport) with a number of graphs and one table for the details. This table shows the sales and profit per city. Next, we add the Drillthrough filter to the subreport. Go to Filters, where Drillthrough filters now appears as an option. Add the appropriate column here, in this case 'State Province'.

Power BI - Drillthrough filter

Once you have added the filter, an arrow icon automatically appears in the upper left corner of the "subreport". By clicking this arrow, you return to the main report. You can customize this arrow to match your desired layout.

Power BI - Back to main report navigator (default and customized)

2) Result
Now we can use the drillthrough functionality in the main report. Go to the table and right-click on a state, for example 'Alabama'. At the bottom, select Drillthrough and the subreport Sales per State. The subreport now opens with all sales data for the state Alabama. When you want to return to the main report, click the blue arrow in the upper left corner.

Unfortunately you can only filter the subreport with the drillthrough filter. So when you have multiple report filters in your report, for example 'Calendar Year', the subreport will not filter on year. As soon as this becomes possible, the other subreport called 'Sales per State (future)' will be a better solution.

You can download the Power BI report here.

Power BI - Drillthrough to your subreport

Conclusion
Power BI has taken a good step towards more interaction between different reports, and hopefully this is just the beginning. There are still some improvements to be made, for example the ability to pass all selected report filters in the main report through to the subreport. For now only the selected drillthrough filter is passed.

You can vote on this idea here. Of course, we already voted.

Tuesday 29 August 2017

Azure - Continue with Azure Data Lake for Big Data


Case
In an earlier post we showed you how to transform sensor data using Azure Data Lake. Many companies are gathering (or already have) a lot of Big Data in many different files. How can we use Azure Data Lake Analytics (ADLA) to handle these files?

Big Data and U-SQL

Solution
Just like in the previous post, the sensor data is already stored in an Azure Data Lake Store (ADLS). Next, we build and configure a U-SQL Job. U-SQL is Microsoft's new Big Data query language that you can use in ADLA. Last time we developed in the Azure Portal, but there are other options. Last month, Microsoft released a Visual Studio plug-in for Azure Data Lake and Stream Analytics. This allows you, while writing U-SQL queries, to use other benefits of Visual Studio like Team Foundation Server (TFS), debugging and adding C# code for custom inputs and outputs.

In this case we have sensor data from one year. The data is stored in several files: one file per day. We want to create a U-SQL job that aggregates the data per day and then stores all the data. For now we focus on the query itself. See here how to create an ADLA service/account and to create a new U-SQL Job.

1) Install plug-in for Visual Studio
First we have to download and install the plug-in Microsoft Azure Data Lake and Stream Analytics Tools for Visual Studio. You can download the plug-in here. Besides creating and debugging U-SQL scripts, you can also build queries for Azure Stream Analytics jobs with this plug-in.

2) Write the Query
Open Visual Studio and create a new U-SQL project. Our U-SQL script is called 'multipleFiles'. The starting point is the query we made in an earlier post, which extracts a single sensor file.

Variables
Because we have multiple files, we create a dynamic FROM clause using variables, in this case based on the folder path in ADLS. We use the following syntax for this: "bitools_sample_data_{*}.csv". The wildcard picks up every file of the year (see the comment in the query below for the input file naming convention). We also skip the first row, which contains the headers.

// File naming convention: bitools_sample_data_01-01-2016.csv, bitools_sample_data_01-02-2016.csv etc.
// Create variable for input files
DECLARE @folderInput string = "/SensorData/Input/";
DECLARE @inputString string = @folderInput + "bitools_sample_data_{*}.csv";

Extract
To retrieve the data from the files, we use an EXTRACT statement. In an earlier post, we extracted all the data as strings. Now we extract the 'time' column as a DateTime (just like the source file), using the variable we created earlier in the FROM clause.

// Extract the sensor data from CSV file (skip the header)
@sensorData = 
    EXTRACT 
        [time]                    DateTime
    ,   [dsplid]                  string
    ,   [dspl]                    string
    ,   [temp]                    string
    ,   [hmdt]                    string
    ,   [status]                  string
    ,   [location]                string
    ,   [EventProcessedUtcTime]   string
    ,   [PartitionId]             string
    ,   [EventEnqueuedUtcTime]    string
    FROM @inputString
    USING Extractors.Csv(skipFirstNRows:1);

Transformation
Next we aggregate the data into averages based on the 'time' and 'location' columns, using a SELECT statement. We convert the 'time' column to a date format, because we want to aggregate per day, and we give the columns suitable names. You may have noticed that we do not select all the columns, because we do not need every column from the source file.

// Aggregate the sensor data (average per location) and data type conversions
@result =
    SELECT
        time.ToString("yyyy-MM-dd") AS Date
    ,   AVG(Convert.ToInt32([temp])) AS Temperature
    ,   AVG(Convert.ToInt32([hmdt])) AS Humidity
    ,   [location] AS Location
    FROM @sensorData
    GROUP BY
        time.ToString("yyyy-MM-dd")
    ,   [location];

Output
Finally, we save the data in a new CSV file. In the OUTPUT statement, you can also add an ORDER BY clause. We want the header back in our output data and therefore we use 'outputHeader'.

// Save the sensor data to a new CSV file
OUTPUT @result
TO "/SensorData/Output/bitools_sample_data_AveragePerDayPerLocation.csv"
ORDER BY 
    [Location] ASC
USING Outputters.Csv(outputHeader : true, quoting:false);

See below a screenshot of the full query in Visual Studio.

Visual Studio - U-SQL script

3) Run the Job
When you have built the query, click 'Submit' and the Job View screen automatically appears. This is similar to the Job Details in the Azure Portal that we used earlier, but when you look closely, you see that Visual Studio offers more information than the portal, for example more details in the 'Job Summary' and error details.


Visual Studio - Run U-SQL script

Error details
When you have an error in the U-SQL query, you can often see the details of this error directly in the 'Job View' screen. In case of a Vertex user code error, however, you do not immediately see the error details on this screen. If you want to see the details of this error, scroll down in the 'Job Summary' and click on 'Resources'. Then choose 'Profile' and search for the keyword 'jobError'. This row contains the details of the error.

Visual Studio - U-SQL Query error details

4) Result
Now go to the Azure portal and to your Azure Data Lake Store. Open the new file in 'Data Explorer'. Our output file is located in the folder 'SensorData' and then 'Output'. The result should look like this:


Azure Portal - View result in Data Lake Store

Conclusion
In this post we went deeper into building a U-SQL script using Visual Studio. In our opinion, you should develop as much as possible in Visual Studio, because we all know the benefits of this tool, such as TFS integration and debugging.

Sunday 23 July 2017

Azure - Use Azure Data Lake for Big Data

Case
We have collected sensor data and we want to use this in a Data Warehouse (DWH). Because we do not want to store raw data in our database, we need to process the data first. How can we accomplish this with Azure Data Lake?


About Azure Data Lake

Solution
We use Azure Data Lake Store (ADLS) to store the sensor data. As we know from an earlier post, ADLS is extremely suitable for storing unstructured data, and we showed an example of how you can store this sensor data. In this first example the data, one file for each day, is already stored in ADLS. You can download the file here. Next we are going to aggregate this data per day and create a file that is ready to load.

To accomplish this we will use another feature of Azure Data Lake, called Azure Data Lake Analytics (ADLA). With this and Data Lake Store, Microsoft offers new features similar to Apache Hadoop to deal with petabytes of Big Data. The advantage of Data Lake Analytics is that it supports Hadoop, but it also introduces a language similar to T-SQL, called U-SQL. This is Microsoft's new Big Data query language: essentially T-SQL with a touch of C# to add even more features to the language. Click here for more details about this.

Note:
If you don't have sensor data ready, you can download a sensor generator for free. This 'SensorEventGenerator' can be found here. In addition, you have to create an Event Hub / IoT Hub to send this sensor data to Azure. Click here for more information. In our case we have generated our own data with a similar program and uploaded it to ADLS.


1) Create Data Lake Analytics
First we have to create a new Data Lake Analytics account. We give the account a suitable name and choose the same resource group as the one the ADLS uses. As a last step we need to choose the Data Lake Store, in our case 'bitoolsadls', where the sensor files are stored.

Azure Portal - Create Data Lake Analytics

Important to know is the pricing. The creation of the service is free, even while it is running; you only start paying when you run U-SQL Jobs. You pay for computing power, measured in Analytics Units (AUs). More information about pricing here. For example, if we want to run a job for 24 hours with 1 AU (and that suffices), this results in the following costs:

Data Lake Analytics Cost Indication
Depending on the amount of data, the number of Analytics Units (AUs) may need to be increased.
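As a rough illustration (the rate below is only an assumption for this example; check the official pricing page for the actual rate in your region): 24 hours x 1 AU x roughly $2 per AU-hour comes to about $48 for that job. Since you pay for the AU-hours you consume, adding AUs raises the hourly cost but can shorten the runtime, so the total is driven by the amount of work rather than by the number of AUs alone.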

2) Create a new Job
Now that everything is set up, we can create a new U-SQL Job. Go to the Data Lake Analytics account you just created and click on 'New Job'. Give it a suitable name. Notice that you have two other options to change: Priority and AUs. With 'Priority' you determine the importance of the job; for example, a job with Priority 1 will always start first. The other option is the number of Analytics Units (as explained earlier). Increasing the AUs will also give you a cost indication. For now we leave both on their defaults, because it is our only job.

3) Write a Query
Let's get started with the query. As mentioned before, it is similar to T-SQL: you can write SELECT statements with the familiar FROM and WHERE clauses and transfer the data to a new location. In addition, you use C# for data type conversions and, for example, to determine today's date. In this case we retrieve the raw data, aggregate it and store it in a new CSV file in a new ADLS folder.

To retrieve the data from the file, use EXTRACT and FROM. You can also use variables in the FROM clause to avoid hard-coded paths, but for now we fill in the file path hard-coded. We do not need to fill in the Data Lake name itself, because the default is our ADLS account. We also have to use the USING clause to specify the extract format, in this case a CSV file. More details about this statement here. Important to know is that we specify that the first row must be skipped, because that's the header. The result of this query is stored in the variable '@sensorData'.

Next we want to aggregate our data, because the granularity of the raw data is per second. We take the averages of temperature (temp) and humidity (hmdt) per day and per location. For this we need a SELECT statement on the '@sensorData' variable. Besides the aggregations, we do some data type conversions, because extracting the data as strings is currently the preferred way. We also use GROUP BY to divide the result into groups (per day and per location). You may have noticed that we do not select all the columns, because we do not need every column from the source file.

Once we have retrieved and transformed the data, we want to save it in a new file. For this we use the OUTPUT statement. Define the folder path; it will be created automatically together with the new file. Because we are saving the new file in CSV format as well, we use the matching CSV outputter in the USING clause, except now without the 'skipFirstNRows' parameter. Every time you start the job, it will overwrite the file at the destination.

The query:

// Extract the sensor data from CSV file (skip the header)
@sensorData = 
    EXTRACT 
        [time]                    string
    ,   [dsplid]                  string
    ,   [dspl]                    string
    ,   [temp]                    string
    ,   [hmdt]                    string
    ,   [status]                  string
    ,   [location]                string
    ,   [EventProcessedUtcTime]   string
    ,   [PartitionId]             string
    ,   [EventEnqueuedUtcTime]    string
    FROM "/SensorData/Input/bitools_sample_data_01-01-2016.csv"
    USING Extractors.Csv(skipFirstNRows:1);

// Aggregate the sensor data (average per location) and data type conversions
@result =
    SELECT 
        DateTime.ParseExact([time], "yyyy-MM-dd HH:mm:ss", null).Date AS Date
    ,   AVG(Convert.ToInt32([temp])) AS Temperature
    ,   AVG(Convert.ToInt32([hmdt])) AS Humidity
    ,   [location] AS Location
    FROM @sensorData
    GROUP BY
        DateTime.ParseExact([time], "yyyy-MM-dd HH:mm:ss", null).Date
    ,   [location];

// Save the sensor data to a new CSV file
OUTPUT @result
TO "/SensorData/Output/bitools_sample_data_01-01-2016_AveragePerDayPerLocation.csv"
USING Outputters.Csv(outputHeader : true, quoting:false);

Note 1:
The statements EXTRACT and OUTPUT use absolute or relative file paths. That's why we cannot use SELECT for retrieving the data. We also use the C# syntax (//) to add comments. Click here and here for more information about the U-SQL language.

Note 2:
PolyBase does not handle column headers that well. If you want to read this output file with PolyBase, you could consider removing "outputHeader : true" from the OUTPUT part of the query.
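In that case the OUTPUT statement from the query above would look like the sketch below: identical, only without the 'outputHeader' option.

// Save the sensor data to a new CSV file without a header row (easier to read with PolyBase)
OUTPUT @result
TO "/SensorData/Output/bitools_sample_data_01-01-2016_AveragePerDayPerLocation.csv"
USING Outputters.Csv(quoting:false);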


Azure Portal - Create the U-SQL Job

4) Run the Job
When the query is done, click on 'Submit Job'. A new screen will appear where we can monitor the running job. When the job has successfully passed the 'Finalizing' phase, we can preview the output file. Notice that the new file has 10 rows: one row with the average temperature and humidity per location.

Azure Portal - Run the U-SQL Job

Finally, let's see if the file is stored in the Azure Data Lake Store. Go to your ADLS and click on 'Data Explorer'. Find your output folder and there it is!

Azure Portal - See result in Data Lake Store

Summary
We showed you how Azure Data Lake is suitable for storing and transforming Big Data, in this case sensor data. Of course, there are more ways to put this together using the Cortana Intelligence Suite, for example Stream Analytics.

In this post we used one single file, but companies often have hundreds or thousands of files. In a next post we will show you how to handle multiple input files in Azure Data Lake Analytics.

Sunday 21 May 2017

Azure - Setting Up Azure Analysis Service (AAS)

Case
I want my BI Semantic Layer in the Cloud. How can I use Azure for this?

Azure Analysis Services (AAS) as your BI Semantic Layer

Solution
On-premises we, as Microsoft BI specialists, use SQL Server Analysis Services (SSAS) for this, but there is now an Analysis Service in Azure as well, called Azure Analysis Services (AAS). Just like with SSAS, you can choose between multiple data sources (both on-premises and cloud) and use all the familiar tools for presenting your data, like SQL Server Reporting Services (SSRS) and Power BI.

In this case, I will use an on-premises SQL Server. On this local server we have a database called World Wide Importers, Microsoft's new sample database that replaces the 'good old' AdventureWorks. Because we are building an analysis model, I choose the database WorldWideImportersDW. This is the full sample database for OLAP (OnLine Analytical Processing). You can download this database and more here.

1) Create an Azure Analysis Service server
Go to your Azure portal and search for Analysis Services. Give your server a suitable name and choose a resource group or create a new one. For now, we are setting up a Developer server (D1). This tier is very suitable for development and demo scenarios. More pricing details and feature differences between the tiers (Developer, Basic and Standard) here. Fill in the Administrator, in this case the credentials of my company account (part of Azure AD).

We also create a new Storage account. We choose Locally-redundant storage (LRS). More information about the different storage options can be found here.

Azure - Create the Azure Analysis Server

2) Create an Analysis Services Tabular Project
In Visual Studio, create a new Analysis Services Tabular Project. Fill in the Azure Analysis Services (AAS) server that you created earlier, in combination with compatibility level SQL Server 2016 RTM (1200), which is supported by AAS. More information about compatibility levels for Analysis Services Tabular models here. Finally click on Test Connection and sign in with your credentials (which also have access to the World Wide Importers database). Now we can build our Tabular model.

Visual Studio - Create Tabular project

In this case we build a small model for testing purposes. We import the fact table Order and the associated dimensions: City, Customer, Date, Employee and Stock Item. We give the tables appropriate names.

Visual Studio - Build the model

'On-premises data gateway'
Make sure you have installed the on-premises data gateway. Click here for more information.

3) Deploy the Tabular model
Now that we have created the model, we can deploy it to the AAS server. This works exactly the same as building and deploying a Tabular model for an on-premises Analysis Server: right-click on your Tabular project and deploy.

Visual Studio - Deploy the model

Result
Let's see if we can find the model. To do this, connect to the AAS server in SQL Server Management Studio (SSMS). Fill in the server (asazure://westeurope.asazure.windows.net/bitools) and choose Active Directory Password Authentication. For now we use the same credentials as the Analysis Services admin (which you filled in when creating the AAS server). Once connected, we can find our dimension and fact tables under 'bitools AAS', great!

SSMS - Connect to the model

Note:
Of course you can manage your AAS server in the portal and, for example, give other users access to the server. This will be part of a future post.

Summary
For this blog we have set up a small Tabular model in the cloud. It is not difficult (it works the same as developing an on-premises Tabular model), but you need to install a gateway for on-premises data, and the big difference is that your AAS (Azure Analysis Services) server is now hosted in Azure. Good to know is that you can also turn AAS on and off with PowerShell. Click here for more information.

Future AAS posts will show how to connect to AAS with Excel and Power BI or how to add additional users.

Friday 28 April 2017

Azure Snack - AAS and On-Premise Data

Case
You are building a Tabular model in combination with Azure Analysis Services (AAS) and you want to use an on-premises SQL database as a source, but you are getting the following error:

Visual Studio - Error while importing the data

Error message:
Failed to save modifications to the server. Error returned: 'On-Premise Gateway is required to access the data source and the gateway is not installed for the server bitools.

How do you fix this error?

Solution
As the error message indicates, we need a gateway to use data from an on-premise data source. This On-Premise Gateway can be downloaded here. In this case we install the gateway for our AAS server. Important to know is that you are installing a gateway for a specific AAS server. For example, if you have an Azure subscription with two servers that connect to on-premises data sources, the gateway must be installed on two separate computers in your (organization) network.

Run the setup, choose an installation folder/path and install the gateway. After installation, you must configure the gateway for your AAS server. Sign in to Azure and fill in your server name. Now you are ready to go!

Azure - Install on-premises data gateway

Note:
Perhaps you saw that I had to update my gateway; this is because I had earlier installed a gateway for Power BI. The gateways are the same, but for AAS the configuration is different. More information about this gateway here.

Result
Go back to your Tabular model and try to import the data again. It works!

Visual Studio - Importing the data succeeded

Note:
If it doesn't work the first time, restart Visual Studio and open your solution again. The connection to the AAS server may be lost after inactivity.

Common Errors
A common installation error of the gateway is the following:

The server you have provided does not exist or you are not an administrator of the server. 

{"code":"NotFound","subCode":0,"message":"Server 'ssastest' is not found.","timeStamp":"2017-05-15T10:07:28.1324395Z","httpStatusCode":404,"details":[{"code":"RootActivityId","message":"b49c33e2-7e45-4e67-98f0-ab86faf21c12"},{"code":"Param1","message":"ssastest"}]} 

Or the following error:

The server you have provided does not exist or you are not an administrator of the server. 

{"code":"Unauthorized","subCode":0,"message":"Either server 'asazure://westeurope.asazure.windows.net/ssastest' does not exist or user is not the administrator of 'asazure://westeurope.asazure.windows.net/ssastest'","timeStamp":"2017-05-15T09:57:48.7294661Z","httpStatusCode":400,"details":[{"code":"RootActivityId","message":"a029ee64-56b7-4f85-96da-bb2f78c8eba6"},{"code":"Param1","message":"asazure://westeurope.asazure.windows.net/ssastest"}]} 

Based on this, we have two types of errors: the AAS server is not found or you have no access to the server. See the screenshots below for more information (possible fixes).

On-premise gateway - AAS server not found


On-premise gateway - No access to AAS server

Saturday 25 February 2017

Azure - Understanding Stream Analytics Data Lake Store Destination

Case
How do you use Azure Data Lake Store to store Stream Analytics data and why would you do that?

Solution
In an earlier post we set up a Stream Analytics Job where sensor data is sent to a Blob file. In this case we create a new Stream Analytics Job with Data Lake Store as output and the sensor data as input. Although the Input and Output are different, the query stays the same as in other Stream Analytics Jobs. You can find the configuration of the Stream Analytics Job query here.
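For reference, such a Stream Analytics query could look something like the sketch below. The input and output aliases ('iothubinput' and 'datalakeoutput') are assumptions for this example; use the names you configure in the Input and Output steps further on. The 10 second window matches the aggregation per 10 seconds mentioned in the result of this post.

SELECT
    dspl AS SensorName
,   AVG(CAST(temp AS float)) AS AvgTemperature
,   AVG(CAST(hmdt AS float)) AS AvgHumidity
INTO
    [datalakeoutput]
FROM
    [iothubinput] TIMESTAMP BY time
GROUP BY
    dspl
,   TumblingWindow(second, 10)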

A Data Lake Store is a scalable repository optimized for storing IoT data, log files and other large datasets for Big Data scenarios. It contains folders and these folders contain the files (data). As said before, you can also store (sensor) data using Blob storage. These two storage options are similar, but there are some important differences, which we will explain later in this post.

Overview Data Lake Store with different types of data

1) Create a Data Lake Store
First we have to create the Data Lake Store. Click on New (the + icon) in your portal and under the Storage category you will find Data Lake Store. Give it a suitable name. Next we choose the Resource Group we created earlier when setting up the IoT Hub. The benefit of choosing the same group is, among other things, that the rights are the same. At this time there are not many options available for Location and Pricing, so we keep the defaults.

Azure Portal - Create Data Lake Store

2) Create the Stream Analytics Job
Before creating this new job, we already added a new consumer group to our IoT Hub, called 'DataLake'. Using multiple consumer groups makes it possible for several consumer applications to read data from this IoT Hub independently. Click here to see where you can add/manage consumer group(s).

Note:
The Azure Portal is still in development, so adding a new consumer group is now at a different place than in an earlier post. See screenshot below.

Azure Portal - Adding Consumer Groups

Now we can create the job. We choose the same Resource Group as the IoT Hub and Storage account, just like the creation of the Data Lake. Our Location is the Netherlands, so we choose West-Europe.

Azure Portal - Create Stream Analytics Job

3) Configure the Stream Analytics Job
We start with defining the Input. We choose 'Data stream' as Source Type because the sensor data is an ongoing stream and is derived from the IoT Hub. Under Source we choose 'IoT hub'. In our case we have one IoT Hub, but when you have multiple IoT Hubs you can choose one from the list. Now our IoT Hub appears automatically. Next we choose 'datalake' as Consumer group. This is the one we have created earlier. Finally we choose 'JSON' as Event serialization format.

Now we can specify the Output. Give it a suitable name and choose 'Data Lake Store' as Sink. The corresponding Account Name (in this case the one we created earlier) will be selected automatically. Next we enter a file path to store the files in our Data Lake Store account. Optionally, you can include the date and time in this path. The data will then be stored in separate instances per day and per hour, which keeps the storage organized. Finally, choose the 'JSON' format.
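As an illustration, path settings along these lines produce the year/month/day/hour folder structure shown in the result later in this post. The folder name 'sensordata' matches our setup; the date and time formats are options you pick in the output settings of the job.

Path prefix pattern : sensordata/{date}/{time}
Date format         : YYYY/MM/DD
Time format         : HH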

In this post we only configure the Input and Output of the job, as mentioned earlier. Next, we can run the job with a valid query.

Azure Portal - Configure Stream Analytics Job

Result
After running the Stream Analytics Job, the data is stored in the Data Lake Store. You can find it in the Azure portal: open the Data Lake Store and go to Data Explorer. You now see one folder named 'sensordata'. This folder contains multiple subfolders: year, month, day and hour. For now there is only one data file in the 'hour' subfolder, but every hour a new file will be created (as long as your Stream Analytics Job is running). This is exactly what we configured earlier. In each file, the data is stored per 10 seconds. It works!

Azure Portal - Result Data Lake Store

Differences between Blob Storage and Data Lake Store
The first big difference is the size limits. There are no limits on account/file size or number of files in a Data Lake Store, while Blob storage has such restrictions. In addition, Data Lake Store has built-in Hadoop integration. Together with the unlimited storage, this makes it suitable for storing Big Data and then analyzing that data. Another difference is the authentication: Blob storage works with generated storage access keys, while a Data Lake Store uses Azure Active Directory for this.

Overall a Data Lake Store has more possibilities than Blob storage and is optimized for Big Data purposes. The general purpose of Blob Storage is storing data in different scenarios like backups or media files (for streaming). The starting prices are lower for Blob storage, but the storage price depends on the options you choose for your Blob Storage account. This means you can make it as expensive as you want, and in some cases the monthly charges per GB will be higher than for a Data Lake Store. Click here for more details about the differences and prices between a Data Lake Store and Blob storage.

Conclusion
Azure Data Lake Store is very useful in Big Data scenarios, because it combines storage (up to more than 1 petabyte) with the ability to analyze this data. This can be done with Hadoop (built-in integration) or Azure Data Lake Analytics, which is specifically optimized to work with Azure Data Lake Store. From this perspective it offers more than, for example, Blob storage.

Thursday 5 January 2017

Azure Snack - Grant Access to your Azure SQL DB

Case
You want to give a user permission to connect to your database in Azure through SQL Server Management Studio. How do you do this?

Solution
In SQL Server Management Studio (SSMS) with an on-premises database you can do a lot with the GUI, like creating a user and giving this user the right permissions. In this case we use SQL Server Authentication. With a database in Azure you have to write Transact-SQL for this, instead of using the GUI. Of course you can also use T-SQL with an on-premises database, but for convenience I use the GUI there. Unfortunately, this GUI is not supported for an Azure database, just like 'IntelliSense' for example.

When you connect to your database server and you want to create a new login, you will see two different windows: a GUI for an on-premises database and a new query window for an Azure database. We will continue with the queries, which need to be run in a particular order.

SSMS - SQL Server on-premise versus Azure SQL Server

1) Create new login
First we need to create a new login to grant access to the database server. For this you have to be an administrator on the database server. Click on the Security folder and then on Logins to create a 'New Login...'.

Then we edit the generated query to this:

CREATE LOGIN [RobertSmith] 
WITH PASSWORD = 'Abcdefg123!' 
GO

Note:
The password must include letters, numbers, symbols and have a minimum length.

2) Create user 
When we have created the login, we can link this login to a specific database (user). Click on the Security folder in the specific database and then on Users to create a 'New User...'. The engine also generates code at this point.

We change the generated query to:

USE [sensory]
GO

CREATE USER [RobertSmith]
FOR LOGIN [RobertSmith]
WITH DEFAULT_SCHEMA = [dbo]
GO

3) Permissions
Once the user has been created, you can further specify the authorization, for example read access to one or more schemas. This can be done with the following query:

USE [sensory]
GO

GRANT select ON SCHEMA :: [analyse] TO [RobertSmith] 
GO
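If you want to double-check what has been granted, a query along these lines (a sketch, not part of the original steps) lists the permissions of the new user:

-- Check the permissions of the new user (run as administrator on the database)
SELECT  pr.name            AS UserName
,       pe.permission_name AS Permission
,       pe.state_desc      AS State
,       s.name             AS SchemaName
FROM sys.database_permissions pe
JOIN sys.database_principals pr ON pe.grantee_principal_id = pr.principal_id
LEFT JOIN sys.schemas s ON pe.class = 3 AND pe.major_id = s.schema_id
WHERE pr.name = 'RobertSmith';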

Now the user can log in with SQL Authentication using the created credentials. It is important to select the right database when connecting, otherwise SSMS will automatically connect to the master database. In this case we granted permissions on a specific database, so that attempt would fail.

SSMS - Select the database before connecting to server

Conclusion
This isn't very difficult to understand for anybody with some experience in SSMS, but it is not a daily action, so there is a chance that you do not have the queries ready. Anyway, for now you have to do this with T-SQL in Azure, but perhaps in the future a GUI will be available as well.

Wednesday 23 November 2016

Azure - Understanding Stream Analytics Blob Destination

Case
How do you use Azure Blob Storage to store stream analytics data and why would you do that?

Solution
In earlier posts we set up an IoT environment with an IoT Hub and a couple of Stream Analytics Jobs where sensor data is sent to different destinations: Power BI for a real-time dashboard and Azure SQL Database to store the data. In this case we create a new Stream Analytics Job with Blob Storage as output and the sensor data as input. The Input and Output are different, but the query is the same as in other Stream Analytics Jobs. You can find the configuration of the Stream Analytics Job query here.

A reason to use Blob Storage is the flexibility of storing text or binary files in the cloud. The unstructured data can include documents, social data (photos, videos, music and blogs), Big Data (logs, IoT and large datasets) or images and text for web applications. Click here for more pricing details about Blob Storage. You can also store (sensor) data in an Azure Data Lake. Click here for more information about this, along with the major differences between Blob Storage and Data Lake.

To use Azure Blob storage you need a Storage account. You can then add containers to this account, which contain the Blob files. In the case of an image, you will also find a metadata file. In our case the Blob files are JSON files containing sensor data.

Overview Azure Blob Storage with images

1) Create a Storage account
To make use of Blob Storage you have to create a Storage account first. After giving it a suitable name, we choose Blob storage and 'RA-GRS' as Replication. This is the default and it offers the most options. Click here for more information about this. Next we choose 'Hot' as the Access tier, because we want to access the sensor data frequently.


Azure Portal - Create a Storage account

Note:
You can also create a Storage account while setting up the Output of the Stream Analytics Job, but you have fewer options there, so it is not recommended.

2) Create the Stream Analytics Job
Before creating the new job, we already added a new consumer group to our IoT Hub, called 'blob'. Using multiple consumer groups makes it possible for several consumer applications to read data from this IoT Hub independently. Click here to see where you can add/manage consumer group(s).

Now we can create the job. We choose the same Resource group as the IoT Hub and Storage account, because these are in the same life cycle. Our Location is the Netherlands, so we choose West-Europe.

Azure Portal - Create Stream Analytics Job

3) Configure the Stream Analytics Job
First we must add a new Input to the job. The default Source Type is 'Data stream'. We choose this because the sensor data is an ongoing stream and is derived from the IoT Hub. The Source is 'IoT hub', and the IoT Hub that you have created appears automatically. If you have more than one IoT Hub, you can choose one from the drop-down list. After this you must choose the right Consumer group; this is the new group (blob) we created earlier. Finally, choose 'JSON' as the Event serialization format.

Next we add a new Output. First choose 'Blob storage' as Sink and 'Use blob storage from current subscription' as Subscription, because you have already configured the storage account. Otherwise you can edit the storage account settings here by choosing 'Provide blob storage settings manually'. Then you create a new container. Optionally, you can define one or more instances (subfolders) within the container. With this option you can include the date and time in the path, so you get separate folders per date and/or time. This keeps it organized (just like your own local File Explorer) and gives you the option to select data from a specific day/time. We made an instance called 'sensor'. Finally, choose the 'JSON' format.

As we said, in this post we only discuss the configuration of the Input and Output of the job. After configuring the job, we run it with a valid query.

Azure Portal - Configure the Stream Analytics Job for Blob

Result
Now that the data is stored, we want to see what's in our Blob. For this you have to go to your Azure Storage account in the portal. Every object, in our case a Blob, that you store in Azure Storage has a unique URL address. For the Blob service with the storage account name (bitoolsblobstorage) you have created and the container name (sensordata) with the instance (sensor), the URL/endpoint is:
http://bitoolsblobstorage.blob.core.windows.net/sensordata/sensor
More information about the Azure Storage endpoints here.

In the portal go to the Storage account you have made earlier. Click on the container URL and then you see the instance (subfolder). Now you can drill down further on the specific month, day and hour of the incoming sensor data.

Azure Portal - Your Blob file

Conclusion
It looks a lot like the other Stream Analytics posts, but in this case the data is stored in a Blob file. From this point you have several options to do something with this data. First, you can of course do nothing, in which case you use Blob storage purely for storage (backup). You will find the other options in the Cortana Intelligence Suite: for example, process the data with Azure Data Factory (this can also be done with traditional SSIS, which is of course not part of the CIS), analyze the data with Machine Learning, or visualize the data in Power BI.