Tuesday 29 August 2017

Azure - Continue with Azure Data Lake for Big Data


Case
In an earlier post we showed you how to transform sensor data using Azure Data Lake. Many companies are gathering (or already have) a lot of Big Data in many different files. How can we use Azure Data Lake Analytics (ADLA) to handle these files?

Big Data and U-SQL

Solution
Just like in the previous post, the sensor data is already stored in an Azure Data Lake Store (ADLS). Next, we build and configure a U-SQL job. U-SQL is Microsoft's new Big Data query language that you can use in ADLA. Last time we developed in the Azure Portal, but there are other options. Last month, Microsoft released a Visual Studio plug-in for Azure Data Lake and Stream Analytics. This allows you, while writing U-SQL queries, to use other benefits of Visual Studio like Team Foundation Server (TFS), debugging and adding C# code for custom inputs and outputs.

In this case we have sensor data for one year. The data is stored in several files: one file per day. We want to create a U-SQL job that aggregates the data per day and then stores all the data. For now we focus on the query itself. See here for how to create an ADLA service/account and how to create a new U-SQL job.

1) Install plug-in for Visual Studio
First we have to download and install the plug-in Microsoft Azure Data Lake and Stream Analytics Tools for Visual Studio. You can download the plug-in here. Besides creating and debugging U-SQL scripts, you can also build queries for Azure Stream Analytics jobs with this plug-in.

2) Write the Query
Open Visual Studio and create a new U-SQL project. Our U-SQL script is called 'multipleFiles'. The starting point is the query we made in an earlier post, which extracts one single sensor file.

Variables
Because we have multiple files, we create a dynamic FROM clause using variables, in this case containing the folder path in ADLS. We use the following syntax for this: "bitools_sample_data_{*}.csv". The {*} is a wildcard and will get you every file of the year (see the comment in the query below for the structure of the input file names). We also skip the first row, the headers. A variant that also captures the date from the file name is sketched after the EXTRACT step below.

// File naming convention: bitools_sample_data_01-01-2016.csv, bitools_sample_data_01-02-2016.csv etc.
// Create variable for input files
DECLARE @folderInput string = "/SensorData/Input/";
DECLARE @inputString string = @folderInput + "bitools_sample_data_{*}.csv";

Extract
To retrieve the data from the files, we use an EXTRACT statement. In an earlier post, we extracted the data as strings. Now we extract the 'time' column in a DateTime format (just like the source file), using the variable we created earlier in the FROM clause.

// Extract the sensor data from CSV file (skip the header)
@sensorData = 
    EXTRACT 
        [time]                    DateTime
    ,   [dsplid]                  string
    ,   [dspl]                    string
    ,   [temp]                    string
    ,   [hmdt]                    string
    ,   [status]                  string
    ,   [location]                string
    ,   [EventProcessedUtcTime]   string
    ,   [PartitionId]             string
    ,   [EventEnqueuedUtcTime]    string
    FROM @inputString
    USING Extractors.Csv(skipFirstNRows:1);
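
As a side note to the wildcard approach above: U-SQL file sets can also expose parts of the file name as a virtual column. The sketch below is a hypothetical variant (not part of the original script) that captures the date from the file name in a virtual column, here called 'filedate'; the pattern assumes the MM-dd-yyyy naming convention mentioned earlier.

// Hypothetical variant: use a file set pattern with a virtual column instead of {*}
DECLARE @inputPattern string = @folderInput + "bitools_sample_data_{filedate:MM}-{filedate:dd}-{filedate:yyyy}.csv";

@sensorDataPerFile =
    EXTRACT 
        [time]                    DateTime
    ,   [dsplid]                  string
    ,   [dspl]                    string
    ,   [temp]                    string
    ,   [hmdt]                    string
    ,   [status]                  string
    ,   [location]                string
    ,   [EventProcessedUtcTime]   string
    ,   [PartitionId]             string
    ,   [EventEnqueuedUtcTime]    string
    ,   filedate                  DateTime // virtual column derived from the file name
    FROM @inputPattern
    USING Extractors.Csv(skipFirstNRows:1);

With this variant you could, for example, group on 'filedate' instead of converting the 'time' column.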

Transformation
Next we aggregate the data into averages based on the 'time' and 'location' columns, using a SELECT statement. We convert the 'time' column to a date format, because we want to aggregate per day. We give the columns suitable names. You may have noticed that we do not select all the columns, because we do not need all columns from the source file.

// Aggregate the sensor data (average per location) and data type conversions
@result =
    SELECT
        time.ToString("yyyy-MM-dd") AS Date
    ,   AVG(Convert.ToInt32([temp])) AS Temperature
    ,   AVG(Convert.ToInt32([hmdt])) AS Humidity
    ,   [location] AS Location
    FROM @sensorData
    GROUP BY
        time.ToString("yyyy-MM-dd")
    ,   [location];

Output
Finally, we save the data in a new CSV file. In the OUTPUT statement, you can also add an ORDER BY clause. We want the header back in our output data and therefore we use 'outputHeader'.

// Save the sensor data to a new CSV file
OUTPUT @result
TO "/SensorData/Output/bitools_sample_data_AveragePerDayPerLocation.csv"
ORDER BY 
    [Location] ASC
USING Outputters.Csv(outputHeader : true, quoting:false);

See below a screenshot of the full query in Visual Studio.

Visual Studio - U-SQL script

3) Run the Job
When you have built the query, click 'Submit' and the Job View screen automatically appears. This is similar to the Job Details screen in the Azure Portal that we used earlier. But when you look closely, you see that Visual Studio offers more information than the portal, for example more details in the 'Job Summary' and error details.


Visual Studio - Run U-SQL script

Error details
When you have an error in the U-SQL query, you can often see the details of this error directly in the 'Job View' screen. In case of a Vertex user code error, you do not immediately see the error details on this screen. If you want to see the details of this error, scroll down in the 'Job Summary' and click on 'Resources'. Then choose 'Profile' and search for the keyword 'jobError'. This row contains the details of the error.

Visual Studio - U-SQL Query error details

4) Result
Now go to the Azure portal and to your Azure Data Lake Store. Open the new file in 'Data Explorer'. Our output file is located in the folder 'SensorData' and then 'Output'. The result should look like this:


Azure Portal - View result in Data Lake Store

Conclusion
In this post we went deeper into building a U-SQL script using Visual Studio. In our opinion, you should develop as much as possible in Visual Studio, because we all know the benefits of this tool, like TFS integration and debugging.

Sunday 20 August 2017

Use PolyBase to read Blob Storage in Azure SQL DW

Case
I have a file in an Azure Blob Storage container which I want to use in my Azure SQL Data Warehouse. How can I push the content of that file to Azure SQL DW?
One of the options: PolyBase

Solution
You could of course use an ETL product or Azure Data Factory, but you can also use PolyBase technology in Azure SQL DW and use that file as an external table. The data stays in the Azure Blob Storage file, but you can query the data like a regular table.

Starting position
The starting position is a file in an Azure Blob Storage container. This file was created with U-SQL in another post to quickly process large numbers of files in Azure.
The content of the CSV file

Before we start, make sure your Azure SQL Data Warehouse is started and use SQL Server Management Studio (SSMS) to connect to your Data Warehouse. Notice that the icon of a SQL DW is different from that of a SQL DB.
Icon SQL DW vs SQL DB

1) Master key
In the next step we will use a credential that points to the Azure Blob Storage. To encrypt that credential, we first need to create a master key in our Azure SQL Data Warehouse, but only if you do not already have one. You can check that in the table sys.symmetric_keys: if a row exists where the symmetric_key_id column is 101 (or the name column is '##MS_DatabaseMasterKey##'), then you already have a master key. Otherwise we need to create one. For Azure SQL Data Warehouse a password for that master key is optional. For this example we will not use a password.
--Master key
IF NOT EXISTS (SELECT * FROM sys.symmetric_keys WHERE symmetric_key_id = 101)
BEGIN
    PRINT 'Creating Master Key'
    CREATE MASTER KEY;
END
ELSE
BEGIN
    PRINT 'Master Key already exists'
END 
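
If you do want to protect the master key with a password, the statement looks like this (the password value below is just a placeholder):

--Master key with a password (placeholder value)
CREATE MASTER KEY ENCRYPTION BY PASSWORD = '<YourStrongPassword123!>';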

2) Credentials
The next step is to create a credential which will be used to access the Azure Blob Storage. Go to the Azure portal and find the Storage Account that contains your blob file. Then go to the Access keys page and copy key1 (or key2).
Access keys

Then execute the following code where IDENTITY contains a random string and SECRET contains the copied key from your Azure Storage account.
--Credential
CREATE DATABASE SCOPED CREDENTIAL AzureStorageCredential
WITH
    IDENTITY = 'user',
    SECRET = 'JGadV/tAt1npuNwkiH9HnI/wosi8YS********=='
;

Tip: give the credential a descriptive name so that you know what it is used for. You can find all credentials in the table sys.database_credentials:
--Find all credentials
SELECT * FROM sys.database_credentials
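
When you later regenerate the storage account key, you can update the secret of the existing credential instead of recreating it. A minimal sketch (the new key value is a placeholder):

--Update the secret after rotating the storage account key
ALTER DATABASE SCOPED CREDENTIAL AzureStorageCredential
WITH IDENTITY = 'user',
     SECRET = '<new storage account key>'
;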

3) External data source
With the credential from the previous step we will create an External data source that points to the Azure Blob Storage container where your file is located. Execute the code below where:
  • TYPE = HADOOP (because PolyBase uses the Hadoop APIs to access the container)
  • LOCATION = the connection string to the container (replace [ContainerName] with the name of the container and [StorageAccountName] with the name of your storage account).
  • CREDENTIAL = the name of the credentials created in the previous step.
--Create External Data Source
CREATE EXTERNAL DATA SOURCE AzureStorage
WITH (
    TYPE = HADOOP,
    LOCATION = 'wasbs://[ContainerName]@[StorageAccountName].blob.core.windows.net',
    CREDENTIAL = AzureStorageCredential
);

Tip: give the external data source a descriptive name so that you know what it is used for. You can find all external data sources in the table sys.external_data_sources:
--Find all external sources
SELECT * FROM sys.external_data_sources

Notice that the filename is not mentioned in the External Data Source. This is done in the External Table. This allows you to use multiple files from the same container as External Tables.
Filename not in External Data Source

4) External File format
Now we need to describe the format used in the source file. In our case we have a comma-delimited file. You can also use this file format to supply the date format, compression type or encoding.
--Create External File Format
CREATE EXTERNAL FILE FORMAT TextFile
WITH (
    FORMAT_TYPE = DelimitedText,
    FORMAT_OPTIONS (FIELD_TERMINATOR = ',')
);

Tip: give the file format a descriptive name so that you know what it is used for. You can find all external file formats in the table sys.external_file_formats:
--Find all external file formats
SELECT * FROM sys.external_file_formats
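
As an illustration of those extra options, here is a hypothetical variant of the file format (the name TextFileGzip and the option values are just examples) that also specifies a date format and Gzip compression:

--Hypothetical variant: delimited file format with date format and Gzip compression
CREATE EXTERNAL FILE FORMAT TextFileGzip
WITH (
    FORMAT_TYPE = DelimitedText,
    FORMAT_OPTIONS (
        FIELD_TERMINATOR = ',',
        DATE_FORMAT = 'yyyy-MM-dd HH:mm:ss',
        USE_TYPE_DEFAULT = TRUE
    ),
    DATA_COMPRESSION = 'org.apache.hadoop.io.compress.GzipCodec'
);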

5) External Table
The last step before we can start querying is creating the external table. In this create table script you need to specify all columns, datatypes and the filename that you want to read. The filename starts with a forward slash. You also need the data source from step 3 and the file format from step 4.
--Create External table
CREATE EXTERNAL TABLE dbo.sensordata (
    [Date] DateTime2(7) NOT NULL,
    [temp] INT NOT NULL,
    [hmdt] INT NOT NULL,
    [location] nvarchar(50) NOT NULL
)
WITH (
    LOCATION='/bitools_sample_data_AveragePerDayPerBuilding.csv',
    DATA_SOURCE=AzureStorage, -- from step 3
    FILE_FORMAT=TextFile      -- from step 4
);
Note: PolyBase does not like column headers. It will handle the header like a regular data row and throw an error when the datatype doesn't match. There is a little workaround for this with REJECT_TYPE and REJECT_VALUE. However, this only works when the datatype of the header is different from the datatypes of the actual rows. Otherwise you have to filter out the header row in a subsequent step (see the sketch after the script below).
--Create External table with header
CREATE EXTERNAL TABLE dbo.sensordata5 (
    [Date] DateTime2(7) NOT NULL,
    [temp] INT NOT NULL,
    [hmdt] INT NOT NULL,
    [location] nvarchar(50) NOT NULL
)
WITH (
    LOCATION='/bitools_sample_data_AveragePerDayPerBuilding.csv',
    DATA_SOURCE=AzureStorage,
    FILE_FORMAT=TextFile,
    REJECT_TYPE = VALUE, -- Reject rows with wrong datatypes
    REJECT_VALUE = 1     -- Allow 1 failure (the header)
);
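A sketch of that alternative (the table name dbo.sensordata_raw is made up for this example): read every column as text, so the header row loads without datatype errors, and filter it out afterwards.

--Hypothetical alternative: read all columns as text and filter out the header row later
CREATE EXTERNAL TABLE dbo.sensordata_raw (
    [Date] nvarchar(50) NOT NULL,
    [temp] nvarchar(50) NOT NULL,
    [hmdt] nvarchar(50) NOT NULL,
    [location] nvarchar(50) NOT NULL
)
WITH (
    LOCATION='/bitools_sample_data_AveragePerDayPerBuilding.csv',
    DATA_SOURCE=AzureStorage, -- from step 3
    FILE_FORMAT=TextFile      -- from step 4
);

--Remove the header row and convert the datatypes in a subsequent query
SELECT  CONVERT(datetime2(7), [Date]) AS [Date]
,       CONVERT(int, [temp])          AS [temp]
,       CONVERT(int, [hmdt])          AS [hmdt]
,       [location]
FROM    dbo.sensordata_raw
WHERE   [Date] <> 'Date';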
You can find all external tables in the table sys.external_tables.
--Find all external tables
SELECT * FROM sys.external_tables
You can also find the External Table (as well as the External Data Source and the External File Format) in the Object Explorer of SSMS.
SSMS Object Explorer

6) Query external table
Now you can query the external table like any other regular table. However, the table is read-only, so you cannot delete, update or insert records. If you update the source file, then the data in this external table also changes instantly, because the file is used to get the data.
--Testing
SELECT count(*) FROM dbo.sensordata;
SELECT * FROM dbo.sensordata;
Querying an external table

7) What is next?
Most likely you will be using a CTAS query (Create Table As Select) to copy and transform the data to another table, since this is the fastest/preferred way in SQL DW. In a subsequent post we will explain more about CTAS, but here is what a CTAS query looks like.
--CTAS
CREATE TABLE [dbo].[Buildings]
WITH
(
    DISTRIBUTION = ROUND_ROBIN
,   CLUSTERED COLUMNSTORE INDEX
)
AS
SELECT  [location]
,       [date]
,       [temp]
,       [hmdt]
FROM    [dbo].[sensordata]
;

In some cases you could also use a SELECT INTO query as an alternative for CTAS.

Summary
In this post you saw how easy it is to read a file from Azure Blob Storage and use it as a table in Azure SQL Data Warehouse. The big advantage of PolyBase is that you only have one copy of the data, because the data stays in the file. In a next post we will see how to read the same file from the Azure Data Lake Store, which does not use the Access keys.
In another post we will explain the basic usage of the CTAS query, which is the preferred way to handle large sets of data in Azure SQL DW and in its on-premises precursor APS (a.k.a. PDW).

Sunday 30 July 2017

Azure SQL Database vs Azure SQL Data Warehouse

Case
As we slowly move from on-premises data warehouses with Microsoft SQL Server to cloud data warehouses in Microsoft Azure, we need to know more about the various options in Azure. You probably already used an Azure SQL Database, but Microsoft also introduced Azure SQL Data Warehouse. What are the differences between these two databases?
Azure SQL DB vs Azure SQL DW

Solution
Back in 2013, Microsoft introduced Azure SQL Database, which has its origin in the on-premises Microsoft SQL Server. In 2015 (however, public availability was in July 2016) Microsoft added SQL Data Warehouse to the Azure cloud portfolio, which has its origin in the on-premises Microsoft Analytics Platform System (APS). This was a Parallel Data Warehouse (PDW) combined with Massively Parallel Processing (MPP) technology on standard hardware. It is the 'big brother' of SQL Server, but with a slightly different purpose.

In this post we will briefly describe the differences between these two Microsoft Azure services, but first Microsoft's own definitions:
  • Azure SQL Database is a relational database-as-a service using the Microsoft SQL Server Engine (more);
  • Azure SQL Data Warehouse is a massively parallel processing (MPP) cloud-based, scale-out, relational database capable of processing massive volumes of data (more);

Differences
1) Purpose: OLAP vs OLTP
Although both Azure SQL DB and Azure SQL DW are cloud-based systems for hosting data, their purpose is different. The biggest difference is that SQL DB is specifically for Online Transaction Processing (OLTP). This means operational data with a lot of short transactions like INSERT, UPDATE and DELETE by multiple people and/or processes. The data is most often highly normalized and stored in many tables.

On the other hand, SQL DW is specifically for Online Analytical Processing (OLAP) for data warehouses. This means consolidated data with a lower volume of transactions, but more complex queries. The data is most often stored de-normalized in fewer tables, using a star or snowflake schema.

2) Architecture
To make the differences clearer, here is a quick preview of the architecture of Azure SQL Data Warehouse, where you see a whole collection of Azure SQL Databases and separate storage. The maximum number of compute nodes at the moment is 60.
Architecture SQL DW: Decouples (Blob) storage from compute (SQL DB)

3) Storage size
The current size limit of an Azure SQL Database is 4TB, but it has been getting bigger over the past few years and will probably end up around 10TB in the near future. On the other hand we have the Azure SQL Data Warehouse, which has no storage limit at all (only the limit of your wallet), because the storage is separated from the compute.

4) Pricing
The pricing is also quite different. Where Azure SQL DB starts at €4,20 a month, Azure SQL DW starts around €900,- a month, excluding the cost of storage, which is included in SQL DB. The storage costs for Azure SQL DW are around €125,- per TB per month. And the maximum costs of a single SQL DB are around €13500,-, where SQL DW ends around a massive €57000,- (excl. storage). But when you take a look at the architecture above, it should be no surprise that SQL DW is more expensive than SQL DB, because it consists of multiple SQL DBs.

However, SQL DW has one big trick up its sleeve that SQL DB doesn't have: you can pause it completely and then you only pay for storage. If you start your SQL DW with your ETL job and pause it right after you have processed your Azure Analysis Services, then you only need it for a small percentage of the month.

Note: prices are from July 2017

5) DTU vs DWU
SQL DB has 15 different pricing tiers which specify the number of Database Transaction Units (DTUs) and the storage size/type:
- Basic
- Standard (S0, S1, S2, S3)
- Premium (P1, P2, P3, P4, P6, P11, P15)
- Premium RS (PRS1, PRS2, PRS4, PRS6)
Basic has only 5 DTUs and the highest number of DTUs is, at the time of writing, 4000.
The term DTU is a bit vague. It is a mysterious combination of RAM, CPU and read-write rates, but basically if you want to double the performance of your current database you just need to double the number of DTUs for your database.

SQL DW has 12 different pricing tiers and uses Data Warehouse Units (DWUs) to specify the performance level.
- DWU100, 200, 300, 400, 500, 600, 1000, 1200, 1500, 2000, 3000, 6000
The term DWU is a little less vague, because if you divide that number by 100 you have the number of compute nodes available for that pricing tier. On the other hand, the exact combination of CPU, memory and IOPS per compute node is unknown.

Because both services have a different purpose it is a bit strange to compare the hardware, but according to this MSDN blog post 1 DWU is approximately 7,5 DTU.

But there is also some similarity: for both services you can use the same script to change the pricing tier on the fly, to either give the performance a real boost when needed or to save money in the quiet hours.
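
A minimal sketch of such a script, assuming the database names below; for SQL DW this ALTER DATABASE is typically executed while connected to the master database:

--Scale Azure SQL Database to another pricing tier (example: S3)
ALTER DATABASE [MyDatabase] MODIFY (SERVICE_OBJECTIVE = 'S3');

--Scale Azure SQL Data Warehouse to another performance level (example: DW400)
ALTER DATABASE [MyDataWarehouse] MODIFY (SERVICE_OBJECTIVE = 'DW400');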

6) Concurrent Connections
Although SQL DW is a collection of SQL Databases, the maximum number of concurrent connections is much lower than with SQL DB. SQL DW has a maximum of 1024 active connections, where SQL DB can handle 6400 concurrent logins and 30000 concurrent sessions. This means that in the exceptional case where you have over a thousand active users for your dashboard, you probably should consider SQL DB to host the data instead of SQL DW.
For more details see the SQL DB Resource Limitations and SQL DW Resource Limitations.

7) Concurrent Queries
Besides the maximum number of connections, the number of concurrent queries is also much lower. SQL DW can execute a maximum of 32 queries at one moment, where SQL DB can have 6400 concurrent workers (requests). This is where you see the difference between OLTP and OLAP.
For more details see the SQL DB Resource Limitations and SQL DW Resource Limitations.

8) PolyBase
Azure SQL Data Warehouse supports PolyBase. This technology allows you to access data outside the database with regular Transact-SQL. It can, for example, use a file in an Azure Blob Storage container as an (external) table. Other options are importing and exporting data from Hadoop or Azure Data Lake Store. Although SQL Server 2016 also supports PolyBase, Azure SQL Database does not (yet?) support it.

9) Query language differences
Although SQL DW uses SQL DB in the background, there are a few minor differences when querying or creating tables:
- SQL DW cannot use cross-database queries. So all your data should be in the same database.
- SQL DW can use IDENTITY, but only for INT or BIGINT. Moreover, the IDENTITY column cannot be used as part of the distribution key (see the sketch after this list).
- Also see this SQL DW list of unsupported table features.
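
A minimal sketch of the IDENTITY remark above (the table and column names are made up for this example); note that the IDENTITY column is not the distribution column:

--Hypothetical example: IDENTITY in Azure SQL DW (INT or BIGINT only)
CREATE TABLE dbo.DimLocation
(
    LocationKey   INT IDENTITY(1,1) NOT NULL,
    LocationName  NVARCHAR(50)      NOT NULL
)
WITH
(
    DISTRIBUTION = ROUND_ROBIN,
    CLUSTERED COLUMNSTORE INDEX
);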

10) Replication
SQL DB supports active geo-replication. This enables you to configure up to four readable secondary databases in the same or a different location. SQL DW does not support active geo-replication, only Azure Storage replication. However, this is not a live, readable, synchronized copy of your database! It's more like a backup.

11) In-Memory OLTP tables
SQL DB supports In-Memory OLTP. SQL DW is OLAP and does not support it.

12) Always Encrypted
SQL DB supports Always Encrypted to protect sensitive data. SQL DW does not support it.

Conclusion
Although Azure SQL DB looks much cheaper on a monthly basis, this doesn't mean you should always choose SQL DB by default. One big advantage is that you can pause SQL DW. For example, a stage database is only used during the ETL process. Why should you always have this database up and running? Or why should your datamart stay online 24*7 if your end users only use Analysis Services to browse the data?

On the other hand, the original purpose of both services is different (OLTP vs OLAP), but this doesn't mean you should always use Azure SQL DW for your data warehouses. Depending on several factors like data size, complexity, required up-time and budget, Azure SQL DB could also host your data warehouse. You could even mix them in your project. For example Stage and Historical/Persistent stage in Azure SQL DW and your Datamart in Azure SQL DB.



Please leave a comment if you know more significant differences that are worth mentioning.

Sunday 23 July 2017

Azure - Use Azure Data Lake for Big Data

Case
We have collected sensor data and we want to use this in a Data Warehouse (DWH). Because we do not want to store raw data in our database, we need to process this data first. How can we accomplish this with Azure Data Lake?
    
    
About Azure Data Lake

Solution
We use Azure Data Lake Store (ADLS) to store the sensor data. As we know from an earlier post, ADLS is extremely suitable for storing unstructured data, and we showed an example of how you can store this sensor data. In this first example the data, one file for each day, is already stored in ADLS. You can download the file here. Next we are going to aggregate this data per day and create a file that is ready to load.

To accomplish this we will use another feature of Azure Data Lake, called Azure Data Lake Analytics (ADLA). With this and Data Lake Store, Microsoft offers new features similar to Apache Hadoop to deal with petabytes of Big Data. The advantage of Data Lake Analytics is that it supports Hadoop, but also introduces a language similar to T-SQL, called U-SQL. This is Microsoft's new Big Data query language. It is T-SQL with a little touch of C# to add even more features to the language. Click here for more details about this.

Note:
If you don't have sensor data ready, you can download a sensor generator for free. This 'SensorEventGenerator' can be found here. In addition, you have to create an Event Hub / IoT Hub to send this sensor data to Azure. Click here for more information. In our case we have generated our own data with a similar program and uploaded it to ADLS.


1) Create Data Lake Analytics
First we have to create a new Data Lake Analytics account. We give the account a suitable name and we choose the same resource group as the ADLS uses. As a last step we need to choose the Data Lake Store, in our case 'bitoolsadls', where the sensor files are stored.

Azure Portal - Create Data Lake Analytics

Important to know is the pricing. The creation of the service is free, even while it is running. The payment starts with the use of U-SQL jobs: you pay for computing power, measured in Analytics Units (AUs). More information about pricing here. For example, if we want to run a job for 24 hours with 1 AU (and that is sufficient), this results in the following costs:

Data Lake Analytics Cost Indication
Depending on the amount of data, the number of Analytics Units (AUs) must be increased.

2) Create a new Job
Now everything is set, we must create a new U-SQL job. Go to the Data Lake Analytics account you just created and click on 'New Job'. Give it a suitable name. Notice that you have two other options to change: Priority and AUs. With 'Priority' you can determine the importance of the job. For example, if the job has Priority 1, this job will always start first. The other option is the number of Analytics Units (as we explained earlier). Increasing the AUs will give you a cost indication. For now we leave both on default, because it is our only job.

3) Write a Query
Let's get started with the query. As I mentioned before, it is similar to T-SQL, because you can also write SELECT statements with the familiar FROM and WHERE clauses and transfer the data to a new location. In addition, you must use C# for data type conversions and, for example, to determine today's date. In this case we retrieve the raw data, aggregate this data and store it in a new CSV file in a new ADLS folder.

To retrieve the data from the file, use EXTRACT and FROM. You can also use variables in the FROM clause to avoid hard-coded paths, but for now we fill in our file path hard-coded. We do not need to fill in the Data Lake name itself, because the default is our ADLS account. We also have to use the USING statement to specify the extract format, in this case a CSV file. More details about this statement here. Important to know is that we specify that the first row must be skipped, because that's the header. The result of this query will be stored in the variable '@sensorData'.

Next we want to aggregate our data, because the granularity of the raw data is per second. We take the averages of temperature (temp) and humidity (hmdt) per day and per location. We need a SELECT statement on the '@sensorData' variable for this. Besides the aggregations, we are doing some datatype conversions, because extracting the data as string is currently the preferred way. We also use GROUP BY for dividing the result into groups (per day and per location). You may have noticed that we do not select all the columns, because we do not need all columns from the source file.

Once we have retrieved and transformed the data, we want to save it in a new file. Therefore, we use the OUTPUT statement. Define the folder path; this will be automatically created along with the new file. Because we are also saving the new file in CSV format, we use the corresponding CSV outputter in the USING clause, except now without the 'skipFirstNRows' parameter. Every time you start the job, it will overwrite the destination file.

The query:

// Extract the sensor data from CSV file (skip the header)
@sensorData = 
    EXTRACT 
        [time]                    string
    ,   [dsplid]                  string
    ,   [dspl]                    string
    ,   [temp]                    string
    ,   [hmdt]                    string
    ,   [status]                  string
    ,   [location]                string
    ,   [EventProcessedUtcTime]   string
    ,   [PartitionId]             string
    ,   [EventEnqueuedUtcTime]    string
    FROM "/SensorData/Input/bitools_sample_data_01-01-2016.csv"
    USING Extractors.Csv(skipFirstNRows:1);

// Aggregate the sensor data (average per location) and data type conversions
@result =
    SELECT 
        DateTime.ParseExact([time], "yyyy-MM-dd HH:mm:ss", null).Date AS Date
    ,   AVG(Convert.ToInt32([temp])) AS Temperature
    ,   AVG(Convert.ToInt32([hmdt])) AS Humidity
    ,   [location] AS Location
    FROM @sensorData
    GROUP BY
        DateTime.ParseExact([time], "yyyy-MM-dd HH:mm:ss", null).Date
    ,   [location];

// Save the sensor data to a new CSV file
OUTPUT @result
TO "/SensorData/Output/bitools_sample_data_01-01-2016_AveragePerDayPerLocation.csv"
USING Outputters.Csv(outputHeader : true, quoting:false);


Note 1:
The statements EXTRACT and OUTPUT use absolute or relative file paths. That's why we cannot use SELECT for retrieving the data. We also use the C# syntax (//) to add comments. Click here and here for more information about the U-SQL language.

Note 2:
PolyBase does not handle column headers that well. If you want to read this output file with PolyBase, you could consider removing "outputHeader : true" from the OUTPUT part of the query.
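
For example, the OUTPUT part of the query would then look like this (only the outputter options change):

// Save the sensor data to a new CSV file without a header row (easier to read with PolyBase)
OUTPUT @result
TO "/SensorData/Output/bitools_sample_data_01-01-2016_AveragePerDayPerLocation.csv"
USING Outputters.Csv(quoting:false);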


Azure Portal - Create the U-SQL Job

4) Run the Job
When the query is done, we must click on 'Submit Job'. A new screen will appear where we can monitor the running job. When the job has successfully finished the 'Finalizing' phase, we can preview the output file. Notice that the new file has 10 rows: one average row for temperature and humidity per location.

Azure Portal - Run the U-SQL Job

Finally, let's see if the file is stored in the Azure Data Lake Store. Go to your ADLS and click on 'Data Explorer'. Find your output folder and there it is!

Azure Portal - See result in Data Lake Store

Summary
We showed you how Azure Data Lake is suitable for storing and transforming Big Data, in this case sensor data. Of course, there are more ways to accomplish this with the Cortana Intelligence Suite, for example Stream Analytics.

In this post we used one single file, but often companies have hundreds or thousands of files. In a next post we will show you how to handle multiple input files in Azure Data Lake Analytics.