We have collected sensor data and we want to use it in a Data Warehouse (DWH). Because we do not want to store raw data in our database, we need to aggregate the data first. How can we accomplish this with Azure Data Lake?
About Azure Data Lake
Solution
We use Azure Data Lake Store (ADLS) to store the sensor data. As we know from an earlier post, ADLS is very suitable for storing unstructured data, and there we showed an example of how you can store this sensor data. In this first example the data, one file per day, is already stored in ADLS. You can download the file here. Next, we are going to aggregate this data per day and create a file that is ready to load into the DWH.
To accomplish this we will use another feature of Azure Data Lake, called Azure Data Lake Analytics (ADLA). Together with Data Lake Store, Microsoft offers capabilities similar to Apache Hadoop for dealing with petabytes of Big Data. The advantage of Data Lake Analytics is that it supports Hadoop, but it also introduces a language similar to T-SQL, called U-SQL. This is Microsoft's new Big Data query language: essentially T-SQL with a touch of C# to add even more features to the language. Click here for more details about this.
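To give an idea of that mix, here is a minimal, hypothetical U-SQL sketch (the rowset, values and output path are made up for illustration) that combines a T-SQL-style query with C# expressions:

// Hypothetical rowset with made-up sensor readings (SELECT ... FROM VALUES)
@readings =
    SELECT * FROM
        (VALUES
            ("2016-01-01 00:00:01", "21", "A"),
            ("2016-01-01 00:00:02", "23", "B")
        ) AS R(ReadingTime, Temp, Location);

// T-SQL-style query with C# methods for parsing and conversion
@converted =
    SELECT DateTime.Parse(ReadingTime) AS MeasuredAt,  // C# date parsing
           Convert.ToInt32(Temp) AS Temperature,       // C# type conversion
           Location
    FROM @readings;

// Write the result to a made-up output path
OUTPUT @converted
TO "/Samples/Output/usql_example.csv"
USING Outputters.Csv(outputHeader:true);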
Note:
If you don't have sensor data ready, you can download a sensor generator for free. This 'SensorEventGenerator' can be found here. In addition, you have to create an Event Hub / IoT Hub to send this sensor data to Azure. Click here for more information. In our case we have generated our own data with a similar program and uploaded it to ADLS.
1) Create Data Lake Analytics
First we have to create a new Data Lake Analytics account. We give the account a suitable name and choose the same resource group as the ADLS account. As a last step we need to choose the Data Lake Store, in our case 'bitoolsadls', where the sensor files are stored.
Azure Portal - Create Data Lake Analytics
Important to know is the pricing. Creating the service is free, even while it is running; the payment starts when you run U-SQL jobs. You pay for computing power, measured in Analytics Units (AUs). More information about pricing here. For example, if we run a job for 24 hours with 1 AU (assuming that is sufficient), this results in the following costs. Depending on the amount of data, the number of AUs may need to be increased.
Data Lake Analytics Cost Indication
2) Create new Job
Now that everything is set up, we can create a new U-SQL job. Go to the Data Lake Analytics account you just created and click on 'New Job'. Give it a suitable name. Notice that there are two other options you can change: Priority and AUs. With 'Priority' you determine the importance of the job; for example, a job with Priority 1 will always start first. The other option is the number of Analytics Units (as explained earlier). Increasing the AUs will also give you a cost indication. For now we leave both on their defaults, because this is our only job.
3) Write a Query
Let's get started with the query. As mentioned before, it is similar to T-SQL: you can write SELECT statements with the familiar FROM and WHERE clauses and transfer the data to a new location. In addition, you use C# for data type conversions and, for example, to determine today's date. In this case we retrieve the raw data, aggregate it and store it in a new CSV file in a new ADLS folder.
To retrieve the data from the file, use EXTRACT and FROM. You can also use variables in the FROM clause to avoid hard coded paths, but for now we fill in the file path hard coded. We do not need to include the Data Lake name itself, because our ADLS account is the default. We also have to add a USING statement to specify the extract format, in this case a CSV file. More details about this statement here. Important to know is that we specify that the first row must be skipped, because that is the header. The result of this query is stored in the variable '@sensorData'.
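As a side note, such a path could also be put in a variable. A minimal sketch, with variable names of our own choosing (they do not appear in the query below), could look like this:

// Hypothetical variables to avoid hard coded paths (variable names are our own choice)
DECLARE @inputFile string  = "/SensorData/Input/bitools_sample_data_01-01-2016.csv";
DECLARE @outputFile string = "/SensorData/Output/Averages_" + DateTime.Now.ToString("yyyy-MM-dd") + ".csv"; // C# expression for today's date

// The EXTRACT and OUTPUT statements would then simply refer to these variables:
//     FROM @inputFile  USING Extractors.Csv(skipFirstNRows:1);
//     TO   @outputFile USING Outputters.Csv(outputHeader:true, quoting:false);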
Next we want to aggregate the data, because the granularity of the raw data is per second. We take the averages of temperature (temp) and humidity (hmdt) per day and per location. For this we need a SELECT statement on the '@sensorData' variable. Besides the aggregations, we do some data type conversions, because extracting the data as string is currently the preferred way. We also use GROUP BY to divide the result into groups (per day and per location). You may have noticed that we do not select all the columns, because we do not need every column from the source file.
Once we have retrieved and transformed the data, we want to save it in a new file. For this we use the OUTPUT statement and define the folder path, which is created automatically along with the new file. Because we are also saving the new file in CSV format, we again specify the CSV format with USING, this time with an outputter instead of an extractor and without the 'skipFirstNRows' parameter. Every time you run the job, it will overwrite the destination file.
The query:
// Extract the sensor data from CSV file (skip the header)
@sensorData =
    EXTRACT [time]                  string
    ,       [dsplid]                string
    ,       [dspl]                  string
    ,       [temp]                  string
    ,       [hmdt]                  string
    ,       [status]                string
    ,       [location]              string
    ,       [EventProcessedUtcTime] string
    ,       [PartitionId]           string
    ,       [EventEnqueuedUtcTime]  string
    FROM "/SensorData/Input/bitools_sample_data_01-01-2016.csv"
    USING Extractors.Csv(skipFirstNRows:1);

// Aggregate the sensor data (average per location) and data type conversions
@result =
    SELECT DateTime.ParseExact([time], "yyyy-MM-dd HH:mm:ss", null).Date AS Date
    ,      AVG(Convert.ToInt32([temp])) AS Temperature
    ,      AVG(Convert.ToInt32([hmdt])) AS Humidity
    ,      [location] AS Location
    FROM @sensorData
    GROUP BY DateTime.ParseExact([time], "yyyy-MM-dd HH:mm:ss", null).Date
    ,        [location];

// Save the sensor data to a new CSV file
OUTPUT @result
TO "/SensorData/Output/bitools_sample_data_01-01-2016_AveragePerDayPerLocation.csv"
USING Outputters.Csv(outputHeader : true, quoting : false);
The EXTRACT and OUTPUT statements use absolute or relative file paths; that is why we cannot use a regular SELECT statement to read the data straight from the file. We also use the C# syntax (//) to add comments. Click here and here for more information about the U-SQL language.
Note 2:
PolyBase does not handle column headers that well. If you want to read this output file with PolyBase, you could consider removing "outputHeader : true" from the OUTPUT part of the query.
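The last statement of the query would then, for example, look like this (same output path, just without the header option):

// Alternative OUTPUT without a header row, so PolyBase can read the file more easily
OUTPUT @result
TO "/SensorData/Output/bitools_sample_data_01-01-2016_AveragePerDayPerLocation.csv"
USING Outputters.Csv(quoting:false);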
Azure Portal - Create the U-SQL Job
4) Run the Job
When the query is done, click on 'Submit Job'. A new screen appears where we can monitor the running job. When the job has finished successfully (after the 'Finalizing' phase), we can preview the output file. Notice that the new file has 10 rows: one row per location with the average temperature and humidity.
Azure Portal - Run the U-SQL Job
Finally, let's see if the file is stored in Azure Data Lake Store. Go to your ADLS account, click on 'Data Explorer' and navigate to your output folder. There it is!
Azure Portal - See result in Data Lake Store
Summary
We showed you how Azure Data Lake is suitable for storing and transforming Big Data, in this case sensor data. Of course, there are more ways to achieve this with the Cortana Intelligence Suite, for example with Stream Analytics.
In this post we used one single file, but companies often have hundreds or thousands of files. In a future post we will show you how to handle multiple input files in Azure Data Lake Analytics.
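As a small preview, and assuming the daily files follow the same naming pattern, U-SQL file sets allow a wildcard in the EXTRACT path. A minimal sketch (the pattern below is our own assumption):

// Hypothetical sketch: a file set pattern reads all matching daily files in one EXTRACT
@sensorData =
    EXTRACT [time]                  string
    ,       [dsplid]                string
    ,       [dspl]                  string
    ,       [temp]                  string
    ,       [hmdt]                  string
    ,       [status]                string
    ,       [location]              string
    ,       [EventProcessedUtcTime] string
    ,       [PartitionId]           string
    ,       [EventEnqueuedUtcTime]  string
    FROM "/SensorData/Input/bitools_sample_data_{*}.csv"
    USING Extractors.Csv(skipFirstNRows:1);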