
Friday 1 May 2020

Azure Data Factory - Use Key Vault Secret in pipeline

Case
I want to use secrets from Azure Key Vault in my Azure Data Factory (ADF) pipeline, but only certain properties of the Linked Services can be filled by secrets of a Key Vault. Is it possible to retrieve Key Vault secrets in the ADF pipeline itself?

Using Azure Key Vault for Azure Data Factory

Solution
Yes, the easiest way is to use a Web activity with a REST API call to retrieve the secret from Key Vault. The documentation is a little limited and only shows how to retrieve a specific version of the secret via the secret identifier (a GUID). This is not a workable solution, for two reasons:
  1. The GUID changes each time you change the secret. In probably 99% of the cases you just want the latest version of the secret, which means you would need to change that GUID in ADF as well every time you change the secret.
  2. The GUID differs per environment of your Key Vault (dev, test, acceptance and production). This makes it hard to use this solution in a multi-environment ADF setup.
For this example we will misuse Key Vault a little bit as a configuration table and retrieve the REST API URL of a database from the Key Vault. The example assumes you already have a Key Vault filled with secrets. If you don't have that, first execute the first two steps of this post.

1) Access policies
The first step is to give ADF access to the Key Vault to read its content. You can now find ADF by its name, so you don't have to search for its managed identity GUID, but using that GUID is still possible as well.
  • Go to Access policies in the left menu of your Key Vault
  • Click on the blue + Add Access Policy link
  • Leave Configure from template empty
  • Leave Key permissions unselected (we will only use a Secret for this example)
  • Select Get for Secret permissions
  • Leave Certificate permissions unselected (we will only use a Secret for this example)
  • Click on the field of Select principal to find the name of your Azure Data Factory
  • Leave Authorized application unchanged
  • Click on Add and a new Application will appear in the list of Current Access Policies
Add Access policy

Note: for this specific example we do not need to create a Key Vault Linked Service in ADF.
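
If you prefer scripting this step over clicking through the portal, the same permission can also be granted with the Azure CLI. A minimal sketch (the vault name and the object id of the Data Factory's managed identity are placeholders you have to fill in yourself):

az keyvault set-policy --name {Name Keyvault} --object-id {objectId of the ADF managed identity} --secret-permissions get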

2) Determine URL of secret
To retrieve the secret we need the REST API URL of that secret. This URL is constructed as
https://{Name Keyvault}.vault.azure.net/secrets/{SecretName}?api-version=7.0

{Name Keyvault} : the name of the Key Vault you are using
{SecretName} : the name of the secret

In this example the secret name is "SQLServerURL" and the URL should look like this: https://{Name Keyvault}.vault.azure.net/secrets/SQLServerURL?api-version=7.0
Get the SecretName from Key Vault

3) Web activity
Next we have to add a Web activity to the ADF pipeline. Use the following settings on the Settings tab; a JSON sketch of the resulting activity follows below the list.
  • Set URL to https://{Name Keyvault}.vault.azure.net/secrets/SQLServerURL?api-version=7.0
  • Set Method to GET
  • Under Advanced select MSI
  • And set the resource to https://vault.azure.net
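
For reference, this is roughly what the activity looks like in the pipeline's JSON code view. Treat it as a sketch: the activity name is just an example and the property names follow the current ADF schema, so double-check them against your own pipeline.

{
    "name": "GetURL",
    "type": "WebActivity",
    "typeProperties": {
        "url": "https://{Name Keyvault}.vault.azure.net/secrets/SQLServerURL?api-version=7.0",
        "method": "GET",
        "authentication": {
            "type": "MSI",
            "resource": "https://vault.azure.net"
        }
    }
}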

Configuring the Web activity to retrieve the secret

4) Retrieve value
Now we want to use the secret from the Key Vault in a successive activity, in this case another Web activity to upscale a database. In the URL property of this activity we now use the output value from the previous Web activity.
@activity('GetURL').output.value
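
If the secret is only part of the final URL, you can combine it with the rest of the address in an expression. A small sketch (the trailing path is just a placeholder, not taken from the original example):

@concat(activity('GetURL').output.value, '/the/rest/of/the/url')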
Retrieve output value via expression

5) The result
To check the result of the changes we need to execute the pipeline.
Execute the pipeline to check the result

Note: if you are using this to retrieve real secrets like passwords and you don't want them to show up in the logging of Azure Data Factory, then check the Secure output property on the General tab of your activity.
Don't show secret in logging

Conclusion
In this blog post you learned how to retrieve Key Vault secrets in ADF. The trick is to retrieve them by their name instead of by their version GUID. This will always give you the latest version and allows you to use this construction in multiple environments.

Update: ADF now supports Global parameters to store parameters that can be used by all pipelines.

Monday 20 April 2020

Databases in DevOps - Publishing profile

Case
I can’t release my database project due to an error: “data loss could occur”.
error code





Updating database (Failed)
*** Could not deploy package.
Warning SQL72015: The column [dbo].[Table1].[ColumnToRemove] is being dropped, data loss could occur. 

This happens due to the fact that a column was changed/removed in my source table that is still present in the target table.


Solution
When deploying a database project, you can add a publish profile file to the deployment. In this file (it's an XML file) you can set/adjust certain settings.
First we set up Visual Studio and secondly we adjust the build pipeline.

1) Visual Studio
Here we are going to create the publish file
  • Right click on the solution and click “Publish”
  • A new window pops up; go to “Advanced”
  • Uncheck the 'Block incremental deployment if data loss might occur’ checkbox
Publish settings - general

  • Go to the second tab, check 'DROP objects in target but not in project' and click “OK”
Publish settings - drop

  • Then select “Create profile” and a new ‘***.publish.xml’ file will be added to the solution

Add publish file

  • The next step is to add the publish file to source control (“add ignored file to source control”)
Add to source control

2) DevOps 
First, we edit the build pipeline
  • Go to Azure DevOps -> Pipelines -> Pipelines
  • Then edit your pipeline (please see the blog of Joost for creating a pipeline)
  • Go to the “Copy files” task and add “**\*.publish.xml” to ‘Contents’ (a YAML equivalent is sketched below)
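
If you use a YAML build pipeline instead of the classic editor, the equivalent step could look roughly like this (the folder names are placeholders for your own setup):

- task: CopyFiles@2
  displayName: 'Copy publish profile'
  inputs:
    SourceFolder: '$(Build.SourcesDirectory)'
    Contents: '**\*.publish.xml'
    TargetFolder: '$(Build.ArtifactStagingDirectory)'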
Build pipeline








The last step is to add the publish file to the release pipeline
  • Go to pipelines -> release
  • Then edit your pipeline and add the publish file
Release pipeline








Summary
In this post you learned how to add a publish profile file. In this file you can change the publish settings. In this example we wanted to allow the deployment to continue even when data loss could occur, for example when a column has been removed; a trimmed-down example profile is shown below the list.
But it can also be used to disable the deployment of security-related objects to our database, like:
  • ExcludeUsers
  • ExcludeLogins
  • ExcludeDatabaseRoles
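
For reference, a stripped-down publish profile with the two options from this post changed could look something like this (the target database name is a placeholder; any exclude options would go into the same PropertyGroup):

<?xml version="1.0" encoding="utf-8"?>
<Project ToolsVersion="Current" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <PropertyGroup>
    <TargetDatabaseName>MyDatabase</TargetDatabaseName>
    <BlockOnPossibleDataLoss>False</BlockOnPossibleDataLoss>
    <DropObjectsNotInSource>True</DropObjectsNotInSource>
  </PropertyGroup>
</Project>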





Monday 24 April 2017

Use BIML to create SSIS packages from csv files

Case
The case is about importing flat files (CSVs) without the necessity of metadata. Because BIML always checks if the tables are accessible before creating the packages, the first step is to create the tables with BIML and the second step is creating the SSIS packages to transport the data.

1. Creating tables in the database
2. Packages to fill this database

Because of the size of the solution I’ve created two solutions, one for creating the tables and a second one for creating the SSIS packages. You can click on the link to go to the other solution (later this month I will publish the second script).
In the first solution, we create a BIML script that is going to create the tables, with all the columns defined as strings.

Solution - Creating SSIS packages
We have created the tables in the database, so now we can start creating the SSIS packages that extract the data from the CSV files and transport it into the database.
Our SSIS packages contain a table truncation and a dataflow task with a simple transport from source to destination. We also create a master package which executes the SSIS packages.
The CSV file we are going to use for this solution looks like this:

CSV file









1) Determine the columns
Normally (if we look at the example above) we can use this code to determine the columns:

<Columns>
                <Column Name="AgeId" Delimiter=","></Column>
                <Column Name="AgeFrom" Delimiter=","></Column>
                <Column Name="AgeTo" Delimiter=","></Column>
                <Column Name="AgeCategoryEmployee" Delimiter=","></Column>
                <Column Name="AgeCategoryClient" Delimiter="CRLF"></Column>
</Columns>
 
Note that the last column uses the CRLF delimiter (CR = Carriage Return and LF = Line Feed).
Now we want BIML to do this for us, so we add a loop that loops through the first row of the text file and, when it reaches the last column of the row, uses the CRLF delimiter.
The code should look like this:
<Columns>
            <#
                // read the first line of the csv to get the column names
                StreamReader myFile = new StreamReader(filePath);
                myColumns = myFile.ReadLine().Split(',');
                myFile.Close();

                // to determine the column delimiter
                int columnCount = 0;
                string columnDelimiter = ",";

                foreach (string myColumn in myColumns)
                {
                    columnCount++;
                    if (columnCount == myColumns.Length)
                    {
                        columnDelimiter = "CRLF";
                    }
                    else
                    {
                        columnDelimiter = ",";
                    }
                #>
                <Column Name="<#=myColumn#>" Delimiter="<#=columnDelimiter#>"></Column>
                <# } #>
</Columns>


2) Creating the SSIS package
The SSIS package we are going to create looks like this:
SSIS package









The complete BIML code for creating these packages is:

<Biml xmlns="http://schemas.varigence.com/biml.xsd">
    <#
    string Applicatie = "Manual";
    string Prefix = "Man";
    
    string fileName;
    string path = @"D:\Drop\Man";
    string[] myFiles = Directory.GetFiles(path, "*.csv");
    
    string[] myColumns;
    #>
    <FileFormats>
    <#
    foreach (string filePath in myFiles)
    {

    #>
    <FlatFileFormat Name="FlatFileFormat<#=Path.GetFileNameWithoutExtension(filePath)#>" RowDelimiter="CRLF" ColumnNamesInFirstDataRow="true" IsUnicode="false">
        <Columns>
            <# 
                
                StreamReader myFile = new StreamReader(filePath);
                myColumns = myFile.ReadLine().Split(',');
                 myFile.Close();
                
                // to determine the column delimiter
                int columnCount = 0;
                string columnDelimiter = ",";

                 foreach (string myColumn in myColumns) 
                {
                    columnCount++;
                    if (columnCount == myColumns.Length)
                    {
                        columnDelimiter = "CRLF";
                    }
                    else
                    {
                        columnDelimiter = ",";
                    }
                #>
                <Column Name="<#=myColumn#>" Delimiter="<#=columnDelimiter#>"></Column>
                <# } #>
            </Columns>
        </FlatFileFormat>
            <#}#>
    </FileFormats>
    <Connections>
            <#
            foreach (string filePath in myFiles)
            {
            
            #>
            <FlatFileConnection Name="FF_CSV - <#=Path.GetFileNameWithoutExtension(filePath)#>" FilePath="<#=filePath#>" FileFormat="FlatFileFormat<#=Path.GetFileNameWithoutExtension(filePath)#>">
                
            </FlatFileConnection>
            <# } #>
            <OleDbConnection 
                Name="STG_<#=Applicatie#>" 
                ConnectionString="Data Source=APPL43;Initial Catalog=dummy_STG;Provider=SQLNCLI11.1;Integrated Security=SSPI;Auto Translate=False;">
            </OleDbConnection>
    </Connections> 
        <Packages>

    <#       // Loop through the files
            int TableCount = 0;
            foreach (string filePath in myFiles)
            { 
                TableCount++;
                fileName = Path.GetFileNameWithoutExtension(filePath);
                #>
                <Package Name="stg_<#=Prefix#>_<#=TableCount.ToString()#>_<#=fileName#>" ConstraintMode="Linear" AutoCreateConfigurationsType="None" ProtectionLevel="DontSaveSensitive">
                    <Variables>
                        <Variable Name="CountStage" DataType="Int32" Namespace="User">0</Variable>
                    </Variables>               
                    <Tasks>
                        <ExecuteSQL ConnectionName="STG_<#=Applicatie#>" Name="SQL - Truncate <#=fileName#>">
                            <DirectInput>TRUNCATE TABLE dbo.<#=Prefix#>_<#=fileName#></DirectInput>
                        </ExecuteSQL>
                        
                        <Dataflow Name="DFT - Transport CSV_<#=fileName#>">
                            <Transformations>
                                <FlatFileSource Name="SRC_FF - <#=fileName#> " ConnectionName="FF_CSV - <#=Path.GetFileNameWithoutExtension(filePath)#>">
                                </FlatFileSource>
                                
                                <OleDbDestination ConnectionName="STG_<#=Applicatie#>" Name="OLE_DST - <#=fileName#>" >
                                <ExternalTableOutput Table="dbo.<#=Prefix#>_<#=fileName#>"/>
                                </OleDbDestination>
                            </Transformations>
                        </Dataflow>
                    </Tasks>
                </Package>
    <#    }    #>

                <!-- Create Master Package -->
                <Package Name="stg_<#=Prefix#>_0_Master" ConstraintMode="Parallel" AutoCreateConfigurationsType="None" ProtectionLevel="DontSaveSensitive">
                    <Tasks>
                    <#  int TableCount2 = 0;
                        foreach (string filePath in myFiles)
                        { 
                                TableCount2++;
                                fileName = Path.GetFileNameWithoutExtension(filePath); #>
                
                            <ExecutePackage Name="stg_<#=Prefix#>_<#=TableCount2.ToString()#>_<#=fileName#>">
                                <ExternalProjectPackage  Package="stg_<#=Prefix#>_<#=TableCount2.ToString()#>_<#=fileName#>.dtsx" />
                            </ExecutePackage>
                        <#
                        }
                        #>    
                    </Tasks>
                </Package>

    </Packages>
</Biml>

<!--Includes/Imports for C#-->
<#@ template language="C#" hostspecific="true"#>
<#@ import namespace="System.Data"#>
<#@ import namespace="System.IO"#>


We call this “3_Generate_Man_SSIS.biml”.
If we run this BIML script, this will be the end result:

End result

Summary
In the first blog post we explained how to create the tables; we need these tables to transport the actual data.
It’s important to first create the tables in the database and then create the SSIS packages.
Of course, this is a simple example for filling the staging area; feel free to add more components, like a row count transformation etc.

This whole solution is built with BIML Express.





Tuesday 21 February 2017

Use BIML and csv files to create tables

Case
The case is about importing flat files (CSV’s) without the necessity of metadata. Because BIML always checks if the tables are accessible before creating the packages, the first step is to create the tables with BIML and the second step is to create the SSIS packages for transporting the data.

  1. Creating tables in the database 
  2. Packages to fill this database

Because of the size of the solution I’ve created two separate solutions, one for creating the tables and a second for creating the SSIS packages. You can click on the link to go to the other solution (which I will publish later this month).


Solution - Creating tables with BIML
In this solution, we create a BIML script that is going to create the tables, with all the columns defined as strings.
We have to create two BIML scripts: the first script defines the table definitions and the second creates the actual package.

1) CSV files
For this example we are using two CSV files (age and sickleave) which are comma separated and have column names on the first row. These column names will be used in the create table statement.

The drop folder








content csv file

2) Table definitions
The first BIML script is called “1_TableDefinitions.biml”.
In this BIML script we define the path where the CSV files are located, an array with the names of the CSV files and also some strings which we are going to use further in the code.
We use two foreach loops: the first one loops through the array with files and the second one loops through the actual file (to extract the column names).

Normally (without the loop) the code would look like this:

<Tables>
  <Table Name="Man_Age" SchemaName="dummy_STG.dbo">
    <Columns>
      <Column Name="AgeID" DataType="Int32" IdentityIncrement="1" IdentitySeed="1"></Column>
      <Column Name="AgeFrom" DataType="Int32"></Column>
      <Column Name="AgeTo"></Column>
      <Column Name="AgeCategoryEmployee" DataType="String" Length="255"></Column>
    </Columns>
  </Table>
</Tables>

By default BIML uses Int32 as the datatype; in this case we use a string.
Now we add the loops and the complete code looks like this:
<Biml xmlns="http://schemas.varigence.com/biml.xsd">

<#  
    string Prefix="Man";
    
    // the location of the csv files
    string path = @"D:\Drop\Man";
    // Put all the filenames with the extension csv in a string array
    string[] myFiles = Directory.GetFiles(path, "*.csv");
    // string that will be filled with the filename
    string fileName;
    // string array for columnnames extracted from CSV
    string[] myColumns;
#>
        <Connections>
            <OleDbConnection 
            Name="OLEDB_STG_<#=Prefix#>" 
            ConnectionString="Data Source=APPL43;Initial Catalog=dummy_STG;Provider=SQLNCLI11.1;Integrated Security=SSPI;Auto Translate=False;">
            </OleDbConnection>
        </Connections>
        <Databases>
             <Database ConnectionName="OLEDB_STG_<#=Prefix#>" Name="dummy_STG"/>
        </Databases>
       <Schemas>
              <Schema Name="dbo" DatabaseName="dummy_STG" Owner="dbo"/>
       </Schemas>
       <Tables>
            <!-- loop through the array of files -->
            <#  foreach(string filePath in myFiles) 
                {
               // extract the filename from the path to use as tablename     
               fileName = Path.GetFileNameWithoutExtension(filePath);
            
            #>
            <Table Name="<#=Prefix#>_<#=fileName#>" SchemaName="dummy_STG.dbo">
                <Columns>
                   <!-- loop through the file looking for the column names -->
                    <#
                    // read first row of csv to extract the column names
                    // and split on comma to create an array
                    StreamReader myFile = new StreamReader(filePath);

                    myColumns = myFile.ReadLine().Split(',');
                    // close file after reading first line
                    myFile.Close();

                    // Loop through column array
                    foreach(string myColumn in myColumns) 
                    {  
                    #>
                      <Column Name="<#=myColumn#>" DataType="String" Length="255"></Column>
                 <#  }   #>
                </Columns>    
            </Table>
            <# }#>
       </Tables>
</Biml>
 
<#@ template language="C#" hostspecific="true"#>
<#@ import namespace="System.IO"#>


3) CreateTables
Secondly we are going to create the BIML script, called 2_CreateTables.biml, which creates the actual package that contains the create statements to generate the tables.
BIML has a method to generate the SQL for a table: “RootNode.Tables.First().GetTableSql();”
We use this method to create the ‘CREATE TABLE’ statement of each table.

The code looks like this

<Biml xmlns="http://schemas.varigence.com/biml.xsd">
    <Packages>
        <Package Name="CreateTables" AutoCreateConfigurationsType="None" ConstraintMode="Linear">
            <Tasks>
                <# 
                // Loop through the table definitions of the first biml
                foreach(var table in RootNode.Tables) {#>
                <ExecuteSQL Name="SQL - Drop_Create <#=table.Name#>" ConnectionName="<#=table.Connection.Name#>">
                    <DirectInput>
                        <#=table.GetTableSql()#>    
                    </DirectInput>
                </ExecuteSQL>
                <# } #>
            </Tasks>
        </Package>
    </Packages>
</Biml>

<!--Includes/Imports for C#-->
<#@ template language="C#" hostspecific="true"#>
<#@ import namespace="System.Data"#>
<#@ import namespace="System.Data.SqlClient"#>


We’ve created two BIML scripts, 1_TableDefinitions.biml and 2_CreateTables.biml. Now comes the important part (I’m using Biml Express) for generating the package. First click on 1_TableDefinitions.biml and then on 2_CreateTables.biml; with the two BIML scripts selected, right-click on 1_TableDefinitions.biml and choose “Generate SSIS Packages”. If you do this differently, you will get an empty SSIS package.

Generate SSIS package







Below you can see the result of your BIML scripts: a package with an execute SQL Task for each table you need to create.
Visual studio

The actual create statement looks like this

SET ANSI_NULLS ON
SET QUOTED_IDENTIFIER ON
GO
-------------------------------------------------------------------
IF EXISTS (SELECT * from sys.objects WHERE object_id = OBJECT_ID(N'[dbo].[Man_Age]') AND type IN (N'U'))
DROP TABLE [dbo].[Man_Age]
GO

CREATE TABLE [dbo].[Man_Age]
(
-- Columns Definition
 [AgeID] nvarchar(255) NOT NULL 
, [AgeFrom] nvarchar(255) NOT NULL 
, [AgeTo] nvarchar(255) NOT NULL 
, [AgeCategoryEmployee] nvarchar(255) NOT NULL 
, [AgeCategoryClient] nvarchar(255) NOT NULL 

-- Constraints

)
ON "default"
WITH (DATA_COMPRESSION = NONE)
GO
-------------------------------------------------------------------


Summary
We created two BIML scripts: one for creating the table definitions and one which creates the actual packages. The result in Management Studio looks like this:

Management studio result

In the next blog I’m going to explain how to create the SSIS packages that transport the data from the CSV files.







Thursday 15 December 2016

Azure Event Hub vs IoT Hub

Case
During our journey we noticed that there is some confusion in our team about the differences between an Event hub and an IoT hub. After some research we found out that there are a lot of similarities, but also differences. In this blog I will explain the concept of an Event/IoT hub and a best practice for when to use an Event hub and when to use an IoT hub.
The goal of this article is to give a global picture of the Event hub and the IoT hub. Please follow the links for more in-depth information.

Solution
Before we can find out what the differences and similarities are, the first question is: “what is an Event hub, and how do we use it?”

1) Event hub
An Event hub is a gateway to the Azure cloud. Its main purpose is to collect the incoming data and pass it on to the Azure cloud, as seen in figure 1. An Event hub processes the incoming data, but only on a basic level. It doesn’t have advanced sequencing or delivery guarantees. Event hubs are therefore a high-scale messaging service with low latency and high reliability. In our case we use an Event hub to collect the data from the Raspberry Pi, but it can also be used in other cases, like collecting data from console games or other telemetry.
Figure 1: Event Hub

Protocol
The connected devices/entities are called Event publishers.
Connecting an Event publisher to the Azure Event hub is easy, because it supports the HTTP and AMQP protocols. The most used protocol is AMQP. See here for more information about this subject.

Partition
The Event hub uses partitions. Partitions are ordered sequences of events that are kept in the Event hub. This sequence is based on the ‘first in, first out’ principle. The number of partitions that can be used at the same time is between 2 and 32. Please note that when you create an Event hub, you have to set the number of partitions; strangely, it cannot be changed afterwards. Mostly the number of partitions is based on the number of readers you are going to use (meaning the use of partitions further in the process). The default number of partitions is 4.
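
Because the partition count is fixed at creation time, it is set when the hub is created. A hedged Azure CLI sketch (resource and namespace names are placeholders):

az eventhubs eventhub create --resource-group {resource group} --namespace-name {namespace} --name {event hub name} --partition-count 4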

In short, an Event hub is a high-scale, one-way telemetry service, using the HTTP/AMQP protocols, and is generally available worldwide.

For more information on ‘how to develop with an Event hub’ see the programming guide.
Setting up an Event hub follows in another blog (coming soon).
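
To make the Event publisher role a bit more concrete, below is a minimal C# sketch of a device sending one telemetry message. It uses the Microsoft.Azure.EventHubs NuGet package (a newer SDK than the one available when this post was written) and a placeholder connection string, so treat it as an illustration rather than the exact code we used.

using System.Text;
using System.Threading.Tasks;
using Microsoft.Azure.EventHubs;   // assumed NuGet package

class TelemetrySender
{
    // Placeholder connection string; EntityPath is the name of the Event hub
    private const string ConnectionString =
        "Endpoint=sb://{namespace}.servicebus.windows.net/;SharedAccessKeyName={keyname};SharedAccessKey={key};EntityPath={eventhub}";

    public static async Task Main()
    {
        var client = EventHubClient.CreateFromConnectionString(ConnectionString);

        // Send one JSON message; without a partition key the service distributes events itself
        var body = Encoding.UTF8.GetBytes("{ \"sensor\": \"temperature\", \"value\": 21.5 }");
        await client.SendAsync(new EventData(body));

        await client.CloseAsync();
    }
}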

2) IoT Hub
But with the growth of IoT there came additional needs: control, device authentication and authorization, protocol translation, etc.
Since an Event hub is a one-way point of entry, it is limited in meeting these additional needs.
Figure2: IoT hub

And this is where the IoT hub kicks in. The IoT hub can do the same things as an Event hub, but it’s capable of much more. Most importantly, it can handle bi-directional traffic, meaning that an IoT hub is capable of sending data back to the connected devices.
Now it’s possible to command and control the devices, e.g. you can send a disconnect event to the device or a threshold event, e.g. when a machine reaches a certain temperature you can shut down the machine.
Devices can be registered, so you can identify devices and check whether they are allowed to connect. It’s possible to connect more than 10 million devices (where the Event hub can handle up to 1 million devices), and it is also easy to import bulk device identities (which is convenient when you are using 10 million devices).
The IoT hub can handle device error reporting, e.g. you can check the failed connection attempts per device. This can result in disconnecting/disabling the device/sensor in the IoT hub (so the sensor isn’t allowed to connect to the hub anymore).
It also supports AMQP over WebSockets and the MQTT protocol; for the latter no protocol gateway is needed (when using the Azure IoT SDKs).

For more in-depth information about the IoT hub, please see also the reference architecture 
For setting up the IoT hub see our earlier blogspot: Setting up IoT hub

Summary
The IoT hub can do the same as an Event hub, but much more, mostly because of the bi-directional communication possibility; ergo, an IoT hub is an 'Event hub plus'.


So why not always use the IoT hub instead of the Event hub? Well, one thing we didn’t mention was the pricing. With all the extra capabilities of the IoT hub the pricing is also a lot higher, sometimes up to 40 times higher. So for simple events, like reading data from a weather station or counting how many times a door is opened, an IoT hub is not necessary.

Friday 8 July 2016

IoT Adventure: 1 - Setting up Raspberry Pi with sensors

Case

For our Internet of Things experience we use the Raspberry Pi 2. With the Raspberry Pi it’s easy to connect different sensors and program them to work. There are many connections on the Raspberry Pi, from HDMI to USB and GPIO.
To connect the sensors to the Raspberry Pi you can use the GPIO (General Purpose Input/Output) pins on the Raspberry Pi board. In figure 1 you will see a picture of the Raspberry Pi with the GPIO headers marked.


Figure 1 : Raspberry Pi

You can find more information about the GPIO on the following wiki page : GPIO header.

Solution
Here you will find a brief description of the layout of the GPIO header, including some explanation of the function of the different pins, and a simple example of how to connect the pins with a breadboard.
Once you understand the basics behind the GPIO you will notice that it is pretty easy to work with.
You can use every GPIO pin on the board as an input or output connector. For example you can use one of the pins to receive temperature information from a sensor as input to the Raspberry Pi, or you can use a pin as output to turn a LED on or off.

In order for you to make use of these pins as input or output channels you have to write code in a programming language such as Python, C# or VB and deploy it to the Raspberry Pi. You will find more about the programming in other blog posts; a small C# sketch follows below.
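
As a first taste of what that code can look like, here is a minimal C# sketch using the Windows 10 IoT Core GPIO API. The pin number is just an example and is not tied to the wiring described further down.

using Windows.Devices.Gpio;   // Windows 10 IoT Core UWP API

public sealed class LedSwitch
{
    public void TurnOnLed()
    {
        // Open a GPIO pin (pin 5 is only an example) and configure it as output
        GpioController controller = GpioController.GetDefault();
        GpioPin ledPin = controller.OpenPin(5);
        ledPin.SetDriveMode(GpioPinDriveMode.Output);

        // Drive the pin high so current flows and the LED lights up
        ledPin.Write(GpioPinValue.High);
    }
}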


1) Layout 
There are different Raspberry Pi models; we are using the Raspberry Pi B/B+ model. For more information about the different models see: Raspberry models
In figure 2 you will find the pin connections (GPIO header) of the Raspberry Pi 2 B/B+.
Tip: print this layout on paper and keep it near your Pi.

Figure 2: Pin layout

There are 17 GPIO pins available; all the green pins in figure 2 are standard GPIO pins. But there are more pins.

Below is a brief summary of the remaining pins:
  • PIN 2 and 4 have an output of 5 V
  • PIN 1 and 17 have an output of 3,3V
  • PIN 3 and 5 are used for the I2C protocol (see wiki: I2C Protocol ) (SDA stands for Serial Data and SCL stands for Serial Clock).
  • PIN 19, 21, 23, 24, 26 are used for the Serial Peripheral Interface (SPI) BUS (see wiki: SPI Bus)
  • PIN 8 and 10 are used for Universal Asynchronous Receiver/Transmitter (UART) (see wiki: UART). It is possible to send data from multiple sensors using 1 wire (serial) instead of using 1 wire per sensor.
In the beginning all the different functions of the pins can be a bit daunting, but in our experience it suffices to use the common GPIO pins for most sensors.


2) Wire Colours
In our posts we will be using the following wire colour standards:
- 3,3V pins: orange wire
- 5V pins: red wire
- GND pins: black wire
- Common GPIO pins: green wire
- I2C pins: blue wire
- UART pins: white wire
- Overall pins: yellow wire
- between the different components: Grey

It’s not necessary for you to use all the different colors I just described, you can use any color (especially if you don’t have all the colors).

3) Breadboard?
When you buy a Raspberry Pi starter kit, most likely a breadboard is included. A breadboard is an easy way to plug and unplug LEDs, resistors, sensors and more without soldering. The layout of a breadboard is pretty straightforward (if you know what you are doing).
 
Figure 3 shows the breadboard as it is included in the Starters kit.
Figure 3: Breadboard









In the picture the horizontal line (row) is marked as number 1. The horizontal line is used to connect the 5 or 3,3 Volt pins to the + row and the GND (Ground) pin to the - row. Connecting the pins to these horizontal lines enables you to power the entire board with 5 or 3,3 Volt.


The vertical lines (depicted as the number 2 marker in Figure 3) are to be used to connect the sensors either to the horizontal lines to power the sensor or directly to the GPIO pins on the Raspberry Pi. With the vertical lines it is useful to know that it doesn't matter how high or low you connect the wires: the entire vertical line is used to connect to the horizontal line or to the GPIO pins on the Raspberry Pi.

The breadboard is split in two, so you can use e.g. one side for the 3,3V and the other side for the 5V.


4) Connecting a sensor
Now that I have described the GPIO header layout and the breadboard, we can try something simple like connecting a LED to our Raspberry Pi. In this post we will only explain how to properly connect the sensor and not how to write the code to activate/deactivate the sensor. The code will be explained in other posts per sensor, using C#, which is our language of choice.

Figure 4 depicts how to use a 3,3V GPIO pin and a GND pin and connect them to the horizontal lines.
- GPIO pin 1 (3,3V) is connected to the breadboard using an orange wire
- GPIO pin 6 (GND) is connected to the breadboard using a black wire
- A 560 Ohm resistor is connected via pins 10C and 14C to limit the current through the LED
- A red LED is connected via pins 14E and 15E

When you turn on the Raspberry Pi the LED will light up.

Figure 4: Breadboard


Figure 5 is a power schema of how you can connect the components to the Raspberry Pi. In my opinion the best thing to do is to first draw a power schema and then connect the components to the breadboard. By drawing a power schema you can see where you have to put the components and calculate e.g. the resistor values.
Figure 5: Power schema



Conclusion
In this case I described the GPIO header layout and how to connect a simple component like a LED.
Figure 6 is the end result of this case. This is just the basics; from here you can build your empire of sensors ;)
In other blog posts we will describe how to program the sensors.


Figure 6: the end

Note
For the schematics we are using Fritzing.