
Category Archives: Azure

How to copy data from a REST API to Azure Data Lake via Azure Data Factory

Posted On August 11, 2020 by Sandip Patel

In this blog I am going to explain how to copy data from a REST API to Azure Data Lake using Azure Data Factory.
Step 1: Create an Azure storage account via the Azure portal using your credentials.
Step 2: Once the storage account is created, create a container called “restapidata” that will be used to store the REST API data.
Step 3: If you already have a REST API endpoint, you can use it; otherwise search for a dummy REST API example and copy one of the URLs. Here I have taken the employee endpoint (http://dummy.restapiexample.com/api/v1/employees).
Step 4: Create an Azure Data Factory service in the Azure portal and create a pipeline.
Step 5: Create a linked service for the REST API.
Step 6: Create a linked service for Azure Data Lake Storage.
Step 7: Create a dataset for the REST API.
Step 8: Create a dataset for Azure Data Lake Storage.
Step 9: Drag a Copy activity into the pipeline and set its general properties.
Step 10: Set the source properties of the Copy activity. You can also click on Preview data to check whether data is populated.
Step 11: Set the sink properties of the Copy activity.
Step 12: Set the mapping properties of the Copy activity.
Step 13: Debug the pipeline and check the output. Also click on Details to check the records processed.
I hope this will help you.
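For reference, here is a minimal Python sketch (not part of the original walkthrough) of what the Copy activity does in this pipeline: it pulls the employee JSON from the dummy endpoint and writes it into the “restapidata” container in ADLS Gen2. The connection string is a placeholder, and the requests and azure-storage-file-datalake packages are assumed to be installed.

import requests
from azure.storage.filedatalake import DataLakeServiceClient

# Placeholder: the ADLS Gen2 account connection string (from the portal's Access keys blade).
CONNECTION_STRING = "<your-storage-account-connection-string>"

# Pull the sample employee data from the dummy REST endpoint used in the blog.
response = requests.get("http://dummy.restapiexample.com/api/v1/employees")
response.raise_for_status()

# Write the raw JSON into the "restapidata" container (file system) as a single file.
service = DataLakeServiceClient.from_connection_string(CONNECTION_STRING)
file_system = service.get_file_system_client("restapidata")
file_client = file_system.get_file_client("employees.json")
file_client.upload_data(response.content, overwrite=True)

print("Uploaded", len(response.content), "bytes to restapidata/employees.json")

In the ADF pipeline the same work is done declaratively by the REST linked service/dataset (source) and the ADLS Gen2 linked service/dataset (sink) of the Copy activity.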


How to capture detail-level logs when an Azure Data Factory pipeline fails or succeeds

Posted On July 15, 2020 by Sandip Patel

In this blog I am going to explain how to store detail-level logs when an Azure Data Factory pipeline fails or succeeds. First you need to create the ADF_LogsRecording table using the following SQL script.

CREATE TABLE [dbo].[ADF_LogsRecording](
    [LogsRecordingID] [int] IDENTITY(1,1) NOT NULL,
    [DataFactoryName] [nvarchar](200) NULL,
    [PipelineName] [nvarchar](200) NULL,
    [PipelineId] [nvarchar](200) NULL,
    [PipelineStartTime] [datetime] NULL,
    [ErrorCode] [nvarchar](1000) NULL,
    [ErrorDescription] [nvarchar](max) NULL,
    [ErrorLogTime] [datetime] NULL,
    [ActivityID] [nvarchar](100) NULL,
    [ActivityName] [nvarchar](200) NULL,
    [ActivityStartTime] [datetime] NULL,
    [ActivityEndTime] [datetime] NULL,
    [ActivityDuration] [time](7) NULL,
    [ActivityStatus] [nvarchar](100) NULL,
    [Itemcode] [nvarchar](50) NULL,
    [FrgnName] [nvarchar](100) NULL,
    [U_COR_BU_TXTS] [nvarchar](max) NULL,
    [U_COR_BU_TXTQ] [nvarchar](max) NULL,
    [CreatedOn] [datetime] NULL,
    CONSTRAINT [PK_ADF_LogsRecording] PRIMARY KEY CLUSTERED
    (
        [LogsRecordingID] ASC
    ) WITH (STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF) ON [PRIMARY]
) ON [PRIMARY] TEXTIMAGE_ON [PRIMARY]
GO

ALTER TABLE [dbo].[ADF_LogsRecording] ADD CONSTRAINT [DF_CreatedOn] DEFAULT (getdate()) FOR [CreatedOn]
GO

Also create the stored procedure that you are going to use in the ADF pipeline.

CREATE PROCEDURE [dbo].[sp_ADF_LogsRecording_Insert]
(
    @DataFactoryName nvarchar(200),
    @PipelineName nvarchar(200),
    @PipelineId nvarchar(200),
    @PipelineStartTime datetime,
    @ErrorCode nvarchar(1000),
    @ErrorDescription nvarchar(max),
    @ErrorLogTime datetime,
    @ActivityID nvarchar(100),
    @ActivityName nvarchar(200),
    @ActivityStartTime datetime,
    @ActivityEndTime datetime,
    @ActivityDuration time,
    @ActivityStatus nvarchar(200),
    @Itemcode nvarchar(100),
    @FrgnName nvarchar(200),
    @U_COR_BU_TXTS nvarchar(max),
    @U_COR_BU_TXTQ nvarchar(max)
)
AS
BEGIN
    INSERT INTO ADF_LogsRecording
    (
        DataFactoryName, PipelineName, PipelineId, PipelineStartTime, ErrorCode, ErrorDescription, ErrorLogTime,
        ActivityID, ActivityName, ActivityStartTime, ActivityEndTime, ActivityDuration, ActivityStatus,
        Itemcode, FrgnName, U_COR_BU_TXTS, U_COR_BU_TXTQ
    )
    VALUES
    (
        @DataFactoryName, @PipelineName, @PipelineId, @PipelineStartTime, @ErrorCode, @ErrorDescription, @ErrorLogTime,
        @ActivityID, @ActivityName, @ActivityStartTime, @ActivityEndTime, @ActivityDuration, @ActivityStatus,
        @Itemcode, @FrgnName, @U_COR_BU_TXTS, @U_COR_BU_TXTQ
    )
END

We also create SQL stored procedures to fetch all the records that you want to insert or update in the destination.

CREATE PROCEDURE [dbo].[SP_GETItem]
AS
BEGIN
    SELECT itemcode, FrgnName, U_COR_BU_TXTS, U_COR_BU_TXTQ FROM OITM WITH (NOLOCK)
END
GO

CREATE PROCEDURE [dbo].[sp_GetItemByItemCode]
(
    @ItemCode nvarchar(200)
)
AS
BEGIN
    SELECT ItemCode, FrgnName, U_COR_BU_TXTS, U_COR_BU_TXTQ FROM OITM WHERE ItemCode = @ItemCode
END
GO

Now create a pipeline.
Step 1: Use a Lookup activity to get all the items.
Step 2: Use a ForEach activity to loop over each item code, and configure the settings of the ForEach activity.
Step 3: Add a Copy activity inside the ForEach activity and set its source properties. Set the sink properties as per the screenshot below.
Step 4: Add a Stored Procedure activity for both success and failure of the Copy activity.
Step 5: Add the stored procedure activity for success and configure its properties as per the screenshot below.
Step 6: Add the stored procedure activity for failure and configure its properties as per the screenshot below.
Now when we run the pipeline, record-level logs are stored in the table based on the success or failure of the pipeline.
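Before wiring the stored procedure into the pipeline's success and failure Stored Procedure activities, it can help to confirm the table and procedure work on their own. Below is a minimal Python sketch (not from the original post) that inserts one sample "success" row via pyodbc; the connection string, database name and sample values are placeholders, and in ADF the equivalent values would come from pipeline system variables and the Copy activity output rather than literals.

from datetime import datetime, time
import pyodbc

# Placeholder connection string for the database that holds ADF_LogsRecording.
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=localhost;DATABASE=LoggingDb;Trusted_Connection=yes;"
)
cursor = conn.cursor()

now = datetime.utcnow()
# Insert one sample log row through the stored procedure created above.
cursor.execute(
    "EXEC dbo.sp_ADF_LogsRecording_Insert "
    "?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?",
    "MyDataFactory",       # @DataFactoryName
    "CopyItemsPipeline",   # @PipelineName
    "run-0001",            # @PipelineId
    now,                   # @PipelineStartTime
    None,                  # @ErrorCode (NULL on success)
    None,                  # @ErrorDescription
    None,                  # @ErrorLogTime
    "activity-0001",       # @ActivityID
    "Copy data",           # @ActivityName
    now,                   # @ActivityStartTime
    now,                   # @ActivityEndTime
    time(0, 0, 5),         # @ActivityDuration
    "Succeeded",           # @ActivityStatus
    "ITEM001",             # @Itemcode
    "Sample item",         # @FrgnName
    None,                  # @U_COR_BU_TXTS
    None,                  # @U_COR_BU_TXTQ
)
conn.commit()
print("Inserted one sample row into ADF_LogsRecording")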
I hope this will help you.


Integration of CSV file from ADLS to Azure SQL DB using ADF

Go through this blog to understand how to integrate a CSV file stored in your ADLS into Azure SQL DB.


Azure Logic App for creating an API for SQL DB.

Read this blog to understand how to create a Logic App using the Azure portal to build an API for SQL DB.


Triggering A Logic App

Read this blog to understand triggers in Logic Apps. Learn how to work with a simple timer invocation and how to trigger a Logic App whose start block is ‘When an HTTP request is received’.


Develop D365 Finance, SCM and Retail Reports using Azure Data Lake Gen2

The BYOD feature for Dynamics 365 for Finance and Operations was released in 2016. It lets customers bring their own Azure SQL Database, but the drawback was that the Entity Store was accessible only to the built-in data warehouse, which means it was not accessible outside D365. The new Data Lake Gen2 feature keeps the Entity Store in sync with the Data Lake. Following are the steps to set up the environment.

Let's create the storage account for Data Lake Gen2. Click on Create a resource, search for Storage accounts, go to Storage accounts and click on Add. Choose the subscription and resource group; the resource group is the container that holds the resource, and here we are creating a new one for our Data Lake. Make sure you select the same location as your Power BI dataset environment and set the other options as shown below. Once the storage account is created, we can access it from the resource group; open the storage account. Now let's take a copy of the connection string since it will be required later; the connection string can be accessed from Settings under the storage account.

Let's create the Key Vault resource to store the secret, and create the secret for the connection string. The secret can be created by going to the Key Vault, opening Secrets under the Settings tab, and clicking on Generate. We are creating the secret for the connection string that we copied earlier; set its value to the connection string and click on Create. Once we have created the secret for the connection, the next step is to authorize the user and resource to make the request.

Here we are going to register an app for the authorization of the D365FO environment. To register the app, go to Azure Active Directory, select App registrations and click on New registration. Fill in an application name of your choice, set the redirect URI type to Web and select the D365 environment URI as shown below. Once we register the app, the next step is to grant the API permissions as part of the consent process. Grant all the permissions that the application requires: from Azure Key Vault, select the user_impersonation permission, which provides full access to the Azure Key Vault service. The next step is to create a client secret; make sure to note down the generated value since we are going to use it in the D365FO environment. Next, add D365FO to the Key Vault access policy list and set the Key and Secret permissions to Get and List from the dropdown.

Once we have added D365FO to the access policy list, the next step is to add the Application ID and Application Secret (of the app registration used for Azure Key Vault) in Data connections in the D365FO environment, which can be accessed by going to Modules > System administration > Setup > System parameters > Data connections tab.

Note: If the Data connections tab is not visible to you, it means you are missing some configuration from the environment setup. To make the tab available, check the value of CDSAMaster in SQL Server Management Studio: open SSMS, go to the AxDW database and check the SYSFLIGHTING table for the CDSAMaster value; if it is not present, insert it into the table.
SELECT * FROM SYSFLIGHTING  /* To check the FlightName values */

INSERT INTO SYSFLIGHTING VALUES ('CDSAMaster', 1, 12719367, 5637144576, 5637144589, 1)
/*
    CDSAMaster  : FlightName
    1           : Enabled
    12719367    : FlightServiceID
    5637144576  : Partition
    5637144589  : RecID
    1           : RecVersion
*/

Before Update / After Updating (screenshots).

Once the data connection is added, fill in the Application ID, Application Secret, DNS name and connection secret name as shown below, and make sure to enable the Data Lake Integration. Test the Azure Key Vault and Azure Storage.

Note: If you are getting error 401, it means the Azure user under which the data lake is hosted has no access to the D365FO environment; in that case you need to import the user into the environment and assign it the Administrator role.

Create Reports using Azure Data Lake Gen2

Once done with the above steps, the next step is to configure the storage account. Following are the requirements for Power BI:
- The storage account must be created in the same AAD tenant as your Power BI tenant.
- The storage account must be created in the same region as your Power BI tenant.
- The storage account must have the Hierarchical Namespace feature enabled (make sure to enable this at the time of storage account creation).
- The Power BI service must be granted a Reader role on the storage account.
- You must have a Global Administrator account; this account is required to connect and configure Power BI to store the dataflow definition and data in your Azure Data Lake Storage Gen2 account.

As we have created the storage account previously, let's grant the Reader role on it: in the Azure portal, go to the Storage account > Access control > Role assignments, then click on Add role assignment. Once we assign the role on the storage account, the next step is to assign directory-level permissions on the storage that we have created; here we are granting Power BI permission to the file system. We need to get the collection of object IDs of the Power BI applications, which can be obtained by navigating to AAD > Enterprise applications > All applications; copy the object IDs of Power BI Premium, Power BI Service and Power Query Online. For each Power BI object collected in the previous step, grant the access shown below.

Once we have granted access to the storage, the next step is to connect your Azure Data Lake Storage Gen2 to Power BI. Go to your Power BI service, click on Admin portal, navigate to Dataflow settings, then select the Connect your Azure Data Lake Storage Gen2 button. The following …
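As an aside, you can verify that the connection-string secret stored in Key Vault earlier is retrievable by an authorized identity before configuring D365FO. Below is a minimal Python sketch (not from the original post) using the azure-identity and azure-keyvault-secrets packages; the vault URL and secret name are placeholders for whatever you created above.

from azure.identity import DefaultAzureCredential
from azure.keyvault.secrets import SecretClient

# Placeholder vault URL and secret name; replace with the Key Vault and
# secret created for the storage-account connection string above.
vault_url = "https://<your-key-vault-name>.vault.azure.net/"
secret_name = "storage-connection-string"

# DefaultAzureCredential picks up an Azure CLI login, managed identity,
# or service-principal environment variables, whichever is available.
credential = DefaultAzureCredential()
client = SecretClient(vault_url=vault_url, credential=credential)

secret = client.get_secret(secret_name)
print("Connection string length:", len(secret.value))  # avoid printing the secret itself

If this fails with an authorization error, re-check the Key Vault access policy entries described above.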


How to Connect your Azure Data Factory with GitHub

Introduction: In order to move your pipeline from one environment to another, you need your pipeline to be saved on GitHub. This blog will guide you through connecting your Azure Data Factory pipeline with your GitHub account.
Step 1: Create a new Azure Data Factory and tick the Enable GIT checkbox.
Step 2: Create a new repository in your GitHub account.
Step 3: Copy the URL of the newly created repository from the address bar.
Step 4: Enter the Git URL, the repository name you recently created, the branch name (usually master) and your root folder (usually the same as your repository name), and click on Create.
Step 5: Now you must authorize your Git account on ADF. You will be prompted to log in with your GitHub account. Do the necessary and click on Authorize Azure Data Factory.
Step 6: Now you must select the working branch. We are using the default master branch. Click Save and then you can proceed to create your pipeline.
Conclusion: Each time you publish the changes in ADF, they are simultaneously stored in the GIT repository's branch.


Prerequisites To Copy Data From Dynamics AX to On-Premises SQL Server Using Azure Data Factory

Posted On March 5, 2020 by Shrusti Talati

In order to copy data from Dynamics AX to an on-premises SQL Server, we need certain prerequisites. Log in to https://portal.azure.com/#home, create a Data Factory and click on Author and Monitor. Go to Author and create the following: Connections (Linked Services), Datasets and Pipelines.

CONNECTIONS
Source: Click on New linked service and create a source connection. Give it a name and description (optional) and select AutoResolveIntegrationRuntime. We select AutoResolveIntegrationRuntime because our source (Dynamics AX) is cloud-based; only if it were on-premises would we need to install and select a self-hosted integration runtime.
URL: OData endpoint
AAD resource: same as the URL without /data
Service principal ID: the Application ID from App registrations
Service principal key: the value from Certificates and secrets in App registrations

Sink: In our case the sink is SQL Server, so we enter the SSMS credentials to connect to our on-premises SQL Server. Install the integration runtime to establish the connection between the on-premises SQL Server and the cloud. Test the connection.

INTEGRATION RUNTIME
We can choose between Express setup and Manual setup to install the integration runtime. Then go to Dynamics AX and grant this service principal the proper permission to access your Dynamics AX. To grant permission, go to Dynamics AX -> System administration and add your Client ID there.

DATASETS
Create a dataset for the source and for the sink. For the source dataset, select the source connection from the dropdown in the Linked service field and, in the Path field, select the desired entity. For the sink dataset, in the Table field you can either select a specific table to copy your data into or check the auto-create option to create a new table in the sink. Check the Edit option in the Table field and give the desired table name to auto-create it.
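As a quick way to verify the service principal ID, key and AAD resource values before configuring the linked service, here is a minimal Python sketch (not part of the original post) that requests a token with the client-credentials flow and reads a few records from the OData endpoint. The tenant ID, client ID, secret, environment URL and entity name (CustomersV3 is just an example) are placeholders.

import requests

# Placeholders: replace with your own tenant, app registration and environment values.
tenant_id = "<your-aad-tenant-id>"
client_id = "<application-id-from-app-registrations>"
client_secret = "<value-from-certificates-and-secrets>"
resource = "https://<your-ax-environment>.cloudax.dynamics.com"  # AAD resource: the URL without /data

# Client-credentials flow against the AAD token endpoint, using the same
# 'resource' value entered in the linked service.
token_response = requests.post(
    f"https://login.microsoftonline.com/{tenant_id}/oauth2/token",
    data={
        "grant_type": "client_credentials",
        "client_id": client_id,
        "client_secret": client_secret,
        "resource": resource,
    },
)
token_response.raise_for_status()
access_token = token_response.json()["access_token"]

# Read a few records from an example data entity to confirm access.
odata_url = f"{resource}/data/CustomersV3?$top=5"
response = requests.get(odata_url, headers={"Authorization": f"Bearer {access_token}"})
response.raise_for_status()
print(response.json())

A 401/403 response here usually means the Client ID has not yet been added under Dynamics AX -> System administration as described above.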


How to Upsert Records in SQL (Sink) through ADF?

Introduction: We are performing an integration of Accounts from CRM to SQL using an ADF Copy activity pipeline. We want to upsert the accounts instead of inserting duplicate records again.

Step 1: Auto-create the table named “accounts” in SQL Server during the first integration run by selecting the Auto create table option.

Step 2: Create a custom table type named “AccountType” using the following query.

CREATE TYPE AccountType AS TABLE(
    accountid uniqueidentifier,
    transactioncurrencyid uniqueidentifier,
    address1_city nvarchar(MAX),
    createdon datetime2(7),
    accountnumber nvarchar(MAX),
    name nvarchar(MAX),
    address1_country nvarchar(MAX),
    address1_composite nvarchar(MAX),
    telephone1 nvarchar(MAX),
    emailaddress1 nvarchar(MAX),
    websiteurl nvarchar(MAX),
    primarycontactid uniqueidentifier
)

Step 3: Create a stored procedure named “spUpsertAccounts”.

CREATE PROCEDURE spUpsertAccounts
    @account AccountType READONLY
AS
BEGIN
    MERGE dbo.accounts AS target_sqldb
    USING @account AS source_tblstg
    ON (target_sqldb.accountid = source_tblstg.accountid)
    WHEN MATCHED THEN
        UPDATE SET
            accountid = source_tblstg.accountid,
            transactioncurrencyid = source_tblstg.transactioncurrencyid,
            address1_city = source_tblstg.address1_city,
            createdon = source_tblstg.createdon,
            accountnumber = source_tblstg.accountnumber,
            name = source_tblstg.name,
            address1_country = source_tblstg.address1_country,
            address1_composite = source_tblstg.address1_composite,
            telephone1 = source_tblstg.telephone1,
            emailaddress1 = source_tblstg.emailaddress1,
            websiteurl = source_tblstg.websiteurl,
            primarycontactid = source_tblstg.primarycontactid
    WHEN NOT MATCHED THEN
        INSERT (
            accountid, transactioncurrencyid, address1_city, createdon, accountnumber, name,
            address1_country, address1_composite, telephone1, emailaddress1, websiteurl, primarycontactid
        )
        VALUES (
            source_tblstg.accountid, source_tblstg.transactioncurrencyid, source_tblstg.address1_city,
            source_tblstg.createdon, source_tblstg.accountnumber, source_tblstg.name,
            source_tblstg.address1_country, source_tblstg.address1_composite, source_tblstg.telephone1,
            source_tblstg.emailaddress1, source_tblstg.websiteurl, source_tblstg.primarycontactid
        );
END

Step 4: In the Copy activity sink, enter the stored procedure name, table type and table type parameter name as shown in the image below.

Step 5: Publish all the changes and debug your pipeline. You can verify the results in the SQL Server “accounts” table.
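To confirm the MERGE really behaves as an upsert before running the ADF pipeline, you can smoke-test the stored procedure outside ADF. Below is a minimal Python sketch (not from the original post) assuming a local SQL Server and pyodbc: it declares a variable of the AccountType table type in a T-SQL batch, loads one sample row and calls spUpsertAccounts. The connection string, sample id and sample values are placeholders.

import uuid
import pyodbc

# Placeholder connection string; adjust server, database and credentials.
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=localhost;DATABASE=TargetDb;Trusted_Connection=yes;"
)
cursor = conn.cursor()

# Declare a table-type variable, insert one sample row and pass it to the
# stored procedure; running this twice should update, not duplicate, the row.
batch = """
DECLARE @acc AccountType;
INSERT INTO @acc (accountid, name, accountnumber, emailaddress1)
VALUES (?, ?, ?, ?);
EXEC dbo.spUpsertAccounts @account = @acc;
"""
sample_id = uuid.UUID("11111111-2222-3333-4444-555555555555")  # fixed id so reruns hit the same row
cursor.execute(batch, str(sample_id), "Contoso Ltd", "ACC-001", "info@contoso.com")
conn.commit()

# Verify: exactly one row with the sample id should exist after any number of runs.
cursor.execute("SELECT COUNT(*) FROM dbo.accounts WHERE accountid = ?", str(sample_id))
print("Rows for sample account:", cursor.fetchone()[0])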


Schedule Serverless CRON Job to Pull data from REST APIs (Part – 1)

A REST API is an application programming interface that uses HTTP requests to GET, PUT, POST and DELETE data; it is an architectural style for communicating with third-party applications. REST APIs are useful when we need to integrate our module with third-party applications or pull data from a third-party application into our database for analysis. For analysis purposes we can consume a REST API with SSIS using a third-party connector, but the problem with this approach is that it requires an on-premises server for package deployment and job scheduling. The alternative approach is to use a serverless CRON expression; since mid-2018 the serverless development methodology has been displacing traditional development. In this blog we are going to see how to consume a REST API with a serverless CRON expression. Here we are using the Zoho People API to integrate HR module data into an Azure DB. With the Zoho People API, you can extract employee data and form data in XML or JSON format to develop new applications or integrate with your existing business applications. The Zoho People API is independent of programming languages, which helps you develop applications in any programming language (reference).

Implementation

1. Authentication token generation: In order to access the Zoho People API, a Zoho People authentication token is required. The token can be generated using browser mode or API mode; for API mode, each request must include the username or email and password in the POST body. Another approach is to register your app with Zoho by going to zoho.com/developerconsole and adding a Client ID; once added, it can be used to generate a new access_token.

2. We can check the response by sending a POST request. Open any REST client and send a request using any REST API method; here we are sending a request to get leave data and we are using the VS Code REST Client. If the request is valid, we get a response in the expected format containing the requested data.

3. As we are getting a proper response, the next step is to create a CRON expression (the serverless approach) to pull the data from the REST API; here we are going to use App Services on Azure to create a timer-triggered Function App. A CRON expression is a time-based job scheduler that has six fields to define the time precision, in the following format:

{second} {minute} {hour} {day} {month} {day-of-week}

Each field can have one of the following types of values:
A specific value, e.g. "0 5 * * * *": triggers at hh:05:00 where hh is every hour (once an hour).
All values (*), e.g. "0 * 5 * * *": triggers at 5:mm:00 every day, where mm is every minute of the hour (60 times a day).
A range (- operator), e.g. "5-7 * * * * *": triggers at hh:mm:05, hh:mm:06 and hh:mm:07 where hh:mm is every minute of every hour (3 times a minute).
A set of values (, operator), e.g. "5,8,10 * * * * *": triggers at hh:mm:05, hh:mm:08 and hh:mm:10 where hh:mm is every minute of every hour (3 times a minute).
An interval value (/ operator), e.g. "0 */5 * * * *": triggers at hh:05:00, hh:10:00, hh:15:00 and so on through hh:55:00 where hh is every hour (12 times an hour).

4. The next step is to create the function app. For development we are going to use Visual Studio Community 2019/2017. In Visual Studio, create a project by selecting File > New > Project and select Visual C# (here we use C# for development, but you can choose PHP, Python or F# as per your choice). Under Visual C#, select Azure Functions and click on Next. The next step is to configure the new Azure Functions application: select the Timer trigger function, choose your Azure Functions version (v1, v2 or v3), set the Authorization level to Anonymous (since we don't want to require an API key for the function) and keep the other settings as they are. After clicking on the Create button, Visual Studio creates the Azure Function App solution for us with the directory structure shown. The next step is to publish the function to Azure, which deploys the function to IIS and Azure; we will see that in the next part of this blog.
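The post builds the timer trigger in C# with Visual Studio. Purely as an illustrative sketch (not the author's code), the same idea in a Python Azure Function is shown below, assuming the azure-functions v2 programming model and the requests package. The Zoho leave endpoint, client credentials and refresh token are placeholders; the refresh-token exchange against accounts.zoho.com and the Zoho-oauthtoken header follow Zoho's OAuth flow, but substitute the actual Zoho People form/API you are pulling.

import logging
import os

import azure.functions as func
import requests

app = func.FunctionApp()

# Runs every 5 minutes: {second} {minute} {hour} {day} {month} {day-of-week}
@app.timer_trigger(schedule="0 */5 * * * *", arg_name="timer", run_on_startup=False)
def pull_zoho_people_data(timer: func.TimerRequest) -> None:
    # Exchange the long-lived refresh token for a short-lived access token;
    # client id/secret and refresh token are read from app settings.
    token_resp = requests.post(
        "https://accounts.zoho.com/oauth/v2/token",
        data={
            "grant_type": "refresh_token",
            "refresh_token": os.environ["ZOHO_REFRESH_TOKEN"],
            "client_id": os.environ["ZOHO_CLIENT_ID"],
            "client_secret": os.environ["ZOHO_CLIENT_SECRET"],
        },
    )
    token_resp.raise_for_status()
    access_token = token_resp.json()["access_token"]

    # Placeholder Zoho People endpoint for leave records; replace with the
    # form or API you actually need.
    api_resp = requests.get(
        "https://people.zoho.com/people/api/leave/getRecords",
        headers={"Authorization": f"Zoho-oauthtoken {access_token}"},
    )
    api_resp.raise_for_status()

    records = api_resp.json()
    logging.info("Pulled %d leave-related bytes from Zoho People", len(api_resp.content))
    # ...write `records` to Azure SQL / storage here...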

