Azure Archives - Page 11 of 16

Category Archives: Azure

How to use the Create HTML Table block in Azure Logic Apps to format JSON data

Sometimes, after extracting data from a data source in JSON format, we need to format it into something easily readable before sending it via Microsoft Teams or email. In this blog I will format a sample JSON payload into an HTML table. Since I am not using a data source, I initialize a variable with the data type set to Array (the Create HTML Table block supports array variables) and put a sample JSON payload in the Value section. Next we convert this JSON into HTML. To do this, add a Create HTML Table block, select the array variable we initialized earlier, and set the column type to Custom. Enter the header details; this can be any string value. For the Value field, click on it, go to Expression, and type the following expression: item()?['Product_ID']. You can replace "Product_ID" with the name of the attribute in your JSON string. After this we send the data via email and run the trigger. As you can see, the JSON is converted into a readable HTML table in the email. Hope this blog helped.
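For reference, here is a minimal Python sketch of the same transformation, using an assumed sample array (the Product_ID, Product_Name, and Price attributes are placeholders) to show roughly what the Create HTML Table block produces:

# Sample array, similar to what the array variable in the Logic App might hold (placeholder data).
products = [
    {"Product_ID": "P001", "Product_Name": "Keyboard", "Price": 25},
    {"Product_ID": "P002", "Product_Name": "Mouse", "Price": 15},
]

# Build an HTML table the way the Create HTML Table block does:
# one header row, then one row per item, one cell per selected attribute.
columns = ["Product_ID", "Product_Name", "Price"]
header = "".join(f"<th>{c}</th>" for c in columns)
rows = "".join(
    "<tr>" + "".join(f"<td>{item.get(c, '')}</td>" for c in columns) + "</tr>"
    for item in products
)
html_table = f"<table><tr>{header}</tr>{rows}</table>"
print(html_table)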

Send Records from Microsoft Dynamics 365 through email using Azure Logic Apps.

In this blog we will copy the list of account names that exist in our Microsoft Dynamics 365 system and send all these names via email using Azure Logic Apps. To start the flow, select an HTTP request trigger, which runs on demand at the click of the Run Trigger button. After defining the trigger, add an action to list rows from Dynamics 365 and select the entity you need from the dropdown; in this case I have selected Accounts. You can also add filters using parameters to limit the data extracted. Initialize a variable to store the data; since there is more than one record, the data type of the variable should be Array. Now, for each record (value) found in Dynamics 365 we have to add it to the array, so we use a For Each loop and append the new data to the array inside the loop. Next we use a Send Email block and add the variable we used to store the account names. After this we can run the flow. On running it, an email containing all the account names in our Dynamics 365 system is received. Hope this blog helped!
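Outside the designer, the same sequence (list the account rows, collect the names, build an email body) looks roughly like the Python sketch below; the organization URL and the access token are placeholders you would replace with your own values.

import requests

# Placeholder organization URL and access token.
ORG_URL = "https://<your-org>.crm.dynamics.com"
TOKEN = "<access-token>"

# Equivalent of the List rows action: retrieve account names from Dataverse.
response = requests.get(
    f"{ORG_URL}/api/data/v9.2/accounts?$select=name",
    headers={"Authorization": f"Bearer {TOKEN}", "Accept": "application/json"},
)
response.raise_for_status()

# Equivalent of the For Each loop: append each account name to an array.
account_names = []
for record in response.json()["value"]:
    account_names.append(record["name"])

# This string is what the Send Email block would receive as the body.
email_body = "\n".join(account_names)
print(email_body)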

How to schedule a Logic App to run on specific days and at a specific time

Posted On October 17, 2021 by Aditya Somwanshi

Hi, in this blog we will see how you can set the parameters of a Logic App so that it triggers only on specific days of the week and at a specific time of day.
Step 1: Create an Azure Logic App resource from the home page. Make sure to give proper tags while creating the resource. Select a Recurrence trigger for the Logic App.
Step 2: Set the frequency to Week and add the following parameters.
Step 3: Since I wanted to trigger the Logic App on weekdays, that is Monday to Friday, and at 7:30 am, I have added the data in the following way.
In this way you can set a trigger for Logic Apps.
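For reference, the configuration above corresponds roughly to the following schedule in the Recurrence trigger, shown here as a Python dictionary for readability (the time zone is an assumption):

# Approximate shape of the Recurrence trigger settings for "weekdays at 7:30 am".
recurrence = {
    "frequency": "Week",
    "interval": 1,
    "schedule": {
        "weekDays": ["Monday", "Tuesday", "Wednesday", "Thursday", "Friday"],
        "hours": [7],
        "minutes": [30],
    },
    # The time zone is an assumption; pick the one that matches your region.
    "timeZone": "India Standard Time",
}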

Azure Synapse Analytics – How to ingest the Salesforce table data into a dedicated SQL pool using Notebook activity.

In this blog, we will learn how to ingest Salesforce table data into a dedicated SQL pool using the Notebook activity. In part 1 we created an Azure Synapse Analytics workspace, in the dedicated SQL pool post we saw how to create a dedicated SQL pool, and in the Salesforce data post we wrote a Python script to get the data. Here, we will learn how to connect to a dedicated SQL pool and ingest data into a table step by step.
Step 1: Sign in to the Azure portal. Open Azure Synapse Analytics and click on Open Synapse Studio to open your existing Notebook.
Step 2: Once Synapse Studio opens, click on "Develop" and open your existing Notebook.
Step 3: Add code to connect to your dedicated SQL pool using the "pyodbc" library and write a SQL INSERT query to load the data into a table (a sketch of such a script appears at the end of this post).
Step 4: Once the script is ready, click on "Add to pipeline" as per the below screenshot.
Step 5: Once you click on "New pipeline", it will automatically create a Notebook activity; give the pipeline a proper name.
Step 6: Debug the pipeline; here is the output of the pipeline.
Hope this will help.
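Here is a minimal sketch of what the Step 3 script could look like; the server, database, credentials, target table, and sample rows are all placeholders:

import pyodbc

# Placeholder connection details for the dedicated SQL pool.
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=<workspace-name>.sql.azuresynapse.net;"
    "DATABASE=<dedicated-sql-pool-name>;"
    "UID=<sql-admin-user>;"
    "PWD=<password>"
)
cursor = conn.cursor()

# Placeholder rows; in practice these come from the Salesforce query in the earlier post.
rows = [
    ("0015g00000AAA11", "Contoso Ltd"),
    ("0015g00000BBB22", "Fabrikam Inc"),
]

# Load the data into the target table with a parameterized INSERT.
for account_id, account_name in rows:
    cursor.execute(
        "INSERT INTO dbo.SalesforceAccount (AccountId, AccountName) VALUES (?, ?)",
        account_id,
        account_name,
    )

conn.commit()
conn.close()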

Azure Synapse Analytics – How to resolve the ModuleNotFoundError: No module named 'simple_salesforce' error in a Notebook

In this blog, we will learn how to resolve the ModuleNotFoundError: No module named 'simple_salesforce' error in a Notebook.
Step 1: To upload a package to your cluster, navigate to "Manage", choose "Apache Spark pools", and click the three dots on the Spark pool that you want to add the package to.
Step 2: Once you click on Packages, you can see the requirement files option. Select the upload option to upload the file.
Step 3: There are two options for the requirement file (.txt or .yml); here we will use a .yml file (a minimal example appears at the end of this post). A requirement file is essentially a file you upload to the Spark cluster that runs the equivalent of a "pip install" for all the packages listed in it when the cluster starts. Add your extra packages here and restart the cluster (or force-apply the change). Upload your requirement file as per the below screenshot.
Step 4: Once you select your requirement file, check the "Immediately apply settings change and cancel all active applications" option to force the change to apply. Once the package installation completes, you can re-run your Notebook and it will execute successfully.
Hope this will help.
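For reference, a minimal .yml requirement file for this scenario could look like the following; the environment name is an assumption, and simple-salesforce is the only package strictly needed here:

name: synapse-env
dependencies:
  - pip
  - pip:
      - simple-salesforce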

Azure Synapse Analytics – How to get Salesforce data using a Notebook via a Python script

In this blog, we will learn how to get Salesforce data in a Notebook via a Python script. In part 1 we created an Azure Synapse Analytics workspace. Here, we will create a Notebook and write a Python script to get Salesforce data step by step.
Step 1: Sign in to the Azure portal. Open Azure Synapse Analytics and click on Open Synapse Studio to create a Notebook.
Step 2: Once Synapse Studio opens, click on "Develop" and create a new Notebook.
Step 3: Provide a suitable name for your Notebook, select Python as the language, and attach the Apache Spark pool that you have created.
Step 4: Before you write a Python script to get data from Salesforce, you first have to create a new "Connected App" in your Salesforce portal (production or sandbox). Go to "Setup" and open the "App Manager". Then create a "New Connected App". Name your application and tick the box "Enable OAuth Settings". In "Selected OAuth Scopes", make all scopes available. Type "http://localhost/" in "Callback URL" and save. At the end, note down the "Consumer Key" and the "Consumer Secret". Using the user ID, password, consumer key, and consumer secret we can get a Salesforce access token.
Step 5: Once you have the above information, write a Python script to get the Salesforce data. To read the data from Salesforce, I have used the "simple_salesforce" Python library (a sketch of such a script appears at the end of this post).
Step 6: Here is the output of the script.
Hope this will help.
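A minimal sketch of such a script, assuming the OAuth username-password flow with the Connected App credentials from Step 4 (all credential values are placeholders):

import requests
from simple_salesforce import Salesforce

# Placeholder credentials from the Connected App and the Salesforce user.
USERNAME = "<salesforce-username>"
PASSWORD = "<password + security token>"
CONSUMER_KEY = "<consumer-key>"
CONSUMER_SECRET = "<consumer-secret>"

# Request an access token using the OAuth 2.0 username-password flow.
# Use https://test.salesforce.com for a sandbox org.
token_response = requests.post(
    "https://login.salesforce.com/services/oauth2/token",
    data={
        "grant_type": "password",
        "client_id": CONSUMER_KEY,
        "client_secret": CONSUMER_SECRET,
        "username": USERNAME,
        "password": PASSWORD,
    },
)
token_response.raise_for_status()
auth = token_response.json()

# Use the token and instance URL with simple_salesforce to query a table.
sf = Salesforce(instance_url=auth["instance_url"], session_id=auth["access_token"])
result = sf.query("SELECT Id, Name FROM Account")
for record in result["records"]:
    print(record["Id"], record["Name"])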

Azure Databricks – Part 1 – How to create an Azure Databricks workspace and a Spark Cluster?

In this blog, we will learn how to create an Azure Databricks workspace and a Spark cluster step by step using the Azure portal.
Create an Azure Databricks workspace:
Step 1: To create an Azure Databricks workspace, sign in to the Azure portal. In the upper-left corner of the home page, select Create a resource. In the Search the Marketplace box, enter Azure Databricks and press Enter.
Step 2: Select Azure Databricks from the search results and click on the Create button.
Step 3: Click on the Create button and enter the following information: Subscription, Resource group, Workspace name, Region, and Pricing tier.
Step 4: Click the Review + create tab before clicking the Create button. Once you click on the Create button, it takes 3 to 4 minutes to create the resource.
Create a Spark cluster in Azure Databricks:
Step 1: In the Azure portal, go to the Databricks workspace that you created, and then click Launch Workspace.
Step 2: You are redirected to the Azure Databricks portal. From the portal, click New Cluster.
Step 3: On the New Cluster page, provide the values to create a cluster.
Hope this will help.

Azure Databricks – Part 2 – How to read Amazon DynamoDB table data using Notebooks

In this blog, we will learn how to connect to AWS DynamoDB and read table data using a Python script step by step.
Step 1: In the left pane, select Azure Databricks. From the Common Tasks, select New Notebook.
Step 2: In the Create Notebook dialog box, enter a name, select Python as the language, and select the Spark cluster that you created earlier.
Step 3: Once the Notebook is created, you can write a Python script to connect to AWS DynamoDB using the boto3 client library. To connect to AWS DynamoDB you must have an AWS access key ID and an AWS secret access key.
Python script:
# Databricks notebook source
import boto3
import pandas as pd

# Create a boto3 session with your AWS credentials and region.
session = boto3.session.Session(
    aws_access_key_id='your AWS access key ID',
    aws_secret_access_key='your AWS secret access key',
    region_name='your region',
)

# Connect to DynamoDB and scan the table.
dynamodb = session.resource("dynamodb")
table = dynamodb.Table("Table Name")
response = table.scan()
items = response["Items"]

# Load the items into a pandas DataFrame and render them as CSV text.
data = pd.DataFrame(items)
output = data.to_csv(index_label="idx", encoding="utf-8")
print(output)
Step 4: Now you can check the output by pressing Shift + Enter or clicking on Run Cell.
Hope this will help.

SQL Trigger not populating with Table in Logic App

Have you wondered how to solve the issue where a SQL-triggered Azure Logic App cannot select your table in the dropdown? This blog will help you fix this issue.

ADF’s Wrangling Data Flow (Power Query) – How do you get matched rows from two data sources using inner joins?

Posted On April 25, 2021 by Sandip Patel

In this blog, we will learn how to get matched rows from two data sources using an inner join in ADF's Wrangling Data Flow, step by step.
Step 1: Add a Power Query flow as per the below screenshot.
Step 2: In the new Power Query, give the query a proper name and add the data sources that you want to merge. Here I am adding two datasets named "DS_EMP1" and "DS_EMP2"; both data sources have employee information.
Step 3: By default, the UserQuery will point to the first dataset query. All the transformations should be done on the UserQuery.
Step 4: Now click on Merge queries to merge your datasets.
Step 5: Select a table and the matching columns to create a merged table. Here I have selected EmpID as the common key to merge the data, and the join kind will be "Inner".
Step 6: Once you click the OK button, you get a warning: "Nested join must be expanded".
Step 7: Click on the expand button to expand your result and select whatever columns you want from the other data source. In my case both datasets have the same column names, so I deselect all the columns from the result dataset.
Step 8: Now the UserQuery will show the matched rows; that is all you need to do to get the matched rows from two data sources.
Hope this will help.
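For reference, the same inner join expressed in Python with pandas; the column names mirror the example above and the sample rows are placeholders:

import pandas as pd

# Placeholder employee data standing in for DS_EMP1 and DS_EMP2.
ds_emp1 = pd.DataFrame({"EmpID": [1, 2, 3], "Name": ["Asha", "Ravi", "Meera"]})
ds_emp2 = pd.DataFrame({"EmpID": [2, 3, 4], "Department": ["Sales", "IT", "HR"]})

# An inner join on the common key EmpID keeps only the matched rows,
# just like the "Inner" join kind in the Merge queries step.
matched = ds_emp1.merge(ds_emp2, on="EmpID", how="inner")
print(matched)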
