Azure Archives - Page 9 of 15

Category Archives: Azure

Send a message/notification on Microsoft Teams as soon as an Opportunity is created in Dynamics 365 via Azure Logic Apps.

In this blog we will see the steps to send an automated message via Teams as soon as an Opportunity is created in Microsoft Dynamics 365.

Step 1: Go to portal.azure.com and select the Azure Logic App resource.
Step 2: Enter all the details required while creating a Logic App, such as the Name, Resource Group, Subscription, Region, etc.
Step 3: Select the Dynamics 365 trigger "When a record is updated".
Step 4: After setting up the Dynamics 365 CRM connection, select the Opportunities entity.
Step 5: Set the data refresh interval as required.
Step 6: Add an If (Condition) action in the next step; the condition is status_label = won for the true branch.
Step 7: Inside the True block, select the "Post a message in a chat or channel" option. You can also handle the False block, but in this case we leave it blank.
Step 8: You can post this message in a group, a channel, or as a personal chat.
Step 9: Wait for the trigger successful notification.
Step 10: Go to Dynamics 365 CRM and navigate to the Opportunities entity.
Step 11: Open a test opportunity, or create one if it doesn't exist, and close the opportunity as won.
Step 12: As soon as you close this opportunity, you should receive the message in Teams.

Hence in this blog we saw how we can send messages on MS Teams using Azure Logic Apps when certain conditions are met. A rough Python sketch of the same idea follows below. Hope this helped!
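The Logic App uses the built-in Teams connector, so no code is required; purely as an illustration of the same pattern, the sketch below checks an opportunity's status and posts a text message to a Teams channel through an incoming webhook instead of the connector. The webhook URL and the opportunity dictionary are hypothetical placeholders, not part of the original walkthrough.

import requests

# Hypothetical webhook URL created via Teams > Channel > Connectors > Incoming Webhook
TEAMS_WEBHOOK_URL = "https://contoso.webhook.office.com/webhookb2/your-webhook-id"

def notify_if_won(opportunity):
    """Post a simple text message to Teams when an opportunity is marked as won."""
    if opportunity.get("status_label") != "won":
        return  # mirrors the False branch of the Logic App condition, which we leave empty
    message = {"text": f"Opportunity '{opportunity.get('name')}' was closed as won!"}
    response = requests.post(TEAMS_WEBHOOK_URL, json=message, timeout=30)
    response.raise_for_status()

# Example payload resembling what the Dynamics 365 trigger would hand over
notify_if_won({"name": "Contoso renewal", "status_label": "won"})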


How to create a Xero Data Source in Azure Data Factory

Posted On December 9, 2021 by Jaison Menezes

Hello, in this blog we will understand the steps required to create Xero as a data source in Azure Data Factory, which can then be used to copy data from Xero into various target systems such as Azure SQL.

Create a new data source in Azure Data Factory and search for Xero. Add the desired name and description for the data source.
Select OAuth 2.0 as the authentication type and set the host as api.xero.com.
Go to App management | Xero (https://developer.xero.com/app/manage) and create a new Web app. Enter developer.xero.com as the company/application URL.
Go to the OAuth 2.0 configuration and copy the client ID and client secret after generating them.
Download and install the Postman Windows app and import this collection in Postman: Collection Web View | Postman (getpostman.com).
Create a new environment and initialize the following variables. Client ID and Client Secret are the same values obtained from Xero. For the "scopes" variable, add offline_access accounting.transactions as the initial value and openid profile email accounting.contacts accounting.settings as the current value. Save the environment.
Now go to Collections and select 'Get Started'. Under the Auth tab, select OAuth 2.0 as the type and make sure the environment we just created is selected.
Reference the variables set in the environment by typing "{{" in their respective fields.
Add https://login.xero.com/identity/connect/authorize to the Auth URL field and https://identity.xero.com/connect/token to the Access Token URL field. Save the collection.
After saving, click on Get New Access Token. Sign in to Xero and grant access to the organization from the pop-up window that appears.
Select the access token shown on screen, right-click, choose Set "your environment name", and put its value in the access token variable we set earlier. Set the value of the refresh token variable using the same method, and paste this refresh token into the corresponding Azure Data Factory field as well.
After setting the access token and refresh token, scroll to the top and click on the Use This Token button.
Send the GET request to the Xero API. After a successful API call you will also get the tenant ID; paste this tenant ID into Azure Data Factory.
Make sure encrypted endpoints are enabled. Test the connection, and after successful testing click Apply.

The Xero connector is now ready for use. A small Python sketch of the same token exchange follows below. Hope this blog helped!!
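Postman handles the OAuth 2.0 exchange above through its UI; as a minimal sketch of the same calls (the client ID, client secret, and refresh token values are placeholders you would substitute with your own), the token refresh and the connections call that returns the tenant ID could look like this in Python:

import requests

# Placeholder values from your Xero app and from the Postman flow
CLIENT_ID = "your-client-id"
CLIENT_SECRET = "your-client-secret"
REFRESH_TOKEN = "refresh-token-obtained-via-postman"

# Exchange the refresh token for a fresh access token
token_response = requests.post(
    "https://identity.xero.com/connect/token",
    auth=(CLIENT_ID, CLIENT_SECRET),
    data={"grant_type": "refresh_token", "refresh_token": REFRESH_TOKEN},
    timeout=30,
)
token_response.raise_for_status()
access_token = token_response.json()["access_token"]

# The connections endpoint lists the tenants the token can access;
# the tenantId returned here is the value pasted into Azure Data Factory.
connections = requests.get(
    "https://api.xero.com/connections",
    headers={"Authorization": "Bearer " + access_token},
    timeout=30,
)
connections.raise_for_status()
for tenant in connections.json():
    print(tenant["tenantId"], tenant.get("tenantName"))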


Azure Databricks – How to read a CSV file from blob storage and push the data into a Synapse SQL pool table

In this blog, we will learn how to read a CSV file from blob storage and push the data into a Synapse SQL pool table using an Azure Databricks Python script. In part 1 we created an Azure Synapse Analytics workspace and a dedicated SQL pool, where we saw how to create the dedicated SQL pool. In this blog, we will use a JDBC connection string to connect to the SQL pool.

Step 1: Sign in to the Azure portal. Open Azure Databricks and click on Launch Workspace to create a new Notebook.
Step 2: Once the Azure Databricks workspace opens, click on New Notebook and select your language; here I have selected Python.
Step 3: Add the following code to read the CSV file from blob storage, connect to your dedicated SQL pool using the JDBC connection string, and push the data into a table.

Python script:

# The spark session is preconfigured in a Databricks notebook.
storage_account_name = 'Your storage account name'
storage_account_access_key = 'Your storage account access key'
spark.conf.set('fs.azure.account.key.' + storage_account_name + '.blob.core.windows.net', storage_account_access_key)

blob_container = 'Your container name'
filePath = "wasbs://" + blob_container + "@" + storage_account_name + ".blob.core.windows.net/Your CSV file name"

# Read the CSV file from blob storage into a Spark DataFrame
empDf = spark.read.format("csv").load(filePath, inferSchema=True, header=True)

# JDBC connection string of the dedicated SQL pool
connectionString = "Your JDBC connection string;encrypt=true;trustServerCertificate=false;rewriteBatchedStatements=true;loginTimeout=30;"

# Append the DataFrame rows to the target table
empDf.write.jdbc(connectionString, "[dbo].[Employee]", mode="append")

Step 4: To get the JDBC connection string, first open the Synapse workspace; in the left pane under Analytics pools, open SQL pools. Select your SQL pool, and in the Overview page click the "Show database connection strings" link, open the JDBC tab, and copy the connection string.
Step 5: Now click on the Run All button to execute the script. The script will execute successfully; you can also verify the data in the SQL table (an optional read-back check is sketched below). Hope this will help.
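As a quick optional check, and assuming the same connectionString placeholder defined in the script above, you can read the table back over JDBC to confirm the rows were appended:

# Read the target table back through the same JDBC connection string
loaded = spark.read.jdbc(connectionString, "[dbo].[Employee]")
print("Rows in dbo.Employee:", loaded.count())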


How to use the Create HTML Table block in Azure Logic Apps to format JSON data

Sometimes, after extracting data from certain data sources in JSON format, we have to format it to make it easily readable before sending it via Microsoft Teams or email. In this blog I will format a sample JSON payload into an HTML table.

Since I am not using a data source, I initialize a variable with the data type set to Array (the Create HTML Table block supports array variables) and put a sample JSON payload in the value section.
Now we will convert this JSON into its HTML equivalent. To do this we use the Create HTML Table block: select the array variable we initialized earlier and set the column type to Custom.
Enter the header details; this can be any string value. For the Value field, click on it, go to Expression, and type the following expression:
item()?['Product_ID']
You can replace "Product_ID" with the name of the attribute in your JSON string.
After this we send the data via email and run the trigger. As you can see, the JSON payload is converted into a readable HTML table in the email. A small Python sketch of the same JSON-to-HTML conversion follows below.

Hope this blog helped.
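To make the transformation concrete, here is a minimal Python sketch of what the Create HTML Table block produces; the sample records and the Product_ID / Product_Name keys are hypothetical and only stand in for your own JSON attributes:

import html

# Hypothetical JSON array, like the value placed in the Logic App array variable
products = [
    {"Product_ID": "P-001", "Product_Name": "Keyboard"},
    {"Product_ID": "P-002", "Product_Name": "Mouse"},
]

def to_html_table(rows, columns):
    """Build a simple HTML table, mirroring the custom-column mapping in the Logic App."""
    header = "".join(f"<th>{html.escape(c)}</th>" for c in columns)
    body = "".join(
        "<tr>" + "".join(f"<td>{html.escape(str(r.get(c, '')))}</td>" for c in columns) + "</tr>"
        for r in rows
    )
    return f"<table><tr>{header}</tr>{body}</table>"

print(to_html_table(products, ["Product_ID", "Product_Name"]))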


Send Records from Microsoft Dynamics 365 through email using Azure Logic Apps.

In this blog we will copy a list of account names that exist in our Microsoft Dynamics 365 system and send all these names via email using Azure Logic Apps.

To start the map, select an HTTP request trigger, which runs on demand at the click of the Run Trigger button.
After defining the trigger, add an action to list rows from Dynamics 365 and select the entity needed from the drop-down; in this case I have selected Accounts. You can also add filters using parameters to limit the data extracted.
Initialize a variable to store the data. Since there is more than one record, the data type of the variable should be an array.
Now, for each record (value) found in Dynamics 365 we have to add it to the array, so we use a For Each loop and append the new data to the array inside the loop.
Next we use a Send Email block and add the variable in which we stored the account names. After this we can run the flow.
On running the map, an email containing all account names in our Dynamics 365 system is received. A rough Python sketch of the same data retrieval follows below.

Hope this blog helped!!
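The Logic App uses the Dynamics 365 connector, so no code is involved; purely as an illustration, the sketch below retrieves the same account names directly from the Dataverse Web API. The environment URL and access token are hypothetical placeholders, since the connector normally handles authentication for you.

import requests

# Hypothetical environment URL and access token for the Dataverse Web API
ORG_URL = "https://yourorg.crm.dynamics.com"
ACCESS_TOKEN = "your-access-token"

response = requests.get(
    ORG_URL + "/api/data/v9.2/accounts",
    params={"$select": "name"},
    headers={"Authorization": "Bearer " + ACCESS_TOKEN, "Accept": "application/json"},
    timeout=30,
)
response.raise_for_status()

# Collect the names into a list, like the For Each + Append to array variable steps
account_names = [record["name"] for record in response.json()["value"]]
print("\n".join(account_names))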


How to schedule a Logic App to run on specific days and at a specific time

Posted On October 17, 2021 by Aditya Somwanshi

Hi, in this blog we will see how you can set parameters for a Logic App so that it will only trigger on specific days of the week and at a specific time of day.

Step 1: Create an Azure Logic App resource from the home page. Make sure to give proper tags while creating the resource. Select a Recurrence trigger for the Logic App.
Step 2: Select the frequency as Week and add the following parameters.
Step 3: Since I wanted to trigger the Logic App on weekdays, that is Monday to Friday, at 7:30 am, I have added the data in the following way.

In this way you can set a schedule trigger for Logic Apps. The code-view equivalent of these settings is sketched below.
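For reference, the designer settings above roughly correspond to the following Recurrence trigger definition. The property names follow the Logic Apps workflow definition language, but it is shown here as a Python dictionary for illustration only, and the time zone is an assumption you would replace with your own:

# A minimal sketch of the Recurrence trigger the designer generates
recurrence_trigger = {
    "type": "Recurrence",
    "recurrence": {
        "frequency": "Week",
        "interval": 1,
        "schedule": {
            "weekDays": ["Monday", "Tuesday", "Wednesday", "Thursday", "Friday"],
            "hours": [7],     # 7 AM
            "minutes": [30],  # half past the hour, i.e. 7:30 AM
        },
        "timeZone": "India Standard Time",  # assumption: set your own time zone
    },
}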


Azure Synapse Analytics – How to ingest the Salesforce table data into a dedicated SQL pool using Notebook activity.

In this blog, we will learn how to ingest Salesforce table data into a dedicated SQL pool using a Notebook activity. In part 1 we created an Azure Synapse Analytics workspace and a dedicated SQL pool, where we saw how to create the dedicated SQL pool, and in the Salesforce data post we wrote the Python script to get the data. In this blog, we will learn how to connect to a dedicated SQL pool and ingest data into a table step by step.

Step 1: Sign in to the Azure portal. Open Azure Synapse Analytics and click on Open Synapse Studio to open your existing Notebook.
Step 2: Once Synapse Studio opens, click on "Develop" and open your existing Notebook.
Step 3: Add the following code to connect to your dedicated SQL pool using the "pyodbc" library and write the SQL insert query to load the data into a table (a rough sketch of such a script is included below).
Step 4: Once the script is ready, click on "Add to pipeline" as per the below screenshot.
Step 5: Once you click on "New pipeline", it will automatically create a Notebook activity; give the pipeline a proper name.
Step 6: Debug the pipeline; here is the output of the pipeline.

Hope this will help.
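The original script is shown as a screenshot, so the snippet below is only a minimal sketch of the approach described in Step 3. The server, database, credentials, target table, and sample rows are all hypothetical placeholders, and it assumes the ODBC Driver 17 for SQL Server is available on the Spark pool:

import pyodbc

# Hypothetical connection details for the dedicated SQL pool
conn_str = (
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=your-workspace.sql.azuresynapse.net;"
    "DATABASE=YourDedicatedSqlPool;"
    "UID=sqladminuser;"
    "PWD=your-password;"
)

# Rows previously pulled from Salesforce (illustrative shape only)
rows = [("0015g00000abc", "Contoso Ltd"), ("0015g00000xyz", "Fabrikam Inc")]

conn = pyodbc.connect(conn_str)
cursor = conn.cursor()

# Insert each Salesforce record into the target table
cursor.executemany(
    "INSERT INTO dbo.SalesforceAccount (AccountId, AccountName) VALUES (?, ?)",
    rows,
)
conn.commit()
cursor.close()
conn.close()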


Azure Synapse Analytics – How to resolve ModuleNotFoundError: No module named 'simple_salesforce' error in Notebook

In this blog, we will learn how to resolve the ModuleNotFoundError: No module named 'simple_salesforce' error in a Notebook.

Step 1: To upload a package to your cluster, navigate to "Manage", choose "Apache Spark pools", click the three dots on the Spark pool you want to add the package to, and select "Packages".
Step 2: Once you have clicked on Packages, you will see the requirement files option. Here, select the upload option to upload the file.
Step 3: There are two formats for the requirement file (.txt or .yml); here we will use a yml file. A requirement file is essentially a file that you upload to the Spark cluster, and it runs the equivalent of a "pip install" for all the packages listed in the file when the cluster starts. You add your extra packages here and restart the cluster (or force apply). Upload your requirement file as per the below screenshot (an example of the file contents is shown after these steps).
Step 4: Once you have selected your requirement file, check the "Immediately apply settings change and cancel all active applications" option to force the changes to apply.

Once the package installation completes, you can re-run your Notebook and it will execute successfully. Hope this will help.
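Whichever format you choose, the essential content is the PyPI package name. As a small example (the post itself uploads a .yml file; this shows the simpler .txt variant, and pinning a version is optional), a requirements.txt could contain just:

# requirements.txt - uploaded under Manage > Apache Spark pools > Packages
# Note: the PyPI package name uses a hyphen, while the import name uses an underscore.
simple-salesforce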


Azure Synapse Analytics – How to get Salesforce data in a Notebook via a Python script

In this blog, we will learn how to get Salesforce data in a Notebook via a Python script. In part 1 we created an Azure Synapse Analytics workspace. Here, we will create a Notebook and write a Python script to get Salesforce data step by step.

Step 1: Sign in to the Azure portal. Open Azure Synapse Analytics and click on Open Synapse Studio to create a Notebook.
Step 2: Once Synapse Studio opens, click on "Develop" and create a new Notebook.
Step 3: Provide a suitable name for your Notebook, select Python as the language, and attach the Apache Spark pool that you have created.
Step 4: Before you write a Python script to get data from Salesforce, you first have to create a new "Connected App" in your Salesforce portal (prod or sandbox). Go to "Setup" and open the "App Manager". Then create a "New Connected App". Name your application, tick the box "Enable OAuth Settings", make all scopes available in "Selected OAuth Scopes", type "http://localhost/" in "Callback URL", and save. At the end, note down the "Consumer Key" and the "Consumer Secret". Using the user ID, password, consumer key, and consumer secret, we can get the Salesforce access token.
Step 5: Once you have the above information, write the following Python script to get the Salesforce data. To read the data from Salesforce, here I have used the "simple_salesforce" Python library (a rough sketch of such a script is included below).
Step 6: Here is the output of the script.

Hope this will help.
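The original script is shown as a screenshot, so the snippet below is only a minimal sketch of the flow described in Steps 4 and 5: obtain an access token with the OAuth 2.0 username-password grant, then query accounts with simple_salesforce. The credentials are hypothetical placeholders, and depending on your org the password may need the security token appended:

import requests
from simple_salesforce import Salesforce

# Hypothetical credentials from the Connected App and the Salesforce user
USERNAME = "user@example.com"
PASSWORD = "your-password"
CONSUMER_KEY = "connected-app-consumer-key"
CONSUMER_SECRET = "connected-app-consumer-secret"

# OAuth 2.0 username-password flow to obtain an access token
token_response = requests.post(
    "https://login.salesforce.com/services/oauth2/token",  # use test.salesforce.com for sandboxes
    data={
        "grant_type": "password",
        "client_id": CONSUMER_KEY,
        "client_secret": CONSUMER_SECRET,
        "username": USERNAME,
        "password": PASSWORD,
    },
    timeout=30,
)
token_response.raise_for_status()
auth = token_response.json()

# Reuse the token with simple_salesforce and run a SOQL query
sf = Salesforce(instance_url=auth["instance_url"], session_id=auth["access_token"])
result = sf.query("SELECT Id, Name FROM Account LIMIT 10")
for record in result["records"]:
    print(record["Id"], record["Name"])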


Azure Databricks – Part 1 – How to create an Azure Databricks workspace and a Spark cluster?

In this blog, we will learn how to create an Azure Databricks workspace and a Spark cluster step by step using the Azure portal.

Create an Azure Databricks workspace:
Step 1: To create an Azure Databricks workspace, sign in to the Azure portal. In the upper-left corner of the home page, select Create a resource. In the Search the Marketplace box, enter Azure Databricks and press Enter.
Step 2: Select Azure Databricks from the search results and click on the Create button.
Step 3: Enter the following information: Subscription, Resource group, Workspace name, Region, and Pricing tier.
Step 4: Click the Review + create tab before clicking on the Create button. Once you click on the Create button, it will take 3 to 4 minutes to create the resource.

Create a Spark cluster in Azure Databricks:
Step 1: In the Azure portal, go to the Databricks workspace that you created, and then click Launch Workspace.
Step 2: You are redirected to the Azure Databricks portal. From the portal, click New Cluster.
Step 3: On the New Cluster page, provide the values to create the cluster. A quick sanity check you can run on the new cluster is shown below.

Hope this will help.
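Once the cluster is running, you can confirm it executes jobs by attaching a notebook to it and running a couple of trivial Spark commands; this is just a sanity-check sketch, and it only assumes the preconfigured spark session that Databricks notebooks provide:

# Run in a notebook attached to the new cluster; the spark session is preconfigured.
print("Spark version:", spark.version)

# A tiny DataFrame round trip to confirm the cluster executes jobs
df = spark.range(5).toDF("n")
print("Row count:", df.count())
df.show()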

