
Category Archives: Azure

Error Handling in Azure Data Factory (ADF): Part 1

Posted On June 10, 2025 by Deepak Chauhan

Azure Data Factory (ADF) is a powerful ETL tool, but when it comes to error handling, things can get tricky—especially when you’re dealing with parallel executions or want to notify someone on failure. In this two-part blog series, we’ll walk through how to build intelligent error handling into your ADF pipelines. This post—Part 1—focuses on the planning phase: understanding ADF’s behavior, the common pitfalls, and how to set your pipelines up for reliable error detection and notification. In Part 2, we’ll implement everything you’ve planned using ADF control flows.

Part 1: Planning for Failures

Step 1: Understand ADF Dependency Behavior

In ADF, activities can be connected via dependency conditions: Success, Failure, Skipped, and Completion. When multiple dependencies are attached to a single activity, ADF uses an OR condition. However, if you have parallel branches, ADF uses an AND condition for the following activity—meaning the next activity runs only if all parallel branches succeed. (A sketch of how these conditions appear in pipeline JSON is included at the end of this post.)

Step 2: Identify the Wrong Approach

Many developers attempt to add a “failure email” activity after each pipeline activity, assuming it will trigger if any activity fails. This doesn’t work as expected: because ADF applies an AND condition across parallel dependencies, a single email activity wired to the failure output of several parallel activities fires only when every one of them fails, not when any one does.

Step 3: Design with a Centralized Failure Handler in Mind

So, what’s the right approach? Plan your pipeline in a way that allows you to handle any failure from a centralized point—a dedicated failure handler.

Step 4: Plan Your Notification Strategy

Error detection is one half of the equation. The other half is communication. Ask yourself: who needs to be notified, over which channel, and with what level of detail? Start thinking about Logic Apps, Webhooks, or Azure Functions that you can plug in later to send customized notifications. We’ll cover the “how” in the next blog, but the “what” needs to be defined now.

Planning for failure isn’t pessimism—it’s smart architecture. By understanding ADF’s behavior and avoiding common mistakes with parallel executions, you can build pipelines that fail gracefully, alert intelligently, and recover faster. In Part 2, we’ll take this plan and show you how to implement it step-by-step using ADF’s built-in tools.

Please refer to our case study https://www.cloudfronts.com/case-studies/city-council/ to know more about how we used Azure Data Factory and other Azure Integration Services to deliver seamless integration. We hope you found this blog post helpful! If you have any questions or want to discuss further, please contact us at transform@cloudfronts.com.
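For reference, here is a minimal sketch of how dependency conditions look in a pipeline’s JSON definition. The activity names are placeholders; the dependsOn/dependencyConditions structure is standard ADF pipeline JSON.

```json
{
  "name": "SendFailureAlert",
  "type": "WebActivity",
  "dependsOn": [
    { "activity": "CopyOrders",    "dependencyConditions": [ "Failed" ] },
    { "activity": "CopyCustomers", "dependencyConditions": [ "Failed" ] }
  ]
}
```

Because both dependencies must be satisfied, this alert fires only if CopyOrders and CopyCustomers both fail, which is exactly the pitfall described in Step 2.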


Automating File Transfers from Azure File Share to Blob Storage with a Function App

Efficient file management is essential for businesses leveraging Azure cloud storage. Automating file transfers between Azure File Share and Azure Blob Storage enhances scalability, reduces manual intervention, and ensures data availability. This blog provides a step-by-step guide to setting up an Azure Timer Trigger Function App to automate the transfer process.

Why Automate File Transfers?

Manual copies do not scale: automation keeps transfers on a predictable schedule, removes human error, and ensures downstream systems always find the data where they expect it.

Steps to Implement the Solution

1. Prerequisites

To follow this guide, ensure you have an Azure subscription, a storage account containing the source File Share and the destination Blob container, and a Function App (or the Azure Functions Core Tools for local development).

2. Create a Timer Trigger Function App

3. Install Required Packages

For C#: Azure.Storage.Files.Shares and Azure.Storage.Blobs. For Python: azure-storage-file-share and azure-storage-blob.

4. Implement the File Transfer Logic

A sketch of the C# implementation is shown at the end of this post.

5. Deploy and Monitor the Function

To conclude, automating file transfers from Azure File Share to Blob Storage using a Timer Trigger Function streamlines operations and enhances reliability. Implementing this solution optimizes file management, improves cost efficiency, and ensures compliance with best practices. Begin automating your file transfers today! Need expert assistance? Reach out for tailored Azure solutions to enhance your workflow. We hope you found this blog useful, and if you would like to discuss anything, you can reach out to us at transform@cloudfronts.com.
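The original post’s C# implementation did not survive formatting, so here is a minimal sketch of the transfer logic, assuming the in-process Functions model and the Azure.Storage.Blobs and Azure.Storage.Files.Shares packages. The share name, container name, and connection setting are placeholders.

```csharp
using System;
using System.Threading.Tasks;
using Azure.Storage.Blobs;
using Azure.Storage.Files.Shares;
using Azure.Storage.Files.Shares.Models;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public static class FileShareToBlobFunction
{
    [FunctionName("FileShareToBlob")]
    public static async Task Run(
        [TimerTrigger("0 */30 * * * *")] TimerInfo timer, // every 30 minutes
        ILogger log)
    {
        // "StorageConnectionString" is a placeholder app setting.
        string conn = Environment.GetEnvironmentVariable("StorageConnectionString");

        var shareClient = new ShareClient(conn, "source-share");
        var dirClient = shareClient.GetRootDirectoryClient();
        var containerClient = new BlobContainerClient(conn, "backup-container");
        await containerClient.CreateIfNotExistsAsync();

        await foreach (ShareFileItem item in dirClient.GetFilesAndDirectoriesAsync())
        {
            if (item.IsDirectory) continue; // this sketch skips subdirectories

            var fileClient = dirClient.GetFileClient(item.Name);
            ShareFileDownloadInfo download = await fileClient.DownloadAsync();

            // Upload the file stream to Blob Storage, overwriting if it exists.
            await containerClient
                .GetBlobClient(item.Name)
                .UploadAsync(download.Content, overwrite: true);

            log.LogInformation("Copied {file} to blob storage.", item.Name);
        }
    }
}
```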


From Commit to Inbox: Automating Change Summaries with Azure AI

In our small development team, we usually merge code without formal pull requests. Instead, changes are committed directly by the developer responsible for the project, and while I don’t need to approve every change in my role as the senior developer, I still need to stay aware of what’s being merged. Manually reviewing each commit was becoming too time-consuming, so I built an automated process using Power Automate, Azure DevOps, and Azure AI. Now, whenever a commit is made, it triggers a workflow that summarizes the changes and sends me an email. This simple system keeps me informed without slowing down the team’s work.

Although I kept the automation straightforward, it could easily be extended further. For example, it could be improved to allow me to reply directly to the committer from the email, or even display file changes in detail using a text comparison feature in Outlook. We didn’t need that level of detail, but it’s a good option if deeper insights are ever required.

Journey

We get started with the Azure DevOps trigger “When code is pushed”. Here we specify the organization name, project name, and repository name. We can also specify a particular branch if we want to limit our tracking to just that branch; otherwise it tracks all the branches available to the user.

Then we have a for-each loop that iterates over the “Ref Updates” object array. It contains a list of all the changes but not the exact details. This action pops up automatically when we configure the next action.

Then we set up an “Azure DevOps REST API request to invoke” action. This has connection capabilities to Azure DevOps directly, so it is better to use than a plain REST API action. We specify the relative URL as {Repository Name}/_apis/git/repositories/{Repository ID}/commits/{Commit ID}/changes?api-version=6.0. The Commit ID shows up as newObjectId in the “When code is pushed” trigger. (A sketch of the response this call returns is included at the end of this post.)

Then we pass the output of this action to a “Create Text with GPT using a prompt” action under the AI Builder group. It took several trials and errors to get the prompt to produce exactly what I wanted.

The last action is a simple “Send an email” one where I’ve kept myself as the recipient and added a subject and a body. Now to put it all together and run it. When the hyperlinks in the resulting email are clicked, they take me straight to Azure DevOps, pointing to the file that was changed (the Events Codeunit, for instance).

Conclusion

Summarizing commit changes is just one way automation can make life easier. This same idea can be applied to other tasks, like summarizing meeting notes, project updates, or customer feedback. With a bit of creativity, we can use tools like this to cut down on repetitive work and free up time to focus on learning new skills or tackling more challenging projects. By finding smart ways to streamline our workflows, we can work more efficiently and open up more time for growth and development.

If you need further assistance or have specific questions about your ERP setup, feel free to reach out for personalized guidance. We hope you found this blog useful, and if you would like to discuss anything, you can reach out to us at transform@cloudfronts.com.
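For context, here is a sketch of what the commits/changes endpoint returns, based on the Azure DevOps Git REST API. The file path and URL shown are hypothetical; the GPT action receives this JSON and is prompted to turn the changes array into a readable summary.

```json
{
  "changeCounts": { "Edit": 1 },
  "changes": [
    {
      "item": {
        "path": "/src/codeunit/Events.Codeunit.al",
        "gitObjectType": "blob",
        "url": "https://dev.azure.com/{org}/{project}/_apis/git/repositories/{repoId}/items/..."
      },
      "changeType": "edit"
    }
  ]
}
```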


How to Recover Azure Function App Code

Azure Function Apps are a powerful tool for creating serverless applications, but losing the underlying code can be a stressful experience. Whether due to a missing backup, accidental deletion, or unclear deployment pipelines, the need to recover code becomes critical. Thankfully, even without backups, there are ways to retrieve and reconstruct your Azure Function App code using the right tools and techniques. In this blog, we’ll guide you through a step-by-step process to recover your code, explore the use of decompilation tools, and share preventive tips to help you avoid similar challenges in the future.

Step 1: Understand Your Function App Configuration

Before recovering anything, review the Function App’s configuration in the Azure Portal (runtime stack, version, and deployment source) so you know what you are reconstructing.

Step 2: Retrieve the DLL File

To recover your code, you need access to the compiled assembly file (DLL). From Kudu (Advanced Tools), navigate to the site/wwwroot/bin directory where the YourFunctionApp.dll file resides and download it.

Step 3: Decompile the DLL File

Once you have the DLL file, use a .NET decompiler to extract the source code by opening the .dll file in the decompiler. The decompiler I have used here is dotPeek, which is a free .NET decompiler. (A scriptable alternative is sketched at the end of this post.)

To conclude, recovering a Function App without backups might seem daunting, but by understanding its configuration, retrieving the compiled DLL, and using decompilation tools, you can successfully reconstruct your code. To prevent such situations in the future, enable source control by integrating your Function App with GitHub or Azure DevOps, or set up backups.

Please refer to our customer success story Customer Success Story – BUCHI | CloudFronts to know more about how we used the Function App and other Azure Integration Services to deliver seamless integration. We hope you found this blog post helpful! If you have any questions or want to discuss further, please contact us at transform@cloudfronts.com.
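If you prefer a scriptable route instead of a GUI decompiler, the open-source ILSpy command-line tool can produce a compilable C# project from the DLL. This is an alternative added here for convenience, not the tool used in the walkthrough above; paths are placeholders.

```bash
# Install the ILSpy command-line decompiler as a .NET global tool
dotnet tool install -g ilspycmd

# Decompile the downloaded assembly into a C# project folder
ilspycmd YourFunctionApp.dll -p -o ./RecoveredSource
```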


Real-Time Monitoring with Azure Live Metrics

In modern cloud-based applications, real-time monitoring is crucial for detecting performance bottlenecks, identifying failures, and maintaining application health. Azure Live Metrics is a powerful feature of Application Insights that allows developers and operations teams to monitor application telemetry with minimal latency. Unlike traditional logging and telemetry solutions that rely on post-processing, Live Metrics enables real-time diagnostics, reducing the time to identify and resolve issues.

What is Azure Live Metrics?

Azure Live Metrics is a real-time monitoring tool within Azure Application Insights. It provides instant visibility into application performance without the overhead of traditional logging.

Benefits of Azure Live Metrics

1. Instant Issue Detection - With real-time telemetry, developers can detect failed requests, exceptions, and performance issues instantly rather than waiting for logs to be processed.

2. Optimized Performance - Traditional logging solutions can slow down applications by writing large amounts of telemetry data. Live Metrics minimizes overhead by using adaptive sampling and streaming only essential data.

3. Customizable Dashboards - Developers can filter and customize Live Metrics dashboards to track specific KPIs, making it easier to diagnose performance trends and anomalies.

4. No Data Persistence Overhead - Unlike standard telemetry logging, Live Metrics does not require data to be persisted in storage, reducing storage costs and improving performance.

How to Enable Azure Live Metrics

To use Azure Live Metrics in your application, follow these steps (package and configuration snippets are collected at the end of this post):

Step 1: Install Application Insights SDK - For .NET applications, install the Microsoft.ApplicationInsights.AspNetCore NuGet package. For Java applications, include the Application Insights agent.

Step 2: Enable Live Metrics Stream - In your Application Insights resource, navigate to Live Metrics Stream and ensure it is enabled.

Step 3: Configure Application Insights - Modify your appsettings.json (for .NET) to include the Application Insights connection string. For Azure Functions, set the APPLICATIONINSIGHTS_CONNECTION_STRING in Application Settings.

Step 4: Start Monitoring in Azure Portal - Go to the Application Insights resource in the Azure Portal, navigate to Live Metrics, and start observing real-time telemetry from your application.

Key Metrics to Monitor

Typical panels include incoming request rate, duration, and failure rate; outgoing dependency calls and their failures; and server health counters such as CPU and memory, alongside a live feed of exceptions.

Best Practices for Using Live Metrics

Use the built-in filters to narrow the stream to the servers or operations under investigation, and treat Live Metrics as a complement to stored telemetry rather than a replacement for it.

To conclude, Azure Live Metrics is an essential tool for real-time application monitoring, providing instant insights into application health, failures, and performance. By leveraging Live Metrics in Application Insights, developers can reduce troubleshooting time and improve system reliability. If you’re managing an Azure-based application, enabling Live Metrics can significantly enhance your monitoring capabilities. Ready to implement Live Metrics? Start monitoring your Azure application today and gain real-time visibility into its performance! We hope you found this blog useful, and if you would like to discuss anything, you can reach out to us at transform@cloudfronts.com.
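The snippets below restore the gist of the stripped examples. Connection string values and the agent version number are placeholders.

```bash
# .NET: add the Application Insights SDK
dotnet add package Microsoft.ApplicationInsights.AspNetCore

# Java: attach the Application Insights agent at startup (use a real version)
java -javaagent:applicationinsights-agent-3.x.x.jar -jar app.jar
```

appsettings.json for a .NET application:

```json
{
  "ApplicationInsights": {
    "ConnectionString": "InstrumentationKey=<key>;IngestionEndpoint=<endpoint>"
  }
}
```

Then register the SDK at startup; once the SDK is active, Live Metrics data is collected by default.

```csharp
// Program.cs: reads ApplicationInsights:ConnectionString from configuration
builder.Services.AddApplicationInsightsTelemetry();
```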


Azure Integration Services (AIS): The Key to Scalable Enterprise Integrations

In today’s dynamic business environment, organizations rely on multiple applications, systems, and cloud services to drive operations, making scalable enterprise integrations essential. As businesses grow, their data flow and process complexity increase, demanding integrations that can handle expanding workloads without performance bottlenecks. Scalable integrations ensure seamless data exchange, real-time process automation, and interoperability between diverse platforms like CRM, ERP, and third-party services. They also provide the flexibility to adapt to evolving business needs, supporting digital transformation and innovation. Without scalable integration frameworks, enterprises risk inefficiencies, data silos, and high maintenance costs, limiting their ability to scale operations effectively.

Are you finding it challenging to scale your business operations efficiently? In this blog, we’ll look into key Azure Integration Services that can help overcome common integration hurdles. Before we get into AIS, let’s start with some business numbers—after all, money is what matters most to any business. Several organizations have reported significant cost savings and operational efficiencies after implementing Azure Integration Services (AIS). Here are some notable examples:

Measurable Business Benefits with AIS

A financial study evaluating the impact of deploying AIS found that organizations experienced benefits totalling $868,700 over three years.

Modernizing Legacy Integration: BizTalk to AIS

A financial institution struggling with outdated integration adapters transitioned to Azure Integration Services. By leveraging Service Bus for reliable message delivery and API Management for secure external API access, they reduced operational costs by 25% and improved system scalability. These examples demonstrate the substantial cost reductions and efficiency improvements that businesses can achieve by leveraging Azure Integration Services. To put this into perspective, we’ll explore real-world industry challenges and how Azure’s integration solutions can effectively resolve them.

Example 1: Secure & Scalable API Management for a Manufacturing Company

Scenario: A global auto parts manufacturer supplies components to multiple automobile brands and exposes APIs to its suppliers. However, they face serious challenges around security, versioning, and visibility. These are some simple top-level issues; there can be many more complexities.

Solution: Azure API Management (APIM). The manufacturer deploys Azure API Management to secure, manage, and monitor their APIs.

Step 1: Secure APIs – APIM enforces OAuth-based authentication so only authorized suppliers can access APIs. Rate limiting prevents overuse.
Step 2: API Versioning – Different suppliers use v1 and v2 of APIs. APIM ensures smooth version transitions without breaking old integrations.
Step 3: Analytics & Monitoring – The company gets real-time insights on API usage, detecting slow queries and bottlenecks.

Example 2: Reliable Order Processing with Azure Service Bus for an E-commerce Company

Scenario: A fast-growing e-commerce company processes over 50,000 orders daily across multiple sales channels (website, mobile app, and third-party marketplaces), and orders must be routed to inventory, payment, and fulfilment systems.

Solution: Azure Service Bus (Message Queueing). Instead of direct connections, the company decouples services using Azure Service Bus. (A code sketch of this pattern follows this article.)

Step 1: Queue-Based Processing – Orders are sent to an Azure Service Bus queue, ensuring no data loss even if systems go down.
Step 2: Asynchronous Processing – Inventory, payment, and fulfilment consume messages independently, avoiding system overload.
Step 3: Dead Letter Queue (DLQ) Handling – Failed orders are sent to a DLQ for retry instead of getting lost.

Example 3: Automating Invoice Processing with Logic Apps for a Logistics Company

Scenario: A global shipping company receives thousands of invoices from suppliers every month, which must be extracted, validated, approved, and posted.

Solution: Azure Logic Apps for End-to-End Automation. The company automates the entire invoice workflow using Azure Logic Apps.

Step 1: Extract Invoice Data – Logic Apps connects to Office 365 & Outlook, extracts PDFs, and uses AI-powered OCR to read invoice details.
Step 2: Validate Data – The system cross-checks invoice amounts and supplier details against purchase orders in the ERP.
Step 3: Approval Workflow – If all details match, the invoice is auto-approved. If there’s a discrepancy, it’s sent to finance via Teams for review.
Step 4: Update SAP & Notify Suppliers – Once approved, the invoice is automatically logged in SAP, and the supplier gets a payment confirmation email.

With Azure API Management, Service Bus, and Logic Apps, businesses can secure their APIs, decouple critical workloads, and automate end-to-end workflows. Many organizations are also shifting towards no-code solutions like Logic Apps for faster integrations. Whether you’re looking for API security, event-driven automation, or workflow orchestration, Azure Integration Services has a solution for you.

Azure Integration Services (AIS) is not just a collection of tools—it’s a game-changer for businesses looking to modernize their integrations, reduce operational costs, and improve scalability. From secure API management to reliable messaging and automation, AIS provides the flexibility and efficiency needed to handle complex business workflows seamlessly. The numbers speak for themselves—organizations have saved hundreds of thousands of dollars while improving their integration capabilities. Whether you’re looking to streamline supplier connections, optimize order processing, or migrate from legacy systems, AIS has a solution for you.

What’s Next?

In our next article, we’ll take a deep dive into a real-world scenario, showcasing how we helped our customer Buchi transform their integration landscape with Azure Integration Services. Next up: Why AIS? How Easily Azure Integration Services Can Adapt to Your EDI Needs.

Would love to hear your thoughts! How are you handling enterprise integrations today? Comment down below or contact us at transform@cloudfronts.com
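Here is a minimal sketch of the queue-based pattern from Example 2, assuming the Azure.Messaging.ServiceBus package. The queue name, connection setting, and payload are illustrative, not values from the article.

```csharp
using System;
using System.Threading.Tasks;
using Azure.Messaging.ServiceBus;

class OrderQueueDemo
{
    static async Task Main()
    {
        string conn = Environment.GetEnvironmentVariable("SERVICEBUS_CONNECTION");
        await using var client = new ServiceBusClient(conn);

        // Producer: the storefront enqueues the order instead of calling
        // inventory/payment/fulfilment services directly.
        ServiceBusSender sender = client.CreateSender("orders");
        await sender.SendMessageAsync(
            new ServiceBusMessage("{\"orderId\":\"12345\",\"total\":49.99}"));

        // Consumer: a downstream service processes messages at its own pace.
        ServiceBusReceiver receiver = client.CreateReceiver("orders");
        ServiceBusReceivedMessage msg = await receiver.ReceiveMessageAsync();
        try
        {
            Console.WriteLine($"Processing {msg.Body}");
            await receiver.CompleteMessageAsync(msg);
        }
        catch (Exception)
        {
            // Service Bus dead-letters messages automatically after repeated
            // failed deliveries; this makes the failure explicit instead.
            await receiver.DeadLetterMessageAsync(msg, "processing-failed");
        }
    }
}
```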


Infrastructure as Code (IaC): Azure Resource Manager Templates vs. Bicep

Infrastructure as Code (IaC) has become a cornerstone of modern DevOps practices, enabling teams to provision and manage cloud infrastructure through code. In the Azure ecosystem, two primary tools for implementing IaC are Azure Resource Manager (ARM) templates and Bicep. While both serve similar purposes, they differ significantly in syntax, usability, and functionality. This blog will compare these tools to help you decide which one to use for your Azure infrastructure needs.

Azure Resource Manager Templates

ARM templates have been the backbone of Azure IaC for many years. Written in JSON, they define the infrastructure and configuration for Azure resources declaratively.

Bicep

Bicep is a domain-specific language (DSL) introduced by Microsoft to simplify the authoring of Azure IaC. It is designed as a more user-friendly alternative to ARM templates. (A side-by-side snippet of the two syntaxes appears at the end of this post.)

Comparing ARM Templates and Bicep

Feature           | ARM Templates | Bicep
Syntax            | Verbose JSON  | Concise DSL
Modularity        | Limited       | Strong
Tooling Support   | Mature        | Rapidly Improving
Resource Support  | Full          | Full
Ease of Use       | Challenging   | Beginner-Friendly
Community Support | Extensive     | Growing

When to Use ARM Templates

ARM templates remain a solid choice for teams with mature, established JSON-based workflows.

When to Use Bicep

Bicep is ideal for teams that want concise syntax and strong modularity, especially those new to Azure IaC.

To conclude, both ARM templates and Bicep are powerful tools for managing Azure resources through IaC. ARM templates offer a mature, battle-tested approach, while Bicep provides a modern, streamlined experience. For teams new to Azure IaC, Bicep’s simplicity and modularity make it a compelling choice. However, existing users of ARM templates may find value in sticking with their current workflows or transitioning gradually to Bicep. Regardless of your choice, both tools are fully supported by Azure, ensuring that you can reliably manage your infrastructure in a consistent and scalable manner. Evaluate your team’s needs, skills, and project requirements to make the best decision for your IaC strategy. We hope you found this blog useful, and if you would like to discuss anything, you can reach out to us at transform@cloudfronts.com.
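To make the syntax difference concrete, here is the same storage account declared both ways; the resource name is illustrative. First, the ARM template JSON:

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "resources": [
    {
      "type": "Microsoft.Storage/storageAccounts",
      "apiVersion": "2023-01-01",
      "name": "stdemo001",
      "location": "[resourceGroup().location]",
      "sku": { "name": "Standard_LRS" },
      "kind": "StorageV2"
    }
  ]
}
```

And the equivalent Bicep:

```bicep
resource stg 'Microsoft.Storage/storageAccounts@2023-01-01' = {
  name: 'stdemo001'
  location: resourceGroup().location
  sku: { name: 'Standard_LRS' }
  kind: 'StorageV2'
}
```

The Bicep version compiles to the same ARM JSON, which is why the table above can list full resource support for both.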


Understanding and Using WEBSITE_CONTENTSHARE in Azure App Services

When deploying applications on Azure App Service, certain environment variables play a pivotal role in ensuring smooth operation and efficient resource management. One such variable is WEBSITE_CONTENTSHARE. In this blog, we will explore what WEBSITE_CONTENTSHARE is, why it matters, and how you can work with it effectively.

What is WEBSITE_CONTENTSHARE?

The WEBSITE_CONTENTSHARE environment variable is a unique identifier automatically generated by Azure App Service. It specifies the name of the Azure Storage file share used by an App Service instance when its content is deployed to an Azure App Service plan using shared storage, such as in a Linux or Windows containerized environment. This variable is particularly relevant for scenarios where application code and content are stored and accessed from a shared file system. It ensures that all App Service instances within a given plan have consistent access to the application’s files.

How WEBSITE_CONTENTSHARE Works

When you deploy an application to Azure App Service, the platform provisions a file share in the configured storage account and records its name in this variable. Example value: app-content-share1234, which points to a file share of that name in the configured Azure Storage account. (A CLI snippet for inspecting these settings appears at the end of this post.)

Configuring WEBSITE_CONTENTSHARE

While the WEBSITE_CONTENTSHARE variable is automatically managed by Azure, there are instances where you may need to adjust configurations.

Troubleshooting Common Issues

1. App Service Cannot Access File Share
2. Variable Not Set
3. File Share Quota Exceeded

To conclude, the WEBSITE_CONTENTSHARE variable is a crucial part of Azure App Service’s infrastructure, facilitating shared storage access for applications. By understanding its purpose, configuration, and best practices, you can ensure your applications leverage this feature effectively and run seamlessly in Azure’s cloud environment. We hope you found this blog useful, and if you would like to discuss anything, you can reach out to us at transform@cloudfronts.com.
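A quick way to inspect the value is the Azure CLI; the app and resource group names below are placeholders. WEBSITE_CONTENTAZUREFILECONNECTIONSTRING is the companion setting that holds the storage connection string for the share.

```bash
az webapp config appsettings list \
  --name my-app --resource-group my-rg \
  --query "[?name=='WEBSITE_CONTENTSHARE' || name=='WEBSITE_CONTENTAZUREFILECONNECTIONSTRING']"
```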


Understanding Azure Function Trigger Methods and Recurrence Syntax in Dynamics 365

Azure Functions are a vital component of serverless computing, offering the flexibility to run event-driven code without the need to manage infrastructure. When integrated with Dynamics 365, they provide a robust mechanism for automating processes and extending the platform’s functionality. This blog explores Azure Function trigger methods and recurrence syntax, highlighting their relevance in Dynamics 365 scenarios.

Azure Function Trigger Methods

Azure Functions can be triggered by various events. These triggers determine how and when the function executes. Here are some commonly used trigger methods in Dynamics 365 integrations (illustrative signatures for each are sketched at the end of this post):

1. HTTP Trigger
2. Queue Storage Trigger
3. Timer Trigger
4. Service Bus Trigger

Recurrence Syntax for Timer Triggers

Timer Triggers in Azure Functions rely on CRON expressions to define their schedule. Understanding this syntax is crucial for scheduling Dynamics 365-related tasks.

CRON Expression Format: {second} {minute} {hour} {day} {month} {day-of-week}

Examples:
- Run daily at 2:30 AM: 0 30 2 * * *
- Run every Monday at 9:00 AM: 0 0 9 * * 1

Key Points: Azure Functions use the six-field NCRONTAB format shown above, which adds a seconds field that standard five-field CRON lacks, and schedules are evaluated in UTC by default.

Integrating Azure Functions with Dynamics 365

To integrate Azure Functions with Dynamics 365, expose or consume the trigger that fits your scenario; for asynchronous processes, leverage Azure Storage Queues or Service Bus to manage workload distribution.

To conclude, Azure Functions, with their diverse trigger options, provide unmatched flexibility for extending Dynamics 365 capabilities. The recurrence syntax in Timer Triggers ensures that tasks are executed precisely when needed, enabling efficient process automation. By combining these tools, organizations can unlock the full potential of Dynamics 365 in their digital transformation journey. We hope you found this blog useful, and if you would like to discuss anything, you can reach out to us at transform@cloudfronts.com.
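The trigger examples did not survive formatting, so here are illustrative signatures for each trigger type, assuming the in-process model (Microsoft.Azure.WebJobs plus the matching extension packages). Function, queue, and setting names are placeholders.

```csharp
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public static class DynamicsTriggers
{
    // 1. HTTP trigger: e.g., called from a Dynamics 365 webhook or Power Automate.
    [FunctionName("HttpEntryPoint")]
    public static IActionResult RunHttp(
        [HttpTrigger(AuthorizationLevel.Function, "post")] HttpRequest req,
        ILogger log)
    {
        log.LogInformation("HTTP trigger fired.");
        return new OkResult();
    }

    // 2. Queue Storage trigger: processes messages enqueued by upstream integrations.
    [FunctionName("QueueEntryPoint")]
    public static void RunQueue(
        [QueueTrigger("d365-requests")] string message, ILogger log)
        => log.LogInformation("Queue message: {msg}", message);

    // 3. Timer trigger: NCRONTAB, {second} {minute} {hour} {day} {month} {day-of-week}.
    [FunctionName("NightlySync")]
    public static void RunTimer(
        [TimerTrigger("0 30 2 * * *")] TimerInfo timer, ILogger log) // daily, 2:30 AM
        => log.LogInformation("Timer trigger fired.");

    // 4. Service Bus trigger: reliable processing of Dynamics 365 events.
    [FunctionName("ServiceBusEntryPoint")]
    public static void RunServiceBus(
        [ServiceBusTrigger("d365-events")] string message, ILogger log)
        => log.LogInformation("Service Bus message: {msg}", message);
}
```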


How to Connect a Logic App with APIM

In a cloud-first world, seamless integrations are the backbone of modern applications. Azure Logic Apps and API Management (APIM) are two powerful tools that enable businesses to automate workflows and manage APIs effectively. By connecting Logic Apps to APIM, you can expose your automated workflows as APIs, ensuring they are secure, scalable, and easy to manage. In this blog, we’ll walk you through the process of integrating Logic Apps with APIM to maximize the potential of your Azure ecosystem.

1. What Are Logic Apps and API Management?

Logic Apps: Logic Apps is an Azure service for automating workflows, integrating various systems, and processing data efficiently. Whether it’s connecting SaaS apps, on-premises systems, or cloud services, Logic Apps excels at simplifying complex integrations.

API Management (APIM): APIM is an Azure service that allows you to publish, manage, secure, and monitor APIs. It acts as a gateway for APIs, providing essential features like throttling, caching, and access control.

2. Why Integrate Logic Apps with APIM?

Doing so lets you secure workflows with subscription keys and policies, reuse them as APIs across teams, and monitor usage centrally.

Step-by-Step Guide to Connecting Logic Apps with APIM

Step 1: Open Azure APIM and click on APIs.
Step 2: Click on Add API and choose Logic App from the Azure resources.
Step 3: Browse for the Logic App and give it a name in APIM.
Step 4: Click on Test to test the APIM request.
Step 5: Check the URL and send the request (a sample request is sketched at the end of this post). After sending the request from APIM, you can verify that the Logic App was triggered.

Conclusion

Integrating Azure Logic Apps with API Management is a game-changer for building secure, scalable, and manageable API-driven solutions. This integration empowers businesses to expose their workflows as reusable APIs, enhance security, and maintain centralized control. Ready to connect your Logic Apps with APIM? Start by designing a simple Logic App workflow and adding it to your API Management instance. If you need expert guidance, explore more Azure integration tips on our blog or reach out to us at transform@cloudfronts.com.
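For Step 5, a request like the following can be sent from outside the portal as well. The gateway URL and operation path are hypothetical (copy the real one from APIM’s Test tab); Ocp-Apim-Subscription-Key is APIM’s standard subscription header.

```bash
curl -X POST "https://my-apim.azure-api.net/my-logic-app/manual/paths/invoke" \
  -H "Ocp-Apim-Subscription-Key: <your-subscription-key>" \
  -H "Content-Type: application/json" \
  -d '{"message": "hello from APIM"}'
```

A 200-series response here, followed by a new run in the Logic App’s run history, confirms the integration end to end.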

