Category Archives: Azure
Using OpenAI and Logic Apps to develop a Copilot agent for Elevator Pitches & Lead Qualification
In today’s competitive landscape, the ability to prepare quickly and deliver relevant, high-impact sales conversations is more critical than ever. Sales teams often spend valuable time gathering case studies, reviewing past opportunities, and preparing client-specific messaging — time that could be better spent engaging prospects. To address this, we developed “Smart Pitch” — a Microsoft Teams-integrated AI Copilot designed to equip our sales professionals with instant, contextual access to case studies, opportunity data, and procedural documentation.

Challenge
Sales professionals routinely face challenges such as:
These hurdles not only slow down the sales cycle but also affect the consistency and quality of conversations with prospects.

How It Works
Platform
Data Sources
CloudFronts Smart Pitch pulls information from the following knowledge sources:
AI Integration
Key Features

MQL – SQL Summary Generator
Users can request an MQL – SQL document, which contains:
The copilot prompts the user to provide the prospect name, contact person name, and client requirement. This is achieved via an adaptive card for a better UX.

HTTP Request to Logic App
In the Logic App, we use the ChatGPT API to fetch company and client information. We extract the company location from the company information and, similarly, extract the industry. The result is rendered to the custom copilot via the request to the Logic App, and a Generative Answers node displays it with proper formatting, driven by the prompt/agent instructions. Generative AI can also be instructed to directly produce formatted JSON from the parsed values. This formatted JSON is then converted to an actual JSON object and used to populate a Liquid template, dynamically creating an MQL-SQL document for every searched company and contact person. The result is an HTML file populated with the company and contact details, similar case studies, and past work with clients in a similar region and industry. This triggers an automatic download of the generated MQL-SQL document as a PDF file on your system.

Content Search
Users can ask questions related to –
1. Case Study FAQ: Helps users ask questions about client success stories and project case studies, retrieves relevant information from a knowledge source, and offers follow-up FAQs before ending the conversation. The CloudFronts official website is used for fetching case study information.
2. Opportunities: Helps users inquire about past projects or opportunities, detailing client names, roles, estimated revenue, and outcomes.
3. SOPs: Provides quick answers and summaries for frequently asked questions related to organizational processes and SOPs.
Users can ask questions like:
“Smart Pitch” searches SharePoint documents, public case studies, and the opportunity table to return relevant results — structured and easy to consume.

Security & Governance
Smart Pitch is integrated into Microsoft Teams, so it uses the same authentication as Teams. Access to Dataverse and SharePoint is read-only and scoped to organizational permissions.

To conclude, Smart Pitch reflects our commitment to leveraging AI to drive business outcomes. By combining Microsoft’s AI ecosystem with our internal data strategy, we’ve created a practical and impactful sales assistant that improves productivity, accelerates deal cycles, and enhances client engagement.

We hope you found this blog useful, and if you would like to discuss anything, you can reach out to us at transform@cloudfronts.com.
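As a small illustrative postscript: the post does not include the Logic App definition itself, but the sketch below shows, in C#, roughly the kind of HTTP call the Logic App makes to the ChatGPT API to fetch company details as structured JSON. The model name, prompt, and key handling are assumptions for illustration, not the exact configuration used in Smart Pitch.

```csharp
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Text.Json;
using System.Threading.Tasks;

public static class CompanyInfoClient
{
    // Hypothetical helper mirroring the Logic App's HTTP action: ask the ChatGPT API
    // for company details and return the model's reply (expected to be JSON).
    public static async Task<string> GetCompanyInfoAsync(string companyName, string apiKey)
    {
        using var http = new HttpClient();
        http.DefaultRequestHeaders.Authorization =
            new AuthenticationHeaderValue("Bearer", apiKey);

        var payload = new
        {
            model = "gpt-4o-mini", // assumption: any chat-completions model works here
            messages = new[]
            {
                new { role = "user", content =
                    $"Return a JSON object with the location and industry of the company '{companyName}'." }
            }
        };

        var response = await http.PostAsync(
            "https://api.openai.com/v1/chat/completions",
            new StringContent(JsonSerializer.Serialize(payload), Encoding.UTF8, "application/json"));
        response.EnsureSuccessStatusCode();

        // In the flow, this content is parsed and fed into the Liquid template.
        using var doc = JsonDocument.Parse(await response.Content.ReadAsStringAsync());
        return doc.RootElement
                  .GetProperty("choices")[0]
                  .GetProperty("message")
                  .GetProperty("content")
                  .GetString();
    }
}
```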
Copy On-Premises SQL Database to Azure SQL Server Using ADF: A Step-by-Step Guide
Migrating an on-premises SQL database to the cloud can streamline operations and enhance scalability. Azure Data Factory (ADF) is a powerful tool that simplifies this process by enabling seamless data transfer to Azure SQL Server. In this guide, we’ll walk you through the steps to copy your on-premises SQL database to Azure SQL Server using ADF, ensuring a smooth and efficient migration.

Prerequisites
Before you begin, ensure you have:

Step 1: Create an Azure SQL Server Database
First, set up your target database in Azure:

Step 2: Configure the Azure Firewall
To allow ADF to access your Azure SQL Database, configure the firewall settings:

Step 3: Connect Your On-Premises SQL Database to ADF
Next, use ADF Studio to link your on-premises database:

Step 4: Set Up a Linked Service
A Linked Service is required to connect ADF to your on-premises SQL database:

Step 5: Install the Integration Runtime for On-Premises Data
Since your data source is on-premises, you need an Integration Runtime:

Step 6: Verify and Test the Connection
Finally, ensure everything is set up correctly.

To conclude, migrating your on-premises SQL database to Azure SQL Server using ADF is a straightforward process when broken down into these steps. By setting up the database, configuring the firewall, and establishing the necessary connections, you can ensure a secure and efficient data transfer. With your data now in the cloud, you can leverage Azure’s scalability and performance to optimize your workflows. Happy migrating!

Please refer to our city council case study https://www.cloudfronts.com/case-studies/city-council/ to learn more about how we used Azure Data Factory and other Azure Integration Services to deliver seamless integration. We hope you found this blog post helpful! If you have any questions or want to discuss further, please contact us at transform@cloudfronts.com.
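As a small aside on Step 6: before testing the connection inside ADF, it can help to confirm that the target Azure SQL database is reachable with the credentials you plan to use. The sketch below (C#, Microsoft.Data.SqlClient, with placeholder server, database, and credentials) is one way to do that quick check; it is illustrative and not part of the ADF setup itself.

```csharp
using Microsoft.Data.SqlClient;

class ConnectionCheck
{
    static void Main()
    {
        // Placeholder values - replace with your Azure SQL server, database, and credentials.
        var connectionString =
            "Server=tcp:your-server.database.windows.net,1433;" +
            "Database=your-database;User ID=your-user;Password=your-password;" +
            "Encrypt=True;TrustServerCertificate=False;";

        using var connection = new SqlConnection(connectionString);
        connection.Open(); // Throws if the firewall rules or credentials are wrong.

        using var command = new SqlCommand("SELECT @@VERSION;", connection);
        System.Console.WriteLine(command.ExecuteScalar());
    }
}
```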
Error Handling in Azure Data Factory (ADF): Part 1
Azure Data Factory (ADF) is a powerful ETL tool, but when it comes to error handling, things can get tricky—especially when you’re dealing with parallel executions or want to notify someone on failure. In this two-part blog series, we’ll walk through how to build intelligent error handling into your ADF pipelines. This post—Part 1—focuses on the planning phase: understanding ADF’s behavior, the common pitfalls, and how to set your pipelines up for reliable error detection and notification. In Part 2, we’ll implement everything you’ve planned using ADF control flows.

Part 1: Planning for Failures

Step 1: Understand ADF Dependency Behavior
In ADF, activities can be connected via dependency conditions like:
When multiple dependencies are attached to a single activity, ADF uses an OR condition. However, if you have parallel branches, ADF uses an AND condition for the following activity—meaning the next activity runs only if all parallel branches succeed.

Step 2: Identify the Wrong Approach
Many developers attempt to add a “failure email” activity after each pipeline activity, assuming it will trigger if any activity fails. This doesn’t work as expected:

Step 3: Design with a Centralized Failure Handler in Mind
So, what’s the right approach? Plan your pipeline in a way that allows you to handle any failure from a centralized point—a dedicated failure handler. Here’s how:

Step 4: Plan Your Notification Strategy
Error detection is one half of the equation. The other half is communication. Ask yourself:
To conclude, start thinking about Logic Apps, Webhooks, or Azure Functions that you can plug in later to send customized notifications. We’ll cover the “how” in the next blog, but the “what” needs to be defined now.

Planning for failure isn’t pessimism—it’s smart architecture. By understanding ADF’s behavior and avoiding common mistakes with parallel executions, you can build pipelines that fail gracefully, alert intelligently, and recover faster. In Part 2, we’ll take this plan and show you how to implement it step-by-step using ADF’s built-in tools.

Please refer to our case study https://www.cloudfronts.com/case-studies/city-council/ to learn more about how we used Azure Data Factory and other Azure Integration Services to deliver seamless integration. We hope you found this blog post helpful! If you have any questions or want to discuss further, please contact us at transform@cloudfronts.com.
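To make the notification idea from Step 4 a little more concrete, here is a minimal sketch of the kind of Azure Function a centralized failure handler could call (for example from a Web activity) with the pipeline name and error details. The function name, payload shape, and Teams webhook URL are assumptions for illustration; Part 2 of the series covers the actual implementation.

```csharp
using System.IO;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;
using Microsoft.Extensions.Logging;

public static class NotifyPipelineFailure
{
    private static readonly HttpClient Http = new HttpClient();

    // Hypothetical HTTP-triggered function invoked by the ADF failure handler
    // with a small JSON payload (pipeline name, activity, error message).
    [FunctionName("NotifyPipelineFailure")]
    public static async Task<IActionResult> Run(
        [HttpTrigger(AuthorizationLevel.Function, "post")] HttpRequest req,
        ILogger log)
    {
        string body = await new StreamReader(req.Body).ReadToEndAsync();
        log.LogError("ADF pipeline failure payload: {Body}", body);

        // Forward the payload to a Teams incoming webhook (placeholder URL).
        var card = $"{{\"text\": \"ADF pipeline failure: {body}\"}}";
        await Http.PostAsync(
            "https://example.webhook.office.com/your-incoming-webhook",
            new StringContent(card, Encoding.UTF8, "application/json"));

        return new OkResult();
    }
}
```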
Automating File Transfers from Azure File Share to Blob Storage with a Function App
Efficient file management is essential for businesses leveraging Azure cloud storage. Automating file transfers between Azure File Share and Azure Blob Storage enhances scalability, reduces manual intervention, and ensures data availability. This blog provides a step-by-step guide to setting up an Azure Timer Trigger Function App to automate the transfer process.

Why Automate File Transfers?

Steps to Implement the Solution
1. Prerequisites
To follow this guide, ensure you have:
2. Create a Timer Trigger Function App
3. Install Required Packages
For C#:
For Python:
4. Implement the File Transfer Logic
C# Implementation
5. Deploy and Monitor the Function

To conclude, automating file transfers from Azure File Share to Blob Storage using a Timer Trigger Function streamlines operations and enhances reliability. Implementing this solution optimizes file management, improves cost efficiency, and ensures compliance with best practices. Begin automating your file transfers today! Need expert assistance? Reach out for tailored Azure solutions to enhance your workflow.

We hope you found this blog useful, and if you would like to discuss anything, you can reach out to us at transform@cloudfronts.com.
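Since the original C# implementation is not reproduced above, the following is a minimal sketch of what a timer-triggered transfer could look like using the Azure.Storage.Files.Shares and Azure.Storage.Blobs SDKs. The share name, container name, and schedule are assumptions; error handling and subdirectory traversal are left out to keep the sketch short.

```csharp
using System.Threading.Tasks;
using Azure.Storage.Blobs;
using Azure.Storage.Files.Shares;
using Azure.Storage.Files.Shares.Models;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public static class FileShareToBlobTransfer
{
    // Runs every 30 minutes (assumed schedule); copies files from the root of a
    // file share (assumed name "source-share") into a blob container ("archive").
    [FunctionName("FileShareToBlobTransfer")]
    public static async Task Run(
        [TimerTrigger("0 */30 * * * *")] TimerInfo timer,
        ILogger log)
    {
        string connectionString =
            System.Environment.GetEnvironmentVariable("AzureWebJobsStorage");

        ShareDirectoryClient shareDirectory =
            new ShareClient(connectionString, "source-share").GetRootDirectoryClient();
        var container = new BlobContainerClient(connectionString, "archive");
        await container.CreateIfNotExistsAsync();

        await foreach (ShareFileItem item in shareDirectory.GetFilesAndDirectoriesAsync())
        {
            if (item.IsDirectory) continue; // keep the sketch flat: skip subdirectories

            ShareFileClient file = shareDirectory.GetFileClient(item.Name);
            using var stream = await file.OpenReadAsync();
            await container.GetBlobClient(item.Name).UploadAsync(stream, overwrite: true);

            log.LogInformation("Copied {FileName} to blob storage.", item.Name);
        }
    }
}
```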
From Commit to Inbox: Automating Change Summaries with Azure AI
In our small development team, we usually merge code without formal pull requests. Instead, changes are committed directly by the developer responsible for the project, and while I don’t need to approve every change in my role as the senior developer, I still need to stay aware of what’s being merged. Manually reviewing each commit was becoming too time-consuming, so I built an automated process using Power Automate, Azure DevOps, and Azure AI. Now, whenever a commit is made, it triggers a workflow that summarizes the changes and sends me an email. This simple system keeps me informed without slowing down the team’s work.

Although I kept the automation straightforward, it could easily be extended further. For example, it could be improved to allow me to reply directly to the committer from the email, or even display file changes in detail using a text comparison feature in Outlook. We didn’t need that level of detail, but it’s a good option if deeper insights are ever required.

Journey
We get started with the Azure DevOps trigger “When a code is pushed”. Here we specify the organization name, project name, and repository name. We can also specify a particular branch if we want to limit tracking to just that branch; otherwise, it tracks all branches available to the user.

Then we have a for-each loop that iterates over the “Ref Updates” object array. It contains a list of all the changes but not the exact details. This action also appears automatically when we configure the next action.

Then we set up an “Azure DevOps REST API request to invoke” action. This action connects to Azure DevOps directly, so it is preferable to a plain REST API action. We specify the relative URL as {Repository Name}/_apis/git/repositories/{Repository ID}/commits/{Commit ID}/changes?api-version=6.0. The Commit ID shows up as newObjectId in the “When code is pushed” trigger.

Then we pass the output of this action to a “Create Text with GPT using a prompt” action under the AI Builder group. I’ve passed the prompt as below, but it took several trials and errors to get exactly what I wanted.

The last action is a simple “Send an email” one where I’ve kept myself as a recipient and added a subject and a body.

Now to put it all together and run it – and here is the final output – when the hyperlinks are clicked, they take me straight to Azure DevOps, pointing to the file being referred to. For instance, if I click on the Events Codeunit –

Conclusion
Summarizing commit changes is just one way automation can make life easier. This same idea can be applied to other tasks, like summarizing meeting notes, project updates, or customer feedback. With a bit of creativity, we can use tools like this to cut down on repetitive work and free up time to focus on learning new skills or tackling more challenging projects. By finding smart ways to streamline our workflows, we can work more efficiently and open up more time for growth and development.

If you need further assistance or have specific questions about your ERP setup, feel free to reach out for personalized guidance. We hope you found this blog useful, and if you would like to discuss anything, you can reach out to us at transform@cloudfronts.com.
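For readers curious what the “Azure DevOps REST API request to invoke” action is doing under the hood, here is a minimal C# sketch of the same commit-changes call made directly against the REST endpoint mentioned above. The organization, project, repository ID, commit ID, and PAT handling are placeholders; the flow itself uses the built-in connector rather than custom code.

```csharp
using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Threading.Tasks;

class CommitChangesClient
{
    static async Task Main()
    {
        // Placeholder values - all of these are assumptions for illustration.
        string organization = "your-org";
        string project = "your-project";
        string repositoryId = "your-repo-id";
        string commitId = "your-commit-id"; // newObjectId from the push event
        string pat = Environment.GetEnvironmentVariable("AZDO_PAT");

        using var http = new HttpClient();
        http.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue(
            "Basic", Convert.ToBase64String(Encoding.ASCII.GetBytes($":{pat}")));

        // Same endpoint the flow calls through the connector action.
        string url = $"https://dev.azure.com/{organization}/{project}/_apis/git/repositories/" +
                     $"{repositoryId}/commits/{commitId}/changes?api-version=6.0";

        string json = await http.GetStringAsync(url);
        Console.WriteLine(json); // this payload is what the GPT prompt summarizes
    }
}
```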
How to Recover Azure Function App Code
Azure Function Apps are a powerful tool for creating serverless applications, but losing the underlying code can be a stressful experience. Whether due to a missing backup, accidental deletion, or unclear deployment pipelines, the need to recover code becomes critical. Thankfully, even without backups, there are ways to retrieve and reconstruct your Azure Function App code using the right tools and techniques. In this blog, we’ll guide you through a step-by-step process to recover your code, explore the use of decompilation tools, and share preventive tips to help you avoid similar challenges in the future.

Step 1: Understand Your Function App Configuration

Step 2: Retrieve the DLL File
To recover your code, you need access to the compiled assembly file (DLL). From Kudu (Advanced Tools), navigate to the site/wwwroot/bin directory where the YourFunctionApp.dll file resides and download it.

Step 3: Decompile the DLL File
Once you have the DLL file, use a .NET decompiler to extract the source code by opening the .dll file in the decompiler and running it. The decompiler I have used here is dotPeek, which is a free .NET decompiler.

To conclude, recovering a Function App without backups might seem daunting, but by understanding its configuration, retrieving the compiled DLL, and using decompilation tools, you can successfully reconstruct your code. To prevent such situations in the future, enable source control by integrating your Function App with GitHub or Azure DevOps, or set up backups.

Please refer to our customer success story Customer Success Story – BUCHI | CloudFronts to know more about how we used the Function App and other Azure Integration Services to deliver seamless integration. We hope you found this blog post helpful! If you have any questions or want to discuss further, please contact us at transform@cloudfronts.com.
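If you would rather script Step 3 than use the dotPeek GUI described above, the open-source ICSharpCode.Decompiler NuGet package can produce C# from a DLL programmatically. The sketch below is an alternative approach under that assumption (the paths are placeholders); it is not the method used in the post itself.

```csharp
using System.IO;
using ICSharpCode.Decompiler;
using ICSharpCode.Decompiler.CSharp;

class DecompileFunctionApp
{
    static void Main()
    {
        // Placeholder path to the DLL downloaded from Kudu (site/wwwroot/bin).
        string assemblyPath = @"C:\recovered\YourFunctionApp.dll";

        // Decompile the whole module back into C# source text.
        var decompiler = new CSharpDecompiler(assemblyPath, new DecompilerSettings());
        string code = decompiler.DecompileWholeModuleAsString();

        File.WriteAllText(@"C:\recovered\YourFunctionApp.cs", code);
    }
}
```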
Real-Time Monitoring with Azure Live Metrics
In modern cloud-based applications, real-time monitoring is crucial for detecting performance bottlenecks, identifying failures, and maintaining application health. Azure Live Metrics is a powerful feature of Application Insights that allows developers and operations teams to monitor application telemetry with minimal latency. Unlike traditional logging and telemetry solutions that rely on post-processing, Live Metrics enables real-time diagnostics, reducing the time it takes to identify and resolve issues.

What is Azure Live Metrics?
Azure Live Metrics is a real-time monitoring tool within Azure Application Insights. It provides instant visibility into application performance without the overhead of traditional logging. Key features include:

Benefits of Azure Live Metrics
1. Instant Issue Detection
With real-time telemetry, developers can detect failed requests, exceptions, and performance issues instantly rather than waiting for logs to be processed.
2. Optimized Performance
Traditional logging solutions can slow down applications by writing large amounts of telemetry data. Live Metrics minimizes overhead by using adaptive sampling and streaming only essential data.
3. Customizable Dashboards
Developers can filter and customize Live Metrics dashboards to track specific KPIs, making it easier to diagnose performance trends and anomalies.
4. No Data Persistence Overhead
Unlike standard telemetry logging, Live Metrics does not require data to be persisted in storage, reducing storage costs and improving performance.

How to Enable Azure Live Metrics
To use Azure Live Metrics in your application, follow these steps:
Step 1: Install the Application Insights SDK
For .NET applications, install the required NuGet package:
For Java applications, include the Application Insights agent:
Step 2: Enable Live Metrics Stream
In your Application Insights resource, navigate to Live Metrics Stream and ensure it is enabled.
Step 3: Configure Application Insights
Modify your appsettings.json (for .NET) to include Application Insights:
For Azure Functions, set the APPLICATIONINSIGHTS_CONNECTION_STRING in Application Settings.
Step 4: Start Monitoring in the Azure Portal
Go to the Application Insights resource in the Azure Portal, navigate to Live Metrics, and start observing real-time telemetry from your application.

Key Metrics to Monitor
Best Practices for Using Live Metrics

To conclude, Azure Live Metrics is an essential tool for real-time application monitoring, providing instant insights into application health, failures, and performance. By leveraging Live Metrics in Application Insights, developers can reduce troubleshooting time and improve system reliability. If you’re managing an Azure-based application, enabling Live Metrics can significantly enhance your monitoring capabilities. Ready to implement Live Metrics? Start monitoring your Azure application today and gain real-time visibility into its performance!

We hope you found this blog useful, and if you would like to discuss anything, you can reach out to us at transform@cloudfronts.com.
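As a small illustration of Steps 1 and 3 for an ASP.NET Core application, the sketch below registers Application Insights telemetry via the Microsoft.ApplicationInsights.AspNetCore package, which also makes the Live Metrics stream available once telemetry starts flowing. The connection string value is a placeholder and would normally come from appsettings.json or an environment variable.

```csharp
using Microsoft.AspNetCore.Builder;
using Microsoft.Extensions.DependencyInjection;

var builder = WebApplication.CreateBuilder(args);

// Registers Application Insights telemetry collection; the Live Metrics stream
// is enabled by default once the SDK is sending data.
builder.Services.AddApplicationInsightsTelemetry(options =>
{
    // Placeholder - normally read from configuration or the
    // APPLICATIONINSIGHTS_CONNECTION_STRING environment variable.
    options.ConnectionString = "InstrumentationKey=00000000-0000-0000-0000-000000000000";
});

var app = builder.Build();

app.MapGet("/", () => "Hello from a Live Metrics-enabled app");

app.Run();
```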
Azure Integration Services (AIS): The Key to Scalable Enterprise Integrations
In today’s dynamic business environment, organizations rely on multiple applications, systems, and cloud services to drive operations, making scalable enterprise integrations essential. As businesses grow, their data flow and process complexity increase, demanding integrations that can handle expanding workloads without performance bottlenecks. Scalable integrations ensure seamless data exchange, real-time process automation, and interoperability between diverse platforms like CRM, ERP, and third-party services. They also provide the flexibility to adapt to evolving business needs, supporting digital transformation and innovation. Without scalable integration frameworks, enterprises risk inefficiencies, data silos, and high maintenance costs, limiting their ability to scale operations effectively.

Are you finding it challenging to scale your business operations efficiently? In this blog, we’ll look into key Azure Integration Services that can help overcome common integration hurdles. Before we get into AIS, let’s start with some business numbers—after all, money is what matters most to any business. Several organizations have reported significant cost savings and operational efficiencies after implementing Azure Integration Services (AIS). Here are some notable examples:

Measurable Business Benefits with AIS
A financial study evaluating the impact of deploying AIS found that organizations experienced benefits totalling $868,700 over three years. These included:
Here are some articles to support this data:

Modernizing Legacy Integration: BizTalk to AIS
A financial institution struggling with outdated integration adapters transitioned to Azure Integration Services. By leveraging Service Bus for reliable message delivery and API Management for secure external API access, they reduced operational costs by 25% and improved system scalability.

These examples demonstrate the substantial cost reductions and efficiency improvements that businesses can achieve by leveraging Azure Integration Services. To put this into perspective, we’ll explore real-world industry challenges and how Azure’s integration solutions can effectively resolve them.

Example 1: Secure & Scalable API Management for a Manufacturing Company
Scenario: A global auto parts manufacturer supplies components to multiple automobile brands. They expose APIs for:
Challenges: However, they are facing serious challenges. These are some simple top-level issues; there can be many more complexities.
Solution: Azure API Management (APIM)
The manufacturer deploys Azure API Management (APIM) to secure, manage, and monitor their APIs.
Step 1: Secure APIs – APIM enforces OAuth-based authentication so only authorized suppliers can access APIs. Rate limiting prevents overuse.
Step 2: API Versioning – Different suppliers use v1 and v2 of the APIs. APIM ensures smooth version transitions without breaking old integrations.
Step 3: Analytics & Monitoring – The company gets real-time insights on API usage, detecting slow queries and bottlenecks.
Result:

Example 2: Reliable Order Processing with Azure Service Bus for an E-commerce Company
Scenario: A fast-growing e-commerce company processes over 50,000 orders daily across multiple sales channels (website, mobile app, and third-party marketplaces). Orders are routed to:
Challenges:
Solution: Azure Service Bus (Message Queueing)
Instead of direct connections, the company decouples services using Azure Service Bus.
Step 1: Queue-Based Processing – Orders are sent to an Azure Service Bus queue, ensuring no data loss even if systems go down.
Step 2: Asynchronous Processing – Inventory, payment, and fulfilment services consume messages independently, avoiding system overload.
Step 3: Dead Letter Queue (DLQ) Handling – Failed orders are sent to a DLQ for retry instead of getting lost.
Result: (a minimal code sketch of the queue send appears at the end of this post)

Example 3: Automating Invoice Processing with Logic Apps for a Logistics Company
Scenario: A global shipping company receives thousands of invoices from suppliers every month. These invoices must be:
Challenges:
Solution: Azure Logic Apps for End-to-End Automation
The company automates the entire invoice workflow using Azure Logic Apps.
Step 1: Extract Invoice Data – Logic Apps connects to Office 365 & Outlook, extracts PDFs, and uses AI-powered OCR to read invoice details.
Step 2: Validate Data – The system cross-checks invoice amounts and supplier details against purchase orders in the ERP.
Step 3: Approval Workflow – If all details match, the invoice is auto-approved. If there’s a discrepancy, it’s sent to finance via Teams for review.
Step 4: Update SAP & Notify Suppliers – Once approved, the invoice is automatically logged in SAP, and the supplier gets a payment confirmation email.
Result:

With Azure API Management, Service Bus, and Logic Apps, businesses can:
Many organizations are also shifting towards no-code solutions like Logic Apps for faster integrations. Whether you’re looking for API security, event-driven automation, or workflow orchestration, Azure Integration Services has a solution for you.

Azure Integration Services (AIS) is not just a collection of tools—it’s a game-changer for businesses looking to modernize their integrations, reduce operational costs, and improve scalability. From secure API management to reliable messaging and automation, AIS provides the flexibility and efficiency needed to handle complex business workflows seamlessly. The numbers speak for themselves—organizations have saved hundreds of thousands of dollars while improving their integration capabilities. Whether you’re looking to streamline supplier connections, optimize order processing, or migrate from legacy systems, AIS has a solution for you.

What’s Next?
In our next article, we’ll take a deep dive into a real-world scenario, showcasing how we helped our customer Buchi transform their integration landscape with Azure Integration Services.
Next Up: Why AIS? How Easily Azure Integration Services Can Adapt to Your EDI Needs.
Would love to hear your thoughts! How are you handling enterprise integrations today? Comment below or contact us at transform@cloudfronts.com.
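To make Example 2 a little more concrete, here is a minimal sketch (C#, Azure.Messaging.ServiceBus) of what placing an order onto the queue might look like. The queue name, connection string, and order shape are assumptions for illustration, not details from the scenarios above.

```csharp
using System;
using System.Text.Json;
using System.Threading.Tasks;
using Azure.Messaging.ServiceBus;

class OrderPublisher
{
    // Sends an order to a Service Bus queue so downstream services (inventory,
    // payment, fulfilment) can process it independently.
    static async Task Main()
    {
        string connectionString = Environment.GetEnvironmentVariable("SERVICEBUS_CONNECTION");

        await using var client = new ServiceBusClient(connectionString);
        ServiceBusSender sender = client.CreateSender("orders"); // placeholder queue name

        var order = new { OrderId = "ORD-1001", Channel = "website", Amount = 59.90 };

        var message = new ServiceBusMessage(JsonSerializer.Serialize(order))
        {
            ContentType = "application/json",
            MessageId = "ORD-1001" // lets Service Bus de-duplicate if enabled on the queue
        };

        await sender.SendMessageAsync(message);
        Console.WriteLine("Order queued for processing.");
    }
}
```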
Infrastructure as Code (IaC): Azure Resource Manager Templates vs. Bicep
Infrastructure as Code (IaC) has become a cornerstone of modern DevOps practices, enabling teams to provision and manage cloud infrastructure through code. In the Azure ecosystem, two primary tools for implementing IaC are Azure Resource Manager (ARM) templates and Bicep. While both serve similar purposes, they differ significantly in syntax, usability, and functionality. This blog will compare these tools to help you decide which one to use for your Azure infrastructure needs.

Azure Resource Manager Templates
ARM templates have been the backbone of Azure IaC for many years. Written in JSON, they define the infrastructure and configuration for Azure resources declaratively.
Key Features:
Advantages:
Challenges:

Bicep
Bicep is a domain-specific language (DSL) introduced by Microsoft to simplify the authoring of Azure IaC. It is designed as a more user-friendly alternative to ARM templates.
Key Features:
Advantages:
Challenges:

Comparing ARM Templates and Bicep
Feature | ARM Templates | Bicep
Syntax | Verbose JSON | Concise DSL
Modularity | Limited | Strong support
Tooling | Mature | Rapidly improving
Resource Support | Full | Full
Ease of Use | Challenging | Beginner-friendly
Community Support | Extensive | Growing

When to Use ARM Templates
ARM templates remain a solid choice for:
When to Use Bicep
Bicep is ideal for:

To conclude, both ARM templates and Bicep are powerful tools for managing Azure resources through IaC. ARM templates offer a mature, battle-tested approach, while Bicep provides a modern, streamlined experience. For teams new to Azure IaC, Bicep’s simplicity and modularity make it a compelling choice. However, existing users of ARM templates may find value in sticking with their current workflows or transitioning gradually to Bicep. Regardless of your choice, both tools are fully supported by Azure, ensuring that you can reliably manage your infrastructure in a consistent and scalable manner. Evaluate your team’s needs, skills, and project requirements to make the best decision for your IaC strategy.

We hope you found this blog useful, and if you would like to discuss anything, you can reach out to us at transform@cloudfronts.com.
Understanding and Using WEBSITE_CONTENTSHARE in Azure App Services
When deploying applications on Azure App Service, certain environment variables play a pivotal role in ensuring smooth operation and efficient resource management. One such variable is WEBSITE_CONTENTSHARE. In this blog, we will explore what WEBSITE_CONTENTSHARE is, why it matters, and how you can work with it effectively.

What is WEBSITE_CONTENTSHARE?
The WEBSITE_CONTENTSHARE environment variable is a unique identifier automatically generated by Azure App Service. It specifies the name of the Azure Storage file share used by an App Service instance when its content is deployed to an Azure App Service plan using shared storage, such as in a Linux or Windows containerized environment. This variable is particularly relevant for scenarios where application code and content are stored and accessed from a shared file system. It ensures that all App Service instances within a given plan have consistent access to the application’s files.

Key Use Cases

How WEBSITE_CONTENTSHARE Works
When you deploy an application to Azure App Service:
Example Value: This value points to a file share named app-content-share1234 in the configured Azure Storage account.

Configuring WEBSITE_CONTENTSHARE
While the WEBSITE_CONTENTSHARE variable is automatically managed by Azure, there are instances where you may need to adjust configurations:

Troubleshooting Common Issues
1. App Service Cannot Access File Share
2. Variable Not Set
3. File Share Quota Exceeded

Best Practices

To conclude, the WEBSITE_CONTENTSHARE variable is a crucial part of Azure App Service’s infrastructure, facilitating shared storage access for applications. By understanding its purpose, configuration, and best practices, you can ensure your applications leverage this feature effectively and run seamlessly in Azure’s cloud environment.

We hope you found this blog useful, and if you would like to discuss anything, you can reach out to us at transform@cloudfronts.com.
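For the “Variable Not Set” troubleshooting case, one quick check is to read the setting from inside the running app itself. The sketch below (C#, runnable in any .NET app or function hosted on App Service) simply inspects the environment variable; the example value mentioned in the post is shown as an assumption in the comment.

```csharp
using System;

class ContentShareCheck
{
    static void Main()
    {
        // On App Service plans that use shared storage, this is populated
        // automatically (e.g. something like "app-content-share1234").
        string contentShare = Environment.GetEnvironmentVariable("WEBSITE_CONTENTSHARE");

        Console.WriteLine(string.IsNullOrEmpty(contentShare)
            ? "WEBSITE_CONTENTSHARE is not set - check the app's configuration."
            : $"Content is served from file share: {contentShare}");
    }
}
```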
