
Category Archives: Azure

Automate Azure Functions Flex Consumption Deployments with Azure DevOps and Azure CLI

Building low-latency, VNET-secured APIs with Azure Functions Flex Consumption is only the beginning. The next step toward modernization is setting up a DevOps release pipeline that automatically deploys your Function Apps, even across multiple regions, using Azure CLI. In this blog, we'll explore how to implement a CI/CD pipeline with Azure DevOps and Azure CLI to deploy Azure Functions (Flex Consumption), handle cross-platform deployment scenarios, and ensure global availability.

Step-by-Step Guide: Azure DevOps Pipeline for Azure Functions Flex Consumption

Step 1: Prerequisites
You'll need:

Step 2: Provision Function Infrastructure Using Azure CLI

Step 3: Configure Azure DevOps Release Pipeline

Important Note: Windows vs Linux in Flex Consumption
While creating your pipeline, you might notice a critical difference: the Azure Functions Flex Consumption plan only supports Linux environments. If your existing Azure Function was originally created on a Windows-based plan, you cannot use the standard "Azure Function App Deploy" DevOps task, as it assumes Windows compatibility and won't deploy successfully to Linux-based Flex Consumption. To overcome this, use Azure CLI commands (config-zip deployment) to upload and deploy your packaged function code. This method works regardless of the OS runtime and ensures smooth deployment to Flex Consumption Functions without compatibility issues.

Tip: Before migration, confirm that your Function's runtime stack supports Linux. Most modern stacks, including .NET 6+, Node.js, and Python, run natively on Linux in Flex Consumption.

Step 4: Secure Configurations and Secrets
Use Azure Key Vault integration to safely inject configuration values.

Step 5: Enable VNET Integration
If your Function App accesses internal resources, enable VNET integration.

Step 6: Multi-Region Deployment for High Availability
For global coverage, you can deploy your Function Apps to multiple regions using Azure CLI; a dynamic, parameterized version of the deployment script is recommended (a minimal sketch appears at the end of this post). This ensures consistent global rollouts across regions.

Step 7: Rollback Strategy
If deployment fails in a specific region, your pipeline can automatically roll back.

Best Practices
a. Use YAML pipelines for version-controlled CI/CD
b. Use Azure CLI for Flex Consumption deployments (Linux runtime only)
c. Add manual approvals for production
d. Monitor rollouts via Azure Monitor
e. Keep deployment scripts modular and parameterized

To conclude, automating deployments for Azure Functions Flex Consumption with Azure DevOps and Azure CLI gives you a consistent, repeatable release process that works across regions and runtimes. If your current Azure Function runs on Windows, remember: Flex Consumption supports only Linux-based plans, so CLI-based deployments are the way forward.

Next Step: Start with one Function App pipeline, validate it in a Linux Flex environment, and expand globally. For expert support in automating Azure serverless solutions, connect with CloudFronts, your trusted Azure integration partner. We hope you found this blog useful, and if you would like to discuss anything, you can reach out to us at transform@cloudfronts.com.
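For reference, here is a minimal, hedged sketch of the config-zip deployment step described above, written as a small Python wrapper around the Azure CLI so the same script can run in an Azure DevOps pipeline step and loop across regions. The resource group, app names, regions, and package path are placeholder assumptions, not values from the original post.

```python
# Minimal sketch: deploying a packaged Function App to Flex Consumption via
# Azure CLI (config-zip), driven from Python for use in a pipeline step.
import subprocess

RESOURCE_GROUP = "rg-functions-demo"      # hypothetical resource group
REGIONS = {
    "eastus": "func-demo-eastus",         # hypothetical per-region app names
    "westeurope": "func-demo-westeurope",
}
PACKAGE = "drop/functionapp.zip"          # zip produced by the build stage

def az(*args: str) -> None:
    """Run an Azure CLI command and fail the pipeline step on error."""
    subprocess.run(["az", *args], check=True)

for region, app_name in REGIONS.items():
    # config-zip deployment works for Linux-based Flex Consumption apps,
    # regardless of which OS the original Function App plan used.
    az(
        "functionapp", "deployment", "source", "config-zip",
        "--resource-group", RESOURCE_GROUP,
        "--name", app_name,
        "--src", PACKAGE,
    )
    print(f"Deployed {PACKAGE} to {app_name} in {region}")
```

Keeping the region-to-app mapping in one place makes the multi-region rollout (Step 6) and any rollback logic (Step 7) easier to parameterize.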


Why Modern Enterprises Are Standardizing on the Medallion Architecture for Trusted Analytics

Enterprises today are collecting more data than ever before, yet most leaders admit they don't fully trust the insights derived from it. Inconsistent formats, missing values, and unreliable sources create what's often called a data swamp: an environment where data exists but can't be used confidently for decision-making. Clean, trusted data isn't just a technical concern; it's a business imperative. Without it, analytics, AI, and forecasting lose credibility, and transformation initiatives stall before they start.

That's where the Medallion Architecture comes in. It provides a structured, layered framework for transforming raw, unreliable data into consistent, analytics-ready insights that executives can trust. At CloudFronts, a Microsoft and Databricks partner, we've implemented this architecture to help enterprises modernize their data estates and unlock the full potential of their analytics investments.

Why Data Trust Matters More Than Ever
CIOs and data leaders today face a paradox: while data volumes are skyrocketing, confidence in that data is shrinking. Poor data quality leads to:
In short, when data can't be trusted, every downstream process, from reporting to machine learning, is compromised. The Medallion Architecture directly addresses this challenge by enforcing data quality, lineage, and governance at every stage.

What Is the Medallion Architecture?
The Medallion Architecture is a modern, layered data design framework introduced by Databricks. It organizes data into three progressive layers, Bronze, Silver, and Gold, each refining data quality and usability. This approach ensures that every layer of data builds upon the last, improving accuracy, consistency, and performance at scale.

Inside Each Layer

Bronze Layer: Raw and Untouched
The Bronze Layer serves as the raw landing zone for all incoming data. It captures data exactly as it arrives from multiple sources, preserving lineage and ensuring that no information is lost. This layer acts as a foundational source for subsequent transformations.

Silver Layer: Cleansing and Transformation
At the Silver Layer, the raw data undergoes cleansing and standardization. Duplicates are removed, inconsistent formats are corrected, and business rules are applied. The result is a curated dataset that is consistent, reliable, and analytics-ready.

Gold Layer: Insights and Business Intelligence
The Gold Layer aggregates and enriches data around key business metrics. It powers dashboards, reporting, and advanced analytics, providing decision-makers with accurate and actionable insights.

Example: Data Transformation Across Layers

| Layer  | Data Example                                               | Processing Applied               | Outcome                |
|--------|------------------------------------------------------------|----------------------------------|------------------------|
| Bronze | Customer ID: 123, Name: Null, Date: 12-03-24 / 2024-03-12  | Raw data captured as-is          | Unclean, inconsistent  |
| Silver | Customer ID: 123, Name: Alex, Date: 2024-03-12             | Standardization & de-duplication | Clean & consistent     |
| Gold   | Customer ID: 123, Name: Alex, Year: 2024                   | Aggregation for KPIs             | Business-ready dataset |

This layered approach ensures data becomes progressively more accurate, complete, and valuable.

Building Reliable, Performant Data Pipelines
By leveraging Delta Lake on Databricks, the Medallion Architecture enables enterprises to unify streaming and batch data, automate validations, and ensure schema consistency, creating an end-to-end, auditable data pipeline. This layered approach turns chaotic data flows into a structured, governed, and performant data ecosystem that scales as business needs evolve.
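To make the layer responsibilities concrete, here is a minimal PySpark sketch of a Bronze-to-Silver-to-Gold flow on Delta Lake. The table names, columns, and paths are illustrative assumptions, not the client implementation described below.

```python
# Minimal sketch of a Bronze -> Silver -> Gold medallion flow on Delta Lake.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

# Bronze: land raw customer records exactly as received, preserving lineage.
bronze = spark.read.json("/mnt/landing/customers/")
bronze.write.format("delta").mode("append").saveAsTable("bronze.customers_raw")

# Silver: standardize dates, drop duplicates, apply basic business rules.
silver = (
    spark.table("bronze.customers_raw")
    .withColumn("date", F.to_date("date", "yyyy-MM-dd"))
    .dropDuplicates(["customer_id"])
    .filter(F.col("customer_id").isNotNull())
)
silver.write.format("delta").mode("overwrite").saveAsTable("silver.customers")

# Gold: aggregate into a business-ready, KPI-oriented dataset.
gold = (
    spark.table("silver.customers")
    .withColumn("year", F.year("date"))
    .groupBy("year")
    .agg(F.countDistinct("customer_id").alias("active_customers"))
)
gold.write.format("delta").mode("overwrite").saveAsTable("gold.customer_kpis")
```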
Client Example: Retail Transformation in Action
A leading hardware retailer in the Maldives faced challenges managing inventory and forecasting demand across multiple locations. They needed a unified data model that could deliver real-time visibility and predictive insights. CloudFronts implemented the Medallion Architecture using Databricks.

Results:

Key Benefits for Enterprise Leaders

Final Thoughts
Clean, trusted data isn't a luxury; it's the foundation of every successful analytics and AI strategy. The Medallion Architecture gives enterprises a proven, scalable framework to transform disorganized, unreliable data into valuable, business-ready insights. At CloudFronts, we help organizations modernize their data foundations with Databricks and Azure, delivering the clarity, consistency, and confidence needed for data-driven growth.

Ready to move from data chaos to clarity? Explore our Databricks Services or Talk to a Cloud Architect to start building your trusted analytics foundation today. We hope you found this blog useful, and if you would like to discuss anything, you can reach out to us at transform@cloudfronts.com.


Advantages and Future Scope of the Unified Databricks Architecture – Part 2

Following our unified data architecture implementation using Databricks Unity Catalog, the next step focuses on understanding the advantages and future potential of this Lakehouse-driven ecosystem. The architecture consolidates data from multiple business systems and transforms it into an AI-powered data foundation that will support advanced analytics, automation, and conversational insights.

Key Advantages
Centralized Governance: Unity Catalog provides complete visibility into data lineage, security, and schema control, eliminating silos.
Dynamic and Scalable Data Loading: A single Databricks notebook can dynamically load and transform data from multiple systems, simplifying maintenance.
Enhanced Collaboration: Teams across domains can access shared data securely while maintaining compliance and data accuracy.
Improved BI and Reporting: More than 30 Power BI reports are being migrated to the Gold layer for unified reporting.
AI & Automation Ready: The architecture supports seamless integration with GenAI tools like Genie for natural language Q&A and predictive insights.

Future Aspects
In the next phase, we aim to:
– Integrate Genie for conversational analytics.
– Enable real-time insights through streaming pipelines.
– Extend the Lakehouse to additional business sources.
– Automate AI-based report generation and anomaly detection.

For example, business users will soon be able to ask questions like: "How many hours did a specific resource submit in CRM time entries last week?" Databricks will process this query dynamically, returning instant, AI-driven insights.

To conclude, the unified Databricks architecture is more than a data pipeline; it is the foundation for AI-powered decision-making. By merging governance, automation, and intelligence, CloudFronts is building the next generation of data-first, AI-ready enterprise solutions.


Unified Data Architecture with Databricks Unity Catalog – Part 1

At CloudFronts Technologies, we are implementing a Unified Data Architecture powered by Databricks Unity Catalog to bring together data from multiple business systems into one governed, AI-ready platform. This solution integrates five major systems (Zoho People, Zoho Books, Business Central, Dynamics 365 CRM, and QuickBooks) using Azure Logic Apps, Blob Storage, and Databricks to build a centralized Lakehouse foundation.

Objective
To design a multi-source data architecture that supports:
– Centralized data storage via Unity Catalog.
– Automated ingestion through Azure Logic Apps.
– Dynamic data loading and transformation in Databricks.
– Future-ready integration for AI and BI analytics.

Architecture Overview
Data Flow Summary:
1. Azure Logic Apps extract data from each of the five sources via APIs.
2. Data is stored in Azure Blob Storage containers.
3. Blob containers are mounted to Databricks for unified access.
4. A dynamic Databricks notebook reads and processes data from all sources.
Each data source operates independently while following a governed and modular design, making the solution scalable and easily maintainable.

Role of Unity Catalog
Unity Catalog enables lineage tracking and secure access across teams. Each layer (Bronze: raw, Silver: refined, Gold: business-ready) is managed under Unity Catalog, ensuring clear visibility into data flow and ownership. This ensures that as data grows, governance and performance remain consistent across all environments.

Implementation Preview: In the upcoming blog, I will demonstrate the end-to-end implementation of one Power BI report using this unified Databricks architecture. This will include connecting the Gold layer dataset from Databricks to Power BI, building dynamic visuals, and showcasing how the unified data foundation simplifies report creation and maintenance across multiple systems.

To conclude, this architecture lays the foundation for a unified, governed, and scalable data ecosystem. By combining Azure Logic Apps, Blob Storage, and Databricks Unity Catalog, we are enabling a single source of truth that supports analytics, automation, and future AI innovations.
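As a rough illustration of the dynamic-notebook idea (not the exact production notebook), the sketch below loops over the five mounted source containers and lands each one as a Bronze Delta table under Unity Catalog. The mount points and the catalog/schema names are assumptions.

```python
# Minimal sketch: one dynamic Databricks notebook ingesting every mounted
# source container into Bronze tables governed by Unity Catalog.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

SOURCES = ["zoho_people", "zoho_books", "business_central", "d365_crm", "quickbooks"]

for source in SOURCES:
    # Each Logic App drops JSON extracts into a per-source Blob container,
    # mounted into the workspace at /mnt/<source>.
    df = spark.read.json(f"/mnt/{source}/")
    (
        df.write.format("delta")
        .mode("append")
        .saveAsTable(f"lakehouse.bronze.{source}")  # Unity Catalog: catalog.schema.table
    )
    print(f"Loaded {df.count()} records from {source} into bronze")
```

Adding a new source then becomes a matter of adding one container name to the list rather than building a new pipeline.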


Connecting Databricks to Power BI: A Step-by-Step Guide for Secure and Fast Reporting

Azure Databricks has become the go-to platform for data engineering and analytics, while Power BI remains the most powerful visualization tool in the Microsoft ecosystem. Connecting Databricks to Power BI bridges the gap between your data lakehouse and business users, enabling real-time insights from curated Delta tables. In this blog, we'll walk through the process of securely connecting Power BI to Databricks, covering both DirectQuery and Import mode, and sharing best practices for performance and governance.

Architecture Overview
The connection involves:
– Azure Databricks: your compute and transformation layer.
– Delta Tables: your curated, query-optimized data.
– Power BI Desktop / Service: the visualization and sharing platform.
Flow:
1. Databricks processes and stores curated data in Delta format.
2. Power BI connects directly to Databricks using the built-in connector.
3. Users consume dashboards that are either refreshed on a schedule (Import) or query live (DirectQuery).

Step 1: Get Connection Details from Databricks
In your Azure Databricks workspace:
1. Go to the Compute tab and open your cluster (or SQL Warehouse if using Databricks SQL).
2. Click on the Advanced → JDBC/ODBC tab.
3. Copy the Server Hostname and HTTP Path; you'll need these for Power BI.
For example:
– Server Hostname: adb-1234567890123456.7.azuredatabricks.net
– HTTP Path: /sql/1.0/endpoints/1234abcd5678efgh

Step 2: Configure a Databricks Personal Access Token (PAT)
Power BI uses this token to authenticate securely.
1. In Databricks, click your profile icon → User Settings → Developer → Access Tokens.
2. Click Generate New Token, provide a name and expiration, and copy the token immediately. (You won't be able to view it again.)

Step 3: Connect from Power BI Desktop
1. Open Power BI Desktop.
2. Go to Get Data → Azure → Azure Databricks.
3. In the connection dialog:
   – Server Hostname: paste from Step 1
   – HTTP Path: paste from Step 1
4. Click OK, and when prompted for credentials:
   – Select Azure Databricks Personal Access Token
   – Enter your token in the Password field.
You'll now see the list of Databricks databases and tables available for import.

To conclude, you've successfully connected Power BI to Azure Databricks, unlocking analytical capabilities over your Lakehouse. This setup provides the flexibility to work in Import mode for speed or DirectQuery mode for live data, all while maintaining enterprise security through Azure AD or Personal Access Tokens. We hope you found this blog useful, and if you would like to discuss anything, you can reach out to us at transform@cloudfronts.com.
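As an optional sanity check before opening Power BI, you can validate the same Server Hostname, HTTP Path, and token with the databricks-sql-connector Python package. This is only a minimal sketch; the hostname, path, and token below are the placeholders from the examples above.

```python
# Optional sanity check: confirm the hostname, HTTP path, and PAT work
# before entering them in Power BI. Install with: pip install databricks-sql-connector
from databricks import sql

with sql.connect(
    server_hostname="adb-1234567890123456.7.azuredatabricks.net",
    http_path="/sql/1.0/endpoints/1234abcd5678efgh",
    access_token="<your-personal-access-token>",
) as connection:
    with connection.cursor() as cursor:
        cursor.execute("SHOW TABLES IN default")  # the same tables Power BI will list
        for row in cursor.fetchall():
            print(row)
```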


How Delta Lake Keeps Your Data Clean, Consistent, and Future-Ready

Delta Lake is a storage layer that brings reliability, consistency, and flexibility to big data lakes. It enables advanced features such as Time Travel, Schema Evolution, and ACID Transactions, which are crucial for modern data pipelines.

| Feature           | Benefit                                                     |
|-------------------|-------------------------------------------------------------|
| Time Travel       | Access historical data for auditing, recovery, or analysis. |
| Schema Evolution  | Adapt automatically to changes in the data schema.          |
| ACID Transactions | Guarantee reliable and consistent data with atomic upserts. |

1. Time Travel
Time Travel allows you to access historical versions of your data, making it possible to "go back in time" and query past snapshots of your dataset.
Use Cases:
– Recover accidentally deleted or updated data.
– Audit and track changes over time.
– Compare dataset versions for analytics.
How it works: Delta Lake maintains a transaction log that records every change made to the table. You can query a previous version using either a timestamp or a version number (see the combined sketch at the end of this post).

2. Schema Evolution
Schema Evolution allows your Delta table to adapt automatically to changes in the data schema without breaking your pipelines.
Use Cases:
– Adding new columns to your dataset.
– Adjusting to evolving business requirements.
– Simplifying ETL pipelines when source data changes.
How it works: When enabled, Delta automatically updates the table schema if the incoming data contains new columns (see the sketch at the end of this post).

3. ACID Transactions (with Atomic Upsert)
ACID Transactions (Atomicity, Consistency, Isolation, Durability) ensure that all data operations are reliable and consistent, even in the presence of concurrent reads and writes. Atomic Upsert guarantees that an update or insert operation happens fully or not at all.
Key Benefits:
– No partial updates: either all changes succeed or none.
– Safe concurrent updates from multiple users or jobs.
– Consistent data for reporting and analytics.
– Atomic Upsert ensures data integrity during merges.
Atomic Upsert with MERGE (see the sketch at the end of this post):
– whenMatchedUpdateAll() updates existing rows.
– whenNotMatchedInsertAll() inserts new rows.
– The operation is atomic: either all updates and inserts succeed together or none do.

To conclude, Delta Lake makes data pipelines modern, maintainable, and error-proof. By leveraging Time Travel, Schema Evolution, and ACID Transactions, you can build robust analytics and ETL workflows with confidence, ensuring reliability, consistency, and adaptability in your data lake operations. We hope you found this blog useful, and if you would like to discuss anything, you can reach out to us at transform@cloudfronts.com.
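The combined PySpark sketch referenced above follows. Each numbered snippet is independent, and the silver.customers table, its columns, and the landing paths are illustrative assumptions.

```python
# Combined sketch of Time Travel, Schema Evolution, and an atomic MERGE upsert.
from pyspark.sql import SparkSession
from delta.tables import DeltaTable

spark = SparkSession.builder.getOrCreate()

# 1. Time Travel: query an earlier snapshot by version number or timestamp.
v0 = spark.sql("SELECT * FROM silver.customers VERSION AS OF 0")
snapshot = spark.sql("SELECT * FROM silver.customers TIMESTAMP AS OF '2024-03-12'")

# 2. Schema Evolution: allow incoming data with new columns to extend the schema.
incoming = spark.read.json("/mnt/landing/customers/")
(
    incoming.write.format("delta")
    .mode("append")
    .option("mergeSchema", "true")   # new columns are added instead of failing the write
    .saveAsTable("silver.customers")
)

# 3. ACID atomic upsert with MERGE: matched rows update and unmatched rows
#    insert in one transaction; if anything fails, nothing is applied.
updates = spark.read.json("/mnt/landing/customer_updates/")
target = DeltaTable.forName(spark, "silver.customers")
(
    target.alias("t")
    .merge(updates.alias("s"), "t.customer_id = s.customer_id")
    .whenMatchedUpdateAll()
    .whenNotMatchedInsertAll()
    .execute()
)
```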


Handling Errors and Retries in Dynamics 365 Logic App Integrations

Integrating Dynamics 365 (D365) with external systems using Azure Logic Apps is one of the most common patterns for automation. But in real-world projects, things rarely go smoothly: API throttling, network timeouts, and unexpected data issues are everyday challenges. Without proper error handling and retry strategies, these issues can result in data mismatches, missed transactions, or broken integrations. In this blog, we'll explore how to handle errors and implement retries in D365 Logic App integrations, ensuring your workflows are reliable, resilient, and production-ready.

Core Content

1. Why Error Handling Matters in D365 Integrations
Without handling these failures, your Logic App either fails silently or stops execution entirely, causing broken processes.

2. Built-in Retry Policies in Logic Apps
What they are: every Logic App action comes with a retry policy that can be configured to automatically retry failed requests (a minimal configuration sketch appears at the end of this post).
Best Practice:

3. Handling Errors with Scopes and "Run After"
Scopes in Logic Apps let you group actions and then define what happens if they succeed or fail.
Steps:
Example:

4. Designing Retry + Error Flow Together
Recommended Pattern:
This ensures no transaction is silently lost.

5. Handling Dead-lettering with Service Bus (Advanced)
For high-volume integrations, you may need a dead-letter queue (DLQ) approach:
This pattern prevents data loss while keeping integrations lightweight.

6. Monitoring & Observability
Error handling isn't complete without monitoring.

Building resilient integrations between D365 and Logic Apps isn't just about connecting APIs; it's about ensuring reliability even when things go wrong. By configuring retry policies, using scopes for error handling, and adopting dead-lettering for advanced cases, you'll drastically reduce downtime and data mismatches. Next time you design a D365 Logic App, don't just think about the happy path. Build error handling and retry strategies from the start, and you'll thank yourself later when your integration survives the unexpected. We hope you found this blog useful, and if you would like to discuss anything, you can reach out to us at transform@cloudfronts.com.
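As referenced in section 2, here is a minimal sketch of what an exponential retry policy looks like on a Logic App action's definition (code view), expressed as a Python dict for readability. The endpoint URL and the specific counts and intervals are illustrative assumptions, not recommended values for every workload.

```python
# Sketch of the retryPolicy block attached to a Logic App HTTP action's inputs,
# shown as a Python dict mirroring the workflow-definition JSON.
d365_action_inputs = {
    "method": "POST",
    "uri": "https://contoso.crm.dynamics.com/api/data/v9.2/accounts",  # hypothetical endpoint
    "retryPolicy": {
        "type": "exponential",      # other options include "fixed" and "none"
        "count": 4,                 # retry up to 4 times before failing the action
        "interval": "PT15S",        # base delay between attempts (ISO 8601 duration)
        "minimumInterval": "PT5S",
        "maximumInterval": "PT1H",
    },
}
```

Exponential backoff with a capped maximum interval is a reasonable default for throttled D365 endpoints; anything that still fails after the retries should flow into the scope / "run after" error path described in section 3.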


Seamless Automation with Azure Logic Apps: A Low-Code Powerhouse for Business Integration

In today's data-driven business landscape, fast, reliable, and automated data integration isn't just a luxury; it's a necessity. Organizations often deal with data scattered across various platforms like CRMs, ERPs, or third-party APIs. Manually managing this data is inefficient, error-prone, and unsustainable at scale. That's where Azure Logic Apps comes into play.

Why Azure Logic Apps?
Azure Logic Apps is a powerful workflow automation platform that enables you to design scalable, no-code solutions to fetch, transform, and store data with minimal overhead. With over 200 connectors (including Dynamics 365, Salesforce, SAP, and custom APIs), Logic Apps simplifies your integration headaches.

Use Case: Fetch Business Data and Dump to Azure Data Lake
Imagine this: you want to fetch real-time or scheduled data from Dynamics 365 Finance & Operations or a similar ERP system, and you want to store that data securely in Azure Data Lake for analytics or downstream processing in Power BI, Databricks, or Machine Learning models.

What About Other Tools Like ADF or Synapse Link?
Yes, there are other tools available in the Microsoft ecosystem, such as Azure Data Factory and Azure Synapse Link.

Why Logic Apps Is Better

What You Get with Logic Apps Integration

Business Value

To conclude, automating your data integration using Logic Apps and Azure Data Lake means spending less time managing data and more time using it to drive business decisions. Whether you're building a customer insights dashboard, forecasting sales, or optimizing supply chains, this setup gives you the foundation to scale confidently.

📧 Ready to modernize your data pipeline? Drop us a note at transform@cloudfronts.com; our experts are ready to help you implement the best-fit solution for your business needs.

👉 In our next blog, we'll walk you through the actual implementation of this Logic Apps integration, step by step, from connecting to Dynamics 365 to storing structured outputs in Azure Data Lake. Stay tuned!


Adding Functionality to an AI Foundry Agent with Logic Apps

AI-powered agents are quickly becoming the round-the-clock assistants of modern enterprises. They automate workflows, respond to queries, and integrate with data sources to deliver intelligent outcomes. But what happens when your agent needs to extend its abilities beyond what's built in? That's where Logic Apps come in. In this blog, we'll explore how you can add functionality to an AI Foundry Agent by connecting it with Azure Logic Apps, turning your agent into a truly extensible automation powerhouse.

Why Extend an AI Foundry Agent?
AI Foundry provides a framework to build, manage, and deploy AI agents in enterprise environments. By default, these agents can handle natural language queries and interact with pre-integrated data sources. However, business use cases often demand more:
To achieve this, you need a bridge between your agent and external systems. Azure Logic Apps is that bridge.

Enter Logic Apps
Azure Logic Apps is a cloud-based integration service that enables you to:
When integrated with AI Foundry Agents, Logic Apps can serve as external tools the agent can call dynamically.

Steps to achieve external integrations / extend functionality in AI Foundry Agents with Logic Apps:
1] Assuming your Agent Instructions and Knowledge Sources are ready, go to Actions under Knowledge.
2] In the pop-up window, select Azure Logic Apps; you can also use other actions based on your requirement.
3] Here you will see a list of Microsoft-authored as well as custom-built Logic App based tools. To be displayed here and be usable by the AI Foundry Agent, a Logic App should meet the following criteria:
a] It should preferably be on the Consumption plan,
b] It should have an HTTP Request trigger, at least one Action, and a Response,
c] In the Methods setting, select "Default (Allow all Methods)",
d] It should have a suitable description in the trigger,
e] It should have a Request Body (auto-generated if created directly from AI Foundry).
The developer can either create the trigger from AI Foundry or manually create a Logic App in the same Azure subscription as the AI Foundry project, observing these criteria.
4] For the scope of this blog, I am covering a simple requirement: getting the list of clients for the SmartPitch project in order to fetch case studies based on it. The Logic App tool meets the requirements for compatibility with Azure AI Foundry, with the required logic between the request and response.
5] Once the Logic App is successfully created, it will be visible in the Logic App Actions; select that Logic App to enable it as a tool.
6] Verify the details of the Logic App tool and proceed.
7] Next, you need to provide or verify the following information:
a) Tool Name: the name by which the Logic App will be accessible as a tool in the Agent,
b) Connection to the Agent (automatically assigned),
c) Description to invoke the Tool (Logic App): this is crucial for giving the agent the intent of when and how to use this Logic App, and what to expect from it. Provide as much detail as possible about the circumstances in which the tool should be called by the agent.
8] Once the tool is created, it will be visible in the Actions list and be ready for use.
Here, to check whether the intent is being understood and the tool is being called, I have specifically instructed the agent to mention the name of the tool along with its result. As shown in the screenshot, the tool is triggered successfully and the expected output is displayed.

Example Use Case: Smart Pitch Agent
Imagine your sales team uses an AI Foundry Agent (like "Smart Pitch Agent") to create tailored pitches. By connecting Logic Apps, you can enable the agent to:
We have already achieved this in the AI Agent above using the other Logic App tools. The aim is to expose each capability as a Logic App, and the agent calls them as tools in the conversation flow.

Benefits of This Approach

To conclude, by combining AI Foundry Agents with Azure Logic Apps, you unlock a powerful pattern:
Together, they create a flexible, extensible solution that evolves with your enterprise needs. I hope you found this blog useful, and if you would like to discuss anything, you can reach out to us at transform@cloudfronts.com.


Smarter Data Integrations Across Regions with Dynamic Templates

At CloudFronts Technologies, we understand that growing organizations often operate across multiple geographies and business units. Whether you're working with Dynamics 365 CRM or Finance & Operations (F&O), syncing data between systems can quickly become complex, especially when different legal entities follow different formats, rules, or structures. To solve this, our team developed a powerful yet simple approach: Dynamic Templates for Multi-Entity Integration.

The Business Challenge
When a global business operates in multiple regions (like India, the US, or Europe), each location may have different formats for project codes, financial categories, customer naming, or compliance requirements. Traditional integrations hardcode these rules, making them expensive to maintain and difficult to scale as your business grows.

Our Solution: Dynamic Liquid Templates
We built a flexible, reusable template system that automatically adjusts to each legal entity's specific rules, without the need to rebuild integrations for each one (a conceptual sketch appears at the end of this post). Here's how it works:

Why This Matters for Your Business

Real-World Success Story
One of our clients needed to integrate project data from CRM to F&O across three different regions. Instead of building three separate integrations, we implemented a single solution with dynamic templates. The result?

What Makes CloudFronts Different
At CloudFronts, we build future-ready integration frameworks. Our approach ensures you don't just solve today's problems but also prepare your business for tomorrow's growth. We specialize in Microsoft Dynamics 365, Azure, and enterprise-grade automation solutions. "Smart integrations are the key to global growth. Let's build yours."

We hope you found this blog useful, and if you would like to discuss anything, you can reach out to us at transform@cloudfronts.com.
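The approach described above uses dynamic Liquid templates inside the integration layer; the Python sketch below is only a conceptual illustration of the underlying idea, selecting a per-legal-entity mapping at runtime instead of hardcoding one integration per region. The entity codes and field rules are invented for illustration.

```python
# Conceptual sketch (not the actual CloudFronts implementation): one pipeline,
# many per-entity formats, chosen dynamically by legal entity.
ENTITY_TEMPLATES = {
    "IN": {"project_code": lambda p: f"IN-{p['number']:05d}", "currency": "INR"},
    "US": {"project_code": lambda p: f"US{p['number']}", "currency": "USD"},
    "EU": {"project_code": lambda p: f"EU/{p['number']}", "currency": "EUR"},
}

def transform_project(project: dict, legal_entity: str) -> dict:
    """Apply the template for the project's legal entity before sending to F&O."""
    template = ENTITY_TEMPLATES[legal_entity]
    return {
        "ProjectId": template["project_code"](project),
        "Currency": template["currency"],
        "Name": project["name"],
    }

# The same CRM payload flows through one integration, formatted per region.
crm_project = {"number": 42, "name": "Warehouse Expansion"}
print(transform_project(crm_project, "IN"))   # {'ProjectId': 'IN-00042', ...}
print(transform_project(crm_project, "US"))   # {'ProjectId': 'US42', ...}
```

Adding a new legal entity then means adding one template entry rather than building and maintaining another integration.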




