From Legacy Middleware Debt to AI Innovation: Rebuilding the Digital Backbone of a 150-Year-Old Manufacturer
Summary

A global manufacturing client was facing rising middleware costs, poor visibility, and growing pressure to support analytics and AI initiatives. A forced three-year middleware commitment became the trigger to rethink their integration strategy. This article shares how the client moved away from legacy middleware, reduced integration costs by nearly 95%, improved operational visibility, and built a strong data foundation for the future.

Table of Contents
1. The Middleware Cost Problem
2. Building a New Integration Setup
3. Making Integrations Visible
4. Preparing Data for AI
5. How We Did It
6. Savings Metrics

The Middleware Cost Problem

The client was running critical integrations on a legacy middleware platform that had gradually become a financial and operational burden. Licensing costs increased sharply, with annual fees rising from $20,000 to $50,000 and a mandatory three-year commitment pushing the total to $160,000. Despite the cost, visibility remained limited. Integrations behaved like black boxes, failures were difficult to trace, and teams relied on manual intervention to diagnose and fix issues. At the same time, the business was pushing toward better reporting, analytics, and AI-driven insights. These initiatives required clean and reliable data flows that the existing middleware could not provide efficiently.

Building a New Integration Setup

Legacy middleware and Scribe-based integrations were replaced with Azure Logic Apps and Azure Functions. The new setup was designed to support global operations across multiple legal entities. Separate DataAreaIDs were maintained for regions including TOUS, TOUK, TOIN, and TOCN. Branching logic handled country-specific account number mappings such as cf_accountnumberus and cf_accountnumberuk. An agentless architecture was adopted using Azure Blob Storage with Logic Apps. This removed firewall and SQL connectivity challenges and eliminated reliance on unsupported personal-mode gateways.

Making Integrations Visible

The previous setup offered no centralized monitoring, making it difficult to detect failures early. A Power BI dashboard built on Azure Log Analytics provided a clear view of integration health and execution status. Automated alerts were configured to notify teams within one hour of failures, allowing issues to be addressed before impacting critical business processes.

Preparing Data for AI

With stable integrations in place, the focus shifted from cost savings to long-term readiness. Clean data flows became the foundation for platforms such as Databricks and governance layers like Unity Catalog. The architecture supports conversational AI use cases, enabling questions like “Is raw material available for this production order?” to be answered from a unified data foundation. As a first step, 32 reports were consolidated into a single catalog to validate data quality and integration reliability.

How We Did It
1. Retrieve config.json and checkpoint.txt from Azure Blob Storage for configuration and state control.
2. Run incremental HTTP GET queries using ModifiedDateTime1 gt [CheckpointTimestamp].
3. Check for existing records using OData queries in target systems with keys such as ScribeCRMKey.
4. Transform data using Azure Functions with region-specific Liquid templates.
5. Write data securely using PATCH or POST operations with OAuth 2.0 authentication.
6. Update checkpoint timestamps in Azure Blob Storage after successful execution.
7. Log step-level success or failure using a centralized Logging Logic App (TO-UAT-Logs).
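To make the checkpoint pattern concrete, here is a minimal Python sketch of steps 1, 2, 3, 5, and 6. It is an illustration only: the container, blob, entity, and field names (integration-config, SalesOrderHeaders, crm_scribekey, and so on) are placeholder assumptions, and in the actual solution this logic is split across Logic Apps and Azure Functions rather than a single script.

# Illustrative only: checkpoint-driven incremental sync (placeholder names throughout).
import json
from datetime import datetime, timezone

import requests
from azure.storage.blob import BlobClient

CONN_STR = "<storage-connection-string>"  # in practice sourced from app settings or Key Vault
CONFIG = BlobClient.from_connection_string(CONN_STR, "integration-config", "config.json")
CHECKPOINT = BlobClient.from_connection_string(CONN_STR, "integration-config", "checkpoint.txt")


def transform(row: dict) -> dict:
    # Stand-in for the region-specific Liquid-template mapping (step 4).
    return {"name": row.get("SalesOrderNumber"), "crm_scribekey": row["ScribeCRMKey"]}


def run_incremental_sync(token: str) -> None:
    cfg = json.loads(CONFIG.download_blob().readall())
    last_run = CHECKPOINT.download_blob().readall().decode().strip()
    headers = {"Authorization": f"Bearer {token}", "Content-Type": "application/json"}

    # Step 2: pull only rows modified after the last checkpoint.
    src = f"{cfg['source_url']}/data/SalesOrderHeaders?$filter=ModifiedDateTime1 gt {last_run}"
    rows = requests.get(src, headers=headers, timeout=60).json().get("value", [])

    for row in rows:
        # Step 3: look up the record in the target system by its integration key.
        key = row["ScribeCRMKey"]
        lookup = f"{cfg['target_url']}/api/data/v9.2/salesorders?$filter=crm_scribekey eq '{key}'"
        existing = requests.get(lookup, headers=headers, timeout=60).json().get("value", [])

        # Step 5: PATCH if the record already exists, POST if it does not.
        if existing:
            url = f"{cfg['target_url']}/api/data/v9.2/salesorders({existing[0]['salesorderid']})"
            requests.patch(url, headers=headers, json=transform(row), timeout=60)
        else:
            requests.post(f"{cfg['target_url']}/api/data/v9.2/salesorders",
                          headers=headers, json=transform(row), timeout=60)

    # Step 6: advance the checkpoint only after a successful run.
    CHECKPOINT.upload_blob(datetime.now(timezone.utc).isoformat(), overwrite=True)

The same pattern extends naturally to step 7, with each step's outcome posted to the centralized logging Logic App.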
Savings Metrics
- 95% reduction in annual integration costs, from $50,000 to approximately $2,555.
- Approximately $140,000 in annual savings.
- Integrations across D365 Field Service, D365 Sales, D365 Finance & Operations, Shopify, and SQL Server.
- Designed to support modernization of more than 600 fragmented reports.

FAQs

Q: How does this impact Shopify integrations?
A: Azure Integration Services acts as the middle layer, enabling Shopify orders to synchronize into Finance & Operations and CRM systems in real time.

Q: Is the system secure for global entities?
A: Yes. The solution uses Azure AD OAuth 2.0 and centralized key management for all API calls.

Q: Can it handle attachments?
A: Dedicated Logic Apps were designed to synchronize CRM annotations and attachments to SQL servers located behind firewalls using an agentless architecture.

We hope you found this blog useful, and if you would like to discuss anything, you can reach out to us at transform@cloudfronts.com.
Bridging Project Execution and Finance: How PO F&O Connector Unlocks Full Value in Dynamics 365
In a world where timing, accuracy, and coordination make or break profitability, modern project-based enterprises demand more than isolated systems. You may be leveraging Dynamics 365 Project Operations (ProjOps) to manage projects, timesheets, and resource planning, and Dynamics 365 Finance & Operations (F&O) for financials, billing, and accounting. But without seamless integration, you’re stuck with manual transfers, data silos, and delayed insights. That’s where the PO F&O Connector app comes in: built to synchronize Project Operations and F&O end to end, bringing delivery and finance into perfect alignment. In this article, we’ll explore how it works, why it matters to CEOs, CFOs, and CTOs, and how adopting it gives you a competitive edge.

The Pain Point: Disconnected Project & Finance Workflows

When your project execution and financial systems aren’t talking, the result is missed revenue, resource inefficiencies, and poor visibility into project financial health.

The Solution: Cloudfronts Project-to-Finance Integration App

Cloudfronts’ new app is purpose-built to connect Project Operations → Finance & Operations seamlessly, automating the flow of project data into financial systems and enabling real-time, consistent delivery-to-finance synchronization. Key capabilities include:

Role | Core Benefits | Outcomes
CEO | Visibility into project margins and outcomes; faster time to value | Better strategic decisions, competitive agility
CFO | Automates billing, enforces accounting rules, ensures audit compliance | Revenue gets recognized faster, finance becomes a strategic enabler
CTO | Reduces custom integration burdens, ensures system integrity | Lower maintenance costs, scalable architecture

Beyond roles, your entire organization benefits through:

We hope you found this blog useful, and if you would like to discuss anything, you can reach out to us at transform@cloudfronts.com.
Automating Prepayment Handling in Business Central – Part 2
In Part 1, we explored the core logic of handling prepayment invoices in Business Central using AL. In this part, we will dive deeper into the practical implementation, focusing on how prepayments are applied, invoices are generated, and item charges are assigned. This blog breaks the logic down in a simplified yet complete way.

Why Automate Prepayments?

In real-world business scenarios, companies often pay vendors before the invoice is fully posted. Handling these prepayments manually is tedious and error-prone. Our AL code automates this process: it creates purchase invoices, handles prepayment lines, applies payments, and ensures that item charges are correctly assigned.

1. Event Subscriber: Trigger After Posting Purchase Document

The automation starts with an event subscriber that triggers after a purchase document is posted:

[EventSubscriber(ObjectType::Codeunit, Codeunit::"Purch.-Post", 'OnAfterPostPurchaseDoc', '', false, false)]
procedure OnAfterPostPurchaseDocHandler(var PurchaseHeader: Record "Purchase Header")
var
    Rec_PreppaymentLines: Record PrepaymentLinesandPayment;
    PurchInvoiceHeader: Record "Purchase Header";
    VendorInvoiceMap: Dictionary of [Code[20], Code[20]];
    VendorList: List of [Code[20]];
    VendorNo: Code[20];
begin
    // Collect the unique vendors that have prepayment lines on this purchase order
    Rec_PreppaymentLines.SetRange("Purchase Order No.", PurchaseHeader."No.");
    Clear(VendorList);
    if Rec_PreppaymentLines.FindSet() then
        repeat
            if not VendorList.Contains(Rec_PreppaymentLines."Vendor No.") then
                VendorList.Add(Rec_PreppaymentLines."Vendor No.");
        until Rec_PreppaymentLines.Next() = 0;

    // Process each vendor
    foreach VendorNo in VendorList do begin
        // Create a new invoice for the vendor, or reuse the one created earlier in this run
        if VendorInvoiceMap.ContainsKey(VendorNo) then
            PurchInvoiceHeader.Get(PurchInvoiceHeader."Document Type"::Invoice, VendorInvoiceMap.Get(VendorNo))
        else begin
            PurchInvoiceHeader := CreatePurchaseInvoiceHeader(VendorNo);
            VendorInvoiceMap.Add(VendorNo, PurchInvoiceHeader."No.");
        end;

        // Handle the prepayment lines that belong to this vendor
        Rec_PreppaymentLines.SetRange("Purchase Order No.", PurchaseHeader."No.");
        Rec_PreppaymentLines.SetRange("Vendor No.", VendorNo);
        if Rec_PreppaymentLines.FindSet() then
            repeat
                HandlePrepaymentLine(Rec_PreppaymentLines, PurchInvoiceHeader);
            until Rec_PreppaymentLines.Next() = 0;
    end;
end;

Key Takeaways:

2. Handling Prepayment Lines

The HandlePrepaymentLine procedure ensures each prepayment is processed correctly:

procedure HandlePrepaymentLine(var PrepaymentLine: Record PrepaymentLinesandPayment; var PurchHeader: Record "Purchase Header")
var
    PaymentEntryNo: Integer;
begin
    // Unapply previous payments, if any
    PaymentEntryNo := UnapplyPaymentFromPrepayInvoice(PrepaymentLine."Prepayment Invoice");
    if PaymentEntryNo = 0 then
        Error('Failed to unapply Vendor Ledger Entry for Document No. %1', PrepaymentLine."Prepayment Invoice");

    // Create the credit memo and the invoice line
    CreateCreditMemoLine(PrepaymentLine, PrepaymentLine."Prepayment Invoice");
    CreatePurchaseInvoiceLine(PurchHeader, PrepaymentLine);

    // Assign item charges and post
    AssignItemChargeToReceiptAndPost(PrepaymentLine, PurchHeader."No.", PrepaymentLine."Purchase Order No.");
end;

Highlights:
3. Applying Payments to Invoice

The ApplyPaymentToInvoice procedure ensures the invoice is linked with the correct prepayment:

procedure ApplyPaymentToInvoice(InvoiceNo: Code[20]; PaymentEntryNo: Integer)
var
    InvoiceEntry, VendLedEntry: Record "Vendor Ledger Entry";
    ApplyPostedEntries: Codeunit "VendEntry-Apply Posted Entries";
    ApplyUnapplyParameters: Record "Apply Unapply Parameters";
begin
    InvoiceEntry.SetRange("Document No.", InvoiceNo);
    InvoiceEntry.SetRange(Open, true);
    if InvoiceEntry.FindFirst() then begin
        VendLedEntry.SetRange("Entry No.", PaymentEntryNo);
        if VendLedEntry.FindFirst() then begin
            InvoiceEntry.Validate("Amount to Apply", InvoiceEntry."Remaining Amount");
            VendLedEntry.Validate("Amount to Apply", -InvoiceEntry."Remaining Amount");
            ApplyUnapplyParameters."Document No." := VendLedEntry."Document No.";
            ApplyPostedEntries.Apply(InvoiceEntry, ApplyUnapplyParameters);
        end;
    end;
end;

Benefits:

4. Assigning Item Charges

Item charges from receipts are automatically assigned to invoices:

procedure AssignItemChargeToReceiptAndPost(var PrepaymentLine: Record PrepaymentLinesandPayment; PurchInvoiceNo: Code[20]; PurchaseOrderNo: Code[20])
var
    PurchRcptLine: Record "Purch. Rcpt. Line";
    ItemChargeAssign: Record "Item Charge Assignment (Purch)";
begin
    PurchRcptLine.SetRange("Order No.", PrepaymentLine."Purchase Order No.");
    PurchRcptLine.SetFilter(Quantity, '>0');
    PurchRcptLine.SetRange("No.", PrepaymentLine."Item No.");
    if PurchRcptLine.FindSet() then
        repeat
            ItemChargeAssign.Init();
            ItemChargeAssign."Document No." := PurchInvoiceNo;
            ItemChargeAssign."Applies-to Doc. No." := PurchRcptLine."Document No.";
            ItemChargeAssign."Item Charge No." := PrepaymentLine."Item Charge";
            ItemChargeAssign."Qty. to Assign" := 1;
            ItemChargeAssign."Amount to Assign" := PrepaymentLine.Amount;
            ItemChargeAssign.Insert(true);
        until PurchRcptLine.Next() = 0;
end;

Outcome:

To conclude, implementing this automation can save significant time for finance teams while keeping processes accurate and transparent.

We hope you found this blog useful, and if you would like to discuss anything, you can reach out to us at transform@cloudfronts.com
PART 1 – Understanding the Core Logic Behind Automated Vendor Prepayments in Business Central
Managing prepayments can be challenging for businesses that work with multiple vendors on the same Purchase Order. In many industries, especially those that handle specialized procurement or complex supply chains, it is common for a single PO to include several lines that each involve a different vendor. This means every vendor has different payment terms, different prepayment requirements, and different financial workflows.

A client in the gas distribution industry had this exact issue: each PO line belonged to a different vendor, and every vendor required a separate prepayment invoice, payment, and auto-application before goods could be received. Because of strict financial controls and vendor requirements, nothing could be posted or received until each prepayment was correctly processed and applied.

Why Standard Business Central Was Not Enough

Business Central supports prepayments, but only at the Purchase Order header level, not line by line. This means BC assumes the entire PO is for a single vendor, which is not always true in real-world scenarios. In addition, standard BC does not automatically create the separate per-vendor prepayment invoices, payments, and applications this scenario requires, which forces users to handle each of those steps manually. Thus, managing prepayments became a manual and error-prone process. As the number of PO lines increased, the amount of duplicated work increased as well, leading to delays, mistakes, and inconsistencies across the system.

Our Solution – A Custom Prepayment Engine

To solve this, we built a customized “Prepayment Lines” page where users can manage prepayments at the line level instead of the header level. The page keeps every PO line’s prepayment details in one place, giving the user full control. When the user confirms, Business Central automatically creates the prepayment invoice, posts the payment, and applies it for each vendor line. All of this happens in a single automated process without requiring the user to manually open journal pages or vendor ledger entries.

To conclude, this transformed a lengthy, manual workflow into a fully automated one. What previously took many steps across multiple pages and required careful tracking is now processed reliably with one action, saving time, reducing errors, and ensuring that goods can be received without financial delays.

We hope you found this blog useful, and if you would like to discuss anything, you can reach out to us at transform@cloudfronts.com
Embedding AI Insights Directly into Power BI
Once the foundation of decision intelligence is established, the next step is embedding AI-generated insights directly into the tools business users already rely on. This is where Agent Bricks delivers maximum value.

Role of Agent Bricks

Agent Bricks operates through three core capabilities. The first is insight generation, where it identifies trends, detects anomalies, and calculates readiness or risk scores from analytical datasets. The second capability is contextual reasoning. Agent Bricks correlates KPIs across domains such as finance, operations, and projects. Instead of generic alerts, it produces explanations in clear business language that highlight root causes and implications. The third capability is automation. Insights can be generated on a schedule, triggered by events, or refreshed dynamically as data changes. This ensures intelligence remains timely and relevant.

Embedding AI Insights in Power BI

These AI-generated outputs are embedded directly into Power BI. Smart Narrative visuals can display explanations alongside charts. Text cards backed by Databricks tables can surface summaries and recommendations. In advanced scenarios, custom Power BI visuals can consume Agent Bricks APIs to provide near real-time intelligence. Business users receive insights without leaving their dashboards.

Use Case: AI-Driven Project Readiness Monitoring

A strong example of this approach is AI-driven Project Readiness Monitoring. Traditionally, readiness is assessed manually using fragmented indicators such as resource availability, budget usage, dependency status, and risk registers. Agent Bricks evaluates these signals holistically and generates a readiness score along with narrative explanations. Power BI displays not only the score but also why a project may not be ready and what actions should be taken next.
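To make the pattern concrete, here is a minimal, hypothetical PySpark sketch of a curated readiness table that Power BI text cards or Smart Narrative visuals could read. The table names, columns, and score weights are illustrative assumptions, not the actual Agent Bricks implementation.

# Hypothetical sketch: publish a readiness score and narrative to a gold table for Power BI.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

# Assumed curated input with one row per project (names are placeholders).
signals = spark.table("gold.project_readiness_signals")

readiness = (
    signals
    .withColumn(
        "readiness_score",  # illustrative weighting of the fragmented indicators
        F.round(0.4 * F.col("resource_availability_pct")
                + 0.3 * (100 - F.col("budget_used_pct"))
                + 0.3 * F.col("dependencies_cleared_pct"), 1))
    .withColumn(
        "narrative",
        F.concat(F.lit("Project "), F.col("project_id").cast("string"),
                 F.lit(" readiness is "), F.col("readiness_score").cast("string"),
                 F.lit("/100 with "), F.col("open_dependencies").cast("string"),
                 F.lit(" open dependencies.")))
)

# Power BI (import or DirectQuery) reads this table to drive score cards and text visuals.
readiness.write.mode("overwrite").saveAsTable("gold.project_readiness_insights")

In the Agent Bricks scenario described above, the score and narrative would come from the agent's reasoning rather than a fixed formula; the embedding pattern on the Power BI side stays the same.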
Business Impact

The business impact is significant. Decision latency is reduced, business users gain self-service intelligence, and organizations achieve greater ROI from Power BI investments.

To conclude, when AI insights are embedded directly into Power BI, analytics becomes actionable. Agent Bricks transforms raw metrics into contextual explanations, recommendations, and readiness signals that business users can trust. By combining insight generation, contextual reasoning, and automation, Agent Bricks turns Power BI reports into decision systems rather than static dashboards. The result is faster decisions, greater confidence, and measurable business impact. In a world where speed and clarity define competitive advantage, embedding AI-powered intelligence into everyday analytics tools is no longer optional—it is essential.

Final Thoughts

Organizations that successfully integrate AI reasoning into their analytics stack will move beyond reporting and into outcome-driven intelligence. Agent Bricks, paired with Power BI, provides a scalable and practical path to make that transition.

We hope you found this blog useful, and if you would like to discuss anything, you can reach out to us at transform@cloudfronts.com
Databricks Delta Live Tables vs Classic ETL: When to Choose What?
As data platforms mature, teams often face a familiar question: should we continue with classic ETL pipelines, or move to Delta Live Tables (DLT)? Both approaches work. Both are widely used. The real challenge is knowing which one fits your use case, not which one is newer or more popular. In this blog, I’ll break down Delta Live Tables vs classic ETL from a practical, project-driven perspective, focusing on how decisions are actually made in real data engineering work.

Classic ETL in Databricks

Classic ETL in Databricks refers to pipelines where engineers explicitly control each stage of data movement and transformation. The pipeline logic is written imperatively, meaning the engineer decides how data is read, processed, validated, and written. Architecturally, classic ETL pipelines usually follow the Medallion pattern, moving data through bronze, silver, and gold layers. Each step is executed explicitly, often as independent jobs or notebooks. Dependency management, error handling, retries, and data quality checks are all implemented manually or through external orchestration tools. This approach gives teams maximum freedom. Complex ingestion logic, conditional transformations, API integrations, and custom performance tuning are easier to implement because nothing is abstracted away. However, this flexibility also means consistency and governance depend heavily on engineering discipline.

We implemented a classic ETL pipeline in our internal Unity Catalog project, migrating 30+ Power BI reports from Dataverse into Unity Catalog to enable AI/BI capabilities. This architecture allows data to be consumed in two ways: through an agentic AI interface for ad-hoc querying, and through Power BI for governed, enterprise-grade visualizations. We chose the ETL approach because it provides strong data quality control, schema stability, and predictable performance at scale. It also allows us to apply centralized transformations, enforce governance standards, optimize storage formats, and ensure consistent semantic models across reporting and AI workloads, making it ideal for production-grade analytics and enterprise adoption.

Delta Live Tables

Delta Live Tables is a managed, declarative pipeline framework provided by Databricks. Instead of focusing on execution steps, DLT encourages engineers to define what tables should exist and what rules the data must satisfy. From an architectural perspective, DLT formalizes the Medallion pattern. Pipelines are defined as a graph of dependent tables rather than a sequence of jobs. Databricks automatically understands lineage, manages execution order, applies data quality rules, and provides built-in monitoring. DLT pipelines are particularly well suited to transformation and curation layers, where data is shared across teams and downstream consumers expect consistent, validated datasets. The platform takes responsibility for orchestration, observability, and failure handling, reducing operational overhead.

In my next blog, I will demonstrate how to implement Delta Live Tables (DLT) in a hands-on, technical way to help you clearly understand how it works in real-world scenarios. We will walk through the creation of pipelines, data ingestion, transformation logic, data quality expectations, and automated orchestration.
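As a preview of that declarative style, here is a minimal DLT sketch in Python. The source path, table names, and expectation rules are placeholder assumptions; the point is that you declare tables and quality rules, and Databricks handles ordering, retries, and monitoring.

# Illustrative DLT definition (placeholder paths, tables, and rules).
import dlt
from pyspark.sql import functions as F

@dlt.table(comment="Raw orders landed in the bronze layer")
def bronze_orders():
    # 'spark' is provided by the DLT runtime; Auto Loader ingests new files incrementally.
    return (spark.readStream.format("cloudFiles")
            .option("cloudFiles.format", "json")
            .load("/Volumes/landing/orders/"))

@dlt.table(comment="Validated, deduplicated orders for downstream consumers")
@dlt.expect_or_drop("valid_order_id", "order_id IS NOT NULL")
@dlt.expect_or_drop("positive_amount", "amount > 0")
def silver_orders():
    return (dlt.read("bronze_orders")
            .dropDuplicates(["order_id"])
            .withColumn("processed_at", F.current_timestamp()))

Compare this with the classic approach, where the same dependency ordering, deduplication, and quality checks would be written and orchestrated explicitly across notebooks or jobs.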
The Core Architectural Difference

The fundamental difference between classic ETL and Delta Live Tables is how responsibility is divided between the engineer and the platform. In classic ETL, the engineer owns the full lifecycle of the pipeline. This provides flexibility but increases maintenance cost and risk. In Delta Live Tables, responsibility is shared: the engineer defines structure and intent, while Databricks enforces execution, dependencies, and quality. This shift changes how pipelines are designed. Classic ETL is optimized for control and customization. Delta Live Tables is optimized for consistency, governance, and scalability.

When Classic ETL Makes More Sense

Classic ETL is a strong choice when pipelines require complex logic, conditional execution, or tight control over performance. It is well suited for ingestion layers, API-based data sources, and scenarios where transformations are highly customized or experimental. Teams with strong engineering maturity may also prefer classic ETL for its transparency and flexibility, especially when governance requirements are lighter.

When Delta Live Tables Is the Better Fit

Delta Live Tables excels when pipelines are repeatable, standardized, and shared across multiple consumers. It is particularly effective for silver and gold layers where data quality, lineage, and operational simplicity matter more than low-level control. DLT is a good architectural choice for enterprise analytics platforms, certified datasets, and environments where multiple teams rely on consistent data definitions.

A Practical Architectural Pattern

In real-world platforms, the most effective design is often hybrid. Classic ETL is used for ingestion and complex preprocessing, while Delta Live Tables is applied to transformation and curation layers. This approach preserves flexibility where it is needed and enforces governance where it adds the most value.

To conclude, Delta Live Tables is not a replacement for classic ETL. It is an architectural evolution that addresses governance, data quality, and operational complexity. The right question is not which tool to use, but where to use each. Mature Databricks platforms succeed by combining both approaches thoughtfully, rather than forcing a single pattern everywhere. Choosing wisely here will save significant rework as your data platform grows.

Need help deciding which approach fits your use case? Reach out to us at transform@cloudfronts.com
Power BI Bookmarks and Buttons: Creating Interactive Report Experiences
Modern Power BI reports are no longer just static dashboards. Business users expect reports to behave more like applications: interactive, guided, and easy to explore without technical knowledge. This is where Bookmarks and Buttons in Microsoft Power BI become powerful. Used correctly, they allow you to control report navigation, toggle views, show or hide insights, and create app-like experiences, all without writing DAX or code. This blog explains what bookmarks and buttons are, how they work together, and how to design interactive report experiences, using clear steps and visual snapshots.

What Are Power BI Bookmarks?

A bookmark in Power BI captures the state of a report page at a specific point in time, including filters, slicers, and the visibility of visuals. Think of a bookmark as a saved moment in your report that you can return to instantly.

What Are Power BI Buttons?

Buttons are interactive triggers that allow users to perform actions inside a report, such as navigating between pages or applying a bookmark. Buttons act as the user-facing control, while bookmarks store the logic behind what happens. On their own, buttons are simple. Combined with bookmarks, they unlock advanced interactivity.

Step-by-Step: Creating an Interactive View Toggle

Step 1: Design Visual States
Start by creating the different views you want on the same report page, then use the Selection Pane to show or hide visuals for each state.

Step 2: Create Bookmarks
Open the Bookmarks Pane and create a bookmark for each visual state. Review which elements each bookmark captures, and rename bookmarks clearly so their purpose is obvious.

Step 3: Add Buttons
Insert buttons from the Insert → Buttons menu, and label them clearly so users understand what each action does.

Step 4: Link Buttons to Bookmarks
Select a button, configure its Action with the type set to Bookmark, and choose the bookmark to apply. This is the point where interactivity is activated.

Common Interactive Scenarios

Bookmarks and buttons are commonly used to toggle between views, show or hide detail, and guide users through a report. These patterns reduce clutter and improve usability, especially for non-technical users.

To conclude, bookmarks and buttons transform Power BI reports from static dashboards into interactive, guided experiences. They allow report creators to design with intent, reduce user confusion, and present insights more effectively. When used thoughtfully, this feature bridges the gap between reporting and application-style analytics—without adding technical complexity. If you’re building reports for decision-makers, bookmarks and buttons are not optional anymore—they are essential.

Need help deciding how to design interactivity in your Power BI reports? Reach out to us at transform@cloudfronts.com
Seamlessly Switching Lead-Based BPFs After Qualification in Dynamics 365 CRM
In Microsoft Dynamics 365 CRM, Business Process Flows (BPFs) are powerful tools that guide users through defined business stages. However, when working with Lead-based BPFs that persist into Opportunity, certain platform limitations surface, especially when multiple Lead-rooted BPFs are involved. This blog walks through a real-world challenge I encountered while working with a Houston-based technology consulting and cybersecurity services firm that specializes in modern digital transformation and enterprise security solutions. I explore issues with Lead → Opportunity Business Process Flow (BPF) switching, explain why the out-of-the-box behavior falls short, and detail how I designed a robust client-side and server-side solution to safely and reliably switch BPFs, even after a Lead has already been qualified.

How BPFs Work (Quick Recap)

A BPF that starts on Lead persists into Opportunity after qualification, and the platform ideally won’t allow a switch to a different Lead-rooted BPF, whether forced from the client side or the server side.

The Problem

In my scenario:

The Challenge

Once a Lead is qualified:

This is non-intuitive, error-prone, and inefficient, especially considering the manual effort that goes into it.

Solution Overview

I implemented a guided, safe, and reversible BPF switching mechanism that:

High-Level Architecture

This solution uses:

Step-by-Step Methodology

1. Entry Point: Opportunity Ribbon Button
A custom ribbon button on the Opportunity form sets trigger fields on the related Lead. These fields act as a controlled handshake between Opportunity and Lead.

2. Lead OnLoad: Controlled Trigger Execution
On Lead form load:

// Ignore stale triggers: only continue if the handshake timestamp is recent
if (diffSeconds > 20) {
    return;
}

// Reset the trigger flag so the switch runs only once
Xrm.WebApi.updateRecord("lead", formContext.data.entity.getId(), {
    cf_shouldtrigger: false
});

This ensures:

3. Identifying and Aborting the Existing BPF
Before switching:

// Read the currently active BPF on the form
var activeProcess = formContext.data.process.getActiveProcess();

// "result" holds the active BPF instance record retrieved earlier from the BPF entity
Xrm.WebApi.updateRecord(bpfEntityName, result.entities[0].businessprocessflowinstanceid, {
    statecode: 1,  // Inactive
    statuscode: 3  // Aborted
});

This is a critical step—without aborting the old instance, Dynamics can behave unpredictably.

4. Switching the UI BPF
After aborting:

5. Handling BPF Instance Creation (First-Time Switch Logic)
The solution explicitly checks whether an instance of the target BPF already exists for the record.
If it exists:
If it does NOT exist (first switch):
This dual-path logic makes the solution idempotent and reusable.

6. Server-Side Plugin: Persisting the Truth
A plugin ensures that:

// Identify the BPF type
bool isNewBpf = (context.PrimaryEntityName == "new_bpf_entity");

// Resolve the related Lead
Guid leadId = isNewBpf
    ? ((EntityReference)target["bpf_leadid"]).Id
    : ((EntityReference)target["leadid"]).Id;

// Retrieve the related Opportunity via the Lead
Entity opportunity = GetOpportunityByLead(service, leadId);

// Determine stages and traversed path
string qualifyStageId = isNewBpf ? NEW_QUALIFY_STAGE : OLD_QUALIFY_STAGE;
string finalStageId = isNewBpf ? NEW_FINAL_STAGE : OLD_FINAL_STAGE;
string traversedPath = START_STAGE + "," + qualifyStageId + "," + finalStageId;

// PATCH 1 – Move the instance to the Qualify stage
service.Update(new Entity(target.LogicalName, target.Id)
{
    ["activestageid"] = new EntityReference("processstage", new Guid(qualifyStageId)),
    ["traversedpath"] = START_STAGE + "," + qualifyStageId
});

// PATCH 2 – Move to the final stage and bind the Opportunity
service.Update(new Entity(target.LogicalName, target.Id)
{
    ["activestageid"] = new EntityReference("processstage", new Guid(finalStageId)),
    ["traversedpath"] = traversedPath,
    [isNewBpf ? "bpf_opportunityid" : "opportunityid"] =
        new EntityReference("opportunity", opportunity.Id)
});

// Mark the Lead as successfully processed
service.Update(new Entity("lead", leadId)
{
    ["cf_pluginsuccess"] = new OptionSetValue(1) // Yes
});
This guarantees data consistency and auditability.

7. Final UI Sync & Redirect
After successful completion, the user is redirected back to the Opportunity:

Xrm.Navigation.openForm({ entityName: "opportunity", entityId: opportunityId });

From the user’s perspective: “I clicked a button, confirmed the switch, and landed back in my Opportunity—done.”

Why This Solution Works

✔ Respects Dynamics 365 BPF constraints
✔ Prevents orphaned or conflicting BPF instances
✔ Handles first-time and repeat switches
✔ Ensures server-side persistence
✔ Minimal user disruption
✔ Fully reversible

Most importantly, it bridges the gap between platform limitations and real business needs. Dynamics 365 BPFs are powerful, but when multiple Lead-rooted processes coexist, manual switching is not enough. This solution demonstrates how client-side scripting, a server-side plugin, and a controlled handshake between Opportunity and Lead can be combined to deliver a seamless, enterprise-grade experience without unsupported hacks. If you’re facing similar challenges with Lead → Opportunity BPF transitions, this pattern can be adapted and reused with confidence.

We hope you found this blog useful, and if you would like to discuss anything, you can reach out to us at transform@cloudfronts.com
Triggering Power Automate Flows Directly from Power BI Reports
Power BI is excellent at visualizing insights, but insights often need action. That’s where the Power Automate visual comes in. With this visual, report consumers can trigger instant Power Automate flows directly from a Power BI report, using the data and filters already applied on the page. No switching tools. No exporting data. Just click and act. This blog walks through how the Power Automate visual works, how to configure it, and what to consider before rolling it out.

Understanding Power Automate Visuals

The Power Automate visual adds a button to your Power BI report. When clicked, it runs an instant cloud flow. From a user’s perspective, it feels like a native action button inside Power BI.

Adding the Power Automate Visual

In Power BI Desktop, the visual can be added from the Visualizations pane or the Insert ribbon. Once added, it appears on the report page with built-in instructions. In the Power BI Service, the process is identical, and the button can be resized or repositioned like any other visual.

Choosing the Flow Environment

Before creating or attaching a flow, select the environment where the flow will live. Choosing the right environment upfront avoids permission and governance issues later.

Making the Flow Data-Contextual

One of the most powerful features of the Power Automate visual is data context: the data fields and filter selections on the report page can be passed into the flow as inputs. This makes flows responsive to how users are interacting with the report.

Creating or Editing the Flow

With the flow selected, add any data fields to the Power Automate Data region to use as dynamic inputs for the flow. Select More options (…) > Edit to configure the button. In edit mode of the visual, either select an existing flow to apply to the button, or create a new flow. You can start from scratch or start with one of the built-in templates as an example. To start from scratch, select New > Instant cloud flow, then select New step. Here, you can choose a subsequent action or specify a Control if you want to add more logic to determine the subsequent action. Optionally, reference the data fields as dynamic content if you want the flow to be data contextual. This example uses the Region data field to create an item in a SharePoint list; based on the end user’s selection, Region could have multiple values or just one. After you configure your flow logic, name the flow and select Save. Select the arrow button to go to the Details page of the flow you created, then select the Apply button to attach the flow to your button.

Formatting the Button

The Power Automate button is fully customizable, which allows it to match your report’s design and UX standards.

Test the Flow

After the flow is applied to the button, test it before sharing it with others. These Power BI flows can only run in the context of a Power BI report, so they cannot be run from the Power Automate web app or elsewhere. If the flow is data contextual, make sure to test how the filter selections in the report affect the flow outcome.

Sharing the Flow with Report Users

When the flow runs successfully, it can be shared with the intended users of the report. Alternatively, you can give users edit access to the flow, not just run permissions.

Considerations and Limitations

Before adopting the Power Automate visual, keep in mind the platform’s constraints around performance, security, and governance.
When to Use the Power Automate Visual

This pattern works best when you want users to act on an insight without leaving the report. In short, it bridges the gap between analysis and execution.

Final Thoughts

The Power Automate visual transforms Power BI from a read-only analytics tool into an interactive action surface: Analyze → Filter → Click → Automate. When used thoughtfully, it empowers users to act on insights at the exact moment they discover them — without breaking their flow.

We hope you found this blog useful, and if you would like to discuss anything, you can reach out to us at transform@cloudfronts.com
Transforming Lessor Reporting with Dynamics 365 Finance & Operations + Power BI
For global lessors, reporting is more than just a compliance requirement; it is a strategic capability. Investors, regulators, and executives all expect real-time insights into lease performance, profitability, and funding structures. With Microsoft Dynamics 365 Finance & Operations (F&O) combined with Power BI, traditional spreadsheets and disconnected tools can be replaced, and lessors can achieve compliance-ready reporting while unlocking deep financial and operational insights.

The Reporting Challenges Lessors Face

Lessor reporting must answer demanding questions about compliance, funding, billing, profitability, and renewals. Without automation, finance teams are left with manual reconciliations.

Dynamics 365 Finance & Operations + Power BI: The Reporting Engine

Compliance Reporting
Power BI dashboard with lease liabilities trend line and revenue recognition chart.

Funding & ROI Transparency
Stacked bar chart showing funding mix with KPI cards for ROI by source.

Billing & Revenue Recognition
Column chart comparing recurring vs usage-based revenue streams.

Profitability Analysis
Heatmap of profitability by customer with KPI margin %.

Renewal & Churn Insights

To conclude, with Dynamics 365 F&O and Power BI, reporting is no longer a back-office activity. Lessors can transform reporting into a strategic driver – ensuring compliance while delivering actionable insights that improve investor confidence and portfolio profitability.

We hope you found this blog useful, and if you would like to discuss anything, you can reach out to us at transform@cloudfronts.com
