How to Use Dynamics 365 CRM Field-Level Security to Maintain Confidentiality of Intra-Organizational Data
Summary

In most CRM implementations, data exposure must be controlled both inside and outside the organization. Sales, Finance, Operations, HR, everyone works in the same system. Collaboration increases. Visibility increases. But so does risk. This post is based on real-world project experience: a practical example I implemented for a technology consulting and cybersecurity services firm based in Houston, Texas, USA, specializing in modern digital transformation and enterprise security solutions.

This blog explains:
1] Why Security Roles alone are not enough.
2] How users can still access data through Advanced Find and similar tools.
3] What Field-Level Security offers beyond entity-level restriction.
4] Step-by-step implementation.
5] The business advantages you gain.

Table of Contents
The Real Problem: Intra-Organizational Data Exposure
Implementation of Field-Level Security
Results
Why Was a Solution Required?
Business Impact

The Real Problem: Intra-Organizational Data Exposure

Let's take a practical cross-department scenario. Both X Department and Y Department work in the same CRM system built on Microsoft Dynamics 365.

Entities Involved
1] Entity 1
2] Entity 2

Working Model
X Department: fully owns and manages Entity 1, and occasionally needs to refer to specific information in Entity 2.
Y Department: fully owns and manages Entity 2, and occasionally needs to refer to specific information in Entity 1.

This is collaborative work. You cannot isolate the departments completely. But here is the challenge: each entity contains sensitive fields that should not be editable, or sometimes not even visible, to the other department. Security Roles in Microsoft Dynamics 365 operate at the entity (table) level, not at the field (column) level.

Approach | Result
Remove Write access to Entity 2 for X Dept | X Dept cannot update anything in Entity 2, even non-sensitive fields
Remove Read access to sensitive fields in Entity 2 | Not possible at the field level using Security Roles
Restrict Entity 2 entirely from X Dept | X Dept loses visibility, and collaboration breaks
Hide fields from the form only | Data is still accessible via Advanced Find or exports

This is the core limitation. Security Roles answer: "Can the user access this record?" They do NOT answer: "Which specific data inside this record can the user access?"

Implementation of Field-Level Security

Step 1: Go to your solution and identify the sensitive fields, usually personal information, facts and figures, etc., e.g. cf_proficiencyrating.
Step 2: Select the field and enable it for Field-Level Security (this is not possible for Microsoft out-of-the-box fields).
Step 3: Go to Settings and then select "Security".
Step 4: Under "Security", select "Field Security Profiles".
Step 5: Either create a new Field Security Profile or use an existing one, as required.
Step 6: Within the profile you can see all fields across Dataverse that are enabled for Field Security. Select your field and set its Create/Read/Update privileges (Yes/No).
Step 7: Then add the system users, or a Team containing the stakeholder users, and save the profile.

Results

Assume you are a user from X Department who opens an Entity 2 record, and you need to see only the Proficiency Rating and Characteristic Name, but not the Effective Date and Expiration Date. Since all of these fields have Field-Level Security enabled, they carry a key icon on the form; the fields for which you or your team lack read/write access show the key icon along with a "—" mask. The same thing happens in views, subgrids, and Advanced Find. A minimal client-side check of this behavior is sketched below.
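As a quick way to verify the behavior from the client side, the following is a minimal sketch, assuming a hypothetical table cf_entity2 with the secured column cf_proficiencyrating and a non-secured column cf_characteristicname (the names are placeholders, not from the original project). For a caller whose Field Security Profile denies Read, Dataverse returns the secured value as null rather than the stored data.

// Minimal sketch with placeholder schema names; run from a form script in a model-driven app.
function checkSecuredColumn(formContext) {
    var id = formContext.data.entity.getId().replace(/[{}]/g, "");
    Xrm.WebApi.retrieveRecord("cf_entity2", id, "?$select=cf_characteristicname,cf_proficiencyrating")
        .then(function (result) {
            // Non-secured column: value is returned normally.
            console.log("Characteristic Name: " + result.cf_characteristicname);
            // Secured column: users without Read permission in the Field Security Profile
            // receive null here instead of the stored value.
            console.log("Proficiency Rating: " + result.cf_proficiencyrating);
        })
        .catch(function (error) {
            console.log(error.message);
        });
}

The same masking applies when the column is pulled through Advanced Find, exports, or any other retrieve path, which is exactly what entity-level Security Roles cannot enforce on their own.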
Why Was a Solution Required?

The organization needed:
1] Cross-functional collaboration
2] Protection of confidential internal data
3] Clear separation of duties
4] No disruption to operational workflows

They required a solution that:
1] Did not block entity access
2] Did not require custom development
3] Enforced true data-level protection

Business Impact

1. Confidential Data Protection: Sensitive internal data was secured without restricting overall entity access, enabling controlled collaboration.
2. Reduced Internal Data Exposure Risk: Unauthorized users could no longer retrieve protected fields via Advanced Find, significantly lowering governance risk.
3. Clear Separation of Duties: Departmental ownership of sensitive fields was enforced without disrupting cross-functional visibility.
4. Improved Audit Readiness: Every modification to protected fields became traceable, strengthening accountability and compliance posture.
5. Reduced Operational Friction: System-enforced field restrictions eliminated the need for entity blocking, duplicate records, and manual approval workarounds.
6. Efficiency Gains: The solution was delivered through configuration, with no custom code, no complex business rules, and minimal maintenance overhead.

I hope you found this blog useful, and if you would like to discuss anything, you can reach out to us at transform@cloudFronts.com.
Plugin Class Code Recovery Using XrmToolBox and JetBrains dotPeek
In an ideal Dynamics 365 (Dataverse) project, plugin source code lives safely in a version-controlled repository, flows cleanly through Azure DevOps pipelines, and is always recoverable. In reality, many of us inherit environments where that discipline didn't exist. I recently worked with a customer in exactly that situation.

This created a common but uncomfortable challenge in the Dynamics 365 world: how do you maintain, debug, or enhance plugins when the source code is lost? Rewriting everything from scratch was risky and time-consuming. Guessing behavior based on runtime results wasn't reliable.

Fortunately, Dynamics 365 and the .NET ecosystem give us a practical and effective alternative. Using XrmToolBox and JetBrains dotPeek, it is possible to recover readable C# plugin code directly from the deployed assembly. (The recovered C# class code won't be 100% exact, as the variable names will be different and generic; it is suitable for recovering the logic, structure, and functionality closely rather than verbatim.)

The Practical Solution
The approach consists of two main steps. This does not magically restore comments or original formatting, but it does give you working, understandable code that closely reflects the original plugin logic.

Tools Used

Step 1: Extract the Plugin Assembly from Dataverse
1. Connect to the environment
2. Load the Assembly Recovery Tool
3. Download the DLL
At this point, you have successfully recovered the compiled plugin assembly exactly as it exists in the environment.

Step 2: Decompile the DLL Using JetBrains dotPeek
1. Open dotPeek
2. Explore the decompiled code
dotPeek reconstructs the classes and methods, and one can now browse through the plugin's logic. This is usually more than enough to understand how the plugin works.
3. Export to a Visual Studio Project (Optional but Recommended)
One of dotPeek's most powerful features is Export to Project: this gives you a proper .csproj with class files that you can open, build, and extend.

Possibilities with the Recovered Code
Once you have the decompiled C# code, several options open up:
1. Rebuild the plugin assembly
2. Re-register the plugin
3. Maintain or enhance functionality

Important Considerations

Key Takeaway
Losing plugin source code does not mean losing control of your Dynamics 365 solution. With XrmToolBox's Assembly Recovery Tool and JetBrains dotPeek, you can recover, understand, and continue to maintain the deployed logic. Any developer working with Dynamics 365 technologies might face this situation at some point. Knowing this technique can save days, or even weeks, of effort and give your customer confidence that their system remains fully supportable.

We hope you found this blog useful, and if you would like to discuss anything, you can reach out to us at transform@cloudfronts.com
Browser-Level State Retention in Dynamics 365 CRM: Improving Performance & UX with Session Storage
Dynamics 365 model-driven apps are excellent at storing business data, but not every piece of information belongs in Dataverse. A common design mistake is using Dataverse fields to store temporary UI state, things like selected views, filters, or user navigation preferences. While this works technically, it introduces unnecessary performance overhead and can create incorrect behavior in multi-user environments.

In this blog, I'll focus on browser-level retention of CRM UI data using sessionStorage, with subgrid view retention as a practical example implemented for a technology consulting and cybersecurity services firm based in Houston, Texas, USA, that specializes in modern digital transformation and enterprise security solutions.

The Real Problem: UI State vs Business Data
Let's separate the concerns clearly:

Type | Example | Where it should live
Business data | Status, owner, amounts | Dataverse
UI state | Selected view, filter, scroll position | Browser

Subgrid views fall squarely into the UI state category.

Scenario: Subgrid View Resetting on Navigation
Users reported the following behavior: every time they navigated away from a record and returned, the subgrid's selected view reset to its default. This breaks user workflow, especially for records that users revisit frequently.

Possible Solution: Persisting UI State in Dataverse (Original Approach)
One way to attempt a fix is to store the selected subgrid view GUID in a Dataverse field on the parent record.

How It Works
Why this might look reasonable

The Hidden Problems
1] Slower form execution
2] Data model pollution
3] Incorrect multi-user behavior
4] Scalability issues

In short, Dataverse was doing work it should never have been asked to do.

The Better Boundary: Keep UI State in the Browser for the Session
Practically, the selected subgrid view belongs to the user's session, not to the record. Once that boundary is respected, the solution becomes much cleaner.

Practical Solution: Browser Session Storage (Improved Approach)
Instead of persisting the view selection in Dataverse, we store it locally in the browser using sessionStorage. sessionStorage is part of the Web Storage API, which provides a way to store key-value pairs in a web browser. Unlike localStorage, which persists data even after the browser is closed, sessionStorage is designed to store data only for the duration of a single session. This means the data is available as long as the tab or window is open, and it is cleared when the tab or window is closed.

Why Session Storage?

How the Improved Solution Works
1. Store the view locally on subgrid view change.
2. Restore the view on form/grid load.
This ensures that when the user revisits the form, the subgrid opens exactly where they left off. (A minimal sketch of both steps is included at the end of this post.)

Why This Approach Is Superior
1] Faster execution
2] Correct user experience
3] Clean architecture
4] Zero backend impact

When to Use Browser-Level Retention
Use this pattern when:
Examples:

To conclude, not all data deserves to live in Dataverse. When you store UI state in the browser instead of the database, you gain faster execution, a correct per-user experience, a cleaner architecture, and zero backend impact. Subgrid view retention is just one example, but the principle applies broadly across Dynamics 365 customizations.

We hope you found this blog useful, and if you would like to discuss anything, you can reach out to us at transform@cloudfronts.com
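Here is the minimal sketch referenced above. It is illustrative only: the subgrid control name "contact_subgrid", the per-record session key format, and the 100 ms retry interval are assumptions, not taken from the original implementation. The getViewSelector, getCurrentView, and setCurrentView calls are standard model-driven app Client API methods.

// Step 1: store the currently selected subgrid view in sessionStorage.
// Register on the subgrid's OnLoad event so it captures the selection whenever the grid refreshes.
function storeSubgridView(executionContext) {
    var formContext = executionContext.getFormContext();
    var grid = formContext.getControl("contact_subgrid"); // hypothetical control name
    if (!grid || !grid.getViewSelector) { return; }
    var view = grid.getViewSelector().getCurrentView(); // { id, name, entityType }
    var key = "subgridView_" + formContext.data.entity.getId();
    sessionStorage.setItem(key, JSON.stringify(view));
}

// Step 2: restore the stored view on form load; retry briefly because subgrids load after the form.
function restoreSubgridView(executionContext) {
    var formContext = executionContext.getFormContext();
    var key = "subgridView_" + formContext.data.entity.getId();
    var saved = sessionStorage.getItem(key);
    if (!saved) { return; }
    var attempt = function () {
        var grid = formContext.getControl("contact_subgrid");
        if (grid && grid.getViewSelector) {
            grid.getViewSelector().setCurrentView(JSON.parse(saved));
        } else {
            setTimeout(attempt, 100); // subgrid not rendered yet, try again shortly
        }
    };
    attempt();
}

Registering storeSubgridView on the subgrid OnLoad and restoreSubgridView on the form OnLoad (both with "Pass execution context as first parameter" checked) keeps the whole mechanism in the browser, with no Dataverse writes at all.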
Exposing Plugins as Bound Actions for Power Automate Flows: A Practical Procedure for Efficient Processing Across Multiple Records
In complex business processes, like calculating commissions or validating data across multiple records, applying the same logic repeatedly in a Power Automate flow can quickly become inefficient and difficult to maintain. A more scalable approach is to encapsulate the logic in a Dataverse plugin, expose it as a bound action, and then call this action from a flow. This method centralizes business rules, reduces redundancy, and improves maintainability.

In this post, we'll walk through the steps to implement this approach and examine its advantages over applying the same logic directly within a flow for each individual record. We'll illustrate this with a practical example from a Houston-based technology consulting and cybersecurity services firm that specializes in modern digital transformation and enterprise security solutions.

Flow Diagram

Step 1: Create the Plugin
The first step is to write a plugin that contains the logic you want to apply to each record. Example: DuplicateCommissionsCounter

Step 2: Expose the Plugin as a Bound Action
Instead of running the plugin logic manually for each record, you can register it as a bound action in Dataverse.
Procedure:
1. Create a new action bound to the target table, e.g. Invoice.
2. Attach your plugin to this action.
Outcome: This exposes your plugin logic as a reusable, callable bound action. Any process or flow can now invoke it for a specific invoice record. (A sketch of what the underlying call looks like is included at the end of this post.)

Step 3: Use Power Automate to Call the Bound Action
Once the plugin is exposed, you can loop through multiple records in a flow and call the action.
Procedure in Power Automate:
This approach ensures that all complex logic resides in the plugin, while the flow orchestrates which records need processing.

Advantages Over Logic Directly in the Flow

To conclude, exposing plugins as bound actions is a robust, maintainable way to apply complex logic across multiple records in Dataverse. It allows Power Automate flows to focus on orchestration rather than logic execution, leading to cleaner, faster, and easier-to-manage solutions.

We hope you found this blog useful, and if you would like to discuss anything, you can reach out to us at transform@cloudfronts.com
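For reference, the Dataverse "Perform a bound action" step in Power Automate ultimately issues the same request any other Dataverse client can make. The sketch below shows the equivalent call from model-driven app JavaScript using Xrm.WebApi.online.execute. The action name cf_DuplicateCommissionsCounter and its binding to the invoice table are assumptions for illustration; the request shape itself is the standard Client API pattern for bound actions.

// Invoke a custom bound action on a specific invoice (hypothetical action name).
function runCommissionCheck(invoiceId) {
    var request = {
        // Bound parameter: the invoice record the action is executed against.
        entity: { entityType: "invoice", id: invoiceId },
        getMetadata: function () {
            return {
                boundParameter: "entity",
                parameterTypes: {
                    entity: { typeName: "mscrm.invoice", structuralProperty: 5 } // 5 = EntityType
                },
                operationType: 0, // 0 = Action
                operationName: "cf_DuplicateCommissionsCounter"
            };
        }
    };
    return Xrm.WebApi.online.execute(request).then(function (response) {
        console.log("Bound action completed with status " + response.status);
    });
}

In the flow itself, the equivalent is simply the "Perform a bound action" step placed inside an "Apply to each" loop over the listed invoice rows, so all of the heavy logic stays in the plugin.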
Seamlessly Switching Lead-Based BPFs After Qualification in Dynamics 365 CRM
In Microsoft Dynamics 365 CRM, Business Process Flows (BPFs) are powerful tools that guide users through defined business stages. However, when working with Lead-based BPFs that persist into Opportunity, certain platform limitations surface, especially when multiple Lead-rooted BPFs are involved.

This blog walks through a real-world challenge I encountered while working with a Houston-based technology consulting and cybersecurity services firm. The firm specializes in modern digital transformation and enterprise security solutions. I explore issues with Lead → Opportunity Business Process Flow (BPF) switching, explain why the out-of-the-box behavior falls short, and detail how I designed a robust client-side and server-side solution to safely and reliably switch BPFs, even after a Lead has already been qualified.

How BPFs Work (Quick Recap)
Out of the box, the platform ideally won't allow such a switch, whether you try to brute-force it from the client side or the server side.

The Problem
In my scenario:

The Challenge
Once a Lead is qualified: This is non-intuitive, error-prone, and inefficient, especially considering the manual effort that goes into it.

Solution Overview
I implemented a guided, safe, and reversible BPF switching mechanism.

High-Level Architecture
This solution combines a custom ribbon button, client-side JavaScript with Web API calls, and a server-side plugin.

Step-by-Step Methodology

1. Entry Point: Opportunity Ribbon Button
A custom ribbon button on the Opportunity form sets trigger fields (such as cf_shouldtrigger and a timestamp) on the related Lead. These fields act as a controlled handshake between Opportunity and Lead.

2. Lead OnLoad: Controlled Trigger Execution
On Lead form load, the script checks how recently the trigger was set and clears the flag so it cannot fire again:

if (diffSeconds > 20) { return; }

Xrm.WebApi.updateRecord("lead", formContext.data.entity.getId(), { cf_shouldtrigger: false });

This ensures the switch only runs for a deliberate, recent button click and never re-fires on ordinary form loads.

3. Identifying and Aborting the Existing BPF
Before switching, the script identifies the currently active BPF instance and aborts it:

var activeProcess = formContext.data.process.getActiveProcess();

Xrm.WebApi.updateRecord(bpfEntityName, result.entities[0].businessprocessflowinstanceid, {
    statecode: 1,  // Inactive
    statuscode: 3  // Aborted
});

This is a critical step: without aborting the old instance, Dynamics can behave unpredictably.

4. Switching the UI BPF
After aborting:

5. Handling BPF Instance Creation (First-Time Switch Logic)
The solution explicitly checks whether an instance of the target BPF already exists for this Lead.
If it exists:
If it does NOT exist (first switch):
This dual-path logic makes the solution idempotent and reusable.

6. Server-Side Plugin: Persisting the Truth
A plugin ensures that:

// Identify BPF type
bool isNewBpf = (context.PrimaryEntityName == "new_bpf_entity");

// Resolve related Lead
Guid leadId = isNewBpf
    ? ((EntityReference)target["bpf_leadid"]).Id
    : ((EntityReference)target["leadid"]).Id;

// Retrieve related Opportunity via Lead
Entity opportunity = GetOpportunityByLead(service, leadId);

// Determine stages and path
string qualifyStageId = isNewBpf ? NEW_QUALIFY_STAGE : OLD_QUALIFY_STAGE;
string finalStageId = isNewBpf ? NEW_FINAL_STAGE : OLD_FINAL_STAGE;
string traversedPath = START_STAGE + "," + qualifyStageId + "," + finalStageId;

// PATCH 1 – Qualify stage
service.Update(new Entity(target.LogicalName, target.Id)
{
    ["activestageid"] = new EntityReference("processstage", new Guid(qualifyStageId)),
    ["traversedpath"] = START_STAGE + "," + qualifyStageId
});

// PATCH 2 – Final stage + Opportunity bind
service.Update(new Entity(target.LogicalName, target.Id)
{
    ["activestageid"] = new EntityReference("processstage", new Guid(finalStageId)),
    ["traversedpath"] = traversedPath,
    [isNewBpf ? "bpf_opportunityid" : "opportunityid"] = new EntityReference("opportunity", opportunity.Id)
});

// Mark Lead as successfully processed
service.Update(new Entity("lead", leadId)
{
    ["cf_pluginsuccess"] = new OptionSetValue(1) // Yes
});

This guarantees data consistency and auditability.

7. Final UI Sync & Redirect
After successful completion:

Xrm.Navigation.openForm({ entityName: "opportunity", entityId: opportunityId });

From the user's perspective: "I clicked a button, confirmed the switch, and landed back in my Opportunity. Done."

Why This Solution Works
✔ Respects Dynamics 365 BPF constraints
✔ Prevents orphaned or conflicting BPF instances
✔ Handles first-time and repeat switches
✔ Ensures server-side persistence
✔ Minimal user disruption
✔ Fully reversible

Most importantly, it bridges the gap between platform limitations and real business needs. Dynamics 365 BPFs are powerful, but when multiple Lead-rooted processes coexist, manual switching is not enough. This solution demonstrates how a ribbon-button entry point, client-side scripting, Web API updates, and a server-side plugin can be combined to deliver a seamless, enterprise-grade experience without unsupported hacks. If you're facing similar challenges with Lead → Opportunity BPF transitions, this pattern can be adapted and reused with confidence.

We hope you found this blog useful, and if you would like to discuss anything, you can reach out to us at transform@cloudfronts.com
Triggering Power Automate Flows Directly from Power BI Reports
Power BI is excellent at visualizing insights, but insights often need action. That's where the Power Automate visual comes in. With this visual, report consumers can trigger instant Power Automate flows directly from a Power BI report, using the data and filters already applied on the page. No switching tools. No exporting data. Just click and act. This blog walks through how the Power Automate visual works, how to configure it, and what to consider before rolling it out.

Understanding the Power Automate Visual
The Power Automate visual adds a button to your Power BI report. When clicked, it runs an instant cloud flow.
Key capabilities:
From a user's perspective, it feels like a native action button inside Power BI.

Adding the Power Automate Visual
In Power BI Desktop
One can add the visual in two ways:
Once added, the visual appears on the report page with built-in instructions.
In Power BI Service
The process is identical:
One can resize or reposition the button like any other visual.

Choosing the Flow Environment
Before creating or attaching a flow, select the environment where the flow will live.
The environment picker:
Choosing the right environment upfront avoids permission and governance issues later.

Making the Flow Data-Contextual
One of the most powerful features of the Power Automate visual is data context.
How it works
Example:
This makes flows responsive to how users are interacting with the report.

Creating or Editing the Flow
Editing from Power BI Desktop or Service
With the flow selected, add any data fields to the Power Automate Data region to use as dynamic inputs for the flow. Select More options (…) > Edit to configure the button. In edit mode of the visual, either select an existing flow to apply to the button, or create a new flow to apply to the button. One can start from scratch or start with one of the built-in templates as an example. To start from scratch, select New > Instant cloud flow. Select New step. Here, one can choose a subsequent action or specify a Control if more logic is needed to determine the subsequent action. Optionally, one can reference the data fields as dynamic content to make the flow data-contextual. This example uses the Region data field to create an item in a SharePoint list. Based on the end user's selection, Region could have multiple values or just one. After you configure your flow logic, name the flow and select Save. Select the arrow button to go to the Details page of the flow you created. Here's the Details page for a saved flow. Select the Apply button to attach the flow you created to your button.

Formatting the Button
The Power Automate button is fully customizable: This allows the button to match your report's design and UX standards.

Test the Flow
After the flow is applied to the button, test it before sharing it with others. These Power BI flows can only run in the context of a Power BI report, so they can't be run from the Power Automate web app or elsewhere. If the flow is data-contextual, make sure to test how the filter selections in the report affect the flow's outcome.

Sharing the Flow with Report Users
When the flow runs successfully, it can be shared with the relevant users of the report.
Give users edit access
Alternatively, you can give users edit access to the flow, not just run permissions.

Considerations and Limitations
Before adopting the Power Automate visual, keep these points in mind: These constraints help maintain performance, security, and governance.
When to Use the Power Automate Visual This pattern works best when you want to: In short, it bridges the gap between analysis and execution. Final Thoughts The Power Automate visual transforms Power BI from a read-only analytics tool into an interactive action surface: Analyze → Filter → Click → Automate When used thoughtfully, it empowers users to act on insights at the exact moment they discover them — without breaking their flow. We hope you found this blog useful, and if you would like to discuss anything, you can reach out to us at transform@cloudfronts.com
Automating Data Cleaning and Storage in Azure Using Databricks, PySpark, and SQL
Managing and processing large datasets efficiently is a key requirement in modern data engineering. Azure Databricks, an optimized Apache Spark-based analytics platform, provides a seamless way to handle such workflows. This blog explores how PySpark and SQL can be combined to dynamically process and clean data using the medallion architecture (Raw → Silver only) and store the results in Azure Blob Storage as PDFs.

Understanding the Medallion Architecture
The medallion architecture follows a structured approach to data transformation, ending in the Aggregated Layer (Gold), which is optimized for analytics, reports, and machine learning. In our use case, we extract raw tables from Databricks, clean them dynamically, and store the refined data in the silver schema.

Key Technologies / Dependencies Used

Step-by-Step Code Breakdown

1. Setting Up the Environment
Install and import the necessary libraries: reportlab is installed for PDF generation, along with the essential libraries for data handling, visualization, and storage.

2. Connecting to Azure Blob Storage
This snippet authenticates the Databricks notebook with Azure Blob Storage and prepares a connection to upload the final PDFs; it also initializes the Spark session.

3. Cleaning Data: Raw to Silver Layer
Fetch all raw tables. This dynamically removes NULL values from the raw data and creates a cleaned table in the silver layer.

4. Verifying and Comparing the Raw and the Cleaned (Silver) Data

5. Converting Cleaned Data to PDFs
Output at the Azure Storage container: this process reads the cleaned tables, converts them into PDFs with structured formatting, and uploads them to Azure Blob Storage.

6. Automating Cleaning in Databricks on a Fixed Schedule
This is automated by scheduling the notebook and its associated compute instance to run at fixed intervals and timestamps.

Further Actions

Why Store Data in Azure Blob Storage?

To conclude, by leveraging Databricks, PySpark, SQL, ReportLab, and Azure Blob Storage, we have automated the pipeline from raw data ingestion to cleaned and formatted PDF reports. This approach ensures:
a. Efficient data cleansing using dynamically generated SQL queries.
b. Structured data transformation within the medallion architecture.
c. Seamless storage and accessibility through Azure Blob Storage.
This methodology can be extended to include Gold Layer processing for advanced analytics and reporting.

We hope you found this blog useful, and if you would like to discuss anything, you can reach out to us at transform@cloudFronts.com
Filtering Dynamics 365 Subgrids Without Relationships: A JavaScript-Only Approach Using setFilterXml
In Microsoft Dynamics 365, subgrids are a powerful way to display related records on a form. But what happens when the records you want to show live in a table that has no relationship to the form's entity, or the filter criteria you need are not part of the subgrid's view? Out of the box, Dynamics 365 doesn't give us many options here. We can select a view, but we cannot apply dynamic filters unless the entities are directly related or the criteria already exist in the view's FetchXML. This is where the JavaScript setFilterXml() API becomes a life-saver.

In this article, I'll show you how to filter a subgrid dynamically using JavaScript, even when the subgrid's entity is completely unrelated to the main form entity.

Use Case
Imagine your form has a field called Name, and you want to filter the subgrid so that it shows only records whose Name begins with the same prefix. But the entities are not related, and there are also records where the lookup column needs to be empty on purpose, which would further break relationship-based filtering in the subgrid. OOB? Impossible. With JavaScript? Totally doable.

How the JS-Based Subgrid Filtering Works
In Dynamics 365, subgrids are rendered as independent UI components inside the form. Even though the form loads first, subgrids load asynchronously in the background. The form and its fields may already be available, but the subgrid control might not yet exist, so trying to apply a filter immediately on form load will fail.

Here is the basic structure of a JS function to perform subgrid filtering:

var oAnnualTCVTargetGridFilter = oAnnualTCVTargetGridFilter || {};

oAnnualTCVTargetGridFilter.filterSubgrid = function (executionContext) {
    var formContext = executionContext.getFormContext();
};

To make sure the filter is applied correctly, we follow a three-step workflow (a complete sketch of the finished handler follows below).

1. Retry Until the Subgrid Control Is Loaded (setTimeout)
When the script runs, we attempt to retrieve the subgrid control using:

var subgrid = formContext.getControl("tcvtargets");

This control represents the interactive UI component that displays the records for the view. It gives you programmatic access to:
-> Set filters
-> Refresh the grid
-> Access its view ID
-> Handle events (in some versions)

However, because subgrids load later than the form, this line may return null the first several times. If you proceed at that point, your script will break. So we implement a retry pattern: if the subgrid is not ready, wait 100 ms -> try again -> repeat until the control becomes available. This guarantees that our next steps run only when the subgrid is fully loaded.

2. Apply the Filter (setFilterXml())
Once the subgrid control is found, we can safely apply a filter. We then build our filtering logic and use it in the FetchXML query:
-> Read the field Name (cf_name) from the main form and design the logic
-> Construct a FetchXML <filter> element
-> Pass this XML to the subgrid using:

subgrid.setFilterXml(filterXml);

This tells Dynamics 365 to apply an additional filter on top of the existing view used by the subgrid. A few important things to note: if the cf_name field is empty, we instead apply a special filter that returns no rows. This ensures the grid displays only relevant and context-driven data.

3. Refresh the Subgrid (subgrid.refresh())
After applying the filter XML, the subgrid must be refreshed:

subgrid.refresh();

Without this call, Dynamics will not re-run the query, meaning your filter won't take effect until the user manually refreshes the subgrid. Refreshing forces the system to:
-> Re-query data using the combined view FetchXML plus your custom filter
-> Re-render the grid
-> Display the filtered results immediately

This gives the user a seamless, dynamic experience where the subgrid shows exactly the records that match the context.
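Putting the three steps together, here is the complete sketch referenced above. The subgrid name tcvtargets and the field cf_name come from the example; the target table's primary key name cf_tcvtargetid and the "first three characters" prefix logic are assumptions for illustration, so adjust them to your own schema and matching rule.

var oAnnualTCVTargetGridFilter = oAnnualTCVTargetGridFilter || {};

// Register on the form OnLoad event, passing the execution context.
oAnnualTCVTargetGridFilter.filterSubgrid = function (executionContext) {
    var formContext = executionContext.getFormContext();
    var subgrid = formContext.getControl("tcvtargets");

    // Step 1: the subgrid loads after the form, so retry every 100 ms until the control exists.
    if (!subgrid) {
        setTimeout(function () {
            oAnnualTCVTargetGridFilter.filterSubgrid(executionContext);
        }, 100);
        return;
    }

    // Step 2: build the FetchXML <filter> from the form's Name field (illustrative prefix logic).
    var name = formContext.getAttribute("cf_name").getValue();
    var filterXml;
    if (name) {
        var prefix = name.substring(0, 3);
        filterXml = "<filter type='and'>" +
                    "<condition attribute='cf_name' operator='like' value='" + prefix + "%' />" +
                    "</filter>";
    } else {
        // No context available: apply a condition that can never match, so the grid shows no rows.
        filterXml = "<filter type='and'>" +
                    "<condition attribute='cf_tcvtargetid' operator='eq' value='00000000-0000-0000-0000-000000000000' />" +
                    "</filter>";
    }

    // Step 3: apply the filter on top of the subgrid's view and refresh so it takes effect.
    subgrid.setFilterXml(filterXml);
    subgrid.refresh();
};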
JS + FetchXML Based Filtering in Action
Without filtering: the subgrid shows every record from its view.
With filtering: only the records matching the form's context appear.

Key Advantages of This Approach

Works Even When No Relationship Exists
You can filter a subgrid even if the target entity has no direct link to the form's main entity. This is extremely useful in cases where the relationship must remain optional or intentionally unpopulated.

Enables Dynamic, Contextual Filtering
We can base the filtering logic on form field values, user selections, or business rules.

Filtering on Fields Not Included in the View
Since the filtering logic is applied client-side, there is no need to modify or clone the system view just to include filterable fields.

Bypasses Limitations of Lookup-Based Relationship Filtering
This method works even when the lookup column is intentionally left empty, a scenario where OOB relationship-based filtering fails.

More Flexible Than Traditional View Editing
You can apply advanced logic such as prefix matching, conditional filters, or dynamic ranges, things not possible using standard UI-only configuration.

To conclude, filtering subgrids dynamically in Dynamics 365 is not something the platform supports out of the box, especially when the entities are unrelated or when the filter criteria don't exist in the subgrid's original view. However, with a small amount of JavaScript and the setFilterXml() API, you gain complete control over what data appears inside a subgrid, purely based on the context passed from the main form.

I hope you found this blog useful, and if you would like to discuss anything, you can reach out to us at transform@cloudFronts.com.
Overcoming Dataverse Connector Limitations: The Power Automate Approach to Export Hidden Dataverse Tables
Working with the Microsoft Dataverse connector in Power BI is usually straightforward, until you encounter a table that simply refuses to load any rows, even though the data clearly exists in the environment. This happens especially with hidden, virtual, or system-driven tables (e.g. msdyn_businessclosure, msdyn_scheduleboardsetting), which are commonly used in Field Service and Scheduling scenarios.

Before jumping to a workaround, it's important to understand why certain Dataverse tables don't load in Power BI, what causes this behavior, and why the standard Dataverse connector may legitimately return zero rows.

Causes
1] The table is a virtual or system table with restricted access: system-managed Dataverse tables like msdyn_businessclosure are not exposed to the Dataverse connector because they support internal scheduling and platform functions.
2] No records exist in the root business unit: data owned by child business units is not visible to Power BI accounts associated with a different BU, resulting in zero rows returned.
3] The table is not included in the standard Dataverse connector: some solution-driven or non-standard tables are omitted from the connector's supported list, so Power BI cannot load them.

Solution: Export Dataverse Data Using Power Automate + Excel Sync

Since Power BI can read:
-> OneDrive-hosted files
-> Excel files
-> SharePoint-hosted spreadsheets
...a suitable workaround is to extract the restricted Dataverse table into Excel using a scheduled Power Automate flow (when the records are few) or a Dataverse-triggered flow (when there are many records and you only want a single one, to avoid pagination).

What it can do:
-> Power Automate can access system-driven tables.
-> Excel files in SharePoint can be refreshed by the Power BI Service.
-> We can bypass the connector restrictions entirely.
-> The method works even if entities have hidden metadata or internal platform logic.

This ensures:
-> Consistent refresh cycles
-> Full visibility of all table rows
-> No dependency on Dataverse connector limitations

Use Case

I needed to use the Business Closures table (Dataverse entity: msdyn_businessclosure) for a few calculations and visuals in a Power BI report. However, when I imported it through the Dataverse connector, the table consistently showed zero records, even though the data was clearly present inside Dynamics 365. There are two possible reasons for this:
1] It is a system/platform table: msdyn_businessclosure is a system-managed scheduling table, and system tables are often hidden from external connectors, causing Power BI to return no data.
2] The table is not included in the "standard tables" exposed to Power BI: many internal Field Service and scheduling entities are excluded from the Dataverse connector's metadata, so Power BI cannot retrieve their rows even if they exist.

So here, we fetch the records with a "List rows" step in Power Automate and write them to an Excel file to bypass the limitations that hide that data, without compromising user privileges or system roles; we can also control or filter the rows at the source, before they reach the Power BI report.

Automation Steps
1] Select a suitable trigger to fetch the rows of that entity (recurrence or Dataverse trigger, whichever is suitable).
2] List the rows from the entity (sort/filter/select/expand as necessary).
3] Perform any preparation logic (e.g. clearing the existing rows) on the Excel file where the data will be written.
4] For each row in the Dataverse entity, select a primary key (e.g. the GUID), provide the path to the particular Excel file (e.g. SharePoint -> Location -> Document Library -> File Name -> Sheet or Table in the Excel file), and assign the dynamic values of each row to the columns in the Excel file.
5] Once this is done, import it into the Power BI report using suitable Power Query logic in the Advanced Editor, as follows:

a) Loading an Excel file from SharePoint using Web.Contents():

Source = Excel.Workbook(Web.Contents("https://<domain>.sharepoint.com/sites/<Location>/Business%20Closures/msdyn_businessclosures.xlsx"), null, true),

What this step does:
-> Uses Web.Contents() to access an Excel file stored in SharePoint Online.
-> The URL points directly to the Excel file msdyn_businessclosures.xlsx inside the SharePoint site.
-> Excel.Workbook() then reads the file and returns a structured object containing all sheets, tables, and named ranges.

Parameters used:
null → the optional useHeaders argument is left unspecified; for a named table, the table's own headers are used when it is extracted
true → the optional delayTypes argument, which defers automatic type detection

b) Extracting a table named "Table1" from the workbook:

msdyn_businessclosures_Sheet = Source{[Item="Table1", Kind="Table"]}[Data],

This searches inside the Source object (which includes all workbook elements) and looks specifically for an element where:
Item = "Table1" → the name of the table in the Excel file
Kind = "Table" → ensures it selects a table, not a sheet with the same name
and then extracts only the Data portion of that element. As a result, we get a Power Query table containing the exact contents of Table1 inside the Excel workbook, to which we can further apply our own logic: filter, clean, and so on.

To conclude, when Dataverse tables refuse to load through the Power BI Dataverse connector, especially system-driven entities like msdyn_businessclosure, the issue is usually rooted in platform-level restrictions, connector limitations, or hidden metadata. Instead of fighting these constraints, offloading the data through Power Automate → Excel → Power BI provides a controlled, reliable, and connector-independent integration path.

By automating the extraction of Dataverse rows into an Excel file stored in SharePoint or OneDrive, you ensure consistent refresh cycles, full visibility of all table rows, and no dependency on Dataverse connector limitations. This method is simple to build, stable to maintain, and flexible enough to adapt to any Dataverse table, whether standard, custom, or system-managed. For scenarios where Power BI needs insights from hidden or restricted Dataverse tables, this approach remains one of the most practical and dependable solutions.

I hope you found this blog useful, and if you would like to discuss anything, you can reach out to us at transform@cloudFronts.com.
Inside SmartPitch: How CloudFronts Built an Enterprise-Grade AI Sales Agent Using Microsoft and Databricks Technologies
Why SmartPitch? – The Idea and Pain Point The idea for SmartPitch came directly from observing the day-to-day struggles of sales and pre-sales teams. Every Marketing Qualified Lead (MQL) to Sales Qualified Lead (SQL) conversion required hours of manual work: searching through documents stored in SharePoint, combing through case studies, aligning them with solution areas, and finally packaging them into a client-ready pitch deck. The reality was that documents across systems—SharePoint, Dynamics 365, PDFs, PPTs—remained underutilized because there was no intelligent way to bring them together. Sales teams often relied on tribal knowledge or reused existing decks with limited personalization. We asked: What if a sales assistant could automatically pull the right case studies, map them to solution areas, and draft an elevator pitch on demand, in minutes? That became the SmartPitch vision: an AI-powered agent that: As a result of this product, it has helped us reduce pitch creation time by 70%. 2. The First Prototype – Custom Copilot Studio Our first step was to build SmartPitch using Custom Copilot Studio. It gave us a low-code way to experiment with conversational flows, integrate with Azure AI Search, and provide sales teams with a chat interface. 1. Knowledge Sources Integration 2. Data Flow 3. Conversational Flow Design 4. Integration and Security 5. Technical Stack 6. Business Process Enablement 7. Early Prototypes With Custom Copilot, we were able to: We successfully demoed these early prototypes in Zurich and New York. They showed that the idea worked but they also revealed serious limitations. 3. Challenges in Custom Copilot Despite proving the concept, Custom Copilot Studio had critical shortcomings: Lacked support for model fine-tuning or advanced RAG customization. However, incorporating complex external APIs or custom workflows was difficult. This limitation meant SmartPitch, in its Copilot form, couldn’t scale to meet enterprise standards. 4. Rebuilding in Azure AI Foundry – Smarter, Extensible, Connected The next phase was Azure AI Foundry, Microsoft’s enterprise AI development platform. Unlike Copilot Studio, AI Foundry gave us: Extending SmartPitch with Logic Apps One of the biggest upgrades was the ability to integrate Azure Logic Apps as external tools for the agent. This allowed SmartPitch to: This modular approach meant we could add new functionality simply by publishing a new Logic App. No redeployment of SmartPitch was required. Automating Document Vectorization We also solved one of the biggest bottlenecks—document ingestion and retrieval—by building a pipeline for automatic document vectorization from SharePoint: This allowed SmartPitch to search across text, images, tables, and PDFs, providing relevant answers instead of keyword matches. But There Were Limitations Even with these improvements, we hit roadblocks: At this point, we realized the true bottleneck wasn’t the agent itself, it was the quality of the data powering it. 5. Bad Data, Governance, and the Medallion Architecture SmartPitch’s performance was only as good as the data it retrieved from. And much of the enterprise data was dirty: duplicate case studies, outdated documents, inconsistent file formats. This led to irrelevant or misleading responses in pitches. 
To address this, we turned to Databricks' Unity Catalog and Medallion Architecture. You can read our post on building a clean data foundation with Medallion Architecture [Link]. Now, every result SmartPitch surfaced could be trusted, audited, and tied to a governed source.

6. SmartPitch in Mosaic AI – The Final Evolution

The last stage was migrating SmartPitch into Databricks Mosaic AI, part of the Lakehouse AI platform. This was where SmartPitch matured into an enterprise-grade solution.

What We Gained in Mosaic AI:
In Mosaic AI, SmartPitch wasn't just a chatbot; it became a data-native enterprise sales assistant.

From these, we came to know the following differences between agent development in AI Foundry and Databricks Mosaic AI:

Attribute / Aspect | Azure AI Foundry | Mosaic AI
Focus | Developer and Data Scientist | Data Engineers, Analysts, and Data Scientists
Core Use Case | Create and manage your own AI agent | Build, experiment, and deploy data-driven AI models with analytics + AI workflows
Interface | Code-first (SDKs, REST APIs, Notebooks) | No-code/low-code UI + Notebooks + APIs
Data Access | Azure Blob, Data Lake, vector DBs | Native integration with Databricks Lakehouse, Delta Lake, Unity Catalog, vector DBs
MCP Server | Only custom MCP servers supported; built-in option complex | Native MCP support with Databricks ecosystem; simpler setup
Models | 90 models available | Access to open-source + foundation models (MPT, Llama, Mixtral, etc.) + partner models
Model Customization | Full model fine-tuning, prompt engineering, RAG | Fine-tuning, instruction tuning, RAG, model orchestration
Publish to Channels | Complex (Azure Bot SDK + Bot Framework + App Service) | Direct integration with Databricks workflows, APIs, dashboards, and third-party apps
Agent Update | Real-time updates in Microsoft Teams | Updates deployed via Databricks workflows; versioning and rollback supported
Key Capabilities | Prompt flow orchestration, RAG, model choice, vector search, CI/CD pipelines, Azure ML & responsible AI integration | Data + AI unification (native to Lakehouse), RAG with Lakehouse data, multi-model orchestration, fine-tuning, end-to-end ML pipelines, secure governance via Unity Catalog, real-time deployment
Key Components | Workspace & agent orchestration, 90+ models, OpenAI pay-as-you-go or self-hosted, security via Azure identity | Mosaic AI Agent Framework, Model Serving, Fine-Tuning, Vector Search, RAG Studio, Evaluation & Monitoring, Unity Catalog Integration
Cost / License | Vector DB: external; Model Serving: token-based pricing (GPT-3.5, GPT-4); Fine-tuning: case-by-case; total agent cost variable (~$5k–$7k+/month) | Vector Search: $605–$760/month for 5M vectors; Model Serving: $90–$120 per million tokens; Fine-Tuning Llama 3.3: $146–$7,150; Managed Compute built into DBU usage; end-to-end AI agent ~$5k–$7k+/month
Use Cases / Capabilities | Agents intelligent, can interact/modify responses; single AI search per agent; infrastructure setup required; custom MCP server registration | Agents intelligent, interact/modify responses; AI search via APIs (Google/Bing); in-built MCP server; complex infrastructure; slower responses as results are sent batched together
Development Approach | Low-code, faster agent creation, SDK-based, easier experimentation | Manual coding using the MLflow library, more customization, API integration, higher chance of errors, slower build
Models Comparison | 90 models, Azure OpenAI (GPT-3.5, GPT-4), multi-modal | ~10 base models, OSS & partner models (Llama, Claude, Gemma), many models don't support tool usage
Knowledge Source | One knowledge source of each type (adding a new one replaces the previous) | No limitation; supports data cleaning via Medallion Architecture; SQL-only access inside the agent; Spark/PySQL not supported in the agent
Memory / Context Window | 8K–128K tokens (up to 1M for GPT-4.1) | Moderate, not specified
Modalities | Text, code, vision, audio (some models) | Likely text-only
Special Enhancements | Turbo efficiency, reasoning, tool calling, multimodal | Varies per model (Llama, Claude, Gemma architectures)
Availability | Deployed via Azure AI Foundry | Through the Databricks platform
Limitations | Only one knowledge source of each type, infrastructure complexity for MCP server | No multi-modal Spark/PySQL access, slower batch responses, limited model count, high manual development

7. Lessons Learned: …
