Category Archives: Blog

Scaling Customer Support: Implementing Service Request Management with Dynamics 365

Summary

For growing businesses, service request management becomes crucial as volume and complexity increase. Email-driven support and Excel trackers often fail to scale effectively. A structured Service Request Management System (SRMS) centralizes tracking, automates processes, and enforces SLAs. Microsoft Dynamics 365 Customer Service enables operational, accountable support management. Implementing the right system ensures efficiency, transparency, and customer satisfaction as businesses grow.

Table of Contents
1. Key Components of an SRMS
2. Setting Up Service Request Management
3. Operationalizing with Dynamics 365
4. Customer Success Story

Key Components of a Service Request Management System

For growing businesses, processes, reports, and efficient systems matter, and service request management is equally crucial. As companies scale, the volume and complexity of service requests increase, making efficient management essential to maintaining operational flow and customer satisfaction. A well-designed Service Request Management System (SRMS) helps align workflows, reduce response times, and enhance service delivery.

Centralized Request Tracking: A centralized SRMS allows businesses to log, track, and manage all service requests in one place.
Unified Dashboard: A centralized dashboard provides a comprehensive view of all service requests, their statuses, and assigned personnel.
Prioritization and Categorization: Service requests can be categorized and prioritized based on urgency, impact, and type.
Automatic Case Creation and Update Rules: Automation features such as automatic ticket creation, escalation rules, and status updates help reduce manual effort.
Service Level Agreements (SLAs): SLAs define expected response and resolution times and ensure deadlines are met.
Automated Assignment: Service requests are automatically assigned based on expertise, workload, and availability.
Real-Time Updates: Service agents update request statuses in real time, providing accurate information to stakeholders.
Customer Self-Service Portal: Customers can submit and track service requests independently.

Setting Up Service Request Management for a Growing Business

Support Email Address: Customers send queries to support@companydomain.com.
Automatic Case Creation: Requests sent to the support email are automatically converted into cases.
Case Assignment: Once a case is created, it is assigned to a support team member.
Acknowledgment Emails: A confirmation email is sent to the customer.
Team Notification: The assigned team member receives a notification.
Progress Updates: Team members add notes and updates to the ticket.
Real-Time Updates: Updates are sent to the customer via email in real time.
Defaulter Report: A defaulter report is generated internally to manage SLA breaches.
Case Closure Notifications: Notifications are sent to customers upon resolution or cancellation.
Email Tracking: All email communication is automatically tracked by the system.
Service Request Management Portal: A portal can be provided to selected customers for managing service requests.

Operationalizing Customer Service with Dynamics 365

If your customer support still lives in shared inboxes and Excel trackers, you’re not alone. As volumes increase, this setup quickly turns into missed SLAs, inconsistent updates, and frustrated customers. Dynamics 365 Customer Service provides:

Automatic case creation from support emails, portals, or web forms
Queue-based assignment
End-to-end tracking of customer interactions
Business Process Flows (Identify → Research → Resolve)
SLA-driven reminders and alerts
One-click email responses from the case record
Self-service portal support

Watch the Full Webinar Demo

To see how Dynamics 365 Customer Service operationalizes service management in real time, watch the complete webinar session below.

Frequently Asked Questions

Why does email-only support fail at scale? It lacks centralized tracking, SLA enforcement, ownership clarity, and structured workflows.
What is the main benefit of an SRMS? Centralized visibility, automation, and accountability.
Can customers continue using email? Yes. Emails are automatically converted into tracked cases internally.

Customer Success Story

A solid service request management system is a game-changer for any growing business. By centralizing service requests, automating processes, and setting clear expectations with SLAs, businesses can maintain efficiency and customer satisfaction as they grow. Here is our featured customer success story: Revolution Cooking partnered with CloudFronts for Dynamics 365 enhancements and data integration with third-party applications.


Opening an HTML Web Resource from a Subgrid Button in Dynamics 365 CRM

Posted On February 12, 2026 by Admin

An Australia-based linen manufacturing and distribution company was using Microsoft Dynamics 365 to manage their sales lifecycle. The issue arose when sales representatives needed to add multiple existing products to an Opportunity.

The Real Problem

Out of the box, Dynamics 365 allows users to add existing products to an Opportunity only one at a time. While this works functionally, it becomes inefficient for organizations managing large product catalogs, resulting in slow, repetitive data entry for sales representatives. The business requirement was clear: users should be able to select multiple existing products and create Opportunity Product records in bulk, from a single interface.

Architectural Decision

To address this, we introduced a custom button on the Opportunity Products subgrid. When clicked, it opens an HTML web resource that allows users to select multiple existing products in a single interface. The selected products are then processed through a Custom Action, which handles the bulk creation of Opportunity Product records server-side.

Why Use an HTML Web Resource?

In Microsoft Dataverse, not every customization belongs directly inside the main form. Sometimes we need a richer, purpose-built interface than the standard form controls provide. HTML Web Resources allow us to build that custom UI without disturbing the standard CRM experience.

The Web Resource

The HTML web resource renders a searchable grid of existing Product records, allowing users to perform multi-selection. The selected product IDs are then passed to a Custom Action, which handles the creation of related Opportunity Product records against the active Opportunity.

Opening the HTML Web Resource from the Subgrid Button

The key technical step was opening the HTML page when the custom ribbon button is clicked. We used the modern navigation APIs (Xrm.Navigation.navigateTo). When the custom button is clicked, the script first retrieves the current Opportunity record ID. This ID is then passed as a parameter to the HTML web resource so that the selected products can be associated with the correct Opportunity. The web resource is opened as a modal dialog using the target: 2 navigation option, ensuring that users can complete the bulk selection process without leaving the form. (A minimal sketch of this ribbon script is included at the end of this post.)

Inside the HTML Web Resource

Within the HTML web resource, the selected product IDs are collected and sent to the Custom Action, which creates the Opportunity Product records server-side. This keeps the entire bulk-selection experience on the Opportunity form, without navigating away from it.

How This Approach Saved Time

Faster User Workflow: Users select multiple products in one screen.

Clean Architecture:
- Business Data: Dataverse
- UI Interaction: HTML Web Resource
- Batch Logic: Custom Action (C# Plugin)

To encapsulate, with this design the client was able to add existing products to Opportunities in bulk from a single interface. Opening an HTML Web Resource from a button in Microsoft Dynamics 365 is a powerful extension technique that allows organizations to build richer interfaces on top of the standard experience. We hope you found this blog useful, and if you would like to discuss anything, you can reach out to us at transform@cloudfronts.com
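For reference, here is a minimal sketch of the kind of ribbon script described above. The web resource name, the subgrid control name, and the function name are illustrative assumptions, not the exact names used in the project.

```javascript
// Ribbon command handler. Configure the button's command to pass the
// PrimaryControl CrmParameter so the current form context is available.
function openBulkProductSelector(primaryControl) {
    var formContext = primaryControl;

    // Current Opportunity ID, with braces stripped so it travels cleanly as a parameter.
    var opportunityId = formContext.data.entity.getId().replace(/[{}]/g, "");

    var pageInput = {
        pageType: "webresource",
        webresourceName: "cf_/html/bulkproductselector.html", // hypothetical web resource name
        data: opportunityId                                    // exposed to the page via its "data" query parameter
    };

    var navigationOptions = {
        target: 2,                        // 2 = open as a dialog
        width: { value: 60, unit: "%" },
        height: { value: 70, unit: "%" },
        position: 1                       // centered
    };

    Xrm.Navigation.navigateTo(pageInput, navigationOptions).then(
        function () {
            // The promise resolves when the dialog closes; refresh the subgrid
            // so the newly created Opportunity Product records appear.
            var grid = formContext.getControl("opportunityProductsGrid"); // hypothetical control name
            if (grid) {
                grid.refresh();
            }
        },
        function (error) {
            Xrm.Navigation.openErrorDialog({ message: error.message });
        }
    );
}
```

The bulk creation itself stays in the Custom Action on the server, so this script only needs to open the dialog and refresh the grid afterwards.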


Implementing Plugin for Automated Lead Creation in Dynamics 365

Dynamics 365 CRM plugins are a powerful way to enforce business logic on the server, but choosing when and how to use them is just as important as writing the code itself. In one implementation for a Netherlands-based sustainability certification organization, the client needed their certification journey to begin with a custom application entity while still ensuring that applicant and company details were captured as leads for downstream sales and engagement processes. This blog explores how a server-side plugin was used to bridge that gap, reliably creating and associating lead records at runtime while keeping the solution independent of UI behavior and future integrations.

In this scenario, the certification application was the starting point of the business process, but sales and engagement still needed to operate on leads. Simply storing the same information in one place wasn’t enough; the system needed a reliable way to translate an application into a lead at the right moment, every time. That transformation logic is neither data storage nor UI behavior; it is core business process logic, which is why it was implemented using a Dynamics 365 plugin.

Scenario: Certification Application Not Flowing into Sales

Users reported the following challenge:
a. A user submits or creates a Certification Application
b. Applicant and company details are captured on a custom entity
c. Sales teams expect a Lead to be created for follow-up
d. No Lead exists unless created manually or through inconsistent automation
e. Application and sales data become disconnected

This breaks the intended business flow, as certification teams and sales teams end up working in parallel systems without a reliable link between applications and leads.

Possible Solution: Handling Lead Creation Through Manual Processes (Original Approach)

Before implementing the plugin, the organization attempted to manage lead creation manually or through disconnected automation.

How It Worked (Initially)
a. A Certification Application was submitted
b. Users reviewed the application
c. The sales team manually created a Lead with applicant/company details
d. They tried to match accounts/contacts manually
e. Both records remained loosely connected

Why This Might Look Reasonable
a. Simple to explain operationally
b. No development effort
c. Works as long as users follow the steps perfectly

The Hidden Problems

1] Inconsistent Data Entry
a. Users forgot to create leads
b. Leads were created with missing fields
c. Duplicate accounts/contacts were created
d. Sales lost visibility into new certification inquiries

2] Broken Cross-Department Workflow
a. The certification team worked in the custom entity
b. The sales team worked in Leads
c. No structural linkage existed between the two
d. Downstream reporting (pipeline, conversion, source tracking) became unreliable

Workaround to This Approach: Use Server-Side Logic Instead of Manual Steps

Practically, the transformation of an application into a lead is business logic, not user behavior. Once that boundary is respected, the solution becomes stable, predictable, and automation-friendly.

Practical Solution: A Server-Side Plugin (Improved Approach)

Instead of depending on people or scattered automation, the lead is created centrally and automatically through a plugin registered on the Certification Application entity.

Why a Plugin?
a. Executes consistently regardless of data source
b. Not tied to form events or UI interactions
c. Can safely check for existing accounts/contacts
d. Ensures one source of truth for lead and application linkage
e. Works for portal submissions, integrations, and bulk imports

This is essential for a client like this, where applications may originate from multiple channels and must feed accurately into the sales funnel.

How the Plugin-Based Solution Works

The solution was implemented using a server-side plugin registered on the Certification Application entity. The plugin executes when a new application is created, retrieves the necessary applicant and organization details, performs basic checks for existing accounts and contacts, creates a Lead using the extracted data, and finally links the Lead back to the originating application record. This ensures that every certification application automatically enters the sales pipeline in a consistent and reliable manner.

Implementation Steps (For Developers New to Plugins)

If you’re new to Dynamics 365 plugins, the implementation followed these core steps. Build and register the plugin: once the plugin logic is implemented, build the project to generate the signed assembly. After a successful build:
a. Open the Plugin Registration Tool
b. Connect to the target Dynamics 365 environment
c. Register the compiled assembly
d. Register the plugin step on the Create message of the Certification Application entity
e. Configure the execution stage (typically post-operation) and execution mode (synchronous or asynchronous, depending on business needs)

After registration, every new Certification Application will automatically trigger the plugin, ensuring that a Lead is created and linked without any manual intervention. (A simplified sketch of the plugin class is included at the end of this post.)

To encapsulate, this solution shows why server-side plugins are the right place for core business logic in Dynamics 365. By automatically creating and linking a Lead when a Certification Application is created, the organization removed manual steps, prevented data inconsistencies, and ensured that every application reliably flowed into the sales pipeline. We hope you found this blog useful, and if you would like to discuss anything, you can reach out to us at transform@cloudfronts.com
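For developers who want to see the shape of such a plugin, here is a simplified sketch. The entity and field logical names (cf_certificationapplication, cf_applicantname, cf_companyname, cf_email, cf_leadid) are illustrative assumptions, and the duplicate checks for existing accounts and contacts described above are omitted for brevity.

```csharp
using System;
using Microsoft.Xrm.Sdk;

public class CreateLeadFromApplicationPlugin : IPlugin
{
    public void Execute(IServiceProvider serviceProvider)
    {
        var context = (IPluginExecutionContext)serviceProvider.GetService(typeof(IPluginExecutionContext));
        var factory = (IOrganizationServiceFactory)serviceProvider.GetService(typeof(IOrganizationServiceFactory));
        var service = factory.CreateOrganizationService(context.UserId);

        // Run only for Create of the certification application entity (hypothetical logical name).
        if (context.MessageName != "Create" ||
            !context.InputParameters.Contains("Target") ||
            !(context.InputParameters["Target"] is Entity application) ||
            application.LogicalName != "cf_certificationapplication")
        {
            return;
        }

        // Map applicant and company details from the application onto a new Lead.
        var lead = new Entity("lead")
        {
            ["lastname"] = application.GetAttributeValue<string>("cf_applicantname"),
            ["companyname"] = application.GetAttributeValue<string>("cf_companyname"),
            ["emailaddress1"] = application.GetAttributeValue<string>("cf_email"),
            ["subject"] = "Certification Application"
        };
        Guid leadId = service.Create(lead);

        // Link the Lead back to the originating application (post-operation step,
        // so the application record already exists and has an ID).
        var update = new Entity(application.LogicalName, application.Id)
        {
            ["cf_leadid"] = new EntityReference("lead", leadId)
        };
        service.Update(update);
    }
}
```

Registered on the post-operation stage of the Create message, this runs for portal submissions, integrations, and bulk imports alike, which is exactly the consistency the approach relies on.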


Advanced Time Travel & Data Recovery Strategies in Delta Lake

In production Databricks environments, data issues such as accidental overwrites, faulty MERGE conditions, or incorrect backfills are common. Delta Lake’s Time Travel is not just a feature; it is a critical recovery and governance mechanism. This blog focuses on practical recovery strategies that are actually used in real-world production systems.

Why Time Travel Is Critical in Production

Common failure scenarios include:
a. INSERT OVERWRITE wiping historical data
b. Incorrect MERGE conditions deleting valid records
c. Wrong filters during backfill corrupting data

Reprocessing data is expensive and risky. Time Travel enables instant rollback with minimal impact.

Version vs Timestamp (What You Should Use)

Always prefer version-based time travel for recovery operations. Why version-based recovery is preferred:
a. Precise and deterministic
b. No time zone dependency
c. Safest option for production recovery

Use timestamp-based queries only for auditing, not recovery.

Identify the Last Safe State

Before performing any recovery, always inspect the table history:

DESCRIBE HISTORY crm_opportunities;

Key fields to review:
a. version
b. timestamp
c. operation
d. userName

This history acts as the single source of truth during incidents.

Recovery Patterns That Actually Work

1. Partial Data Recovery (Recommended)
Recover only the affected records instead of rolling back the entire table. Advantages:
a. No downtime
b. Safe for downstream reports
c. Most production-friendly approach

2. Full Table Restore (Use Carefully)
Advantages:
a. Fast and atomic
Risks:
a. Impacts all downstream consumers
Use this approach only when the entire table is corrupted.

Safe Validation Using CLONE

Before restoring data in production, validate changes using a clone. Typical use cases:
a. Validate recovered data
b. Compare versions
c. Run business checks

Retention & VACUUM (Most Common Mistake)

Running VACUUM with an aggressively short retention period causes permanent data loss: once the older data files are vacuumed, time travel breaks and rollback becomes impossible.

Production-Safe Retention

Recommended retention:
a. Critical tables: 30 days
b. Reporting tables: 7–14 days

Auditing & Root Cause Analysis (RCA)

Track who changed data and when using DESCRIBE HISTORY, and compare changes between versions by querying the table at specific versions. (Example statements for these patterns are sketched at the end of this post.)

Key Best Practices
a. Capture the table version before running risky jobs
b. Always use version-based time travel for recovery
c. Prefer partial recovery over full restores
d. Avoid aggressive VACUUM operations
e. Extend retention for critical tables
f. Validate using CLONE before restoring

To conclude, Delta Lake Time Travel is not a backup mechanism, but it is the fastest and safest recovery tool in Databricks. When used correctly, it prevents downtime, reduces reprocessing cost, and improves production reliability. For enterprise Databricks pipelines, mastering this capability is mandatory, not optional. We hope you found this blog useful, and if you would like to discuss anything, you can reach out to us at transform@cloudfronts.com
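The statements below sketch the patterns discussed above in Databricks SQL. The table name crm_opportunities comes from the post; the version numbers, key column, and filter are illustrative assumptions.

```sql
-- Inspect history to find the last safe version.
DESCRIBE HISTORY crm_opportunities;

-- 1. Partial data recovery (recommended): repair only the affected slice
--    from a known-good version. Version 42, the region filter, and the key
--    column are illustrative.
MERGE INTO crm_opportunities AS tgt
USING (
  SELECT * FROM crm_opportunities VERSION AS OF 42
  WHERE region = 'EMEA'
) AS src
ON tgt.opportunity_id = src.opportunity_id
WHEN MATCHED THEN UPDATE SET *
WHEN NOT MATCHED THEN INSERT *;

-- 2. Full table restore (use carefully): atomic rollback of the whole table.
RESTORE TABLE crm_opportunities TO VERSION AS OF 42;

-- 3. Safe validation using CLONE: materialize the old version separately
--    and run business checks before touching production.
CREATE OR REPLACE TABLE crm_opportunities_validation
SHALLOW CLONE crm_opportunities VERSION AS OF 42;

-- 4. Auditing / RCA: compare two versions to see what changed.
SELECT * FROM crm_opportunities VERSION AS OF 43
EXCEPT
SELECT * FROM crm_opportunities VERSION AS OF 42;

-- 5. The common mistake: aggressive VACUUM deletes the files time travel needs.
--    Shown only as a warning; do not run this on critical tables.
-- VACUUM crm_opportunities RETAIN 0 HOURS;

-- Production-safe retention instead (for example, 30 days for critical tables).
ALTER TABLE crm_opportunities
SET TBLPROPERTIES ('delta.deletedFileRetentionDuration' = 'interval 30 days');
```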


From Dashboards to Decision Intelligence

Traditional business intelligence platforms have historically focused on visualization: charts, KPIs, and trend lines that describe what has already happened. Power BI excels at this, enabling users to explore data interactively and monitor performance at scale. However, modern business users expect more than visuals. They need clarity, reasoning, and guidance on what actions to take next. This marks the shift from dashboards toward true decision intelligence.

Business Challenges

Most organizations face a similar challenge. Dashboards answer what happened but rarely explain why it happened. Business users depend on analysts to interpret insights, which slows down decision-making and creates bottlenecks. At the same time, data is fragmented across CRM systems, ERP platforms, project tools, and external APIs. Bringing this data together is difficult, and forming a single, trusted view becomes increasingly complex as data volumes grow.

Why Visualization Alone Is Not Enough

Even with powerful visualization tools, interpretation remains manual. KPIs lack business context, anomalies are not automatically explained, and insights rely heavily on tribal knowledge. This creates a gap between data availability and decision confidence.

Introducing Agent Bricks

Agent Bricks is introduced to close this gap. It acts as an AI orchestration and reasoning layer that consumes curated analytical data and applies large language model-based reasoning. Instead of presenting raw numbers, Agent Bricks generates contextual insights, explanations, and recommendations aligned to business scenarios. Importantly, it enhances Power BI rather than replacing it.

High-Level Architecture

From an architecture standpoint, data flows from enterprise systems such as CRM, ERP, project management tools, and APIs. Azure Logic Apps manage ingestion, Azure Databricks handles analytics and modeling, Agent Bricks performs AI reasoning, and Power BI remains the consumption layer.

To conclude, dashboards remain a critical foundation for analytics, but they are no longer enough to support modern decision-making. As data complexity and business expectations grow, organizations need systems that can interpret data, explain outcomes, and guide actions. Agent Bricks enables this shift by introducing AI-driven reasoning on top of existing Power BI investments. By bridging the gap between analytics and decision-making, it helps organizations move from passive reporting to proactive, insight-led execution. This marks the first step in the evolution from dashboards to true decision intelligence. We hope you found this blog useful, and if you would like to discuss anything, you can reach out to us at transform@cloudfronts.com


Plugin Class Code Recovery using XrmToolBox & JetBrains dotPeek

In an ideal Dynamics 365 (Dataverse) project, plugin source code lives safely in a version-controlled repository, flows cleanly through Azure DevOps pipelines, and is always recoverable. In reality, many of us inherit environments where that discipline didn’t exist. I recently worked with a customer where the original plugin source code was no longer available. This created a common but uncomfortable challenge in the Dynamics 365 world: how do you maintain, debug, or enhance plugins when the source code is lost? Rewriting everything from scratch was risky and time-consuming. Guessing behavior based on runtime results wasn’t reliable. Fortunately, Dynamics 365 and the .NET ecosystem give us a practical and effective alternative. Using XrmToolBox and JetBrains dotPeek, it is possible to recover readable C# plugin code directly from the deployed assembly. (The recovered C# code won’t be 100% identical to the original; variable names will be generic, so it is suitable for recovering the logic, structure, and functionality rather than the exact source.)

The Practical Solution

The approach consists of two main steps: extract the deployed plugin assembly from Dataverse using XrmToolBox, then decompile it into readable C# using JetBrains dotPeek. This does not magically restore comments or original formatting, but it does give working, understandable code that closely reflects the original plugin logic.

Tools Used: XrmToolBox (Assembly Recovery Tool) and JetBrains dotPeek.

Step 1: Extract the Plugin Assembly from Dataverse
1. Connect to the environment
2. Load the Assembly Recovery Tool
3. Download the DLL

At this point, you have successfully recovered the compiled plugin assembly exactly as it exists in the environment.

Step 2: Decompile the DLL Using JetBrains dotPeek
1. Open dotPeek and load the downloaded assembly.
2. Explore the decompiled code. dotPeek reconstructs readable C# from the compiled assembly, so you can browse the plugin classes and the logic inside their Execute methods. This is usually more than enough to understand how the plugin works.
3. Export to a Visual Studio project (optional but recommended). One of dotPeek’s most powerful features is Export to Project: it gives you a proper .csproj with class files that you can open, build, and extend.

Possibilities with the Recovered Code

Once you have the decompiled C# code, several options open up:
1. Rebuild the plugin assembly
2. Re-register the plugin
3. Maintain or enhance functionality

Key Takeaway

Losing plugin source code does not mean losing control of your Dynamics 365 solution. With XrmToolBox’s Assembly Recovery Tool and JetBrains dotPeek, you can recover readable, workable plugin code directly from the deployed assembly. While working with Dynamics 365 technologies, a developer may well face this situation; knowing this technique can save days or even weeks of effort and give your customer confidence that their system remains fully supportable. We hope you found this blog useful, and if you would like to discuss anything, you can reach out to us at transform@cloudfronts.com


Browser-Level State Retention in Dynamics 365 CRM: Improving Performance & UX with Session Storage

Dynamics 365 model-driven apps are excellent at storing business data, but not every piece of information belongs in Dataverse. A common design mistake is using Dataverse fields to store temporary UI state, such as selected views, filters, or user navigation preferences. While this works technically, it introduces unnecessary performance overhead and can create incorrect behavior in multi-user environments. In this blog, I’ll focus on browser-level retention of CRM UI data using sessionStorage, with subgrid view retention as a practical example for a technology consulting and cybersecurity services firm based in Houston, Texas, USA, specializing in modern digital transformation and enterprise security solutions.

The Real Problem: UI State vs Business Data

Let’s separate concerns clearly:
- Business data (status, owner, amounts) should live in Dataverse.
- UI state (selected view, filter, scroll position) should live in the browser.

Subgrid views fall squarely into the UI state category.

Scenario: Subgrid View Resetting on Navigation

Users reported that the subgrid view they had selected reset to the default view every time they navigated away from a record and came back. This breaks user workflow, especially for records that users revisit frequently.

Possible Solution: Persisting UI State in Dataverse (Original Approach)

This approach attempts to fix the issue by storing the selected subgrid view GUID in a Dataverse field on the parent record and re-applying it when the form loads. It might look reasonable because everything stays in one place, but it has hidden problems:
1] Slower form execution
2] Data model pollution
3] Incorrect multi-user behavior
4] Scalability issues

In short, Dataverse was doing work it should never have been asked to do.

Workaround to This Approach: Keep UI State in the Browser for the Session

Practically, the selected subgrid view belongs to the user’s session, not the record. Once that boundary is respected, the solution becomes much cleaner.

Practical Solution: Browser Session Storage (Improved Approach)

Instead of persisting view selection in Dataverse, we store it locally in the browser using sessionStorage. sessionStorage is part of the Web Storage API, which provides a way to store key-value pairs in a web browser. Unlike localStorage, which persists data even after the browser is closed, sessionStorage is designed to store data only for the duration of a single session. This means that the data is available as long as the tab or window is open, and it is cleared when the tab or window is closed.

Why Session Storage? It is scoped to the user and the browser session, requires no extra Dataverse fields or server calls, and clears automatically when the tab is closed.

How the Improved Solution Works
1. Store the view locally when the subgrid view changes.
2. Restore the view on form/grid load.

This ensures that when the user revisits the form, the subgrid opens exactly where they left off. (A minimal sketch of both steps follows at the end of this post.)

Why This Approach Is Superior
1] Faster execution
2] Correct user experience
3] Clean architecture
4] Zero backend impact

When to Use Browser-Level Retention

Use this pattern when the information is purely UI state, is only relevant to the current user and session, and does not need to be reported on. Examples include selected views, filters, and scroll or navigation positions.

To conclude, not all data deserves to live in Dataverse. When you store UI state in the browser instead of the database, you gain faster forms, correct per-user behavior, and a cleaner data model. Subgrid view retention is just one example, but the principle applies broadly across Dynamics 365 customizations. We hope you found this blog useful, and if you would like to discuss anything, you can reach out to us at transform@cloudfronts.com
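Here is a minimal sketch of the two steps described above. The subgrid control name ("ContactsSubgrid") and the storage key format are illustrative assumptions; register restoreSubgridView on the form’s OnLoad event and pass the execution context.

```javascript
var SUBGRID_NAME = "ContactsSubgrid"; // hypothetical subgrid control name

function getStorageKey(formContext) {
    // One key per record and subgrid, so each record remembers its own view.
    return "subgridView:" + SUBGRID_NAME + ":" + formContext.data.entity.getId();
}

function restoreSubgridView(executionContext) {
    var formContext = executionContext.getFormContext();
    var grid = formContext.getControl(SUBGRID_NAME);
    if (!grid || !grid.getViewSelector) { return; }

    var key = getStorageKey(formContext);
    var restored = false;

    grid.addOnLoad(function () {
        var selector = grid.getViewSelector();

        // Step 2: on the first grid load, re-apply the view saved for this session.
        var saved = sessionStorage.getItem(key);
        if (!restored && saved) {
            restored = true;
            var savedView = JSON.parse(saved); // { id, name, entityType }
            if (selector.getCurrentView().id !== savedView.id) {
                selector.setCurrentView(savedView);
            }
            return;
        }

        // Step 1: whenever the grid reloads (including after a view change),
        // remember the currently selected view for this browser session only.
        sessionStorage.setItem(key, JSON.stringify(selector.getCurrentView()));
    });
}
```

Because everything lives in sessionStorage, nothing is written to Dataverse, and the stored selection disappears automatically when the tab is closed.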


Filtering data in Business Central API

One of the strengths of Business Central is its ability to integrate with other systems through APIs, allowing developers to retrieve and manipulate data programmatically. However, when dealing with large datasets or complex queries, retrieving only the relevant information efficiently becomes crucial. This is where API filtering comes in, and source table views can play a vital role in optimizing these operations.

In this blog, we’ll explore how to implement API filtering using source table views within Microsoft Dynamics 365 Business Central. We’ll cover the benefits, best practices, and a step-by-step guide to making the most out of your API integrations. Take one example: instead of fetching all sales orders, you might want to retrieve only those created within the last month, or orders from a specific customer. Business Central APIs allow you to apply filters using OData queries, but in more complex scenarios, leveraging source table views can simplify the process and improve performance.

What Are Source Table Views in Business Central?

In Business Central, a source table view defines filters and sorting on a page’s source table at design time. This lets you express data restrictions declaratively, so they are applied every time the page, including an API page, reads data.

Advantage of Using a Source Table View

The filtering logic lives in the API page itself, so consumers do not need to pass OData filters for these conditions, and every caller sees the same restricted data set.

Steps to Achieve the Goal

First, create a custom API page using AL in Visual Studio Code. Using the SourceTableView property lets you filter the data in the API itself, without applying filter logic in Postman or in the consuming application. Create an API page and add the SourceTableView property with filters so that only approved Sales Orders are returned whenever this API is used. (A minimal example follows at the end of this post.)

To conclude, API filtering using source table views in Business Central is a powerful way to optimize data retrieval and improve the performance of your integrations. By centralizing your filtering logic within the database, you can reduce the complexity of your APIs, improve security, and ensure consistent data access across your applications. We hope you found this blog useful, and if you would like to discuss anything, you can reach out to us at transform@cloudfronts.com
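Below is a minimal sketch of such an API page in AL. The object ID, publisher and group names, and the assumption that "approved" orders correspond to the Released status are illustrative.

```al
page 50100 "Approved Sales Orders API"
{
    PageType = API;
    APIPublisher = 'cloudfronts';
    APIGroup = 'sales';
    APIVersion = 'v1.0';
    EntityName = 'approvedSalesOrder';
    EntitySetName = 'approvedSalesOrders';
    SourceTable = "Sales Header";
    // Only Sales Orders that have completed approval (status Released) are exposed.
    SourceTableView = where("Document Type" = const(Order), Status = const(Released));
    DelayedInsert = true;
    Editable = false;
    ODataKeyFields = SystemId;

    layout
    {
        area(Content)
        {
            repeater(Records)
            {
                field(id; Rec.SystemId) { }
                field(number; Rec."No.") { }
                field(customerNumber; Rec."Sell-to Customer No.") { }
                field(orderDate; Rec."Order Date") { }
            }
        }
    }
}
```

Consumers then call the custom endpoint (for example, .../api/cloudfronts/sales/v1.0/approvedSalesOrders) and the SourceTableView filters are applied automatically, with no $filter needed for these conditions.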


Exposing Plugins as Bound Actions for Power Automate Flows: A Practical Procedure for Efficiently Processing Multiple Records

In complex business processes, like calculating commissions or validating data across multiple records, applying the same logic repeatedly in a Power Automate flow can quickly become inefficient and difficult to maintain. A more scalable approach is to encapsulate the logic in a Dataverse plugin, expose it as a bound action, and then call this action from a flow. This method centralizes business rules, reduces redundancy, and improves maintainability. In this post, we’ll walk through the steps to implement this approach and examine its advantages over applying the same logic directly within a flow for each individual record. We’ll illustrate this with a practical example from a Houston-based technology consulting and cybersecurity services firm that specializes in modern digital transformation and enterprise security solutions.

Flow Diagram

Step 1: Create the Plugin

The first step is to write a plugin that contains the logic you want to apply to each record. Example: DuplicateCommissionsCounter. (A minimal sketch of such a plugin is included at the end of this post.)

Step 2: Expose the Plugin as a Bound Action

Instead of running plugin logic manually for each record, you can register it as a bound action in Dataverse. Procedure:
1. Create a new custom action in Dataverse, bound to the target table (for example, Invoice).
2. Attach your plugin to this action.

Outcome: this exposes your plugin logic as a reusable, callable bound action. Any process or flow can now invoke it for a specific invoice record.

Step 3: Use Power Automate to Call the Bound Action

Once the plugin is exposed, you can loop through multiple records in a flow and call the action. In Power Automate, list the records that need processing, loop through them, and call the bound action for each record (for example, with the Dataverse connector’s "Perform a bound action" step). This approach ensures that all complex logic resides in the plugin, while the flow orchestrates which records need processing.

Advantages Over Logic Directly in the Flow

The business rules live in one server-side component instead of being duplicated across flow branches, the flow stays simple and focused on orchestration, and changes to the logic only need to be made in the plugin.

To conclude, exposing plugins as bound actions is a robust, maintainable way to apply complex logic across multiple records in Dataverse. It allows Power Automate flows to focus on orchestration rather than logic execution, leading to cleaner, faster, and easier-to-manage solutions. We hope you found this blog useful, and if you would like to discuss anything, you can reach out to us at transform@cloudfronts.com
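Here is a minimal sketch of what the plugin behind that bound action can look like. The commission entity and field names (cf_commission, cf_invoiceid) and the output parameter name (DuplicateCount) are illustrative assumptions; the output parameter must be defined on the custom action itself.

```csharp
using System;
using Microsoft.Xrm.Sdk;
using Microsoft.Xrm.Sdk.Query;

public class DuplicateCommissionsCounter : IPlugin
{
    public void Execute(IServiceProvider serviceProvider)
    {
        var context = (IPluginExecutionContext)serviceProvider.GetService(typeof(IPluginExecutionContext));
        var factory = (IOrganizationServiceFactory)serviceProvider.GetService(typeof(IOrganizationServiceFactory));
        var service = factory.CreateOrganizationService(context.UserId);

        // For a bound action, the record it was invoked on arrives as the "Target" input parameter.
        var invoiceRef = (EntityReference)context.InputParameters["Target"];

        // Count commission rows linked to this invoice; anything beyond the first is a duplicate.
        var query = new QueryExpression("cf_commission")
        {
            ColumnSet = new ColumnSet("cf_commissionid")
        };
        query.Criteria.AddCondition("cf_invoiceid", ConditionOperator.Equal, invoiceRef.Id);

        int total = service.RetrieveMultiple(query).Entities.Count;

        // Return the result through an output parameter defined on the action,
        // so the calling flow can read it directly from the action response.
        context.OutputParameters["DuplicateCount"] = Math.Max(0, total - 1);
    }
}
```

In the flow, each loop iteration performs the bound action against one invoice and reads DuplicateCount from the response, so the flow itself never has to query or compare commission records.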


Renewing SSL Certificates in Dynamics 365 Finance and Operations

Managing secure operations in Microsoft Dynamics 365 Finance and Operations (D365FO) is crucial for enterprise-level environments. While working in a Dev environment, I recently encountered an issue while posting the packing slip for a Sales Order. After troubleshooting, I identified that the problem was related to expired SSL certificates in a cloud-hosted environment. SSL certificates in D365FO environments remain valid for one year. To maintain security and avoid disruptions, it’s necessary to renew these certificates regularly, a process called credential rotation, managed via the Lifecycle Services (LCS) portal. In this blog, I’ll guide you step by step on how to resolve this issue through SSL certificate rotation.

Why SSL Certificate Rotation Is Important

When deploying Dynamics 365 Finance and Operations as a cloud-hosted environment, SSL certificates are used to encrypt data and ensure secure communication between servers. Expired certificates can disrupt functionality. Regular rotation of credentials is a best practice to maintain smooth operations and robust cybersecurity.

Step-by-Step Process to Rotate SSL Certificates in Dynamics 365 Finance & Operations

Here’s how you can resolve the issue and renew SSL certificates in your environment:
Step 1: Log into the LCS environment
Step 2: Navigate to the implementation project
Step 3: Initiate credential rotation
Step 4: Rotate the SSL certificates
Step 5: Wait for the process to complete
Step 6: Verify the deployment status

To conclude, regularly rotating SSL certificates not only resolves operational issues but also ensures compliance with enterprise-level cybersecurity practices. By following the above steps, you can maintain the security and functionality of your Dynamics 365 Finance and Operations cloud-hosted environments. We hope you found this blog useful, and if you would like to discuss anything, you can reach out to us at transform@cloudfronts.com

