How to Enable Recycle Bin in Dynamics 365 CRM
When working with Dynamics 365 CRM, one common request from users and admins is: "How do we get a Recycle Bin to recover accidentally deleted records?" Unlike SharePoint or Windows, Dynamics 365 does not ship with a native Recycle Bin. That doesn't mean you're out of luck: there are a few practical ways to implement soft-delete or restore capabilities, depending on your organization's needs. In this blog, we'll explore the available options, from built-in Power Platform features to custom approaches, for simulating or enabling Recycle Bin-like functionality in Dynamics 365 CRM.

Option 1: Use the Built-in Dataverse Recycle Bin (Preview/GA in Some Regions)
Microsoft is gradually rolling out a Recycle Bin feature for Dataverse environments.
How to Enable:

Option 2: Implement a Custom Recycle Bin (Recommended for Full Control)
The idea is to soft-delete records (for example, by flagging them with a custom status and filtering them out of active views) rather than removing them immediately. You can then run a bulk delete after 15-30 days to actually clear these records from Dataverse. A minimal plugin sketch for one possible variant of this approach is included at the end of this post.

Option 3: Restore from Environment Backups
If a record is permanently deleted, your last line of defence is a full environment restore. This is not ideal for frequent recovery, but it can be a lifesaver after a major accident.

Tips and Tools You Can Use
- Auditing: if you also want to track who deleted what and when, auditing can help. Note that you cannot restore deleted records with auditing; it is useful only for traceability and compliance, not recovery.
- XrmToolBox: plugins such as Recycle Bin Manager simulate soft delete and allow browsing deleted records.

While Dynamics 365 CRM doesn't provide a built-in Recycle Bin like other Microsoft products, there are several reliable ways to implement soft-delete or recovery mechanisms that fit your organization's needs. Whether you leverage Dataverse's native capabilities, create a custom status-based Recycle Bin, or track deletions through auditing and backups, it's essential to plan ahead for data protection and user experience. By proactively enabling recovery options, you not only safeguard critical business data but also give users confidence and control over their CRM operations.

What's Your Approach?
Have you built your own Recycle Bin experience in Dynamics 365? Share your thoughts or tips in the comments below! We hope you found this blog useful, and if you would like to discuss anything, you can reach out to us at transform@cloudfronts.com.
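For readers who want to see what the custom approach in Option 2 can look like in code, here is a minimal sketch of a pre-operation plugin on the Delete message that archives basic details of a record into a hypothetical custom "Recycle Bin" table before the delete completes. The table and column names (new_recyclebinitem, new_name, and so on) are assumptions for illustration only; adapt them to your own solution.

```csharp
using System;
using Microsoft.Xrm.Sdk;

// Minimal sketch: register on the Delete message, Pre-Operation stage,
// with a pre-image named "PreImage" containing the columns you want to keep.
public class ArchiveOnDeletePlugin : IPlugin
{
    public void Execute(IServiceProvider serviceProvider)
    {
        var context = (IPluginExecutionContext)serviceProvider.GetService(typeof(IPluginExecutionContext));
        var factory = (IOrganizationServiceFactory)serviceProvider.GetService(typeof(IOrganizationServiceFactory));
        var service = factory.CreateOrganizationService(context.UserId);

        if (context.MessageName != "Delete" || !context.PreEntityImages.Contains("PreImage"))
            return;

        Entity preImage = context.PreEntityImages["PreImage"];

        // Hypothetical custom table acting as the recycle bin.
        var binEntry = new Entity("new_recyclebinitem");
        binEntry["new_name"] = preImage.GetAttributeValue<string>("name");
        binEntry["new_sourceentity"] = preImage.LogicalName;
        binEntry["new_sourcerecordid"] = preImage.Id.ToString();
        binEntry["new_deletedby"] = context.InitiatingUserId.ToString();
        binEntry["new_deletedon"] = DateTime.UtcNow;

        // A separate "restore" flow or app can recreate the record from this entry.
        service.Create(binEntry);
    }
}
```

A scheduled bulk-delete job on the recycle bin table (for example, after 15-30 days) keeps it from growing indefinitely, in line with the cleanup suggestion above.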
Add or Remove Sample Data in a Dynamics 365 CRM Environment
Let's say you configured a Dynamics 365 Sales, Project Operations, or Field Service trial for a client demo. To save you the effort of creating sample data manually, Dynamics 365 lets you add sample data to any environment. You can choose to install the sample data while creating the environment, but if you forgot to do so, here is how you can add it afterwards.

Step 1 – Go to https://admin.powerplatform.microsoft.com/environments, select your Dynamics 365 environment, and click View details.
Step 2 – On the details page, click Settings.
Step 3 – On the Settings page, under Data management, you will see an option named Sample data; click on it.
Step 4 – Click Install, and after a few minutes the sample data will be added to your Dynamics 365 environment.

Similarly, if sample data is already installed and you wish to remove it, you will see a Remove sample data button instead of Install sample data. (A note on scripting this via the Web API follows at the end of this post.)

Hope this helps! 😊 I hope you found this blog useful, and if you would like to discuss anything, you can reach out to us at transform@cloudfronts.com.
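As a side note for developers: if you prefer to script this instead of clicking through the admin center, Dataverse also exposes InstallSampleData and UninstallSampleData actions through the Web API. The sketch below is illustrative only (the organization URL is a placeholder, and you should verify the action names and API version against the current Microsoft documentation for your environment):

```http
POST https://yourorg.crm.dynamics.com/api/data/v9.2/InstallSampleData HTTP/1.1
Authorization: Bearer <access-token>
Content-Type: application/json

POST https://yourorg.crm.dynamics.com/api/data/v9.2/UninstallSampleData HTTP/1.1
Authorization: Bearer <access-token>
Content-Type: application/json
```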
QA Made Easy with KQL in Azure Application Insights
In today's world of modern DevOps and continuous delivery, the ability to analyze application behaviour quickly and efficiently is key to Quality Assurance (QA). Azure Application Insights offers powerful telemetry collection, but what makes it truly shine is the Kusto Query Language (KQL), a rich, expressive query language that enables deep-dive analytics into your application's performance, usage, and errors. Whether you're testing a web app, monitoring API failures, or validating load test results, KQL can become your best QA companion.

What is KQL?
KQL stands for Kusto Query Language, and it's used to query telemetry data collected by Azure Monitor, Application Insights, and Log Analytics. It is designed to read almost like English, with SQL-style expressions, yet it is far more powerful for telemetry analysis.

Challenges Faced with Application Insights in QA
1. Telemetry data doesn't always show up immediately after execution, causing delays in debugging and test validation.
2. When testing involves thousands of records, isolating failed requests or exceptions becomes tedious and time-consuming.
3. The default portal experience lacks intuitive filters for QA-specific needs like test case IDs, custom payloads, or user roles.
4. Repeated logs from expected failures (e.g., negative test cases) can clutter insights, making it hard to focus on actual issues.
5. Out-of-the-box telemetry doesn't group actions by test scenario or user session unless explicitly configured, making traceability difficult during test case validation.

To overcome these limitations, QA teams need more than just default dashboards: they need flexibility, precision, and speed in analyzing telemetry. This is where Kusto Query Language (KQL) becomes invaluable. With KQL, testers can write custom queries to filter, group, and visualize telemetry exactly the way they need, allowing them to focus on real issues, validate test scenarios, and make data-driven decisions faster and more efficiently.

Some common scenarios where KQL proves to be very effective (illustrative query sketches for a few of these follow at the end of this post):
- Check if the latest deployment introduced new exceptions
- Find all failed requests
- Analyse the performance of a specific page or operation
- Correlate requests with exceptions
- Validate custom event tracking (like button clicks)
- Track specific user sessions for end-to-end QA testing
- Test API performance under load

All of this can be visualized too: you can pin your KQL queries to Azure Dashboards or even Power BI for real-time tracking during QA sprints.

To conclude, KQL is not just for developers or DevOps. QA engineers can significantly reduce manual log-hunting and accelerate issue detection by writing powerful queries in Application Insights. By incorporating KQL into your testing lifecycle, you add an analytical edge to your QA process, making quality not just a gate but a continuous insight loop. Start with a few basic queries, and soon you'll be building powerful dashboards that QA, Dev, and Product can all share!

Hope this helps! I hope you found this blog useful, and if you would like to discuss anything, you can reach out to us at transform@cloudfronts.com.
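To make the scenarios above concrete, here are a few illustrative KQL sketches of the kind of queries the post refers to. The tables and columns (requests, exceptions, customEvents, duration, resultCode) are standard Application Insights tables; the operation and event names used in the filters are hypothetical placeholders.

```kusto
// Find all failed requests in the last hour, grouped by operation and result code
requests
| where timestamp > ago(1h)
| where success == false
| summarize failures = count() by name, resultCode
| order by failures desc

// Check whether the latest deployment introduced new exceptions (adjust the cutoff)
exceptions
| where timestamp > ago(1d)
| summarize occurrences = count() by problemId, outerMessage
| order by occurrences desc

// Analyse performance of a specific page or operation (operation name is a placeholder)
requests
| where name == "GET /orders"
| summarize avg(duration), percentile(duration, 95) by bin(timestamp, 5m)

// Validate custom event tracking, e.g. button clicks (event name is a placeholder)
customEvents
| where name == "SubmitOrderClicked"
| summarize clicks = count() by bin(timestamp, 1h)
```

Any of these can be pinned to an Azure Dashboard for the real-time tracking mentioned above.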
Merging Unmanaged Solutions in Power Platform with XRMToolBox
Let's say you are developing a model-driven app or doing some custom app development in CRM, and multiple teams have created several different solutions containing customizations for the project. The best practice is to consolidate all the customizations into a single solution before moving them to UAT or Production. In this blog, I will show you how to move components from multiple solutions into a single main solution using the Solution Component Mover tool in XrmToolBox. So let's begin.

Step 1: Download XrmToolBox from this link – https://www.xrmtoolbox.com/
Step 2: Create a connection to your Dynamics 365 environment inside XrmToolBox by clicking Create a new connection.
Step 3: Click Microsoft Login Control.
Step 4: Click Open Microsoft Login Control.
Step 5: Select Display list of available organizations and Show advanced, enter your username and password, and after successful authentication, name your connection.
Step 6: In the Tool Library, search for "Solution Component Mover" and hit Install.
Step 7: Once the tool is installed, it will appear in your tool list; click on it.
Step 8: Once you are in the Solution Component Mover tool, click Load Solutions.

You will now get a list of all managed and unmanaged solutions. Select the solutions you want to merge in the Source Solutions section and select the target solution into which you want to move the components (selected solutions are highlighted in light grey). All the components from the source solutions will be moved to the target solution. Once you have selected the source and target solutions, hit Copy Components and you are done.

We hope you found this blog useful, and if you would like to discuss anything, you can reach out to us at transform@cloudfronts.com.
Azure Integration Services (AIS): The Key to Scalable Enterprise Integrations
In today's dynamic business environment, organizations rely on multiple applications, systems, and cloud services to drive operations, making scalable enterprise integrations essential. As businesses grow, their data flow and process complexity increase, demanding integrations that can handle expanding workloads without performance bottlenecks. Scalable integrations ensure seamless data exchange, real-time process automation, and interoperability between diverse platforms like CRM, ERP, and third-party services. They also provide the flexibility to adapt to evolving business needs, supporting digital transformation and innovation. Without scalable integration frameworks, enterprises risk inefficiencies, data silos, and high maintenance costs, limiting their ability to scale operations effectively.

Are you finding it challenging to scale your business operations efficiently? In this blog, we'll look into key Azure Integration Services that can help overcome common integration hurdles.

Before we get into AIS, let's start with some business numbers; after all, money is what matters most to any business. Several organizations have reported significant cost savings and operational efficiencies after implementing Azure Integration Services (AIS). Here are some notable examples.

Measurable Business Benefits with AIS
A financial study evaluating the impact of deploying AIS found that organizations experienced benefits totalling $868,700 over three years.

Modernizing Legacy Integration: BizTalk to AIS
A financial institution struggling with outdated integration adapters transitioned to Azure Integration Services. By leveraging Service Bus for reliable message delivery and API Management for secure external API access, they reduced operational costs by 25% and improved system scalability.

These examples demonstrate the substantial cost reductions and efficiency improvements that businesses can achieve by leveraging Azure Integration Services. To put this into perspective, we'll explore real-world industry challenges and how Azure's integration solutions can effectively resolve them.

Example 1: Secure & Scalable API Management for a Manufacturing Company
Scenario: A global auto parts manufacturer supplies components to multiple automobile brands and exposes APIs that its suppliers consume.
Challenges: They face serious challenges around securing, versioning, and monitoring these APIs; these are only top-level issues, and there can be many more complexities in practice.
Solution: Azure API Management (APIM). The manufacturer deploys Azure API Management to secure, manage, and monitor their APIs.
Step 1: Secure APIs – APIM enforces OAuth-based authentication so only authorized suppliers can access APIs, and rate limiting prevents overuse.
Step 2: API Versioning – Different suppliers use v1 and v2 of the APIs; APIM ensures smooth version transitions without breaking old integrations.
Step 3: Analytics & Monitoring – The company gets real-time insight into API usage, detecting slow queries and bottlenecks.

Example 2: Reliable Order Processing with Azure Service Bus for an E-commerce Company
Scenario: A fast-growing e-commerce company processes over 50,000 orders daily across multiple sales channels (website, mobile app, and third-party marketplaces). Orders are routed to inventory, payment, and fulfilment systems.
Challenges: With direct point-to-point connections, a downstream outage or traffic spike can overload systems and lose orders.
Solution: Azure Service Bus (Message Queueing). Instead of direct connections, the company decouples services using Azure Service Bus.
Step 1: Queue-Based Processing – Orders are sent to an Azure Service Bus queue, ensuring no data loss even if downstream systems go down.
Step 2: Asynchronous Processing – Inventory, payment, and fulfilment consume messages independently, avoiding system overload.
Step 3: Dead-Letter Queue (DLQ) Handling – Failed orders are sent to a DLQ for retry instead of getting lost. (A minimal Service Bus sketch for this pattern is included at the end of this post.)

Example 3: Automating Invoice Processing with Logic Apps for a Logistics Company
Scenario: A global shipping company receives thousands of invoices from suppliers every month. These invoices must be extracted, validated against purchase orders, approved, and posted to the ERP (SAP).
Challenges: Processing this volume manually is slow and error-prone.
Solution: Azure Logic Apps for End-to-End Automation. The company automates the entire invoice workflow using Azure Logic Apps.
Step 1: Extract Invoice Data – Logic Apps connects to Office 365 and Outlook, extracts PDFs, and uses AI-powered OCR to read invoice details.
Step 2: Validate Data – The system cross-checks invoice amounts and supplier details against purchase orders in the ERP.
Step 3: Approval Workflow – If all details match, the invoice is auto-approved. If there is a discrepancy, it is sent to finance via Teams for review.
Step 4: Update SAP & Notify Suppliers – Once approved, the invoice is automatically logged in SAP, and the supplier gets a payment confirmation email.

With Azure API Management, Service Bus, and Logic Apps, businesses can secure their APIs, process messages reliably, and automate end-to-end workflows. Many organizations are also shifting towards no-code solutions like Logic Apps for faster integrations. Whether you're looking for API security, event-driven automation, or workflow orchestration, Azure Integration Services has a solution for you.

Azure Integration Services (AIS) is not just a collection of tools; it's a game-changer for businesses looking to modernize their integrations, reduce operational costs, and improve scalability. From secure API management to reliable messaging and automation, AIS provides the flexibility and efficiency needed to handle complex business workflows seamlessly. The numbers speak for themselves: organizations have saved hundreds of thousands of dollars while improving their integration capabilities. Whether you're looking to streamline supplier connections, optimize order processing, or migrate from legacy systems, AIS has a solution for you.

What's Next?
In our next article, we'll take a deep dive into a real-world scenario, showcasing how we helped our customer Buchi transform their integration landscape with Azure Integration Services.
Next Up: Why AIS? How Easily Azure Integration Services Can Adapt to Your EDI Needs.

Would love to hear your thoughts! How are you handling enterprise integrations today? Comment below or contact us at transform@cloudfronts.com
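As a footnote to the Service Bus example above, here is a minimal C# sketch of the queue-based pattern using the Azure.Messaging.ServiceBus SDK. The connection string and queue name are placeholders, and only the sending side is shown; consumers would read the same queue independently.

```csharp
using System.Threading.Tasks;
using Azure.Messaging.ServiceBus;

public static class OrderQueue
{
    // Placeholders: supply your own namespace connection string and queue name.
    private const string ConnectionString = "<service-bus-connection-string>";
    private const string QueueName = "orders";

    public static async Task EnqueueOrderAsync(string orderJson)
    {
        await using var client = new ServiceBusClient(ConnectionString);
        ServiceBusSender sender = client.CreateSender(QueueName);

        // The message is persisted by the queue, so downstream outages do not lose orders;
        // messages that repeatedly fail processing land in the dead-letter queue for retry.
        await sender.SendMessageAsync(new ServiceBusMessage(orderJson));
    }
}
```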
Error Handling Techniques in Dynamics 365 Plugins
Have you ever struggled with debugging errors in Dynamics 365 plugins? If you've been working with Dynamics 365 plugins, you've likely encountered scenarios where your plugin failed unexpectedly. Debugging these failures can be a challenge, especially in production environments where attaching a debugger is not always an option. How do you ensure that errors are logged effectively? How do you prevent the plugin from breaking critical business processes? In this blog, I will walk you through the best error-handling techniques for Dynamics 365 plugins, ensuring that you can capture, log, and handle errors gracefully.

Why Trust Me?
As a Microsoft Certified Trainer and Dynamics 365 Consultant, I have extensive experience working with Dynamics 365 CRM, Power Platform, and Azure. Over the years, I have encountered and resolved numerous plugin errors in live environments. Through my blogs and speaking engagements, I have shared insights on building robust and scalable solutions in Dynamics 365. This experience allows me to provide practical and effective error-handling strategies that you can implement immediately.

Understanding Plugin Execution and Error Scenarios
Before diving into error-handling techniques, let's briefly recap the plugin execution model. Plugins in Dynamics 365 execute in sandbox (isolated) mode or full-trust (non-isolated) mode, and can run synchronously or asynchronously. Common error scenarios include null references, failed calls to external services, business logic exceptions, and recursive plugin execution. Now, let's explore how to handle these errors effectively.

1. Using Try-Catch Blocks for Exception Handling
The simplest and most effective way to handle errors is to wrap your plugin logic inside a try-catch block so that failures surface as meaningful messages instead of unexplained crashes.

2. Using ITracingService for Logging
Dynamics 365 provides the ITracingService to log debug messages, which is particularly useful in sandboxed plugins where direct debugging is not possible.

3. Logging Errors to a Custom Entity
For persistent logging, consider storing error details in a custom entity (e.g., Plugin Error Log).

4. Using Secure Configuration for External API Calls
If your plugin interacts with external APIs, store credentials in the secure configuration rather than hardcoding them.

5. Handling Recursion and Infinite Loops
Dynamics 365 lets you detect recursive plugin execution using the Depth property of IPluginExecutionContext; exit early when the depth exceeds your expected threshold.

(A combined sketch covering points 1, 2, and 5 is included at the end of this post.)

Conclusion
Error handling in Dynamics 365 plugins is crucial for maintaining stability and ensuring seamless business operations. By implementing try-catch blocks, using the tracing service, logging errors to a custom entity, managing secure configurations, and handling recursion, you can build robust and maintainable plugins. I encourage you to apply these techniques to your plugins and explore additional monitoring tools like Application Insights for even better observability. Have you faced any plugin debugging challenges? Share your experiences in the comments below!

We hope you found this blog useful, and if you would like to discuss anything, you can reach out to us at transform@cloudfronts.com.
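Here is the combined sketch referenced above. It is an illustration of how try-catch, ITracingService, a depth check, and InvalidPluginExecutionException fit together in a single plugin, not a drop-in implementation; the entity-specific business logic is left as a placeholder.

```csharp
using System;
using Microsoft.Xrm.Sdk;

public class SafeAccountPlugin : IPlugin
{
    public void Execute(IServiceProvider serviceProvider)
    {
        var tracing = (ITracingService)serviceProvider.GetService(typeof(ITracingService));
        var context = (IPluginExecutionContext)serviceProvider.GetService(typeof(IPluginExecutionContext));

        // Technique 5: guard against recursion and infinite loops.
        if (context.Depth > 1)
        {
            tracing.Trace("Depth {0} detected; exiting to avoid recursive execution.", context.Depth);
            return;
        }

        try
        {
            // Technique 2: trace key checkpoints so sandbox failures can be diagnosed.
            tracing.Trace("Started: message {0} on {1}.", context.MessageName, context.PrimaryEntityName);

            var factory = (IOrganizationServiceFactory)serviceProvider.GetService(typeof(IOrganizationServiceFactory));
            var service = factory.CreateOrganizationService(context.UserId);

            // ... your business logic goes here ...
        }
        catch (InvalidPluginExecutionException)
        {
            throw; // already carries a user-friendly message
        }
        catch (Exception ex)
        {
            // Technique 1: log the details, then surface a clean message to the user.
            tracing.Trace("Unhandled exception: {0}", ex.ToString());
            throw new InvalidPluginExecutionException(
                "An unexpected error occurred in the account plugin. Please contact your administrator.", ex);
        }
    }
}
```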
Easy JavaScript Examples for Dynamics 365 CRM – Repository
Are you tired of spending hours searching for the right JavaScript functions to use in Dynamics 365 CRM? If so, you're not alone. Developers often struggle with commonly used functions scattered across different sources, which makes it frustrating to build quick solutions or fix bugs. What if you had a single repository containing all the essential JavaScript functions at your fingertips? That's exactly what this blog offers: a one-stop resource where you'll find everything you need, from retrieving field values to automating actions on forms. With these functions in one place, you can save time, eliminate guesswork, and focus on creating impactful solutions for your Dynamics 365 projects.

As a Microsoft Certified Trainer (MCT) and Microsoft Certified Professional, I've spent my career deploying Dynamics 365 solutions for organizations across the globe. My hands-on experience in architecting and implementing complex solutions has given me deep insight into the challenges developers face, one of the most common being finding and applying the right JavaScript functions efficiently.

Let's explore the most commonly used JavaScript functions for quick reference and seamless development (an illustrative sample is included at the end of this post).

Best Practices:
- Always check whether a field or control is null before interacting with it.
- Keep JavaScript functions modular and reusable.
- Avoid deprecated APIs; always follow the latest Microsoft documentation.

Conclusion:
JavaScript is a game-changer when it comes to customizing Dynamics 365 CRM, and having a go-to repository of commonly used functions can save you significant time and effort. With these functions at your fingertips, you'll be better equipped to build dynamic forms, automate processes, and enhance the overall user experience, which means smoother operations for your business. Now that you've explored these essential JavaScript functions, why not take your Dynamics 365 knowledge even further? Check out this blog on error handling in Dynamics 365 plugins to strengthen your expertise in server-side customizations as well. Bookmark this repo, and let's make development faster and easier together!
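As a starting point for the repository idea above, here is an illustrative sample of commonly used Client API calls. The field names (telephone1, websiteurl) are standard account columns used purely as examples; swap in your own schema names and form events.

```javascript
// Register on a form event (e.g. OnLoad) and pass the execution context.
function applyCommonPatterns(executionContext) {
    var formContext = executionContext.getFormContext();

    // Get and set a field value (always null-check first)
    var phone = formContext.getAttribute("telephone1");
    if (phone !== null && phone.getValue() === null) {
        phone.setValue("+1-555-0100");
    }

    // Show/hide and enable/disable a control
    var phoneControl = formContext.getControl("telephone1");
    if (phoneControl !== null) {
        phoneControl.setVisible(true);
        phoneControl.setDisabled(false);
    }

    // Make a field required and show a form-level notification
    formContext.getAttribute("websiteurl").setRequiredLevel("required");
    formContext.ui.setFormNotification("Defaults applied by script.", "INFO", "defaults_note");

    // Retrieve data for the current record with the Web API
    Xrm.WebApi.retrieveRecord("account", formContext.data.entity.getId(), "?$select=name").then(
        function (result) { console.log("Account name: " + result.name); },
        function (error) { console.log(error.message); }
    );
}
```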
What a Service Request Management System Would Look Like for a Growing Business
Introduction: For growing businesses, processes, reports, and efficient systems matter, and service request management becomes equally crucial. As companies scale, the volume and complexity of service requests increase, making efficient management essential to maintaining operational flow and customer satisfaction. A well-designed Service Request Management System (SRMS) helps align workflows, reduce response times, and enhance service delivery. In this article, we will cover what such a system typically entails and why it is vital for a growing business.

Let's look at some of the key components that make an SRMS most effective.

Now that we have covered the key components, let us look at what a service request management system should be like for a growing business and how to set it up. Here are some key points to consider. Every company should establish a support email address (for example, support@companydomain.com) to facilitate customer queries. Customers typically prefer using a support email over a phone number or support portal, making it the most convenient method for logging service requests.

Process flow diagram for an SRMS

Conclusion: A solid service request management system (SRMS) is a game-changer for any growing business. By centralizing your service requests, automating processes, and setting clear expectations with SLAs, you can keep things running smoothly and keep your customers happy. Features like real-time updates, automatic case assignment, and a self-service portal make life easier for both your team and your customers. With these tools, you can handle more requests efficiently, ensure quick resolutions, and maintain high service standards as your business grows. Investing in a good SRMS means you're building a responsive, customer-focused business that can thrive even as it expands.

Here is our featured Customer Success Story: Revolution Cooking partnered with CloudFronts for Dynamics 365 enhancements and data integration with third-party applications.

We hope you found this article useful, and if you would like to discuss anything, you can reach out to us at transform@cloudfronts.com
Integrating Project Operations to Financial Platforms
Introduction
Dynamics 365 Project Operations (PO) is a project management application within the Dynamics 365 suite. It is designed to manage project-related tasks, schedules, resources, and budgets. While it includes some financial functionality, it lacks the comprehensive financial management capabilities that dedicated financial platforms offer. In this article, we will explore several functions that Project Operations cannot perform as effectively as financial platforms like QuickBooks (QB) or Dynamics 365 Business Central (BC). We will also discuss how to bridge this gap and create a seamless integration between Project Operations and these financial platforms.

Let's first look at where Project Operations falls short and what financial platforms like QuickBooks or Dynamics 365 Business Central can offer.

Accounting Functionalities
General Ledger Management: Financial platforms provide robust general ledger management, allowing detailed tracking and reporting of all financial transactions across the entire organization.
Accounts Payable and Receivable: They manage accounts payable (AP) and accounts receivable (AR) efficiently, including invoicing, bill payments, and collections.
Tax Compliance: Financial platforms are equipped with tools to manage tax calculations, filings, and compliance with local and international tax regulations.
Financial Reporting: Financial platforms offer extensive reporting capabilities, including profit and loss statements, balance sheets, and customizable financial reports.
Audit Trails: Financial platforms maintain detailed audit trails of all financial transactions, which are crucial for internal audits and external regulatory audits.

To leverage the project management features of Project Operations and the financial features discussed above, businesses often choose to integrate the two systems.

Integration Approach
Custom integration offers the utmost flexibility when connecting Project Operations with QuickBooks or Business Central. Several key considerations and entities are important to ensure a seamless integration:

Data Mapping
Tables: Identify the key entities (tables) such as projects, expenses, invoices, customers, vendors, contacts, and accounts that need to be synchronized between Project Operations and the financial platform.
Mapping: Map the fields and attributes of these entities between the two systems to ensure accurate data transfer and synchronization.
Tip: A best practice is to create a mapping Excel workbook to maintain the table and column mappings between the systems (an illustrative mapping sketch is included at the end of this post).

Chart of Accounts (COA)
Proper alignment between the chart of accounts in Project Operations and the financial platform is necessary to facilitate accurate financial reporting and reconciliation.
Tip: Creating custom tables for your Chart of Accounts and designating the financial system as the source of truth for COAs is recommended. This approach offers the flexibility to associate COAs with expenses, materials, roles, and so on.

API Integration
API Access: Check whether the financial platform offers APIs for integration.
Integration Points: Determine the integration points where data will be exchanged between the two systems, such as project creation, expense tracking, invoice generation, and payment reconciliation.

Data Flow
Data Direction: Define the direction of data flow between Project Operations and the financial platform, ensuring consistency and integrity of the data. The source and target systems should be clearly defined.
Real-Time Sync: Decide whether data synchronization will occur in real time or through scheduled batch processes, based on business requirements.

Currency
Currency Conversion: Consider currency conversion requirements when dealing with contracts or transactions in multiple currencies.

Error Handling and Logging
Error Handling: Implement mechanisms to handle data validation errors, inconsistencies, and exceptions during data transfer between systems.
Logging: Maintain logs of integration activities and errors for troubleshooting, audit trails, and compliance purposes.

Security
Authentication: Implement secure authentication mechanisms to ensure data privacy and integrity during data exchange between systems.
Access Control: Define roles and permissions to restrict access to sensitive data and functionality based on user roles and responsibilities.

Testing
Set up dedicated testing sessions to validate the integration setup, data mappings, and synchronization processes before deploying to production.

Integration Process Flow Diagrams
Create a process flow diagram for each entity; for example, below is an integration process flow diagram for integrating accounts, contacts, and vendors from Project Operations to QuickBooks.

In conclusion, while Project Operations is essential for managing the operational aspects of projects, it lacks the depth and breadth of functionality offered by dedicated financial platforms. Financial platforms provide accounting, regulatory compliance, advanced financial reporting, cash flow management, and more, which are crucial for the overall financial health and strategic planning of an organization. Integrating these platforms with Project Operations leverages the strengths of both, ensuring efficient project management and robust financial oversight.

Here is our featured Customer Success Story: Armexa, a leading US-based industrial cybersecurity company, partnered with CloudFronts for services automation with Microsoft Dynamics 365 Project Operations and Business Central.

We hope you found this article useful, and if you would like to discuss anything, you can reach out to us at transform@cloudfronts.com
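To illustrate the mapping tip mentioned under Data Mapping, here is a small, hypothetical field-mapping sketch for synchronizing accounts from Project Operations (Dataverse) to QuickBooks Online customers. The Dataverse columns shown are standard account fields; the QuickBooks field names should be verified against the QuickBooks Online API documentation for your version.

```json
{
  "entity": { "source": "account (Dataverse)", "target": "Customer (QuickBooks Online)" },
  "direction": "ProjectOperations -> QuickBooks",
  "syncMode": "scheduled-batch",
  "fieldMap": [
    { "source": "name",          "target": "DisplayName" },
    { "source": "emailaddress1", "target": "PrimaryEmailAddr.Address" },
    { "source": "telephone1",    "target": "PrimaryPhone.FreeFormNumber" },
    { "source": "address1_city", "target": "BillAddr.City" }
  ]
}
```

The same structure, kept in Excel or source control, works for contacts, vendors, invoices, and the other entities listed above.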
Get Owners of a Teams Channel Using Power Automate Flow
With Power Automate, it has become easier to post automated messages and approvals in Microsoft Teams. In this blog, we will explore some Power Automate actions that help us send these alert messages and approvals to the owners of a Microsoft Teams channel only. So let's begin!

Let's say we have a Teams channel with members, as shown in the snapshot below, and we need to send approvals to the owner of the channel only. Here is how it is done.

Step 1: In the Power Automate flow, search for the Office 365 Groups actions and select the List all members action. Select the Team's name from the dropdown. (Note: when a Team is created, it is backed by an Office 365 group.)

Step 2: Here we will use the Microsoft Graph API to get the owners of the group; see the documentation at https://learn.microsoft.com/en-us/graph/api/group-list-owners?view=graph-rest-1.0&tabs=http
API: GET /groups/{id}/owners
To get the group ID, go to https://admin.microsoft.com/ and follow the snapshot below. (Note: you need admin privileges to get the ID of the group.)

Step 3: After we run the flow, we get the output for Step 2 as shown in the snapshot below. We now need the "mail" property from each item in the "value" array returned in Step 2, so we use a Select action in Power Automate to extract the email addresses.

Step 4: Finally, we use a join expression to combine the email addresses, separated by semicolons (;), so that we can use them in the Outlook action. (An illustrative sketch of these expressions is included at the end of this post.)

Power Automate Flow Screenshots:
Output:
Hope this helps 😉
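For reference, here is a rough sketch of the expressions used in Steps 3 and 4. The action names inside body(...) are placeholders and must match the actual names of the actions in your flow; join() and item() are standard Power Automate expression functions.

```text
// Select action - map each owner object returned by the Graph call to its mail property
From:  body('Get_group_owners')?['value']
Map:   item()?['mail']

// Join the selected e-mail addresses with ';' for the Outlook "To" field
join(body('Select'), ';')
```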