Latest Microsoft Dynamics 365 Blogs | CloudFronts - Page 12

Automating Document Vectorization from SharePoint Using Azure Logic Apps and Azure AI Search

In modern enterprises, documents stored across platforms like SharePoint often remain underutilized due to the lack of intelligent search capabilities. What if your organization could automatically extract meaning from those documents, turning them into searchable vectors for advanced retrieval systems? That’s exactly what we’ve achieved by integrating Azure Logic Apps with Azure AI Search.

Workflow Overview
Whenever a user uploads a file to a designated SharePoint folder, a scheduled Azure Logic App picks it up and copies it to Azure Blob Storage. Once stored, a scheduled Azure AI Search indexer kicks in and runs a skillset pipeline to extract, transform, and vectorize the content.

Technologies / resources used:
- SharePoint: A common document repository for enterprise users, ideal for collaborative uploads.
- Azure Logic Apps: Provides low-code automation to monitor SharePoint for changes and sync files to Blob Storage. It ensures a reliable, scheduled trigger mechanism with minimal overhead.
- Blob Storage: Serves as the staging ground where documents are centrally stored for indexing; cheaper and more scalable than relying solely on SharePoint connectors.
- Azure AI Search (Cognitive Search): The intelligence layer that runs a skillset pipeline to extract, transform, and vectorize the content, enabling semantic search, multimodal RAG (Retrieval Augmented Generation), and other AI-enhanced scenarios.

Why Not Vectorize Directly from SharePoint? The native SharePoint Online indexer has documented limitations (see the references below), which is why the files are first staged in Blob Storage.
References:
1. https://learn.microsoft.com/en-us/azure/search/search-howto-index-sharepoint-online
2. https://learn.microsoft.com/en-us/azure/search/search-howto-indexing-azure-blob-storage

How to achieve this?

Stage 1: Logic App to sync SharePoint files to Blob Storage
First, create a designated SharePoint directory to upload the documents that need vectorization. Then create the Logic App to replicate the files, along with their format and properties, to the associated Blob Storage:
1] In the trigger action “When an item is created or modified”, assign the site address and the directory name where the documents are uploaded in SharePoint.
2] Assign a recurrence frequency, start time, and time zone to check for new documents and keep the blob container updated.
3] Add the action “Get file content using path” and dynamically provide the full path (including the file extension) from the trigger.
4] Finally, add an action to create blobs in the designated container that will be vectorized: provide the storage account name, the directory path, the blob name (dynamically taken as the file name with extension from the trigger), and the blob content (from the “Get file content” action).
5] On successfully saving and running this Logic App, either manually or on trigger, the files are replicated in their exact form to Blob Storage.

Stage 2: Azure AI Search resource to vectorize the files in Blob Storage
In the Azure Portal, search for the Azure AI Search service, provide the necessary details, and select a pricing tier based on your requirements. Once the resource is created, select “Import and vectorize data”. From the two options, RAG and Multimodal RAG Index, select the latter. RAG combines a retriever (to fetch relevant documents) with a generative language model (to generate answers) using text-only data. Multimodal RAG extends the RAG architecture to include multiple data types such as text, images, tables, PDFs, diagrams, audio, or video.
Workflow: Now follow the steps and provide the necessary details for the index creation:
- Enable deletion tracking to remove the records of deleted documents from the index.
- Provide a Document Intelligence resource to enable OCR and to get location metadata for multiple document types.
- Select image verbalization (to verbalize text in images) or multimodal embedding to vectorize the whole image.
- Assign the LLM model for generating the embeddings for the text/images.
- Provide an image output location to store images extracted from the files.
- Assign a schedule to refresh the indexer and keep the search index up to date with new documents.
Once the index is successfully created, search keywords in its Search explorer to verify the vectorization; results are returned to the user ranked by relevance and score/distance to the search query (a small sketch for querying the index programmatically follows below). Let us test this index in a custom Copilot agent by importing it as an Azure AI Search knowledge source. On fetching certain document-specific information, the index is searched for the most appropriate content, and the result is rendered in a readable format by generative AI. We hope you found this blog useful, and if you would like to discuss anything, you can reach out to us at transform@cloudfronts.com.
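As a complement to the portal’s Search explorer check described above, the index can also be queried over the Azure AI Search REST API. The sketch below is a minimal illustration and not part of the original post; the service name, index name, query API key, and the api-version value are placeholders you would replace with your own.

// Minimal sketch for verifying the vectorized index via the Azure AI Search REST API.
// SERVICE_NAME, INDEX_NAME, API_KEY and the api-version are assumptions - replace with yours.
const SERVICE_NAME = "<your-search-service>";
const INDEX_NAME = "<your-index-name>";
const API_KEY = "<your-query-api-key>";

async function searchIndex(query: string): Promise<void> {
  const url =
    `https://${SERVICE_NAME}.search.windows.net/indexes/${INDEX_NAME}` +
    `/docs/search?api-version=2024-07-01`;

  const response = await fetch(url, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "api-key": API_KEY,
    },
    body: JSON.stringify({
      search: query, // keyword query text to test against the index
      top: 5,        // return the five highest-scoring documents
    }),
  });

  const result = await response.json();
  // Each hit carries an @search.score reflecting its relevance to the query.
  for (const doc of result.value ?? []) {
    console.log(doc["@search.score"], doc);
  }
}

searchIndex("invoice payment terms").catch(console.error);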


From Data Chaos to Clarity: The Path to Becoming AI-Ready

Most organizations don’t fail because of bad AI models; they fail because their data isn’t clear. When your data is consistent, connected, and governed, your systems begin to “do the talking” for your teams. In our earlier article on whether your tech stack is holding you back from achieving AI success, we discussed the importance of building the right foundation. This time, we go deeper, because even the best stack cannot deliver without data clarity. With clarity, deals close faster, billing runs itself, projects stay on track, and leaders decide with confidence. This blueprint shows how to move from fragments to structure to precision.

The Reality Check
For many businesses today, operations still depend on spreadsheets, scattered files, and long email threads. Reports exist in multiple versions, important numbers are tracked manually, and teams spend more time reconciling than acting. This is not a tool problem; it is a clarity problem. If your inventory lives in spreadsheets, or the “latest report” means three versions shared over email, no algorithm will save you. What scales is clarity: one language for data, one source of truth, and one way to connect systems and decisions.
💡 If you cannot trust your data today, you will second-guess your decisions tomorrow.

What Data Clarity Really Means (in Plain Business Terms)
Clarity is not a buzzword. It makes data usable and enables scalability, giving every AI-ready business its foundation. Here’s what that looks like in practice:
💡 Clarity is not another software purchase. It is a shared business agreement across systems and teams.

From Chaos to Clarity: The Three Levels of Readiness
Data clarity evolves step by step. Think of it as moving from fragments to gemstones, to jewels.
Level 1: Chaos (fragments everywhere) – Data is spread across applications, spreadsheets, inboxes, and vendor portals. Duplicates exist, numbers conflict, and no one fully trusts the information.
Level 2: Structure (the gemstone taking shape) – Core entities like Customers, Products, Projects, and Invoices are standardized and mapped across systems. Data is stored in structured tables or databases. Reporting becomes stable, handovers reduce, and everyone begins pulling from a shared source.
Level 3: Composable Insights (precision) – Data is modular, reusable, and intelligent. It feeds forecasting, guided actions, and proactive alerts. Business leaders move from asking “what happened?” to acting on “what should we do next?”
[Image: Data fragments to jewel levels]
How businesses progress:
💡 Fragments refined into gemstones, and gemstones polished into jewels. Data clarity and maturity are the jewels of your business. In fragments, they hold little value.

The Minimum Readiness Stack (Microsoft First)
You don’t need a long list of tools. You need a stack that works together and grows with you:
💡 A small, well-integrated tech stack outperforms a large, disconnected toolkit, every time.

What Data Clarity Unlocks
Clarity does not just organize your data. It transforms how systems work with each other and how your business operates. CloudFronts-provided solutions, the real-world impact:
Benefits enabled across departments in your business:
💡 When systems talk, your teams stop chasing data. They gain clarity. And clarity means speed, control, and growth.

Governance That Accelerates
Governance is not red tape; it is acceleration. Here’s how effective governance translates into business impact. Proof in action includes:
💡 Trust in data is engineered, not assumed.
Outcomes That Matter to the Business
Clarity shows up in business outcomes, not just dashboards.
- Tinius Olsen – Migrating from TIBCO to Azure Logic Apps for seamless integration between D365 Field Service and Finance & Operations – CloudFronts
- BÜCHI’s customer-centric vision accelerates innovation using Azure Integration Services | Microsoft Customer Stories
💡 Once clarity is established, every new process, report, or integration lands faster, cleaner, and with more impact.
To conclude, getting AI-ready is not about chasing new models. It is about making your data consistent, connected, and governed so that systems quietly remove manual work while surfacing what matters. All of this leads to one truth: clarity is the foundation for AI readiness. This is where Dynamics 365, Power Platform, and Azure integrations shine, providing a Microsoft-first path from fragments to precision. If your business is ready to move from fragments to precision, let’s talk at transform@cloudfronts.com. CloudFronts will help you turn data chaos into clarity, and clarity into outcomes.


Auto Refresh Subgrid in Dynamics 365 CRM Based on Changes in Another Subgrid

In Dynamics 365 CRM implementations, subgrids are used extensively to show related records within the main form. But what if you want Subgrid B to automatically refresh whenever a new record is added to Subgrid A, especially when that record triggers automation such as a Power Automate flow or a plugin that creates or updates related data? In this blog, I’ll walk you through how to make one subgrid refresh when another subgrid is updated, a common real-world scenario that enhances user experience without needing a full form refresh.

Let’s say you have two subgrids on your form: Chargeable Categories (Subgrid A) and Order Line Categories (Subgrid B). Whenever a new record is added in the Chargeable Categories subgrid, a Power Automate flow or backend logic creates corresponding records in Order Line Categories. However, these new records are not immediately visible in the second subgrid unless the user manually refreshes the entire form or clicks the refresh icon. This can be confusing or frustrating for end users.

Solution Overview
To solve this, we’ll use JavaScript to listen for changes in Subgrid A and automatically refresh Subgrid B once something is added. Here’s the high-level approach:

Implementation Steps
1. Create the JavaScript Web Resource
Create a new JS web resource and add the refresh logic (a minimal sketch is shown below).

How It Works
To conclude, this simple yet effective approach ensures a smoother user experience by reflecting backend changes instantly without needing to manually refresh the entire form. It’s particularly helpful when automations or plugins create or update related records that must appear in real time. By combining JavaScript with Dynamics’ form controls, you can add polish and usability to your applications without heavy customization. I hope you found this blog useful, and if you would like to discuss anything, you can reach out to us at transform@cloudfronts.com.
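The original post’s web resource code is not included in this excerpt. Below is a minimal TypeScript sketch of one way to implement the pattern described above; the subgrid control names (ChargeableCategories, OrderLineCategories), the delay value, and the function name are assumptions you would adapt to your own form.

// Minimal sketch, not the original author's code. Register onFormLoad as the
// form's OnLoad handler in Dynamics 365 and enable "Pass execution context as first parameter".
const SOURCE_GRID = "ChargeableCategories"; // assumed name of Subgrid A's control
const TARGET_GRID = "OrderLineCategories";  // assumed name of Subgrid B's control
const REFRESH_DELAY_MS = 3000;              // rough allowance for the flow/plugin to finish

export function onFormLoad(executionContext: any): void {
  const formContext = executionContext.getFormContext();
  const sourceGrid = formContext.getControl(SOURCE_GRID);
  if (!sourceGrid) { return; }

  // addOnLoad fires every time the subgrid's data is loaded or refreshed,
  // which includes the refresh that follows adding a record to it.
  sourceGrid.addOnLoad(() => {
    setTimeout(() => {
      const targetGrid = formContext.getControl(TARGET_GRID);
      if (targetGrid) {
        // Pull in the related records created by the backend automation.
        targetGrid.refresh();
      }
    }, REFRESH_DELAY_MS);
  });
}

The delay is only a pragmatic buffer for asynchronous automation; if the related records are created synchronously (for example, by a real-time workflow or synchronous plugin), the setTimeout can be dropped.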


Add or Remove Sample Data in a Dynamics 365 CRM Environment

Posted On August 25, 2025 by Vidit Gholam

Let’s say you configured a Dynamics 365 Sales, Project Operations, or Field Service trial for a client demo. To save you the effort of creating sample data, Dynamics 365 gives you an option to add sample data to any environment. You can choose to install the sample data while creating the environment, but if you forgot to do so, here is how you can add it afterwards.
Step 1 – Go to https://admin.powerplatform.microsoft.com/environments, select your Dynamics 365 environment, and click on View details.
Step 2 – On the details page, click on Settings.
Step 3 – On the Settings page, under Data management, you will see an option named Sample data; click on it.
Step 4 – Click Install, and after a few minutes the sample data will be added to your Dynamics 365 environment.
Similarly, if sample data is already installed and you wish to remove it, you will see a Remove sample data button instead of Install sample data.
Hope this helps! 😊
I hope you found this blog useful, and if you would like to discuss anything, you can reach out to us at transform@cloudfronts.com.


Disqualify Single or Multiple Quotes in Dynamics 365 with One Click!

If you’ve ever needed to mark multiple quotes as lost in Dynamics 365, you know the default “Close Quote” dialog can be slow and repetitive. In my recent project, the sales team wanted a faster way, one click from the main Quotes view, to disqualify single or multiple quotes without opening each one or clicking through extra prompts. Using the out-of-the-box CloseQuote action, a small JavaScript function, and a custom Ribbon Workbench button, I built exactly that. Here’s how.

Why I Needed This
Our sales team often manages multiple quotes at once. The standard process meant:
It was time-consuming, especially for bulk operations. We needed a quick, grid-based action that would:

Step 1 – Using the OOB CloseQuote Action
Dynamics 365 provides a built-in CloseQuote action that changes the state and status of a quote to closed. Instead of creating a custom action, I decided to call this OOB action directly from JavaScript.

Step 2 – JavaScript to Call the OOB Action
Here’s the function I wrote to handle both single and multiple quotes (a hedged sketch of this pattern is shown below).

Step 3 – Adding the Ribbon Button in Ribbon Workbench
Now, the button will be available directly on the Quotes view.

Step 4 – Testing the Feature
This reduced what was previously a multi-click, per-record task into a single action for any number of quotes.

Why This Works Well
To conclude, by combining the OOB CloseQuote action with Ribbon Workbench, I could instantly disqualify quotes from the main grid, saving hours over the course of a month. If you’re looking to simplify repetitive processes in Dynamics 365, start by exploring what’s already available out of the box, then add just enough customization to make it your own.
🔗 Need help implementing a custom button or enhancing your Dynamics 365 sales process? At CloudFronts, we help businesses design and implement scalable, user-friendly solutions that streamline daily operations and improve adoption. Whether it’s customizing Ribbon Workbench, integrating OOB actions, or building tailored automation, our team can make it happen.
📩 Reach out to us at transform@cloudfronts.com and let’s discuss how we can optimize your Dynamics 365 environment.
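The original function is not shown in this excerpt. As a rough illustration of the pattern, here is a minimal TypeScript sketch that calls the CloseQuote Web API action through Xrm.WebApi for each selected quote. It assumes the ribbon command passes the selected record IDs via the SelectedControlSelectedItemIds CRM parameter, and the Lost status value (5) should be verified against your environment.

// Minimal sketch, not the author's original code. Wire the ribbon button's
// command to disqualifySelectedQuotes and pass SelectedControlSelectedItemIds.
declare const Xrm: any; // provided by the Dynamics 365 client runtime

const LOST_STATUS = 5; // assumed "Lost" status reason for quotes - verify in your org

function buildCloseQuoteRequest(quoteId: string) {
  return {
    QuoteClose: {
      subject: "Quote disqualified from grid",
      "quoteid@odata.bind": `/quotes(${quoteId.replace(/[{}]/g, "")})`,
    },
    Status: LOST_STATUS,
    getMetadata: () => ({
      boundParameter: null,       // CloseQuote is invoked as an unbound action
      operationType: 0,           // 0 = Action
      operationName: "CloseQuote",
      parameterTypes: {
        QuoteClose: { typeName: "mscrm.quoteclose", structuralProperty: 5 }, // 5 = EntityType
        Status: { typeName: "Edm.Int32", structuralProperty: 1 },            // 1 = PrimitiveType
      },
    }),
  };
}

export async function disqualifySelectedQuotes(selectedIds: string[]): Promise<void> {
  if (!selectedIds || selectedIds.length === 0) { return; }
  for (const id of selectedIds) {
    // Close each selected quote as Lost.
    await Xrm.WebApi.online.execute(buildCloseQuoteRequest(id));
  }
  await Xrm.Navigation.openAlertDialog({ text: `${selectedIds.length} quote(s) disqualified.` });
}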


Managing Post-Delivery Service and Repairs Using Cases in Dynamics 365 CRM

Why This Matters
Imagine you’ve just delivered an order, and now there’s a service issue or repair request from the customer. What’s the best way to track and resolve it? That’s where Cases come in. This blog walks you through how your company can use Cases in Dynamics 365 CRM to efficiently handle post-delivery service and repair requests, directly linked to the order fulfillment process for better visibility and control. Let’s break it down step by step.
Step 1: Navigate to Cases from an Order Fulfillment Record
Start by opening the Order Fulfillment record. Click on the “Related” dropdown and select “Cases” from the list. This takes you directly to all service cases related to that order.
Step 2: Create a New Case
Click on the “New Case” button in the Cases tab. A Quick Create: Case form appears. Here’s what you’ll see and fill in: Optional fields like Contact, Origin, Entitlement, and others can be filled in if needed. You can also include details such as First Response By, Resolve By, and Description, depending on your business requirements. Once done, hit Save and Close.
Step 3: View All Related Cases
After saving, you’ll see a list of all Cases associated with the order under the Case Associated View. Each entry includes key info like: This makes it easy to monitor all service activity related to an order at a glance.
Step 4: Manage Case Details
Click on any Case Title to open the full Case record. From here, you can:
Step 5: Monitor Service Performance
Navigate to Dashboards > Service and Repair to track ongoing Case performance. Here’s what you’ll see: This allows your company’s service team to monitor progress, manage workload, and identify recurring product or fulfillment issues.
To conclude, by following this process, your company ensures that every post-delivery service or repair request is captured, tracked, and resolved, while keeping everything connected to the original order. It’s simple, efficient, and fully integrated into Dynamics 365 CRM. Hope this helps!
I hope you found this blog useful, and if you would like to discuss anything, you can reach out to us at transform@cloudfronts.com.


Automating Lease Lifecycle & Financing with Dynamics 365 Finance & Operations – For Lessor

Global equipment lessors often manage thousands of active contracts across multiple regions. Add in layered financing structures, equity, debt, and third-party investors, and the complexity grows rapidly. Manual processes in this environment create risks in billing accuracy, funding visibility, and profitability tracking. With Microsoft Dynamics 365 Finance & Operations (F&O), integrated with Project Management, Subscription Billing, Dynamics 365 Sales Pro/CRM, Logic Apps, and Power BI, the platform automates the entire lease lifecycle while ensuring transparency and control.
Lease Lifecycle Automation
Subscription Billing Module: Lessors can:
This automation ensures every lease follows consistent accounting treatment and reduces manual workload for finance teams.
Multi-Layer Financing
Most lessors fund contracts through multiple sources. Dynamics 365 F&O allows you to:
This provides clarity not just for finance teams, but also for investors seeking insight into their returns.
Business Impact
To conclude, by automating lease setup and financing structures, lessors gain:
If you are a lessor and wish to digitize lease lifecycle management and layered financing, adopt the strategy explained above to scale systematically, reduce risks, and provide stakeholders with the visibility they expect. Let’s build the strategy together. You can reach out to us at transform@cloudfronts.com.


Replace OOB Business Closures with a Custom Web Page in Dynamics 365 CRM

Dynamics 365 CRM provides an out-of-the-box (OOB) Business Closures feature to define non-working days across your organization. However, in many real-world scenarios, the default interface is limiting in terms of customization, UX/UI control, and extensibility. In this blog post, I’ll walk you through how you can replace the OOB Business Closures with a custom HTML + CSS + JavaScript web page embedded in Dynamics 365. This custom page interacts with a custom Dataverse table that stores your closure dates and related details.
Why Replace the OOB Business Closures?
Some of the common limitations of OOB Business Closures:
What We’ll Build
A custom web resource embedded in a Dynamics 365 CRM dashboard or form tab that:
Step-by-Step Implementation
1. Create a Custom Table in Dataverse
2. Create the HTML Web Resource
3. Upload as Web Resource in CRM
(A minimal sketch of the web resource’s data-access code is shown below.)
Benefits of this Approach
- Fully customizable UI
- Supports additional metadata (reason, region, team)
- Extensible for APIs, workflows, Power Automate, etc.
- Better UX for users with real-time interactivity
To conclude, by using a simple HTML, CSS, and JavaScript front end, we can extend Dynamics 365 CRM to manage business closures in a much more flexible and modern way. It not only overcomes the limitations of the OOB feature but also gives us complete control over the user experience and data model. I hope you found this blog useful, and if you would like to discuss anything, you can reach out to us at transform@cloudfronts.com.
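The post’s full HTML and JavaScript are not included in this excerpt. As an illustration of step 2, here is a minimal TypeScript sketch of the data-access part of such a web resource; the table and column names (cf_businessclosure, cf_name, cf_startdate, cf_enddate) and the element ID are assumptions to replace with your own Dataverse schema names.

// Minimal sketch of the web resource's data layer - table and column names are assumptions.
interface BusinessClosure {
  cf_name: string;
  cf_startdate: string;
  cf_enddate: string;
}

async function loadClosures(): Promise<BusinessClosure[]> {
  // A web resource is served from the organization URL, so a relative
  // Dataverse Web API call runs under the signed-in user's context.
  const url =
    "/api/data/v9.2/cf_businessclosures" +
    "?$select=cf_name,cf_startdate,cf_enddate&$orderby=cf_startdate asc";

  const response = await fetch(url, {
    headers: {
      "Accept": "application/json",
      "OData-MaxVersion": "4.0",
      "OData-Version": "4.0",
    },
  });
  if (!response.ok) {
    throw new Error(`Failed to load closures: ${response.status}`);
  }
  const data = await response.json();
  return data.value as BusinessClosure[];
}

function renderClosures(closures: BusinessClosure[]): void {
  // Assumes the HTML page contains a table body with id="closure-rows".
  const tableBody = document.getElementById("closure-rows");
  if (!tableBody) { return; }
  tableBody.innerHTML = closures
    .map(c => `<tr><td>${c.cf_name}</td><td>${c.cf_startdate}</td><td>${c.cf_enddate}</td></tr>`)
    .join("");
}

loadClosures().then(renderClosures).catch(console.error);

Because the web resource is hosted on the organization’s own URL, no separate authentication is needed; the same pattern can be extended with create/update calls for an editable closures page.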


Monitoring Job Queues: Setting Up Failure Notifications within Business Central

A job queue lets users set up and manage background tasks that run automatically. These tasks can be scheduled to run on a recurring basis. For a long time, users had a common problem in Business Central: when a job queue failed, there was no alert or warning. You’d only notice something was wrong when a regular task didn’t run for a few days. Some people tried to work around this by setting up another job queue to watch and restart failed ones, but that didn’t always work, especially if an update happened at the same time. Now, Microsoft has finally added a built-in way to get alerts when a job queue fails. You can be notified either inside Business Central or by using Business Events. In this blog, we’ll look at how to set up notifications directly within Business Central.
Configuration
Search for “Assisted Setup” in Business Central’s global search. Scroll down to “Set up Job Queue notifications” and click on Next. Add the additional users who need to be notified when the job queue fails, along with the job creator (if required). Choose whether you want the notification to be in-product or via Business Events (and Power Automate); for now, I’m choosing in-product. Choose the frequency of the notification: you can have it either when a single job fails or after 3-5 jobs have failed (as per the settings). Click on Finish.
Now, for testing, I’ve run a job queue, and on the home page I get the following notification. Clicking on “Show more details” gives me a list of all the failed job queues.
To conclude, this new notification feature is a much-needed improvement and a good step forward. It helps users catch job queue failures quickly, without having to manually check every day. However, the setup and experience still feel a bit buggy and lacklustre. With some refinements, it could be a lot smoother and more user-friendly. If you need further assistance or have specific questions about your ERP setup, feel free to reach out for personalized guidance. I hope you found this blog useful, and if you would like to discuss anything, you can reach out to us at transform@cloudfronts.com.


Creating an MCP Server Using TypeScript

As artificial intelligence continues to transform how we build and interact with software, AI agents are emerging as the new interface for users. Instead of clicking through menus or filling out forms, users can simply instruct agents to fetch reports, analyze datasets, or trigger workflows. The challenge is: how do these agents access external tools, APIs, or enterprise systems in a secure and standardized way? This is where the Model Context Protocol (MCP) comes into play. MCP is a protocol designed to connect AI agents to tools in a structured, consistent manner. Instead of building ad-hoc integrations for each agent and each tool, developers can expose their tools once via MCP, making them discoverable and callable by any MCP-compliant AI agent.

In this article, we’ll explore:
a) What an MCP server is and how it works
b) How MCP uses JSON-RPC 2.0 as its communication layer
c) How MCP solves the M×N integration problem
d) How to implement a simple Weather Data MCP server in TypeScript
e) How to test it locally using Postman or cURL

What is an MCP Server?
An MCP server is an HTTP or WebSocket endpoint that follows the Model Context Protocol, allowing AI systems to query, interact with, and call tools hosted by developers. MCP consists of several components:
- Base Protocol – Core JSON-RPC message types
- Lifecycle Management – Connection initialization, capability negotiation, and session handling
- Server Features – Resources, prompts, and tools exposed by servers
- Client Features – Sampling and root directory lists provided by clients
- Utilities – Cross-cutting features such as logging or argument completion
All MCP implementations must support the Base Protocol and Lifecycle Management. Other features are optional depending on the use case.

Architecture: JSON-RPC 2.0 in MCP
MCP messages follow the JSON-RPC 2.0 specification, a stateless, lightweight remote procedure call protocol that uses JSON for request and response payloads.

Request format:

{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "methodName",
  "params": {
    "key": "value"
  }
}

id is required, must be a string or number, and must be unique within the session. method specifies the operation. params contains the method arguments.

Response format:

{
  "jsonrpc": "2.0",
  "id": 1,
  "result": {
    "key": "value"
  }
}

Or, if an error occurs:

{
  "jsonrpc": "2.0",
  "id": 1,
  "error": {
    "code": -32603,
    "message": "Internal error"
  }
}

The id must match the request it is responding to.

The M×N Problem and How MCP Solves It
Without MCP, connecting M AI agents to N tools requires M×N separate integrations. This is inefficient and unscalable. With MCP, each agent implements a single MCP client, and each tool implements a single MCP server. Agents and tools can then communicate through a shared protocol, reducing integration effort from M×N to M+N.

Project Setup
Create the project directory:

mkdir weather-mcp-sdk
cd weather-mcp-sdk
npm init -y

Install dependencies:

npm install @modelcontextprotocol/sdk zod axios express
npm install --save-dev typescript ts-node @types/node @types/express
npx tsc --init

Implementing the Weather MCP Server
We’ll use the WeatherAPI to fetch real-time weather data for a given city and expose it via MCP as a getWeather tool.
src/index.ts

import express from "express";
import axios from "axios";
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StreamableHTTPServerTransport } from "@modelcontextprotocol/sdk/server/streamableHttp.js";
import { z } from "zod";

const API_KEY = "YOUR_WEATHER_API_KEY"; // replace with your API key

function getServer() {
  const server = new McpServer({
    name: "Weather MCP Server",
    version: "1.0.0",
  });

  server.tool(
    "getWeather",
    { city: z.string() },
    async ({ city }) => {
      const res = await axios.get("http://api.weatherapi.com/v1/current.json", {
        params: { key: API_KEY, q: city, aqi: "no" },
      });
      const data = res.data;
      return {
        content: [
          {
            type: "text",
            text: `Weather in ${data.location.name}, ${data.location.country}: ${data.current.temp_c}°C, ${data.current.condition.text}`,
          },
        ],
      };
    }
  );

  return server;
}

const app = express();
app.use(express.json());

app.post("/mcp", async (req, res) => {
  try {
    const server = getServer();
    const transport = new StreamableHTTPServerTransport({});
    res.on("close", () => {
      transport.close();
      server.close();
    });
    await server.connect(transport);
    await transport.handleRequest(req, res, req.body);
  } catch (error) {
    if (!res.headersSent) {
      res.status(500).json({
        jsonrpc: "2.0",
        error: { code: -32603, message: "Internal server error" },
        id: null,
      });
    }
  }
});

const PORT = 3000;
app.listen(PORT, () => {
  console.log(`MCP Stateless HTTP Server running at http://localhost:${PORT}/mcp`);
});

Testing the MCP Server
Since MCP requires specific request formats and content negotiation, use the headers Content-Type: application/json and Accept: application/json, text/event-stream.

Step 1 — Initialize

curl -X POST http://localhost:3000/mcp \
  -H "Content-Type: application/json" \
  -H "Accept: application/json, text/event-stream" \
  -d '{
    "jsonrpc": "2.0",
    "id": 1,
    "method": "initialize",
    "params": {
      "protocolVersion": "2025-06-18",
      "capabilities": { "elicitation": {} },
      "clientInfo": { "name": "example-client", "version": "1.0.0" }
    }
  }'

Example response:

{
  "jsonrpc": "2.0",
  "id": 1,
  "result": {
    "protocolVersion": "2025-06-18",
    "capabilities": { "tools": { "listChanged": true } },
    "serverInfo": { "name": "Weather MCP Server", "version": "1.0.0" }
  }
}

Step 2 — Call the getWeather Tool

curl -X POST http://localhost:3000/mcp \
  -H "Content-Type: application/json" \
  -H "Accept: application/json, text/event-stream" \
  -d '{
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {
      "name": "getWeather",
      "arguments": { "city": "London" }
    }
  }'

Example response:

{
  "jsonrpc": "2.0",
  "id": 2,
  "result": {
    "content": [
      {
        "type": "text",
        "text": "Weather in London, United Kingdom: 21°C, Partly cloudy"
      }
    ]
  }
}

To conclude, we have built an MCP-compliant server in TypeScript that exposes a weather-fetching tool over HTTP. This simple implementation demonstrates how to define and register tools with MCP, how JSON-RPC 2.0 structures communication, and how to make your server compatible with any MCP-compliant AI agent. From here, you …

