
Category Archives: Blog

Struggling to Bulk Upload “Item Revaluation Entries”? Here’s What Could Be Going Wrong

One of our clients made some small mistakes while providing the data for their opening balances, which caused their item costs to be wrong. After a lot of back and forth, we finally received a list of 100+ items with the correct costs. We expected an easy fix using Edit in Excel, but instead we ran into an error: "Quantity must have a value in Item Journal Line."

This was odd, because when creating the entries manually we never set the Quantity at all. In fact, the Quantity field isn't even editable; it is populated when the "Applies-to Entry" field is updated. We tried a Configuration Package: same error. We tried an Excel import that uploads data into the journal: same error. So what's going on?

Details

After a bit of debugging, we traced the problem to the validation logic on the revaluation journal line. When you update the "Unit Cost (Revalued)" field in Business Central, it automatically updates the "Inventory Value (Revalued)" field. That part is simple. But the system then tries to update "Unit Cost (Revalued)" again based on the value you just changed, almost like it's going in circles. To avoid this, the system checks which field is currently being updated, and the update is only allowed if that field is not "Unit Cost (Revalued)".

When you make changes from the Business Central screen, the system knows which field you're changing through a property called CurrFieldNo. But when you use Edit in Excel, Configuration Packages, or AL code, this context is missing. That confuses the validation and can cause a division by zero, which surfaces as the error above. There is also a rule that checks the quantity on the "Applies-to Entry" field, and that check only runs when the "Value Entry Type" is not set to "Revaluation". This behavior was raised on GitHub as a bug back in 2018, but the issue was closed as intended system design.

In the end, we had to bypass the validation and assign the values directly to the fields, as sketched at the end of this post.

To conclude, what seemed like a simple task of updating revalued costs turned into a deep dive into Business Central's internal logic. The issue stemmed from how the system handles field updates differently depending on the entry method: the manual interface sets background values like CurrFieldNo so Business Central can track changes properly, while external methods like Edit in Excel or Configuration Packages don't provide that context.

If you need further assistance or have specific questions about your ERP setup, feel free to reach out for personalized guidance. We hope you found this blog useful, and if you would like to discuss anything, you can reach out to us at transform@cloudfronts.com.
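Here is a minimal AL sketch of that workaround, assuming a processing codeunit that receives the journal line and the corrected cost (the procedure name and surrounding setup are hypothetical; the field names are from the standard Item Journal Line table):

procedure AssignRevaluedCost(var ItemJnlLine: Record "Item Journal Line"; NewUnitCost: Decimal)
begin
    // Assign directly instead of calling Validate(): the OnValidate logic
    // assumes CurrFieldNo is set (a UI-only context) and fails without it.
    ItemJnlLine."Unit Cost (Revalued)" := NewUnitCost;
    ItemJnlLine."Inventory Value (Revalued)" := ItemJnlLine.Quantity * NewUnitCost;
    ItemJnlLine.Modify(true);
end;

The trade-off is that direct assignment skips all of the field's validation, so the calling code becomes responsible for keeping the revalued amounts consistent.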


Enhancing Workflow Observability with OpenTelemetry in Azure Logic Apps

Struggling to monitor your Logic App workflows end-to-end? Azure Logic Apps are a powerful tool for automating business workflows across services. But as these workflows grow in size and complexity, so do the challenges in tracking, debugging, and optimizing them. The built-in monitoring options, while helpful, often don't provide full visibility, leaving teams scrambling to understand failures, bottlenecks, or performance issues. Here's the good news: OpenTelemetry can change that. In this post, you'll learn how to gain complete observability into your Logic Apps workflows using OpenTelemetry, the industry-standard framework for telemetry data.

Why Observability Matters in Azure Logic Apps

Logic Apps connect multiple services: APIs, databases, emails, on-prem systems, and more. As you stitch these workflows together, it becomes harder to trace a single request across services, pinpoint where time is being spent, and tie failures back to their root cause. While Azure provides diagnostics via Monitor and Application Insights, they often produce fragmented data, and these tools lack native support for distributed tracing, which is essential when workflows span many components. That's where OpenTelemetry helps. With it, you can gather traces, metrics, and logs. Together, these three "pillars of observability" give you actionable insights into your Logic App's behavior.

What is OpenTelemetry?

OpenTelemetry is an open-source standard for collecting and exporting telemetry data. It supports multiple platforms (Azure, AWS, GCP) and can export data to tools like Application Insights, Jaeger, or Prometheus. It ensures a consistent observability strategy across your cloud-native systems, including Logic Apps.

How to Integrate OpenTelemetry with Azure Logic Apps

Azure Logic Apps don't yet support OpenTelemetry out of the box. But with a smart setup, you can still plug them into an OpenTelemetry pipeline. 🛠️ The setup, in brief: send the Logic App's diagnostic telemetry to Application Insights, and instrument any custom code the workflow calls (Azure Functions, APIs) with the OpenTelemetry SDK so every hop lands in the same trace; a sketch of that instrumentation follows this post.

Real Example: Order Processing with Observability

Imagine an order-processing workflow spanning several services. Without OpenTelemetry, a failed order means digging through disconnected run histories and guessing where things broke. With OpenTelemetry, a single distributed trace shows which step failed and why. This means faster resolution, less guesswork, and a better customer experience.

✅ Use correlation IDs across services
✅ Add custom dimensions to enrich telemetry
✅ Configure sampling to control trace volume
✅ Monitor latency thresholds for each Logic App step
✅ Log business-critical metadata (e.g., Order ID, region)

Start Small, See Big Results: observability is no longer optional. It's a must-have for teams building scalable, resilient workflows.
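To make the pattern concrete, here is a minimal Python sketch of the instrumentation you might add to a service called by the Logic App, using the Azure Monitor OpenTelemetry distro (the connection string, function name, and attribute names are placeholders, not part of the original post):

from azure.monitor.opentelemetry import configure_azure_monitor
from opentelemetry import trace

# Route OpenTelemetry data to the same Application Insights resource
# that receives the Logic App's diagnostic telemetry.
configure_azure_monitor(connection_string="InstrumentationKey=<your-key>")

tracer = trace.get_tracer("order-processing")

def process_order(order_id: str, region: str) -> None:
    # One span per unit of work; a Logic App run ID could be added
    # as another attribute to correlate the trace end to end.
    with tracer.start_as_current_span("process-order") as span:
        span.set_attribute("order.id", order_id)
        span.set_attribute("order.region", region)
        # ... business logic here ...

process_order("ORD-1001", "west-europe")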


The Hidden Power BI Feature That Will Transform Your Data Automation

Are you tired of manually writing complex DAX queries for your Power Automate flows? What if Power BI has been secretly recording every optimized query for you all along?

The Challenge Every Power BI Developer Faces

For growing businesses, automating data workflows becomes just as important as the dashboards and reports themselves. As organizations scale, the need to extract Power BI insights programmatically increases, making efficient query extraction essential to maintaining operational flow and development productivity. If you're considering streamlining your Power BI to Power Automate integration process, this article is for you. I'm confident it will help you master a Power BI technique that delivers impressive productivity gains.

What Exactly is Performance Analyzer?

Performance Analyzer is Power BI's built-in diagnostic tool that captures every single operation happening behind the scenes when you interact with your reports. Think of it as a detailed activity log that records not just what happened, but exactly how Power BI executed each query. Most developers use it for performance troubleshooting, but here's the secret: it's actually your gateway to extracting production-ready DAX queries for automation.

Step 1: Unleashing the Performance Analyzer

The Performance Analyzer isn't hidden in some obscure menu; it's right there in the Power BI Desktop ribbon, waiting to revolutionize your workflow. To activate it, open the View ribbon and select Performance Analyzer. Then start a capture session; think of this as putting Power BI under a microscope. Click Start Recording, interact with or refresh the visuals you care about, and every interaction will be recorded and analyzed.

Step 2: Extracting the Golden DAX Queries

When you expand any visual event in the Performance Analyzer, you'll see a breakdown of where time was spent, including the DAX query, the visual display, and other operations. Here's where it gets exciting: click "Copy query" next to the DAX Query section.

Real-World Example: Sales Dashboard Automation

Let's say you have a sales dashboard with a card showing total revenue. After recording and expanding the performance data, you can extract the exact query Power BI uses internally, optimized and ready for reuse; this is pure gold. A sketch of such a query, and of running it from automation, follows this post. The extracted DAX can be reused anywhere DAX is accepted, for example in Power Automate's "Run a query against a dataset" action or via the Power BI REST API.

To conclude, I encourage you to take a close look at your current Power BI automation processes. Identify one manual reporting task that you perform weekly, perhaps a sales summary, performance dashboard update, or data quality check. Start with this simple action today: open one of your existing Power BI reports, activate Performance Analyzer, and extract just one DAX query. Then build a basic Power Automate flow using that query. This single step will demonstrate the power of this technique and likely save you hours in your next automation project.

Need practical guidance on implementing this in your organization? Feel free to connect at transform@cloudfronts.com for specific solutions that can help you develop more effective Power BI automation workflows. Taking action now will lead to significant time savings and more robust automated reporting for your business.
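An extracted query for a total-revenue card typically boils down to a single EVALUATE statement. The Python sketch below embeds one such query (the table and column names in Sales[Revenue] are illustrative, not from the original post) and runs it against the Power BI REST API's ExecuteQueries endpoint; in Power Automate, the same DAX text goes into the "Run a query against a dataset" action. The dataset ID and access token are placeholders you must supply:

import requests

dataset_id = "<dataset-guid>"          # hypothetical dataset
token = "<azure-ad-access-token>"      # acquired via MSAL or a service principal

# The kind of query Performance Analyzer hands you for a card visual
dax = 'EVALUATE ROW("Total Revenue", CALCULATE(SUM(Sales[Revenue])))'

resp = requests.post(
    f"https://api.powerbi.com/v1.0/myorg/datasets/{dataset_id}/executeQueries",
    headers={"Authorization": f"Bearer {token}"},
    json={"queries": [{"query": dax}], "serializerSettings": {"includeNulls": True}},
)
resp.raise_for_status()
print(resp.json()["results"][0]["tables"][0]["rows"])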


From Clean Data to Insights: Integrating Azure Databricks with Power BI and MLflow

Cleaning data is only half the journey. The real value comes when that clean, reliable data powers dashboards for decision-makers and machine learning models for prediction. In this post, we'll explore two powerful integrations of Azure Databricks: Power BI for reporting and MLflow for machine learning.

Why These Integrations Matter

For growing businesses, dashboards and models are only as trustworthy as the pipeline feeding them. Together, these integrations create a bridge from cleaned data → insights → action.

Practical Example 1: Databricks + Power BI

Connect Power BI to your Databricks SQL warehouse using the built-in Azure Databricks connector, point it at your cleaned tables, and publish the report. 👉 Result: Executives can open Power BI and instantly see up-to-date sales performance across geographies.

Practical Example 2: Databricks + MLflow

MLflow, which is built into Databricks, tracks experiments, parameters, metrics, and models, and makes deployment repeatable; a minimal sketch follows this post. 👉 Result: Your business can predict customer trends, forecast sales, or identify churn risk directly from cleaned Databricks data.

To conclude, these integrations help organizations move from cleaned data → insights → intelligent action.

✅ Already cleaning data in Databricks? Try connecting your first Power BI dashboard today.
✅ Want to explore AI? Start logging experiments with MLflow to track and deploy models seamlessly.

We hope you found this blog useful, and if you would like to discuss anything, you can reach out to us at transform@cloudfronts.com.
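A minimal MLflow sketch of the experiment-tracking flow described above (the experiment path and model are illustrative; in Databricks the training data would come from a cleaned Delta table rather than the synthetic stand-in used here):

import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Stand-in data; in Databricks this might be
# spark.table("sales.cleaned_customers").toPandas()
X, y = make_classification(n_samples=1000, n_features=10, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

mlflow.set_experiment("/Shared/churn-prediction")  # hypothetical experiment path

with mlflow.start_run():
    model = RandomForestClassifier(n_estimators=100, random_state=42)
    model.fit(X_train, y_train)
    mlflow.log_param("n_estimators", 100)
    mlflow.log_metric("test_accuracy", model.score(X_test, y_test))
    # The logged model can later be registered and served via the Model Registry
    mlflow.sklearn.log_model(model, "model")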


Migrating from Dynamics GP to Business Central: A Leap Towards the Future

For years, Microsoft Dynamics GP has been a reliable ERP system, helping businesses streamline financial operations. But the world has changed. Markets move faster, customer expectations are higher, and technology is no longer just a support function; it's the backbone of growth. This is why the transition from Dynamics GP to Microsoft Dynamics 365 Business Central isn't just another upgrade. It's a strategic leap forward.

The Real Question: Maintain or Evolve?

In today's world, standing still is the same as moving backward. The choice is simple: maintain what works, or evolve to what's next.

What Businesses Gain with Business Central

Business Central brings cloud availability, continuous updates, and built-in integration with the wider Microsoft ecosystem, so the platform keeps evolving with the business instead of aging alongside it.

A Transformation Story

We're currently working with a mid-sized client who has been running Dynamics GP for nearly three decades. While GP had served them well, the leadership team realized that GP would be obsolete in just a few years, and continuing with it would only add more risk and cost. That's why they made a strategic decision: migrate to Business Central, ensuring they move to a platform built for the future. The migration is underway, and the client sees it as the foundation for their next decade of growth.

Why Now Is the Right Time

Postponing migration might feel safe, but it carries hidden risks: increasing IT costs, reliance on outdated processes, and missing out on innovations competitors are already leveraging. Business Central is more than an ERP: it's a platform for growth, intelligence, and resilience.

The Takeaway

Migrating from GP to Business Central is not just a technical move; it's a business transformation. With GP reaching its end of life in the coming years, now is the time to make the transition confidently and strategically.

Feel free to reach out. You can contact us at transform@cloudfronts.com. Let's work together to find the right step for your success.


A Unified Approach to Developing Finance and Operations Applications

Microsoft's Unified Developer Experience (UDE) helps developers build solutions that work across both Finance and Operations (F&O) and the Power Platform by providing a common, cloud-based environment.

Challenges Before UDE

Before UDE, developers often juggled separate environments and toolsets for F&O and the Power Platform, which made cross-platform solutions harder to build and keep in sync.

What UDE Changes

With UDE, Microsoft combines these tools into one environment, which brings practical benefits for both developers and organizations.

Check Access, Licenses, and Capacity

Before starting, make sure your user role, license, and environment capacity are all set up properly. You can check this in the Power Platform Admin Center.

Starting the Setup with PowerShell

To get started, open PowerShell ISE on your laptop. If you haven't installed the required Power Platform module yet, run this command (skip it if it's already installed):

# Install the module
Install-Module -Name Microsoft.PowerApps.Administration.PowerShell -Force

Next, sign in to your account and prepare the JSON template that defines your environment settings. Make sure DevToolsEnabled is set to true so developer tools are available. You can also set DemoDataEnabled to true if you want sample Contoso data included by default.

Write-Host "Creating a session against the Power Platform API"
Add-PowerAppsAccount -Endpoint prod

# Construct the JSON object to pass in
$jsonObject = @"
{
  "PostProvisioningPackages": [
    {
      "applicationUniqueName": "msdyn_FinanceAndOperationsProvisioningAppAnchor",
      "parameters": "DevToolsEnabled=true|DemoDataEnabled=true"
    }
  ]
}
"@ | ConvertFrom-Json

Finally, you're ready to start the environment deployment:

New-AdminPowerAppEnvironment -DisplayName "EnvironmentName" -EnvironmentSku Sandbox -Templates "D365_FinOps_Finance" -TemplateMetadata $jsonObject -LocationName "unitedstates" -ProvisionDatabase

Example:

New-AdminPowerAppEnvironment -DisplayName "Basic_Env" -EnvironmentSku Sandbox -LocationName "unitedstates" -Templates "D365_FinOps_Finance" -TemplateMetadata $jsonObject -ProvisionDatabase

Make sure to use a proper name for your environment (it must be 20 characters or fewer), and pick the correct data center location for your region (for example, I used unitedstates, but you could choose India or another available region).

Alternatively: Install on an Existing Environment

If you already have a Power Platform environment with a Dataverse database, you can use it to install Finance and Operations apps. Simply select the environment, navigate to Resources > Dynamics 365 apps, and then select Dynamics 365 Finance and Operations Provisioning App.

Once your environment is successfully provisioned, you'll see it listed in the Power Platform Admin Center, along with links to manage access and settings within the environment. This confirms your Finance + Power Platform environment is fully functional and integrated, ready for development, testing, and customization.

Make sure your user account has the System Administrator security role in Dataverse. Once assigned, this role automatically carries over to the Finance and Operations (F&O) environment; there is no need to reassign it separately. If you navigate to Dynamics 365 apps, you'll also find pre-configured and installed solutions. You can check the Modules, Packages, and Operation History by simply clicking on the Environment URL.
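Once provisioning completes, a quick way to confirm the environment exists is to query it with the same module used above (the display name matches the example environment created earlier):

# List the provisioned environment by its display name
Get-AdminPowerAppEnvironment | Where-Object { $_.DisplayName -eq "Basic_Env" }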
System Requirements for Setting Up the Development Environment

Before you begin working with the Unified Development Experience (UDE), make sure your machine meets the basic hardware and software requirements: a workstation powerful enough for smooth Visual Studio development, plus the software components UDE requires in Visual Studio (described below). Once everything is set up, open Visual Studio. Make sure to run it as Administrator and choose the "Continue without code" option when prompted; this ensures all tools load properly and you're ready to begin your development work.

Install the Power Platform VS Extension

Go to Visual Studio > Manage Extensions and search for "Power Platform Tools". Then navigate to Tools > Options > Power Platform Tools and enable the specified parameters. Now go to Tools > Connect to Dataverse and choose to always show the full list of organizations. Avoid signing in with your current Windows user if it's not the same account you've already connected to in Visual Studio. You can view the environments you previously created in PowerShell; just select the one you set up earlier. Choose the default option unless you're planning to create specific components for D365 CE or the Power Platform, in which case it's best to create a dedicated solution and publisher for your work.

If the X++ source code for your specific UnO DevBox version (e.g., 10.0.35) hasn't been downloaded yet, you'll be prompted to get it locally. After setting up the Power Platform Tools extension and connecting to your Dataverse sandbox, you'll see an option to install the Finance and Operations extension for Visual Studio, along with the related metadata. If the option doesn't appear, you can install it manually from "C:\Users\ShubhamPrajapati\AppData\Local\Microsoft\Dynamics365\10.0.2263.74".

Meanwhile, the PackageLocalDirectory is extracted in the background; you can monitor the progress via View > Output. The installation typically takes around 30 minutes. After installation, you'll see a few prompts the first time you open Visual Studio; just click "Yes" to continue. Once all models have downloaded successfully, you can switch between Classic View and Model View by right-clicking on the AOT. Then navigate to Tools > Options > Power Platform Tools and apply the required changes.

The final step is to configure the Finance & Operations extension. In my case, I use LocalDB for the Cross Reference (Cross Ref) database; it's convenient because it's already included when you install Visual Studio. If you're using LocalDB, ensure your connection string is correct; a typical value is (localdb)\MSSQLLocalDB. To set up LocalDB (if not already initialized), open Command Prompt and run:

sqllocaldb create MSSQLLocalDB -s

This command creates the LocalDB instance, and the -s switch starts it immediately. Once LocalDB is running, your Cross Reference database will be restored, enabling key development features such as code navigation and reference tracking ("Go to Definition", "Find All References"). These features significantly enhance the development experience.

If you receive errors when trying to open certain class files (which are XML files under the hood), it's likely because the Modeling SDK is not installed. This SDK is essential for working with … Continue reading A Unified Approach to Developing Finance and Operations Applications


Dealing with ISV Extension Updates in Business Central: A Practical Guide

As of August 2025, the number of third-party apps available for Dynamics 365 Business Central on Microsoft AppSource is estimated at between 4,000 and 6,500. Microsoft regularly publishes marketplace updates, with 200-300 new offers added monthly across all product lines, a significant portion of which are Business Central extensions. The Business Central ecosystem is clearly growing fast, which makes extension update management increasingly critical. The majority of clients we've worked with use third-party modules to enhance their business processes, so managing extension updates is a standing item on our environment health checklist.

References

Apps added to Microsoft AppSource
MS Learn | Automatically update AppSource apps with Business Central updates

Configuration

If you don't have access to the Business Central Admin Center, your only option is to uninstall the extension and then reinstall it from AppSource. If you do have access to the Admin Center and prefer to manage app updates manually, navigate to your environment and click on Apps. Here, you can see each installed app, check whether a newer version is available, and install updates.

For those looking to automate this process, Microsoft offers the "App Update Cadence" setting, which controls how and when apps are updated alongside Business Central. There are three available settings, described in the MS Learn article referenced above.

To conclude, managing third-party extension updates in D365 Business Central is essential to maintaining a stable and reliable environment. Whether updates are handled manually through the Admin Center or automated using the App Update Cadence feature, a clear process helps minimize disruptions. With the growing number of extensions in AppSource, proactively testing updates, monitoring changes, and coordinating with ISV partners keeps your Business Central environment healthy and future-ready.

If you need further assistance or have specific questions about your ERP setup, feel free to reach out for personalized guidance. We hope you found this blog useful, and if you would like to discuss anything, you can reach out to us at transform@cloudfronts.com.


From Raw to Reliable: Cleaning Data at Scale with Azure Databricks

Are you struggling with messy spreadsheets full of duplicates, missing values, and inconsistent records? You're not alone. Data professionals spend nearly 80% of their time cleaning and preparing data before any real analysis begins. The truth is simple: without clean data, business reports are unreliable, AI models fail, and decision-making slows down. In this blog, we'll show you how Azure Databricks makes data cleaning easier, faster, and scalable, turning raw inputs into reliable insights with just a few lines of code.

Why Clean Data Matters

For business leaders, whether you're a Team Lead, CTO, or CEO, clean data directly impacts growth. With Azure Databricks, you get a cloud-native, Spark-powered platform that handles big data at scale while integrating seamlessly with Azure Data Lake, Synapse, and Power BI.

Practical Example: Cleaning a Sales Dataset in Azure Databricks

Imagine you have a raw CSV file in Azure Data Lake with customer sales data. Typical issues in such a file: duplicate rows, missing customer names, and missing sales amounts. A few lines of PySpark in Databricks fix all three (see the sketch after this post). Output after cleaning:

CustomerID | Name    | Country | Sales
101        | Alice   | USA     | 500
102        | Bob     | USA     | 300
103        | Unknown | UK      | 450
104        | David   | India   | 0

With just a few lines of Spark code, the dataset is now ready for reporting, visualization, or machine learning.

To conclude, clean data is the foundation of every reliable business insight. With Azure Databricks, you can automate messy, manual processes and create repeatable, scalable pipelines that keep your data reliable, no matter how fast your business grows.

✅ Start small: try building a simple cleaning pipeline in Azure Databricks today.
✅ Save time: focus more on insights, less on manual data prep.
✅ Scale with confidence: as your data grows, Databricks grows with you.

👉 Want to take the next step? Explore how Databricks integrates with Power BI for real-time dashboards or with MLflow for machine learning pipelines. Stay tuned for our next post where we'll cover these use cases in detail.

✨ With Databricks, your journey from raw to reliable data starts today. Contact us today at Transform@cloudfronts.com to get started.

To learn more about the functionalities of Databricks and other Azure AI services, please refer to my other blogs via the links below:
1] The Hidden Cost of Bad Data: How Strong Data Management Unlocks Scalable, Accurate AI – CloudFronts
2] Automating Document Vectorization from SharePoint Using Azure Logic Apps and Azure AI Search – CloudFronts
3] Using Open AI and Logic Apps to develop a Copilot agent for Elevator Pitches & Lead Qualification – CloudFronts
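The cleaning step described above, as a minimal PySpark sketch (the storage path and column names are illustrative; spark is the session Databricks provides in every notebook):

from pyspark.sql import functions as F

# Read the raw CSV from Azure Data Lake (hypothetical container/path)
df = spark.read.option("header", True).csv(
    "abfss://raw@<storageaccount>.dfs.core.windows.net/sales/customer_sales.csv"
)

clean = (
    df.withColumn("Sales", F.col("Sales").cast("int"))  # enforce a numeric type
      .dropDuplicates(["CustomerID"])                   # remove duplicate customer rows
      .fillna({"Name": "Unknown", "Sales": 0})          # fill missing names and sales
)

# Persist as a Delta table (assumes a "sales" schema exists)
# so Power BI or MLflow can consume it downstream
clean.write.format("delta").mode("overwrite").saveAsTable("sales.cleaned_customers")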


Setting Up Unity Catalog in Databricks for Centralized Data Governance

The fastest way to lose control of enterprise data? Managing governance separately across workspaces. Unity Catalog solves this with one centralized layer for security, lineage, and discovery. Data governance is crucial for any organization looking to manage and secure its data assets effectively. Databricks' Unity Catalog is a centralized solution that provides a unified interface for managing access control, auditing, data lineage, and discovery. This blog will guide you through setting up Unity Catalog in your Databricks workspace.

What is Unity Catalog?

Unity Catalog is Databricks' answer to centralized data governance. It enables organizations to enforce standards-compliant security policies, apply fine-grained access controls, and visualize data lineage across multiple workspaces. It ensures compliance and promotes efficient data management.

Key Features:
1] Standards-Compliant Security: ANSI SQL-based access policies that apply across all workspaces in a region.
2] Fine-Grained Access Control: Support for row- and column-level permissions.
3] Audit Logging: Tracks who accessed what data and when.
4] Data Lineage: Provides visualization of data flow and dependencies.

Unity Catalog Object Hierarchy

Before diving into the setup, it's important to understand the hierarchical structure of Unity Catalog:
1] Catalogs: The top-level container (e.g., Production, Development) that represents an organizational unit or environment.
2] Schemas: Logical groupings of tables, views, and AI models within a catalog.
3] Tables and Views: These include managed tables fully governed by Unity Catalog and external tables referencing existing cloud storage.

Here is the procedure to set up a Unity Catalog metastore backed by Azure Storage, as I did for one of our products (SmartPitch Sales & Marketing Agent):

1] First, create a storage account with the primary service set to "Azure Blob Storage or Azure Data Lake Storage Gen 2". Performance and redundancy can be chosen based on the requirements of the Databricks workload; for my Mosaic AI agent, I used Locally Redundant Storage and Data Lake Gen 2.

2] Once the storage account is created, ensure that "Hierarchical Namespace" is enabled. When creating a Unity Catalog metastore with Azure Blob Storage, Hierarchical Namespace (HNS) is required because Unity Catalog needs:
a] A folder-like structure to organize catalogs, schemas, and tables.
b] Atomic operations (rename, move, delete) on directories and files.
c] POSIX-style access controls for fine-grained permissions.
d] Faster metadata handling for lineage and governance.
HNS turns Azure Blob into ADLS Gen2, which supports these features.

3] Upload any raw/unclean files you will need in Databricks to your metastore folder in the blob storage.

4] Create the Unity Catalog connector (the Access Connector for Azure Databricks resource) in the Azure Portal and assign it the "Storage Blob Data Contributor" role.

5] Assign CORS (Cross-Origin Resource Sharing) settings for that storage account. Why this is necessary, in short: without configuring CORS, Databricks cannot communicate with your storage container to read/write managed tables, schema metadata, or logs.

6] Generate a SAS token.

7] Navigate to your workspace and select "Manage Account" (this must be done by the account admin).

8] Select the Catalog tab on the left and click "Create Metastore".

9] Assign a name, a region (the same as the workspace), the path to the storage account, and the connector ID.
10] Once the metastore is created, assign it to a workspace.

11] After that, the catalogs, and the schemas and tables within them, can be created (see the sketch after this post).

How does Unity Catalog differ from Hive Metastore?

Feature | Hive Metastore | Unity Catalog
Scope | Workspace or cluster-specific | Centralized, spans multiple workspaces and regions
Architecture | Single metastore tied to Spark/Hive | Cloud-native service integrated with Databricks
Object Hierarchy | Databases → Tables → Partitions | Catalogs → Schemas → Tables/Views/Models
Data Assets Supported | Tables, views | Tables, views, files, ML models, dashboards
Security | Basic GRANT/DENY at database/table level | Fine-grained, ANSI SQL-based (catalog, schema, table, column, row)
Lineage | Not available | Built-in lineage and impact analysis
Auditing | Limited or external | Integrated audit logs across workspaces
Storage Management | Points to storage locations; no governance | Manages external and managed tables with governance
Cloud Integration | Primarily on cluster storage or external path | Secure integration with ADLS Gen2, S3, GCS
Permissions Model | Spark SQL statements | Attribute- and role-based access, unified policies
Use Cases | Basic metadata store for Spark/Hive workloads | Enterprise-wide data governance, sharing, and compliance

To conclude, Unity Catalog is the next-generation governance and metadata solution for Databricks, designed to give organizations a single, secure, and scalable way to manage data and AI assets. Unlike the older Hive Metastore, it centralizes control across multiple workspaces, supports fine-grained access policies, delivers built-in lineage and auditing, and integrates seamlessly with cloud storage like Azure Data Lake, S3, or GCS. When setting it up, the key steps are:
1] Creating a metastore and linking it to your workspaces.
2] Enabling hierarchical namespace on Azure storage for folder-level security and operations.
3] Configuring CORS to allow Databricks domains to interact with storage.
4] Defining catalogs, schemas, and tables for structured governance.

By implementing Unity Catalog, organizations gain stronger security, better compliance, and faster data discovery, making the Databricks environment enterprise-ready for analytics and AI.

Why now? As data volumes and regulatory requirements grow, organizations can no longer rely on fragmented or legacy governance tools. Unity Catalog offers a future-proof foundation for unified data management and AI governance, essential for any modern data-driven enterprise.

At CloudFronts, we help enterprises implement and optimize Unity Catalog within Databricks to ensure secure, compliant, and scalable data governance. Book a consultation with our experts to explore how Unity Catalog can simplify compliance and boost productivity for your teams. Contact us today at Transform@cloudfronts.com to get started.

To learn more about the functionalities of Databricks and other Azure AI services, please refer to my other blogs via the links below:
1] The Hidden Cost of Bad Data: How Strong Data Management Unlocks Scalable, Accurate AI – CloudFronts
2] Automating Document Vectorization from SharePoint Using Azure Logic Apps and Azure AI Search – CloudFronts
3] Using Open AI and Logic Apps to develop a Copilot agent for … Continue reading Setting Up Unity Catalog in Databricks for Centralized Data Governance
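Once the metastore is attached to a workspace, creating the governed hierarchy is plain SQL. A minimal sketch from a notebook (catalog, schema, table, and group names are illustrative; spark is the Databricks-provided session, and CREATE CATALOG requires the appropriate metastore privileges):

# Create the governed hierarchy: catalog -> schema -> table
spark.sql("CREATE CATALOG IF NOT EXISTS smartpitch")
spark.sql("CREATE SCHEMA IF NOT EXISTS smartpitch.sales")
spark.sql("""
    CREATE TABLE IF NOT EXISTS smartpitch.sales.leads (
        lead_id INT,
        company STRING,
        region STRING
    )
""")

# Fine-grained, ANSI SQL-based access control
spark.sql("GRANT SELECT ON TABLE smartpitch.sales.leads TO `data-analysts`")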


Connecting Your MCP Server to Microsoft Copilot Studio – Part 2

In Part 1, we built a simple MCP server in TypeScript that exposed a "getWeather" tool. Now, let's take the next step: connecting our MCP server to Microsoft Copilot Studio so that Copilot agents can call it directly. This part covers publishing the server, wiring it into Copilot Studio through a custom connector, and testing the integration end to end.

Step 1 — Publish Your MCP Server to Azure

To make your MCP server accessible to Copilot Studio, you'll need to host it online. There are multiple ways to deploy it: Azure App Service, Azure Container Apps, or even Azure Functions if you prefer serverless. For example, deploy it to Azure App Service, then test with curl (or any HTTP client) to ensure it responds with MCP-compatible JSON; a sketch of such a test follows this post.

Step 2 — Create a New Copilot in Copilot Studio

Step 3 — Add Knowledge Sources

Optionally, you can enrich your Copilot by adding knowledge sources such as websites or documents. This gives your Copilot baseline knowledge to answer broader questions, while the MCP server handles specific tasks (like fetching live weather data).

Step 4 — Create a Custom Connector in Dataverse

To let Copilot Studio talk to our MCP server, we need a custom connector inside Dataverse/CRM.

Step 5 — Add the Custom Connector to Copilot Studio

Once the connector is added, you'll see the MCP server in the Tools section of your Copilot. To test the setup, ask Copilot: "What's the current weather in Mumbai?" On the first attempt, Copilot will prompt you to establish a connection. Simply open the Connection Manager, click Connect, authorize the link to your MCP server, and then click Retry in the Test window of your Copilot. Once connected, Copilot will fetch the live weather details for Mumbai directly from your MCP server.

And just like that, your MCP server is live and fully integrated. It can now provide real-time weather updates for any city mentioned in your conversation with Copilot. You can try different variations of questions or phrasings; Copilot will intelligently interpret your request, extract the city name, and seamlessly call the MCP server to deliver accurate weather details.

Beyond Weather: Business Integrations

The same process works for enterprise systems. Instead of getWeather, you could expose tools for business data, for example looking up an order status or creating a support ticket. By publishing these tools via MCP, your Copilot becomes a true enterprise assistant, capable of pulling structured business data and triggering workflows on demand.

We hope you found this blog useful, and if you would like to discuss anything, you can reach out to us at transform@cloudfronts.com.
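A minimal smoke test of the deployed server, shown here with Python's requests (assumptions not from the original post: the App Service URL is a placeholder, the server exposes a streamable-HTTP MCP endpoint at /mcp, and it accepts a bare tools/list; many servers require a JSON-RPC initialize handshake first, which you would send the same way):

import requests

# JSON-RPC 2.0 request asking the server which tools it exposes
payload = {"jsonrpc": "2.0", "id": 1, "method": "tools/list"}

resp = requests.post(
    "https://<your-app>.azurewebsites.net/mcp",  # hypothetical deployment URL
    json=payload,
    # The streamable HTTP transport expects clients to accept both content types
    headers={"Accept": "application/json, text/event-stream"},
)
print(resp.status_code)
print(resp.text)  # expect a JSON-RPC response listing the "getWeather" tool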

