Latest Microsoft Dynamics 365 Blogs | CloudFronts - Page 13

How We Used Azure Blob Storage and Logic Apps to Centralize Dynamics 365 Integration Configurations

Managing multiple Dynamics 365 integrations across environments often becomes complex when each integration depends on static or hardcoded configuration values like API URLs, headers, secrets, or custom parameters. We faced similar challenges until we centralized our configuration strategy, using Azure Blob Storage to host the configs and Logic Apps to dynamically fetch and apply them during execution. In this blog, we'll walk through how we implemented this architecture and simplified config management across our D365 projects.

Why We Needed Centralized Config Management
In projects with multiple Logic Apps and D365 endpoints:
Key problems:

Solution Architecture Overview
Key Components:
Workflow:

Step-by-Step Implementation
Step 1: Store Config in Azure Blob Storage
Example JSON:
{
  "apiUrl": "https://externalapi.com/v1/",
  "apiKey": "xyz123abc",
  "timeout": 60
}
Step 2: Build Logic App to Read Config
Step 3: Parse and Use Config (a small sketch of this step appears at the end of this post)
Step 4: Apply to All Logic Apps

Benefits of This Approach

To conclude, centralizing D365 integration configs using Azure Blob and Logic Apps transformed our integration architecture. It made our systems easier to maintain, more scalable, and resilient to changes. Are you still hardcoding configs in your Logic Apps or Power Automate flows? Start organizing your integration configs in Azure Blob today, and build workflows that are smart, scalable, and maintainable. I hope you found this blog useful, and if you would like to discuss anything, you can reach out to us at transform@cloudfronts.com.
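To make the pattern above concrete, here is a minimal sketch of reading the centralized config at runtime and using its values when calling the external API. The storage account, container, blob name, and the "orders" path are purely illustrative assumptions; in the blog's actual architecture the Logic App's Azure Blob connector and Parse JSON action perform this lookup, and the Python version is only for illustration.

# Minimal sketch: fetch the shared config from Blob Storage and use its values
# instead of hardcoding them in each workflow. All names below are illustrative.
import json
import requests
from azure.identity import DefaultAzureCredential
from azure.storage.blob import BlobClient

# Download the centrally stored config file (account/container/blob are assumptions).
blob = BlobClient(
    account_url="https://integrationconfigs.blob.core.windows.net",
    container_name="d365-configs",
    blob_name="externalapi-config.json",
    credential=DefaultAzureCredential(),
)
config = json.loads(blob.download_blob().readall())

# Call the external API with the values from the config ("orders" is a made-up path).
response = requests.get(
    config["apiUrl"] + "orders",
    headers={"x-api-key": config["apiKey"]},
    timeout=config["timeout"],
)
print(response.status_code)

Because every workflow reads the same blob, changing an endpoint or key means updating one file rather than editing each Logic App individually.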


The Hidden Cost of Bad Data: How Strong Data Management Unlocks Scalable, Accurate AI

Key Takeaways
1. Bad data kills ML performance — duplicate, inconsistent, missing, or outdated data can break production models even if training accuracy is high.
2. The Databricks Medallion Architecture (Raw → Silver → Gold) is essential for turning raw chaos into structured, trustworthy datasets ready for ML and BI.
3. Raw layer: capture all data as-is; Silver layer: clean, normalize, and standardize; Gold layer: curate, enrich, and prepare for modeling or reporting.
4. Structured pipelines reduce compute cost, improve model reliability, and enable proactive monitoring and feature freshness.
5. Data quality is as important as algorithms — invest in transformation and governance for scalable, accurate AI.

While developing a Machine Learning Agent for Smart Pitch (a Sales/Presales/Marketing assistant chatbot) within the Databricks ecosystem, I've seen firsthand how unclean, inconsistent, and incomplete data can cripple even the most promising machine learning initiatives. While much attention is given to models and algorithms, what often goes unnoticed is the silent productivity killer lurking in your pipelines: bad data. In the chase to build intelligent applications, developers often overlook the foundational layer—data quality. Here's the truth: it is difficult to scale machine learning on top of chaos. That's where the Raw → Silver → Gold data transformation framework in Databricks becomes not just useful but essential.

The Hidden Cost of Bad Data
Imagine you've built a high-performance ML model. It's accurate during training, but underperforms in production. Why? Here's what I frequently detect when I scan input pipelines:
– Duplicate records
– Inconsistent data types
– Missing values
– Outdated information
– Schema drift (schema drift introduces malformed, inconsistent, or incomplete data that breaks validation rules and compromises downstream processes)
– Noise in the data
These issues inflate compute costs, introduce bias, and produce unstable predictions, resulting in wasted hours debugging pipelines, increased operational risks, and eroded trust in your AI outputs.
Ref.: Data Poisoning: A Silent but Deadly Threat to AI and ML Systems | by Anya Kondamani | nFactor Technologies | Medium
As you can see in this example, despite successfully fetching data, the agent was unable to render it because it hit the maximum request limit caused by bad data. AI agents face issues if they are brute-forced with raw, unclean data. But this isn't just a data science problem. It's a data engineering problem. And the solution lies in structured, governed data management, beginning with a robust medallion architecture.
Ref.: Data Intelligence End-to-End with Azure Databricks and Microsoft Fabric | Microsoft Community Hub

Raw → Silver → Gold: The Databricks Way
Databricks ML Agents thrive when your data is managed through the medallion architecture, transforming raw chaos into clean, trustworthy features. For those new to the medallion architecture in Databricks, it is a structured data processing framework that progressively improves data quality through bronze (raw), silver (validated), and gold (enriched) layers, enabling scalable and reliable analytics.

Raw Layer: Ingest Everything
The raw layer is where we land all data, regardless of quality. It's your unfiltered feed—logs, events, customer input, third-party APIs, CSV dumps, IoT signals, etc. As an example, consider CloudFronts case studies fetched directly from the WordPress API.
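A minimal, hypothetical sketch of that kind of raw-layer ingestion is shown below. The WordPress REST route, pagination handling, and selected columns are assumptions; the Unity Catalog table name follows the one used later in this post.

# Hypothetical sketch: land WordPress case-study content, essentially as-is, in the raw layer.
import requests
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Fetch case studies page by page from the (assumed) WordPress REST API route.
records, page = [], 1
while True:
    resp = requests.get(
        "https://www.cloudfronts.com/wp-json/wp/v2/case-studies",  # assumed route
        params={"per_page": 100, "page": page},
        timeout=30,
    )
    if resp.status_code != 200 or not resp.json():
        break
    records.extend(resp.json())
    page += 1

# Keep the fields essentially as returned; real cleaning happens in the silver step.
if records:
    raw_df = spark.createDataFrame(
        [
            (r["id"], r["link"], r.get("date"), r["title"]["rendered"], r["content"]["rendered"])
            for r in records
        ],
        ["id", "link", "date", "title", "content"],
    )
    raw_df.write.format("delta").mode("append").saveAsTable("knowledge_base.raw.case_studies")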
Our ingestion script automates the extraction, transformation, and optional upload of WordPress-based case study content into Azure Blob Storage, as well as locally to Databricks DBFS, making it ready for further processing (e.g., vector databases, AI ingestion, or analytics).

What We've Done
On running the Raw to Silver cleaning code in a Databricks notebook, we get much better formatted, relevant, and cleaned fields specific to our requirement. In Databricks Unity Catalog → Schema → Tables, it looks something like this:

Silver Layer: Structure and Clean
Once ingested, we promote data to the silver layer, where transformation begins. Think of this layer as the data refinery. What happens here: we start to analyze trends, infer schemas, and prepare for more active feature generation. Once the noise is removed, patterns begin to emerge. This transformation script is part of a data pipeline that converts unstructured or semi-clean data from the raw Delta table, knowledge_base.raw.case_studies (effectively silver, since much of the cleaning was already done in the previous step), into a gold Delta table, knowledge_base.gold.case_studies, using Apache Spark. The transformation focuses on HTML cleaning, type parsing, and schema standardization, enabling downstream ML or BI workflows.

What have we done here?
Start Spark Session: initializes a Spark job using SparkSession, enabling distributed data processing within the Databricks environment.
Define UDFs (user-defined functions).
Read Data from Silver Table: loads structured data from the Delta table knowledge_base.raw.case_studies, which acts as the silver layer in the medallion architecture.
Clean & Normalize Key Columns.
Write to Gold Table: saves the final transformed DataFrame to the gold layer as the Delta table knowledge_base.gold.case_studies, using overwrite mode and allowing schema updates (a simplified sketch of this step follows at the end of this excerpt).
As you can see, we have maintained separate schemas for Raw, Silver, and Gold, and at gold we finally have fully cleaned, noise-free data suited to our requirement.

Gold Layer: Ready for ML & BI
At the gold layer, data becomes a polished product, curated for specific use cases—ML models, dashboards, reports, or APIs. This is the layer where scale and accuracy become feasible, because the model finally has clean, enriched, semantically meaningful data to learn from.
Ref.: What is a Medallion Architecture?
We can make retrieval even more efficient with vectorization. Once we reach this stage and test again in the Agent Playground, we no longer face the error we saw previously, because it is now much easier for the agent to retrieve gold-standard vectorized data.

Why This Matters for AI at Scale
The difference between "good enough" and "state of the art" often hinges on data readiness. Here's how strong data management impacts real-world outcomes:
– Data drift goes undetected (without medallion architecture) → proactive schema monitoring (with Raw → Silver → Gold)
– Models degrade silently → continuous feature freshness
– Expensive debugging cycles → clean lineage via Delta Lake
– Inconsistent outputs → predictable, testable results
(Delta Lake is an open-source storage layer that brings ACID transactions, scalable metadata handling, and unified batch processing to data lakes, enabling reliable analytics on massive datasets.)
… Continue reading The Hidden Cost of Bad Data: How Strong Data Management Unlocks Scalable, Accurate AI
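Returning to the silver-to-gold transformation walked through above, a simplified sketch of that step might look like the following. The column names, cleaning rules, and date parsing are illustrative assumptions; the table names and the overwrite/schema-update options come from the post.

# Simplified sketch of the silver-to-gold step described above.
import re
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, to_date, udf
from pyspark.sql.types import StringType

spark = SparkSession.builder.getOrCreate()

# UDF: strip residual HTML tags and collapse whitespace left over from WordPress.
@udf(StringType())
def strip_html(value):
    if value is None:
        return None
    text = re.sub(r"<[^>]+>", " ", value)
    return re.sub(r"\s+", " ", text).strip()

silver_df = spark.table("knowledge_base.raw.case_studies")

gold_df = (
    silver_df
    .withColumn("title", strip_html(col("title")))
    .withColumn("content", strip_html(col("content")))
    .withColumn("published_on", to_date(col("date")))  # assumed source column
    .dropDuplicates(["id"])
    .dropna(subset=["title", "content"])
)

# Overwrite the gold table and allow the schema to evolve, as described in the post.
(
    gold_df.write.format("delta")
    .mode("overwrite")
    .option("overwriteSchema", "true")
    .saveAsTable("knowledge_base.gold.case_studies")
)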


Essential Integration Patterns for Dynamics 365 Using Azure Logic Apps

If you've worked on Dynamics 365 CRM projects, you know integration isn't optional—it's essential. Whether you're connecting CRM with a legacy ERP, a cloud-based marketing tool, or a SharePoint document library, the way you architect your integrations can make or break performance and maintainability. Azure Logic Apps makes this easier with its low-code interface, but using the right pattern matters. In this post, I'll walk through seven integration patterns I've seen in real projects, explain where they work best, and share some lessons from the field. Whether you're building real-time syncs, scheduled data pulls, or hybrid workflows using Azure Functions, these patterns will help you design cleaner, smarter solutions.

A Common Real-World Scenario
Let's say you're asked to sync Project Tasks from Dynamics 365 to an external project management system. The sync needs to be quick, reliable, and avoid sending duplicate data. You might wonder:
Without a clear integration pattern, you might end up with brittle flows that break silently or overload your system.

Key Integration Patterns (With Real Use Cases)

1. Request-Response Pattern
What it is: A Logic App that waits for a request (usually via HTTP), processes it, and sends back a response.
Use Case: You're building a web or mobile app that pulls data from CRM in real time—like showing a customer's recent orders.
How it works: Why use it: Key Considerations:

2. Fire-and-Forget Pattern
What it is: CRM pushes data to a Logic App when something happens. The Logic App does the work—but no one waits for confirmation.
Use Case: When a case is closed in CRM, you archive the data to SQL or notify another system via email.
How it works:
Why use it: Keeps users moving—no delays. Great for logging, alerts, or downstream updates.
Key Considerations: Silent failures—make sure you're logging errors or using retries.

3. Scheduled Sync (Polling)
What it is: A Logic App that runs on a fixed schedule and pulls new/updated records using filters.
Use Case: Every 30 minutes, sync new Opportunities from CRM to SAP.
How it works: Why use it: Key Considerations:

4. Event-Driven Pattern (Webhooks)
What it is: CRM triggers a webhook (HTTP call) when something happens. A Logic App or Azure Function listens and acts.
Use Case: When a Project Task is updated, push that data to another system like MS Project or Jira.
How it works: Why use it: Key Considerations:

5. Queue-Based Pattern
What it is: Messages are pushed to a queue (like Azure Service Bus), and Logic Apps process them asynchronously.
Use Case: CRM pushes lead data to a queue, and Logic Apps handle the messages one by one to update different downstream systems (email marketing, analytics, etc.).
How it works: Why use it: Key Considerations:

6. Blob-Driven Pattern (File-Based Integration)
What it is: A Logic App watches a Blob container or SFTP location for new files (CSV, Excel), parses them, and updates CRM.
Use Case: An external system sends daily contact updates via CSV to a storage account. A Logic App reads the file and applies the updates to CRM.
How it works: Why use it: Key Considerations:

7. Hybrid Pattern (Logic Apps + Azure Functions)
What it is: The Logic App does the orchestration, while an Azure Function handles complex logic that's hard to do with built-in connectors.
Use Case: You need to calculate dynamic pricing or apply business rules before pushing data to ERP (see the sketch below).
How it works: Why use it: Key Considerations:
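As a sketch of the hybrid pattern, here is a hypothetical HTTP-triggered Azure Function (Python v2 programming model). The route name, payload fields, and pricing rules are all made-up assumptions; the point is simply that the Logic App orchestrates while the function handles logic that is awkward to express in connectors.

# Hypothetical pricing function for the hybrid pattern: the Logic App posts a
# record here, the function applies the business rules and returns the price.
import json
import azure.functions as func

app = func.FunctionApp()

@app.route(route="calculate-price", auth_level=func.AuthLevel.FUNCTION)
def calculate_price(req: func.HttpRequest) -> func.HttpResponse:
    item = req.get_json()

    # Example business rule (illustrative): volume discount plus a regional uplift.
    base = float(item.get("listPrice", 0))
    qty = int(item.get("quantity", 1))
    discount = 0.10 if qty >= 100 else 0.05 if qty >= 20 else 0.0
    uplift = 1.08 if item.get("region") == "EU" else 1.0
    final_price = round(base * qty * (1 - discount) * uplift, 2)

    # The Logic App reads this response and pushes the result on to the ERP.
    return func.HttpResponse(
        json.dumps({"finalPrice": final_price, "discountApplied": discount}),
        mimetype="application/json",
        status_code=200,
    )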
Implementation Tips & Best Practices
– Security: use managed identity, OAuth, and Key Vault for secrets.
– Error Handling: use "Scope" + "Run After" for retries and graceful failure responses.
– Idempotency: track processed IDs or timestamps to avoid duplicate processing.
– Logging: push important logs to Application Insights or a centralized SQL log.
– Scaling: prefer event/queue-based patterns for large volumes.
– Monitoring: use Logic App run history + Azure Monitor + alerts for proactive detection.

Tools & Technologies Used

Common Architectures
You'll often see combinations of these patterns in real-world systems. For example:

To conclude, integration isn't just about wiring up connectors; it's about designing flows that are reliable, scalable, and easy to maintain. These seven patterns are ones I've personally used (and reused!) across projects. Pick the right one for your scenario, and you'll save yourself and your team countless hours in debugging and rework. I hope you found this blog useful, and if you would like to discuss anything, you can reach out to us at transform@cloudfronts.com.


Boost Job Automation in Business Central with Maximum Number of Attempts

Automation in Microsoft Dynamics 365 Business Central can drastically improve efficiency, especially when scheduled jobs run without manual intervention. But occasionally jobs fail due to temporary issues like network glitches or locked records. Setting the maximum number of attempts ensures the system retries automatically, reducing failures and improving reliability.

Steps to Optimize Job Automation
1. Navigate to Job Queue Entries: go to Search (Tell Me), type Job Queue Entries, and open the page.
2. Select or Create Your Job: either choose an existing job queue entry or create a new one for your automated task.
3. Set Maximum No. of Attempts: in the Maximum No. of Attempts field, enter the number of retries you want (e.g., 3–5 attempts). This tells Business Central to retry automatically if the job fails.
4. Schedule the Job: define the Earliest Start Date/Time and Recurring Frequency if applicable.
5. Enable the Job Queue: make sure the job queue is on and your job is set to Ready.
6. Monitor and Adjust: review Job Queue Log Entries regularly.

Pro Tips
– Keep your jobs small and modular for faster retries.
– Avoid setting attempts too high; it may overload the system.
– Combine retries with proper error handling in the code or process.

To conclude, by leveraging the Maximum No. of Attempts feature in Business Central, you can make your job automation more resilient, reduce downtime, and ensure business processes run smoothly without constant supervision. I hope you found this blog useful, and if you would like to discuss anything, you can reach out to us at transform@cloudfronts.com.


QA Made Easy with KQL in Azure Application Insights

In today's world of modern DevOps and continuous delivery, the ability to analyze application behavior quickly and efficiently is key to Quality Assurance (QA). Azure Application Insights offers powerful telemetry collection, but what makes it truly shine is the Kusto Query Language (KQL)—a rich, expressive query language that enables deep-dive analytics into your application's performance, usage, and errors. Whether you're testing a web app, monitoring API failures, or validating load test results, KQL can become your best QA companion.

What is KQL?
KQL stands for Kusto Query Language, and it's used to query telemetry data collected by Azure Monitor, Application Insights, and Log Analytics. It's designed to read like English, with SQL-style expressions, yet it is much more powerful for telemetry analysis.

Challenges Faced with Application Insights in QA
1. Telemetry data doesn't always show up immediately after execution, causing delays in debugging and test validation.
2. When testing involves thousands of records, isolating failed requests or exceptions becomes tedious and time-consuming.
3. The default portal experience lacks intuitive filters for QA-specific needs like test case IDs, custom payloads, or user roles.
4. Repeated logs from expected failures (e.g., negative test cases) can clutter insights, making it hard to focus on actual issues.
5. Out-of-the-box telemetry doesn't group actions by test scenario or user session unless explicitly configured, making traceability difficult during test case validation.

To overcome these limitations, QA teams need more than just default dashboards—they need flexibility, precision, and speed in analyzing telemetry. This is where Kusto Query Language (KQL) becomes invaluable. With KQL, testers can write custom queries to filter, group, and visualize telemetry exactly the way they need, allowing them to focus on real issues, validate test scenarios, and make data-driven decisions faster and more efficiently.

Let's take some examples for better understanding. Some common scenarios where KQL proves to be very effective (one of them is sketched at the end of this post):
– Check if the latest deployment introduced new exceptions
– Find all failed requests
– Analyse performance of a specific page or operation
– Correlate requests with exceptions
– Validate custom event tracking (like button clicks)
– Track specific user sessions for end-to-end QA testing
– Test API performance under load
All of this can be visualized too – you can pin your KQL queries to Azure Dashboards or even Power BI for real-time tracking during QA sprints.

To conclude, KQL is not just for developers or DevOps. QA engineers can significantly reduce manual log-hunting and accelerate issue detection by writing powerful queries in Application Insights. By incorporating KQL into your testing lifecycle, you add an analytical edge to your QA process—making quality not just a gate but a continuous insight loop. Start with a few basic queries, and soon you'll be building powerful dashboards that QA, Dev, and Product can all share! Hope this helps! I hope you found this blog useful, and if you would like to discuss anything, you can reach out to us at transform@cloudfronts.com.
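As an illustration of running such checks outside the portal, here is a minimal sketch, assuming a workspace-based Application Insights resource and a placeholder workspace ID, that executes a "find all failed requests" query with the azure-monitor-query SDK.

# Minimal sketch: run a "find all failed requests" check against workspace-based
# Application Insights telemetry. The workspace ID is a placeholder.
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

client = LogsQueryClient(DefaultAzureCredential())

# KQL: failed requests in the last day, grouped by operation name and result code.
query = """
AppRequests
| where Success == false
| summarize failures = count() by Name, ResultCode
| order by failures desc
"""

response = client.query_workspace(
    workspace_id="<log-analytics-workspace-id>",  # placeholder
    query=query,
    timespan=timedelta(days=1),
)

for table in response.tables:
    for row in table.rows:
        print(row)

The same KQL can be pasted directly into the Logs blade or pinned to a dashboard, as mentioned above; the SDK route is handy when you want the check to run as part of an automated test suite.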


Simplifying Access Management in Dynamics 365 Business Central Through Security Groups

A Security Group is a way to group users together so that you can give access to all of them at once. For example, if everyone in the Finance team needs access to certain files or apps, you can add them to a group and give the group permission instead of doing it for each person. In Office 365, Security Groups are managed through Azure Active Directory, which handles sign-ins and user identities in Microsoft 365. They help IT teams save time, stay organized, and keep company data safe.

The same Security Groups you create in Azure Active Directory (AAD) can also be used in Dynamics 365 Business Central to manage user permissions. Instead of giving access to each user one by one in Business Central, you can connect a Security Group to a set of permissions. Then, anyone added to that group in Azure AD will automatically get the same permissions in Business Central. They're also helpful when you want to control environment-level access, especially if your company uses different environments for testing and production. For example, only specific groups of users can be allowed into the production system.

Security Groups aren't just useful in Business Central; they can be used across many Microsoft 365 services. You can use them in tools like Power BI, Power Automate, and other Office 365 apps to manage who has access to certain reports, flows, or data. In Microsoft Entra (formerly Azure AD), these groups can be used in Conditional Access policies. This means you can set rules like "only users in this group can log in from trusted devices" or "users in this group must use multi-factor authentication."

References
Compare types of groups in Microsoft 365 – Microsoft 365 admin | Microsoft Learn
What is Conditional Access in Microsoft Entra ID? – Microsoft Entra ID | Microsoft Learn
Simplify Conditional Access policy deployment with templates – Microsoft Entra ID | Microsoft Learn

Usage
Go to Home – Microsoft 365 admin center.
Go to "Teams & Groups" > "Active Teams & Groups" > "Security Groups".
Click on "Add a security group" to create a new group.
Add a name and description for the group, click Next, and finish the process.
Once the group is created, you can re-open it and click on the "Members" tab to add members.
Click on "View all and manage members" > "Add Members".
Select all the relevant users and click on Add.
Now, back in Business Central, search for Security Groups. Open it and click on New.
Click on the drill-down. You'll see all the available security groups here; select the relevant one and click OK. Mail groups are not considered in this list.
You can change the Code it uses in Business Central if required. Once done, click on "Create".
Select the new Security Group and click on Permissions. Assign the relevant permissions.
Now, any user added to this Security Group in Office 365 will have the D365 Banking permission set assigned to them. Further, these groups will also be visible in the Admin Center, from where you can define whether a particular group has access to a particular environment.

To conclude, Security Groups are a powerful way to manage user access across Microsoft 365 and Dynamics 365 Business Central. They save time, reduce manual effort, and help ensure that the right people have access to the right data and tools. By using Security Groups, IT teams can stay organized, manage permissions more consistently, and improve overall security.
Whether you’re working with Business Central, Power BI, or setting up Conditional Access in Microsoft Entra, Security Groups provide a flexible and scalable solution for modern access management. If you need further assistance or have specific questions about your ERP setup, feel free to reach out for personalized guidance. I hope you found this blog useful, and if you would like to discuss anything, you can reach out to us at transform@cloudfronts.com.
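As a complement to the admin-center steps above, the same kind of security group can also be created programmatically through the Microsoft Graph API. The sketch below is hypothetical: the tenant and app registration details, the group name, and the mail nickname are placeholders, and the app registration would need the Group.ReadWrite.All application permission.

# Hypothetical sketch: create a security group via the Microsoft Graph REST API.
import requests
from azure.identity import ClientSecretCredential

credential = ClientSecretCredential(
    tenant_id="<tenant-id>",
    client_id="<app-registration-id>",
    client_secret="<client-secret>",
)
token = credential.get_token("https://graph.microsoft.com/.default").token

group = {
    "displayName": "BC Finance Users",   # illustrative group name
    "mailEnabled": False,                # security group, not a mail-enabled group
    "mailNickname": "bcfinanceusers",
    "securityEnabled": True,
}

resp = requests.post(
    "https://graph.microsoft.com/v1.0/groups",
    headers={"Authorization": f"Bearer {token}"},
    json=group,
    timeout=30,
)
resp.raise_for_status()
print("Created group:", resp.json()["id"])

Once the group exists (however it was created), the Business Central steps above for linking it to a permission set are the same.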


Is Your Tech Stack Holding You Back from AI Success?

The AI Race Has Begun, but Most Businesses Are Crawling
Artificial Intelligence (AI) is no longer experimental; it's operational. Across industries, companies are trying to harness it to improve decision-making, automate intelligently, and gain a competitive edge. But here's the problem: only 48% of AI projects ever make it to production (Gartner, 2024). It's not because AI doesn't work. It's because most tech stacks aren't built to support it.

The Real Bottleneck Isn't AI. It's Your Foundation
You may have data. You may even have AI tools. But if your infrastructure isn't AI-ready, you'll stay stuck in POCs that never scale. Common signs you're blocked:
AI success starts beneath the surface, in your data pipelines, infrastructure, and architecture. Most machine learning systems fail not because of poor models, but because of broken data and infrastructure pipelines.

What Does an AI-Ready Tech Stack Look Like?
Being AI-ready means preparing your infrastructure, data, and processes to fully support AI capabilities. This is not a checklist or quick fix. It is a structured alignment of technology and business goals. A truly AI-ready stack can:

Infrastructure
Traditional stack: on-premises servers, outdated VMs.
AI-ready stack: Azure Kubernetes Service (AKS), Azure Functions, Azure App Services; then AWS EKS, Lambda; GCP GKE, Cloud Run.
Why it matters: AI workloads need scalable, flexible compute with container orchestration and event-driven execution.

Data Handling
Traditional stack: siloed databases, batch ETL jobs.
AI-ready stack: Azure Data Factory, Power Platform connectors, Azure Event Grid, Synapse Link; then AWS Glue, Kinesis; GCP Dataflow, Pub/Sub.
Why it matters: enables real-time, consistent, and automated data flow for training and inference.

Storage & Retrieval
Traditional stack: relational DBs, Excel, file shares.
AI-ready stack: Azure Data Lake Gen2, Azure Cosmos DB, Microsoft Fabric OneLake, Azure AI Search (with vector search); then AWS S3, DynamoDB, OpenSearch; GCP BigQuery, Firestore.
Why it matters: modern AI needs scalable object storage and vector DBs for unstructured and semantic data.

AI Enablement
Traditional stack: isolated scripts, manual ML.
AI-ready stack: Azure OpenAI Service, Azure Machine Learning, Copilot Studio, Power Platform AI Builder; then AWS SageMaker, Bedrock; GCP Vertex AI, AutoML; OpenAI, Hugging Face.
Why it matters: simplifies AI adoption with ready-to-use models, tools, and MLOps pipelines.

Security & Governance
Traditional stack: basic firewall rules, no audit logs.
AI-ready stack: Microsoft Entra (Azure AD), Microsoft Purview, Microsoft Defender for Cloud, Compliance Manager, Dataverse RBAC; then AWS IAM, Macie; GCP Cloud IAM, DLP API.
Why it matters: ensures responsible AI use, regulatory compliance, and data protection.

Monitoring & Ops
Traditional stack: manual monitoring, limited observability.
AI-ready stack: Azure Monitor, Application Insights, Power Platform Admin Center, Purview Audit Logs; then AWS CloudWatch, X-Ray; GCP Ops Suite; Datadog, Prometheus.
Why it matters: AI success depends on observability across infrastructure, pipelines, and models.

In Summary: AI-readiness is not a buzzword. Not a checklist. It's an architectural reality.

Why This Matters Now
AI is moving fast, and so are your competitors. But success doesn't depend on building your own LLM or becoming a data science lab. It depends on whether your systems are ready to support intelligence at scale. If your tech stack can't deliver real-time data, run scalable AI, and ensure trust, your AI ambitions will stay just that: ambitions.

How We Help
We work with organizations across industries to:
Whether you're just starting or scaling AI across teams, we help build the architecture that enables action.
Because AI success isn’t about plugging in a tool. It’s about building a foundation where intelligence thrives. I hope you found this blog useful, and if you would like to discuss anything, you can reach out to us at transform@cloudfronts.com.


Common Mistakes to Avoid When Integrating Dynamics 365 with Azure Logic Apps

Integrating Microsoft Dynamics 365 (D365) with external systems using Azure Logic Apps is a powerful and flexible approach—but it's also prone to missteps if not planned and implemented correctly. In our experience working with D365 integrations across multiple projects, we've seen recurring mistakes that affect performance, maintainability, and security. In this blog, we'll outline the most common mistakes and provide actionable recommendations to help you avoid them.

Core Content
1. Not Using the Dynamics 365 Connector Properly
The Mistake: Why It's Bad: Best Practice:
2. Hardcoding Environment URLs and Credentials
The Mistake: Why It's Bad: Best Practice:
3. Ignoring D365 API Throttling and Limits (a retry sketch appears at the end of this post)
The Mistake: Why It's Bad: Best Practice:
4. Not Handling Errors Gracefully
The Mistake: Why It's Bad: Best Practice:
5. Forgetting to Secure the HTTP Trigger
The Mistake: Why It's Bad: Best Practice:
6. Overcomplicating the Workflow
The Mistake: Why It's Bad: Best Practice:
7. Not Testing in Isolated or Sandbox Environments
The Mistake: Why It's Bad: Best Practice:

To conclude, integrating Dynamics 365 with Azure Logic Apps is a powerful solution, but it requires careful planning to avoid common pitfalls. From securing endpoints and using config files to handling throttling and organizing modular workflows, the right practices save you hours of debugging and rework. Are you planning a new D365 + Azure Logic App integration? Review your architecture against these seven pitfalls. Even one small improvement today could save hours of firefighting tomorrow. I hope you found this blog useful, and if you would like to discuss anything, you can reach out to us at transform@cloudfronts.com.
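As an illustration of the best practice for pitfall 3, here is a minimal sketch, with a placeholder environment URL and access token, that honors the Retry-After header Dataverse returns when service-protection limits are hit, instead of failing outright or hammering the API.

# Illustrative sketch for pitfall 3: back off when Dataverse throttles with HTTP 429.
import time
import requests

def fetch_with_retry(url, headers, max_retries=5):
    """GET a Dataverse Web API URL, backing off when the service throttles us."""
    for attempt in range(max_retries):
        response = requests.get(url, headers=headers, timeout=60)
        if response.status_code != 429:
            response.raise_for_status()
            return response.json()
        # Service protection limit hit: wait as long as the platform asks us to.
        wait_seconds = int(response.headers.get("Retry-After", 2 ** attempt))
        time.sleep(wait_seconds)
    raise RuntimeError("Dataverse still throttling after retries")

# Example call (placeholders for the org URL and bearer token).
data = fetch_with_retry(
    "https://<org>.crm.dynamics.com/api/data/v9.2/accounts?$top=50",
    headers={"Authorization": "Bearer <access-token>", "Accept": "application/json"},
)
print(len(data["value"]), "accounts fetched")

In a Logic App the equivalent behavior is configured through the action's retry policy rather than code; the sketch simply makes the idea explicit.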


Why Project-Based Firms Should Embrace AI Now (Not Later)

In project-based businesses, reporting is the final word. It tells you what was planned, what happened, where you made money, and where you lost it. But ask any project manager or CEO what they really think about project reporting today, and you'll hear this: "It's late. It's manual. It's siloed. And by the time I see it, it's too late to act." This is exactly why AI is no longer optional; it's essential. Whether you're in construction, consulting, IT services, or professional engineering, AI can elevate your project reporting from a reactive chore to a strategic asset. Here's how.

The Problem with Traditional Reporting
Most reporting today involves:

Enter AI: The Game-Changer for Project Reporting
AI isn't about replacing humans; it's about augmenting your decision-making. When embedded in platforms like Dynamics 365 Project Operations and Power BI, AI becomes the project manager's smartest analyst and the CEO's most trusted advisor. Here's what that looks like:

Imagine your system telling you: "Project Alpha is likely to overrun budget by 12% based on current burn rate and resource allocation trends." AI models analyse historical patterns, resource velocity, and task progress to predict issues weeks in advance. That's no longer science fiction—it's happening today with AI-enhanced Power BI and Copilot in Dynamics 365.

Instead of navigating dashboards, just ask: "Show me projects likely to miss deadlines this month." With Copilot in Dynamics 365, you get answers in seconds with charts and supporting data. No need to wait for your analyst or export 10 spreadsheets.

AI can clean, match, and validate data coming from:
No more mismatched formats or chasing someone to update a spreadsheet. AI ensures your reports are built on clean, real-time data, not assumptions.

You don't need to check 12 dashboards daily. With AI, set intelligent alerts:
These alerts are not static rules but learned over time based on project patterns and exceptions.

To conclude, for CEOs and PMs alike: We can show you how AI and Copilot in Dynamics 365 can simplify reporting, uncover risks, and help your team act with confidence. Start small, maybe with reporting or forecasting, but start now. I hope you found this blog useful, and if you would like to discuss anything, you can reach out to us at transform@cloudfronts.com.


Building a Scalable Integration Architecture for Dynamics 365 Using Logic Apps and Azure Functions

If you've worked with Dynamics 365 CRM for any serious integration project, you've probably used Azure Logic Apps. They're great — visual, no-code, and fast to deploy. But as your integration needs grow, you quickly hit complexity: multiple entities, large volumes, branching logic, error handling, and reusability. That's when architecture becomes critical. In this blog, I'll share how we built a modular, scalable, and reusable integration architecture using Logic Apps + Azure Functions + Azure Blob Storage — with a config-driven approach. Whether you're syncing data between D365 and Finance & Operations or automating CRM workflows with external APIs, this post will help you avoid bottlenecks and stay maintainable.

Architecture Components
– Parent Logic App: entry point; reads the config from blob and iterates entities.
– Child Logic App(s): handles each entity sync (Project, Task, Team, etc.).
– Azure Blob Storage: hosts configuration files, Liquid templates, and checkpoint data.
– Azure Function: performs advanced transformation via Liquid templates.
– CRM & F&O APIs: source and target systems.

Step-by-Step Breakdown
1. Configuration-Driven Logic
We didn't hardcode URLs, fields, or entities. Everything lives in a central config.json in Blob Storage:
{
  "integrationName": "ProjectToFNO",
  "sourceEntity": "msdyn_project",
  "targetEntity": "ProjectsV2",
  "liquidTemplate": "projectToFno.liquid",
  "primaryKey": "msdyn_projectid"
}

2. Parent–Child Logic App Model
Instead of one massive workflow, we created a parent Logic App that:
Each child handles:

3. Azure Function for Transformation
Why not use Logic App's Compose or Data Operations? Because complex mapping (especially D365 → F&O) quickly becomes unreadable. Instead:
{
  "ProjectName": "{{ msdyn_subject }}",
  "Customer": "{{ customerid.name }}"
}

4. Handling Checkpoints
For batch integration (daily/hourly), we store the last run timestamp in Blob (a small sketch of this pattern appears at the end of this post):
{
  "entity": "msdyn_project",
  "modifiedon": "2025-07-28T22:00:00Z"
}
This allows delta fetches like:
?$filter=modifiedon gt 2025-07-28T22:00:00Z
After each run, we update the checkpoint blob.

5. Centralized Logging & Alerts
We configured:
This helped us track down integration mismatches fast.

Why This Architecture Works
– Reusability: config-based logic + modular templates.
– Maintainability: each Logic App has one job.
– Scalability: add new entities via config, not code.
– Monitoring: Blob + Azure Monitor integration.
– Transformation complexity: handled via Azure Functions + Liquid.

Key Takeaways
To conclude, this architecture has helped us deliver scalable Dynamics 365 integrations, including syncing Projects, Tasks, Teams, and Time Entries to F&O, all without rewriting Logic Apps every time a client asks for a tweak. If you're working on medium to complex D365 integrations, consider going config-driven and breaking your workflows into modular components. It keeps things clean, reusable, and much easier to maintain in the long run. I hope you found this blog useful, and if you would like to discuss anything, you can reach out to us at transform@cloudfronts.com.
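Here is the promised sketch of the checkpoint handling from step 4. The storage account, container, blob name, entity set, and CRM URL are placeholders; in the real architecture the parent Logic App performs these steps with the Blob and HTTP connectors, so the Python below is only illustrative.

# Hedged sketch of the checkpoint pattern: read the last run timestamp from a blob,
# build a delta $filter, then advance the checkpoint after a successful run.
import json
from datetime import datetime, timezone

import requests
from azure.identity import DefaultAzureCredential
from azure.storage.blob import BlobClient

checkpoint_blob = BlobClient(
    account_url="https://integrationstate.blob.core.windows.net",  # placeholder
    container_name="checkpoints",
    blob_name="msdyn_project.json",
    credential=DefaultAzureCredential(),
)

# 1. Read the previous checkpoint.
checkpoint = json.loads(checkpoint_blob.download_blob().readall())
last_run = checkpoint["modifiedon"]

# 2. Delta fetch: only records modified since the last run.
rows = requests.get(
    "https://<org>.crm.dynamics.com/api/data/v9.2/msdyn_projects",  # placeholder
    params={"$filter": f"modifiedon gt {last_run}"},
    headers={"Authorization": "Bearer <access-token>"},
    timeout=60,
).json()["value"]
print(f"{len(rows)} changed projects to sync")

# 3. After a successful sync, advance the checkpoint.
checkpoint["modifiedon"] = datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ")
checkpoint_blob.upload_blob(json.dumps(checkpoint), overwrite=True)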

