Latest Microsoft Dynamics 365 Blogs | CloudFronts

What Are Databricks Clusters? A Simple Guide for Beginners

A Databricks cluster is a group of virtual machines (VMs) in the cloud that work together to process data using Apache Spark. It provides the memory, CPU, and compute power required to run your code efficiently. Clusters are used for running notebooks, scheduled jobs, and ad-hoc queries, and each cluster has two main parts: a driver node and worker nodes (executors).

Types of Clusters

Databricks supports multiple cluster types, depending on how you want to work:

- Interactive (All-Purpose) Clusters: used for notebooks, ad-hoc queries, and development. Multiple users can attach their notebooks.
- Job Clusters: created automatically for scheduled jobs or production pipelines, and deleted after job completion.
- Single Node Clusters: used for small data exploration or lightweight development. No executors, only one driver node.

How Databricks Clusters Work

When you execute a notebook cell, Databricks sends your code to the cluster. The cluster's driver node divides your task into smaller jobs and distributes them to the executors. The executors process the data in parallel and send the results back to the driver. This distributed processing is what makes Databricks fast and scalable for handling massive datasets.

Step-by-Step: Creating Your First Cluster

Let's create a cluster in your Databricks workspace.

Step 1: Navigate to Compute. In the Databricks sidebar, click Compute. You'll see a list of existing clusters or an option to create a new one.
Step 2: Create a New Cluster. Click Create Compute in the top-right corner.
Step 3: Configure Basic Settings. Give the cluster a name and choose a Databricks Runtime version.
Step 4: Select Node Type. Choose the VM type based on your workload. For development, Standard_DS3_v2 or Standard_D4ds_v5 are cost-effective.
Step 5: Auto-Termination. Set the cluster to terminate after 10 or 20 minutes of inactivity. This prevents unnecessary cost when the cluster is idle.
Step 6: Review and Create. Click Create Compute. After a few minutes, your cluster will turn green, indicating it is ready to run code.

(A programmatic alternative using the Clusters REST API is sketched after the troubleshooting section below.)

Clusters in Unity Catalog-Enabled Workspaces

If Unity Catalog is enabled in your workspace, there are a few additional configurations to note:

- Access Mode: a standard workspace defaults to Single User; a Unity Catalog workspace must choose Shared, Single User, or No Isolation Shared.
- Data Access: managed by workspace permissions in a standard workspace; controlled through Catalog, Schema, and Table permissions with Unity Catalog.
- Data Hierarchy: Database → Table in a standard workspace; Catalog → Schema → Table with Unity Catalog.
- Example Query: SELECT * FROM sales.customers; in a standard workspace becomes SELECT * FROM main.sales.customers; with Unity Catalog.

When you create a cluster with Unity Catalog, you will see a new Access Mode field in the configuration page. Choose "Shared" if multiple users need to access governed data under Unity Catalog.

Managing Cluster Performance and Cost

Clusters can become expensive if not managed properly. Follow these tips to optimize performance and cost:

a. Use auto-termination to shut down idle clusters automatically.
b. Choose the right VM size for your workload. Avoid oversizing.
c. Use Job Clusters for production pipelines since they start and stop automatically.
d. Leverage autoscaling so Databricks can adjust the number of workers dynamically.
e. Monitor with Ganglia metrics to identify performance bottlenecks.

Common Cluster Issues and Fixes

- Cluster stuck starting: usually a VM quota or region issue. Change the VM size or region.
- Slow performance: too few workers or data skew. Increase the worker count or repartition the data.
- Access denied to data: missing storage credentials. Use Databricks Secrets or Unity Catalog permissions.
- High cost: idle clusters left running. Enable auto-termination.
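The steps above use the workspace UI. The same cluster can also be created programmatically; the sketch below calls the Databricks Clusters REST API from Python, assuming a personal access token in an environment variable. The workspace URL, runtime version, and node type are placeholders to replace with values from your own workspace, and the payload mirrors the settings from Steps 3 to 5 (name, node type, auto-termination).

    import os
    import requests

    WORKSPACE_URL = "https://adb-1234567890123456.7.azuredatabricks.net"  # assumed workspace URL
    TOKEN = os.environ["DATABRICKS_TOKEN"]  # personal access token, assumed env var

    payload = {
        "cluster_name": "dev-small",
        "spark_version": "14.3.x-scala2.12",  # pick a runtime listed in your workspace
        "node_type_id": "Standard_DS3_v2",    # cost-effective dev node from the post
        "num_workers": 2,
        "autotermination_minutes": 20,        # Step 5: shut down after 20 idle minutes
    }

    resp = requests.post(
        f"{WORKSPACE_URL}/api/2.0/clusters/create",
        headers={"Authorization": f"Bearer {TOKEN}"},
        json=payload,
        timeout=30,
    )
    resp.raise_for_status()
    print("Created cluster:", resp.json()["cluster_id"])

The autotermination_minutes field is what keeps an idle development cluster from quietly accumulating cost.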
Best Practices for Using Databricks Clusters

1. Always attach your notebook to the correct cluster before running it.
2. Use separate clusters for development, staging, and production.
3. Keep the cluster runtime version consistent across environments.
4. Terminate unused clusters to reduce cost.
5. If you use Unity Catalog, prefer Shared clusters for collaboration.

To conclude, clusters are the heart of Databricks. They provide the compute power needed to process large-scale data efficiently; without them, Databricks notebooks and jobs cannot run. Once you understand how clusters work, you will find it easier to manage costs, optimize performance, and build reliable data pipelines.

We hope you found this blog useful, and if you would like to discuss anything, you can reach out to us at transform@cloudfronts.com.


Designing Event-Driven Integrations Between Dynamics 365 and Azure Services

When integrating Dynamics 365 (D365) with other systems, most teams traditionally rely on scheduled or API-driven integrations. While effective for simple use cases, these approaches often introduce delays, unnecessary API calls, and scalability issues. That's where event-driven architecture comes in. By designing integrations that react to business events in real time, organizations can build faster, more scalable, and more reliable systems. In this blog, we'll explore how to design event-driven integrations between D365 and Azure services, and walk through the key building blocks that make it possible.

Core Content

1. What is Event-Driven Architecture (EDA)?
Example in D365: instead of running a scheduled job every hour to check for new accounts, an event is raised whenever a new account is created, and downstream systems are notified immediately.

2. How Events Work in Dynamics 365
Dynamics 365 doesn't publish events directly, but it provides mechanisms to capture them. By connecting these with Azure services, we can push events to the cloud in near real time.

3. Azure Services for Event-Driven D365 Integrations
Once D365 emits an event, Azure provides services such as Event Grid, Service Bus, Functions, and Logic Apps to process and route it. (A minimal consumer sketch appears at the end of this post.)

4. Designing an Event-Driven Integration Pattern
Here's a recommended architecture:
Example Flow:

5. Best Practices for Event-Driven D365 Integrations

6. Common Pitfalls to Avoid

To conclude, moving from batch-driven to event-driven integrations with Dynamics 365 unlocks real-time responsiveness, scalability, and efficiency. With Azure services like Event Grid, Service Bus, Functions, and Logic Apps, you can design integrations that are robust, cost-efficient, and future-proof. If you're still relying on scheduled D365 integrations, start experimenting with event-driven patterns. Even small wins (like real-time customer syncs) can drastically improve system responsiveness and business agility.

We hope you found this blog useful, and if you would like to discuss anything, you can reach out to us at transform@cloudfronts.com
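To make the consuming side concrete, here is a minimal sketch (not this post's own implementation) of a Python worker that reads D365 "account created" events from an Azure Service Bus queue. The queue name, connection-string variable, and payload fields are assumptions for illustration; in practice the events could be pushed to the queue by a D365 webhook, plugin, or Logic App.

    import json
    import os

    from azure.servicebus import ServiceBusClient

    CONNECTION_STRING = os.environ["SERVICE_BUS_CONNECTION_STRING"]  # assumed env var
    QUEUE_NAME = "d365-account-events"  # assumed queue name

    def process_account_event(payload: dict) -> None:
        # Replace with the real downstream action (sync to ERP, notify a system, etc.).
        print(f"New account: {payload.get('name')} ({payload.get('accountid')})")

    with ServiceBusClient.from_connection_string(CONNECTION_STRING) as client:
        with client.get_queue_receiver(queue_name=QUEUE_NAME) as receiver:
            for message in receiver.receive_messages(max_message_count=10, max_wait_time=5):
                payload = json.loads(str(message))  # assumed JSON body with name/accountid
                process_account_event(payload)
                # Complete the message only after successful processing so a failure
                # leaves it on the queue for redelivery.
                receiver.complete_message(message)

Completing the message only after successful processing is what gives the integration at-least-once reliability; failed messages are redelivered or eventually dead-lettered.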


Databricks vs Azure Data Factory: When to Use Which in ETL Pipelines

Introduction: Two Powerful Tools, One Common Question

If you work in data engineering, you've probably faced this question: should I use Azure Data Factory or Databricks for my ETL pipeline? Both tools can move and transform data, but they serve very different purposes. Understanding where each tool fits can help you design cleaner, faster, and more cost-effective data pipelines. Let's explore how these two Azure services complement each other rather than compete.

What Is Azure Data Factory (ADF)?

Azure Data Factory is a data orchestration service. It's designed to move, schedule, and automate data workflows between systems. Think of ADF as the "conductor of your data orchestra": it doesn't play the instruments itself, but it ensures everything runs in sync.

Key Capabilities of ADF:
Best For:

What Is Azure Databricks?

Azure Databricks is a data processing and analytics platform built on Apache Spark. It's designed for complex transformations, data modeling, and machine learning on large-scale data. Think of Databricks as the "engine" that processes and transforms the data your ADF pipelines deliver.

Key Capabilities of Databricks:
Best For:

ADF vs Databricks: A Detailed Comparison

- Primary purpose: ADF handles orchestration and data movement; Databricks handles data processing and advanced transformations.
- Core engine: ADF runs on the Integration Runtime; Databricks runs on Apache Spark.
- Interface type: ADF is low-code (GUI-based); Databricks is code-based (Python, SQL, Scala).
- Performance: ADF is limited by the Data Flow engine; Databricks uses distributed, scalable Spark clusters.
- Transformations: ADF supports basic mapping and joins; Databricks supports complex joins, ML models, and aggregations.
- Data handling: ADF is batch-based; Databricks handles both batch and streaming.
- Cost model: ADF charges per pipeline run and Data Flow activity; Databricks charges per cluster usage (compute time).
- Versioning and debugging: ADF offers visual monitoring and alerts; Databricks offers notebook history and logging.
- Integration: ADF is best for orchestrating multiple systems; Databricks is best for building scalable ETL within pipelines.

In simple terms, ADF moves the data, while Databricks transforms it deeply.

When to Use ADF

Use Azure Data Factory when:
Example: copying data daily from Salesforce and SQL Server into Azure Data Lake.

When to Use Databricks

Use Databricks when:
Example: transforming millions of sales records into curated Delta tables with customer segmentation logic. (A short transformation sketch appears at the end of this post.)

When to Use Both Together

In most enterprise data platforms, ADF and Databricks work together.
Typical Flow:
This hybrid approach combines the automation of ADF with the computing power of Databricks.

Example architecture: ADF → Databricks → Delta Lake → Synapse → Power BI. This is a standard enterprise pattern for modern data engineering.

Cost Considerations

Using ADF for orchestration and Databricks for processing ensures you only pay for what you need.

Best Practices

Azure Data Factory and Azure Databricks are not competitors. They are complementary tools that together form a complete ETL solution. Understanding their strengths helps you design data pipelines that are reliable, scalable, and cost-efficient.

We hope you found this blog useful, and if you would like to discuss anything, you can reach out to us at transform@cloudfronts.com
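To illustrate the "ADF moves, Databricks transforms" split, here is a hedged sketch of the Databricks half of that flow: reading raw files that an ADF copy activity landed in the lake and curating them into a Delta table. The storage path, column names, and table name are illustrative assumptions, and spark is the session a Databricks notebook provides.

    # Sketch of the Databricks side of an ADF -> Databricks flow: read raw files that
    # an ADF copy activity landed in the lake, transform them, and write a curated
    # Delta table. Paths, columns, and the target table name are assumptions.

    from pyspark.sql import functions as F

    raw_path = "abfss://raw@mydatalake.dfs.core.windows.net/sales/"  # assumed landing path

    sales = spark.read.format("parquet").load(raw_path)

    curated = (
        sales
        .dropDuplicates(["order_id"])                       # basic cleansing
        .withColumn("order_date", F.to_date("order_date"))  # normalize types
        .groupBy("customer_id", "order_date")
        .agg(F.sum("amount").alias("daily_amount"))         # heavy aggregation suits Spark
    )

    # Write the curated layer as a Delta table (assumes the 'curated' schema exists)
    # so Synapse or Power BI can query it downstream.
    curated.write.format("delta").mode("overwrite").saveAsTable("curated.daily_sales")

ADF would call this notebook (or a job that wraps it) as a pipeline activity, keeping orchestration in ADF and heavy transformation in Spark.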


Handling Errors and Retries in Dynamics 365 Logic App Integrations

Integrating Dynamics 365 (D365) with external systems using Azure Logic Apps is one of the most common patterns for automation. But in real-world projects, things rarely go smoothly: API throttling, network timeouts, and unexpected data issues are everyday challenges. Without proper error handling and retry strategies, these issues can result in data mismatches, missed transactions, or broken integrations. In this blog, we'll explore how to handle errors and implement retries in D365 Logic App integrations, ensuring your workflows are reliable, resilient, and production-ready.

Core Content

1. Why Error Handling Matters in D365 Integrations
Without handling these failures, your Logic App either fails silently or stops execution entirely, causing broken processes.

2. Built-in Retry Policies in Logic Apps
What they are: every Logic App action comes with a retry policy that can be configured to automatically retry failed requests. (A conceptual retry sketch appears at the end of this post.)
Best Practice:

3. Handling Errors with Scopes and "Run After"
Scopes in Logic Apps let you group actions and then define what happens if they succeed or fail.
Steps:
Example:

4. Designing Retry + Error Flow Together
Recommended Pattern:
This ensures no transaction is silently lost.

5. Handling Dead-lettering with Service Bus (Advanced)
For high-volume integrations, you may need a dead-letter queue (DLQ) approach. This pattern prevents data loss while keeping integrations lightweight.

6. Monitoring & Observability
Error handling isn't complete without monitoring.

Building resilient integrations between D365 and Logic Apps isn't just about connecting APIs; it's about ensuring reliability even when things go wrong. By configuring retry policies, using scopes for error handling, and adopting dead-lettering for advanced cases, you'll drastically reduce downtime and data mismatches. Next time you design a D365 Logic App, don't just think about the happy path. Build error handling and retry strategies from the start, and you'll thank yourself later when your integration survives the unexpected.

We hope you found this blog useful, and if you would like to discuss anything, you can reach out to us at transform@cloudfronts.com
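Logic Apps configure their retry policy declaratively in an action's settings, so no code is needed there. Purely to make the retry-with-backoff idea concrete, below is a hedged Python sketch of the same behaviour for custom code calling the Dataverse Web API directly; the environment URL and token are placeholders, and retrying on 429/5xx mirrors the throttling and transient-fault cases discussed above.

    import time
    import requests

    ORG_URL = "https://yourorg.crm.dynamics.com"  # assumed environment URL
    TOKEN = "<access-token>"                      # acquire via Azure AD in real code

    def get_with_retry(url: str, max_attempts: int = 4) -> requests.Response:
        delay = 2  # seconds; doubles after each retried attempt
        for attempt in range(1, max_attempts + 1):
            resp = requests.get(url, headers={"Authorization": f"Bearer {TOKEN}"}, timeout=30)
            if resp.status_code not in (429, 500, 502, 503, 504):
                return resp  # success or a non-retryable error: stop here
            if attempt == max_attempts:
                resp.raise_for_status()  # out of attempts: surface the failure
            # Honour Retry-After when D365 throttling supplies it, else back off exponentially.
            time.sleep(int(resp.headers.get("Retry-After", delay)))
            delay *= 2
        return resp

    accounts = get_with_retry(f"{ORG_URL}/api/data/v9.2/accounts?$top=5")
    print(accounts.json())

In the Logic App itself you would express the equivalent behaviour through the action's retry policy settings rather than writing code.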


How We Used Azure Blob Storage and Logic Apps to Centralize Dynamics 365 Integration Configurations

Managing multiple Dynamics 365 integrations across environments often becomes complex when each integration depends on static or hardcoded configuration values like API URLs, headers, secrets, or custom parameters. We faced similar challenges until we centralized our configuration strategy, using Azure Blob Storage to host the configs and Logic Apps to dynamically fetch and apply them during execution. In this blog, we'll walk through how we implemented this architecture and simplified config management across our D365 projects.

Why We Needed Centralized Config Management

In projects with multiple Logic Apps and D365 endpoints:
Key problems:

Solution Architecture Overview

Key Components:
Workflow:

Step-by-Step Implementation

Step 1: Store Config in Azure Blob Storage
Example JSON:

    {
      "apiUrl": "https://externalapi.com/v1/",
      "apiKey": "xyz123abc",
      "timeout": 60
    }

Step 2: Build Logic App to Read Config
Step 3: Parse and Use Config (a Python sketch of this read-and-parse step appears at the end of this post)
Step 4: Apply to All Logic Apps

Benefits of This Approach

To conclude, centralizing D365 integration configs using Azure Blob and Logic Apps transformed our integration architecture. It made our systems easier to maintain, more scalable, and resilient to changes. Are you still hardcoding configs in your Logic Apps or Power Automate flows? Start organizing your integration configs in Azure Blob today, and build workflows that are smart, scalable, and maintainable.

I hope you found this blog useful, and if you would like to discuss anything, you can reach out to us at transform@cloudfronts.com.
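In the architecture above, Steps 2 and 3 are handled inside the Logic App with the Blob Storage connector and a Parse JSON action. As a hedged illustration of the same read-and-parse step outside Logic Apps, here is a minimal Python sketch using the Azure Blob SDK; the container name, blob name, and connection-string variable are assumptions, and the keys match the example JSON above.

    import json
    import os

    from azure.storage.blob import BlobClient

    blob = BlobClient.from_connection_string(
        conn_str=os.environ["STORAGE_CONNECTION_STRING"],  # assumed env var
        container_name="integration-configs",              # assumed container
        blob_name="external-api.json",                      # assumed blob holding the config
    )

    config = json.loads(blob.download_blob().readall())

    # Use the centralized values instead of hardcoding them in each workflow.
    api_url = config["apiUrl"]
    api_key = config["apiKey"]
    timeout = config["timeout"]
    print(f"Calling {api_url} with a {timeout}s timeout")

Because every workflow reads the same blob, changing an endpoint or key means updating one JSON file instead of every Logic App.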


Common Mistakes to Avoid When Integrating Dynamics 365 with Azure Logic Apps

Integrating Microsoft Dynamics 365 (D365) with external systems using Azure Logic Apps is a powerful and flexible approach, but it's also prone to missteps if not planned and implemented correctly. In our experience working with D365 integrations across multiple projects, we've seen recurring mistakes that affect performance, maintainability, and security. In this blog, we'll outline the most common mistakes and provide actionable recommendations to help you avoid them.

Core Content

For each mistake below, we look at the mistake itself, why it's bad, and the best practice to follow.

1. Not Using the Dynamics 365 Connector Properly
2. Hardcoding Environment URLs and Credentials
3. Ignoring D365 API Throttling and Limits
4. Not Handling Errors Gracefully
5. Forgetting to Secure the HTTP Trigger
6. Overcomplicating the Workflow
7. Not Testing in Isolated or Sandbox Environments

(A short sketch addressing mistake 2 appears at the end of this post.)

To conclude, integrating Dynamics 365 with Azure Logic Apps is a powerful solution, but it requires careful planning to avoid common pitfalls. From securing endpoints and using config files to handling throttling and organizing modular workflows, the right practices save you hours of debugging and rework. Are you planning a new D365 + Azure Logic App integration? Review your architecture against these 7 pitfalls. Even one small improvement today could save hours of firefighting tomorrow.

I hope you found this blog useful, and if you would like to discuss anything, you can reach out to us at transform@cloudfronts.com.
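As a hedged illustration of how to avoid mistake 2 (hardcoding environment URLs and credentials), here is a minimal Python sketch that resolves them at runtime from Azure Key Vault. The vault URL and secret names are assumptions; a Logic App would typically achieve the same through Key Vault references or a centralized config file rather than this code.

    from azure.identity import DefaultAzureCredential
    from azure.keyvault.secrets import SecretClient

    VAULT_URL = "https://my-integration-kv.vault.azure.net"  # assumed Key Vault URL

    client = SecretClient(vault_url=VAULT_URL, credential=DefaultAzureCredential())

    # Secret names are illustrative; store one set per environment (dev/test/prod).
    d365_url = client.get_secret("d365-environment-url").value
    client_secret = client.get_secret("d365-client-secret").value

    print(f"Connecting to {d365_url}")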


How to Build a Scorecard in Power BI

What Is a Scorecard in Power BI?

A Scorecard is a visual performance monitoring tool that allows you to track key metrics (goals) against predefined targets. Power BI's Metrics (formerly Goals) feature helps you:

Why Use Scorecards?

Here's why Scorecards are powerful for any team:

- Goal Alignment: track KPIs aligned to strategic objectives.
- Accountability: assign owners and collaborators for each goal.
- Real-time Tracking: monitor progress with live metrics.
- Visual Reporting: easy-to-read dashboards and history tracking.

Step-by-Step: How to Build a Scorecard in Power BI

Step 1: Navigate to Power BI Service
Go to Power BI Service and choose the workspace where you want to create your Scorecard (Premium or Pro workspaces only).

Step 2: Create a New Scorecard
You'll now land on a blank Scorecard canvas.

Step 3: Add Metrics to the Scorecard
You can connect each metric to an existing Power BI dataset or manually input values.

Step 4: Link Metrics to Data (Optional but Recommended)
To automate tracking:
This ensures your Scorecard updates automatically with data refreshes.

Step 5: Customize the Scorecard
You can also create hierarchies to group related goals under broader objectives.

Step 6: Share & Collaborate
Once your Scorecard is built:

To conclude, Power BI Scorecards turn your data into action. They help track goals in real time, assign ownership, and keep teams focused on what matters most. Whether you're managing a sales team, a project, or company-wide objectives, Power BI Scorecards are a game-changer for performance tracking. Want to bring visibility and accountability to your team goals? Head to Power BI Service and start building your first Scorecard today! Need help connecting metrics to your datasets? Reach out, and we'll guide you step by step.

We hope you found this blog useful, and if you would like to discuss anything, you can reach out to us at transform@cloudfronts.com.


How to Implement Incremental Refresh in Power BI

Refreshing large datasets in Power BI can become time-consuming and resource-intensive as data volume grows. If your reports are based on millions of rows of historical data, refreshing everything daily is neither efficient nor necessary. This is where Incremental Refresh comes in. It allows Power BI to refresh only new or changed data, drastically improving performance and reducing load on your data source. In this blog, you'll learn how to set up incremental refresh step by step, so your Power BI reports stay fast and efficient even with big data.

What Is Incremental Refresh in Power BI?

Incremental Refresh enables Power BI to load data in partitions, refreshing only the latest ones (e.g., the past 7 days) while keeping the older data static.

Why use it?

Step 1: Define Parameters in Power Query

- Open your report in Power BI Desktop (Pro or Premium workspace).
- Go to Transform Data (Power Query Editor).
- Create two parameters: RangeStart and RangeEnd.
- Set default values (e.g., RangeStart = 01/01/2020, RangeEnd = 01/01/2021).

Step 2: Filter Your Data with These Parameters

This tells Power BI what time range to load and eventually refresh incrementally.

Step 3: Enable Incremental Refresh in Data Model

📝 Example: a configuration that refreshes only the most recent week of data each time, while keeping the rest intact.

Step 4: Publish to Power BI Service

✅ Done! You've now implemented incremental refresh. (A sketch of triggering refreshes through the REST API appears at the end of this post.)

Best Practices

To conclude, Incremental Refresh is a game-changer when it comes to handling large datasets in Power BI. It not only saves refresh time but also optimizes resource usage. By learning how to configure it properly, you can scale your reports with confidence and efficiency. Got a large dataset slowing down your Power BI refresh? Implement Incremental Refresh today and see the difference. Explore more Power BI performance tips in our blog series, or reach out for help setting up enterprise-grade models.

We hope you found this blog useful, and if you would like to discuss anything, you can reach out to us at transform@cloudfronts.com
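After publishing, refreshes are normally scheduled in the Power BI Service, and only the partitions inside the incremental window are reprocessed. As a hedged extra, here is a minimal Python sketch that triggers such a refresh through the Power BI REST API; the workspace and dataset IDs and the token acquisition are placeholders, not values from this post.

    import requests

    ACCESS_TOKEN = "<azure-ad-token>"   # acquire via MSAL / service principal in real code
    GROUP_ID = "<workspace-id>"         # assumed workspace (group) ID
    DATASET_ID = "<dataset-id>"         # assumed dataset ID

    resp = requests.post(
        f"https://api.powerbi.com/v1.0/myorg/groups/{GROUP_ID}/datasets/{DATASET_ID}/refreshes",
        headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
        timeout=30,
    )
    resp.raise_for_status()
    print("Refresh accepted:", resp.status_code)  # 202 means the refresh was queued

A 202 response means the refresh was queued; progress then shows up in the dataset's refresh history.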


How to Perform Data Transformation in Microsoft Dataverse

Microsoft Dataverse is a powerful data platform that supports secure and scalable data storage for business applications. However, raw data imported into Dataverse often needs transformation (cleaning, reshaping, filtering, or merging) to make it useful and reliable for apps and analytics. In this blog, we'll show you how to apply transformations to data before or after it reaches Dataverse using tools like Power Query, dataflows, and business rules, ensuring you always work with clean, structured, and actionable data.

What is Data Transformation in Dataverse, and Why Does It Matter?

Data transformation refers to modifying data's structure, content, or format before or after it's stored in Dataverse. This includes cleaning, reshaping, filtering, and merging data. (A short conceptual sketch of these operations appears at the end of this post.)

Step-by-Step Guide: Connecting a Database to Dataverse

Step 1: Open Power Apps and select the proper environment.
Step 2: Open Dataflows in Power Apps and create a new dataflow.
Step 3: Connect to the database using the SQL Server database connector.
Step 4: Add the required credentials to make the connection between the database and Dataverse.
Step 5: Add the transformations in the dataflow.
Step 6: Add proper mapping of the columns and identify the unique ID of the table.
Step 7: Set the scheduled refresh and publish the dataflow.
Step 8: Once the dataflow is published, you can see the table in Power Apps.

To conclude, transforming data in Dataverse is key to building reliable and high-performing applications. Whether using Power Query, calculated columns, or Power Automate, you can ensure your data is clean, structured, and actionable. Ready to improve your Dataverse data quality? Start with a simple dataflow or calculated column today, and empower your business applications with better, transformed data.

We hope you found this blog useful, and if you would like to discuss anything, you can reach out to us at transform@cloudfronts.com.
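Purely to illustrate what the cleaning, filtering, and merging steps above mean (the walkthrough itself applies them visually in the dataflow's Power Query editor), here is a minimal pandas sketch with made-up column names; it is a conceptual aid, not part of the Dataverse tooling.

    import pandas as pd

    orders = pd.DataFrame({
        "order_id": [1, 2, 2, 3],
        "customer": [" Alice ", "Bob", "Bob", None],
        "amount": [120.0, 80.0, 80.0, 45.0],
    })
    customers = pd.DataFrame({"customer": ["Alice", "Bob"], "region": ["East", "West"]})

    clean = (
        orders
        .drop_duplicates("order_id")                            # cleaning: remove duplicates
        .dropna(subset=["customer"])                            # cleaning: drop incomplete rows
        .assign(customer=lambda d: d["customer"].str.strip())   # standardize formats
    )
    filtered = clean[clean["amount"] > 50]                      # filtering
    merged = filtered.merge(customers, on="customer")           # merging with a lookup table
    print(merged)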


Bridge Your Database and Dataverse: Complete Integration Guide

Modern applications demand seamless, real-time data access. Microsoft Dataverse, the data backbone of the Power Platform, makes it easier to build and scale low-code apps, but often your enterprise data resides in legacy databases. Connecting a database to Dataverse enables automation, reporting, and app-building capabilities using the Power Platform's ecosystem. In this blog, we'll walk you through how to connect a traditional SQL database (Azure SQL or on-premises) to Microsoft Dataverse.

What is Dataverse?

Dataverse is Microsoft's cloud-based data platform, designed to securely store and manage data used by business applications. It's highly integrated with Power Apps, Power Automate, and Dynamics 365.

Key Features:

Why Connect Your Database to Dataverse?

Step-by-Step Guide: Connecting a Database to Dataverse

Step 1: Open Power Apps and select the proper environment.
Step 2: Open Dataflows in Power Apps and create a new dataflow.
Step 3: Connect to the database using the SQL Server database connector.
Step 4: Add the required credentials to make the connection between the database and Dataverse.
Step 5: Add proper mapping of the columns and identify the unique ID of the table.
Step 6: Set the scheduled refresh and publish the dataflow.
Step 7: Once the dataflow is published, you can see the table in Power Apps. (A small verification sketch using the Dataverse Web API follows this post.)

To conclude, connecting your database to Dataverse amplifies the power of your data, enabling app development, automation, and reporting within a unified ecosystem. Whether you need real-time access or periodic data sync, Microsoft offers flexible and secure methods to integrate databases with Dataverse. Start exploring virtual tables or dataflows today to bridge the gap between your existing databases and the Power Platform. Want to learn more? Check out our related guides on Dataverse best practices and virtual table optimization.

We hope you found this blog useful. If you would like to discuss anything further, please reach out to us at transform@cloudfronts.com.
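One way to verify the integration after Step 7 is to read the new table through the Dataverse Web API with a service principal. The sketch below is a hedged Python example; the environment URL, tenant and app IDs, and the table's entity set name are assumptions you would replace with your own values.

    import msal
    import requests

    ENV_URL = "https://yourorg.crm.dynamics.com"   # assumed Dataverse environment URL
    TENANT_ID = "<tenant-id>"
    CLIENT_ID = "<app-registration-client-id>"
    CLIENT_SECRET = "<client-secret>"

    # Client-credentials flow for an app registration that has a Dataverse application user.
    app = msal.ConfidentialClientApplication(
        CLIENT_ID,
        authority=f"https://login.microsoftonline.com/{TENANT_ID}",
        client_credential=CLIENT_SECRET,
    )
    token = app.acquire_token_for_client(scopes=[f"{ENV_URL}/.default"])

    rows = requests.get(
        f"{ENV_URL}/api/data/v9.2/cr123_customers?$top=5",   # assumed entity set name
        headers={"Authorization": f"Bearer {token['access_token']}"},
        timeout=30,
    ).json()
    print(rows["value"])

If the call returns rows, the dataflow load and its scheduled refresh are working end to end.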




