

QA Made Easy with KQL in Azure Application Insights

In today's world of modern DevOps and continuous delivery, the ability to analyze application behavior quickly and efficiently is key to Quality Assurance (QA). Azure Application Insights offers powerful telemetry collection, but what makes it truly shine is the Kusto Query Language (KQL), a rich, expressive query language that enables deep-dive analytics into your application's performance, usage, and errors. Whether you're testing a web app, monitoring API failures, or validating load test results, KQL can become your best QA companion.

What is KQL?

KQL stands for Kusto Query Language, and it's used to query telemetry data collected by Azure Monitor, Application Insights, and Log Analytics. It's designed to read like English, with SQL-style expressions, yet it is far more powerful for telemetry analysis.

Challenges Faced with Application Insights in QA

1. Telemetry data doesn't always show up immediately after execution, causing delays in debugging and test validation.
2. When testing involves thousands of records, isolating failed requests or exceptions becomes tedious and time-consuming.
3. The default portal experience lacks intuitive filters for QA-specific needs like test case IDs, custom payloads, or user roles.
4. Repeated logs from expected failures (e.g., negative test cases) can clutter insights, making it hard to focus on actual issues.
5. Out-of-the-box telemetry doesn't group actions by test scenario or user session unless explicitly configured, making traceability difficult during test case validation.

To overcome these limitations, QA teams need more than just default dashboards; they need flexibility, precision, and speed in analyzing telemetry. This is where KQL becomes invaluable. With KQL, testers can write custom queries to filter, group, and visualize telemetry exactly the way they need, allowing them to focus on real issues, validate test scenarios, and make data-driven decisions faster and more efficiently.

Some common scenarios where KQL proves very effective (a few of these are sketched as queries at the end of this post):

- Check whether the latest deployment introduced new exceptions
- Find all failed requests
- Analyze the performance of a specific page or operation
- Correlate requests with exceptions
- Validate custom event tracking (like button clicks)
- Track specific user sessions for end-to-end QA testing
- Test API performance under load

All of this can be visualized too: you can pin your KQL queries to Azure Dashboards or even Power BI for real-time tracking during QA sprints.

To conclude, KQL is not just for developers or DevOps. QA engineers can significantly reduce manual log-hunting and accelerate issue detection by writing powerful queries in Application Insights. By incorporating KQL into your testing lifecycle, you add an analytical edge to your QA process, making quality not just a gate but a continuous insight loop. Start with a few basic queries, and soon you'll be building powerful dashboards that QA, Dev, and Product can all share!

Hope this helps! I hope you found this blog useful, and if you would like to discuss anything, you can reach out to us at transform@cloudfronts.com.
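To make these scenarios concrete, here are a few representative queries, a minimal sketch written against the standard Application Insights tables (requests, exceptions, customEvents); the event name "ButtonClicked" and the time windows are illustrative assumptions:

requests
| where timestamp > ago(24h)
| where success == false
| summarize failedCount = count() by name, resultCode // group failures by operation and response code
| order by failedCount desc

exceptions
| where timestamp > ago(24h)
| summarize occurrences = count() by type, outerMessage // spot exception types introduced by a deployment
| order by occurrences desc

customEvents
| where name == "ButtonClicked" // hypothetical custom event name
| summarize clicks = count() by bin(timestamp, 1h) // hourly click counts

requests
| where timestamp > ago(1h)
| summarize p95_ms = percentile(duration, 95) by operation_Name // API performance under load
| order by p95_ms desc

Each of these can be pinned straight from the Logs pane to an Azure Dashboard, which is the visualization path mentioned above.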


Simplifying Access Management in Dynamics 365 Business Central Through Security Groups

A Security Group is a way to group users together so that you can give access to all of them at once. For example, if everyone in the Finance team needs access to certain files or apps, you can add them to a group and give the group permission instead of doing it for each person. In Office 365, Security Groups are managed through Azure Active Directory, which handles sign-ins and user identities in Microsoft 365. They help IT teams save time, stay organized, and keep company data safe.

The same Security Groups you create in Azure Active Directory (AAD) can also be used in Dynamics 365 Business Central to manage user permissions. Instead of giving access to each user one by one in Business Central, you can connect a Security Group to a set of permissions. Then, anyone added to that group in Azure AD will automatically get the same permissions in Business Central. They're also helpful when you want to control environment-level access, especially if your company uses different environments for testing and production. For example, only specific groups of users can be allowed into the production system.

Security Groups aren't just useful in Business Central; they can be used across many Microsoft 365 services. You can use them in tools like Power BI, Power Automate, and other Office 365 apps to manage who has access to certain reports, flows, or data. In Microsoft Entra (formerly Azure AD), these groups can be used in Conditional Access policies. This means you can set rules like "only users in this group can log in from trusted devices" or "users in this group must use multi-factor authentication."

References

- Compare types of groups in Microsoft 365 – Microsoft 365 admin | Microsoft Learn
- What is Conditional Access in Microsoft Entra ID? – Microsoft Entra ID | Microsoft Learn
- Simplify Conditional Access policy deployment with templates – Microsoft Entra ID | Microsoft Learn

Usage

1. Go to the Microsoft 365 admin center home page.
2. Go to "Teams & Groups" > "Active Teams & Groups" > "Security Groups".
3. Click on "Add a security group" to create a new group.
4. Add a name and description for the group, click Next, and finish the process.
5. Once the group is created, re-open it and click on the "Members" tab to add members.
6. Click on "View all and manage members" > "Add Members".
7. Select all the relevant users and click Add.
8. Back in Business Central, search for Security Groups. Open the page and click New.
9. Click on the drill-down. You'll see all the available security groups here; select the relevant one and click OK. Mail groups are not considered in this list.
10. You can change the Code it uses in Business Central if required. Once done, click "Create".
11. Select the new Security Group, click Permissions, and assign the relevant permissions.

Now, any user added to this Security Group in Office 365 will have the D365 Banking Permission Set assigned to them. Further, these groups are also visible in the Admin Center, where you can define whether a particular group has access to a particular environment.

To conclude, Security Groups are a powerful way to manage user access across Microsoft 365 and Dynamics 365 Business Central. They save time, reduce manual effort, and help ensure that the right people have access to the right data and tools. By using Security Groups, IT teams can stay organized, manage permissions more consistently, and improve overall security.
Whether you’re working with Business Central, Power BI, or setting up Conditional Access in Microsoft Entra, Security Groups provide a flexible and scalable solution for modern access management. If you need further assistance or have specific questions about your ERP setup, feel free to reach out for personalized guidance. I hope you found this blog useful, and if you would like to discuss anything, you can reach out to us at transform@cloudfronts.com.


Is Your Tech Stack Holding You Back from AI Success?

The AI Race Has Begun, but Most Businesses Are Crawling

Artificial Intelligence (AI) is no longer experimental; it's operational. Across industries, companies are trying to harness it to improve decision-making, automate intelligently, and gain a competitive edge. But here's the problem: only 48% of AI projects ever make it to production (Gartner, 2024). It's not because AI doesn't work. It's because most tech stacks aren't built to support it.

The Real Bottleneck Isn't AI. It's Your Foundation.

You may have data. You may even have AI tools. But if your infrastructure isn't AI-ready, you'll stay stuck in POCs that never scale. Common signs you're blocked: proofs of concept that never reach production, data locked in silos, and pipelines that can't feed models reliably. AI success starts beneath the surface, in your data pipelines, infrastructure, and architecture. Most machine learning systems fail not because of poor models, but because of broken data and infrastructure pipelines.

What Does an AI-Ready Tech Stack Look Like?

Being AI-ready means preparing your infrastructure, data, and processes to fully support AI capabilities. This is not a checklist or quick fix. It is a structured alignment of technology and business goals. Area by area, here is how a traditional stack compares with an AI-ready one:

Infrastructure
- Traditional stack: on-premises servers, outdated VMs.
- AI-ready stack: Azure Kubernetes Service (AKS), Azure Functions, Azure App Services; alternatives: AWS EKS, Lambda; GCP GKE, Cloud Run.
- Why it matters: AI workloads need scalable, flexible compute with container orchestration and event-driven execution.

Data Handling
- Traditional stack: siloed databases, batch ETL jobs.
- AI-ready stack: Azure Data Factory, Power Platform connectors, Azure Event Grid, Synapse Link; alternatives: AWS Glue, Kinesis; GCP Dataflow, Pub/Sub.
- Why it matters: enables real-time, consistent, and automated data flow for training and inference.

Storage & Retrieval
- Traditional stack: relational DBs, Excel, file shares.
- AI-ready stack: Azure Data Lake Gen2, Azure Cosmos DB, Microsoft Fabric OneLake, Azure AI Search (with vector search); alternatives: AWS S3, DynamoDB, OpenSearch; GCP BigQuery, Firestore.
- Why it matters: modern AI needs scalable object storage and vector DBs for unstructured and semantic data.

AI Enablement
- Traditional stack: isolated scripts, manual ML.
- AI-ready stack: Azure OpenAI Service, Azure Machine Learning, Copilot Studio, Power Platform AI Builder; alternatives: AWS SageMaker, Bedrock; GCP Vertex AI, AutoML; OpenAI, Hugging Face.
- Why it matters: simplifies AI adoption with ready-to-use models, tools, and MLOps pipelines.

Security & Governance
- Traditional stack: basic firewall rules, no audit logs.
- AI-ready stack: Microsoft Entra (Azure AD), Microsoft Purview, Microsoft Defender for Cloud, Compliance Manager, Dataverse RBAC; alternatives: AWS IAM, Macie; GCP Cloud IAM, DLP API.
- Why it matters: ensures responsible AI use, regulatory compliance, and data protection.

Monitoring & Ops
- Traditional stack: manual monitoring, limited observability.
- AI-ready stack: Azure Monitor, Application Insights, Power Platform Admin Center, Purview Audit Logs; alternatives: AWS CloudWatch, X-Ray; GCP Ops Suite; Datadog, Prometheus.
- Why it matters: AI success depends on observability across infrastructure, pipelines, and models.

In summary: AI-readiness is not a buzzword. Not a checklist. It's an architectural reality.

Why This Matters Now

AI is moving fast, and so are your competitors. But success doesn't depend on building your own LLM or becoming a data science lab. It depends on whether your systems are ready to support intelligence at scale. If your tech stack can't deliver real-time data, run scalable AI, and ensure trust, your AI ambitions will stay just that: ambitions.

How We Help

We work with organizations across industries to assess AI readiness, modernize data and infrastructure foundations, and build the architecture that enables action, whether you're just starting or scaling AI across teams.
Because AI success isn’t about plugging in a tool. It’s about building a foundation where intelligence thrives. I hope you found this blog useful, and if you would like to discuss anything, you can reach out to us at transform@cloudfronts.com.


Common Mistakes to Avoid When Integrating Dynamics 365 with Azure Logic Apps

Integrating Microsoft Dynamics 365 (D365) with external systems using Azure Logic Apps is a powerful and flexible approach, but it's also prone to missteps if not planned and implemented correctly. In our experience working with D365 integrations across multiple projects, we've seen recurring mistakes that affect performance, maintainability, and security. In this blog, we outline the most common mistakes, each examined in terms of the mistake itself, why it's bad, and the best practice that avoids it:

1. Not using the Dynamics 365 connector properly
2. Hardcoding environment URLs and credentials
3. Ignoring D365 API throttling and limits
4. Not handling errors gracefully
5. Forgetting to secure the HTTP trigger
6. Overcomplicating the workflow
7. Not testing in isolated or sandbox environments

To conclude, integrating Dynamics 365 with Azure Logic Apps is a powerful solution, but it requires careful planning to avoid common pitfalls. From securing endpoints and using config files to handling throttling and organizing modular workflows, the right practices save you hours of debugging and rework. Are you planning a new D365 + Azure Logic App integration? Review your architecture against these 7 pitfalls. Even one small improvement today could save hours of firefighting tomorrow. I hope you found this blog useful, and if you would like to discuss anything, you can reach out to us at transform@cloudfronts.com.
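As one concrete illustration of pitfall 3 (throttling), here is a minimal sketch of an exponential retry policy on an HTTP action in a Logic App workflow definition; the action name and URI are assumptions, but the retryPolicy element is standard Workflow Definition Language:

"Call_D365_API": {
  "type": "Http",
  "inputs": {
    "method": "GET",
    "uri": "https://yourorg.api.crm.dynamics.com/api/data/v9.2/accounts",
    "retryPolicy": {
      "type": "exponential",
      "count": 5,
      "interval": "PT15S",
      "maximumInterval": "PT1H"
    }
  },
  "runAfter": {}
}

Pairing this with a "run after" path that handles Failed outcomes also addresses pitfall 4 (graceful error handling) without cluttering the happy path.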


Why Project-Based Firms Should Embrace AI Now (Not Later)

In project-based businesses, reporting is the final word. It tells you what was planned, what happened, where you made money, and where you lost it. But ask any project manager or CEO what they really think about project reporting today, and you'll hear this: "It's late. It's manual. It's siloed. And by the time I see it, it's too late to act." This is exactly why AI is no longer optional; it's essential. Whether you're in construction, consulting, IT services, or professional engineering, AI can elevate your project reporting from a reactive chore to a strategic asset. Here's how.

The Problem with Traditional Reporting

Most reporting today involves manual data pulls, spreadsheet consolidation, and siloed snapshots that are outdated by the time they reach decision-makers.

Enter AI: The Game-Changer for Project Reporting

AI isn't about replacing humans; it's about augmenting your decision-making. When embedded in platforms like Dynamics 365 Project Operations and Power BI, AI becomes the project manager's smartest analyst and the CEO's most trusted advisor. Here's what that looks like:

Imagine your system telling you: "Project Alpha is likely to overrun budget by 12% based on current burn rate and resource allocation trends." AI models analyze historical patterns, resource velocity, and task progress to predict issues weeks in advance. That's no longer science fiction; it's happening today with AI-enhanced Power BI and Copilot in Dynamics 365.

Instead of navigating dashboards, just ask: "Show me projects likely to miss deadlines this month." With Copilot in Dynamics 365, you get answers in seconds, with charts and supporting data. No need to wait for your analyst or export 10 spreadsheets.

AI can clean, match, and validate data coming from multiple source systems, so there are no more mismatched formats and no chasing someone to update a spreadsheet. AI ensures your reports are built on clean, real-time data, not assumptions.

You don't need to check 12 dashboards daily. With AI, you can set intelligent alerts that surface risks such as budget overruns or slipping deadlines as they emerge. These alerts are not static rules but are learned over time based on project patterns and exceptions.

To conclude, for CEOs and PMs alike: we can show you how AI and Copilot in Dynamics 365 can simplify reporting, uncover risks, and help your team act with confidence. Start small, maybe with reporting or forecasting, but start now. I hope you found this blog useful, and if you would like to discuss anything, you can reach out to us at transform@cloudfronts.com.


Building a Scalable Integration Architecture for Dynamics 365 Using Logic Apps and Azure Functions

If you've worked with Dynamics 365 CRM on any serious integration project, you've probably used Azure Logic Apps. They're great: visual, no-code, and fast to deploy. But as your integration needs grow, you quickly hit complexity: multiple entities, large volumes, branching logic, error handling, and reusability. That's when architecture becomes critical. In this blog, I'll share how we built a modular, scalable, and reusable integration architecture using Logic Apps + Azure Functions + Azure Blob Storage, with a config-driven approach. Whether you're syncing data between D365 and Finance & Operations, or automating CRM workflows with external APIs, this post will help you avoid bottlenecks and stay maintainable.

Architecture Components

- Parent Logic App: entry point; reads config from blob, iterates entities.
- Child Logic App(s): handles each entity sync (Project, Task, Team, etc.).
- Azure Blob Storage: hosts configuration files, Liquid templates, checkpoint data.
- Azure Function: performs advanced transformation via Liquid templates.
- CRM & F&O APIs: source and target systems.

Step-by-Step Breakdown

1. Configuration-Driven Logic

We didn't hardcode URLs, fields, or entities. Everything lives in a central config.json in Blob Storage:

{
  "integrationName": "ProjectToFNO",
  "sourceEntity": "msdyn_project",
  "targetEntity": "ProjectsV2",
  "liquidTemplate": "projectToFno.liquid",
  "primaryKey": "msdyn_projectid"
}

2. Parent–Child Logic App Model

Instead of one massive workflow, we created a parent Logic App that reads the configuration and iterates over the configured entities, while each child Logic App handles the end-to-end sync for a single entity.

3. Azure Function for Transformation

Why not use Logic App's Compose or Data Operations? Because complex mapping (especially D365 → F&O) quickly becomes unreadable. Instead, an Azure Function applies a Liquid template such as:

{
  "ProjectName": "{{ msdyn_subject }}",
  "Customer": "{{ customerid.name }}"
}

4. Handling Checkpoints

For batch integration (daily/hourly), we store the last run timestamp in Blob:

{
  "entity": "msdyn_project",
  "modifiedon": "2025-07-28T22:00:00Z"
}

This allows delta fetches like:

?$filter=modifiedon gt 2025-07-28T22:00:00Z

After each run, we update the checkpoint blob.

5. Centralized Logging & Alerts

We configured centralized logging and alerting through Azure Monitor on top of the blob-based run data. This helped us track down integration mismatches fast.

Why This Architecture Works

- Reusability: config-based logic + modular templates.
- Maintainability: each Logic App has one job.
- Scalability: add new entities via config, not code.
- Monitoring: Blob + Azure Monitor integration.
- Transformation complexity: handled via Azure Functions + Liquid.

To conclude, this architecture has helped us deliver scalable Dynamics 365 integrations, including syncing Projects, Tasks, Teams, and Time Entries to F&O, all without rewriting Logic Apps every time a client asks for a tweak. If you're working on medium to complex D365 integrations, consider going config-driven and breaking your workflows into modular components. It keeps things clean, reusable, and much easier to maintain in the long run. I hope you found this blog useful, and if you would like to discuss anything, you can reach out to us at transform@cloudfronts.com.
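For concreteness, here is a trimmed sketch of the parent's fan-out in Workflow Definition Language; the action names, child workflow ID, and config URI are illustrative assumptions (in practice, the config read would go through the Blob connector or a SAS-secured URL rather than a bare HTTP GET):

"actions": {
  "Read_config": {
    "type": "Http",
    "inputs": {
      "method": "GET",
      "uri": "https://yourstorage.blob.core.windows.net/integration/config.json"
    },
    "runAfter": {}
  },
  "For_each_entity": {
    "type": "Foreach",
    "foreach": "@body('Read_config')",
    "runAfter": { "Read_config": [ "Succeeded" ] },
    "actions": {
      "Call_child_sync": {
        "type": "Workflow",
        "inputs": {
          "host": {
            "triggerName": "manual",
            "workflow": { "id": "/subscriptions/<sub>/resourceGroups/<rg>/providers/Microsoft.Logic/workflows/child-entity-sync" }
          },
          "body": "@item()"
        },
        "runAfter": {}
      }
    }
  }
}

Each config entry is passed to the child unchanged, which is what makes "add new entities via config, not code" possible.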


The Future of Financial Reporting: How SSRS in Dynamics 365 is Transforming Finance Teams

In Microsoft Dynamics 365 Finance and Operations (D365 F&O), reporting is a critical aspect of delivering insights, decision-making data, and compliance information. While standard reports are available out of the box, many organizations require customized reporting tailored to specific business needs. This is where X++ and SSRS (SQL Server Reporting Services) come into play. In this blog, we'll explore how reporting works in D365 F&O, the role of X++, and how developers can create powerful, customized reports using standard tools.

Overview: Reporting in D365 F&O

Dynamics 365 F&O offers multiple reporting options, including SSRS reports, Electronic Reporting (ER), Financial Reporting, and Power BI. Among these, SSRS reports with X++ (RDP) are the most common for developers who need to generate transaction-based, formatted reports, like invoices, purchase orders, and audit summaries.

Key Components of an SSRS Report Using X++

To create a custom SSRS report using X++ in D365 F&O, you typically work with a temporary table, a data contract class, a report data provider (RDP) class, the report design, and menu items for navigation.

Step-by-Step: Building a Report with X++

1. Create a Temporary Table

Create a temporary table that stores the data used for the report. Use InMemory or TempDB depending on your performance and persistence requirements.

TmpCustReport tmpCustReport; // Example TempDB table

2. Build a Contract Class

This class defines the parameters users will input when running the report.

[DataContractAttribute]
class CustReportContract
{
    private CustAccount custAccount;

    [DataMemberAttribute("CustomerAccount")]
    public CustAccount parmCustAccount(CustAccount _custAccount = custAccount)
    {
        custAccount = _custAccount;
        return custAccount;
    }
}

3. Write a Report Data Provider (RDP) Class

This is where you write the business logic and data extraction in X++. This class extends SRSReportDataProviderBase.

[SRSReportParameterAttribute(classStr(CustReportContract))]
class CustReportDP extends SRSReportDataProviderBase
{
    TmpCustReport tmpCustReport;

    public void processReport()
    {
        CustReportContract contract = this.parmDataContract();
        CustAccount custAccount = contract.parmCustAccount();

        while select * from CustTable
            where CustTable.AccountNum == custAccount
        {
            tmpCustReport.AccountNum = CustTable.AccountNum;
            tmpCustReport.Name = CustTable.name();
            tmpCustReport.insert();
        }
    }

    // The dataset attribute binds this method's temp table to the report dataset
    [SRSReportDataSetAttribute(tableStr(TmpCustReport))]
    public TmpCustReport getTmpCustReport()
    {
        return tmpCustReport;
    }
}

4. Design the Report in Visual Studio

Add a new SSRS report to your Visual Studio project, add a dataset bound to the RDP class (CustReportDP), and build the report design using the dataset fields.

5. Create Menu Items and Add to Navigation

To allow users to access the report, create an Output menu item pointing to the report design and add it to the appropriate menu in the navigation.

Security Considerations

Always create a new privilege and assign it to a duty and role if this report needs to be secured. This ensures proper compliance with security best practices.

To conclude, creating reports using X++ in Dynamics 365 Finance and Operations is a powerful way to deliver customized business documents and analytical reports. With the structured approach of Contract → RDP → SSRS, developers can build maintainable and scalable reporting solutions. Whether you're generating a sales summary, customer ledger, or compliance documentation, understanding how to use X++ for reporting gives you full control over data and design. I hope you found this blog useful, and if you would like to discuss anything, you can reach out to us at transform@cloudfronts.com.


Automatically Update Lookup Fields in Dynamics 365 Using Power Automate: From Custom Tables to Standard Entities

Imagine this: you update a product's purchase date in a registration record and, boom, a related case automatically gets refreshed with the accurate "Purchased From" lookup. It saves time, reduces errors, and keeps everything in sync without you lifting a finger. Let's walk through how to make that happen using Power Automate.

The goal: when a Product Registration's cri_purchasedat field is changed, the system will retrieve the related "Purchased From" record and update any linked Case(s) with the appropriate lookup reference. Let's break down the step-by-step process of how this is done in Power Automate.

Step 1: Trigger the Flow When the Purchase Date Changes

Use the Dataverse trigger "When a row is added, modified, or deleted". Scoping the trigger to the cri_purchasedat column (via the trigger's Select columns setting) ensures that the flow only fires when that specific date field is modified.

Step 2: Pull in the "Purchased From" Record

Next, use List rows on the "Purchased From" table with a FetchXML query. We're searching for a record whose name matches the updated cri_purchasedat. Set Row Count to 1, since we expect only one match.

Step 3: Identify Any Linked Case Records

Add another List rows action, this time on the Cases table (internally incident in Dataverse), to fetch all related Case records tied to the updated Product Registration. The FetchXML query matches records where cri_productregistrationid equals the ID of the record being modified. This step is critical because it gives us the list of Case records we need to update:

<fetch>
  <entity name="incident">
    <attribute name="incidentid" />
    <attribute name="title" />
    <attribute name="cf_actualpurchasedfrom" />
    <filter>
      <condition attribute="cri_productregistrationid" operator="eq" value="@{triggerOutputs()?['body/cri_productregistrationid']}" />
    </filter>
  </entity>
</fetch>

Step 4: Validate the Match

Before updating anything, we add a Condition control to ensure that the previously fetched Purchased From record exists and is unique. Why? Because if there's no match (or multiple matches), we don't want to update the Cases blindly. We check whether the length of the returned rows equals 1. If true, move forward with the updates; if false, stop or handle the exception accordingly. This kind of validation builds guardrails into your automation, making it more robust and preventing incorrect data from being applied across multiple records.

Step 5: Update the Linked Cases

After confirming a valid match, the flow loops through each related Case and updates the "Actual Purchased From" field with the correct value from the matched record, ensuring accurate linkage based on the latest update.

To conclude, once this step runs, your automation is complete, with Cases now intelligently updated in real time based on Product Registration changes. We hope you found this blog useful, and if you would like to discuss anything, you can reach out to us at transform@cloudfronts.com.
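For reference, here is a sketch of what the Step 5 update looks like at the Dataverse Web API level; the entity set name "accounts" for the Purchased From record and the column schema name are assumptions based on this example, so substitute your own:

PATCH https://yourorg.api.crm.dynamics.com/api/data/v9.2/incidents(<case-guid>)
Content-Type: application/json

{
  "cf_actualpurchasedfrom@odata.bind": "/accounts(<purchased-from-guid>)"
}

In the Dataverse connector's "Update a row" action, the lookup column expects the same "entityset(guid)" format, e.g. accounts(<purchased-from-guid>).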


Merging Unmanaged Solutions in Power Platform with XRMToolBox

Let's say you are developing a model-driven app or some custom app in CRM, and multiple teams have created multiple different solutions containing customizations for the development. It's best to have all the customizations in a single solution before moving them to UAT or Production. In this blog, I will show you how you can move components of multiple solutions into a single main solution using the Solution Component Mover tool in XRMToolBox. So let's begin.

Step 1: Download XRMToolBox from this link – https://www.xrmtoolbox.com/

Step 2: Make a connection to your Dynamics 365 environment inside XRMToolBox by clicking on Create a new connection.

Step 3: Click on Microsoft Login Control.

Step 4: Click on Open Microsoft Login Control.

Step 5: Select "Display list of available organizations" and "Show advanced", enter your username and password, and after successful authentication, name your connection.

Step 6: In the Tool Library, search for "Solution Component Mover" and hit Install.

Step 7: Once the tool is installed, it will appear in your tool list; click on it.

Step 8: Once you are in the Solution Component Mover tool, click on Load Solutions.

You will now get a list of all managed and unmanaged solutions. Select the solutions you want to merge in the Source Solutions section and select the target solution into which you want to move the components; selected solutions are highlighted in light grey. All the elements from the source solutions will be moved to the target solution.

To conclude, once you have selected the source and target solutions, hit Copy Components, and we are done. We hope you found this blog useful, and if you would like to discuss anything, you can reach out to us at transform@cloudfronts.com.


Building the AI Bridge: How CloudFronts Helps You Connect Systems That Talk to Each Other

What do we mean by "building a bridge"? Does it imply something isn't connected? It does: it's AI and your systems that are not connected. What this means is that although your AI can access your systems to derive information, it remains unreliable and slow.

What is needed for AI to be successful?

For AI to be successful, we must eliminate disconnected, inconsistent data. To do that, we need a 'catalog' layer that houses all business data together so that a common vocabulary is established between systems. AI then pulls from this data catalog to perform agentic actions. At a high level, the architecture places the data catalog between your systems and the AI agents that act on it, and all of this is defined by how well the integrations between these systems are established.

How CloudFronts Can Help

CloudFronts has deep integration expertise in connecting cloud-based applications with each other. Often, we find ready-made plug-and-play cloud integration solutions that come with their own hefty licensing, which keeps going up every few years. Using such integration tools not only affects cash flow but also adds a layer of opaqueness, as we don't control the flow of integration and cannot granularize it beyond what's offered. Custom integration gives you better control and analytics, which ready-made solutions can't. Here's a CloudFronts case study published by Microsoft, wherein we connected multiple systems for our customer, driving data and insights.

To conclude, AI agents aren't optimized to work for your organization right away. This disconnect needs to be engineered just like any other implementation project today. The gap is real and must be filled by something like Unity Catalog together with well-built integrations; CloudFronts can help bridge this gap and make AI work for your organization, helping you optimize cash flow against rising costs. We hope you found this blog useful, and if you would like to discuss anything, you can reach out to us at transform@cloudfronts.com.

