Category Archives: Azure
Seamless Automation with Azure Logic Apps: A Low-Code Powerhouse for Business Integration
In today’s data-driven business landscape, fast, reliable, and automated data integration isn’t just a luxury; it’s a necessity. Organizations often deal with data scattered across various platforms like CRMs, ERPs, or third-party APIs. Manually managing this data is inefficient, error-prone, and unsustainable at scale. That’s where Azure Logic Apps comes into play.

Why Azure Logic Apps?
Azure Logic Apps is a powerful workflow automation platform that enables you to design scalable, low-code solutions to fetch, transform, and store data with minimal overhead. With over 200 connectors (including Dynamics 365, Salesforce, SAP, and custom APIs), Logic Apps simplifies your integration headaches.

Use Case: Fetch Business Data and Dump to Azure Data Lake
Imagine this: you want to fetch real-time or scheduled data from Dynamics 365 Finance & Operations or a similar ERP system, and you want to store that data securely in Azure Data Lake for analytics or downstream processing in Power BI, Databricks, or Machine Learning models. (A rough, code-form sketch of this flow appears at the end of this post.)

What About Other Tools Like ADF or Synapse Link?
Yes, there are other tools available in the Microsoft ecosystem, such as Azure Data Factory (ADF) and Synapse Link.

Why Logic Apps Is Better
What You Get with Logic Apps Integration
Business Value
To conclude, automating your data integration using Logic Apps and Azure Data Lake means spending less time managing data and more time using it to drive business decisions. Whether you’re building a customer insights dashboard, forecasting sales, or optimizing supply chains, this setup gives you the foundation to scale confidently.

📧 Ready to modernize your data pipeline? Drop us a note at transform@cloudfronts.com, and our experts will help you implement the best-fit solution for your business needs.

👉 In our next blog, we’ll walk you through the actual implementation of this Logic Apps integration, step by step, from connecting to Dynamics 365 to storing structured outputs in Azure Data Lake. Stay tuned!
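To make the data flow concrete, here is a minimal Python sketch of the same pattern the Logic App automates: pull records from an F&O OData entity and land them in Azure Data Lake Storage Gen2. The environment URL, entity name, container, and file path below are placeholders for illustration, not values from the original post.

```python
# Minimal sketch (not the Logic App itself): the same fetch-and-land flow in Python.
import json
import requests
from azure.identity import ClientSecretCredential
from azure.storage.filedatalake import DataLakeServiceClient

TENANT_ID = "<tenant-id>"
CLIENT_ID = "<app-registration-client-id>"
CLIENT_SECRET = "<client-secret>"
FNO_URL = "https://contoso.operations.dynamics.com"   # hypothetical F&O environment
ENTITY = "data/CustomersV3"                            # hypothetical OData entity

credential = ClientSecretCredential(TENANT_ID, CLIENT_ID, CLIENT_SECRET)

# 1. Fetch data from the F&O OData endpoint
token = credential.get_token(f"{FNO_URL}/.default").token
response = requests.get(
    f"{FNO_URL}/{ENTITY}?$top=1000",
    headers={"Authorization": f"Bearer {token}"},
    timeout=60,
)
records = response.json().get("value", [])

# 2. Land the payload in Azure Data Lake Storage Gen2
dl_client = DataLakeServiceClient(
    account_url="https://<storageaccount>.dfs.core.windows.net",
    credential=credential,
)
file_client = dl_client.get_file_system_client("raw").get_file_client(
    "dynamics/customers/customers.json"
)
file_client.upload_data(json.dumps(records), overwrite=True)
```

In the Logic App itself, these two steps map roughly to a Dynamics 365 connector (or an HTTP action against the OData endpoint) followed by an Azure Blob Storage / Data Lake action.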
Adding Functionality to an AI Foundry Agent with Logic Apps
AI-powered agents are quickly becoming the round-the-clock assistants of modern enterprises. They automate workflows, respond to queries, and integrate with data sources to deliver intelligent outcomes. But what happens when your agent needs to extend its abilities beyond what’s built in? That’s where Logic Apps come in. In this blog, we’ll explore how you can add functionality to an AI Foundry Agent by connecting it with Azure Logic Apps, turning your agent into a truly extensible automation powerhouse.

Why Extend an AI Foundry Agent?
AI Foundry provides a framework to build, manage, and deploy AI agents in enterprise environments. By default, these agents can handle natural language queries and interact with pre-integrated data sources. However, business use cases often demand more. To achieve this, you need a bridge between your agent and external systems. Azure Logic Apps is that bridge.

Enter Logic Apps
Azure Logic Apps is a cloud-based integration service that lets you connect to hundreds of services and orchestrate automated workflows with minimal code. When integrated with AI Foundry Agents, Logic Apps can serve as external tools the agent can call dynamically.

Steps to extend an AI Foundry Agent’s functionality with Logic Apps:

1] Assuming your Agent Instructions and Knowledge Sources are ready, go to Actions under Knowledge.

2] In the pop-up window, select Azure Logic Apps; you can also use other actions based on your requirement.

3] Here you will see a list of Microsoft-authored as well as custom-built Logic App based tools. To be displayed here, and to be suitable for use by the AI Foundry Agent, a Logic App must meet the following criteria:
a] It should preferably be on the Consumption plan,
b] It should have an HTTP Request trigger, at least one action, and a Response,
c] In the trigger’s Methods, select “Default (Allow all Methods)”,
d] The trigger should have a suitable description,
e] It needs a request body (auto-generated if created directly from AI Foundry).
The developer can either create a trigger from AI Foundry or manually create a Logic App in the same Azure subscription as the AI Foundry project, observing the criteria above.

4] For the scope of this blog, I am covering a simple requirement: getting the list of clients for the SmartPitch project so the agent can fetch case studies based on it. As you can see below, the Logic App tool meets the compatibility requirements for Azure AI Foundry, with the required logic between the request and the response.

5] Once the Logic App is successfully created, it becomes visible under the Logic App actions; select that Logic App to enable it as a tool.

6] Verify the details of the Logic App tool and proceed.

7] Next, provide or verify the following information:
a) Tool Name – the name by which the Logic App will be accessible as a tool in the agent,
b) Connection to the agent (automatically assigned),
c) Description to invoke the tool (Logic App) – this is crucial for giving the agent the intent of when and how to use the Logic App, and what to expect from it. “Provide as much detail as possible about the circumstances in which the tool should be called by the agent.”

8] Once the tool is created, it will be visible in the Actions list and ready for use. (A small sketch of testing the trigger directly follows below.)
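To sanity-check a tool like this outside the agent, you can call the Logic App’s Request trigger directly. The sketch below is a hypothetical Python test call; the callback URL and the request body schema are placeholders that depend on how your trigger was generated.

```python
# Minimal sketch of what the agent's tool call boils down to: an HTTP POST to the
# Logic App's Request trigger. The URL and payload shape below are hypothetical.
import requests

LOGIC_APP_URL = (
    "https://prod-00.westus.logic.azure.com/workflows/<workflow-id>"
    "/triggers/manual/paths/invoke?api-version=2016-10-01"
    "&sp=%2Ftriggers%2Fmanual%2Frun&sv=1.0&sig=<sas-signature>"
)

# Request body matching the trigger's JSON schema (auto-generated if the tool
# was created from AI Foundry).
payload = {"project": "SmartPitch"}

response = requests.post(LOGIC_APP_URL, json=payload, timeout=30)
response.raise_for_status()

# The Response action should return structured JSON the agent can reason over,
# e.g. {"clients": ["Client A", "Client B"]}  (illustrative shape only)
print(response.json())
```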
To check whether the intent is being understood and the tool is being called, I have specifically instructed the agent to mention the name of the tool along with its result. As you can see in the screenshot, the tool is triggered successfully and the expected output is displayed.

Example Use Case: Smart Pitch Agent
Imagine your sales team uses an AI Foundry Agent (like “Smart Pitch Agent”) to create tailored pitches. By connecting Logic Apps, you can enable the agent to do exactly this, which we have already achieved in the above AI agent using the other Logic App tools. The aim is to expose each capability as a Logic App, and the agent calls them as tools in the conversation flow.

Benefits of This Approach
To conclude, by combining AI Foundry Agents with Azure Logic Apps, you unlock a powerful pattern: together, they create a flexible, extensible solution that evolves with your enterprise needs.

I hope you found this blog useful, and if you would like to discuss anything, you can reach out to us at transform@cloudfronts.com.
Smarter Data Integrations Across Regions with Dynamic Templates
At CloudFronts Technologies, we understand that growing organizations often operate across multiple geographies and business units. Whether you’re working with Dynamics 365 CRM or Finance & Operations (F&O), syncing data between systems can quickly become complex, especially when different legal entities follow different formats, rules, or structures. To solve this, our team developed a powerful yet simple approach: Dynamic Templates for Multi-Entity Integration.

The Business Challenge
When a global business operates in multiple regions (like India, the US, or Europe), each location may have different formats for project codes, financial categories, customer naming, or compliance requirements. Traditional integrations hardcode these rules, making them expensive to maintain and difficult to scale as your business grows.

Our Solution: Dynamic Liquid Templates
We built a flexible, reusable template system that automatically adjusts to each legal entity’s specific rules, without the need to rebuild integrations for each one. Here’s how it works: the integration selects the right template for each legal entity at runtime instead of relying on hardcoded mappings (a simplified illustration follows at the end of this post).

Why This Matters for Your Business
Real-World Success Story
One of our clients needed to integrate project data from CRM to F&O across three different regions. Instead of building three separate integrations, we implemented a single solution with dynamic templates. The result?

What Makes CloudFronts Different
At CloudFronts, we build future-ready integration frameworks. Our approach ensures you don’t just solve today’s problems; you prepare your business for tomorrow’s growth. We specialize in Microsoft Dynamics 365, Azure, and enterprise-grade automation solutions. “Smart integrations are the key to global growth. Let’s build yours.”

We hope you found this blog useful, and if you would like to discuss anything, you can reach out to us at transform@cloudfronts.com.
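The production solution uses Liquid templates inside the integration layer, but the underlying idea, resolving per-legal-entity rules at runtime instead of hardcoding them into each integration, can be illustrated with a small Python sketch. The entity codes, field names, and rules below are made up for illustration.

```python
# Illustrative sketch only: one integration, per-entity rules resolved at runtime.
ENTITY_RULES = {
    "IN01": {"project_prefix": "IN-PRJ", "currency": "INR"},
    "US01": {"project_prefix": "US-PRJ", "currency": "USD"},
    "EU01": {"project_prefix": "EU-PRJ", "currency": "EUR"},
}

def transform_project(crm_record: dict, legal_entity: str) -> dict:
    """Map a CRM project record to an F&O payload using entity-specific rules."""
    rules = ENTITY_RULES[legal_entity]          # one template/ruleset per legal entity
    return {
        "ProjectId": f"{rules['project_prefix']}-{crm_record['projectnumber']}",
        "Currency": rules["currency"],
        "LegalEntity": legal_entity,
        "Name": crm_record["name"],
    }

# Same integration code, different behaviour per region:
print(transform_project({"projectnumber": "1042", "name": "Rollout"}, "IN01"))
print(transform_project({"projectnumber": "1042", "name": "Rollout"}, "US01"))
```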
Enhancing Workflow Observability with OpenTelemetry in Azure Logic Apps
Struggling to Monitor Your Logic App Workflows End-to-End?
Azure Logic Apps are a powerful tool for automating business workflows across services. But as these workflows grow in size and complexity, so do the challenges in tracking, debugging, and optimizing them. The built-in monitoring options, while helpful, often don’t provide full visibility. This leaves teams scrambling to understand failures, bottlenecks, or performance issues. Here’s the good news: OpenTelemetry can change that. In this post, you’ll learn how to gain complete observability into your Logic Apps workflows using OpenTelemetry, the industry-standard framework for telemetry data.

Why Observability Matters in Azure Logic Apps
Logic Apps connect multiple services: APIs, databases, emails, on-prem systems, and more. But as you stitch these workflows together, it becomes harder to see what is happening end to end. While Azure provides diagnostics via Monitor and Application Insights, they often produce fragmented data. These tools lack native support for distributed tracing, which is essential when workflows span many components. That’s where OpenTelemetry helps. With it, you can gather traces, metrics, and logs. Together, these three “pillars of observability” give you actionable insights into your Logic App’s behavior.

What is OpenTelemetry?
OpenTelemetry is an open-source standard for collecting and exporting telemetry data. It supports multiple platforms (Azure, AWS, GCP) and can export data to tools like Application Insights, Jaeger, or Prometheus. It ensures a consistent observability strategy across your cloud-native systems, including Logic Apps.

How to Integrate OpenTelemetry with Azure Logic Apps
Azure Logic Apps don’t yet support OpenTelemetry out of the box. But with a smart setup, you can still plug them into an OpenTelemetry pipeline (a minimal instrumentation sketch appears at the end of this post).

🛠️ Step-by-Step Guide:

Real Example: Order Processing with Observability
Imagine an order-processing workflow spanning several services. Without OpenTelemetry, a failure means digging through fragmented logs; with OpenTelemetry, you get an end-to-end trace of every step. This means faster resolution, less guesswork, and a better customer experience.

✅ Use correlation IDs across services
✅ Add custom dimensions to enrich telemetry
✅ Configure sampling to control trace volume
✅ Monitor latency thresholds for each Logic App step
✅ Log business-critical metadata (e.g., Order ID, region)

Start Small, See Big Results
Observability is no longer optional. It’s a must-have for teams building scalable, resilient workflows. Here’s your action plan:
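As a rough illustration of the setup, here is a minimal Python sketch of the kind of instrumentation you would place in the custom code surrounding a Logic App (for example, an Azure Function the workflow calls), exporting spans to Application Insights via the Azure Monitor OpenTelemetry exporter. The connection string, service name, and attribute names are placeholders.

```python
# Minimal sketch: emit OpenTelemetry spans from code called by the Logic App and
# export them to Application Insights. Values in angle brackets are placeholders.
from opentelemetry import trace
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from azure.monitor.opentelemetry.exporter import AzureMonitorTraceExporter

provider = TracerProvider(resource=Resource.create({"service.name": "order-processing"}))
provider.add_span_processor(
    BatchSpanProcessor(
        AzureMonitorTraceExporter.from_connection_string(
            "InstrumentationKey=<key>;IngestionEndpoint=https://<region>.in.applicationinsights.azure.com/"
        )
    )
)
trace.set_tracer_provider(provider)
tracer = trace.get_tracer(__name__)

def process_order(order_id: str, correlation_id: str) -> None:
    # One span per workflow step; the correlation ID ties it back to the Logic App run.
    with tracer.start_as_current_span("validate-order") as span:
        span.set_attribute("order.id", order_id)
        span.set_attribute("workflow.correlation_id", correlation_id)
        # ... business logic ...

process_order("ORD-1001", "run-abc123")
```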
From Clean Data to Insights: Integrating Azure Databricks with Power BI and MLflow
Cleaning data is only half the journey. The real value comes when that clean, reliable data powers dashboards for decision-makers and machine learning models for prediction. In this post, we’ll explore two powerful integrations of Azure Databricks: Power BI for reporting and MLflow for machine learning.

Why These Integrations Matter
For growing businesses, these integrations create a bridge from cleaned data → insights → action.

Practical Example 1: Databricks + Power BI
👉 Result: Executives can open Power BI and instantly see up-to-date sales performance across geographies.

Practical Example 2: Databricks + MLflow
👉 Result: Your business can predict customer trends, forecast sales, or identify churn risk directly from cleaned Databricks data. (A minimal MLflow sketch follows at the end of this post.)

To conclude, with these integrations, organizations move from cleaned data → insights → intelligent action.

✅ Already cleaning data in Databricks? Try connecting your first Power BI dashboard today.
✅ Want to explore AI? Start logging experiments with MLflow to track and deploy models seamlessly.

We hope you found this blog useful, and if you would like to discuss anything, you can reach out to us at transform@cloudfronts.com.
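Here is a minimal sketch of what the MLflow side of Practical Example 2 might look like from a Databricks notebook: train a simple model on a cleaned table and log parameters, metrics, and the model. The table name, columns, and model choice are illustrative assumptions.

```python
# Minimal MLflow sketch (Databricks notebook): log a baseline churn model.
import mlflow
import mlflow.sklearn
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Cleaned data produced by the Databricks pipeline (hypothetical table and columns)
df = spark.table("sales_catalog.clean.customer_churn").toPandas()
X = df[["tenure_months", "monthly_spend", "support_tickets"]]
y = df["churned"]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

with mlflow.start_run(run_name="churn-baseline"):
    model = RandomForestClassifier(n_estimators=100, random_state=42)
    model.fit(X_train, y_train)

    mlflow.log_param("n_estimators", 100)
    mlflow.log_metric("accuracy", accuracy_score(y_test, model.predict(X_test)))
    mlflow.sklearn.log_model(model, "model")   # model is versioned and ready to deploy
```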
From Raw to Reliable: Cleaning Data at Scale with Azure Databricks
Are you struggling with messy spreadsheets full of duplicates, missing values, and inconsistent records? You’re not alone. Data professionals spend nearly 80% of their time cleaning and preparing data before any real analysis begins. The truth is simple: without clean data, business reports are unreliable, AI models fail, and decision-making slows down. In this blog, we’ll show you how Azure Databricks makes data cleaning easier, faster, and scalable, turning raw inputs into reliable insights with just a few lines of code.

Why Clean Data Matters
For business leaders, whether you’re a Team Lead, CTO, or CEO, clean data directly impacts growth. With Azure Databricks, you get a cloud-native, Spark-powered platform that handles big data at scale while integrating seamlessly with Azure Data Lake, Synapse, and Power BI.

Practical Example: Cleaning a Sales Dataset in Azure Databricks
Imagine you have a raw CSV file in Azure Data Lake with customer sales data. Issues in the data include duplicate rows, missing customer names, and missing sales values. (A PySpark sketch of the cleaning logic appears at the end of this post.)

Output after cleaning:

| CustomerID | Name    | Country | Sales |
|------------|---------|---------|-------|
| 101        | Alice   | USA     | 500   |
| 102        | Bob     | USA     | 300   |
| 103        | Unknown | UK      | 450   |
| 104        | David   | India   | 0     |

With just a few lines of Spark code, the dataset is now ready for reporting, visualization, or machine learning.

To conclude, clean data is the foundation of every reliable business insight. With Azure Databricks, you can automate messy, manual processes and create repeatable, scalable pipelines that keep your data reliable, no matter how fast your business grows.

✅ Start small: try building a simple cleaning pipeline in Azure Databricks today.
✅ Save time: focus more on insights, less on manual data prep.
✅ Scale with confidence: as your data grows, Databricks grows with you.

👉 Want to take the next step? Explore how Databricks integrates with Power BI for real-time dashboards or with MLflow for machine learning pipelines. Stay tuned for our next post where we’ll cover these use cases in detail.

✨ With Databricks, your journey from raw to reliable data starts today. Contact us today at Transform@cloudfronts.com to get started.

To learn more about the functionalities of Databricks and other Azure AI services, please refer to my other blogs:
1] The Hidden Cost of Bad Data: How Strong Data Management Unlocks Scalable, Accurate AI – CloudFronts
2] Automating Document Vectorization from SharePoint Using Azure Logic Apps and Azure AI Search – CloudFronts
3] Using Open AI and Logic Apps to develop a Copilot agent for Elevator Pitches & Lead Qualification – CloudFronts
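Below is a sketch of the PySpark cleaning logic described above, written to produce the kind of output shown in the table: duplicates removed, missing names filled with “Unknown”, and missing sales filled with 0. The storage path and target table name are placeholders.

```python
# Sketch of the cleaning step, assuming a raw CSV with columns
# CustomerID, Name, Country, Sales. Paths and exact rules may differ.
from pyspark.sql import functions as F

raw_path = "abfss://raw@<storageaccount>.dfs.core.windows.net/sales/customer_sales.csv"
df = spark.read.option("header", True).option("inferSchema", True).csv(raw_path)

clean_df = (
    df.dropDuplicates(["CustomerID"])                                   # remove duplicate customers
      .withColumn("Name", F.coalesce(F.col("Name"), F.lit("Unknown")))  # fill missing names
      .withColumn("Country", F.trim(F.col("Country")))                  # tidy inconsistent spacing
      .na.fill({"Sales": 0})                                            # missing sales -> 0
)

# Persist the cleaned data for reporting, BI, or ML (hypothetical target table)
clean_df.write.mode("overwrite").format("delta").saveAsTable("sales.clean_customer_sales")
clean_df.show()
```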
Setting Up Unity Catalog in Databricks for Centralized Data Governance
The fastest way to lose control of enterprise data? Managing governance separately across workspaces. Unity Catalog solves this with one centralized layer for security, lineage, and discovery. Data governance is crucial for any organization looking to manage and secure its data assets effectively. Databricks’ Unity Catalog is a centralized solution that provides a unified interface for managing access control, auditing, data lineage, and discovery. This blog will guide you through the process of setting up Unity Catalog in your Databricks workspace.

What is Unity Catalog?
Unity Catalog is Databricks’ answer to centralized data governance. It enables organizations to enforce standards-compliant security policies, apply fine-grained access controls, and visualize data lineage across multiple workspaces. It ensures compliance and promotes efficient data management.

Key Features:
1] Standards-Compliant Security: ANSI SQL-based access policies that apply across all workspaces in a region.
2] Fine-Grained Access Control: Support for row- and column-level permissions.
3] Audit Logging: Tracks who accessed what data and when.
4] Data Lineage: Provides visualization of data flow and dependencies.

Unity Catalog Object Hierarchy
Before diving into the setup, it’s important to understand the hierarchical structure of Unity Catalog:
1] Catalogs: The top-level container (e.g., Production, Development) that represents an organizational unit or environment.
2] Schemas: Logical groupings of tables, views, and AI models within a catalog.
3] Tables and Views: These include managed tables fully governed by Unity Catalog and external tables referencing existing cloud storage.

Here is the procedure to set up a Unity Catalog metastore in association with Azure Storage, as I have done for one of our products (SmartPitch Sales & Marketing Agent):

1] First, create a storage account with the primary service being “Azure Blob Storage or Azure Data Lake Storage Gen2”. Performance and redundancy can be chosen based on the requirement for which the Databricks service is being used. Here, for my Mosaic AI Agent, I have used locally redundant storage and Data Lake Gen2.

2] Once the storage account is created, ensure that you have enabled “Hierarchical Namespace”. When creating a Unity Catalog metastore with Azure Blob Storage, Hierarchical Namespace (HNS) is required because Unity Catalog needs:
a] A folder-like structure to organize catalogs, schemas, and tables.
b] Atomic operations (rename, move, delete) on directories and files.
c] POSIX-style access controls for fine-grained permissions.
d] Faster metadata handling for lineage and governance.
HNS turns Azure Blob Storage into ADLS Gen2, which supports these features.

3] Upload any raw/unclean files you will need in Databricks to your metastore folder in the blob storage.

4] Create a Unity Catalog connector (Access Connector for Azure Databricks) in the Azure Portal and assign it the “Storage Blob Data Contributor” role on the storage account.

5] Assign CORS (Cross-Origin Resource Sharing) settings for that storage account. Why this is necessary: without configuring CORS, Databricks cannot communicate with your storage container to read/write managed tables, schema metadata, or logs.

6] Generate a SAS token.

7] Navigate to your workspace and select “Manage Account” (this should be done by the account admin).

8] Select the Catalog tab on the left and then click “Create Metastore”.

9] Assign a name, a region (same as the workspace), the path to the storage account, and the connector ID.
10] Once the metastore is created, assign it to a workspace.

11] Once this is done, the catalogs, schemas, and tables within it can be created (a minimal sketch of this step appears at the end of this post).

How does Unity Catalog differ from Hive Metastore?

| Feature | Hive Metastore | Unity Catalog |
|---|---|---|
| Scope | Workspace or cluster-specific | Centralized, spans multiple workspaces and regions |
| Architecture | Single metastore tied to Spark/Hive | Cloud-native service integrated with Databricks |
| Object Hierarchy | Databases → Tables → Partitions | Catalogs → Schemas → Tables/Views/Models |
| Data Assets Supported | Tables, views | Tables, views, files, ML models, dashboards |
| Security | Basic GRANT/DENY at database/table level | Fine-grained, ANSI SQL-based (catalog, schema, table, column, row) |
| Lineage | Not available | Built-in lineage and impact analysis |
| Auditing | Limited or external | Integrated audit logs across workspaces |
| Storage Management | Points to storage locations; no governance | Manages external and managed tables with governance |
| Cloud Integration | Primarily cluster storage or external paths | Secure integration with ADLS Gen2, S3, GCS |
| Permissions Model | Spark SQL statements | Attribute- and role-based access, unified policies |
| Use Cases | Basic metadata store for Spark/Hive workloads | Enterprise-wide data governance, sharing, and compliance |

To conclude, Unity Catalog is the next-generation governance and metadata solution for Databricks, designed to give organizations a single, secure, and scalable way to manage data and AI assets. Unlike the older Hive Metastore, it centralizes control across multiple workspaces, supports fine-grained access policies, delivers built-in lineage and auditing, and integrates seamlessly with cloud storage like Azure Data Lake, S3, or GCS.

When setting it up, key steps include:
1] Creating a metastore and linking it to your workspaces.
2] Enabling hierarchical namespace on Azure storage for folder-level security and operations.
3] Configuring CORS to allow Databricks domains to interact with storage.
4] Defining catalogs, schemas, and tables for structured governance.

By implementing Unity Catalog, you ensure stronger security, better compliance, and faster data discovery, making your Databricks environment enterprise-ready for analytics and AI.

Business Outcomes of Unity Catalog
Why now? As data volumes and regulatory requirements grow, organizations can no longer rely on fragmented or legacy governance tools. Unity Catalog offers a future-proof foundation for unified data management and AI governance, essential for any modern data-driven enterprise.

At CloudFronts, we help enterprises implement and optimize Unity Catalog within Databricks to ensure secure, compliant, and scalable governance for enterprise data. Book a consultation with our experts to explore how Unity Catalog can simplify compliance and boost productivity for your teams. Contact us today at Transform@cloudfronts.com to get started.

To learn more about the functionalities of Databricks and other Azure AI services, please refer to my other blogs:
1] The Hidden Cost of Bad Data: How Strong Data Management Unlocks Scalable, Accurate AI – CloudFronts
2] Automating Document Vectorization from SharePoint Using Azure Logic Apps and Azure AI Search – CloudFronts
3] Using Open AI and Logic Apps to develop a Copilot agent for Elevator Pitches & Lead Qualification – CloudFronts
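As referenced at step 11, here is a minimal sketch, run from a Databricks notebook attached to a Unity Catalog-enabled workspace, of creating a catalog, schema, and table, plus a couple of the fine-grained grants mentioned earlier. The catalog, schema, table, and group names are illustrative.

```python
# Minimal sketch for step 11: create governed objects under the new metastore.
spark.sql("CREATE CATALOG IF NOT EXISTS smartpitch")
spark.sql("CREATE SCHEMA IF NOT EXISTS smartpitch.sales")

spark.sql("""
    CREATE TABLE IF NOT EXISTS smartpitch.sales.case_studies (
        client_name STRING,
        industry    STRING,
        summary     STRING
    )
""")

# ANSI SQL-based, fine-grained access control, e.g. read access for an analyst group:
spark.sql("GRANT USE CATALOG ON CATALOG smartpitch TO `analysts`")
spark.sql("GRANT USE SCHEMA ON SCHEMA smartpitch.sales TO `analysts`")
spark.sql("GRANT SELECT ON SCHEMA smartpitch.sales TO `analysts`")
```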
Connecting Your MCP Server to Microsoft Copilot Studio – Part 2
In Part 1, we built a simple MCP server in TypeScript that exposed a “getWeather” tool. Now, let’s take the next step: connecting our MCP server to Microsoft Copilot Studio so that Copilot agents can call it directly. This section covers the steps below.

Step 1: Publish Your MCP Server to Azure
To make your MCP server accessible to Copilot Studio, you’ll need to host it online. There are multiple ways to deploy it: Azure App Service, Azure Container Apps, or even Azure Functions if you prefer serverless. For example, use Azure App Service, then test with curl (or the Python equivalent sketched at the end of this post) to ensure it responds with MCP-compatible JSON.

Step 2: Create a New Copilot in Copilot Studio

Step 3: Add Knowledge Sources
Optionally, you can enrich your Copilot by adding knowledge sources. This gives your Copilot baseline knowledge to answer broader questions, while the MCP server handles specific tasks (like fetching live weather data).

Step 4: Create a Custom Connector in Dataverse
To let Copilot Studio talk to our MCP server, we need a custom connector inside Dataverse/CRM.

Step 5: Add the Custom Connector to Copilot Studio
You’ll now see the MCP server in the Tools section of your Copilot. To test the setup, let’s ask Copilot: “What’s the current weather in Mumbai?” On the first attempt, Copilot will prompt you to establish a connection. Simply open the Connection Manager, click Connect, authorize the link to your MCP server, and click Retry in the Test window of your Copilot. Once connected, Copilot will fetch the live weather details for Mumbai directly from your MCP server.

And just like that, your MCP server is live and fully integrated. It can now provide real-time weather updates for any city mentioned in your conversation with Copilot. You can try out different variations of questions or phrasings; Copilot will intelligently interpret your request, extract the city name, and seamlessly call the MCP server to deliver accurate weather details.

Beyond Weather: Business Integrations
The same process works for enterprise systems. For example, instead of getWeather, you could expose tools for your own business systems, such as order lookups or support ticket creation. By publishing these tools via MCP, your Copilot becomes a true enterprise assistant, capable of pulling structured business data and triggering workflows on demand.

We hope you found this blog useful, and if you would like to discuss anything, you can reach out to us at transform@cloudfronts.com.
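As mentioned in Step 1, you can verify the deployed endpoint with curl; the sketch below is an equivalent check in Python. It assumes a streamable-HTTP MCP server exposed at a /mcp path, which may differ from your Part 1 setup, so treat the URL and payload as placeholders.

```python
# Quick reachability check against the deployed MCP server (hypothetical URL/path).
import requests

MCP_URL = "https://<your-app-service>.azurewebsites.net/mcp"

# Ask the server which tools it exposes; a healthy server should list "getWeather".
# Note: strict servers may require an "initialize" request first; this is only a
# rough reachability check.
payload = {"jsonrpc": "2.0", "id": 1, "method": "tools/list", "params": {}}
headers = {
    "Content-Type": "application/json",
    "Accept": "application/json, text/event-stream",
}

response = requests.post(MCP_URL, json=payload, headers=headers, timeout=30)
response.raise_for_status()
print(response.text)   # expect an MCP-compatible JSON-RPC result with the tool list
```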
Simplifying File-Based Integrations for Dynamics 365 with Azure Blob and Logic Apps
Integrating external systems with Dynamics 365 often involves exchanging files like CSVs or XMLs between platforms. Traditionally, these integrations require custom code, complex workflows, or manual intervention, which increases maintenance overhead and reduces reliability. Thankfully, leveraging Azure Blob Storage and Logic Apps can streamline file-based integrations, making them more efficient, scalable, and easier to maintain.

Why File-Based Integrations Are Still Common
While APIs are the preferred method for system integration, file-based methods remain popular in many scenarios. The challenge comes in orchestrating file movement, transforming data, and ensuring it reaches Dynamics 365 reliably.

Enter Azure Blob Storage
Azure Blob Storage is a cloud-based object storage solution designed for massive scalability. When used in file-based integrations, it acts as a reliable intermediary between the source system and Dynamics 365.

Orchestrating with Logic Apps
Azure Logic Apps is a low-code platform for building automated workflows. It’s particularly useful for integrating Dynamics 365 with file sources.

Real-Time Example: Automating Sales Order Uploads
Traditional approach: order files are handled and keyed in manually. Solution using Azure Blob and Logic Apps: stage the files in a blob container and let a Logic App pick them up and push the orders into Dynamics 365 (a minimal sketch of the hand-off appears at the end of this post). Outcome: less manual processing, fewer errors, and a reliable data flow into Dynamics 365.

Best Practices
Benefits
To conclude, file-based integrations no longer need to be complicated or error-prone. By leveraging Azure Blob Storage for reliable file handling and Logic Apps for automated workflows, Dynamics 365 integrations become simpler, more maintainable, and scalable. The real-time sales order example shows that businesses can save time, reduce errors, and ensure data flows seamlessly between systems, allowing teams to focus on their core operations rather than manual file processing.

We hope you found this blog useful, and if you would like to discuss anything, you can reach out to us at transform@cloudfronts.com.
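For the sales order example, the hand-off point is simply a file landing in the container the Logic App watches. Here is a minimal Python sketch of that step; the connection string, container name, and file name are placeholders.

```python
# Minimal sketch: the external system (or a scheduled job) stages the CSV in the
# blob container that the Logic App monitors.
from azure.storage.blob import BlobServiceClient

CONNECTION_STRING = "<storage-account-connection-string>"
CONTAINER = "incoming-sales-orders"

blob_service = BlobServiceClient.from_connection_string(CONNECTION_STRING)
container_client = blob_service.get_container_client(CONTAINER)

with open("sales_orders_2024-06-01.csv", "rb") as data:
    # The Logic App's blob trigger picks this up, parses the rows,
    # and creates the sales orders in Dynamics 365.
    container_client.upload_blob(
        name="sales_orders_2024-06-01.csv", data=data, overwrite=True
    )

print("File staged for the Logic App to process.")
```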
Automating Document Vectorization from SharePoint Using Azure Logic Apps and Azure AI Search
In modern enterprises, documents stored across platforms like SharePoint often remain underutilized due to the lack of intelligent search capabilities. What if your organization could automatically extract meaning from those documents, turning them into searchable vectors for advanced retrieval systems? That’s exactly what we’ve achieved by integrating Azure Logic Apps with Azure AI Search.

Workflow Overview
Whenever a user uploads a file to a designated SharePoint folder, a scheduled Azure Logic App is triggered to copy it to Blob Storage. Once stored, a scheduled Azure Cognitive Search indexer kicks in and runs the skillset pipeline that vectorizes the content.

Technologies / resources used:
-> SharePoint: A common document repository for enterprise users, ideal for collaborative uploads.
-> Azure Logic Apps: Provides low-code automation to monitor SharePoint for changes and sync files to Blob Storage. It ensures a reliable, scheduled trigger mechanism with minimal overhead.
-> Blob Storage: Serves as the staging ground where documents are centrally stored for indexing; cheaper and more scalable than relying solely on SharePoint connectors.
-> Azure AI Search (Cognitive Search): The intelligence layer that runs a skillset pipeline to extract, transform, and vectorize the content, enabling semantic search, multimodal RAG (Retrieval Augmented Generation), and other AI-enhanced scenarios.

Why Not Vectorize Directly from SharePoint?
References:
1. https://learn.microsoft.com/en-us/azure/search/search-howto-index-sharepoint-online
2. https://learn.microsoft.com/en-us/azure/search/search-howto-indexing-azure-blob-storage

How to achieve this?

Stage 1: Logic App to sync SharePoint files to blob storage
First, create a designated SharePoint directory to upload the documents required for vectorization. Then create the Logic App to replicate the files, along with their format and properties, to the associated blob storage:
1] Assign the site address and the directory name where the documents are uploaded in SharePoint, in the trigger action “When an item is created or modified”.
2] Assign a recurrence frequency, start time, and time zone to check for new documents and keep the blob container updated.
3] Add the action “Get file content using path” and dynamically provide the full path (including the file extension) from the trigger.
4] Finally, add an action to create blobs in the designated container that will be vectorized. Provide the storage account name, the directory path, the blob name (dynamically set to the file name with extension from the trigger), and the blob content (from the “Get file content” action).
5] On successfully saving and running this Logic App, either manually or on trigger, the files are replicated in their exact form to the blob storage.

Stage 2: Azure AI Search resource to vectorize the files in blob storage
In the Azure Portal, search for the Azure AI Search service, provide the necessary details, and select a pricing tier based on your requirement. Once the resource is successfully created, select “Import and vectorize data”. From the two options, RAG and Multimodal RAG Index, select the latter. RAG combines a retriever (to fetch relevant documents) with a generative language model (to generate answers) using text-only data. Multimodal RAG extends the RAG architecture to include multiple data types such as text, images, tables, PDFs, diagrams, audio, or video.
Workflow: Now follow the steps and provide the necessary details for the index creation.
1] Enable deletion tracking to remove the records of deleted documents from the index.
2] Provide a Document Intelligence resource to enable OCR and to get location metadata for multiple document types.
3] Select image verbalization (to verbalize text in images) or multimodal embedding to vectorize the whole image.
4] Assign the LLM model for generating the embeddings for the text/images.
5] Provide an image output location to store images extracted from the files.
6] Assign a schedule to refresh the indexer and keep the search index up to date with new documents.

Once the index is successfully created, search keywords in the Search Explorer of the index to verify the vectorization; results are returned based on their relevance and score/distance to the user’s search query. (A small query sketch using the Search SDK follows at the end of this post.)

Let us test this index in a custom Copilot agent by importing it as an Azure AI Search knowledge source. When fetching document-specific information, the index is searched for the most appropriate content, and the result is rendered in a readable format by generative AI.

We hope you found this blog useful, and if you would like to discuss anything, you can reach out to us at transform@cloudfronts.com.
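As a follow-up to verifying the index in Search Explorer, here is a small sketch that queries the same index with the azure-search-documents SDK. The endpoint, index name, key, and field names are placeholders that depend on how your index and skillset were created.

```python
# Small sketch: verify the vectorized index programmatically instead of in the portal.
from azure.core.credentials import AzureKeyCredential
from azure.search.documents import SearchClient

search_client = SearchClient(
    endpoint="https://<your-search-service>.search.windows.net",
    index_name="<your-index-name>",
    credential=AzureKeyCredential("<query-api-key>"),
)

# Keyword query (hybrid/semantic options depend on how the index was configured)
results = search_client.search(search_text="case study manufacturing", top=5)

for doc in results:
    # Field names depend on the skillset output; a title/chunk field is typical.
    print(doc.get("title"), "| score:", doc["@search.score"])
```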