Category Archives: AI
Creating an MCP Server Using TypeScript
As artificial intelligence continues to transform how we build and interact with software, AI agents are emerging as the new interface for users. Instead of clicking through menus or filling out forms, users can simply instruct agents to fetch reports, analyze datasets, or trigger workflows. The challenge is: how do these agents access external tools, APIs, or enterprise systems in a secure and standardized way?

This is where the Model Context Protocol (MCP) comes into play. MCP is a protocol designed to connect AI agents to tools in a structured, consistent manner. Instead of building ad-hoc integrations for each agent and each tool, developers can expose their tools once via MCP, making them discoverable and callable by any MCP-compliant AI agent.

In this article, we'll explore:
a) What an MCP server is and how it works
b) How MCP uses JSON-RPC 2.0 as its communication layer
c) How MCP solves the M×N integration problem
d) How to implement a simple Weather Data MCP server in TypeScript
e) How to test it locally using Postman or cURL

What is an MCP Server?
An MCP server is an HTTP or WebSocket endpoint that follows the Model Context Protocol, allowing AI systems to query, interact with, and call tools hosted by developers.

MCP consists of several components:
- Base Protocol – Core JSON-RPC message types
- Lifecycle Management – Connection initialization, capability negotiation, and session handling
- Server Features – Resources, prompts, and tools exposed by servers
- Client Features – Sampling and root directory lists provided by clients
- Utilities – Cross-cutting features such as logging or argument completion

All MCP implementations must support the Base Protocol and Lifecycle Management. Other features are optional depending on the use case.

Architecture: JSON-RPC 2.0 in MCP
MCP messages follow the JSON-RPC 2.0 specification, a stateless, lightweight remote procedure call protocol that uses JSON for request and response payloads.

Request format:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "methodName",
  "params": { "key": "value" }
}
```

- id is required, must be a string or number, and must be unique within the session.
- method specifies the operation.
- params contains the method arguments.

Response format:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "result": { "key": "value" }
}
```

Or, if an error occurs:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "error": { "code": -32603, "message": "Internal error" }
}
```

The id must match the request it is responding to.

The M×N Problem and How MCP Solves It
Without MCP, connecting M AI agents to N tools requires M×N separate integrations. This is inefficient and unscalable. With MCP, each agent implements a single MCP client, and each tool implements a single MCP server. Agents and tools can then communicate through a shared protocol, reducing integration effort from M×N to M+N.

Project Setup
Create the project directory:

```bash
mkdir weather-mcp-sdk
cd weather-mcp-sdk
npm init -y
```

Install dependencies:

```bash
npm install @modelcontextprotocol/sdk zod axios express
npm install --save-dev typescript ts-node @types/node @types/express
npx tsc --init
```

Implementing the Weather MCP Server
We'll use the WeatherAPI to fetch real-time weather data for a given city and expose it via MCP as a getWeather tool.
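Before looking at the implementation, it can help to pin down the shape of the WeatherAPI response the tool will read. The interface below is a minimal sketch covering only the fields used in this article (the real current.json payload contains many more); the interface name is our own and is purely optional typing for the axios call that follows.

```typescript
// Minimal sketch of the WeatherAPI current.json fields used by the getWeather
// tool below. The real response contains many more fields; this interface is
// optional and its name is ours, not part of the WeatherAPI SDK.
interface WeatherApiCurrentResponse {
  location: {
    name: string;    // e.g. "London"
    country: string; // e.g. "United Kingdom"
  };
  current: {
    temp_c: number;              // temperature in °C
    condition: { text: string }; // e.g. "Partly cloudy"
  };
}
```

If you adopt it, you can type the call in the next section as axios.get<WeatherApiCurrentResponse>(...) to get compile-time checks on the fields the tool reads.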
src/index.ts:

```typescript
import express from "express";
import axios from "axios";
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StreamableHTTPServerTransport } from "@modelcontextprotocol/sdk/server/streamableHttp.js";
import { z } from "zod";

const API_KEY = "YOUR_WEATHER_API_KEY"; // replace with your API key

// Builds a fresh MCP server instance exposing a single getWeather tool.
function getServer() {
  const server = new McpServer({
    name: "Weather MCP Server",
    version: "1.0.0",
  });

  server.tool(
    "getWeather",
    { city: z.string() }, // input schema: the city to look up
    async ({ city }) => {
      const res = await axios.get("http://api.weatherapi.com/v1/current.json", {
        params: { key: API_KEY, q: city, aqi: "no" },
      });
      const data = res.data;
      return {
        content: [
          {
            type: "text",
            text: `Weather in ${data.location.name}, ${data.location.country}: ${data.current.temp_c}°C, ${data.current.condition.text}`,
          },
        ],
      };
    }
  );

  return server;
}

const app = express();
app.use(express.json());

// Stateless handling: a new server and transport are created per request.
app.post("/mcp", async (req, res) => {
  try {
    const server = getServer();
    const transport = new StreamableHTTPServerTransport({
      sessionIdGenerator: undefined, // stateless mode: no session tracking
    });
    res.on("close", () => {
      transport.close();
      server.close();
    });
    await server.connect(transport);
    await transport.handleRequest(req, res, req.body);
  } catch (error) {
    if (!res.headersSent) {
      res.status(500).json({
        jsonrpc: "2.0",
        error: { code: -32603, message: "Internal server error" },
        id: null,
      });
    }
  }
});

const PORT = 3000;
app.listen(PORT, () => {
  console.log(`MCP Stateless HTTP Server running at http://localhost:${PORT}/mcp`);
});
```

Testing the MCP Server
Since MCP requires specific request formats and content negotiation, use the Content-Type: application/json and Accept: application/json, text/event-stream headers.

Step 1: Initialize

```bash
curl -X POST http://localhost:3000/mcp \
  -H "Content-Type: application/json" \
  -H "Accept: application/json, text/event-stream" \
  -d '{
    "jsonrpc": "2.0",
    "id": 1,
    "method": "initialize",
    "params": {
      "protocolVersion": "2025-06-18",
      "capabilities": { "elicitation": {} },
      "clientInfo": { "name": "example-client", "version": "1.0.0" }
    }
  }'
```

Example response:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "result": {
    "protocolVersion": "2025-06-18",
    "capabilities": { "tools": { "listChanged": true } },
    "serverInfo": { "name": "Weather MCP Server", "version": "1.0.0" }
  }
}
```

Step 2: Call the getWeather Tool

```bash
curl -X POST http://localhost:3000/mcp \
  -H "Content-Type: application/json" \
  -H "Accept: application/json, text/event-stream" \
  -d '{
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {
      "name": "getWeather",
      "arguments": { "city": "London" }
    }
  }'
```

Example response:

```json
{
  "jsonrpc": "2.0",
  "id": 2,
  "result": {
    "content": [
      {
        "type": "text",
        "text": "Weather in London, United Kingdom: 21°C, Partly cloudy"
      }
    ]
  }
}
```

To conclude, we have built an MCP-compliant server in TypeScript that exposes a weather-fetching tool over HTTP. This simple implementation demonstrates:
- How to define and register tools with MCP
- How JSON-RPC 2.0 structures communication
- How to make your server compatible with any MCP-compliant AI agent

From here, you … Continue reading Creating an MCP Server Using TypeScript
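As an addendum to the testing steps above, the same Weather MCP server can also be exercised programmatically rather than with hand-crafted cURL payloads. The sketch below uses the Client and StreamableHTTPClientTransport classes from the same @modelcontextprotocol/sdk package; it is an illustrative sketch rather than part of the original walkthrough and assumes the server above is running on http://localhost:3000/mcp.

```typescript
// test-client.ts: minimal sketch that connects to the Weather MCP server,
// lists its tools, and calls getWeather. Assumes the server from this
// article is running locally on port 3000.
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StreamableHTTPClientTransport } from "@modelcontextprotocol/sdk/client/streamableHttp.js";

async function main() {
  const client = new Client({ name: "example-client", version: "1.0.0" });
  const transport = new StreamableHTTPClientTransport(
    new URL("http://localhost:3000/mcp")
  );

  // connect() performs the initialize handshake shown in Step 1.
  await client.connect(transport);

  // Discover the tools the server exposes (should include getWeather).
  const tools = await client.listTools();
  console.log(tools.tools.map((t) => t.name));

  // Equivalent of the tools/call request in Step 2.
  const result = await client.callTool({
    name: "getWeather",
    arguments: { city: "London" },
  });
  console.log(JSON.stringify(result.content, null, 2));

  await client.close();
}

main().catch(console.error);
```

With the server running, something like npx ts-node test-client.ts should print the tool list and the same weather text returned in the cURL example.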
Is Your Tech Stack Holding You Back from AI Success?
The AI Race Has Begun, but Most Businesses Are Crawling
Artificial Intelligence (AI) is no longer experimental; it's operational. Across industries, companies are trying to harness it to improve decision-making, automate intelligently, and gain a competitive edge. But here's the problem: only 48% of AI projects ever make it to production (Gartner, 2024). It's not because AI doesn't work. It's because most tech stacks aren't built to support it.

The Real Bottleneck Isn't AI. It's Your Foundation
You may have data. You may even have AI tools. But if your infrastructure isn't AI-ready, you'll stay stuck in POCs that never scale.

Common signs you're blocked:

AI success starts beneath the surface, in your data pipelines, infrastructure, and architecture. Most machine learning systems fail not because of poor models, but because of broken data and infrastructure pipelines.

What Does an AI-Ready Tech Stack Look Like?
Being AI-ready means preparing your infrastructure, data, and processes to fully support AI capabilities. This is not a checklist or a quick fix. It is a structured alignment of technology and business goals. A truly AI-ready stack can:

Infrastructure
- Traditional stack: On-premises servers, outdated VMs
- AI-ready stack: Azure Kubernetes Service (AKS), Azure Functions, Azure App Services; alternatives: AWS EKS, Lambda; GCP GKE, Cloud Run
- Why it matters: AI workloads need scalable, flexible compute with container orchestration and event-driven execution

Data Handling
- Traditional stack: Siloed databases, batch ETL jobs
- AI-ready stack: Azure Data Factory, Power Platform connectors, Azure Event Grid, Synapse Link; alternatives: AWS Glue, Kinesis; GCP Dataflow, Pub/Sub
- Why it matters: Enables real-time, consistent, and automated data flow for training and inference

Storage & Retrieval
- Traditional stack: Relational DBs, Excel, file shares
- AI-ready stack: Azure Data Lake Gen2, Azure Cosmos DB, Microsoft Fabric OneLake, Azure AI Search (with vector search); alternatives: AWS S3, DynamoDB, OpenSearch; GCP BigQuery, Firestore
- Why it matters: Modern AI needs scalable object storage and vector DBs for unstructured and semantic data

AI Enablement
- Traditional stack: Isolated scripts, manual ML
- AI-ready stack: Azure OpenAI Service, Azure Machine Learning, Copilot Studio, Power Platform AI Builder; alternatives: AWS SageMaker, Bedrock; GCP Vertex AI, AutoML; OpenAI, Hugging Face
- Why it matters: Simplifies AI adoption with ready-to-use models, tools, and MLOps pipelines

Security & Governance
- Traditional stack: Basic firewall rules, no audit logs
- AI-ready stack: Microsoft Entra (Azure AD), Microsoft Purview, Microsoft Defender for Cloud, Compliance Manager, Dataverse RBAC; alternatives: AWS IAM, Macie; GCP Cloud IAM, DLP API
- Why it matters: Ensures responsible AI use, regulatory compliance, and data protection

Monitoring & Ops
- Traditional stack: Manual monitoring, limited observability
- AI-ready stack: Azure Monitor, Application Insights, Power Platform Admin Center, Purview Audit Logs; alternatives: AWS CloudWatch, X-Ray; GCP Ops Suite; Datadog, Prometheus
- Why it matters: AI success depends on observability across infrastructure, pipelines, and models

In Summary: AI-readiness is not a buzzword. Not a checklist. It's an architectural reality.

Why This Matters Now
AI is moving fast, and so are your competitors. But success doesn't depend on building your own LLM or becoming a data science lab. It depends on whether your systems are ready to support intelligence at scale. If your tech stack can't deliver real-time data, run scalable AI, and ensure trust, your AI ambitions will stay just that: ambitions.

How We Help
We work with organizations across industries to:

Whether you're just starting or scaling AI across teams, we help build the architecture that enables action.
Because AI success isn’t about plugging in a tool. It’s about building a foundation where intelligence thrives. I hope you found this blog useful, and if you would like to discuss anything, you can reach out to us at transform@cloudfronts.com.
Why Project-Based Firms Should Embrace AI Now (Not Later)
In project-based businesses, reporting is the final word. It tells you what was planned, what happened, where you made money, and where you lost it. But ask any project manager or CEO what they really think about project reporting today, and you'll hear this: "It's late. It's manual. It's siloed. And by the time I see it, it's too late to act."

This is exactly why AI is no longer optional; it's essential. Whether you're in construction, consulting, IT services, or professional engineering, AI can elevate your project reporting from a reactive chore to a strategic asset. Here's how.

The Problem with Traditional Reporting
Most reporting today involves:

Enter AI: The Game-Changer for Project Reporting
AI isn't about replacing humans; it's about augmenting your decision-making. When embedded in platforms like Dynamics 365 Project Operations and Power BI, AI becomes the project manager's smartest analyst and the CEO's most trusted advisor. Here's what that looks like:

Imagine your system telling you: "Project Alpha is likely to overrun budget by 12% based on current burn rate and resource allocation trends." AI models analyse historical patterns, resource velocity, and task progress to predict issues weeks in advance. That's no longer science fiction; it's happening today with AI-enhanced Power BI and Copilot in Dynamics 365.

Instead of navigating dashboards, just ask: "Show me projects likely to miss deadlines this month." With Copilot in Dynamics 365, you get answers in seconds, with charts and supporting data. No need to wait for your analyst or export 10 spreadsheets.

AI can clean, match, and validate data coming from:

No more mismatched formats or chasing someone to update a spreadsheet. AI ensures your reports are built on clean, real-time data, not assumptions.

You don't need to check 12 dashboards daily. With AI, set intelligent alerts:

These alerts are not static rules; they are learned over time based on project patterns and exceptions.

To conclude, for CEOs and PMs alike: we can show you how AI and Copilot in Dynamics 365 can simplify reporting, uncover risks, and help your team act with confidence. Start small, maybe with reporting or forecasting, but start now. I hope you found this blog useful, and if you would like to discuss anything, you can reach out to us at transform@cloudfronts.com.
Getting Your Data Ready and Adopting the Right AI Framework for Your Organization
CloudFronts is hosting an event focused on Data Readiness + AI Adoption on September 3rd, 2025, at the Microsoft Dallas office (7000 State Highway 161, Building LC1, Irving, TX 75039, USA) from 8:30 AM to 11:30 AM.

Organizations want to adopt AI but are not sure how to do this effectively. There are a lot of AI products and technologies, and it's difficult to know what to adopt for current and future needs. This pushes companies into an analysis mode that can feel exhausting. As a Data + AI partner for Microsoft and Databricks, CloudFronts has invested a lot of time and effort to test out various AI technologies and platforms through our own learnings and customer use cases.

Join Anil Shah (CEO, CloudFronts), Marie Wiese (Founder, Marketing Copilot), Priyesh Wagh (Microsoft MVP & Practice Manager), and Kevin Dickinson (Director of Sales, North America, CloudFronts). The objective of this event is to help technical and business decision makers in their AI adoption and data readiness journey. We'll share our journey and evaluations of AI platforms like Copilot and Azure AI Foundry and data platforms like Databricks, walk through our recommendations, and take deep dives through actual use cases.

Register here, and join us for an engaging morning as you take the next step in preparing your enterprise and data readiness for AI adoption.

"Discover How We've Enabled Businesses Like Yours – Explore Our Client Testimonials!"

About CloudFronts
CloudFronts is a global AI-First Microsoft Solutions Partner for Business Applications, Data & AI, helping teams and organizations worldwide solve their complex business challenges with Microsoft Cloud, AI, and Azure Integration Services. We have a global presence with offices in the U.S., Singapore, and India. Since 2012, CloudFronts has empowered 200+ small and medium-sized clients across North America, Europe, Australia, MENA, the Maldives, and India, with diverse experience in sectors ranging from Professional Services and Financial Services to Manufacturing, Retail, Logistics/SCM, and Non-profits.

Register here: Join us on September 3rd at the Microsoft Dallas office for an engaging morning focused on helping you take the next step in preparing your enterprise and your data for successful AI adoption. For any queries, reach us at transform@cloudfronts.com
Building the AI Bridge: How CloudFronts Helps You Connect Systems That Talk to Each Other
When we say we're building a bridge, what does that imply? That something isn't connected. And what exactly isn't connected? AI itself and your systems. What this means is that although your AI can access your systems to derive information, it is still unreliable and slow.

What is needed for AI to be successful? For AI to be successful, below is what to avoid:

To eliminate the above, we must have a 'catalog' layer which houses all business data together so that a common vocabulary is established between systems. AI then pulls from this 'data catalog' to perform agentic actions. The diagram below best explains, at a high level, how this looks:

And all of this is defined by how well the integrations between these systems are established.

How Can CloudFronts Help?
CloudFronts has deep integration expertise, connecting cloud-based applications with each other with the below in mind:

Oftentimes, we find ready-made, plug-and-play, cloud-based integration solutions which come with their own hefty licensing that keeps going up every few years. Using such integration tools not only affects cash flow but also adds a layer of opaqueness, as we don't control the flow of integration and cannot granularize it beyond what's offered. Custom integration gives you better control and analytics, which ready-made solutions can't match. Here's a CloudFronts case study published by Microsoft, wherein we connected multiple systems for our customer, driving data and insights.

To conclude, the AI agents meant for your organization aren't optimized to work right away. This disconnect needs to be engineered, just like any other implementation project today. This gap is real and must be bridged by a data catalog (such as Unity Catalog) and integrations, and CloudFronts can help bridge it and make AI work for your organization, helping you optimize cash flow against rising costs. We hope you found this blog useful, and if you would like to discuss anything, you can reach out to us at transform@cloudfronts.com.
CloudFronts Expands Partnership with Databricks to Accelerate AI-Driven Growth
CloudFronts announced the expansion of its partnership with Databricks, the Data and AI company. This collaboration will empower customers to drive growth by identifying high-impact areas for AI, starting with a robust data catalog that organizes information, automates tasks, enhances accuracy, and delivers more personalized, data-driven experiences.

The Databricks Data Intelligence Platform democratizes access to analytics and intelligent applications by marrying customers' data with powerful AI models tuned to their business's unique characteristics. The platform is built on the Lakehouse foundation of open data formats and open governance, ensuring that all data remains completely within the customers' control.

In today's fast-moving world, clean, reliable data is a must. The Data Intelligence Platform helps us tackle the challenge of scattered, inconsistent data by streamlining how we manage and use it. It identifies inefficiencies, automates routine tasks, and frees our team to focus on what really matters: growth. With accurate data as the foundation and a collaborative approach, we're ready to adopt AI that powers smarter decisions and long-term impact.

On this occasion, Anil Shah, CEO at CloudFronts, stated: "We are committed to making our customers successful in their AI adoption and getting their data ready for AI. We have been working with the Databricks platform on Azure for some of our customers for a few years already. This partnership shows 100% commitment in making our customers successful in their Data + AI journey."

"Discover How We've Enabled Businesses Like Yours – Explore Our Client Testimonials!"

About CloudFronts
CloudFronts is a global AI-First Microsoft Solutions Partner for Business Applications, Data & AI, helping teams and organizations worldwide solve their complex business challenges with Microsoft Cloud, AI, and Azure Integration Services. We have a global presence with offices in the U.S., Singapore, and India. Since its inception in 2012, CloudFronts has successfully served 200+ small and medium-sized clients across North America, Europe, Australia, MENA, the Maldives, and India, with diverse experience in sectors ranging from Professional Services and Financial Services to Manufacturing, Retail, Logistics/SCM, and Non-profits.

Please feel free to connect with us at transform@cloudfronts.com
Create No Code Powerful AI Agents – Azure AI Foundry
An AI agent is a smart program that can think, make decisions, and do tasks. Sometimes it works alone, and sometimes it works with people or other agents. The main difference between an agent and a regular assistant is that agents can do things on their own. They don't just help: you can give them a goal, and they'll try to reach it.

Every AI agent has three main parts:

Agents can take input, like a message or a prompt, and respond with answers or actions. For example, they might look something up or start a process based on what you asked. Azure AI Foundry is a platform that brings all these things together, so you can build, train, and manage AI agents easily.

References
- What is Azure AI Foundry Agent Service? – Azure AI Foundry | Microsoft Learn
- Understanding deployment types in Azure AI Foundry Models – Azure AI Foundry | Microsoft Learn
- https://learn.microsoft.com/en-us/azure/ai-foundry/how-to/index-add

Usage
First, we create a project in Azure AI Foundry. Click on Next and give a name to your project. Wait till the setup finishes. Once the project creation finishes, we are greeted with this screen.

Click on the Agents tab and click on Next to choose the model. I'm currently using GPT-4o Mini. The list also includes descriptions for all the available models.

Then we configure the deployment details. There are multiple deployment types available, such as Global, Data Zone, and Standard deployments.

Standard deployments [Standard] follow a pay-per-use model, perfect for getting started quickly. They're best for low to medium usage with occasional traffic spikes; however, for high and steady loads, performance may vary. Provisioned deployments [ProvisionedManaged] let you pre-allocate the amount of processing power you need. This is measured using Provisioned Throughput Units (PTUs). Each model and version requires a different number of PTUs and offers different performance levels. Provisioned deployments ensure predictable and stable performance for large or mission-critical workloads.

This is how the deployment details look for Global Standard. I'll be choosing Standard deployment for our use case. Click on Deploy and wait for a few seconds.

Once the deployment is completed, you can give your agent a name and some instructions for its behavior. You should specify the tone, end goal, verbosity, etc., as well. You can also specify the Temperature and Top P values, which are both controls on the randomness or creativity of the model.

Temperature controls how bold or cautious the model is:
- Lower temperature = safer, more predictable answers (factual Q&A, code summarization)
- Higher temperature = more creative or surprising answers (poetry/creative writing)

Top P (nucleus sampling) controls how wide the model's word choices are:
- Lower Top P = only picks from the most likely words (legal or financial writing)
- Higher Top P = includes less likely, more diverse words (brainstorming names)
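If you later call the deployed model from code rather than the Foundry portal, the same two knobs appear as request parameters. The snippet below is a minimal sketch, not part of the original walkthrough: it assumes the openai npm package and an Azure OpenAI deployment; the endpoint, API key variable, and deployment name are placeholders.

```typescript
// Minimal sketch: calling an Azure OpenAI deployment with explicit
// temperature and top_p values. Endpoint, API key, and deployment name
// are placeholders; adjust them to your own resource.
import { AzureOpenAI } from "openai";

async function main() {
  const client = new AzureOpenAI({
    endpoint: "https://<your-resource>.openai.azure.com",
    apiKey: process.env.AZURE_OPENAI_API_KEY,
    apiVersion: "2024-10-21",
    deployment: "gpt-4o-mini", // the deployment name chosen above
  });

  const completion = await client.chat.completions.create({
    model: "gpt-4o-mini",
    messages: [{ role: "user", content: "Suggest three names for a weather bot." }],
    temperature: 0.9, // higher = more creative, good for brainstorming
    top_p: 0.95,      // wider nucleus = more diverse word choices
  });

  console.log(completion.choices[0].message.content);
}

main().catch(console.error);
```

In the portal itself, the same values are simply set in the agent's model settings, as described above.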
Next, I'll add a knowledge base to my bot. For this example, I'll just upload a single file. However, you also have the option to add a SharePoint folder or files, or connect it to Bing Search, MS Fabric, Azure AI Search, etc., as required.

A vector store in Azure AI Foundry helps your AI agent retrieve relevant information based on meaning rather than just keywords. It works by breaking your content (like a PDF) into smaller parts, converting them into numerical representations (embeddings), and storing them. When a user asks a question, the AI finds the most semantically similar parts from the vector store and uses them to generate accurate, context-aware responses.

Once you select the file, click on Upload and Save. At this point, you can start to interact with your model. To "play around" with your model, click on the "Try in Playground" button. And here, we can see the output based on our provided knowledge base. One more example, just because it is kind of fun.

Every input that you provide to the agent is called a "message". Every time the agent is invoked to process the provided input, that is called a "run". Every interaction session with the agent is called a "thread". We can see all the open threads in the Threads section.

To conclude, Azure AI Foundry makes it easy to build and use AI agents without writing any code. You can choose models, set how they behave, and connect your data, all through a simple interface. Whether you're testing ideas, automating tasks, or building custom bots, Foundry gives you the tools to do it. If you're curious about AI or want to try building your own agent, Foundry is a great place to begin. We hope you found this blog useful, and if you would like to discuss anything, you can reach out to us at transform@cloudfronts.com
Struggling with Siloed Systems? Here’s How CloudFronts Gets You Connected
In today's world, we use many different applications for our daily work. One single application can't handle everything, because some apps are designed for specific tasks. That's why organizations use multiple applications, which often leads to data being stored separately or in isolation. In this blog, we'll take you on a journey from siloed systems to connected systems through a customer success story.

About BÜCHI
Büchi Labortechnik AG is a Swiss company renowned for providing laboratory and industrial solutions for R&D, quality control, and production. Founded in 1939, Büchi specializes in technologies such as:

Their equipment is widely used in pharmaceuticals, chemicals, food & beverage, and academia for sample preparation, formulation, and analysis. Büchi is known for its precision, innovation, and strong customer support worldwide.

Systems Used by BÜCHI
To streamline operations and ensure seamless collaboration, BÜCHI leverages a variety of enterprise systems: Infor and SAP Business One are utilized for managing critical business functions such as finance, supply chain, manufacturing, and inventory.

Reporting Challenges Due to Siloed Systems
Organizations often rely on multiple disconnected systems across departments, such as ERP, CRM, marketing platforms, spreadsheets, and legacy tools. These siloed systems result in:

The Need for a Single Source of Truth
To solve these challenges, it's critical to establish a Single Source of Truth (SSOT), a central, trusted data platform where all key business data is:

How We Helped BÜCHI Connect Their Systems
To build a seamless and scalable integration framework, we leveraged the following Azure services:
>Azure Logic Apps – Enabled no-code/low-code automation for integrating applications quickly and efficiently.
>Azure Functions – Provided serverless computing for lightweight data transformations and custom logic execution.
>Azure Service Bus – Ensured reliable, asynchronous communication between systems with FIFO message processing and decoupling of sender/receiver availability.
>Azure API Management (APIM) – Secured and simplified access to backend services by exposing only required APIs, enforcing policies like authentication and rate limiting, and unifying multiple APIs under a single endpoint.

BÜCHI's case study was published on the Microsoft website, highlighting how CloudFronts helped connect their systems and prepare their data for insights and AI-driven solutions.

Why a Single Source of Truth (SSOT) Is Important
A Single Source of Truth means having one trusted location where your business stores consistent, accurate, and up-to-date data. Key reasons it matters:

How We Did This
We used Azure Function Apps, Service Bus, and Logic Apps to seamlessly connect the systems. Databricks was implemented to build a Unity Catalog, establishing a Single Source of Truth (SSOT). On top of this unified data layer, we enabled advanced analytics and reporting using Power BI.

In May, we hosted an event with BÜCHI at the Microsoft office in Zurich. During the session, one of the attending customers remarked, "We are five years behind BÜCHI." Another added, "If we don't start now, we'll be out of the race in the future." This clearly reflects the urgent need for businesses to evolve. Today, connected systems, a Single Source of Truth (SSOT), advanced analytics, and AI are not optional; they are essential for sustainable growth and improved human efficiency.
The pace of transformation has accelerated: tasks that once took months can now be achieved in days, and soon, perhaps, with just a prompt. To conclude, if you're operating with multiple disconnected systems and relying heavily on manual processes, it's time to rethink your approach. System integration and automation free your teams from repetitive work and empower them to focus on high-impact, strategic activities. We hope you found this blog useful, and if you would like to discuss anything, you can reach out to us at transform@cloudfronts.com.
How We Built Smart Pitch — and What We Learned Along the Way
In today's world, AI is no longer a luxury; it's a necessity for driving smarter decisions, faster innovation, and personalized experiences. We came up with our own requirement for AI: supporting the conversion from MQL (Marketing Qualified Lead) to SQL (Sales Qualified Lead).

How did the idea originate?
In our organization, whenever a prospect reaches out to us, we search for company information like company size, revenue, location, industry type, contact person details, designation, decision-maker, and LinkedIn profile. This information helps the Sales team prepare better and deliver a stronger pitch by understanding the customer before the call. Also, during the MQL to SQL stage, we look for things like:

This information helps the Sales team convert the prospect into a client and increases our chances of winning the deal. Earlier, this entire process was manual and time-consuming. So, we decided to automate it with an AI agent that can gather this information for us in just a few minutes.

Implementation approach
After the project was approved internally, we started exploring how to make it happen. Initially, we didn't know where or how to start. During our research, we came across Copilot Studio, which allows us to build custom agents from scratch based on our needs. We learned about Copilot Studio's capabilities and began building our agent. We named it Elevator Pitch.

Version 1 Highlights:

This feedback led to the idea for Version 2, which would automate more steps and also pull information from the internet.

Version 2 Enhancements:

Version 2 Features: Company and contact information with a single click on MQL to SQL; the agent now generates the document within minutes, something that earlier used to take hours or even a full day.

Live demo in Zurich & New York
On 22nd May 2025, we had an event scheduled at the Microsoft office in Zurich with one of our clients, where we shared the BÜCHI journey with CloudFronts. We discussed how we collaborated to connect their multiple systems and prepared their data for insights and AI initiatives. At the same event, we had the opportunity to demonstrate our Smart Pitch product, which caught the audience's attention. It was a proud moment for us to showcase our first AI product at the Microsoft office, delivered within just a few months of hard work. Our second opportunity came on 6 June 2025 in New York, at the AI Community Conference, where we presented again in front of a global audience.

What's Next in Version 3:
So far, we have built this solution using Microsoft's inbuilt Knowledge Center, the ChatGPT API, SharePoint, company websites, and Dataverse. Since we were working with both structured and unstructured data, we faced some inconsistencies and performance issues. This led us to reflect and identify the need for Version 3 (V3), which will include:

The development of Smart Pitch V3 is currently in progress. We'll share our thoughts once it goes live. A demo video has also been shared, so you can see how smart and fast our agent is at delivering useful insights.

Delivering Answers in Minutes – Thanks to Smart Pitch
I'd also like to share a quick story. One day, our Practice Manager was on leave, and we received a prospect inquiry about Project Operations to Business Central (PO-BC) pricing. I wasn't sure where that information was stored, and suddenly our CEO asked me for the details. I was a bit stressed, unsure where to search or how to respond. Then I decided to ask our Smart Pitch agent the same question.
To my surprise, the agent quickly gave me the exact information I needed. It was a big relief, and I was able to share the details with our CEO in just a few minutes, without even knowing where the document was uploaded. We hope you found this blog useful, and if you would like to discuss anything, you can reach out to us at transform@cloudfronts.com.