How to Use Copilot Chat to Supercharge Productivity in Business Central
Interacting with systems in natural language makes technology more accessible and user-friendly for all employees, regardless of technical skill. It allows users to complete tasks quickly without memorizing complex commands, improving productivity and reducing the potential for errors. This streamlined access to information leads to faster decision-making, ultimately helping organizations operate more efficiently.

Are your people finding it difficult to navigate Business Central and access important information quickly? If so, consider incorporating Copilot Chat to ease their suffering!

Research indicates that call center operators using AI assistance became 14% more productive, with gains exceeding 30% for less experienced workers. This improvement is attributed to AI capturing and conveying organizational knowledge that helps in problem-solving. Specific tasks can see remarkable speed increases; for instance, software engineers using tools like Codex can code up to twice as fast. Similarly, writing tasks can be completed 10-20% faster with the aid of large language models. In the retail sector, AI-driven chatbots have been shown to increase customer satisfaction by 30%, demonstrating their effectiveness in enhancing customer interactions. Currently, around 35% of businesses leverage AI technology, and that share is expected to grow significantly as organizations recognize its strategic importance. I am confident that this article will highlight the advantages of incorporating Copilot into your daily activities.

Configuration

In Business Central:
– Search for "Copilot and Capabilities."
– Select "Chat" and click the "Activate" button.
– Click the "Copilot" button near the top right. You'll be presented with the chat pane.
– You can ask it queries such as "Show me all the Customers from India" or "How many open Sales Quotes do I have?"
– You can also ask context-specific questions, for example about the record you currently have open.
– You can also ask for guidance on how to achieve certain things in the system.

In my humble opinion, it is far from perfect, but it is absolutely a step in the right direction. In the coming days, the functionality is surely going to blossom, and navigating to different screens may become something that only power users need to think about.

Conclusion

In conclusion, I believe utilizing Copilot can surely boost users' productivity and reduce reliance on partners or other experienced users for resolving minor queries. It also reduces the effort needed to move from one piece of information to another. One thing that I would love to see incorporated is data summarization and the inclusion of all the fields available on an entity in Copilot's database.

If you need further assistance, feel free to reach out to CloudFronts for practical solutions that can help you develop a more effective service request management system. Taking action now will lead to better customer satisfaction and smoother operations for your business.
X++ and Excel: A Powerful Partnership
Excel has over 750 million users worldwide, making it one of the most popular software applications in the world. According to recent studies, 89% of companies use Excel for daily operations, financial modeling, data analysis, and other tasks. Excel is so integral to the financial world that many financial analysts and accountants refer to themselves as "Excel jockeys" or "Excel ninjas." NASA has even used Excel for various calculations related to space missions. Using Excel for manual data entry is much easier for end users, as it provides a familiar interface and can be navigated quickly. It can also be used for quick minor calculations and formulas.

Details

For businesses generating large volumes of data, it's essential to have an efficient system for users to input that data smoothly. Are you struggling to keep up with your rapidly growing data? A study by Forrester Consulting shows that companies using Microsoft 365 tools like Excel, Word, Outlook, and PowerPoint see a 15-20% boost in employee productivity due to better collaboration and task management. This article will surely inspire you to start using Excel for your organization's daily operations too!

Enabling the Developer Tab in Excel

To access advanced features like creating macros, using form controls, or accessing XML commands in Excel, you'll need to enable the Developer tab (File > Options > Customize Ribbon > check "Developer"). With that in place, here's how to set up the Dynamics add-in:
– In the Developer tab, click on "Add-ins."
– In the pop-up that follows, click on "Store," search for "Microsoft Dynamics," and press Enter.
– Once the Microsoft Dynamics add-in appears in the results, click on "Add."
– Click on Continue.
– Go to your Finance and Operations environment.
– Go to System Administration -> Setup -> Office App Parameters.
– Go to App Parameters and click on "Initialize app parameters."
– Go to "Registered applets" and click on "Initialize applet registration."
– Go to "Registered resources" and click on "Initialize resource registration."
– To test it out, go to the "All sales orders" list, click on the "Office" icon at the top right, and click on one of the "non-obsolete" options.
– You can either download the workbook to your own system or save it directly from this screen.
– When you open the downloaded Excel file, after enabling editing, the add-in pops up and the data loads.

You can also use this Excel workbook to create records in the system:
– Open the downloaded Excel sheet.
– Click on "New."
– Add the necessary fields in the newly created rows.
– Once done, click on Publish.

Back in D365, we can see that new records have been added to the system via Excel.

In conclusion, I firmly believe that using Excel for manual data entry can significantly cut down on unnecessary tasks. If you're looking to streamline your processes or maximize the potential of your ERP systems, please feel free to reach out.
Optimizing Data Management in Business Central using Retention Policies
Introduction

Data retention policies dictate which data should be stored or archived, where it should be stored, and for how long. When the retention period for a data set expires, the data can either be deleted or moved to secondary or tertiary storage as historical data. This approach keeps primary storage cleaner and helps the organization remain compliant with data management regulations. In this blog, we'll cover how to configure retention policies in Business Central and how to make custom tables available to them.

Pre-requisites

Business Central environment

References

Data Retention Policy
Clean up Data with Retention Policy – Microsoft Learn

Details

In Business Central, we can define retention policies based on two main parameters: the table to be monitored and the retention period.

Retention periods specify how long data is kept in the tables covered by a retention policy and determine how often data is deleted. Retention periods can be as long or as short as needed.

Applying a retention policy

Retention policies can be applied automatically or manually. For automatic application, enable the policy; this creates a job queue entry that applies it according to the defined retention period. By default, the job queue entry applies policies daily at 02:00, but this timing can be adjusted, preferably to non-business hours. All retention policies use the same job queue entry. For manual application, use the "Apply Manually" action on the Retention Policies page and turn on the "Manual" toggle to prevent the job queue entry from applying the policy automatically.

We can also exclude or include certain records based on filters. Deselect "Apply to all records"; this shows a new tab where we can define record filters, and every such filter group can have its own retention period.

By default, only a few selected tables are shown in the table selection on the Retention Policy page. If we want to include a custom table in this list, we'll have to do a small customization.

**You cannot add tables that belong to separate modules; for example, "Purchase Header" cannot be added to this list by you (unless you work at Microsoft, in which case you already knew this).**

So here I've created a small sample table, and I've created a codeunit with the Install subtype in which I add my custom table to the allowed-tables list (a minimal sketch of such a codeunit is included at the end of this post). After deploying, I can now see my custom table in the list.

Developers also have the option to set Mandatory or Default filters on custom tables; mandatory filters cannot be removed, while default filters can be. When registering the table filter, setting the "Mandatory" parameter to true makes it mandatory; otherwise it is a default filter. When I add the table ID on the "Retention Policy" page, the filter entry is created automatically, and if I try to remove a mandatory filter, I get an error.

Conclusion

Thus, we saw how we can leverage retention policies in Business Central to reduce capacity wastage without heavy customizations.
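To make the install codeunit step above concrete, here is a minimal AL sketch. It assumes a hypothetical custom table named "My Sample Table"; the object names and IDs are placeholders, and the registration call is based on the system app's allowed-tables codeunit rather than the exact code from this walkthrough.

```al
table 50100 "My Sample Table"
{
    DataClassification = CustomerContent;

    fields
    {
        field(1; "Entry No."; Integer) { }
        field(2; "Created On"; Date) { }
    }

    keys
    {
        key(PK; "Entry No.") { Clustered = true; }
    }
}

codeunit 50101 "Register Retention Tables"
{
    Subtype = Install;

    trigger OnInstallAppPerCompany()
    var
        RetenPolAllowedTables: Codeunit "Reten. Pol. Allowed Tables";
    begin
        // Register the custom table so it can be selected on the Retention Policy page.
        // Other overloads of AddAllowedTable also accept a date field number and
        // default/mandatory table filters, if finer control is needed.
        RetenPolAllowedTables.AddAllowedTable(Database::"My Sample Table");
    end;
}
```

After the extension is published, the table should appear in the Table ID lookup on the Retention Policies page.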
Integration with Finance and Operations – From Basics (Part 2)
Introduction

Finance and Operations provides two major ways for external systems to interact with tables (or data entities) through APIs: Custom Services and Data Entities. Data entities in D365 Finance and Operations simplify data management by grouping data from multiple tables, making it easier to import, export, and integrate data with other systems. Custom services in D365 Finance and Operations allow developers to create web services for specific business needs; they enable external systems to interact with D365 F&O by exposing custom logic and operations, which helps in integrating and automating processes with other applications.

In the previous blog, we saw how we can use Data Entities to create APIs. In this blog, we'll see how we can use Custom Services to create APIs.

References

Custom Service Development
Exposing an X++ Class as a Data Contract
Using Data Contracts

Configuration

– Right-click on the project, click on "Add" and then "New Item."
– Click on Services and select "Service Group." Add an appropriate name for your Service Group. Do note that this will be part of your endpoint URL.
– Once that is done, we'll need to create a new Service as well. Repeat the same steps, but this time select the "Service" object and add an appropriate name.
– Once both the Service Group and Service objects are created, we'll need to create request, response, and request-processing objects. For that, right-click on the project > Add > New Item > Code > Class. Add an appropriate name and click on "Add."
– In the Request object, set the [DataContract] attribute at the class level and add global variables which will be used to send data to the processing object.
– In the Response object, set the [DataContract] attribute at the class level and add global variables which will be used to return data from the processing object.
– In the processing object, write the necessary logic. Here, I'm writing logic to pull the data from the request object into local variables and then create a Customer record along with an address entry for that customer. If everything completes successfully, I return a "Success" status along with the customer ID; otherwise, a "Failed" status along with the customer ID.
– If there is any logic for logging, it can be added to the processing class after the main operation has completed.
– Once this is done, we can add our processing class to our Service object. Open the "Service" object and set the "Class" field to the processing class you have created.
– Right-click on the Service object in the designer and click on "New Service Operation."
– In the new Service Operation that is created, set the method from the processing class that you want to call in the "Method" field. Set an appropriate name for the operation (this will be part of the endpoint). Set the operational domain, i.e. whether it will work only for a particular company or across companies. Set the Access Level (the access level increases as you go down the list).
– Now we'll assign our Service object to the Service Group. Open the Service Group in the designer, right-click it, and click on "New Service."
– In the newly created "ServiceGroupService" entry, set your "Service."
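Before moving on to testing, here is a rough X++ sketch of what the request, response, and processing classes described above could look like. The class, field, and method names are placeholders for illustration, and the customer-creation logic is left as a stub rather than the exact code from this walkthrough.

```xpp
[DataContract]
class CFSCustomerRequest
{
    private str customerId;
    private str customerName;

    [DataMember("CustomerId")]
    public str parmCustomerId(str _customerId = customerId)
    {
        customerId = _customerId;
        return customerId;
    }

    [DataMember("CustomerName")]
    public str parmCustomerName(str _customerName = customerName)
    {
        customerName = _customerName;
        return customerName;
    }
}

[DataContract]
class CFSCustomerResponse
{
    private str status;
    private str customerId;

    [DataMember("Status")]
    public str parmStatus(str _status = status)
    {
        status = _status;
        return status;
    }

    [DataMember("CustomerId")]
    public str parmCustomerId(str _customerId = customerId)
    {
        customerId = _customerId;
        return customerId;
    }
}

class CFSCustomerService
{
    // This is the method registered as the service operation.
    public CFSCustomerResponse createCustomer(CFSCustomerRequest _request)
    {
        CFSCustomerResponse response = new CFSCustomerResponse();

        try
        {
            // Placeholder for the actual customer and address creation logic,
            // e.g. inserting into CustTable and the address tables.
            response.parmCustomerId(_request.parmCustomerId());
            response.parmStatus("Success");
        }
        catch
        {
            response.parmCustomerId(_request.parmCustomerId());
            response.parmStatus("Failed");
        }

        return response;
    }
}
```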
Then, after a rebuild, database sync, and deployment, open Postman and use the following URL template:

<base_url>/api/services/<ServiceGroup>/<Service>/<Method>

Now, if I trigger the "POST" request, I get a "Success" status along with the CustomerId. If I try to recreate the same customer, I get a "Failed" status along with the CustomerId.

If you are not sure whether your API exists, you can simply call a "GET" request on the URL <base_url>/api/services. This returns a list of all the Service Groups present in the system. We can then call a "GET" request including this Service Group in the URL, which returns a list of all the Services for that Service Group. We can then call a "GET" request including this Service in the URL, which returns a list of all the Operations within that Service. Finally, we can call a "GET" request including this Operation in the URL, which returns the request and response objects for that service operation.

Conclusion

Thus, we saw how to create APIs using Custom Services in Finance and Operations. In the next blog, we'll see some advanced API functionalities that are present in Finance and Operations. We hope you found this article useful, and if you would like to discuss anything, you can reach out to us at transform@cloudfronts.com
Integration with Finance and Operations – From Basics (Part 1)
Introduction

Finance and Operations provides two major ways for external systems to interact with tables (or data entities) through APIs: Custom Services and Data Entities. Data entities in D365 Finance and Operations simplify data management by grouping data from multiple tables; they make it easier to import, export, and integrate data with other systems. Custom services in D365 Finance and Operations allow developers to create web services for specific business needs; they enable external systems to interact with D365 F&O by exposing custom logic and operations, which helps in integrating and automating processes with other applications.

Comparing the two:
– Purpose: Data Entities simplify data management tasks like import, export, and integration; Custom Services expose custom business logic and operations as web services.
– Functionality: Data Entities provide structured access to data from multiple tables in a unified format; Custom Services allow external systems to perform actions or retrieve data via API calls.
– Usage: Data Entities are used for bulk data operations, data migration, and integration with external systems; Custom Services are used for real-time integration, extending functionality, and custom business process automation.
– Typical use cases: Data Entities cover data import/export, data synchronization, and data migration; Custom Services cover integrating with external applications, custom business processes, and real-time data access.
– Data handling: Data Entities focus on data in bulk; Custom Services focus on specific operations or business logic.

References

Data Entities Overview – Finance and Operations
Build and consume data entities – Finance and Operations
Exposing an X++ class as a Data Contract

Configuration

Here, to understand the creation of APIs in either case, we'll expose the same table using both Data Entities and Custom Services.

Data Entity:
– Right-click on the project, click on "Add" and then "New Item."
– Click on Finance and Operations > Dynamics 365 Items > Data Model and then select "Data Entity."
– Select the table that you want to expose in the "Primary Data Source" field, an appropriate "Entity Category," the "Public Entity Name" and "Public Entity Set Name" (which is what the endpoint will be), and the staging table name.
– Select the necessary fields from the primary data source. You can add related tables by clicking on the small arrow next to the table name, which displays the list of all associated tables; you can then select the relevant fields from the associated tables.
– Once done, you'll get one data entity, two security privileges, and one staging table created.
– If you want to add new data sources, you can right-click on the primary data source's "Data Sources" node and add a new data source.
– You can drag fields from any of the data sources into the "Fields" section of the data entity to make them available on the API.

Calling the Data Entity

You can call the <base url>/data URL to get a list of all the data entities available in the system. From here, if I call a "GET" request on my data entity (using the "Public Collection Name" property of the data entity, which we set in the Data Entity wizard), I get the entity's records in the response. Please note that this "Public Collection Name" is case sensitive.

Now, if I need to create a "Customer" record, I can simply pass the same keys into a "POST" request, and we can see the new record in FnO.
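For illustration, a create request against the TestCustomers entity used here would be shaped roughly like this. This is a sketch: the company "usmf" and the "CustomerName" field are placeholders, and the actual body depends on the fields you exposed on the entity.

```
POST {{base_url}}/data/TestCustomers
Content-Type: application/json
Authorization: Bearer <token>

{
    "dataAreaId": "usmf",
    "CustomerId": "CUST-0001",
    "CustomerName": "Contoso Retail"
}
```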
If we want to update a record, we make a PUT request with the syntax {{base_url}}/data/TestCustomers(dataAreaId='<Company Name>',CustomerId='<Customer Id>'). The URL must include all the entity keys defined on the data entity; since we only have one key field besides dataAreaId, we simply pass that. Passing it without the dataAreaId will throw errors. You can delete the record using the same syntax but with the DELETE request.

Conclusion:

In this blog, we explored how to create APIs using Data Entities in Dynamics 365 Finance and Operations, simplifying data management and external system integrations. Data Entities offer an efficient way to handle bulk data operations, while Custom Services provide flexibility for exposing specific business logic. We'll see how to create APIs using Custom Services in the next blog. We hope you found this article useful, and if you would like to discuss anything, you can reach out to us at transform@cloudfronts.com
CI/CD with TFS for Finance & Operations
Introduction

There are 100 million developers around the globe who use Git. However, if you want to work with customizations in Finance and Operations, you need to learn how to use TFS. Initially, I was frustrated and confused about this requirement, but as I learned more about how projects are structured both locally and in TFS, things started to make sense. TFS (Team Foundation Server) is Microsoft's take on source control management, continuously released and improved since 2005. TFS keeps the source code centralized and is tightly integrated with the Microsoft ecosystem. Despite the differences, if you are familiar with Git, transitioning to TFS shouldn't be too difficult. TFS shares similar concepts with Git, such as checking in, branching, merging, version tracking, and other standard features of a source control management system. Understanding these similarities can make learning TFS easier and help you leverage its full potential in Finance and Operations projects.

Configuration

So, here we'll be starting with a new project. (If you are working with an already created repository, you can skip ahead.)

Now, I'll add two folders here, "Main" and "Released". Later, we'll convert them into branches from Visual Studio. In TFS, we have the concept of branches, and branches can contain folders as well. Folders are used for organizing files and do not impact the version control flow directly; they are simply a way to keep the repository structured and manageable. Branches (similar to Git) are used to manage different versions or lines of development in the repository; they allow for parallel development and keep separate histories until changes are merged. Inside Main, I've added a Trunk folder as well.

Now, let's head into our development environment and connect this project to Visual Studio. I've clicked on "Continue without Code" for now. I'll click on View and then "Team Explorer". Here, it says "Offline" as currently there's no Azure DevOps project connected to it. So, let's do that! I'll click on "Manage Connections" and "Connect to a Project". As the project is hosted in my own organization on Azure DevOps, I'll use the same credentials to log in. Here, we can see all the different projects I have created within my organization in Azure DevOps; I'll click on the relevant one and click on Connect.

Here, we see the three sections in the Team Explorer view:
1 – Which credentials are being used to connect.
2 – The name of the root of the project and where it plans to download the content from TFS.
3 – The different components where we'll be doing most of our work once the initial setup is completed.

For now, I'll just click on "Map & Get". Here, we can see that the mapping was successful. Next, we click on the Source Control Explorer to see the actual content of the TFS server.

Now, we can convert the "Main" and "Release" folders into branches. We can do this by right-clicking on the folder -> Branching and Merging -> Convert to Branch. After converting them to branches, the icon next to them changes.

Next, I'll right-click on my "Main" branch and add two new folders here: "Metadata" and "Projects". Now, before we can use these folders anywhere, we need to "push" these changes to TFS. For that, we right-click on the "Trunk" folder and click on "Check in Pending Changes". Now, we add a comment describing what changes have been done (similar to a commit message). At the bottom, we can see the files that have been created or modified.
Once the check-in is done, we can see that the "+" icon next to the folders disappears and we get a notification that the check-in has completed successfully.

Now, this is where TFS shines through as better source control management for Finance and Operations. In FnO, models and projects are stored in separate folders. Using Git for this setup can be tricky, as it would mean either managing two different repositories or dealing with a huge .gitignore file. TFS makes it easier by letting you map local folders directly to TFS folders, simplifying the management process.

Here, we can see that our current mapping is a bit different from what we need; this is because of the "Map & Get" we did initially. So, to change that mapping, click on "Workspaces", then click on "Edit". Now, we click on a new line to create a new mapping. Here, I'm creating a mapping between the "Metadata" folder in the "Main" branch in TFS and the "PackageLocalDirectory", the folder where all the models are stored on my system. Then I'll create another mapping between the "Projects" folder and the local folder where my projects are stored. Once I click on "OK", it prompts me to ask whether I want to load the changes. Click on "Yes" and move forward.

But nothing changes here in Source Control Explorer. That's because the Source Control Explorer shows what is stored in TFS, and right now nothing is; so we'll have to add some models or projects here. We can either add existing ones or create new ones. Let's try to create a new model.

Now that the model is created, we'll need to add it to our source control. Click on the blank space within the "Metadata" folder and select "Add Items to Folder". In the window, we can see that because of the mapping, we are taken to the local directory "PackageLocalDirectory", and we can see our model inside it. Select that and click on "Next". In the next view, we can see all the files and folders contained within the selected folder. Out of these, we can exclude the "Delta" folders. After this, we are left with the folders for the different elements; we can remove the content from the "XppMetadata" folders as well, which leaves us with just the descriptor XML file.

**Please do not exclude the descriptor file, as without it Visual Studio will not be able to refer to your model or its …
Leverage Postman for Streamlined API Testing in Finance and Operations
Introduction

Postman is an essential tool for developers and IT professionals, offering a robust platform for testing APIs, automating test processes, and collaborating efficiently across teams. In this blog, we're going to connect Postman to our Finance and Operations environment so we can test standard or custom APIs. This connection is a crucial step in ensuring that your APIs function as expected, and it helps streamline the integration of various business processes within your organization. Whether you're new to API testing or looking to optimize your current setup, this guide will walk you through the process with clear, actionable steps. I've already covered automating the testing in Postman in my blog here, so once the connections are in place you'll be good to go!

Configuration

We'll start by creating an App Registration in the Azure Portal:
– Go to the Azure Portal (on the same tenant as your FnO environment).
– Search for "App Registration" and click on "New Registration."
– Add a name for your new app and click on "Register." Once it is completed, you'll be taken to the Overview of the app.
– Click on "Add a certificate or secret" under "Client Credentials."
– Add an appropriate name and select the expiration date of the secret as necessary.
– Once you click on Add, you'll get a confirmation message that the client credential has been created, and you'll be able to see the value of the secret.

**Be sure to copy this value and store it securely, as once you navigate away from this page the value will no longer be available.**

Now that everything is done on the Azure side, open your FnO environment and search for "Microsoft Entra Applications." Click on "New." Paste the "Application (Client) ID" into the "Client ID" field, then assign it a suitable name and a User ID. The permissions given to the User ID will determine the permissions for the app. For now, I've assigned the "Admin" user. That's all the configuration required on the FnO side.

Now, let's jump back into Postman. We'll start with a blank workspace and create a simple collection. The first thing I like to do is create different environments. In FnO, we have a Production environment, a Sandbox, and possibly multiple development environments, so different environments may be using different apps. To represent these, I like to create different environments in Postman as well. This is done by going to "Environments", clicking on "+" to create a new environment, and giving it an appropriate name.

Now, in this environment, I'll add my app details as environment variables. The values for these come from the app registration we just created; "grant_type" can be hard-coded to "client_credentials", and we can leave "Curr_Token" blank for now. We can also change the variable type to "Secret" so that no one else can see these values.

Now that the necessary credentials have been added to Postman, we'll set up our collection for generating the auth token. For that, we'll copy the "OAuth 2.0 token endpoint (v2)" from the "Endpoints" list on our Azure app registration's Overview screen. In Postman, click on your collection, then "Authorization", then select "OAuth 2.0". I'll paste the copied token endpoint URL into the "Access Token URL" field and add my variables as defined in the environment.
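For reference, the call Postman makes when you click "Get New Access Token" is an ordinary client-credentials token request, roughly shaped like the sketch below. The client_id and client_secret variable names are whatever you chose for your environment variables, and the scope shown assumes the usual FnO pattern of the environment URL followed by /.default.

```
POST https://login.microsoftonline.com/{{tenant_id}}/oauth2/v2.0/token
Content-Type: application/x-www-form-urlencoded

grant_type=client_credentials&client_id={{client_id}}&client_secret={{client_secret}}&scope={{base_url}}/.default
```

If this request returns a token, the same values will work in the collection's OAuth 2.0 settings.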
If you get the error "Unresolved Variable" even after defining the variables, it could mean that the environment isn't set as active. Go back to the Environments list and mark it as active; this way we can easily swap between environments. Once the environment is marked as active, we can see that the variable is resolved correctly. I'll also ensure that my Access Token URL refers to the tenant ID as per my variable by embedding the "tenant_id" variable in it. Next, I'll click on "Get New Access Token." If everything has gone well, the token is generated successfully. After that, you can give it a name and click on "Use Token" to use it.

I'll now create a simple request which we can use to test this token. Right-click on the collection, click on "Add Request", and give it an appropriate name. The "{{base_url}}/data" endpoint returns a list of APIs available in the system. I've set the request's authentication to "Inherit Auth from parent", which means it relies on the authentication set on the collection. Here we see that the request was executed successfully.

If for some reason you cannot use the standard Postman way of generating the token, you can create a separate request responsible for generating the auth token, store the result as a variable, and use it in your requests. From there, you can use the generated "access_token" and pass it as a "Bearer Token" to your requests, or you can select the entire token, set it to your "Curr_Token" variable, and then pass this variable to the requests.

From Postman, we can then share these collections (which contain the API requests) and environments (which contain the credentials) separately as needed.

All data entities follow the pattern {{base_url}}/data/{{endpoint}}, and all services follow the pattern {{base_url}}/api/services/{{service_group_name}}/{{service_name}}/{{method_name}}. If I call a "GET" request on a service URL, I get the details of the service; for instance, here I'm getting the type of data I have to send in and the response I'll be getting back, in the form of object names. Moving back one step, I get the names of the operations (or methods) within this service object. Moving back one step, I get the services within this service group. Moving back one step, I can see all the service groups available in the system. To actually run the logic behind the services …
Reduce Storage Usage for Business Central using Data Administration
Introduction

By default, Business Central comes with 80 GB of storage capacity shared across three sandbox environments and one production environment, with an additional 3 GB per Premium license, 2 GB per Essential license, and 1 GB per Device license. Depending on your business volume, these storage limits may run out if data is not managed properly. Business Central now comes with a one-stop view where you can manage (compress or delete) entries to reduce storage usage: "Data Administration."

Pre-requisites

Business Central Cloud/On-Prem

References

Manage Storage by Deleting Documents or Compressing Data – Business Central | Microsoft Learn

Configuration

In Business Central, we've had the option to view capacity usage from the Admin Center for a while now. Recently, a one-stop view to check and manage capacity usage, Data Administration, has also been added within Business Central itself. It can be found directly from the global search. The first time we open it, we are greeted with an empty view; the data is loaded after we click on Refresh. You can also configure it so that the data is refreshed automatically in the background every so often.

Here, we get the options for data cleanup, where we can delete data that isn't required anymore. All of these cleanup options open a similar processing report where you can set filters which are used to delete the records as needed. The "Delete Detached Media" action opens another page, which I've discussed in depth in another blog.

The second action group holds actions meant to compress the ledger entries, which can drastically reduce the storage space used. It is important to note that you can only compress entries older than five years on your own, and they must belong to closed fiscal years and themselves be closed (Open is set to false). You can configure the compression so that there is one entry per day, week, month, quarter, or year, or one entry for the period defined for compression. You also have the functionality to delete empty registers from here.

If these individual actions seem overwhelming, Microsoft also provides a Data Administration wizard which simplifies this process and lets you manage capacity step by step.

Conclusion

Thus, we saw how we can use the standard data administration tools to manage the capacity of a Business Central environment, which can help the system run much more efficiently in terms of both performance and costs. We hope you found this article useful, and if you would like to discuss anything, you can reach out to us at transform@cloudfronts.com
Item Availability Overview – A quick glance at the Item’s Inventory levels
While going through some sales documents, I noticed that the page that appears when I click on "Show Details" in the low-inventory notification has been updated! When we click on "Show Details" now, we're taken to a page named "Item Availability Check". Furthermore, it includes options to directly create a Purchase Order or a Purchase Invoice from this page. If a vendor is specified in the "Vendor No." field of the Item Card, the Purchase Order/Invoice is automatically created for that vendor. If multiple vendors are set up in the Item Vendor Catalog instead of the Vendor No., all the vendors are displayed, and the one selected by the user is used to create the Purchase Order/Invoice. In both cases, the purchase line will reflect the shortfall as the quantity. If the item has any substitutes available, the "Substitute Exists" field indicates this, and clicking on it opens the Item Substitutions page. Further, if you click on "All Locations", the "Item Availability by Location" page opens. That's all! Just wanted to share something new I learned recently. We hope you found this article useful, and if you would like to discuss anything, you can reach out to us at transform@cloudfronts.com
Actionable Error Messages in Business Central
Introduction

Error handling is an important concept in every technical field. It helps programs deal with unexpected problems and mistakes smoothly, makes sure software works reliably and doesn't crash unexpectedly, and helps developers find and fix issues quickly, making the software better for users. Plus, it gives users clear messages when something goes wrong, making their experience smoother. It shows that the team has considered the scenario and has measures in place for it, indicating a well-designed solution. Microsoft has an excellent document which lists the things to keep in mind for writing resilient code. In Business Central, we have try functions to handle errors and the Error function to show those errors to users. In this blog, we'll learn how we can enhance error messages so that users can resolve the errors themselves, or at the very least, so we can point them towards where the error is.

Pre-requisites

Business Central OnPrem/Cloud

References

Actionable errors
Try Methods for Error Handling
Robust Coding Practices
Error Info – Business Central Docs

Explanation

Before we get to the code, let's set a little context. For error handling, Microsoft defines two categories of errors in Business Central. ErrorInfo is a data type used for error handling and reporting; it can hold information about errors that occur during the execution of code, and it has additional properties and actions that can be used to define its behavior towards the end user. The most useful ones are described below.

The "AddAction" procedure takes an action caption, a codeunit, and a method name as input. To pass input into the handler, we add an "ErrorInfo" object as a parameter to that method, and if we want to specify details of the record where the error is happening or where the fix is to happen, we can set the record-related properties (such as RecordId and SystemId) on the ErrorInfo. The "AddNavigationAction" procedure only takes an action name as input, so to tell the action which page and which record to open, we set the page number and record details on the ErrorInfo as well. If you are passing the Page No. and System Id to the procedure which handles the error, the same values can be accessed there too.

Code

Here, I've taken a sample scenario where the value of one field depends on the value of another field on the Sales Order. Basically, I've set it up so that these validations are triggered when the Sales Order is posted. The same applies to the "Not Blank" scenario, so I'm not writing that one out for now.

So, if I try the second scenario, where Type is blank and the field has some value, we get the error message. If I click on "Copy Details", I can see the detailed message that I added for this ErrorInfo. If I click on the "Make Mandatory Field Blank" action, I can make the "Some Important Field" blank. The code behind the "Make Mandatory Field Blank" action clears that field on the record that raised the error (a rough sketch of the overall pattern is included at the end of this section). I've used messages to confirm that the values I passed at the origin of the error are flowing into the procedure.

Now, some of you might be wondering: if this was an error message where one field depended on another, it should have been a validation. And yes! That is correct, and the same ErrorInfo can be raised from the field's validation. Here, I've used both "AddAction" and "AddNavigationAction" on the ErrorInfo. For the navigation parameters, all of the parameters point to the Customer, which opens the Customer Card for the specified Customer.
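Since the original code screenshots are not reproduced here, below is a minimal AL sketch of the overall pattern. The "Some Important Field" field, the simplified rule, and the object names and IDs are placeholders; it illustrates the ErrorInfo API rather than the exact code from this walkthrough.

```al
codeunit 50120 "Sales Order Error Demo"
{
    procedure CheckImportantField(SalesHeader: Record "Sales Header")
    var
        FieldErrorInfo: ErrorInfo;
    begin
        // Hypothetical rule: "Some Important Field" must be empty before posting.
        if SalesHeader."Some Important Field" = '' then
            exit;

        FieldErrorInfo := ErrorInfo.Create('Some Important Field must be blank when Type is blank.');
        FieldErrorInfo.DetailedMessage := 'The value was set while Type was blank, so posting is blocked.';

        // Point the error at the offending record so the handlers below can find it.
        FieldErrorInfo.RecordId := SalesHeader.RecordId;
        FieldErrorInfo.SystemId := SalesHeader.SystemId;

        // Action the user can click to fix the data directly from the error dialog.
        FieldErrorInfo.AddAction('Make Mandatory Field Blank', Codeunit::"Sales Order Error Demo", 'ClearImportantField');

        // Navigation action: opens the page set on the ErrorInfo for the record set above.
        FieldErrorInfo.PageNo := Page::"Sales Order";
        FieldErrorInfo.AddNavigationAction('Show Sales Order');

        Error(FieldErrorInfo);
    end;

    procedure ClearImportantField(FailedErrorInfo: ErrorInfo)
    var
        SalesHeader: Record "Sales Header";
    begin
        // The record details set at the origin of the error are available here.
        if SalesHeader.GetBySystemId(FailedErrorInfo.SystemId) then begin
            SalesHeader."Some Important Field" := '';
            SalesHeader.Modify();
        end;
    end;
}
```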
Conclusion

You can refer to the "Actionable Errors" documentation for best practices and patterns on which type of actionable error to use and where to use it. Thus, we learned how to utilize actions within error messages in Business Central to assist users in resolving errors more effectively. We hope you found this article useful, and if you would like to discuss anything, you can reach out to us at transform@cloudfronts.com.
