Latest Microsoft Dynamics 365 Blogs | CloudFronts - Page 3

CI/CD with TFS for Finance & Operations

Introduction

There are 100 million developers around the globe who use Git. However, if you want to work with customizations in Finance and Operations, you need to learn how to use TFS. Initially, I was frustrated and confused about this requirement, but as I learned more about how projects are structured both locally and in TFS, things started to make sense. TFS (Team Foundation Server) is Microsoft's take on source control management, continuously released and improved since 2005. TFS keeps the source code centralized and tightly integrated with the Microsoft ecosystem. Despite the differences, if you are familiar with Git, transitioning to TFS shouldn't be too difficult. TFS shares similar concepts with Git, such as checking in, branching, merging, version tracking, and other standard features of a source control management system. Understanding these similarities can make learning TFS easier and help you leverage its full potential in Finance and Operations projects.

Pre-requisites

Configuration

So, here we'll be starting with a new project. (If you are working with an already created repository, you can skip ahead.) Now, I'll add two folders here, "Main" and "Release". Later, we'll convert them into branches from Visual Studio. In TFS, we have the concept of branches, and in addition to that, branches can contain folders as well. Folders are used for organizing files and do not impact the version control flow directly; they are simply a way to keep the repository structured and manageable. Branches (similar to Git) are used to manage different versions or lines of development in the repository; they allow for parallel development and keep separate histories until changes are merged. Inside Main, I've added a Trunk folder as well.

Now, let's head into our development environment and connect this project to Visual Studio. I've clicked on "Continue without Code" for now. I'll click on View and then "Team Explorer". Here, it says "Offline" as there's currently no Azure DevOps project connected to it, so let's do that! I'll click on "Manage Connections" and then "Connect to a Project". As the project is hosted in my own organization on Azure DevOps, I'll use the same credentials to log in. Here, we can see all the different projects I have created within my organization in Azure DevOps. I'll click on the relevant one and click on Connect.

Here, we see the three sections in the Team Explorer view:
1 – Which credentials are being used to connect.
2 – The name of the root of the project and where it plans to download the content from TFS.
3 – The different components where we'll be doing most of our work once the initial setup is completed.

For now, I'll just click on "Map & Get". Here, we can see that the mapping was successful. Next, we click on the Source Control Explorer to see the actual content of the TFS server. Now, we can convert the "Main" and "Release" folders into branches. We can do this by right-clicking on the folder -> Branching and Merging -> Convert to Branch. After converting them to branches, the icon next to them changes. Next, I'll right-click on my "Main" branch and add two new folders here: "Metadata" and "Projects". Now, before we can use these folders anywhere, we need to "push" these changes to TFS. For that, we right-click on the "Trunk" folder and click on "Check in Pending Changes". Now, we add a comment here describing what changes have been done (similar to a commit message). At the bottom, we can see the files that have been created or modified.
Once the check-in is done, we can see that the "+" icon next to the folders disappears and we get a notification that the check-in has been completed successfully.

Now, this is where TFS shines as source control management for Finance and Operations. In Finance and Operations (FnO), models and projects are stored in separate folders. Using Git for this setup can be tricky, as it would either mean managing two different repositories or dealing with a huge .gitignore file. TFS makes it easier by letting you map local folders directly to TFS folders, simplifying the management process.

Here, we can see that our current mapping is a bit different from what we need; this is because of the "Map & Get" we did initially. To change that mapping, click on "Workspaces", then click on "Edit". Now, we click on a new line to create a new mapping. Here, I'm creating a mapping between the "Metadata" folder in the "Main" branch of TFS and "PackagesLocalDirectory", the place where all the models are stored for my system. Next, I'll create another mapping between the "Projects" folder and the local folder where my projects are stored. Once I click on "OK", it'll ask whether I want to load the changes. Click on "Yes" and move forward.

But nothing changes here in the Source Control Explorer. That's because the Source Control Explorer shows what is stored in TFS, and right now, nothing is; so we'll have to add some models or projects here. Either we can add existing ones or we can create a new one. Let's try to create a new model. Now that the model is created, we'll need to add it to our source control. Click on the blank space within the "Metadata" folder and select "Add Items to Folder". In the window, we can see that because of the mapping, we are sent to the local directory "PackagesLocalDirectory", and we can see our model inside it. Select that and click on "Next". In the next view, we can see all the files and folders contained within the selected folder. Out of these, we can exclude the "Delta" folders. After this, we are left with these folders for the different elements. We can remove the content from the "XppMetadata" folders as well, which leaves us with just the descriptor XML file. **Please do not exclude the descriptor file, as without it Visual Studio will not be able to refer to your model or its … Continue reading CI/CD with TFS for Finance & Operations


Leverage Postman for Streamlined API Testing in Finance and Operations

Introduction

Postman is an essential tool for developers and IT professionals, offering a robust platform for testing APIs, automating test processes, and collaborating efficiently across teams. In this blog, we're going to connect Postman to our Finance and Operations environment so we can test standard or custom APIs. This connection is a crucial step in ensuring that your APIs function as expected, and it helps streamline the integration of various business processes within your organization. Whether you're new to API testing or looking to optimize your current setup, this guide will walk you through the process with clear, actionable steps. I've already covered automating the testing in Postman in my blog here, so once the connections are in place you'll be good to go!

Pre-requisites

Configuration

We'll start with creating an App Registration in the Azure Portal. Go to the Azure Portal (on the same tenant as your FnO environment). Search for "App Registration" and click on "New Registration". Add a name for your new app and click on "Register." Once it is completed, you'll be taken to the Overview of the app. Here, click on "Add a certificate or secret" under "Client Credentials." Add an appropriate name and select the expiration date of the secret as necessary. Once you click on Add, you'll get a confirmation message that the client credential has been created and you'll be able to see the value of the secret. ** Be sure to copy this value and store it securely, as once we refresh or leave this page, the value will not be available. **

Now that everything is done on the Azure side, open your FnO environment and search for "Microsoft Entra Applications." Click on "New." Paste the "Application (Client) ID" into the "Client ID" field, then assign it a suitable name and a User ID. The permissions given to the User ID will determine the permissions for the app. For now, I've assigned the "Admin" user. That's all the configuration required on the FnO side.

Now, let's jump back into Postman. In Postman, we'll start with a blank workspace and create a simple collection. The first thing that I like to do is create different environments. In FnO, we have a Production environment, a Sandbox, and possibly multiple development environments, so it may be that different environments are using different apps. To represent these environments, I like to create different environments in Postman as well. This is done by going to "Environments", clicking on "+" to create a new environment, and giving it an appropriate name. Now, in this environment, I'll add my app details as environment variables. The values for these can be found as follows –

"grant_type" can be hard-coded to "client_credentials" and we can leave "Curr_Token" blank for now. So, at the end we get – We can also change the type to "Secret" so that no one else can see these values.

Now that the necessary credentials have been added to Postman, we'll set up our collection for generating the Auth Token. For that, we'll copy the "OAuth 2.0 token endpoint (v2)" from the "Endpoints" in our Azure App Registration's Overview screen. In Postman, click on your Collection, then "Authorization", then select "OAuth 2.0". I'll paste the URL we copied from the "OAuth 2.0 token endpoint (v2)" into the "Access Token URL" field, and I'll add my variables as defined in the environment variables.
If you get the error "Unresolved Variable" even after defining the variables, it could mean that the environment isn't set as active. Go back to the Environments list and mark it as active; this way we can easily swap between environments. Once the environment is marked as active, we can see that the variable is found correctly. I'll also ensure that my Access Token URL refers to the tenant ID as per my variable by embedding my "tenant_id" variable in it. Next, I'll click on "Get New Access Token." If everything has gone well, you'll be able to generate the token successfully. After that, you can give it a name and click on "Use Token" to use it.

I'll now create a simple request which we can use to test this token. Right-click on the "Collection", click on "Add Request", and give it an appropriate name. "{{base_url}}/data" returns a list of APIs available in the system. I've set the Authentication to "Inherit Auth from parent", which means it relies on the authentication set on the "Collection" for calling the request, as shown on the right side of the screen. Here we see that the request was executed successfully.

If for some reason you cannot use the standard Postman way of generating the token, you can create a separate request responsible for generating the Auth Token, store the token as a variable, and use it in your requests. From here, you can use the generated "access_token" and pass it as a "Bearer Token" to your requests. Or you can select the entire token and set it to your "Curr_Token" variable. And then you can pass this variable to the requests like –

From Postman, we can then share these collections (which contain API data) and environments (which contain credentials) separately as needed.

All Data Entities follow the pattern – {{base_url}}/data/{{endpoint}}
All services follow the pattern – {{base_url}}/api/services/{{service_group_name}}/{{service_name}}/{{method_name}}

If I call a "GET" request on them, I get the details of the services; for instance, here I'm getting the type of data I have to send in and the response I'll be getting back, in the form of object names. Moving back one step, I'm getting the names of the operations (or methods) within this service object. Moving back one step, I'm getting the services within this service group. Moving back one step, I can see all the service groups available in the system. To actually run the logic behind the services … Continue reading Leverage Postman for Streamlined API Testing in Finance and Operations


Reduce Storage Usage for Business Central using Data Administration

Introduction

By default, Business Central comes with 80 GB of storage capacity across three sandbox environments and one production environment, with an additional 3 GB per Premium license, 2 GB per Essential license, and 1 GB per Device license. Depending on your business volume, these storage limits may run out if the data is not managed properly. Business Central now comes with a one-stop view where you can manage (compress or delete) entries to reduce storage usage – "Data Administration."

Pre-requisites
Business Central Cloud/On-Prem

References
Manage Storage by Deleting Documents or Compressing Data – Business Central | Microsoft Learn

Configuration

In Business Central, we've had the option to view capacity usage from the Admin Center for a while now. Recently, a one-stop view to check and manage capacity usage has also been added within Business Central itself – Data Administration. It can be found directly from the global search. The first time we open this, we are greeted with an empty view; the data is loaded after we click on Refresh to load the latest data. You can also configure it so that the data is loaded automatically in the background every so often.

Here, we get the options for Data Cleanup, where we can delete data that isn't required anymore. All of the below options open a similar processing report where you can set filters which are used to delete the records as needed. The "Delete Detached Media" action opens another page which I've discussed in depth in another blog.

The second action group holds actions which compress the ledger entries, which can drastically reduce the storage space used. It is important to note that you can only compress entries older than five years yourself, and only entries that belong to closed fiscal years and are themselves closed (Open is set to false). You can configure the compression so that there is one entry per day, week, month, quarter, or year, or one entry for the whole period defined for compression. You also have the functionality to delete empty registers from here. If these individual actions seem overwhelming, Microsoft also provides a Data Administration wizard which simplifies this process and allows you to manage capacity via a wizard.

Conclusion

Thus, we saw how we can use the standard data administration tools to manage the capacity of a Business Central environment, which can help the system run much more efficiently in terms of both performance and cost. We hope you found this article useful, and if you would like to discuss anything, you can reach out to us at transform@cloudfronts.com


Item Availability Overview – A quick glance at the Item’s Inventory levels

While going through some sales documents, I noticed that the page that appears when I click on "Show Details" in the low-inventory notification has been updated! When we click on "Show Details" now, we're taken to a page named "Item Availability Check". Furthermore, it includes options to directly create a Purchase Order or a Purchase Invoice from this page. If a vendor is specified in the "Vendor No." field of the Item Card, the Purchase Order/Invoice is automatically generated for that vendor. In the scenario where multiple vendors are set up in the Item Vendor Catalog instead of the Vendor No., all the vendors are displayed, and the one selected by the user is used to create the Purchase Order/Invoice. In both cases, the purchase line will reflect the shortfall as the quantity. If the item has any substitutes available, the "Substitute Exists" field indicates this, and clicking on it opens the Item Substitutions page. Further, if you click on "All Locations", the "Item Availability by Location" page opens. That's all! Just wanted to share something new I learned recently. We hope you found this article useful, and if you would like to discuss anything, you can reach out to us at transform@cloudfronts.com


Actionable Error Messages in Business Central

Introduction

Error handling is an important concept in every technical field. It helps programs deal with unexpected problems and mistakes smoothly. It makes sure software works reliably and doesn't crash unexpectedly. Error handling also helps developers find and fix issues quickly, making the software better for users. Plus, it gives users clear messages when something goes wrong, making their experience smoother. It shows that the team has considered the scenario and has measures in place for it, indicating a well-designed solution. Microsoft has an excellent document which lists the things to keep in mind for writing resilient code. In Business Central, we have try functions to handle errors and the Error function to show those errors to the users. In this blog, we'll learn how we can enhance error messages so that users can resolve the errors themselves, or at the very least so that we can point them towards where the error is.

Pre-requisites
Business Central OnPrem/Cloud

References
Actionable errors
Try Methods for Error Handling
Robust Coding Practices
Error Info – Business Central Docs

Explanation

Before we get to the code, let's set a little context. For error handling, Microsoft has two categories in Business Central. ErrorInfo is a data type used for error handling and reporting. It can be used to hold information about errors that occur during the execution of code. It has additional properties and actions that can be used to define its behavior for the end user. The ones that are most useful are –

The "Add Action" procedure takes a codeunit and a method name as input. To pass input into this procedure, we add an "ErrorInfo" object as a parameter to the method, and if we want to specify details of the record where the error is happening or where the fix is to happen, we can use the following procedures. The "Add Navigation Action" only takes a caption as input, so to tell the action which page and which record to open, we have the following procedures. If you are passing the Page No. and System Id to the procedure which handles the error, then the same can be accessed there as well.

Code

Here, I've taken a sample scenario where the value of one field depends on the value of another field on the Sales Order. Basically:

I've set it up so that these validations are triggered when the Sales Order is posted. The same thing goes for the "Not Blank" scenario, so I'm not writing it out here. So, if I try the second scenario, where the Type is blank and the field has some value, we get the following error message. If I click on "Copy Details", I can see the detailed message that I added for this ErrorInfo. If I click on the "Make Mandatory Field Blank" action, I can make the "Some Important Field" blank. The code behind the action "Make Mandatory Field Blank" is as follows –

I've used messages to confirm that the values I passed at the origin of the error are flowing into the procedure. Here are the messages –

Now, some of you might be wondering: if this was an error message where one field was dependent on another, then it should have been a validation. And yes! That is correct, and here is how it would look. Here, I've used both "Add Action" and "Add Navigation Action" on the ErrorInfo. All of the parameters point to the Customer, so this opens the Customer Card for the specified Customer.
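To make the pattern concrete, here is a minimal AL sketch of an actionable error. It uses a hypothetical blocked-customer scenario rather than the exact custom fields from the blog's screenshots, and the object names and numbers are assumptions; treat it as an outline of how Create, the context properties, AddAction, AddNavigationAction, and the handler procedure fit together, not as the author's original code.

```al
codeunit 50120 "Customer Error Actions"
{
    // Raises an actionable error when the customer is blocked, offering
    // both a "fix it for me" action and a "take me there" navigation action.
    procedure CheckNotBlocked(Customer: Record Customer)
    var
        ErrInfo: ErrorInfo;
    begin
        if Customer.Blocked = Customer.Blocked::" " then
            exit;

        ErrInfo := ErrorInfo.Create(StrSubstNo('Customer %1 is blocked and cannot be used.', Customer."No."));
        ErrInfo.DetailedMessage := 'Remove the block on the customer, or choose a different customer.';
        // Context that flows into the handler and the navigation action.
        ErrInfo.RecordId := Customer.RecordId;
        ErrInfo.SystemId := Customer.SystemId;
        ErrInfo.PageNo := Page::"Customer Card";
        // "Fix it for me": runs UnblockCustomer in this codeunit when clicked.
        ErrInfo.AddAction('Unblock Customer', Codeunit::"Customer Error Actions", 'UnblockCustomer');
        // "Take me there": opens the page set in PageNo for the record set above.
        ErrInfo.AddNavigationAction('Open Customer Card');
        Error(ErrInfo);
    end;

    // Handler invoked by AddAction; it receives the same ErrorInfo instance,
    // so the SystemId stored at the error site is available here.
    procedure UnblockCustomer(ErrInfo: ErrorInfo)
    var
        Customer: Record Customer;
    begin
        if Customer.GetBySystemId(ErrInfo.SystemId) then begin
            Customer.Validate(Blocked, Customer.Blocked::" ");
            Customer.Modify(true);
        end;
    end;
end;
```

The handler has to be a public procedure with a single ErrorInfo parameter in the codeunit named in AddAction; the RecordId, SystemId, and PageNo set where the error is raised are what the navigation action and the handler use to find the record.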
Conclusion

You can refer to the "Actionable Errors" documentation for the best practices and patterns for which type of actionable error to use and where to use it. Thus, we learned how to utilize actions within error messages in Business Central to assist users in resolving errors more effectively. We hope you found this article useful, and if you would like to discuss anything, you can reach out to us at transform@cloudfronts.com.


What is “Database Wait Statistics” in Business Central?

Introduction:

A "wait" typically refers to the amount of time during which a database session waits for an event to complete before it can proceed with execution. Waits can arise for many reasons in a database system, and understanding them is important for tuning and optimizing performance.

References:

Explanation:

Waits in SQL are broadly grouped into three categories:

Resource Waits: These happen when a worker needs access to a resource, like data or system resources, but it's not available because another worker is using it. Examples include waiting for locks, system latches, or for data to be read from the network or disk.

Queue Waits: These occur when a worker is waiting for a task to be assigned to it. Think of it like waiting in line for a job to do. This commonly occurs with system tasks like deadlock detection or cleaning up deleted records. Even if there's no immediate task, workers might still check periodically.

External Waits: These occur when a worker is waiting for something outside the SQL Server environment to finish, like a call to an external procedure or a query to a linked server. It's important to note that just because a worker is in an external wait doesn't mean it's idle; it might be actively running external code.

In the context of Business Central, we see the following wait types:

Buffer IO: This type of wait occurs when a database session is waiting for data to be read from or written to the buffer cache, which is an area of memory used to cache data pages from disk.

Buffer Latch: Buffer latch waits happen when a session is waiting to acquire a latch on a buffer in memory. Latches are used to protect access to in-memory data structures, and buffer latch waits can occur when multiple sessions are contending for access to the same buffer.

Compilation: Compilation waits occur when a session is waiting for a SQL query or stored procedure to be compiled and optimized by the database engine.

CPU: CPU waits occur when a session is waiting for CPU resources to become available for query processing.

Idle: Idle waits occur when a session is not actively performing any work and is waiting for something to do.

Latch: Latch waits, as mentioned earlier, happen when a session is waiting to acquire a latch on a data structure in memory.

Lock: Lock waits occur when a session is waiting for a lock on a resource that is held by another session.

Memory: Memory waits occur when a session is waiting for memory resources to become available. This can include waits for memory allocations, deallocations, or other memory-related operations.

Network IO: Network IO waits occur when a session is waiting for data to be sent or received over a network connection.

Other: This category typically includes waits that don't fit into the other specific categories listed.

Other Disk IO: This is similar to Buffer IO waits but encompasses other disk-related operations beyond just buffer reads and writes.

Parallelism: Parallelism waits occur when a session is waiting for other parallel threads to complete their tasks.

Preemptive: Preemptive waits occur when a session is waiting for an external operation to complete, such as an operating system call.

Service Broker: Service Broker waits occur when a session is waiting for a message to be sent or received via the Service Broker feature in SQL Server.

SQL CLR: SQL CLR waits occur when a session is waiting for a Common Language Runtime (CLR) operation to complete.

Tran Log IO: Transaction Log IO waits occur when a session is waiting for data to be read from or written to the transaction log.

Transaction: Transaction waits occur when a session is waiting for a transaction to complete.

User Wait: User waits are general-purpose waits that occur when a session is waiting for some user-defined event to occur.

Worker Thread: Worker thread waits occur when a session is waiting for a worker thread to become available for query processing.

Conclusion:

Thus, we saw how we can use the "Database Wait Statistics" page in Business Central to identify performance bottlenecks in the system. We hope you found this article useful, and if you would like to discuss anything, you can reach out to us at transform@cloudfronts.com.


Use Database Access Intent List to Boost Performance in Business Central

Introduction

For any business application, database replication is a necessity for the application to be highly available, fault-tolerant, and performant without data throughput issues. Business Central, too, replicates its database using a technique known as "Read Scale-Out", a leader/follower (also called master/slave) replication architecture. Basically, the business operations (codeunits, pages, POST/PUT/DELETE API calls) which create data in the system are relatively quick compared to analytical operations (reports, queries, GET API calls) which read a whole bunch of data from a lot of tables at once. Performing both business and analytical operations on the same database can therefore cause performance issues, as tables can be locked by an analytical operation while a business operation tries to access or modify that data.

A solution for this is using multiple copies of the database in a leader/follower architecture. All the write transactions are directed towards the leader database and are then forwarded to the follower databases. Read transactions can be served by either the leader or a follower database. Please note that this only happens for production environments; sandbox environments only have the primary database.

Side Note

If you're wondering what happens when a user reads from a follower database before the leader database was able to send the updated information there (this is called a stale replica): this is an accepted risk when using this architecture. According to the CAP theorem, only two of the three properties (consistency, availability, and partition tolerance) can be guaranteed. Out of these, partition tolerance is unavoidable, as network failures are inevitable, so most systems have to choose between consistency and availability. In most cases, RDBMS systems choose consistency over availability (as does Business Central), and most NoSQL databases choose availability over consistency.

Pre-requisites
Business Central Cloud/OnPrem

References

Explanation

Setting the property DataAccessIntent to ReadOnly doesn't guarantee that all the operations a particular object performs are going to be routed via the replica database. For example, consider a case where we are using a processing report to update a field on the Item table based on calculations done using a Query object. Because the processing report intends to update the Item table, the operation is directed to the primary database; when the query is then executed to generate the necessary value, it still runs against the primary database. To summarize, the database is not switched in the middle of a transaction.

For API pages where we are only going to be fetching data from Business Central, we have to set the API page's Editable property to false, and only then can we set the DataAccessIntent to ReadOnly. We don't have this property for any other page types. For reports, we can set the DataAccessIntent property directly, and if a processing report tries to make any modifications to the data, we end up with a run-time error. For queries, we can set the DataAccessIntent property directly as well, with the same conditions as the report object, but in effect, the only time queries benefit from the replica database is when they are used directly as APIs. Almost all OData GET requests are directed to the replica database by default in Business Central on Cloud.
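To show where the property sits in code, here is a hedged AL sketch of a read-only query and a GET-only API page. The object names, numbers, and exposed fields are illustrative assumptions, not objects from the blog; the point is simply where DataAccessIntent and Editable go.

```al
query 50121 "Customer Balances RO"
{
    // Reads issued by this query are routed to the read-only replica by default.
    // The intent is fixed for the whole transaction, so this only helps when the
    // query is not running as part of a write operation.
    DataAccessIntent = ReadOnly;

    elements
    {
        dataitem(Customer; Customer)
        {
            column(No; "No.") { }
            column(Name; Name) { }
            column(BalanceLCY; "Balance (LCY)") { }
        }
    }
}

page 50122 "Customer Balances API"
{
    PageType = API;
    APIPublisher = 'cloudfronts';
    APIGroup = 'demo';
    APIVersion = 'v1.0';
    EntityName = 'customerBalance';
    EntitySetName = 'customerBalances';
    SourceTable = Customer;
    DelayedInsert = true;
    // For API pages, the page must be non-editable before ReadOnly intent is allowed.
    Editable = false;
    DataAccessIntent = ReadOnly;

    layout
    {
        area(Content)
        {
            repeater(Records)
            {
                field(no; Rec."No.") { }
                field(name; Rec.Name) { }
                field(balanceLCY; Rec."Balance (LCY)") { }
            }
        }
    }
}
```

Reports accept the property in the same way as queries; ordinary list and card pages do not have it, which matches the rules described above.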
In the on-premises environment, we have a setting, "ODataReadonlyGetEnabled", that controls this behaviour. Further, there is a list page in Business Central, "Data Access Intent List", which can be used to modify the data access intent of any page, query, or report object. The "Default" value indicates that the object should use the pre-defined value defined in AL. The same rules as above are followed when we update the "Data Access Intent" values on the Data Access Intent List page.

Conclusion:

Thus, we saw how the Business Central architecture uses the read scale-out method to ensure consistency and availability, and how we can leverage it to boost our application's performance. Happy Coding!


Using Notifications in Business Central via AL

Introduction

Notifications in Business Central are alerts that appear in the notification bar based on user actions. Notifications stack up from top to bottom, lasting until the user dismisses them, including those from sub-pages. Validation errors are prioritized and shown before other notifications. We can use this to alert the user about something without pulling all of their attention towards it, the way messages or errors do. Notifications also allow users to take a corrective action through an action button embedded directly in the notification. Let's see how it works.

Source Code

Pre-requisites
Business Central OnPrem/Cloud.

References
Notification – Business Central Docs

Configuration

Here, as an example, I've created one simple page which takes two inputs:
1. The message that is to be shown in the notification.
2. The message to be shown after the user clicks on the action embedded in the notification.

I also have two actions which I'll be using to show/hide the notification itself. Both of those combined result in a page like below –

Now, here is the list of procedures that are available on a "Notification" variable. Let us walk through these one by one.

Message – Specifies the content of the notification that appears in the UI. The Message function is what we use to decide what the notification will say. I set up a global variable (Message) on my page so the user can type in a value directly, and that value will appear in the message.

Scope – Specifies the scope in which the notification appears. According to the Microsoft Docs, it is meant to specify the context in which the notification appears. However, for now we only have "LocalScope" available as an option, so I can't comment much on this.

Send – Sends the notification to be displayed by the client. Send is the function used to actually trigger the notification in the UI. It returns a boolean value indicating whether the notification was triggered successfully or not.

Set Data – Sets a data property value for the notification.
Get Data – Gets a data property value from the notification.
Has Data – Returns a boolean value indicating whether the notification has that value.

These three functions work similarly to a dictionary's Get, Set, and Has functions. As notifications can be used to perform actions, we need to store some data in them. This data is stored as key-value pairs using the "Set Data" function. Later, we can retrieve it using the "Get Data" function by passing in the specific key. However, if the key does not exist, we get a run-time error, so we can use the "Has Data" function to check whether our notification has the specified key. Please note that the data in the notification is stored until the user dismisses the notification or exits the page.

Add Action – Adds an action on the notification. Here, in the "Show Notification" action, I have used the "Set Data" function to store the data from the global variable "MyData" in the notification, with "MyData" as the key as well. Then, we call the "Add Action" function with the following parameters. In the "Notification Action" codeunit, I've created a simple procedure which checks, gets, and then messages out the value set in the "MyData" key. And so, we get the following output when we click on the action button on the notification. When we click on the action button, the notification automatically disappears.

Recall – Recalls a sent notification.
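For reference, here is a minimal, self-contained AL sketch of the pattern described above. The page and codeunit names, object IDs, and the "MyData" key are illustrative assumptions rather than the blog's exact source; only the Notification methods discussed here (Message, Scope, SetData, AddAction, Send, Recall) are taken from the explanation.

```al
page 50123 "Notification Demo"
{
    PageType = Card;
    ApplicationArea = All;
    UsageCategory = Tasks;
    Caption = 'Notification Demo';

    layout
    {
        area(Content)
        {
            field(MessageToSend; MessageToSend)
            {
                ApplicationArea = All;
                Caption = 'Notification Message';
            }
        }
    }

    actions
    {
        area(Processing)
        {
            action(ShowNotification)
            {
                Caption = 'Show Notification';
                ApplicationArea = All;

                trigger OnAction()
                begin
                    MyNotification.Message(MessageToSend);
                    MyNotification.Scope := NotificationScope::LocalScope;
                    // Store a value on the notification so the handler can read it later.
                    MyNotification.SetData('MyData', 'Value stored when the notification was sent');
                    // Embed an action button; it calls OnNotificationAction below when clicked.
                    MyNotification.AddAction('Do Something', Codeunit::"Notification Action Demo", 'OnNotificationAction');
                    MyNotification.Send();
                end;
            }
            action(HideNotification)
            {
                Caption = 'Hide Notification';
                ApplicationArea = All;

                trigger OnAction()
                begin
                    // Recalls the notification that was sent above.
                    MyNotification.Recall();
                end;
            }
        }
    }

    var
        MyNotification: Notification;
        MessageToSend: Text;
}

codeunit 50124 "Notification Action Demo"
{
    // Invoked when the user clicks the action embedded in the notification.
    procedure OnNotificationAction(SentNotification: Notification)
    begin
        if SentNotification.HasData('MyData') then
            Message(SentNotification.GetData('MyData'));
    end;
}
```

Note that the handler must be a public procedure taking a single Notification parameter in the codeunit named in AddAction.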
However, if we want to manually recall a notification, we can use the "Recall" procedure. Here is what happens after I click the "Hide Notification" button.

Id – Gets or sets the GUID for an individual notification. We can use the Id function to get or set the ID of a particular notification. We can use this in conjunction with the other functions by passing the ID of one notification and using that to get the data from that notification, instead of passing the Notification variable itself. Here, I've made some changes to the "Show Notification" action so that it now pops up two notifications instead of one, and I am storing the ID of the second notification. This is the output. Now, here is the "Hide Notification" action, which uses the ID saved earlier to recall just the copy notification. And this is the output.

Conclusion

Thus, we saw how we can use notifications to provide non-intrusive alerts to the user, along with actions. Happy Coding!


Configure an Azure Connector in LCS

Posted On March 7, 2024 by Rahul Bansode

Introduction

In this blog, we'll be looking into configuring the Azure Connector in LCS with the Azure Resource Manager so that LCS can deploy your resources to Azure.

Pre-requisites
An Azure subscription that you are a co-administrator in.

References

Configuration

Go to Microsoft Dynamics Lifecycle Services and log in with your account. In LCS, when we try to create a cloud-hosted environment for the first time, it prompts us to create an Azure Connector first. You can also access this by going to your Project Settings and then "Azure Connectors." Once we reach this screen, we have to click on Authorize in the organization where we want to authorize. Please do ensure your account has the necessary permissions for these actions. Once this is done, click on Microsoft Azure Portal, as there are a few configurations we need to do in the Azure Portal.

Click on Subscriptions. From the Subscriptions list, we can note down the Subscription ID, as we will need it while creating the Azure Connector. The "Subscription ID" is also available in the Overview section of the subscription. Then go to the Access Control (IAM) tab, click on Add, and then Add Role Assignment. Then go to Role -> Privileged Administrator Roles, search for "Contributor", click on it, and then click on Next. In the Members tab, click on "User, group or service principal" and click on Select Members. After that, search for and add "Dynamics Deployment Services [wsfed-enabled]" and your own user to this role assignment. Once that is done, we'll get a confirmation message.

After that, we can move back to LCS to configure the Azure Connector. We click on Add to create a new Azure Connector and get the following pop-up. Here we add a name for the connector, the Azure Subscription ID, and the domain name. The name can be anything you want; the domain name in most cases is the part of your email address after the @. For example, it'd be "microsoft.com" for rbansode@microsoft.com. For the Azure Subscription ID, we have already noted that down in the previous steps. Once we add the necessary values and click on Next, we get the following pop-up. We've already completed the necessary steps in the Azure Portal, so we can simply click on Next. After that, we get the following pop-up. We've completed the steps mentioned in the "Ensure you are a subscription user" section. If for some reason you are facing any difficulties with that, you can also try the steps from the "Apply a subscription tag" section.

Apply a subscription tag

When you click on Get a Code, you'll get the following pop-up, which includes a unique verification code. We copy this and head to the Azure Portal. Then go to your subscription in the Azure Portal, head to the Tags section, and create a new entry with the name "LifecycleServicesAuthCode" and the unique verification code from LCS as the value. If neither of those methods works, there is a soon-to-be-deprecated method mentioned as well, where you upload the certificate downloaded from LCS into the "Management Certificates" of your Azure subscription. Hopefully, one of these three methods works out for you and you'll get the following pop-up. Once you click on Connect, you'll see an entry created in your Azure Connectors. This indicates that your Azure account has been linked and LCS can now use it to create resources in Azure on your behalf.

Side Note

If you see the following error message, it means there was an error with one of the three suggested approaches you chose. You can try another approach and start over.
Conclusion Thus, we saw how to configure the Azure Connector in LCS. Happy Coding!


Create a New Environment in LCS for D365 Finance and Operations

Introduction

In this blog, we'll be looking into creating a new environment for D365 Finance and Operations or D365 Commerce.

Pre-requisites

References

Configuration

Go to Microsoft Dynamics Lifecycle Services and log in with your account. If you select D365 Commerce, you get the following screen. If you select D365 Finance and Operations, you get another screen where you have to specify whether the project is an actual implementation or just for evaluation, after which you get the same screen as below. Once the project is created, we get the following screen. From here, we click on the hamburger menu at the top and then click on Cloud Hosted Environments. Click on Add to create a new environment. If you get the below pop-up asking to configure an Azure Connector, please refer to my blog – "Configure an Azure Connector in LCS". Once you have an Azure Connector configured, you can click on Add again and get the following pop-up.

After selecting the application and platform version, you'll get the option to select the environment topology.

DEMO – A demo environment includes only Microsoft demo data. You can use a demo environment to explore default features and functionality.
DEVTEST – A DevTest environment is for development or build.

Then we get another pop-up to select the environment topology. After that is selected, we decide the environment name and the size of the VM to be used for this environment. You can read more about VM sizes here – VM sizes – Azure Virtual Machines | Microsoft Learn. Once we click on Next, we get the last pop-up, after which the environment gets deployed. Once we click on Deploy, it takes about 6-8 hours to deploy the environment, after which it'll be available in the Cloud Hosted Environments section. If, for some reason, you try to create an environment with the latest platform and application version and that deployment fails, you can try to create an environment one platform/application version below that.

Conclusion

Thus, we saw how to create an environment in LCS for either D365 Finance and Operations or D365 Commerce. Happy Coding!

