
Tag Archives: SQL Server

Creating and Accessing Blob Storage with Azure Data Factory: A Complete Guide

Introduction: This guide will walk you through creating and accessing Azure Blob Storage and integrating it with Azure Data Factory to automate data pipelines. From setting up a storage account and managing containers to configuring pipelines and transferring data to an Azure SQL Database, this step-by-step tutorial gives you a comprehensive understanding of the process.

Steps:
1. Sign in to the Azure Portal and open the Storage accounts service.
2. Click + Create to initiate the creation of a new storage account.
3. Fill in the required fields such as subscription, resource group, and region, and review all the settings before proceeding.
4. Click Create to create the storage account.
5. Once the storage account is created, open it by clicking Go to Resource.
6. In the storage account, navigate to the Containers section and click + Container to create a new container for storing your files.
7. Click on the container you just created to access its contents.
8. Upload the desired JSON file into the container by clicking Upload and selecting the file from your local system.
9. Confirm that the uploaded file is now listed in the container.
10. Go back to the Azure Portal and search for Azure Data Factory to open the ADF service.
11. From the ADF home screen, go to Author > Datasets and click + New Dataset to create a new dataset for your Blob Storage.
12. Select the Azure Blob Storage dataset type, since you are working with data stored in Blob Storage.
13. Choose the data format that matches the file you uploaded, such as JSON, and click Continue.
14. Enter the necessary details for your dataset, including the file path and format settings. Select the appropriate Authentication type, specify the Storage account where the Blob Storage resides, and click Create to finalize the dataset creation.
15. Verify the settings and click OK to confirm the dataset configuration.
16. Navigate to the Pipelines section and click + New Pipeline to create a pipeline that will define your data flow; the pipeline gets created successfully.
17. In the pipeline, select Azure SQL Database as the dataset type and click Continue to set up the SQL Database dataset.
18. Provide the necessary Linked Service details for your SQL database and click Create.
19. After configuring the source dataset, the target dataset, and the pipeline, publish all the elements to save your work.
20. Once the pipeline is running successfully, verify its functionality by querying the destination database to ensure data is being transferred properly:
a. Go to the SQL Database service and select the relevant database.
b. Select the database against which you want to run the query.
c. Log in with your credentials.
d. Write a simple test query to verify that data has been transferred from Blob Storage to the SQL Database, execute it, and confirm that the expected output is returned (a sample query follows the conclusion below).

Conclusion: Integrating Azure Blob Storage with Azure Data Factory is a powerful way to manage and automate data workflows in the cloud. This guide walked you through creating a storage account, configuring containers, uploading data, and designing a pipeline to process and transfer data to Azure SQL Database. By following these steps, you can efficiently handle large-scale data integration and ensure seamless communication between your data sources and destinations. Azure Data Factory not only simplifies the process of orchestrating data pipelines but also provides robust options for monitoring and optimizing workflows. Whether you are managing JSON files, processing transactional data, or setting up complex ETL processes, Azure's ecosystem offers a reliable and scalable solution. Start exploring these tools today to unlock new possibilities in data-driven operations! We hope you found this blog useful, and if you would like to discuss anything, you can reach out to us at transform@cloudfronts.com.
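Here is a minimal sketch of the test query from step 20d, assuming the pipeline copies the JSON records into a hypothetical dbo.Customers table (replace the name with the sink table your pipeline actually writes to):

-- dbo.Customers is a placeholder; use the table your pipeline loads.
SELECT TOP (10) * FROM dbo.Customers;

-- A row count is a quick sanity check that the copy activity moved data.
SELECT COUNT(*) AS TransferredRows FROM dbo.Customers;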

How to Import/Restore SQL Database (.bak Files) in SQL Server

Posted On December 9, 2021 by Shruti Gupta

In this blog we will see how to import/restore a SQL database (.bak) file in SQL Server. If you have a newly downloaded .bak file, or a backup of any other database in .bak format, you can import/restore it into your SQL Server as follows:
1. Open SSMS and connect to your SQL Server.
2. Expand the SQL Server node, right-click Databases, and choose Restore Database.
3. In the General settings, select Device and then the ellipsis button.
4. Click Add to add the .bak file.
5. Browse to the file location, select the .bak file, and click OK.
6. Click OK, and your database will be restored.
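If you prefer a script over the GUI, here is a minimal T-SQL sketch of the same restore; the database name and file path are placeholders for your own values:

-- Restore a database from a .bak file (adjust the name and path).
RESTORE DATABASE [MyDatabase]
FROM DISK = N'C:\Backups\MyDatabase.bak'
WITH RECOVERY;
-- Add MOVE clauses to the WITH list if the data and log file paths differ on this server.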

How to set different connection colors in SSMS

Posted On June 21, 2021 by Jaison Menezes

Microsoft SQL Server Management Studio (SSMS) has an interesting feature that helps users quickly determine which connection tab they are currently using: custom connection colors for different SQL Server nodes. In this blog we are going to see how to use this feature.
1. Open SSMS, and in the Connect to Server window, click the Options button to show advanced options.
2. In the Connection Properties tab, select the database from the dropdown to which you intend to apply a custom color.
3. After selecting the database, click the Use custom color checkbox and select the desired color. You can select any colors; I usually prefer red for production databases and green for test databases.
4. Click the Connect button and restart SSMS.
As you can see, green has been applied, which means that green is now associated with that database. Thank you for reading my blog, hope it helped!

Displaying Line Number in SSMS Query Editor

Sometimes while working on large SQL queries we may face errors. SSMS tells us the line where the error occurred after the faulty query executes, but by default there are no line numbers in the query editor, so fishing out these errors can be tedious. The solution is to add line numbers to the query editor; this option is disabled by default. In this blog we will see how to enable line numbers in SSMS.
1. From the toolbar, click Tools, then select Options.
2. In the window that appears, select Text Editor.
3. Under Text Editor, click All Languages and enable the Line numbers option.
4. Click the OK button.
As you can see, the corresponding line numbers are now visible; an example of the kind of error they help locate is shown below. Hope you found this article helpful! Thanks for reading.
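As a small illustration (this snippet is a made-up example), a typo like the one on the third line below produces an error whose reported line number is easy to find once numbering is on:

SELECT name
FROM sys.objects
WHER type = 'U';  -- typo: WHER instead of WHERE
-- SSMS reports an error similar to: Incorrect syntax near 'WHER'.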

How to Convert a SQL Table into an Excel Sheet and an Excel Sheet into a SQL Table

Introduction: Sometimes a user wants to analyze their SQL query results in Microsoft Excel, or import existing Excel data sheets as tables in SQL in order to perform various data transformations. Both scenarios can be handled as follows.
Converting data from a SQL table to an Excel sheet:
1. Go to the Tools option on the SSMS toolbar and select Options.
2. In the window that opens, select Query Results, then Results to Grid from the drop-down.
3. Check the Include column headers when copying results checkbox and click the OK button.
4. Click the intersection of the first row and first column of the SQL table to select the entire table, then copy it by pressing Ctrl + C or by right-clicking and selecting Copy.
5. Open a new Excel sheet and paste the contents after selecting the first cell. If some columns show values as #####, just increase the column width.
Converting data from an Excel sheet to a SQL table:
1. Right-click the database into which you want to import the Excel sheet as a table, go to Tasks, and select Import Data.
2. Select Microsoft Excel as the source in the dialog box, along with the Excel version of your file. If the system reports that the Microsoft.ACE.OLEDB driver is not found, download the required driver version from the Microsoft website.
3. Choose SQL Server Native Client as the destination and enter the server name and credentials.
4. Select the sheet you want converted into a SQL table and click Next.
5. Wait for the process to complete. The new sheet is created as a table in SQL.
Conclusion: Thus we saw how to convert a SQL table into an Excel sheet and vice versa; a scripted alternative for the import direction is sketched below. Thank you very much for reading, hope you enjoyed the article!
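As an alternative to the Import Data wizard, the same ACE OLEDB driver can be queried directly from T-SQL. This is a minimal sketch, assuming the driver is installed, ad hoc distributed queries are enabled, and a workbook exists at the path shown (the path and sheet name are placeholders):

-- Ad hoc distributed queries must be enabled once per server.
EXEC sp_configure 'show advanced options', 1; RECONFIGURE;
EXEC sp_configure 'ad hoc distributed queries', 1; RECONFIGURE;

-- Read Sheet1 of the workbook into a new SQL table.
SELECT *
INTO dbo.ImportedSheet
FROM OPENROWSET('Microsoft.ACE.OLEDB.12.0',
                'Excel 12.0;Database=C:\Data\Workbook.xlsx;HDR=YES',
                'SELECT * FROM [Sheet1$]');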

Connect D365 CRM CDS Database from SQL Server

Many times we wonder why we can't access the D365 CRM database directly from MS SQL Server, so here is my blog that will guide you on how to connect to the D365 CRM CDS database using MS SQL Server.
Steps to enable the D365 CRM CDS database for connections from MS SQL Server:
1. Log in to https://admin.powerplatform.microsoft.com/ using administrator credentials.
2. In the Environments section, click the environment for which you want to enable D365 CRM CDS for MS SQL Server. (In my case I am clicking DemoEnvironment.)
3. When the environment opens, click Settings in the header.
4. On the Settings page, click Product and then click Features.
5. When the Features page opens, enable TDS Endpoint (Preview) and click Save.
Now we have successfully enabled D365 CRM CDS for connections from MS SQL Server.
Steps to connect to the CDS database from MS SQL Server:
1. Open SSMS.
2. In the Connect to SQL Server window, enter the server name: your D365 CRM URL followed by a comma and the port number (5558), e.g. yourdomain.crm.dynamics.com,5558.
3. Select Authentication as Azure Active Directory – Password.
4. Enter the username (your admin user ID, e.g. admin@xyz.com) and your login password, then click Connect.
Now you have successfully connected to the D365 CRM DB. Write a SELECT query and test that it works; a sample is shown below.
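As a minimal test sketch, assuming the standard account table exists in your environment (the columns shown are illustrative):

-- The TDS endpoint is read-only, so only SELECT statements are supported.
SELECT TOP (10) accountid, name
FROM account;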

Testing SQL Server Connectivity Without Any Tool

Sometimes we want to test SQL Server connectivity but have no tools available to do so; in that case we can follow the steps below. Here we are going to test connectivity to an Azure SQL server.
1. Open Notepad and save an empty file with the .udl extension.
2. Open this file; it asks for server details and credentials.
3. We can also provide the other details, such as the type of data we want and the connection timeout.
4. Once we are connected to the data source, we can see the details on the All tab.
A sample of what the saved file contains is shown below. Hope this helps!
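For reference, a .udl file is just a small text file that stores an OLE DB connection string. A minimal sketch of its contents after a successful connection (the provider, server, and credentials below are placeholders):

[oledb]
; Everything after this line is an OLE DB initstring
Provider=MSOLEDBSQL;Data Source=yourserver.database.windows.net;Initial Catalog=YourDb;User ID=youruser;Password=yourpassword;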

Triggering Azure Pipeline from on premise SQL Server

In this blog, we are going to trigger an ADF pipeline whenever an insert, update, or delete operation is performed on an on-premise SQL Server.
Steps:
1. Create the ADF pipeline. In this case we have already created the pipeline and its dataflow.
2. Create a PowerShell script for authentication and for triggering the pipeline. To write it, we need the following details: Tenant ID, Application ID, Client Secret, Subscription ID, Resource Group name, API version, and pipeline name.
3. Create a job in SQL and a trigger on the table for insert, update, or delete operations. Make sure the SQL Server Agent is running.
4. Create a SQL job with the following job step: enter the step name, select PowerShell as the type, select the account that will be used to run the PowerShell script, and enter the code in the Command section.
5. Next, create triggers on the tables on which insert, update, and delete operations will be performed.
Now, whenever we insert, update, or delete rows in the table, the pipeline will be executed automatically. We have the following data in the Accounts table: the moment we update the currency of Account ID 1, the pipeline gets triggered automatically; after some time, check the status. A sketch of the trigger from step 5 is shown below. Hope the above helps!
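Here is a minimal sketch of the step 5 trigger, assuming the agent job created above is named 'Run ADF Pipeline' and the monitored table is dbo.Accounts (both names are placeholders):

-- Fires after any insert, update, or delete on dbo.Accounts.
CREATE TRIGGER trg_Accounts_RunPipeline
ON dbo.Accounts
AFTER INSERT, UPDATE, DELETE
AS
BEGIN
    SET NOCOUNT ON;
    -- Start the SQL Server Agent job whose PowerShell step triggers the pipeline.
    EXEC msdb.dbo.sp_start_job @job_name = N'Run ADF Pipeline';
END;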

Making SQL Server Accessible Over the Internet

We can make SQL Server accessible over the Internet with the following steps:
1. The system where SQL Server is installed should have a static IP.
2. Open SSMS, right-click the server > Properties, and check SQL Server and Windows Authentication mode.
3. Go to Server > Security > Logins and configure passwords for the users who will access SQL Server remotely. Here we are setting up a password for the user 'sa'.
4. Go to the server properties and check "Allow remote connections to this server".
5. Go to SQL Server Configuration Manager > SQL Server Network Configuration > Protocols for SQL and make sure TCP/IP is enabled.
6. Click TCP/IP and enter the port number in the IPAll section.
7. In the firewall settings, create an inbound rule for the port on which the server will listen; in this case we are selecting port 4729.
8. Next, enable port forwarding on the router (i.e., it will redirect requests received on the public IP and port to another IP and port combination). As the mapped IP we can enter the static IP of the system (i.e. 192.168.1.30).
9. Now, from another system outside the current network, connect to SQL Server by entering the server details and credentials, with the server name in the form IP\instance,port, e.g. 113.143.120.100\SQL,4729 (or just 113.143.120.100,4729 for the default instance).
A scripted equivalent of steps 2 and 3 is shown below. Hope the above helps!
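As a minimal scripted sketch of steps 2 and 3 (the password is a placeholder; use a strong one of your own):

-- Enable the sa login and set its password; mixed-mode authentication must already be on.
ALTER LOGIN sa ENABLE;
ALTER LOGIN sa WITH PASSWORD = N'UseAStrongPasswordHere1!';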

Adding Data Source to Tabular Model

In this blog we will see how to add a data source (SQL Server) to a tabular model.
Pre-requisite: SQL Server with the AdventureWorksDW2017 database.
1. Go to Tabular Model Explorer > Data Sources, click Data Sources, and select Import from Data Source; it will prompt you to select a data source. In this case we select SQL Server database.
2. Enter the server name and credentials, and select the database.
3. Once we select the database, it prompts us to select the tables from the database; choose the tables and click Load.
4. Once Load is clicked, it starts importing tables from the source database.
5. Once the tables are imported, we can see them under the Tables section, along with the relationships among them. While importing, it automatically detects the relationships among the tables, but if we want to create relationships ourselves, we can do so.
So we have just seen how to import a data source into a tabular model.
