Azure Data Factory (ADF) is a cloud-based ETL (Extract, Transform, Load) service for integrating and transforming data across a wide range of data stores. Moving data from SQL Server to Dataverse with ADF is especially useful for organizations on Microsoft’s Power Platform, since Dataverse is the data platform underlying Power Apps, Power Automate, and Dynamics 365.
This guide provides a clear, step-by-step process to transfer data from SQL Server to Dataverse using Azure Data Factory.
Prerequisites
Before beginning, make sure you have the following:
- Azure Subscription: A subscription with access to Azure Data Factory.
- SQL Database: A SQL Server (or Azure SQL) database containing the data you want to transfer.
- Dataverse Environment: A Dataverse environment where you want to load the data.
- Dataverse API Access: You’ll need API access permissions for Dataverse.
- Azure Data Factory Basics: Familiarity with Azure Data Factory setup and permissions.
Step-by-Step Guide
Step 1: Set Up Azure Data Factory
- Log in to the Azure Portal
  - Go to the Azure Portal and sign in with your account.
- Create a Data Factory
  - In the Azure Portal, search for Data Factory in the search bar.
  - Click + Create and fill out the required details:
    - Subscription: Select your Azure subscription.
    - Resource Group: Choose an existing resource group or create a new one.
    - Region: Select the region closest to your resources.
    - Data Factory Name: Provide a globally unique name for your Data Factory.
- Review and Create
  - After entering all the information, click Review + create.
  - Once validation passes, click Create to deploy your Data Factory.
- Access Data Factory
  - After creation, open the Data Factory and select Author & Monitor to start building your pipeline. If you would rather provision the factory from a script, see the sketch below.
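If you prefer scripting this step over clicking through the portal, here is a minimal sketch using the azure-mgmt-datafactory Python SDK. The subscription ID, resource group, factory name, and region are placeholders to replace with your own values.

```python
# pip install azure-identity azure-mgmt-datafactory
from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient
from azure.mgmt.datafactory.models import Factory

subscription_id = "<your-subscription-id>"   # placeholder
resource_group = "rg-data-integration"       # hypothetical resource group
factory_name = "adf-sql-to-dataverse"        # must be globally unique

credential = DefaultAzureCredential()
adf_client = DataFactoryManagementClient(credential, subscription_id)

# Create (or update) the Data Factory in your chosen region.
factory = adf_client.factories.create_or_update(
    resource_group, factory_name, Factory(location="eastus")
)
print(f"Factory {factory.name}: {factory.provisioning_state}")
```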
Step 2: Create Linked Services for SQL Server and Dataverse
Linked services in Data Factory define the connections to your data stores. Here, you’ll create linked services for both SQL Server and Dataverse; a scripted version of each is sketched after the corresponding sub-step.
2.1. Create Linked Service for SQL Server
- In the Data Factory portal, navigate to the Manage tab on the left sidebar.
- Under Connections, select Linked Services and click + New.
- Search for Azure SQL Database or SQL Server (depending on your SQL source) in the list.
- Fill out the required details:
  - Linked Service Name: Name it something like “SQLServer_LinkedService.”
  - Server Name: Enter your SQL Server’s name.
  - Database Name: Enter the name of your SQL database.
  - Authentication Type: Choose SQL authentication or Managed Identity.
  - User Name and Password: Provide credentials if using SQL authentication.
- Click Test Connection to ensure the connection is successful, then click Create.
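The same linked service can be created from code. The following is a sketch, assuming an Azure SQL source and the azure-mgmt-datafactory SDK; the connection string values are placeholders, and an on-premises SQL Server would additionally require a self-hosted integration runtime.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient
from azure.mgmt.datafactory.models import (
    AzureSqlDatabaseLinkedService,
    LinkedServiceResource,
    SecureString,
)

adf_client = DataFactoryManagementClient(DefaultAzureCredential(), "<your-subscription-id>")
resource_group, factory_name = "rg-data-integration", "adf-sql-to-dataverse"

# Placeholder connection string: swap in your server, database, and credentials.
sql_ls = LinkedServiceResource(
    properties=AzureSqlDatabaseLinkedService(
        connection_string=SecureString(
            value="Server=tcp:<your-server>.database.windows.net,1433;"
                  "Database=<your-database>;User ID=<user>;Password=<password>;"
                  "Encrypt=True;Connection Timeout=30;"
        )
    )
)
adf_client.linked_services.create_or_update(
    resource_group, factory_name, "SQLServer_LinkedService", sql_ls
)
```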
2.2. Create Linked Service for Dataverse
- Again, in the Linked Services section, click + New.
- Search for Dataverse (the connector may be listed as Dynamics 365 or Common Data Service) and select it.
- Fill out the required details:
  - Linked Service Name: Name it something like “Dataverse_LinkedService.”
  - Environment URL: Your Dataverse environment URL (e.g., https://cerebrix.crm.dynamics.com).
  - Authentication Type: Choose service principal (OAuth2-based) authentication.
  - Service Principal ID and Secret: Provide these if you’re using a service principal for authentication, or use interactive sign-in if available.
- Click Test Connection to verify, then click Create.
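Below is a comparable sketch for the Dataverse linked service using service principal authentication. The environment URL reuses the example above, while the app registration ID and client secret are placeholders; the CommonDataServiceForAppsLinkedService model and AADServicePrincipal authentication value are how the Python SDK exposes the Dataverse (Common Data Service) connector.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient
from azure.mgmt.datafactory.models import (
    CommonDataServiceForAppsLinkedService,
    LinkedServiceResource,
    SecureString,
)

adf_client = DataFactoryManagementClient(DefaultAzureCredential(), "<your-subscription-id>")
resource_group, factory_name = "rg-data-integration", "adf-sql-to-dataverse"

dataverse_ls = LinkedServiceResource(
    properties=CommonDataServiceForAppsLinkedService(
        deployment_type="Online",
        service_uri="https://cerebrix.crm.dynamics.com",          # your environment URL
        authentication_type="AADServicePrincipal",
        service_principal_id="<app-registration-client-id>",      # placeholder
        service_principal_credential_type="ServicePrincipalKey",
        service_principal_credential=SecureString(value="<client-secret>"),
    )
)
adf_client.linked_services.create_or_update(
    resource_group, factory_name, "Dataverse_LinkedService", dataverse_ls
)
```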
Step 3: Create Datasets for SQL Server and Dataverse
Datasets in Data Factory represent the data structures within linked services. You’ll create one dataset for SQL Server (source) and one for Dataverse (destination); a scripted version of each is sketched after the corresponding sub-step.
3.1. Create SQL Server Dataset
- Go to the Author tab, click +, and select Dataset.
- Choose Azure SQL Database (or SQL Server).
- In the dataset settings, select the Linked Service you created earlier (SQLServer_LinkedService).
- Choose the Table you want to copy from SQL. You can also use a query to select specific columns or filter data.
- Name the dataset, e.g., “SQLServer_Table_Dataset,” and click Create.
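Here is a scripted sketch of the source dataset; dbo.Customers is a hypothetical table name used only for illustration.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient
from azure.mgmt.datafactory.models import (
    AzureSqlTableDataset,
    DatasetResource,
    LinkedServiceReference,
)

adf_client = DataFactoryManagementClient(DefaultAzureCredential(), "<your-subscription-id>")
resource_group, factory_name = "rg-data-integration", "adf-sql-to-dataverse"

sql_dataset = DatasetResource(
    properties=AzureSqlTableDataset(
        linked_service_name=LinkedServiceReference(
            type="LinkedServiceReference", reference_name="SQLServer_LinkedService"
        ),
        table_name="dbo.Customers",   # hypothetical source table
    )
)
adf_client.datasets.create_or_update(
    resource_group, factory_name, "SQLServer_Table_Dataset", sql_dataset
)
```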
3.2. Create Dataverse Dataset
- Go back to the Author tab, click +, and select Dataset.
- Choose Common Data Service (Dataverse) as the dataset type.
- Select the Linked Service you created for Dataverse (Dataverse_LinkedService).
- Enter the Table Name in Dataverse where you want to load the data.
- Name the dataset, e.g., “Dataverse_Dataset,” and click Create.
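And the matching sketch for the Dataverse dataset; "account" stands in for the logical name of whichever Dataverse table you are loading.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient
from azure.mgmt.datafactory.models import (
    CommonDataServiceForAppsEntityDataset,
    DatasetResource,
    LinkedServiceReference,
)

adf_client = DataFactoryManagementClient(DefaultAzureCredential(), "<your-subscription-id>")
resource_group, factory_name = "rg-data-integration", "adf-sql-to-dataverse"

dataverse_dataset = DatasetResource(
    properties=CommonDataServiceForAppsEntityDataset(
        linked_service_name=LinkedServiceReference(
            type="LinkedServiceReference", reference_name="Dataverse_LinkedService"
        ),
        entity_name="account",   # logical name of the target Dataverse table
    )
)
adf_client.datasets.create_or_update(
    resource_group, factory_name, "Dataverse_Dataset", dataverse_dataset
)
```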
Step 4: Build the Data Pipeline
Now that your linked services and datasets are ready, it’s time to create the pipeline that moves data from SQL Server to Dataverse. A scripted version of the finished pipeline is sketched after the steps below.
- Create a New Pipeline
  - Go to the Author tab and select Pipelines.
  - Click + New Pipeline and name it, e.g., “SQLtoDataversePipeline.”
- Add a Copy Data Activity
  - In the Activities pane, search for Copy Data and drag it onto the pipeline canvas.
  - Select the Copy Data activity to configure it.
- Configure Source Settings (SQL Server)
  - In the Source tab of the Copy Data activity, choose SQLServer_Table_Dataset as the source dataset.
  - Optionally, add filters or specify a query if you want to transfer only a subset of the data.
- Configure Sink Settings (Dataverse)
  - In the Sink tab, select Dataverse_Dataset as the destination dataset.
  - Specify how you want to handle data in Dataverse:
    - Append to add new records.
    - Upsert to update existing records and insert new ones.
- Map Column Fields
  - Go to the Mapping tab to map the source SQL columns to the Dataverse columns.
  - For each SQL field, specify the corresponding Dataverse field, especially where the names differ.
- Configure Pipeline Settings
  - Set up error handling, logging, and data validation if needed by configuring the Settings and Advanced tabs.
- Save and Publish
  - Click Save, then Publish the pipeline to make it live.
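For reference, here is a sketch of the same pipeline defined in code. The upsert write behavior mirrors the sink option above, and the column mappings (CustomerName to name, Email to emailaddress1) are purely illustrative; replace them with your own source and target columns.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient
from azure.mgmt.datafactory.models import (
    AzureSqlSource,
    CommonDataServiceForAppsSink,
    CopyActivity,
    DatasetReference,
    PipelineResource,
)

adf_client = DataFactoryManagementClient(DefaultAzureCredential(), "<your-subscription-id>")
resource_group, factory_name = "rg-data-integration", "adf-sql-to-dataverse"

copy_activity = CopyActivity(
    name="CopySqlToDataverse",
    inputs=[DatasetReference(type="DatasetReference", reference_name="SQLServer_Table_Dataset")],
    outputs=[DatasetReference(type="DatasetReference", reference_name="Dataverse_Dataset")],
    # Optionally pass sql_reader_query="SELECT ..." to copy only a subset of rows.
    source=AzureSqlSource(),
    sink=CommonDataServiceForAppsSink(write_behavior="Upsert"),
    # Explicit column mapping from SQL columns to Dataverse columns (illustrative names).
    translator={
        "type": "TabularTranslator",
        "mappings": [
            {"source": {"name": "CustomerName"}, "sink": {"name": "name"}},
            {"source": {"name": "Email"}, "sink": {"name": "emailaddress1"}},
        ],
    },
)

adf_client.pipelines.create_or_update(
    resource_group,
    factory_name,
    "SQLtoDataversePipeline",
    PipelineResource(activities=[copy_activity]),
)
```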
Step 5: Test and Run the Pipeline
- Debug the Pipeline
  - In the pipeline editor, click Debug to test the pipeline without publishing changes. This helps ensure everything is set up correctly.
- Run the Pipeline
  - After debugging, click Trigger > Trigger Now to start the pipeline manually.
- Monitor Pipeline Execution
  - Go to the Monitor tab in Azure Data Factory.
  - Check the pipeline’s progress and watch for any errors.
  - You’ll see a log of each activity, including the data transferred and any issues encountered. Runs can also be started and monitored from code, as sketched below.
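The sketch below starts a run on demand and polls its status, which is roughly what Trigger Now plus the Monitor tab do for you in the UI.

```python
import time

from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient

adf_client = DataFactoryManagementClient(DefaultAzureCredential(), "<your-subscription-id>")
resource_group, factory_name = "rg-data-integration", "adf-sql-to-dataverse"

# Start an on-demand run of the published pipeline.
run = adf_client.pipelines.create_run(resource_group, factory_name, "SQLtoDataversePipeline")
print(f"Started pipeline run {run.run_id}")

# Poll until the run reaches a terminal state.
while True:
    pipeline_run = adf_client.pipeline_runs.get(resource_group, factory_name, run.run_id)
    if pipeline_run.status in ("Succeeded", "Failed", "Cancelled"):
        print(f"Run finished: {pipeline_run.status} - {pipeline_run.message}")
        break
    time.sleep(30)
```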
Step 6: Automate and Schedule the Pipeline
If this data transfer is a recurring task, you can set up a schedule to automate the pipeline. A scripted version of the trigger is sketched after the steps below.
- Set Up a Trigger
  - Go to the Author tab, select the pipeline, and click Add Trigger.
  - Choose New/Edit to create a trigger.
- Define the Schedule
  - In the trigger settings, choose a recurrence pattern (e.g., hourly, daily).
  - Set the Start Date and End Date if applicable.
  - Save and assign the trigger to your pipeline.
- Activate the Trigger
  - Once set up, enable the trigger to start the scheduled data transfer.
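As a sketch, the same daily schedule can be created and activated from code; the trigger name and start time are placeholders.

```python
from datetime import datetime, timezone

from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient
from azure.mgmt.datafactory.models import (
    PipelineReference,
    ScheduleTrigger,
    ScheduleTriggerRecurrence,
    TriggerPipelineReference,
    TriggerResource,
)

adf_client = DataFactoryManagementClient(DefaultAzureCredential(), "<your-subscription-id>")
resource_group, factory_name = "rg-data-integration", "adf-sql-to-dataverse"

trigger = TriggerResource(
    properties=ScheduleTrigger(
        recurrence=ScheduleTriggerRecurrence(
            frequency="Day",   # or "Hour" for hourly refreshes
            interval=1,
            start_time=datetime(2024, 1, 1, 2, 0, tzinfo=timezone.utc),  # placeholder start
            time_zone="UTC",
        ),
        pipelines=[
            TriggerPipelineReference(
                pipeline_reference=PipelineReference(
                    type="PipelineReference", reference_name="SQLtoDataversePipeline"
                )
            )
        ],
    )
)
adf_client.triggers.create_or_update(resource_group, factory_name, "DailySqlToDataverse", trigger)

# Triggers are created in a stopped state; starting it activates the schedule.
adf_client.triggers.begin_start(resource_group, factory_name, "DailySqlToDataverse").result()
```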
Pro Tip: Automating data transfers ensures consistency and reduces manual workload. Regular data updates in Dataverse allow end-users to access the latest data in Power Apps or other Power Platform services.
Conclusion
By following these steps, you can transfer data from SQL Server to Dataverse seamlessly using Azure Data Factory. This approach leverages ADF’s powerful integration capabilities, ensuring secure, efficient, and repeatable data movement. Once your data is in Dataverse, you can unlock its full potential within Power Platform for reporting, automation, and app development.
Stay updated on the latest Database trends and news by following Cerebrix on social media at @cerebrixorg.