Reseeding Table Identity from Synapse Pipeline

A while ago, I was asked to have an Azure Synapse pipeline reseed a table’s identity column to prepare it for new data that would be calculated and inserted into the table as part of an optimizer solution someone else is writing. This request challenged me to learn about identity reseeding, since I had never even considered that it would be something I would need to do, and it also challenged me to make that reseeding query work from a Synapse pipeline.

The process isn’t difficult, but it was a little finicky since a higher level of permissions is needed to run the reseeding command, and some people, like me, might not want to give their Synapse pipeline sysadmin permissions. This post will cover general information about what it means to reseed an identity column and then how you can do that reseeding from a pipeline.

What’s in this post

Background

I completed a project where we upgraded the database servers for a business group, putting their main database onto an Azure Platform as a Service (PaaS) database instead of an old Azure virtual machine. Part of the project was to see if we could rewrite their optimizer as a new Python script to replace their current, very dated, C-based program that requires a VM to run on. Unfortunately, the business group decided not to go with my new script (as I mentioned in a previous post), so we had to come up with a way for them to continue running their current DLL program in the updated environment. For those who don’t know, Azure PaaS databases no longer have the ability to run DLLs natively like on-prem or Infrastructure as a Service (IaaS) instances do, so we had to find another way to keep their program running after the upgrade.

The solution we came up with is to create a new and updated Azure IaaS VM that can run SQL Server and the DLL optimizer program. To make that work and stay cost-effective, when the time comes each month for their program to run, we copy the relevant data for the optimizer from the main PaaS database into the VM’s database, the DLL is executed to calculate and load data into a couple of tables, and then we copy that calculated data back to the main server and power down the VM for the rest of the month.

So what does that have to do with identity columns?

Flow diagram that gives an overview of the process requiring me to reseed an identity every month

One of the tables that will have its data generated by the DLL, which will then be copied back to the main table on the main server, has an identity column. On Server A, the table contains customer data up to the beginning of the current month and the maximum identity column value is something like 165487998. On Server B, where the new data for this month will be calculated and prepared to be loaded back to Server A, the table also has that same identity column but the table is truncated each month before calculating the new data. That means that if we did nothing about the seed value for the table on Server B, the identity column value would restart at 1 and work its way up as new data was loaded, which would cause primary key conflicts when the data is loaded back to Server A. We need to prevent that issue and start the current month’s records at the identity value where we left off on Server A, so I needed to find a way to update the identity seed value for Server B’s table when the pipeline runs each month.

What is an identity column?

An Identity Column is a column in a SQL Server database table whose value is automatically generated when a new record is inserted into the table. These columns are frequently used for primary keys (PKs) because the generated values increase with each insert and won’t repeat under normal use, which works well for PKs. When you create an identity column on a table, the default is to have the value of that column start at 1 and increment by 1 every time a new record is inserted. Those default values can be changed to fit your needs. The value you want the column to start at is called the “Seed”; the value you want to increment the column by is called the “Increment”.

For example, if you want to have a column that is automatically generated for you but you only want those values to be even numbers, you can set the Seed for the column to 2 and the Increment for the column to 2, so the first value will be 2, the second will be 4, the third will be 6, and so on. But traditionally, I’ve only ever seen an Increment of 1 used, and I’ve never needed to set a different Seed value before this project.
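To make that concrete, here’s a minimal sketch of what that even-numbers setup could look like (the table and column names are just placeholders I made up for this example):

CREATE TABLE dbo.EvenNumberDemo
(
	EvenID int IDENTITY(2, 2) NOT NULL PRIMARY KEY,	-- Seed = 2, Increment = 2
	SomeValue varchar(50) NULL
);

-- Each insert generates the next even number automatically: 2, 4, 6, ...
INSERT INTO dbo.EvenNumberDemo (SomeValue)
VALUES ('first row'), ('second row'), ('third row');

SELECT EvenID, SomeValue FROM dbo.EvenNumberDemo;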

How do I know if a column is an Identity?

Viewing the Column Properties

There are several ways to find out if a column is an Identity, but the method I use, whenever possible, is to look at the properties of the column in the Object Explorer. In the Object Explorer for your server connection, navigate to the table you want to check for an Identity, expand the table, then expand “Columns”. Once you have the list of columns opened, you can right-click on any column and select “Properties”, which is where you’ll find whether or not it’s an identity.

Note: This method will not work with Azure PaaS servers, which severely limit the information you can see from context menus and dialog windows.

Screenshot of SSMS showing how you can access the Properties menu of a column to see if it’s an identity or not
Screenshot of the Properties window of a column showing that the column is an identity

In the above screenshot showing the properties for my selected column, you can see that the value for the property “Identity” is set to “True” indicating that the column is an identity column. Then below that, you can see that the “Identity Seed” is set to 1 and the “Identity Increment” is also set to 1. Those two values mean that the first value that will be assigned to a row inserted into the table will be 1, then the value will go up by 1 for every new record inserted.

Scripting the Table

Another way you could quickly learn if a table has an Identity column and what the settings for that column are would be to right-click on the table in the Object Explorer and script the table to a new query window.

Script out the table you want to check for an identity column

Once the script has been generated, you will easily be able to tell if a column is an identity because the IDENTITY property will be included for that column in the generated CREATE TABLE statement.
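For example, the relevant part of the generated script will look something like this (the column names and sizes here are made up for illustration, and I’ve trimmed the extra options SSMS normally scripts out):

CREATE TABLE [dbo].[TableName](
	[HeaderID] [int] IDENTITY(1,1) NOT NULL,	-- the IDENTITY(Seed, Increment) property is what you're looking for
	[CustomerName] [varchar](100) NULL
);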

Note: The values for Identity Seed and Identity Increment shown this way may not be accurate! In the screenshots above where I show how to find out if a column is an identity or not, both the Seed and Increment values show as being set to 1, but I know for sure that this table has been reseeded to a much larger value. The column properties (and the generated script) show the seed the column was originally defined with, not necessarily the current identity value.

Finding identity columns on Azure SQL Databases

As I said in a note above, you cannot simply right-click on a column in the Object Explorer for an Azure SQL Database (PaaS database) and view the properties of the column like you can with IaaS or on-prem versions of SQL Server. To see whether a table has an identity column on an Azure SQL Database, you will either need to script out the table as described in the previous section, or you can use the following T-SQL query.

SELECT [object_id], 
	[name], 
	column_id, 
	system_type_id, 
	max_length, 
	[precision], 
	scale, 
	is_identity, 
	seed_value, 
	increment_value, 
	last_value
FROM sys.identity_columns
WHERE OBJECT_NAME(object_id) = 'TableName';
Data you can find about a table’s identity column by using the identity_columns view

When you run that query, you will see results like the following, which show the column that is an identity for the table. If the table doesn’t have an identity column, no results will be returned by the query.

Checking Identity Properties with DBCC CHECKIDENT

As I mentioned in a Note above, you can’t always trust that the values for Identity Seed and Identity Increment are correct in the Properties dialog or in the script generated when you script out a table. So how do you know what the true values are? You can use the query on sys.identity_columns above, or you can use the DBCC CHECKIDENT command.

DBCC CHECKIDENT ('dbo.TableName', NORESEED)

Note: Be very careful with the formatting of that `DBCC CHECKIDENT` command, because changing the `NORESEED` value to anything else and running it could reset or change the seed value inadvertently. Use caution when using this command and make sure you have it set exactly as you want. See the resources section at the end of this post for more documentation about this command to familiarize yourself with it.

When you run the above command, it outputs a message telling you the current identity value (the value the column is currently seeded at) and the current maximum value in the column (useful for when the identity value has been incremented above the original Seed). In my example, I have only added a single record to my table, so that one record holds the generated identity value, and the current column value is the same as the current identity value since there’s only one record in the table.

The output of the CHECKIDENT function when you specify the NORESEED option

What is reseeding?

Reseeding is changing the Identity Seed value for an Identity column so that the column starts generating values at a number other than 1 (or whatever it was originally set to). In my case, I need to retrieve the maximum value of the Identity column from Server A’s copy of the table, then set the Seed for Server B’s table to that value + 1 (so it starts at the next value above the current identity value in the source table). That means I need to change the Seed value for my table on Server B to 128166810 instead of 1.

How to manually reseed a column

If you want to manually reseed an Identity column using a SQL command directly on the database, you can do so by using the following command:

DBCC CHECKIDENT ('dbo.TableName', RESEED, <NewSeedValue>);

You can use variables with this command as well:

DECLARE @MaxIdentity int = 128166809;
DECLARE @ReseedValue int = @MaxIdentity + 1;

DBCC CHECKIDENT ('dbo.LengthAnalysisResultsHeader', RESEED, @ReseedValue);

Permissions Needed to Run Reseed Query

According to the Microsoft documentation (linked in the Resources section at the bottom of this post), one of the following permissions needs to be assigned to the entity that is running the DBCC CHECKIDENT command:

  • sysadmin server role
  • db_owner database role
  • db_ddladmin database role

But that document also specifically mentions that Azure Synapse requires db_owner.
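If you, like me, don’t want to give the pipeline sysadmin rights, one option is to give whatever account the pipeline’s linked service connects with the db_owner role on just the target database. Here’s a minimal sketch of that, assuming the linked service authenticates with the Synapse workspace’s managed identity (all names below are placeholders):

-- Run on the target database.
-- If the linked service connects with an Entra-based identity (e.g. the workspace's
-- managed identity), create a contained database user for it first:
CREATE USER [synapse-workspace-name] FROM EXTERNAL PROVIDER;

-- Then grant db_owner on this database only, instead of sysadmin on the whole server:
ALTER ROLE db_owner ADD MEMBER [synapse-workspace-name];

-- If the linked service uses a SQL login instead, add that login's database user the same way:
-- ALTER ROLE db_owner ADD MEMBER [pipeline_sql_user];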

How to reseed an identity column from an Azure Synapse pipeline

For my scenario, I don’t want to run the reseed command manually because the Seed value will be changing every month when our processing runs and generates new data. I have added the DBCC CHECKIDENT command to my Synapse pipeline that’s already loading the data between my two servers so that it’s reseeded automatically right when it needs to be.

In a Synapse pipeline, the reseeding based on a value from another server can be completed with two Lookup activities:

You only need two Lookup activities in your Synapse pipeline to programmatically reseed an identity column

First Lookup Activity – Get the Identity value

The first Lookup queries the source database (Server A in my scenario) to get the maximum value of the Identity column.
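The query behind that first Lookup can be as simple as selecting the max of the identity column and aliasing it with the name the later expression expects. The table and column names below are placeholders; the ReseedValue alias is what my second activity references (wrapping the MAX in ISNULL guards against an empty table):

SELECT ISNULL(MAX(HeaderID), 0) AS ReseedValue
FROM dbo.TableName;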

Second Lookup Activity – Set the Identity value

The second Lookup will run the DBCC CHECKIDENT command, which doesn’t return any results, and will then run a SELECT 1; to satisfy the Lookup activity’s requirement that something be returned:

DECLARE @MaxIdentity int = @{activity('Get max identity value').output.firstRow.ReseedValue};
DECLARE @ReseedValue int = @MaxIdentity + 1;
DBCC CHECKIDENT ('dbo.TableName', RESEED, @ReseedValue);
SELECT 1;

Let’s break that reseed query down a little more. The first line creates a SQL variable called “MaxIdentity”, which is an integer, and sets the value of that variable to the output of the previous Lookup activity, which was called “Get max identity value”. To get the specific value from that preceding activity, we need to add .output after the activity call, then .firstRow to specify that we want the value from the first row of that activity’s output, then finally .ReseedValue, which is the name of the column returned by the previous Lookup activity.

DECLARE @MaxIdentity int = @{activity('Get max identity value').output.firstRow.ReseedValue};

Summary

Reseeding the Identity column of a SQL Server table manually is a piece of cake because it only requires one small SQL statement. But to do the same process in an automated and repeatable manner is a different story. It’s not overly complicated to create a Synapse pipeline that reseeds a table’s Identity column on demand, but there are some quirks to getting the expressions to do so correctly which involved a lot of guessing and checking for me. I am still new to working with Synapse pipelines in-depth though, so perhaps this wouldn’t be such a difficult task for you more experienced Synapse developers. 🙂

If you run into any issues while setting up the above process yourself, I would love to help as much as I can, so please leave a comment below!

Resources

Related Posts

How to Reset the Azure CLI Virtual Environment

Have you been in a situation where you were working with the Azure CLI Bash terminal and then something went wrong with the image used to persist your data across sessions and your workspace seemingly got corrupted? I have had that happen to me recently, and I think it was because I updated my computer from Windows 10 to Windows 11, so Azure thought my computer changed and that corrupted the image.

I tried a couple of things to fix the situation so I could start working in the CLI space again to do my work, but I couldn’t fix the problems and errors I was getting. Then I set about figuring out how to clear the whole thing and start again. Thankfully, I only had two files on the virtual workspace, which I backed up on my local computer and in a repo anyway, so resetting the whole thing didn’t have negative effects for me. Continue reading to learn how I successfully reset my Azure CLI virtual workspace so I could get working again.

What’s in this post

When you might reset your workspace

Before updating my computer to Windows 11, I had been successfully reusing the same Azure cloud terminal virtual workspace for several weeks while learning to work with Bicep and testing a template repeatedly. I only encountered issues with the Azure CLI Bash terminal after the computer upgrade, and I am guessing that’s due to Azure considering my computer a different machine after the update. I based that guess on this StackOverflow answer, which discusses how you cannot share virtual disks between different Virtual Machines (VMs) without risking corruption issues.

Even though that StackOverflow thread references a disk issue for Virtual Machines and their disks, based on my experience it seems like the same issue can happen when you try to use the same Azure account and its virtual CLI environment on two different computers. And this StackExchange answer says that a similar Linux error indicates file system corruption.

I knew I had an issue when I used the Azure CLI Bash terminal for the first time with my updated computer. Immediately upon getting into that virtual environment and trying to check on my saved Bicep template there, I received the error:

azure cli cannot access 'main.bicep': Structure needs cleaning

I tried deleting the file so I could recreate it but got the same error when trying to do that. It seemed like I was stuck and really just needed to restart the whole thing and try again. The StackExchange answer referenced above said to unmount the corrupted file system disk and run a Linux command, but I knew that I couldn’t do that for this virtual environment, so I needed to figure something else out to get things working again.

How I reset my Azure CLI Bash workspace

Note: Please be cautious when following these steps as what you’re about to do may not be possible to undo.

Since you are not able to manipulate the file system in the Azure CLI virtual environment and unmount a disk like you could if you got this error on your own machine, you have to fix the error above in another way.

Trying a simpler fix first

There is one way you may be able to fix your environment that is less harsh than what I had to do, which is to choose the option in the Cloud Shell in the Azure Portal to “Reset User Settings”. This action, according to the Microsoft documentation, deletes your data from the shell by terminating the current session and unmounting the linked storage account. To do this reset, select the dropdown in the Shell window in the portal for “Settings”, then click “Reset User Settings”.

Select “Reset User Settings” from the “Settings” menu in the Azure CLI in the Portal

After completing that step, you will need to start a new Cloud Shell/CLI session, which may or may not fix the error you are seeing. In my case, doing this step did not fix the issue because the actual issue was with the image being used to regenerate my virtual environment. Continue on to the next sections if you continue to have the same error after resetting your user settings.

Identify the Storage Account for your virtual environment

The first step in doing the full reset of your workspace after getting the above error is to identify the storage account your environment data is persisted to. In my case, it is in a storage account called “azcliemily” with a file share called “azcliemilyfs”. I’m not sure if I created this storage account a while ago and have just forgotten about that or if the system created one for me when I set up my CLI workspace for the first time. The naming for your storage account and file share is likely different.

The easiest way to determine where your files are stored for your CLI sessions is to open the Azure CLI from within the Azure portal.

To get to the CLI from within the Azure Portal, click on the “terminal” looking icon in the top right menu of the screen

When that opens, if you are here because you are having the same error I was, everything in the shell/CLI should already be set up for you and you should see something like this.

When you open the CLI terminal in the Portal, you should see something like this when it first starts up

To view the file location for your workspace, click the dropdown for “Manage Files” at the top of the shell window, then select “Open file share” which will open a new tab.

You can access the file share where the files for the session are stored by opening the dropdown for “Manage Files” and then choosing “Open file share”

The file share page will look something like this:

The file share that Azure uses to persist your CLI virtual environment between sessions should look something like this.

Delete the image file used for your virtual environment

Note: Please be cautious when following these steps as what you’re about to do may not be possible to undo.

If the above “Reset User Settings” step didn’t work, like it didn’t for me, you may need to completely delete the machine image that Azure has created and is using for your virtual environment in the CLI. The error I got seems to indicate that there is corruption within the disk, so the only way for me to get rid of the corruption and access my files again is to delete the disk image being used so Azure is forced to make a new one when I start my next session.

The consequence of proceeding with this method is that you will lose anything and everything that you had stored in your CLI virtual environment. For me, this was only two test files, but it could be a lot more for you.

If you are going to proceed with the following steps, please save off everything from your virtual environment that you don’t want to lose.

Once you are sure you are ready to proceed at your own risk, navigate to the file share location we identified in the above section, then go into the folder “.cloudconsole”. In this folder, you will see a file with the extension “.img”, which is the disk/machine image that Azure uses to spin up disks on the backend for you when you open a new Cloud Shell/terminal session. For my session, the file was named “acc_username.img”.

Opening the folder “.cloudconsole” in your file share should show you a file with the type “.img”, which is the machine image for your environment

If you haven’t done so already, please first open a Cloud Shell in the portal and choose “Reset User Settings” before moving on to the next step so that this file share/storage account is unmounted from your Cloud Shell session.

Once you have done that reset, delete the .img file from the file share. This step may be irreversible so proceed with caution!

After you have deleted that image file, you can open a new terminal session with the Azure CLI or use the Cloud Shell from the Portal and then work through the normal steps of setting up a new virtual environment. When you choose to persist your files between sessions, Azure will create a new machine image to use to store those files, which should no longer have the corruption you were seeing previously.

Summary

If you run into what appears to be disk corruption issues with the Azure CLI virtual workspace, like if you are working with Bicep deployments or anything else through the CLI and are saving files between sessions, I highly recommend starting completely fresh with your environment if you can. It doesn’t take very long to do an entire reset of the virtual environment and it easily fixes errors that indicate disk corruption which occurred for whatever reason (like if you upgrade your computer’s OS so Azure thinks you’re trying to use the same session from two different computers).

Let me know in the comments below if this solution worked for you!

Related Posts

Deploying A DACPAC to Azure SQL Server

One of the easiest ways to duplicate SQL Server database objects from one database to another–such as in a situation where you want to copy the structure of a production database down to a development database to help with testing and development–is to create a file called a DACPAC from the database you want to copy. With SQL Server, there are two ways to copy a database without creating and restoring a full database backup: 1) Create a full copy of the database structure and data by exporting to a BACPAC file, or 2) Create a copy of only the database structure by exporting to a DACPAC file.

This post will cover how you can import a previously generated DACPAC file onto an Azure SQL Database to duplicate the structure of another database with little effort on your part. It specifically covers copying an existing Azure SQL Database to another Azure SQL Database. This process may work with on-prem or managed instances of SQL Server, but I haven’t tested that yet.

What’s in this post

Prerequisites

  • Have SSMS available to use
  • Have an existing Azure SQL Database ready to copy
  • Have an Azure SQL Server or Database to load the DACPAC to

Note: The source and target databases can be on the same Azure SQL Server or on different ones.

What are DACPACs and BACPACs?

There have already been many blog posts written on the subject of the difference between DACPACs and BACPACs, so I will refrain from trying to duplicate that information. Essentially, a DACPAC is an export of the schema/structure of a database, including the database users and logins. A BACPAC is similar to the DACPAC except that it contains the data from the database as well. If you would like more detailed information about both of these types of files, I recommend reading this post from SQLShack.

Then if you’re like me and are now wondering what the difference is between a BACPAC file and a full database backup (BAK file), I found this resource on SQL Server Central that explained the difference well.

Why use a DACPAC?

As I mentioned in the introduction above, the primary use case for making and importing a DACPAC file of your database is to quickly copy all database objects–including tables, procedures, functions, and even users–to a new version of the database, such as when trying to create a new dev version of an existing production database. That was my exact use case and what triggered me to write this post. I got tired of manually copying tables, procedures, and functions from one database to another whenever we needed a new version of a database.

How to Create a DACPAC File

When you have a source database that you would like to copy over to a new database, without the data, you should create a DACPAC from SQL Server using a “Task” on the database. To do this, right-click on the database name in the Object Explorer, select “Tasks” then select “Extract Data-Tier Application”. This will open a wizard for you to create a DACPAC file of the selected database.

Screenshot of the context menu where you can select to Extract Data-tier Application for a database
Choose to “Extract Data-tier Application” from the context menu of the Database in the Object Explorer

If instead you click on “Export Data-Tier Application”, you will be brought to a wizard for creating a BACPAC file, which is not what we intend to do during this process, so make sure you choose the option to Extract a data-tier application.

After selecting that option, you will see a screen like the following. Review the introduction page and make sure it refers to creating a DAC package (see screenshot below, underlined in red) and not a BAC package.

Screenshot of the "Extract Data-tier Application" wizard in SSMS
The introduction page of the “Extract Data-tier Application” wizard, confirm that it is going to extract to a DAC package, not a BACPAC.

Step through the wizard, provide a name for the “Application” (database name), and choose where you want the file to be downloaded to on your local machine.

Screenshot of the setup screen of the "Extract Data-Tier Application" wizard in SSMS.
The DACPAC extraction properties page of the “Extract Data-tier Application” wizard

Once you have reviewed the settings for your file, click “Next” until the creation process starts. Once that has been completed, you will have successfully created a DACPAC file which is stored in the location that you chose.

Using a DACPAC to Format an Azure SQL Database

There are two different ways that you can deploy a DACPAC to an Azure SQL Server. The first is to deploy the DACPAC to create a new database; the second is to update an existing database with the DACPAC.

Overview of the Steps

  • Choose whether you want to create a new database with the DACPAC or update/format an existing database
  • If choosing to create a new database:
    • Right-click on the “Databases” folder in the Object Explorer of SSMS, choose “Deploy Data-tier Application”
    • Specify the DACPAC you want to use and the name of the database you want to create
    • Run the wizard which will create a database with default settings
  • If choosing to use an existing database, whether blank or already in use:
    • Right-click on the database, select “Tasks” then “Upgrade Data-tier application”
    • Specify the DACPAC file you want to use
    • Review the changes the wizard identifies between the DACPAC and the target database
    • Run the wizard which will update the target database with all changes necessary from the DACPAC

Deploying a New Azure SQL Database using a DACPAC

If you want to make your new Azure SQL Database on an existing Azure SQL Server and don’t want to do that through the portal, you can create a new database on the server by “deploying” a DACPAC to create a new database.

The main downside to this method of using a DACPAC is that the database will be created with the default setup, which for me is a General Purpose standard database with 2 vCores. That level of database may be pricy for some people, or it could be underperforming for others, depending on your needs. Due to that downside, that is not the method I chose to use my DACPAC with, but I still researched it so I could understand both ways. If you want to know how I used a DACPAC, skip down to the section for “upgrading” with a DACPAC below.

Deploying a Data-Tier Application

To create a new database from a DACPAC, you first need to connect to your target server through SSMS in the Object Explorer. Once you have connected, right-click on the “Databases” folder and choose “Deploy Data-Tier Application” from the list.

Screenshot of the context menu where you can choose to "Deploy Data-tier Application" for a server
Choose to “Deploy Data-tier Application” from the context menu of the Databases folder in the Object Explorer of SSMS

This will open a wizard that is very similar to the one for “upgrading” a data-tier application (discussed below). On the overview page of the wizard, you can confirm that you are going to deploy a DACPAC and not a BACPAC.

Screenshot of the introduction screen of the "Deploy Data-Tier Application" wizard in SSMS.
The introduction page of the “Deploy Data-tier Application” wizard

The first thing you will need to do in this wizard is to choose the DACPAC file you previously created.

Screenshot of the "Deploy Data-tier Application" wizard on the page where you choose the DACPAC you want to deploy
The page of the “Deploy Data-tier Application” wizard where you choose the DACPAC file you want to deploy from

Click Next to move to the following page, where you will choose the name of the database you will deploy.

Screenshot of the "Deploy Data-tier Application" wizard on the page where you choose the the name of the new database you want to deploy
The page of the “Deploy Data-tier Application” wizard where you choose the name of the new database you want to deploy from the DACPAC

Click Next to move to the Summary page and review the process that the wizard is about to execute to ensure everything is set the way you want. Click Next again to deploy the DACPAC and create a new database from that file.

Once that process completes, you can close the wizard and then check on your database through the Object Explorer to see that you now have a new database with the same objects as the source database. At this point, you will also be able to see the newly created database in the Azure Portal under that SQL Server, where you can make any changes you would like to the default setup chosen by the deployment.

Updating or Formatting an Existing Azure SQL Database with a DACPAC

This method of “updating” an existing Azure SQL Database is helpful if you already have a database that you want to align with another, or if you have already created a blank database meeting your standards that you want to quickly deploy database objects to. In the previous section, the method of “Deploy a Data-Tier Application” creates a new database from the specified DACPAC, but that database is created with the default Azure SQL Database settings, which is “General Purpose: Standard-series (Gen5), 2 vCores”. That database setup is estimated to cost (in our setup) $373 per month.

If that default setup for a database doesn’t work for you or you don’t want to fix that after the database has been created, you can use this second method of “upgrading” your existing database with the DACPAC. Before you can complete this method, you first need to create a blank database on your Azure SQL Server, through whichever method you prefer. Or use any other existing Azure SQL Database that you are fine with overwriting with the DACPAC.

Upgrading a Data-Tier Application

Connect to your target database in SSMS through the Object Explorer. When you are ready to import the source database objects onto this new database, right-click on the empty database then select “Tasks” then “Upgrade Data-tier Application”.

Screenshot of the context menu in SSMS where you can choose to "Upgrade Data-tier Application"
Choose to “Upgrade Data-tier Application” from the context menu of the existing database you want to deploy the DACPAC to

This will open another wizard that will walk you through the process of “upgrading” your chosen database with the DAC package of another database.

Screenshot of the "Upgrade Data-tier Application" wizard on the introduction page.
The introduction page of the “Upgrade Data-tier Application” wizard

On the second page of the wizard, you will be prompted to select the DAC package you would like to use for the upgrade. Browse to the DACPAC file you created earlier, then click Next.

Screenshot of the "Upgrade Data-tier Application" wizard on the page where you choose the DACPAC you want to deploy.
The page of the “Upgrade Data-tier Application” wizard where you choose the DACPAC file you want to deploy to the database

After you click Next, the system will start detecting changes between the target database and the DACPAC you selected, which may take a few minutes.

Screenshot of the "Upgrade Data-tier Application" wizard on the page where you have the wizard detect changes between the DACPAC and the target database.
The page of the “Upgrade Data-tier Application” wizard where the wizard checks for differences between the DACPAC and the target database

If you have already imported the DACPAC before or have anything on your target database that is different from the source, you will receive a warning like this:

Screenshot of the "Upgrade Data-tier Application" wizard on the page where you must specify that you want to proceed with the upgrade despite possible data loss.
The page of the “Upgrade Data-tier Application” wizard where you confirm that you are okay with proceeding despite the risk of data loss from deploying the DACPAC

I got that warning because I had already imported the DACPAC so I no longer had a blank database. In this situation, you are required to check the box that you’re okay with overwriting the existing structure by continuing. If it’s your first time attempting to import the DACPAC onto a blank database, you should instead receive a green check box on this page.

On the next page, choose whether or not you want to roll back in the event of a failure during the “upgrade”. I chose to not have it roll back since this was going to be a new dev database that I could tweak, fix, or simply recreate if something went wrong. You may want to choose differently.

Screenshot of the "Upgrade Data-tier Application" wizard on the page where you choose rollback options.
The page of the “Upgrade Data-tier Application” wizard where you choose the rollback option you want the wizard to use when deploying the DACPAC

The final step will be for the system and you to review the “upgrade” plan. You will see a list of all changes that will be applied to the database on the “Review Upgrade Plan” page. If you’re happy with the proposed list of changes, click Next until the upgrade process starts and the DACPAC gets applied. Once that process completes, you can close the wizard and then check on your database through the Object Explorer to see that you now have database objects deployed to the database that you didn’t before.

Screenshot of the "Upgrade Data-tier Application" wizard on the page where you review the differences between the DACPAC and the target database that have been identified.
The page of the “Upgrade Data-tier Application” wizard where you review all the changes that will be applied based on the DACPAC

Can’t I just import the DACPAC from the Azure Portal?

This was the first thing I tried when attempting to import my DACPAC into an Azure SQL Server. I uploaded my DACPAC file to a storage account, then clicked “Import” on the SQL Server page in the Portal. This started a new deployment in which the SQL Database was created, but the deployment of the DACPAC to the database failed, so I was left with an empty database on my server. The error I received when the deployment failed was related to not specifying a BACPAC file for the import process, so it appears the Portal import expects that file type instead of a DACPAC.

If you can figure out how to import a new database to a SQL Server using the Portal, I would love to hear about it in the comments below.

Summary

If you, like me, were confused about how you could import a DACPAC file into a database on an Azure SQL Server (because the Microsoft documentation for this isn’t great), I hope these instructions help you with your import. It is as simple as creating the DACPAC file and then using a wizard to “upgrade” or “deploy” your database using the DACPAC.

Related Posts

A Fun SQL Function — QUOTENAME()

I have a short post today because I’ve gotten a little behind on my editing and want to get something out. This week, I learned about the most useful little string function available in SQL Server that I wish I had known about previously. The function is QUOTENAME(), which isn’t the most intuitive name for a function.

What’s in this post

Overview of QUOTENAME()

What the QUOTENAME() function does is put delimiters around a string, which is especially useful for certain columns in your queries. Where this comes in handy is in scenarios where you are going to use the output of your SQL queries in other, future queries, so having the strings delimited to meet SQL Server standards is essential.

For example, say you want to generate a list of the tables in your database and then you are going to use that list of table names in another query, so you would like the table names to be surrounded with square brackets–a SQL Server best practice. Instead of manually adding the brackets yourself, which I have done too many times to count, you can use the QUOTENAME() function that will do it for you. This function will also let you specify other delimiters, like parentheses, if you would like that instead. There are other delimiters you are allowed to specify, more than I was expecting, so check out the W3Schools page on the function for more details.
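As a quick sketch of that kind of usage, the query below builds a ready-to-run COUNT(*) statement for every table, with the schema and table names properly delimited (the statement being generated is just an arbitrary example):

SELECT 'SELECT COUNT(*) FROM '
	+ QUOTENAME(SCHEMA_NAME([schema_id])) + '.' + QUOTENAME([name])
	+ ';' AS GeneratedQuery
FROM sys.tables;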

Demo of the function

Here is a demonstration of the function being used on the [name] column of the sys.tables table.

SELECT [name]
	,QUOTENAME([name]) AS DelimitedName
	,QUOTENAME([name], '()') AS ParenthesesName
FROM sys.tables;
Screenshot showing a SQL query using SSMS with the default settings and with specifying parentheses as the delimiter.
This screenshot shows two uses of the QUOTENAME() function: one with the default setting of square brackets and one specifying the parentheses.

In the example above, I have displayed the normal [name] column from the table and then used the same QUOTENAME() function in two different ways. The first usage passes only the column that I want delimited, which then gets the default delimiter of square brackets. If, however, you would like to use a different delimiter, like parentheses, you can specify that as the second parameter of the function. In my testing, you can specify QUOTENAME(column, '('), QUOTENAME(column, ')'), or QUOTENAME(column, '()') and they all give you the same parentheses delimiting.

Summary

If you often find yourself manually putting bracket delimiters around values you retrieve from SQL Server with queries, please consider using the QUOTENAME() function in the future instead to save yourself time and manual effort. I know I will for sure be using this now.

Related Posts

Moving Data Between Azure SQL Databases–the Fast Way

I have done more database creation in the past couple weeks than I had done in the previous year combined, all because my team is working on making several different AI chat bots for our business users and we are being very agile about our development process. At first we had separate Azure SQL Databases for each chat bot to use for its backend, but then I decided it would be more prudent, cost effective, and easier to manage if we combined the three related bots into a single database. This meant that I needed to move data from three disparate databases into one single database, and I wanted to do it easily without any extra tools.

The way I decided to move the data between the databases was to quickly export it to a flat file and then import that flat file into the new database, since I only had a few hundred rows to move from each source database. As far as I am aware, this should work with all modern types of SQL Server (on-prem, IaaS, Azure SQL Database). Read on to learn how to follow the same process I did.

What’s in this post

Exporting the data from the source

On your source database in SSMS, run a SELECT query to retrieve all the data you want to copy over to your target database. Once you have your result set, select all records, right-click on them, then choose “Save Results As…” from the menu.

Screenshot showing the "Save Results As" option when right-clicking on the results of a query in SSMS
You can export your query results using “Save Results As…” from the right-click dialog

That will open the normal “Save As” dialog, and you can choose where to save your results. Save the file to your computer.

Once the file has been saved, I recommend opening the CSV using Notepad++ or a similar tool to review the formatting of the data. There were a few times when I went through this process that the column names did not get exported to the flat file, and instead the columns were generically named “Column1”, “Column2”, and so on (see the final screenshot below for what this would look like when trying to import). Chances are that you won’t run into this issue, but it doesn’t hurt to check. If for some reason your file doesn’t have the correct column names that you expect, simply export the query results to a file again and it should fix the issue.

Importing the data to the target

To import the data onto your target database, connect to it in SSMS as well, right-click on the database name in the Object Explorer, then choose “Tasks” > “Import Flat File”. This will open a wizard to help you import the data.

Screenshot showing where to navigate to in the database menu to import a flat file
Import a flat file using the Task “Import Flat File”, which will open a wizard tool for you

In the dialog that opens, browse to the file that you previously exported from the other database. Once you select that file, a name will be generated for the new table the data will be imported into. You can change the name of that table if you wish. Also, choose the schema that table will go into. This process creates a new table from the imported data, so make sure you don’t use an existing table name.

Screenshot of the Import Flat File wizard where you specify the file you want to import and the table you want to load the data to
Specify the CSV file you created during the export process and the name of the new table you want to load the data into

Click “Next” to move to the next page, where you can preview the data that will be imported. If the data looks good to you, click “Next” to move to the “Modify Columns” page where you can change some settings on the table columns if you would like. I chose to “Allow Nulls” for all columns, just to make things easier for myself in this import process. I plan on deleting this table after I copy the data from it into the final target table, so it doesn’t matter to me if every column allows nulls.

Screenshot of the Import Flat File wizard showing the screen where you can modify the columns of the new table you are going to create during import
This screenshot shows you what can happen if the export process goes a little wonky–the column names from the source may not be put into the CSV file and thus only show as generic names in the Import wizard. From this menu, though, you can modify the properties of the columns in the new table that is going to be created for the import.

Keep clicking “Next” to move through the rest of the pages, review the process before you opt to import, then import the data. Once the import wizard has completed its work, you will now have a new table in your database, essentially a staging table.

To get the data from your staging table into the final target table in your database, you can use an INSERT INTO tablename SELECT... query. This step also allows you to modify the data before moving it into the target table if you need to. In my case, I needed to add an extra column to the data that the source table didn’t have but the target requires, so I hard-coded the value for that column into my SELECT statement so it would be added to the final table.
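As a rough sketch of what I mean (all table and column names here are placeholders), the query can move the data and fill in the extra hard-coded column at the same time:

INSERT INTO dbo.TargetTable (CustomerID, CustomerName, SourceSystem)
SELECT CustomerID,
	CustomerName,
	'BotA' AS SourceSystem	-- hard-coded value the target table requires but the source didn't have
FROM dbo.ImportedStagingTable;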

Cleanup

After you have moved your data into its final destination, make sure to clean up the temporary new table that you created during the import wizard process, to keep things as neat as possible in your database.
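That cleanup is just a drop of the staging table the wizard created (again, the table name here is a placeholder):

DROP TABLE dbo.ImportedStagingTable;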

Summary

Using this method to copy data between two Azure SQL Databases may be considered a bit lazy, but I truly think it’s the most efficient way to get data from point A to point B. Sure, I could have made a pipeline or used some other system to move the data around, but I only had a few hundred records to copy over so this method worked great for me. Let me know if this method saved you time as well!

Related Posts

How to View Pipelines Associated with A Synapse Data Source

This morning I received a question from a colleague about how to set up a new linked service for an on-prem server so it can be used as a data source for a pipeline. As part of my review of our current pipeline setup to make sure I was answering properly, I needed to find an example pipeline where we were already doing that. I had a linked service and associated data source doing exactly what I needed to share with this other person, but then I got to wondering if that data source was even used anywhere and set out to find out. As per usual though, Google betrayed me and wasn’t giving me anything related to the question I was asking, so I had to figure out the answer for myself, which is why I am now sharing the simple way you can see which pipelines are using which data sources, and thus linked services, in your Synapse Analytics workspace.

What’s in this post

Seeing the Data Sources associated with a Linked Service

In Azure Synapse Analytics studio, it is very easy to figure out what data sources have been created using a given Linked Service. To view this information, you go to the “Manage” page in the workspace, then select the page for “Linked Services”.

Screenshot showing the left hand menu in the Synapse Analytics studio
Open the Manage page to view Linked Services
Screenshot showing the "Manage" menu with Synapse Analytics studio
Open the Linked Services page under Manage to view all Linked Services in the workspace

Once you are on the Linked Services tab, there is a column in the list of services called “Related” which lists the number of related data sources for each Linked Service. If you click on the number, it will open a new pane of information telling you which data sources are created using that Linked Service.

Screenshot showing the Linked Services page in the Synapse Analytics studio management tab, where you can view the data sources related to a Linked Service.
Click on the linked numbers in the Related column of the Linked Services list to see data sources using the Linked service
Screenshot showing the list of data sources related to a Linked Service
When clicking on the linked numbers in the Related column, you will see this Related pane.

In that “Related” pane, the names of any related data sources will be links, so you can click on those to take you to the data source editor for that specific data source within the workspace.

Seeing which pipelines use a Data Source

Once you navigate to the Data Source editor, you are then able to view which pipelines are using that data source. To view that information, you simply need to click on the “Properties” icon in the top right corner of the Data Source editor tab.

Screenshot showing where to locate the Properties panel of a data source so you can view the related pipelines
The Properties pane for the Data Source can be accessed from the top right corner of the Studio window.

Once you have the Properties pane opened, click the “Related” tab to view the list of any pipelines that are using the data source.

Screenshot showing the list of pipelines related to the current Data Source
In the Properties pane for the Data Source, all related pipelines can be found on the Related tab

If there are no pipelines using the data source, you will instead see this on the “Related” tab.

Screenshot showing an empty Related Pipelines panel for a given Data Source in Synapse Analytics studio
If there are no pipelines using the Data Source, you will see “No items to show”

Summary

I wish I had looked for this information sooner, because I think it could have saved me some time while working with Synapse pipelines. Viewing the Data Sources related to a Linked Service and then what pipelines are using a given Data Source is simple to accomplish and helpful for seeing the relationships between items in your Synapse workspace.

Related Posts

Azure Storage Explorer for Moving Files

Once again I am sharing a quick tip that will potentially speed up your work process, this time by using Azure Storage Explorer for moving database backup files between servers. This process saved me so much time this past weekend while completing a server migration, which is why I wanted to cover it with a quick post.

What’s in this post

What is Azure Storage Explorer?

Azure Storage Explorer is a desktop application that helps you manage your Azure Blob Storage and Azure Data Lake storage. Before this past weekend, the only purpose that I had used it for was to propagate Access Control Lists (ACLs) for storage accounts, which is also a very helpful task it can accomplish. However, what I want to focus on for this post is how you can also use this tool to very quickly transfer files between servers, as long as those servers have access to the internet.

Moving files with Azure Storage Explorer

If you are ever in a situation where you are migrating a database from one server to another using the backup/restore method, and that backup file is very large even after being compressed, you may want to try this method of moving the file between your servers. This of course only works if you use Azure as your cloud provider.

With previous database migrations, I tried different methods of moving my file between the servers, and even have an old post discussing this. But this past weekend, I was doing the production migration for a database and the file was larger than the ones I had previously moved between servers, even after creating the backup in a compressed format. The first transfer method I tried was to drop the file onto a network share and then copy it from that share onto the destination server. While that worked, it was pretty slow, taking about 30 minutes to complete. That would have been… fine… if I hadn’t run into issues with the backup which forced me to generate a new backup file that needed to be copied over as well. Since I didn’t want to force the rest of the upgrade team to wait for that again, I started trying a new method.

While that slow copy was in progress, I quickly downloaded Azure Storage Explorer (ASE) onto the source server and uploaded my backup file to a storage account in our Azure subscription. To my surprise, the upload of the 15 GB file took just a minute or two, if I recall correctly. Whatever the exact amount of time, using ASE was significantly faster, and it didn’t cause a browser memory issue like when I attempted to upload the same file to the same storage account manually through the Azure portal. For some reason, even though I have gotten the manual upload to a storage account to work in the past, I have recently been having issues with my browser, both Chrome and Firefox, crashing partway through the upload. So this new, faster transfer method is a huge win for me!

Then I also quickly downloaded and installed ASE onto the target server, and the download of the file from the storage account through the application was just as fast as the upload had been. I had my database backup copied over to the new server in the time it took the network drive copy to reach only 20% completion, so I gratefully cancelled that copy process. I was happy about the speed of ASE, and I am sure my colleagues were happy they didn’t have to wait on the database part of the upgrade even longer.

Why is this so much faster?

Given how much faster the upload and download for my file went using Azure Storage Explorer compared to every other method, I really want to know how it manages to achieve that. Unfortunately, it seems that the information about why and how it manages to be so fast is limited. Part of the speed obviously came from our own network speed, but some of it certainly comes from something special with the tool since trying to upload manually through the portal proved to be super slow, and would then crash out in the browser.

From the few resources I’ve found online (listed in the Resources section below), it seems that the performance of ASE comes from how it uses the AzCopy tool to parallelize file transfers and use multiple cores from the host machine to do so. Whatever makes it work so quickly, I am very happy that it exists and will likely be using this method of copying files between servers going forward. Downloading and installing ASE, then using it to upload and download files, is still much faster than trying to move big files any other way.

Summary

If you need to move a large file between two servers in your Azure cloud environment, I highly recommend using Azure Storage Explorer to drop that file onto a storage account, which will complete very quickly as long as your internet speed is normal, and then download that file using ASE as well, which will be equally as fast. There are other methods of copying files between servers that I’ve discussed before, but this is now going to be my preferred method.

Resources

  • https://stackoverflow.com/questions/57729003/blob-code-download-much-slower-than-ms-azure-storage-explorer
  • https://azure.microsoft.com/en-us/blog/azcopy-support-in-azure-storage-explorer-now-available-in-public-preview/
  • https://learn.microsoft.com/en-us/azure/storage/storage-explorer/vs-azure-tools-storage-manage-with-storage-explorer?tabs=windows
  • https://azure.microsoft.com/en-us/products/storage/storage-explorer

Notepad++ Backup Files

This is going to be a short and slightly random post, but this information is something I learned this week that was entirely opposite of what I thought it was so I wanted to share.

What’s in this post

“Unsaved” Notepad++ files are actually saved

When you work with Notepad++, a free and very nice notepad application for Windows, you have the ability to create a lot of new files and leave them unsaved without losing them when closing the app or even restarting your computer. I never really knew how the app managed that and hadn’t thought much about it, but I figured that since I had never officially saved the files to my hard drive, the information in those files wasn’t stored on my hard drive and was thus safer from external access than a saved file would be.

However, this week I learned that I was totally wrong and that the way Notepad++ keeps your “unsaved” files ready for you to use again after being closed is to keep a “backup” saved onto your hard drive. For me, these files were saved in this location: C:\Users\username\AppData\Roaming\Notepad++\backup

This is what I currently have backed up there, which lines up with what I see in the actual application, plus backups of files I closed today which are kept just in case I want them, I guess.

Screenshot showing the Windows File Explorer location for Notepad++ unsaved file backups and my list of backup files
Screenshot showing the tab bar of Notepad++ and my list of open and unsaved files that are being backed up by the application

And then if you end up saving the files (for example, I had unsaved changes in the two actually-named files, which I then saved), the corresponding “backup” files will disappear.

Screenshot showing the Windows File Explorer location for Notepad++ unsaved file backups and my list of backup files, with some now gone from the list after I saved them

I think this is actually a neat feature since it could save you if you accidentally close an important file that you didn’t mean to. But it isn’t some cool loophole for keeping important secret things easily at hand but secure like I kind of assumed. I’m not saying I put secrets into these temp files, but if I was, I definitely wouldn’t be doing it anymore. 😀 Always keep your secrets locked up in key vaults or password tools! The one I’ve started using is Bitwarden and it seems pretty nice and easy to use so far.

Summary

Notepad++ doesn’t use magic to keep your unsaved files available for you to use after closing the app or restarting your computer; it is saving those files in a backup location on your computer. And if you close a file you didn’t intend to before you saved it, you can get that file back from this backup location before you restart your computer.

Adding Entra Authentication to IaaS Server

The other day I needed to verify that some IaaS (infrastructure as a service) SQL Servers (“SQL Server on Azure VMs”, as Microsoft puts it) I created for a project a couple of months ago were set up exactly as we want them to be before we move them to production in their upcoming upgrade. There were several things I needed to check to ensure the servers matched our team’s standards, and one of them was double-checking that Entra ID authentication was enabled for the servers. I knew that we had worked through all the required steps on the Azure portal to enable that type of authentication, but I wanted to validate that I could actually log in to the server with that authentication type. However, what I discovered is that I could not log in with my Entra ID account despite Entra being enabled for the server. This was really confusing to me, so I set out to figure out why it wasn’t behaving the way I thought it should and why I could log in with Windows Authentication but not Entra MFA.

What’s in this post

Enabling Entra Authentication for the Server

When I originally set up my servers a couple of months ago, I know that I fully went through the process in this Microsoft document to add Entra authentication to the SQL Server. But since it had been a couple of months, I reviewed everything in the settings for the server and confirmed that it all should be set up to allow me to authenticate with my Entra user when connecting to the server through SSMS. This is what the security configuration page looked like for each of my servers, and it shows that Microsoft Entra Authentication is Enabled.

Screenshot showing the Azure Security Configuration page of an IaaS server with Entra Authentication enabled

Based on that information, I thought that I should be able to log in with Entra, yet I still couldn’t. The only way I could connect to the server was to use Windows Authentication. So what was the problem?

What my problem was and how I solved it

The next step of my troubleshooting process was to log in to the server and look at the logins for the groups I added for my team so that we could be the admins on the server. For some reason, we have two different groups, so I needed to review both of them. What I found when reviewing the logins for our groups was that both had been created using the FROM WINDOWS clause, since I had directly copied the logins from the old versions of our servers, which were created before Entra authentication was possible for IaaS servers.

Screenshot showing a redacted CREATE LOGIN script using the phrase FROM WINDOWS to create a Windows Authentication type login

That was the problem! The logins for our groups, which now live in Entra ID and which we want to log in with using Entra authentication, were created as Windows logins, which is why I could only log in with Windows Authentication. Makes a lot of sense.

To resolve this problem and make it so that I could log in to the server with Entra instead, I had to drop and recreate the logins using the clause FROM EXTERNAL PROVIDER, like this.

Screenshot showing a redacted CREATE LOGIN script using the phrase FROM EXTERNAL PROVIDER to create an Entra Authentication type login
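
For reference, the general shape of that change is below. This is just a minimal sketch: the group name is a made-up placeholder (the real names are redacted in the screenshot above), and I’m assuming the group needs to be added back to the sysadmin role afterwards, like our admin groups did.

-- Drop the old Windows Authentication version of the group login
DROP LOGIN [MyTeamAdminGroup];

-- Recreate it as an Entra ID login; the name must match the Entra group's display name
CREATE LOGIN [MyTeamAdminGroup] FROM EXTERNAL PROVIDER;

-- Re-grant whatever server-level roles the login had before
-- (placeholder assumption here: our admin groups belong in sysadmin)
ALTER SERVER ROLE sysadmin ADD MEMBER [MyTeamAdminGroup];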

Once I made that change, I was able to log in to the server with Entra MFA authentication like I originally wanted. And since I had set up multiple servers with this same issue, I had to go and make the same changes on those as well. This is super easy to overlook when migrating logins from an older server to a new one, but it is also super easy to fix.

How to tell at a glance if a login is Windows or Entra

There is a super quick way to tell at a glance on your server whether a group login is an Entra-based group or a Windows Authentication-based group, and that is to look at the icon given to the login in the Object Explorer list. The icon for an Entra group looks the same as a single-user icon, whereas the icon for a Windows group looks like multiple people in front of a server.

Screenshot of a list of logins from the SSMS Object Explorer demonstrating what the Entra ID icon for a group looks like and what a Windows icon for a group looks like
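
If you’d rather check with a query than rely on the icons, the type_desc column in sys.server_principals shows the same information. This is just a generic query, nothing specific to my servers:

-- Windows logins/groups show up as WINDOWS_LOGIN/WINDOWS_GROUP,
-- while Entra logins/groups show up as EXTERNAL_LOGIN/EXTERNAL_GROUP
SELECT name, type_desc
FROM sys.server_principals
WHERE type_desc IN ('WINDOWS_LOGIN', 'WINDOWS_GROUP', 'EXTERNAL_LOGIN', 'EXTERNAL_GROUP')
ORDER BY type_desc, name;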

Summary

If you run into a situation where you have enabled Entra ID authentication on your IaaS SQL Server instance in Azure yet you still can’t log in to the server with Entra authentication, you may want to check what type of authentication your login was created with. If you want to be able to log in through a user or group with Entra ID, you need to make sure that the login was created with the clause FROM EXTERNAL PROVIDER instead of FROM WINDOWS.

Quick side note

This form of Entra authentication with IaaS servers in Azure is only available for SQL Server 2022. If you have older versions of SQL Server installed on Azure VMs, you will still need to use Windows authentication for your logins.

Fixing Broken Oracle Listener After Patching

Before the holidays, I had the pleasure of doing the most recent round of patching for our production Oracle databases, applying the patches that were released in late November. As so often happens in the IT world, the patching behaved differently in production than it had in our test environments, where I had applied the same patches a few weeks earlier. During the application of the patches in test, we had absolutely no issues; the patches applied seamlessly and it was super easy, although time-consuming. Then in production, late at night on a Thursday evening when I should have been in bed already, the patches again seemed to apply without issue, except this time my application developer teammates could not get their apps to connect to the database once it came back up.

What’s in this post

The Error

When my app dev teammate tried to reconnect his app to the newly patched server, he received an error similar to the following:

ORA-12514: TNS:listener does not currently know of service requested in connect descriptor

Thankfully, my teammate used to deal with Oracle databases directly himself, so he could help me run commands to troubleshoot what the app was encountering. We were quickly able to see that things weren’t right, but we weren’t quite sure how to fix them.

Helpful Commands

The following commands helped us figure out what was wrong with the database after it had been patched. I ran them through a terminal connection to the database node from PowerShell.

lsnrctl status

This command checks the current status of the listeners available on the system. When we ran it, we got output that looked like this, with a status for a listener related to ASM but nothing for our container database or pluggable databases:

Screenshot of the Oracle terminal command that shows a list of all server listeners and their statuses with the incorrect results

What we expected to see was a whole list of listeners for our database and the pluggable databases on it, like this:

Screenshot of the Oracle terminal command that shows a list of all server listeners and their statuses with the correct database listeners shown

ps -ef | grep LIST

This lists the running processes on your server node whose details contain ‘LIST’; we used it to see which listener processes were actually running on the node.

How We Fixed the Problem

After running the lsnrctl status command I mentioned above and seeing that it knew of no listeners for the container database or the pluggable databases on it, we knew there was clearly something wrong with how the listeners for the DBs came up (or rather didn’t) when the server was patched. The first solution we attempted was a classic: reboot the machine to see if that would force the listener to come up the way it should have. This did not resolve the problem; we still didn’t have a listener for the database.

To figure out the real solution, we of course hopped on Google, and I found this StackOverflow answer where the person had to update the tnsnames.ora file on their server to include the missing listener entry, and once they did that, the issue was fixed.

Since my teammate had no other ideas to try, we opened the tnsnames.ora file on our server to see if there were any entries in it for the listener of the server we were on. Unfortunately, there was no entry for the listener that should exist on the server, just listeners for other servers that we connect to. Based on the StackOverflow solution, we decided to manually add the connection entry for the listener. But first, we had to figure out what the correct listener name would be. To do that, we ran the following commands:

# Start a sqlplus session from the server terminal
sqlplus / as sysdba

-- Once inside sqlplus, check which local listener the database expects
show parameter local

That gave us the following output:

Screenshot of the Oracle terminal command "show parameter local" which is used to see the listener the server is trying to use

Based on that output, the database was looking for a listener called “LISTENER_dbname”, so now we knew what to enter into the tnsnames file (with a little assistance from looking at the same file on the test server so we could get the formatting correct).
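
As a quick aside, if you prefer a plain query over the SHOW command, the same value should be visible in v$parameter. This is a generic query, nothing specific to our environment:

-- local_listener is the parameter that "show parameter local" surfaces
SELECT name, value
FROM v$parameter
WHERE name = 'local_listener';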

Step by Step Solution to Update TNSNames.ora

  • Navigate to the tnsnames.ora file location on the server: cd /u01/app/oracle/product/19.0.0.0/dbhome_1/network/admin
  • Create a backup of the tnsnames file just in case you mess something up and need to revert: cp -p tnsnames.ora tnsnamesbak.bak (that is, cp -p <source file> <new file name>)
  • Open the file using the “vi” editor: vi tnsnames.ora (If you’re like me and not great with vi, use this cheat sheet I found online to help you use the editor)
  • Navigate to the end of the file (SHIFT + G)
  • Press I to enter the Insert mode
  • Copy the text to set up your local listener into the file, making sure there are no leading spaces or extra spaces anywhere
    • LISTENER_DBNAME = (ADDRESS = (PROTOCOL = TCP)(HOST = <hostname>)(PORT = 1521))
    • The LISTENER_DBNAME is what we retrieved above with the show parameter local command, and the HOST is the hostname of your instance, which you can find in the console if you don’t know it already
  • Save the file
    • Press ESC to exit Insert Mode
    • SHIFT + ; to enter Command Mode so we can save and quit
    • wq then Enter to write and quit
  • Now we must register the newly added listener with the system so it knows what to use. Enter a sqlplus session and run the following commands (a consolidated sketch of this sqlplus session is included below, after this list):
    • alter system register;: This registers the database’s services with the listener at the container database level
    • alter session set container=<PDBName>;: Change your session into your first pluggable database
    • alter system register;: This registers the services again, now at the pluggable database level. Repeat this step and the previous one for each pluggable database on your system
  • To validate that the change fixed the problem, run the lsnrctl status command again and confirm that you now have multiple lines below the ASM listener lines, one for the container database and one for each pluggable database. Like this:
Screenshot of the Oracle terminal command that shows a list of all server listeners and their statuses with the correct database listeners shown
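
For reference, here is roughly what the sqlplus portion of those steps looks like gathered in one place. The PDB name below is a placeholder; swap in your own, and repeat the last two statements for every pluggable database on the system.

-- Run these from a "sqlplus / as sysdba" session on the database node

-- Register the instance's services with the listener at the container database (CDB) level
ALTER SYSTEM REGISTER;

-- Switch into a pluggable database and register its services as well
ALTER SESSION SET CONTAINER = MYPDB1;
ALTER SYSTEM REGISTER;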

After we completed all of the above steps, my app dev was able to connect his app to the database again and we were able to wrap up our very late working session. Thankfully, I documented all of this the day after, because we then had to patch a different production database over the weekend and ran into the exact same issue, and this time I was able to fix it in just a couple of minutes. I don’t understand why test went flawlessly and then production had issues with the exact same patch, but such is life in the technology realm. Nothing ever behaves like you really want it to.

Summary

If you run into the error on your Oracle system that indicates the listener isn’t working or isn’t present, you should be able to follow the detailed steps I outlined above to get your database back in working order.

Did you run into something like this with the most recent Oracle database patches? If so, I would love to hear about your experiences down in the comments below.