Category: Database Tools

Handy Use of a SQL Cursor

Welcome to another coffee break post where I quickly write up something on my mind that can be written and read in less time than a coffee break takes.


Several months ago I ran into a situation where I needed to update the records in one table based on values in a related reference table. To do this update, I was going to need to run an existing stored procedure once for every record in the reference table, which I believe contained information about countries and markets within those countries. The reference table looked something like this:
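(Illustrative values only; the real reference table had different names and data.)

CountryID   MarketID   CountryName      MarketName
1           1          United States    Northwest
1           2          United States    Southwest
2           3          Canada           Ontario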

The stored procedure I needed to run had input parameters for CountryID and MarketID, as well as several other things that aren't important for this post. When I first looked at this task, I was not looking forward to running the stored procedure manually dozens of times, once for each combination of Country and Market. But I knew there must be a better way. I'm a programmer; I can find a way to automate this tediousness.

A practical use for a cursor

If you've developed SQL code for any length of time, you've probably heard an older DBA or database developer tell you to never use cursors! I have personally been reminded of that many times, so I had never even considered using one or tried to use one. But when I was staring down the barrel of updating two values in a procedure execution call, running it, waiting several minutes for the procedure to complete, then doing it all over again, dozens of times, I knew I had to give a cursor a try.

I of course had to Google how to write a cursor, since I had never done so before, but I was quickly able to write the script I needed. The cursor loops over every record retrieved from the reference table by a query I wrote, injecting each pair of CountryID and MarketID values into the input parameters of the stored procedure. This easily automated the tedious manual work, and it finished in far less time, since no person had to slowly update each parameter value between runs.
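Here is a minimal sketch of the pattern I used. The table, column, and procedure names below are stand-ins, since I can't share the real ones:

-- Hypothetical names throughout; substitute your own reference table and procedure
DECLARE @CountryID int, @MarketID int;

DECLARE market_cursor CURSOR LOCAL FAST_FORWARD FOR
    SELECT CountryID, MarketID
    FROM dbo.CountryMarkets;

OPEN market_cursor;
FETCH NEXT FROM market_cursor INTO @CountryID, @MarketID;

WHILE @@FETCH_STATUS = 0
BEGIN
    -- Run the existing procedure once per reference record
    EXEC dbo.UpdateRecordsForMarket
        @CountryID = @CountryID,
        @MarketID  = @MarketID;

    FETCH NEXT FROM market_cursor INTO @CountryID, @MarketID;
END;

CLOSE market_cursor;
DEALLOCATE market_cursor;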

Summary

Maybe cursors aren’t quite the devil I always believed them to be. I know they can certainly cause performance issues on databases when they’re written into stored procedures and ran regularly, turning what should be set-based work into row-based work, but I have learned that there is at least one fantastic use. And this use will make my life easier going forward any time I need to run one stored procedure a lot of times with different input values.

Sources

Here is the main Stack Overflow answer I used to help me write my own query: https://stackoverflow.com/a/2077967. As you can see, the first comment on that answer literally calls cursors evil, which I find amusing.

Taking a Break and Switching Things Up

If you know me personally or follow me on LinkedIn, then you know that several weeks ago I made the decision to leave my friends and colleagues at Scentsy to take a new role at Boise Cascade. I made this decision for a lot of different reasons, but the main one being that I felt like I needed more of a challenge and wanted to get back into development work rather than focusing on project organization and management.

I am just about through my third week in my new role, and it has been a scary, tiring, interesting, and even somewhat fun experience. Starting any new job comes with new stress as I adjust to the environment and coworkers, and while I'm not completely out of the adjustment phase yet, I am already becoming more comfortable in this role, so I thought I should come here and give an update.

Previously, this blog focused heavily on cloud development with AWS, since that is what I was developing in daily at Scentsy. But in my new role I am drinking from a firehose learning cloud development in Azure, so my posts will switch to focus on that platform instead. As of right now, I have no plans to do personal development projects in AWS to continue producing content in that space. I am really excited to be learning about Azure, though, and am equally excited to start sharing what I learn here on my blog, since the Azure documentation seems just as hard to understand, interpret, and use as the AWS documentation. So I will have plenty of my own learnings to share.

In my new role, I am also jumping into the deep end with scripting in Python which I am loving (but of course having issues with just like any other platform), so I will be starting to share some of my learnings about that space as well. Plus, there’s always a possibility I’ll be learning some other new technology because of how diverse my new role is, so my content going forward might be a lot more diverse than it was in the past.

It may take a few more weeks for me to get into the swing of things enough at my new job to feel like I have something worth posting about here, so I hope you’ll stick around and give the new content a read once it comes out. And as always, if there is anything in particular you would like me to write about, let me know in the comments and I will try to get to it!

Two Useful Keyboard Shortcuts for SSMS

Welcome to another coffee break post where I quickly write up something on my mind that can be written and read in less time than a coffee break takes.


This morning I was doing my normal work when I realized I should share something I find super useful and use frequently in SSMS that a lot of developers seem not to know about. These are small actions, but they make your life easier when doing a lot of query editing in SSMS.

How to Minimize the Results Window

I have told many developers about this keyboard shortcut, and they have all appreciated it. I'm sure most people who frequently work in SSMS would like to be able to minimize and maximize the results window as needed, to give themselves more screen real estate while coding but still be able to see their query results when they need them. But there is no minimize button for the results window in SSMS.

The only way that I know of to minimize and then maximize the results window in SSMS is to press CTRL + R. I use this keyboard shortcut every day at work while writing queries or updating existing queries.

How to Refresh the Intellisense

I have also had to tell many developers about refreshing the intellisense suggestions in SSMS, since it often trips people up if they don't know how it works. First, you should understand that the intellisense offered by SSMS is only accurate as of the time you opened your query window or changed the connection for the query window (usually). If you've been working in the same query window for a while and have made DDL changes to any tables, functions, stored procedures, etc., intellisense is likely out of date and could tell you that a table or column you're trying to reference doesn't exist when you know it does.

If you ever run into this situation, where intellisense tells you something doesn't exist but you know it does, press CTRL + SHIFT + R and the suggestions/corrections will be refreshed.

Bonus shortcut for Red-Gate SQL Prompt

Similar to the intellisense built into SSMS, if you are using the SQL Prompt tool from Red-Gate, you can run into the same issue of the tool not recognizing that objects or columns exist when you know they do. If you run into that issue and would like to refresh the suggestions list for SQL Prompt, use CTRL + SHIFT + D.

Getting SQL Prompt to Prompt on RDS Servers

This may seem like a ridiculous thing to need to write about, making the Red-Gate tool SQL Prompt generate prompts like it should, but I had been having a weird issue with it over the past couple of months and finally learned the solution. So of course I thought I should share it!

What is SQL Prompt?

SQL Prompt is a tool made by Red-Gate that works as a much cleaner, nicer, and more useful autocomplete feature for SQL Server Management Studio (SSMS). It is a plugin you install into SSMS that then seems to magically help you write queries faster. Not only does this tool autocomplete database, schema, table, and column names in your queries, but it also provides a lot of other useful tools, like a Snippets Manager, which allows you to use default and custom snippets to write code faster (e.g. typing "sf" then pressing Tab expands to "SELECT * FROM", so all you need to type is the name of the table you want to select from).

Every developer in my organization uses this tool heavily in our day-to-day operations while writing any SQL scripts because it makes writing queries so much faster. So when my SQL Prompt seemed to stop working after an update, I was getting really frustrated because it meant I had to write all of my SQL queries manually again. And when all you do all day is write SQL, that adds up to a significant hindrance to your work speed.

My Problem

The problem I was having with SQL Prompt was that when I connected to any of our RDS database instances, the tool would no longer prompt for schemas, tables, or columns, which made my coding life much harder. Oddly, the Snippets Manager portion of the tool still worked fine, so at least I didn't need to type out the queries I normally use the snippet shortcuts for. Also oddly, Prompt worked perfectly fine when connecting to databases that weren't on RDS instances; it was only happening for RDS databases.

I dealt with this issue for months on our production server, since I figured it was due to security settings or something else I wouldn't be able to fix. I don't access prod servers very frequently, so when I did use them and Prompt wasn't working, it wasn't as bothersome. But after I had to completely reset my developer computer and reinstall SQL Prompt, I started having the same issue with our lower-environment databases, which I work with every day, so Prompt not prompting was suddenly a big deal.

The Solution

I created a support ticket with Red-Gate, since I've always had good luck with their support services. This time wasn't any different: within a couple hours of creating the ticket, I had an email from a support rep asking if I had tried checking the "Trust Server Certificate" box on the Connection Properties tab while connecting to the RDS servers. No, I had not, because I did not know that option existed before that day.

I disconnected from the RDS server then reconnected, making sure to check that box before clicking “Connect”, and now I had SQL Prompt back up and working, providing prompts of schemas, tables, and columns just like I want it to. Yay!

I also logged onto our prod server to see if doing the same thing there would fix that issue, and it fixed Prompt there as well. I am so excited to be able to not type out every detail of every SQL query again!

I love quick fixes like this.

Why CFTs Take so Long to Delete

Welcome to another coffee break post where I quickly write up something on my mind that can be written and read in less time than a coffee break takes.

Background

Recently, I went through an AWS workshop for Lake Formation, a data lake management tool in AWS, and that workshop had me create many different CloudFormation Templates (CFTs) to spin up services to use in the workshop. After I finished, I had to go through my development AWS account for work and clean up everything that had been created, so we would stop paying for services I no longer needed.

While attempting to delete the many CFTs I had used, I saw one that was seemingly stuck in the DELETE_IN_PROGRESS state for almost 20 minutes. I did not realize it could take so long to delete one CFT and was getting worried that it was actually stuck, so I started searching online to see if this had happened to others as well.

Why does the delete take so long?

I found this Reddit post of someone reporting the same thing, and it linked to a very informative answer to a similar question on Stack Overflow. I would recommend you go and read that detailed answer there for the best understanding of why CFTs sometimes take forever to delete.

The simple answer is that's just how it is. My CFT in question had set up a lot of Virtual Private Clouds (VPCs), Elastic Compute Cloud (EC2) instances, and Elastic Network Interfaces (ENIs), as well as other resources, and some of those items simply take a while to delete.

Even though I can't speed up the deletion process for these big CFTs, at least now I know that if I ever need to delete another large CFT from my AWS account, I can expect it to take longer than I would anticipate.

How to Get Public IPv4 DNS for AWS EC2 Instance

I have been trying to learn how to work with AWS Glue because it’s probably going to be a new ETL solution my organization uses as we migrate to Postgres in AWS. Part of learning how to use Glue is learning how to set up and use Postgres RDS instances so that I can move data between them with Glue.

Setting up the RDS instances was the easy part, since AWS makes that process go very smoothly. Even setting up the EC2 jump server to connect locally to my RDS instances seemed easy as well: only a few options to select, and then a new server was created for me.

The Problem

However, in my most recent attempt at creating all three of these servers (I regularly delete them while not in use so as not to incur additional charges), I kept running into an issue where my EC2 server was not being assigned a public IPv4 DNS address, and without that value I can't use the server as a jump host from my local computer. That was a big problem for me.

I spent over half an hour trying to troubleshoot the problem, double-checking the VPC rules for DNS and everything else I could think of, and none of it worked. I terminated and recreated the instance multiple times, and that did not do the trick. Finally I found this Stack Overflow answer that was exactly what I needed; the fix was super obvious but also hard to see at the same time.

The Solution was Simple

For some unknown reason, the settings AWS defaulted to when I was creating new instances set "Auto-assign public IP" to "Disabled", and I didn't catch it at first because that section of the instance creation settings was also in a non-editable state by default. If you run into this same issue, when you get to the "Network Settings" part of the instance creation dialog and "Auto-assign public IP" is set to Disabled with seemingly no way to change it, click the Edit button at the top right of that pane to change the default instance settings, then enable the option to assign a public IP address to the instance.

It’s that simple. I can’t believe it took me so long to figure out something so obvious! But that’s life in IT sometimes.

Extra Note

When you stop and then start your EC2 instance again, it will be assigned a new public IPv4 DNS name. It took me longer than I would like to admit to figure this out. I kept having an issue each morning where my SSH tunnels to my RDS databases through this EC2 server would no longer work. After several weeks of trying many different things, I finally figured out that the public IP address was changing each time I stopped my instance at the end of the work day and restarted it the following day, and that's what was breaking my tunnel.

Do DML Statements Work in Liquibase Changesets?

After finishing last week's blog post about how to work with Liquibase, I decided to find the answer to one of the outstanding questions I had about the tool: whether or not it allows you to put DML statements in your changelogs and changesets. I couldn't find any documentation online about putting DML in changesets, so I had to figure it out myself. Finding the answer was much easier than I thought it would be, since all it involved was adding a DML statement to a changelog, running the Liquibase update command, and seeing what happened.

So do DMLs work in Liquibase changesets?

Yes, they do. To prove it, I opened the existing changelog file that I created for last week’s tutorial and I added a new changeset.

Screenshot of text editor containing Liquibase changeset with DML statement to insert into a table
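The changeset in the screenshot looked roughly like this (the table and values below are stand-ins for the ones in my test database):

-- changeset elahren:1.3
-- A plain DML statement in an otherwise normal changeset
INSERT INTO account_types (type_name)
VALUES ('Standard');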

Then I opened the command prompt for Liquibase and ran the normal update command to get my database aligned with the changelog file.

Screenshot of Liquibase command window showing successful execution of DML changeset

The update command completed successfully, which I truly was not expecting. Then I went into the database to see if that DML statement had actually been executed against the DB, and it had!

Screenshot of PGAdmin window showing SELECT statement results containing record inserted by DML changeset

I was very excited to see that, because it meant that if my team decided to switch to this tool, we could continue deploying DML scripts alongside any DDL scripts they may be associated with.

Summary

Today's post is short and sweet. I wanted to see if Liquibase had a key feature I was hoping for but couldn't find documentation about, and I was thrilled to see that it does work with DML. Such a small but important feature.

How to Set Up and Use Liquibase, Part 2

In last week’s post, I covered the initial setup steps you must follow when starting to work with Liquibase. In this week’s post, I will be finishing up my tutorial of getting started with Liquibase. If you haven’t yet downloaded and set up Liquibase on your computer, please review that post before reading this one.

What's in this post:

  • Create the baseline changelog file for your database
  • Adding and tracking ongoing database changes
  • Structuring your changelogs according to best practices
  • Outstanding questions
  • Summary

Create the baseline changelog file for your database

Using the command "generate-changelog" with the CLI for Liquibase, we can create a SQL file containing queries that will regenerate all objects in your database. Which database objects get scripted into this file depends on which license you have for Liquibase. If you have the open-source version of the tool, it will script out all non-programmable objects, like tables and keys. If you want or need to script out your programmable objects as well, such as procedures and functions (plus other items), you will need the Pro version of the tool.

Either way, the command for creating the script is exactly the same.

liquibase generate-changelog --changelog-file=mydatabase-changelog.sql --overwrite-output-file=true

Let's break this command down. The first two words are simple: you're calling Liquibase and specifying that you want to run the generate-changelog command. Next is the "changelog-file" argument, which specifies the file you want to write the new changelog to. The final argument, "overwrite-output-file", tells the tool whether to overwrite that file if it already exists. In this case, I specified true because I want the tool to overwrite the example changesets in the file it created during project creation with the actual queries for my database. After running this command, you should get a success message like the following.

And if you open that specified file now, it should contain the actual scripts to generate all of the objects in your database, with each change separated into its own changeset. Each generated changeset is tagged with the username of the person who generated the file, as well as a tracking/version number for the set.
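For example, a generated changeset might look something like the following. The table is made up, and the exact version-number format Liquibase generates may differ:

-- changeset elahren:1718000000000-1
CREATE TABLE customers (
    customer_id int PRIMARY KEY,
    customer_name varchar(100) NOT NULL
);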

Now you are ready to start doing normal development and changes to your database because you have baselined your project.

Adding and tracking ongoing database changes

There are two methods for adding and tracking database changes with this tool: 1) add your scripts to the changelog file as changesets, then "update" the database with those changes, or 2) make your changes within the IDE for your database (e.g. PGAdmin), then use the "generate-changelog" command to identify and script those changes.

Method 1: Adding Scripts to Changelog File

Open your changelog file and add a new line. On that line, add the required comment that lets Liquibase know you are starting a new changeset. This line looks like "-- changeset author:versionNumber", for example "-- changeset elahren:1.1". Then add a line below your changeset comment with the DDL script you would like to run on your database. Once you have added all the changes you would like to your changelog file, save and close it, then open the Liquibase command prompt to execute those changes on your database.
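Here is a minimal sketch of a changelog with one changeset added this way. The object names are made up, and the "-- liquibase formatted sql" header shown is the one Liquibase expects at the top of SQL-format changelogs:

-- liquibase formatted sql

-- changeset elahren:1.1
ALTER TABLE customers
    ADD COLUMN loyalty_tier varchar(20);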

If you would like to preview the changes Liquibase will run on your database, you can run the command "liquibase update-sql", which will show you all the SQL that will be executed, including your queries as well as the queries Liquibase runs to track what you're applying. In the below screenshot, the commands with a green square are the ones I included in my changesets, and the commands with a blue square are the ones Liquibase runs to track the changes.

If the preview looks correct, you can then run the command "liquibase update", which will apply all the previously viewed SQL queries to your database. You can verify the changes were successfully applied by opening your database in your normal IDE (e.g. PGAdmin) and confirming the changes.

Method 2: Make Changes in your IDE

The process for making changes in your IDE and then tracking those changes in Liquibase is almost exactly the same as the process used to create the initial changelog file when setting up the project. It is as easy as making whatever database changes you want in your IDE, then opening the Liquibase CLI and running the "generate-changelog" command, either with a new file name if you want to put the changes in a new changelog file, or with the same file name and the "--overwrite-output-file=true" argument.

If you go with the first option, writing to a new changelog file, it seems you will then need to edit the file after creating it to remove any queries for objects that weren't part of your latest changes (since the command will try to recreate all objects in your database).

I'm not sure if this is the recommended workflow for tracking database changes, but it was a feature my team was hoping to get from the database change tracking tools we've been investigating, so I found a way to make it happen with Liquibase. If you want or need a "database-first" approach to change tracking (making changes directly to the database and then generating files to track them), instead of a "migration-first" approach (writing migration/change scripts and then applying them to your database), it appears that is technically possible with this tool.

Structuring your changelogs according to best practices

You can set up and structure your changelogs in any way you would like (it's your project), but Liquibase does have some suggestions to help you stay organized. There are two different organization methods they recommend: object-oriented and release-oriented.

Object-oriented means you create a different changelog file for each object, or type of object, being tracked in your database (e.g. one file for stored procedure changes, one file for table changes, etc.). I personally don't like the idea of this organization method, since it would mean you could be updating many files each time you make database changes, such as when you're updating procedures, tables, indexes, and views all for one release. However, having the object types separated could also be a benefit, depending on how you normally complete your work.

Release-oriented means you make a new file for each release of your software/database. This method seems more familiar to me personally, since it's similar to the concept of migration scripts in Red-Gate's SQL Change Automation and Flyway tools, where you can combine multiple database changes into a single file and deploy them all at once. This process could also work well for organizations that use a more structured delivery system rather than continuous delivery/agile development: you could put all of your changes for the week, month, or whatever your development cycle is into one file to associate with one particular release.

Whichever method you choose should work well if you set it up properly; the decision only depends on how you work and how you prefer to track that work.

Outstanding Questions

The first outstanding question I have about this tool right now is: can you put DML scripts in your changelogs? That is something supported by Red-Gate's SQL Change Automation and Flyway tools, which is what I'm used to. So far, I haven't been able to figure out if that's possible with Liquibase. Being able to deploy DML changes alongside a regular deployment really simplifies handling DML that you may need to run in your production environment, because it ensures the DML goes out with the deployment it relates to. An example of this is if you are adding a lookup-type table (e.g. AccountTypes) and need to add a few records to that table after it's created. Normally, you would need to run such a DML script manually after your deployment completed. But SCA and Flyway allow you to put the DML in a deployable script that will automatically insert that data after the table is created. That can of course come with its own challenges, but it's something I've really enjoyed with Red-Gate SQL Change Automation, so I want it to be possible with Liquibase.
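As a sketch of that scenario (hypothetical names and values), the deployable script would pair the table's DDL with its seed data:

-- Create the lookup table and seed its records in the same deployment script
CREATE TABLE AccountTypes (
    AccountTypeID int PRIMARY KEY,
    TypeName varchar(50) NOT NULL
);

INSERT INTO AccountTypes (AccountTypeID, TypeName)
VALUES (1, 'Checking'), (2, 'Savings'), (3, 'Money Market');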

The second outstanding question I have about Liquibase is whether or not it can work with a secrets manager for database user passwords. The way I set up my test project locally required the password for Liquibase's database user to be saved in cleartext in the properties file, which is not safe. For my purposes it was fine, since it's a dummy database that doesn't contain any real data. But for production purposes, there is no way we would ever save a database user password in cleartext in a file. I haven't had the chance to research this question more yet, so I'm not sure whether the tool works with a secrets manager or not.

Summary

When I first started working with Liquibase I was pretty frustrated with it since it was a totally new-to-me tool with documentation I didn’t find intuitive. But I kept working with it because I wanted to make it work for my organization and then just found it interesting to learn more about. I now feel fairly confident working with the tool in the ways my organization would need.

For being a tool with a completely free-to-use version, it seems to have a good number of features that many developers might need for tracking and deploying changes to their databases. I can't honestly say that I would prefer this tool over Red-Gate's SQL Change Automation or Flyway tools, which I currently work with, since they have a better user interface and more intuitive script creation processes. But Liquibase does seem like a useful tool that I could get used to, given enough time. It's worth a try if your organization is working with a limited tool budget.

How To Set Up and Use Liquibase, Part 1

In a recent post, I gave an overview of what Liquibase is and what features it offers at a bare minimum. This week and next, I am going to give the step-by-step instructions I followed to properly set up a Liquibase project after playing with the tool for several hours. I personally found the documentation offered by Liquibase a little confusing, so this post is essentially the notes I took while figuring out what I really needed to do to set up a demo project. The aim of this post is not to be an exhaustive tutorial of the software, since I am far from a Liquibase expert. Let me know in the comments if you found any of this useful or interesting!

What's in this post:

  • Download Liquibase and install it on your computer
  • Verify that Liquibase is properly installed on your computer
  • Create your first Liquibase project
  • New project files
  • Database changes made with project initialization
  • A quick note about changelogs and changesets
  • Summary

Download Liquibase and install it on your computer

I won’t provide a link to it here (because that would be sketchy), but you can find the tar or Windows installer download for the free (open-source) version of Liquibase on the Liquibase website under Editions & Pricing > Open Source. I used the Windows installer version since I am working on a Windows machine. After you download the installer, you can run it to install the tool. I did not change any of the setup options (there were very few). The installation and setup were both extremely fast.

After the installer runs, you should be able to see where the tool was installed, which for me was under Program Files. You can start working from that folder, or you can copy the entire folder to another directory so you can play around with it without the fear of breaking something and then having to reinstall everything to fix it. I made a copy in another location and worked from that copy (which was suggested by one of the Liquibase tutorials).

Verify that Liquibase is properly installed on your computer

The next important step is to verify the status of the tool to make sure it installed correctly. To do this, open a command prompt window, navigate to the directory of the Liquibase folder you want to work with, then run the command "liquibase status".

As you can see in the above screenshot, although the "status" command returned an error (since I hadn't set up a Liquibase project yet), the tool did run and work (as evidenced by the giant Liquibase printout). Download and installation of the tool were successful.

Create your first Liquibase project

Creating your first project is simpler than I originally thought it would be. I knew that at a bare minimum, a file called "liquibase.properties" needed to be created, but I thought I had to do that manually (mostly because I skipped a page or two in the documentation that I thought weren't needed). Although it is possible to create that file manually and then enter the necessary values yourself, the easiest way to set up a new project and all of its necessary files is to use the init command: "liquibase init project".

That command runs you through the process of setting up a new project, including setting the necessary values in the liquibase.properties file. If you are fine with using all default settings for your project, specify "Y" when prompted; otherwise specify "C", which allows you to customize the values used by the project so you can work with your own database.

What I specified for each of the setup prompts:

  • Use default project settings or custom? Custom, "C"
  • Relative path for the new project directory? In the main Liquibase folder, not in a subfolder: "./" (I would likely change this for an actual development project if I were creating one for work)
  • Name for the base changelog file? "mydatabase-changelog", but you can set it to whatever makes the most sense to you. I definitely wouldn't use that format for an actual work project if we move forward with this tool.
  • Changelog format? "sql", since I am working with a normal SQL database
  • Name of default properties file? "liquibase.properties", the recommended name, but you could change it to something else
  • JDBC URL to the project database? "jdbc:postgresql://localhost:5432/postgres". This points to the database called "postgres" on my local machine, which is a PostgreSQL database, so it uses that engine's connection string format. Other database engines will have their own connection string formats.
  • Username to connect to the database? "postgres", the default user for a new PostgreSQL database, which is what the project should use to interact with the database. You should change this to a better-defined user created just for the Liquibase tool.
  • Password? The password associated with the specified user. For real-world purposes, I hope we'll be able to use a secrets manager to update that file at deployment time, so that we don't have to store the password in cleartext.

New Project Files

After you go through all of the above steps, Liquibase will have created several new files for you. I am not sure what most of them are for at this point, but I think some of the files created besides the properties file are used for CI/CD integration (flow files). The properties file for my project now looks something like this after setup:
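(The original screenshot isn't reproduced here; the sketch below reflects the values I entered during init, and the exact key names in your generated file may differ depending on your Liquibase version.)

changeLogFile: mydatabase-changelog.sql
url: jdbc:postgresql://localhost:5432/postgres
username: postgres
password: ********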

If you have a license key for the enterprise/Pro version of the software, you can scroll to the bottom of this file, uncomment the line "liquibase.licenseKey:", and add your license key after the colon on that line.

You can also review the default changelog file created by Liquibase if you specified that during project initialization. For me, the file contains some sample changesets that don’t relate to my actual database:

These are the tables in my database for the Liquibase project:

To set up this file to represent your database, we’ll need to use the “generate-changelog” command on the project, and I’ll walk through that in the next post.

Database changes made with project initialization

As you can see from the last screenshot above showing the tables in my database, there are two tables called databasechangelog and databasechangeloglock. Those two tables are created in your project database when you connect Liquibase to the DB, because that is how the tool keeps track of which changelogs and changesets have already been applied to the database. The "lock" table is what Liquibase uses to make sure only one process is applying changes to the database at a time.
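The tracking table is an ordinary table, so if you're curious about what has been applied, you can query it directly. A simple sketch (these columns exist in the standard databasechangelog table):

-- Inspect what Liquibase has recorded as applied to this database
SELECT id, author, filename, dateexecuted, orderexecuted
FROM databasechangelog
ORDER BY orderexecuted;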

You are now ready to start making changes to your database and tracking those changes with Liquibase!

A quick note about changelogs and changesets

The changelog is the main file (or files) containing the queries you use to interact with your project database. You can have just one, or you can have a series of changelogs, each used for a different part of your database.

Each changelog file contains one or more of what Liquibase calls changesets; a changeset is a single unit of change for your database, like an ALTER TABLE statement or any other DDL statement you can run against a database. A changeset is identified in the changelog file by a comment line containing the change's author and a change number, which can be essentially any identifier you would like to use to track the change. If you follow the above steps to have Liquibase create your first changelog file for you, it will generate a version number for each of the queries listed in the file, and the changesets will be applied in the order they appear in the file.

Summary

In this post, I covered how to go about setting up your first Liquibase project and what each of the related commands and files means. This tool is surprisingly simple to work with, as can be seen from the initial setup process. Next week, I will cover what to do once you have the tool successfully installed and set up on your computer.

What’s in the next post:

  • Create the baseline changelog file for your database
  • Adding and tracking ongoing database changes
    • Method 1: Adding scripts to changelog file
    • Method 2: Making changes in your IDE
  • Structuring your changelogs according to best practices
  • Outstanding questions
  • Summary

More Postgres vs. SQL Server

Welcome to a coffee break post where I quickly write up something on my mind that can be written and read in less time than a coffee break takes.

As I’m getting further into my PostgreSQL adoption project at work, I am continuing to learn about so many more small differences between Postgres and SQL Server that I never would have expected. None of the differences I’ve found so far are profound, but they will pose challenges for all the developers we have (including myself) that have only worked with SQL Server.

Postgres does not have default constraints

That's right: there is no way to make a default constraint on a Postgres table. Instead you define a default value, which cannot have a name assigned to it. In SQL Server, you can define a default constraint in essentially the same way as you would define a unique or key constraint, and you can give that constraint a name. In Postgres, you simply specify that a column has a default value when adding that column to a table. In theory (I haven't tested it yet), the generation of default values works exactly the same between the two engines; one just isn't saved as its own named object.
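Here's a quick sketch of the difference, using a made-up table:

-- SQL Server: a default is a named constraint you can reference later
ALTER TABLE dbo.Orders
    ADD CONSTRAINT DF_Orders_Status DEFAULT ('Pending') FOR Status;

-- Postgres: the default is just a column property with no name
ALTER TABLE orders
    ALTER COLUMN status SET DEFAULT 'Pending';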

The index differences are amazing (and confusing)

I'm not going to lie, the prospect of figuring out when to use which kind of index in Postgres is daunting to me. Deciding which type of index to use in SQL Server is very straightforward: you mostly just need to know whether the table already has a primary key/clustered index and go from there, since you pretty much only have clustered and non-clustered indexes. But Postgres has a handful of different options, and you need to know how the data will be queried in order to pick the best one. The default/usual best option seems to be a B-Tree index, which is comforting and familiar, but one day I know I'll likely have to figure out the other options as well.
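For illustration, here is a sketch using made-up tables. In Postgres you pick the access method per index with the USING clause; B-Tree is what you get if you don't specify one:

-- B-Tree (the default): good for equality and range queries
CREATE INDEX idx_orders_customer ON orders (customer_id);

-- GIN: suited to multi-valued data like full-text search vectors
CREATE INDEX idx_docs_body ON documents USING gin (to_tsvector('english', body));

-- BRIN: a compact option for very large tables with naturally ordered data
CREATE INDEX idx_events_created ON events USING brin (created_at);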

The default length of object names is really short

Of these three new items I learned recently, I think this one is going to be the most annoying. The maximum name length for objects in Postgres is surprisingly low: 63 bytes by default, while the max length for object names in SQL Server is 128 characters. I'm mostly concerned that we're going to need to figure out how to rename many long object names as we migrate off SQL Server, and that could be tedious. But we'll see how that goes. I read somewhere in my research that you may be able to change this default length (apparently it's a compile-time setting in Postgres), so that will need to be something else for me to research.
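You can see the limit in action with a made-up example. Postgres doesn't error on a too-long name; it truncates the identifier to the first 63 bytes and raises a notice:

-- This identifier is longer than 63 bytes, so Postgres will truncate it
-- and warn with a NOTICE that the name will be shortened.
CREATE TABLE a_table_name_that_is_much_too_long_to_fit_within_the_postgres_identifier_limit (
    id int PRIMARY KEY
);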