
Using Custom Script Extensions with Bicep Templates

In a previous in-depth post, I walked through creating a Bicep template to deploy an Azure Infrastructure as a Service (IaaS) virtual machine, which included the use of a Custom Script Extension (CSE). In this post, I want to go more into depth about some of the trials I went through while trying to get the Custom Script Extension to work. Troubleshooting it wasn’t the easiest thing in the world, since Azure gives very little feedback when that resource fails to deploy from a Bicep template, so I am hoping I can give anyone else struggling to use the Custom Script Extension resource to run a PowerShell script after deploying an Azure resource some assistance with their process.

This post only has the code for the CSE itself, not the rest of the template I use it in. If you would like to see the full template, please view it on my GitHub.

What’s in this post

What is a Custom Script Extension resource?

In the Bicep world of infrastructure as code templates in Azure, there is a resource type that is called a “Custom Script Extension” which isn’t so much a “resource” as you would normally expect in a Bicep template. Normally, a “resource” is what it sounds like: an Azure resource you want to create, like a server, network security group, database, and even things like role assignments. But for the Custom Script Extension, it’s a method of running a PowerShell script on a Virtual Machine (VM) resource after it has been deployed, to install software or do other necessary setup on the machine.

My use case for a Custom Script Extension

The reason why I use a Custom Script Extension in my Bicep template when I create an Azure IaaS VM is so that I can initialize the disks attached to the VM so they can actually be seen and used like you would expect. For some reason, when you create an Azure VM with a Bicep template, the data disks are attached but not initialized or formatted. Because of this, when you first log in to the machine after it has been created, you won’t see the disks in File Explorer like you would expect. Thankfully, PowerShell and the Custom Script Extension allow me to initialize those disks and name them what we normally want them to be named, without having to log in to the server and do it manually myself.
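To give an idea of what that part of my script does, here is a minimal sketch of initializing and formatting any raw disks with PowerShell. The drive label is a made-up example value, and this is not my exact script:

# Find any attached disks that haven't been initialized yet (RAW partition style)
$rawDisks = Get-Disk | Where-Object { $_.PartitionStyle -eq 'RAW' }

foreach ($disk in $rawDisks) {
    # Initialize the disk, create a single partition using all available space,
    # and format it as NTFS ("DataDisk" is a placeholder label)
    Initialize-Disk -Number $disk.Number -PartitionStyle GPT
    New-Partition -DiskNumber $disk.Number -UseMaximumSize -AssignDriveLetter |
        Format-Volume -FileSystem NTFS -NewFileSystemLabel 'DataDisk' -Confirm:$false
}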

I originally had my PowerShell (PS) script set to also join the machine to our corporate domain after it was created, but I recently removed that part because it would not work with the protectedSettings section of the extension description, which you’ll see below. If you want more details about why I couldn’t get this to work and thus had to remove the most problematic sections from my PS script, keep reading.

My setup for a Custom Script Extension

The following is the Bicep resource definition I use as part of my wider IaaS VM creation template to set up the VM after it is deployed.

var initializeDisksScript = 'https://stgAcct.blob.core.windows.net/myContainer/SetUpWindowsVM.ps1'
var customScriptExtName = 'CustomScriptExtension' // any descriptive name works here

resource customScriptExtension 'Microsoft.Compute/virtualMachines/extensions@2024-07-01' = { 
  name: customScriptExtName
  location:'westus2'
  dependsOn:[sqlVM]
  properties:{ 
    publisher:'Microsoft.Compute'
    type:'CustomScriptExtension'
    typeHandlerVersion:'1.2'
    settings:{ 
      fileUris:[
        initializeDisksScript
      ]
      commandToExecute:'powershell -File SetUpWindowsVM.ps1'
    }
    protectedSettings:{
      storageAccountName:storageAcctName
      storageAccountKey:storageAcctKey
    }
  }
}

A quick explanation of that definition: it creates a resource of an extension type for VMs, and it is dependent on the VM that I specify further up in the full Bicep template. The script extension is set to execute a PowerShell command so that I can run a file called SetUpWindowsVM.ps1, which the script runner downloads from the storage account location specified in the variable initializeDisksScript. There are two different sections of settings that you can specify: a normal “settings” section, whose values will be output to the log after deployment, and a “protectedSettings” section, whose values do not get output to the log after deployment.

How the Custom Script Extension Works

After the Bicep template has created the VM, it will then set about running the script file I designated in my CSE resource definition. The first step is to download the file from the specified fileUris location, which for me is an Azure Storage Account. The extension is able to connect to that Storage Account, since I provided the name and access key in the protectedSettings, and then download the file from there onto the local machine. The general location it’s downloaded to is “C:\Packages\Plugins\Microsoft.Compute.CustomScriptExtension\1.*\Downloads\<n>”, where “1.*” is the version of the Custom Script Extension handler and “<n>” is a seemingly random integer value that the extension picks. For me, that was always “0”. After the file is downloaded, the CSE handler tries to execute the commandToExecute that you specified in your Bicep template. Since the PowerShell file is downloaded locally to the location the CSE expects to run it from, you do not need to specify the full path to the file; you can use relative path formatting.

If you’re having issues with the CSE, like I was, you can get insight into what happened when the CSE ran by viewing the logs in this location: “C:\WindowsAzure\Logs\Plugins\Microsoft.Compute.CustomScriptExtension”. For more on that, see the troubleshooting section below.

Issues with the Custom Script Extension

As of the time this was posted, I have been going back and forth with Microsoft support for several weeks to figure out how I could possibly use the commandToExecute specification in the protectedSettings object of the resource definition, and I have not yet resolved the error while working with them. The issue I am having is that the PowerShell script I actually want to run takes a parameter containing a password, so I should use the protectedSettings to pass in the command with the password parameter so that the password is not output to the deployment log in plain text after the template is deployed. However, if I put the commandToExecute into the protected settings, nothing seems to happen and the script is not executed. If I put the same exact command into the normal settings, the script completes successfully, yet my password is insecurely output to the log, which I do not want.
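For context, the variation I was attempting looked roughly like the snippet below, with the commandToExecute moved into protectedSettings alongside the storage account details. This is only a sketch of what did not work for me, using the same parameter names as the rest of the template:

protectedSettings:{
  commandToExecute:'powershell -file .\\SetUpWindowsVM.ps1 -password "${domainAdminPassword}"'
  storageAccountName:storageAcctName
  storageAccountKey:storageAcctKey
}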

Since I haven’t been able to resolve this problem, even with the help of Microsoft support, I have updated my PowerShell script to remove the section that joins the machine to the domain, which removes the need for me to pass through a password, so I can use the normal settings section to successfully call the commandToExecute. I am going to continue working with Microsoft support to see if we can finally come to a resolution, but I didn’t want to keep waiting on this post. If I ever fix the problem, I will update here.

Troubleshooting help

As I mentioned in the section above, you may run into random issues with the Custom Script Extension (CSE) if you include it in your Bicep templates. Overall, I think it is still worth using this resource type, but you do need to be armed with some knowledge to help yourself as much as possible. These are the things I found useful when troubleshooting different issues with the CSE.

  • You will need to delete your VM and redeploy it so many times while troubleshooting issues, so be ready for that. Thankfully, deleting a VM and all its associated resources through the Azure Portal has gotten a little easier recently, so that will save you a little time.
  • If you are unsure whether the problem is with your PowerShell script or your Bicep template, create a test version of your VM as it would be created by your template (or run the template to create the VM and then log on to that) and manually run the same PowerShell script on the machine. If it runs when you execute it manually, then the issue is not with the PowerShell script but with the CSE.
  • Do not create the CSE nested under the VM’s resource definition; list it as its own separate resource definition, like I have in my template. It’s much harder to understand what’s happening if you define it as a nested resource, and then you can’t troubleshoot it on its own because the entire VM creation will fail if the CSE fails.
  • Make sure you specify the dependsOn property in the CSE resource definition, or else it will likely get deployed out of order. Bicep is supposed to be smart enough to know that some things should be deployed before others, but it doesn’t seem to understand order of operations for the CSE.
  • To view the deployment logs of the CSE after it has been deployed, go to the Deployments section of the resource group you deployed the template into. Open the specific deployment you created for your template, then click on the link for the CSE resource deployment.
Screenshot showing the deployment overview where you can find more details on the CSE resource deployment
  • Check to make sure the file downloaded onto the local machine by the CSE handler is the correct file. It would be in this general location on the machine: “C:\Packages\Plugins\Microsoft.Compute.CustomScriptExtension\1.*\Downloads\”. I never saw it be wrong, but it doesn’t hurt to double-check.
  • Check the execution logs of the CSE after it runs on the machine. You can find all logs in this location on the machine after it’s been created and the CSE executed: “C:\WindowsAzure\Logs\Plugins\Microsoft.Compute.CustomScriptExtension”. The most useful log to me was the one called “CustomScriptHandler”, since that shows the exact steps the CSE handler ran when it executed your script (or tried to execute it).
  • Set your PowerShell script to generate a log when it runs, because that log will either tell you what happened and went wrong within PowerShell, or it will not be created at all (as in my situation), showing that the CSE itself is the problem. The snippet below, which I put at the top of my script, starts a transcript log:
# Build a log file name from the script name plus a timestamp
$scriptName = 'customScriptExtensionLog'
$scriptName = $scriptName.Split('.')[0]   # strips an extension if the name ever includes one
$runtime = (Get-Date -Format 'hh-mm-dd-MM-yyyy')
$logfile = "D:\" + "$scriptName" + "-" + "$runtime" + '.txt'

# Record everything the script does to the log file; call Stop-Transcript at the end of the script
Start-Transcript -Path $logfile

Mostly, you will need a lot of time and patience to properly troubleshoot issues when the Custom Script Extension goes wrong.

Alternatives to this resource type

While I haven’t used it yet, there is an alternative resource type available to use instead of the Custom Script Extension (CSE), called RunCommand. If you would like to read more about this resource type to see if it would be a better fit for you, there is a great Microsoft blog post about it.

Summary

While I may have gotten very frustrated using the Custom Script Extension resource type in Bicep recently, I still think it’s a very useful feature that could save you a lot of manual work in setting up a virtual machine, if you can get it to run the way you need. If you’ve also run into issues with this resource type, I would love to hear about them in the comments below! Otherwise, I will keep you all updated if I find a resolution to my specific issue after posting this.

Related Posts

Create an IaaS Windows VM with a Bicep Template

Today’s post will be the first technical post of this series and will focus on the script you need to write to generate an Infrastructure as a Service (IaaS) virtual machine (VM) in Azure. For my purposes, I created this Bicep template to speed up our process of making IaaS VMs that host SQL Server instances, for our applications that cannot utilize a Platform as a Service (PaaS) version of SQL Server.

Since the post is very detailed and long, please use the table of contents below to jump forward to specific sections if you would like.

What’s in this post

What the template does

In the template I am about to walk through, I first create the Azure IaaS virtual machine resource. I then create the minimum required networking resources–a network interface and a network security group, which sets up access rules for the server. I then create a resource to install the SQL IaaS extension on the VM, which will connect the main virtual machine resource with the SQL virtual machine resource in the portal. My final step of the template is to create a Custom Script Extension resource which allows me to run a PowerShell script on the VM after it’s created, which finalizes the required setup I need for my machine. All these steps work together to make the bare minimum requirements of the type of VM I need to create.

Getting the entire template file

I have broken down a full Bicep template into its parts in the post below. If you would like to see the template in its entirety for an easier overview, you can find it on my GitHub here.

How to know what configurations to choose

When I first started writing this template, I had no idea what configuration values to choose. When looking at the Microsoft documentation for the VM resource type in Bicep, it seemed like there were endless possibilities for how I could configure the machine, which was overwhelming at first. I then had the idea to compare the list of possible values in the documentation for the resource type against the settings of an existing resource that was similar to what I wanted for my new machine.

That effort originally started with me looking at the normal portal view of the resource, but I didn’t stick with that for long. I quickly realized that the portal view of settings doesn’t show a lot of values that I was looking for. But I figured out that you can view all the setup information for a given resource in the portal in a JSON format, which is very similar to the Bicep formatting I needed. I believe this JSON document is likely what would be used by Azure Resource Manager (ARM) to create the same resource, which is why it’s available for every resource in the portal.

To view the JSON version of the resource settings, navigate to the resource in the portal, then near the top right corner of the Overview page, you will have a link to “JSON View”.

Screenshot of the Azure portal showing how you can locate the JSON view for any resource

When you open that pane, you will see something like the following, with all pertinent details of the resource, which you can then use to help you create your Bicep template.

Screenshot of an example JSON view of an Azure VM

Writing the template

Parameters

When creating a new Bicep template, the first thing you’ll need to decide (apart from what specific resources you need to create) is what parameters you will need to input into the template, which will usually be fed in from a pipeline that deploys the template. For my template, I created the following parameters:

  • vmName (string): The name I want to give to the VM that will be created
  • dataDiskNameBase (string): A base string that I will use to name the data disks that I will create and attach to the VM. This base name follows my team’s standard naming strategy for disks.
  • adminPassword (string): The password that should be assigned to the local administrator account the VM will be created with
  • adminUsername (string): The username that should be given to that local administrator account
  • domainAdminPassword (string): This password is used in the PowerShell script I run after the VM is created
  • storageAcctKey (string): The key associated with the storage account where my PowerShell script (used by the custom script extension resource) is stored
  • storageAcctName (string): The name of the storage account where my PowerShell script is stored
  • resourceTags (object): A list of key-value pairs for the resource tags I want assigned to the VM after it’s created
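If it helps to see those in Bicep syntax, the declarations at the top of the template look roughly like the sketch below. Marking the password and key parameters with the @secure() decorator is my own suggestion here (so their values are treated as secrets), not something required by the rest of the template:

param vmName string
param dataDiskNameBase string
@secure()
param adminPassword string   // @secure() is a suggested addition for secret values
param adminUsername string
@secure()
param domainAdminPassword string
@secure()
param storageAcctKey string
param storageAcctName string
param resourceTags object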

Variables

Variables in Bicep are used so that you don’t have to repeatedly type the same values over and over, like variables in other scripting and programming languages. For my Bicep template, I created the following variables:

  • sqlExtensionName: This variable is used to give a name to the resource for the SQL IaaS extension
  • customScriptExtName: Used to store the name I want to give to the custom script extension resource
  • initializeDisksScript: The URL to the PowerShell file that I have saved in a storage account, which will be run by the custom script extension resource
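For reference, those variable declarations look like the following. The storage account URL matches the placeholder shown earlier in this series, and the custom script extension name here is just an example value:

var sqlExtensionName = 'SqlIaasExtension'
var customScriptExtName = 'CustomScriptExtension' // example name
var initializeDisksScript = 'https://stgAcct.blob.core.windows.net/myContainer/SetUpWindowsVM.ps1'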

Resource: Virtual Machine

Once you have your parameters and variables defined, the next step is to define the main virtual machine resource. Depending on the settings and configurations you need for your VM, you may need to define this resource differently. In my template, I have the following definition for the VM resource:

resource sqlVM 'Microsoft.Compute/virtualMachines@2024-07-01'= {
  location:'westus2'
  name: vmName
  properties:{
    hardwareProfile: {
      vmSize:'Standard_D4s_v3'
    }
    additionalCapabilities:{
      hibernationEnabled:false
    }
    storageProfile:{
      imageReference: {
        publisher: 'MicrosoftWindowsServer'
        offer: 'WindowsServer'
        sku: '2022-datacenter-azure-edition-hotpatch'
        version:'latest'
      }
      osDisk:{
        osType: 'Windows'
        name: '${vmName}_OsDisk_1_${uniqueString(vmName)}'
        createOption:'FromImage'
        caching:'ReadWrite'
        deleteOption:'Delete'
        diskSizeGB: 127
      }
      dataDisks: [for i in range(0,3):{
        lun: i
        name: '${dataDiskNameBase}${i}'
        createOption:'Empty'
        caching:'None'
        writeAcceleratorEnabled: false
        deleteOption:'Detach'
        diskSizeGB: 256  // each of the 3 data disks gets 256 GB
      }
      ]
      diskControllerType:'SCSI'
    }
    osProfile: {
      computerName:vmName
      adminUsername: adminUsername
      adminPassword: adminPassword
      windowsConfiguration:{
        provisionVMAgent: true
        enableAutomaticUpdates: true
        patchSettings:{
          patchMode:'AutomaticByPlatform'
          automaticByPlatformSettings:{
            rebootSetting:'IfRequired'
            bypassPlatformSafetyChecksOnUserSchedule: false
          }
          assessmentMode:'ImageDefault'
          enableHotpatching:true
        }
      }
      secrets: []
      allowExtensionOperations: true
    }
    networkProfile:{
      networkInterfaces:[
        {
          id:netInterface.id
        }
      ]
    }
    diagnosticsProfile:{
      bootDiagnostics:{
        enabled:true
      }
    }
  }
  tags:resourceTags
  identity:{type:'SystemAssigned'}
}

That is a lot of script to look through, so I will break it down into its components and explain what each does.

Location and Name

Location and name are pretty self-explanatory: the Azure region you want the resource deployed in and the name you want to give to the resource. For the location, I specified ‘westus2’, and for the name, I specified the vmName parameter that I would get input from when deploying the template.

Hardware Profile

Next in the resource definition is the specification of all properties for the VM, which is the meat of the resource definition, so I’ll walk through it section by section. The first property set is the “hardware profile”, which is the VM hardware specification/type. I chose a standard version.

hardwareProfile: {
      vmSize:'Standard_D4s_v3'
    }

Additional Capabilities

Next is to specify “additional capabilities” for the VM, which I only use to set the hibernation setting to off (false).

additionalCapabilities:{
      hibernationEnabled:false
    }

Storage Profile

The next section is much longer, which is where we specify everything needed for the storage that should be attached to the VM, using the storageProfile property.

storageProfile:{
      imageReference: {
        publisher: 'MicrosoftWindowsServer'
        offer: 'WindowsServer'
        sku: '2022-datacenter-azure-edition-hotpatch'
        version:'latest'
      }
      osDisk:{
        osType: 'Windows'
        name: '${vmName}_OsDisk_1_${uniqueString(vmName)}'
        createOption:'FromImage'
        caching:'ReadWrite'
        deleteOption:'Delete'
        diskSizeGB: 127
      }
      dataDisks: [for i in range(0,3):{
        lun: i
        name: '${dataDiskNameBase}${i}'
        createOption:'Empty'
        caching:'None'
        writeAcceleratorEnabled: false
        deleteOption:'Detach'
        diskSizeGB: 256
      }
      ]
      diskControllerType:'SCSI'
    }

Image Reference

The “imageReference” object within the “storageProfile” property is where you specify the type of operating system you want to install on the VM, which you do by choosing a standard Azure VM image that should be installed on the machine. You could also specify a Linux-based image or even your own custom image if you have one already. In the future, I will be updating this template to use a custom image that already has SQL Server and SSMS installed, so I no longer have to do those steps manually as well.

imageReference: {
        publisher: 'MicrosoftWindowsServer'
        offer: 'WindowsServer'
        sku: '2022-datacenter-azure-edition-hotpatch'
        version:'latest'
      }

OS Disk

This object is where you specify the specific Operating System information for the VM, as well as the name of the disk the OS will be installed on, and how big that disk should be.

osDisk:{
        osType: 'Windows'
        name: '${vmName}_OsDisk_1_${uniqueString(vmName)}'
        createOption:'FromImage'
        caching:'ReadWrite'
        deleteOption:'Delete'
        diskSizeGB: 127
      }

Data Disks

The next step of defining the storage profile for the VM is to create any data disks that you want the VM to have. For my case, I need to create 3 separate data disks, because that’s what my team requires for our SQL Server setup. To easily create the data disks with fewer lines of code and less manual specification, I’ve used a loop in the dataDisk specification to loop through 3 times to create 3 different disks.

dataDisks: [for i in range(0,3):{
        lun: i
        name: '${dataDiskNameBase}${i}'
        createOption:'Empty'
        caching:'None'
        writeAcceleratorEnabled: false
        deleteOption:'Detach'
        diskSizeGB: 256
      }
      ]

To create 3 unique disks, I build each disk’s name from the “dataDiskNameBase” parameter that I pass in when deploying the template, plus the loop index, so the disks get unique but still meaningful names. I have all my disks created as Empty, since that’s the only way I could get the template to deploy successfully. Each disk is created with 256 GB of storage.
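If you ever needed one of the disks to be a different size than the others, you could handle that inside the same loop with a conditional expression on the index. This is just a sketch of the idea, not something my template does:

diskSizeGB: i == 2 ? 128 : 256   // hypothetical: give the third disk 128 GB instead of 256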

Disk Controller Type

The final setting for the storageProfile property is the diskControllerType, which is what controls the connections between the VM and disks. I set it to “SCSI” because that is what our existing VMs were using.

diskControllerType:'SCSI'

OS Profile

The next property that needs to be specified for the VM resource is the osProfile, which gives more specific settings that should be used for the Operating System setup.

osProfile: {
      computerName:vmName
      adminUsername: adminUsername
      adminPassword: adminPassword
      windowsConfiguration:{
        provisionVMAgent: true
        enableAutomaticUpdates: true
        patchSettings:{
          patchMode:'AutomaticByPlatform'
          automaticByPlatformSettings:{
            rebootSetting:'IfRequired'
            bypassPlatformSafetyChecksOnUserSchedule: false
          }
          assessmentMode:'ImageDefault'
          enableHotpatching:true
        }
      }
      secrets: []
      allowExtensionOperations: true
    }

The computer name for the OS is set to the same vmName parameter I used for the general resource name above. The adminUsername and adminPassword are set to the input parameters with those same names as well. This admin user is the local administrator account the VM is created with; it is used to initially set up the computer and lets you log in even before the VM is joined to your domain and your normal login process is available.

Next, you need to specify the configurations particular to Windows, and I pretty much set these all to what appear to be default values.

Finally, you can specify any necessary secrets, but I had none, and then you can choose to allow extensions or not, which I allowed.

Network Interface

When creating an IaaS VM, you will need to specify a network interface resource that connects network security groups to the VM. You do this by specifying the interface through the networkProfile property.

networkProfile:{
      networkInterfaces:[
        {
          id:netInterface.id
        }
      ]
    }

This section is very short. The only thing I specify within the networkProfile is the ID of the network interface resource I create later in the template. I get that value by using the semantic name of the resource (netInterface) and appending .id. Bicep is smart enough to know that the network interface must be created before the VM, so referencing the ID this way will not cause any deployment issues.

Diagnostics, tags, identity

The final things I specify in the VM resource definition are the diagnosticsProfile, where I indicate that I want boot diagnostics to be saved; the tags that should be assigned to the resource after it has been created; and the identity the resource should be created with, which I set to the system-assigned managed identity (not a custom identity).

Resource: Network interface

Next up, we need to specify the creation of the network interface that the VM resource above references, which is what connects the network security group (defined below) to the VM. The network interface resource specification is much simpler than that of the VM.

resource netInterface 'Microsoft.Network/networkInterfaces@2024-05-01' = {
  name:'${vmName}-${uniqueString(vmName)}'
  location: 'westus2'
  tags:resourceTags
  properties:{ 
    ipConfigurations:[
      { 
        name:'ipconfig1'
        type:'Microsoft.Network/networkInterfaces/ipConfigurations'
        properties:{ 
          privateIPAllocationMethod:'Dynamic'
          subnet:{ 
            id:resourceId('USWest_2_Network_Services','Microsoft.Network/virtualNetworks/subnets','Virtual_Network_Name','Subnet_Name')
          }
          primary:true
          privateIPAddressVersion:'IPv4'
        }
      }
    ]
    enableAcceleratedNetworking:true
    enableIPForwarding:false
    disableTcpStateTracking:false
    networkSecurityGroup:{
      id:nsg.id
    }
    nicType:'Standard'
  }
}

The first specification of that resource is the name, which I create by combining the name of the VM it is associated with, plus a unique string (using the uniqueString() function) to make the resource name unique but still meaningful. Next, I provide the location the resource should be deployed to, the tags I want to apply to the resource using the input parameter, and finally get into the properties and detailed configurations of the resource.

For the properties of the network interface, we need to create a sub-resource of type “IP Configuration”, which is used to specify the subnet the network interface should be deployed to. For the template to correctly identify the subnet that should be used, I had to use the resourceId() function, passing it the resource group, the resource type, and then the virtual network and subnet names, so it returns the full ID of that subnet for the IP configuration to use.

Then, under the properties of the IP configuration, I specify that I want the private IP address of the resource to be assigned dynamically, that this IP configuration is the primary one, and that the IP address version should be IPv4, which closes out the creation of that sub-resource.

Continuing with the rest of the properties for the main network interface setup, I specify that I want to enable accelerated networking, that I do not want to use IP forwarding, and that I do not want to disable TCP state tracking. Finally, we come to the most important part of the network interface: specifying the ID of the network security group (NSG) we want to associate with the interface. To make that link, you reference the semantic name of the NSG resource plus “.id”, which retrieves the ID that will be generated when the NSG is deployed.

The final configuration I specify in the network interface properties is the type, which is Standard.

Resource: Network Security Group

The further we get into the Bicep template, the shorter the resource definitions get, thankfully. Our next resource specification is for the Network Security Group (NSG), which dictates what network traffic will be allowed into and out of the VM we’re creating.

resource nsg 'Microsoft.Network/networkSecurityGroups@2024-05-01' = { 
  name:'${vmName}-nsg'
  location:'westus2'
  tags:resourceTags
  properties:{ 
    securityRules:[ 
      {
        name:'SQL1433'
        properties:{ 
          protocol:'Tcp'
          sourcePortRange:'*'
          destinationPortRange:'1433'
          sourceAddressPrefix:'10.0.0.0/8'
          destinationAddressPrefix:'*'
          access:'Allow'
          priority:1020
          direction:'Inbound'
        }
      }
    ]
  }
}

Like with the previous resources, we first specify the name, location, and tags before getting into the property specifications. The properties for this type of resource are fairly simple: just the inbound and/or outbound network access rules. For my resource, I created only a single rule to allow the required inbound traffic for a SQL Server instance; it allows inbound TCP traffic from the 10.0.0.0/8 address range to port 1433.

Resource: SQL IaaS Extension

The next resource I create in the template installs the SqlIaasExtension onto the newly created virtual machine. This extension is what connects a SQL Server instance installed on an IaaS virtual machine to the Azure portal, creating the SQL virtual machine resource that is linked to the Virtual Machine resource itself. The resource type for this is extensions under the virtualMachines resource type.

resource iaasExtension 'Microsoft.Compute/virtualMachines/extensions@2024-07-01' = {
  name: sqlExtensionName
  parent: sqlVM
  location: 'westus2'
  tags:resourceTags
  properties:{ 
    autoUpgradeMinorVersion: true
    enableAutomaticUpgrade: true
    publisher: 'Microsoft.SqlServer.Management'
    type:'SqlIaaSAgent'
    typeHandlerVersion:'2.0'
  }
}

Most of the settings I specified for this extension resource are fairly standard, since there’s not much to change. I gave it the name, location, and tags that I wanted, just like with the rest of the resources. One key property of the resource specification to note is the “parent” value, which is how we tell Bicep/ARM that this resource is related to another we previously specified, so the other resource should be created before this one is. Bicep is usually good at understanding relationships between resources and which should be created first, but for this resource type, you need to specify it explicitly.

Resource: Custom Script Extension for PowerShell

The final resource I specify in my Bicep template for my IaaS VM is a CustomScriptExtension resource, which allows me to run a custom PowerShell script on the VM after it has been created. In my PowerShell script, I run commands to initialize the disks attached to the VM, since that is not done for you when you create a VM using a Bicep template.

resource customScriptExtension 'Microsoft.Compute/virtualMachines/extensions@2024-07-01' = { 
  name: customScriptExtName
  location:'westus2'
  dependsOn:[sqlVM]
  properties:{ 
    publisher:'Microsoft.Compute'
    type:'CustomScriptExtension'
    typeHandlerVersion:'1.2'
    settings:{ 
      fileUris:[
        initializeDisksScript
      ]
      commandToExecute:'powershell -file .\\SetUpWindowsVM.ps1 -password "${domainAdminPassword}"'
    }
    protectedSettings:{
      storageAccountName:storageAcctName
      storageAccountKey:storageAcctKey
    }
  }
}

As with the previous resources, I specify the semantic name and location of the resource. I then also must specify the dependsOn value to tell Bicep/ARM that this resource should be deployed after the previously defined resource called ‘sqlVM’. That means that the custom extension resource will only be created after the main VM has been created, as it should be.

The properties object of this resource is where the magic happens, which tells Bicep that the “extension” you’re creating is one that you’ve created yourself to run a script. The publisher value is still Microsoft, but the type is “CustomScriptExtension”. The settings object is where you specify everything you need to tell the extension what script it should be running. In my case, the fileUris list only contains a single object, which is the variable containing the URL location of where my PowerShell file is stored on a Storage Account. Then the commandToExecute string contains the exact command that needs to be run on the VM to execute the PowerShell script I want, along with the required parameters of the script (which in this case is only the “password” parameter).

A cool feature of the custom script extension capability is that there is a protectedSettings object, which you can put any secret values into, and Bicep/ARM will keep those secrets hidden even in the log output of the template deployment. If you put those same values into the settings object instead, the values passed into those parameters would be displayed in the deployment log, which isn’t good for keeping secrets. For my protectedSettings object, I passed in the storageAccountName where the PowerShell script is saved and the storageAccountKey to provide access to that Storage Account. While the name isn’t secretive, I did put it in the protectedSettings to get rid of a warning in the VS Code editor.

Summary

Whew! That was a lot to get through! I hope my breakdown above of all resource creation steps needed for a Bicep template creating an IaaS virtual machine proved helpful to you. There are so many little pieces that go into creating a Virtual Machine through a Bicep template that you never really see when creating the same resource through the portal. If there is anything that I didn’t clarify well enough, please let me know in the comments, and I will clarify the best I can. There are a lot of lines that go into making this type of Bicep template, and it felt like a big time suck at the beginning of writing it, but it is worth the time, since you can then very quickly deploy this resource type again whenever you need it in the future.

Related Posts

How to Reset the Azure CLI Virtual Environment

Have you been in a situation where you were working with the Azure CLI Bash terminal and then something went wrong with the image used to persist your data across sessions and your workspace seemingly got corrupted? I have had that happen to me recently, and I think it was because I updated my computer from Windows 10 to Windows 11, so Azure thought my computer changed and that corrupted the image.

I tried a couple of things to fix the situation so I could start working in the CLI space again to do my work, but I couldn’t fix the problems and errors I was getting. Then I set about figuring out how to clear the whole thing and start again. Thankfully, I only had two files on the virtual workspace, which I backed up on my local computer and in a repo anyway, so resetting the whole thing didn’t have negative effects for me. Continue reading to learn how I successfully reset my Azure CLI virtual workspace so I could get working again.

What’s in this post

When you might reset your workspace

Before updating my computer to Windows 11, I had been successfully reusing the same Azure cloud terminal virtual workspace for several weeks while learning to work with Bicep and testing a template repeatedly. I only encountered issues with the Azure CLI Bash terminal after the computer upgrade, and I am guessing it’s due to Azure considering my computer as a different machine after the update. I guessed that based on this StackOverflow answer which discusses how you cannot share virtual disks between different Virtual Machines (VMs) or else you can get corruption issues.

Even though that StackOverflow thread is referencing a disk issue for Virtual Machines and their disks, it seems like the same issue can happen when you try to use the same Azure account and its virtual CLI environment on two different computers, based on my experience. And this StackExchange answer says that a similar Linux error indicates a file system corruption.

I knew I had an issue when I used the Azure CLI Bash terminal for the first time with my updated computer. Immediately upon getting into that virtual environment and trying to check on my saved Bicep template there, I received the error:

azure cli cannot access 'main.bicep': Structure needs cleaning

I tried deleting the file so I could recreate it but got the same error when trying to do that. It seemed like I was stuck and really just needed to restart the whole thing and try again. The StackExchange answer referenced above said to unmount the corrupted file system disk and run a Linux command, but I knew that I couldn’t do that for this virtual environment, so I needed to figure something else out to get things working again.

How I reset my Azure CLI Bash workspace

Note: Please be cautious when following these steps as what you’re about to do may not be possible to undo.

Since you are not able to manipulate the file system in the Azure CLI virtual environment and unmount a disk like you could if you got this error on your own machine, you have to fix the error above in another way.

Trying a simpler fix first

There is one way you may be able to fix your environment that is less harsh than what I had to do, which is to choose the option in the Cloud Shell in the Azure Portal to “Reset User Settings”. This action, according to the Microsoft documentation, deletes your data from the shell by terminating the current session and unmounting the linked storage account. To do this reset, select the dropdown in the Shell window in the portal for “Settings”, then click “Reset User Settings”.

Select “Reset User Settings” from the “Settings” menu in the Azure CLI in the Portal

After completing that step, you will need to start a new Cloud Shell/CLI session, which may or may not fix the error you are seeing. In my case, doing this step did not fix the issue because the actual issue was with the image being used to regenerate my virtual environment. Continue on to the next sections if you continue to have the same error after resetting your user settings.

Identify the Storage Account for your virtual environment

The first step in doing the full reset of your workspace after getting the above error is to identify the storage account your environment data is persisted to. In my case, it is in a storage account called “azcliemily” with a file share called “azcliemilyfs”. I’m not sure if I created this storage account a while ago and have just forgotten about that or if the system created one for me when I set up my CLI workspace for the first time. The naming for your storage account and file share is likely different.

The easiest way to determine where your files are stored for your CLI sessions is to open the Azure CLI from within the Azure portal.

To get to the CLI from within the Azure Portal, click on the “terminal” looking icon in the top right menu of the screen

When that opens, if you are here because you are having the same error I was, everything in the shell/CLI should already be set up for you and you should see something like this.

When you open the CLI terminal in the Portal, you should see something like this when it first starts up

To view the file location for your workspace, click the dropdown for “Manage Files” at the top of the shell window, then select “Open file share” which will open a new tab.

You can access the file share where the files for the session are stored by opening the dropdown for “Manage Files” and then choosing “Open file share”

The file share page will look something like this:

The file share that Azure uses to persist your CLI virtual environment between sessions should look something like this.

Delete the image file used for your virtual environment

Note: Please be cautious when following these steps as what you’re about to do may not be possible to undo.

If the above “Reset User Settings” step didn’t work, like it didn’t for me, you may need to completely delete the machine image that Azure has created and is using for your virtual environment in the CLI. The error I got seems to indicate that there is corruption within the disk, so the only way for me to get rid of the corruption and access my files again is to delete the disk image being used so Azure is forced to make a new one when I start my next session.

The consequence of proceeding with this method is that you will lose anything and everything that you had stored in your CLI virtual environment. For me, this was only two test files, but it could be a lot more for you.

If you are going to proceed with the following steps, please save off everything from your virtual environment that you don’t want to lose.

Once you are sure you are ready to proceed at your own risk, navigate to the file share location we identified in the above section, then go into the folder “.cloudconsole”. In this folder, you will see a file with the type “.img”, which is a disk/machine image, the one that Azure uses to spin up disks on the backend for you when you open a new Cloud Shell/Terminal session. For my session, the file was named “acc_username.img”.

Opening the folder “.cloudconsole” in your file share should show you a file with the type “.img”, which is the machine image for your environment

If you haven’t done so already, please first open a Cloud Shell in the portal and choose “Reset User Settings” before moving on to the next step so that this file share/storage account is unmounted from your Cloud Shell session.

Once you have done that reset, delete the .img file from the file share. This step may be irreversible so proceed with caution!
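If you prefer doing that deletion from a command line instead of the portal, something like the following Azure CLI command should work, using the storage account, file share, and file names from my example. Swap in your own values, and note you will need a storage account key (or equivalent permissions) to run it:

az storage file delete \
    --account-name azcliemily \
    --share-name azcliemilyfs \
    --path ".cloudconsole/acc_username.img" \
    --account-key "<storage-account-key>"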

After you have deleted that image file, you can open a new terminal session with the Azure CLI or use the Cloud Shell from the Portal and then work through the normal steps of setting up a new virtual environment. When you choose to persist your files between sessions, Azure will create a new machine image to use to store those files, which should no longer have the corruption you were seeing previously.

Summary

If you run into what appears to be disk corruption issues with the Azure CLI virtual workspace, like if you are working with Bicep deployments or anything else through the CLI and are saving files between sessions, I highly recommend starting completely fresh with your environment if you can. It doesn’t take very long to do an entire reset of the virtual environment and it easily fixes errors that indicate disk corruption which occurred for whatever reason (like if you upgrade your computer’s OS so Azure thinks you’re trying to use the same session from two different computers).

Let me know in the comments below if this solution worked for you!

Related Posts

Azure Storage Explorer for Moving Files

Once again I am sharing a quick tip that will potentially speed up your work process, this time by using Azure Storage Explorer for moving database backup files between servers. This process saved me so much time this past weekend while completing a server migration, which is why I wanted to cover it with a quick post.

What’s in this post

What is Azure Storage Explorer?

Azure Storage Explorer is a desktop application that helps you manage your Azure Blob Storage and Azure Data Lake storage. Before this past weekend, the only purpose that I had used it for was to propagate Access Control Lists (ACLs) for storage accounts, which is also a very helpful task it can accomplish. However, what I want to focus on for this post is how you can also use this tool to very quickly transfer files between servers, as long as those servers have access to the internet.

Moving files with Azure Storage Explorer

If you are ever in a situation where you are migrating a database from one server to another using the backup/restore method, and that backup file is very large even after being compressed, you may want to try this method of moving the file between your servers. This of course only works if you use Azure as your cloud provider.

With previous database migrations, I tried different methods of moving my file between the servers, and even have an old post discussing this. But this past weekend, I was doing the production migration for a database and the file was larger than previous ones I had moved between servers, even after creating the backup in a compressed format. The first transfer method I tried was to drop the file onto a network share and then copy it from that share onto the destination server. While that worked, it was pretty slow, taking about 30 minutes to complete. That would have been… fine… if I hadn’t run into issues with the backup which forced me to generate a new backup file that needed to be copied over as well. Since I didn’t want to force the rest of the upgrade team to wait for that again, I started trying a new method.

While that slow copy was in progress, I quickly downloaded Azure Storage Explorer (ASE) onto the source server and uploaded my backup file to a storage account in our Azure subscription. To my surprise, the upload of the 15 GB file took just a minute or two, if I recall correctly. Whatever the exact time, using ASE was significantly faster, and it didn’t cause a browser memory issue like when I attempted to upload the same file to the same storage account manually through the Azure portal. For some reason, even though I have gotten the manual upload to a storage account to work in the past, I have recently been having issues with my browser, both Chrome and Firefox, crashing partway through the upload. So this new, faster transfer method is a huge win for me!

Then I also quickly downloaded and installed ASE onto the target server, and the download of the file from the storage account through the application was just as fast as the upload had been. I had my database backup copied over to the new server in the time it took the network drive copy to reach only 20% progress, so I gratefully cancelled that copy process. I was happy with the speed of ASE, and I am sure my colleagues were happy they didn’t have to wait on the database part of the upgrade even longer.

Why is this so much faster?

Given how much faster the upload and download for my file went using Azure Storage Explorer compared to every other method, I really want to know how it manages to achieve that. Unfortunately, it seems that the information about why and how it manages to be so fast is limited. Part of the speed obviously came from our own network speed, but some of it certainly comes from something special with the tool since trying to upload manually through the portal proved to be super slow, and would then crash out in the browser.

From the few resources I’ve found online (listed in the References section below), it seems that the performance of ASE comes from how it uses the azcopy tool to speed up and also parallelize the file transfers and use multiple cores from the host machine to do so. Whatever makes it work so quickly, I am very happy that it exists and will likely be using this method of copying files between servers going forward. Downloading and installing ASE, then using it to upload and download files, is still much faster than trying to move big files any other way.
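If you want similar speed without installing the full Storage Explorer application, you can call azcopy directly. A rough example of uploading a backup file to a blob container looks like this, where the local path, storage account, container, and SAS token are all placeholder values:

azcopy copy "D:\Backups\MyDatabase.bak" "https://mystorageaccount.blob.core.windows.net/backups/MyDatabase.bak?<SAS-token>"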

Summary

If you need to move a large file between two servers in your Azure cloud environment, I highly recommend using Azure Storage Explorer to drop that file onto a storage account, which will complete very quickly as long as your internet speed is normal, and then download that file using ASE as well, which will be equally as fast. There are other methods of copying files between servers that I’ve discussed before, but this is now going to be my preferred method.

Resources

  • https://stackoverflow.com/questions/57729003/blob-code-download-much-slower-than-ms-azure-storage-explorer
  • https://azure.microsoft.com/en-us/blog/azcopy-support-in-azure-storage-explorer-now-available-in-public-preview/
  • https://learn.microsoft.com/en-us/azure/storage/storage-explorer/vs-azure-tools-storage-manage-with-storage-explorer?tabs=windows
  • https://azure.microsoft.com/en-us/products/storage/storage-explorer

Notepad++ Backup Files

This is going to be a short and slightly random post, but this information is something I learned this week that was entirely the opposite of what I thought, so I wanted to share it.

What’s in this post

“Unsaved” Notepad++ files are actually saved

When you work with Notepad++, a free and very nice notepad application for Windows, you have the ability to create a lot of new files and leave them unsaved without losing them when closing the app or even restarting your computer. I never really knew how the app managed that and hadn’t thought much about it, but I figured that since I had never officially saved the files to my hard drive, the information in them wasn’t stored on my hard drive and was thus safer from external access than a saved file would be.

However, this week I learned that I was totally wrong and that the way Notepad++ keeps your “unsaved” files ready for you to use again after being closed is to keep a “backup” saved onto your hard drive. For me, these files were saved in this location: C:\Users\username\AppData\Roaming\Notepad++\backup

This is what I currently have backed up there, which lines up with what I see in the actual application, plus backups of files I closed today which are kept just in case I want them, I guess.

Screenshot showing the Windows File Explorer location for Notepad++ unsaved file backups and my list of backup files
Screenshot showing the tab bar of Notepad++ and my list of open and unsaved files that are being backed up by the application

And if you end up saving the files (in my case, I had unsaved changes in the two actually-named files, which I then saved), the “backup” files will disappear.

Screenshot showing the Windows File Explorer location for Notepad++ unsaved file backups and my list of backup files, with some now gone from the list after I saved them

I think this is actually a neat feature since it could save you if you accidentally close an important file that you didn’t mean to. But it isn’t some cool loophole for keeping important secret things easily at hand but secure like I kind of assumed. I’m not saying I put secrets into these temp files, but if I was, I definitely wouldn’t be doing it anymore. 😀 Always keep your secrets locked up in key vaults or password tools! The one I’ve started using is Bitwarden and it seems pretty nice and easy to use so far.

Summary

Notepad++ doesn’t use magic to keep your unsaved files available for you to use after closing the app or restarting your computer; it is saving those files in a backup location on your computer. And if you close a file you didn’t intend to before you saved it, you can get that file back from this backup location before you restart your computer.

The Fastest Way to Add Azure Firewall Rules

To keep resources secure in the Azure cloud environment, there are usually multiple levels of security that must be cleared for someone to be able to access a resource. For Azure SQL Databases, for example, the user who is trying to access the database must have access granted for their user account on the database but they also need to be given access for their IP address through the network firewall rules for the server or database resource.

I usually only need to add or update a single user’s firewall rule at a time when our business users get their IP addresses updated sporadically, but I had a situation last week where I needed to add over 40 firewall rules to an Azure SQL Server resource for a new database I imported on the server. I did not want to manually add that many firewall rules one at a time through the Azure Portal because that sounded tedious, boring, and like too many mouse clicks, so I instead figured out how to do it the fastest way possible–running a T-SQL script directly on the server or database through SQL Server Management Studio (SSMS).

Note: This would also work through Azure Data Studio or your chosen SQL Server IDE; I simply prefer to use SSMS, so I use that as the reference in comparison to completing the same actions directly through the Azure portal.

What’s in this post

Server vs. Database Firewall Rules

According to this Microsoft document, it should be possible to grant firewall rule access for a user to a single database on a logical Azure SQL Server resource. In my experience in Azure so far, we’ve only ever granted firewall access at the server level instead, since that is acceptable for our business use-cases. But since it’s possible to set the same firewall rules at the database level according to Microsoft, I added how to do that to this post, as well as the server-level rules which I will be creating for my use case.

T-SQL for Creating Firewall Rules

Did you know it was possible to create both database-level and server-level firewall rules for an Azure SQL Database through SSMS using T-SQL commands? I didn’t before I started this project request last week. Going forward, I will likely use the T-SQL route through SSMS when needing to make multiple firewall rules instead of using the Azure portal, to save myself time, effort, and mouse clicks.

Create a Server-Level Firewall Rule

To create new firewall rules at the server level, you can connect to your Azure SQL Server through SSMS and run the following command on the master database.

/* EXECUTE sp_set_firewall_rule N'<Rule Name>','<Start IP Address>','<End IP address>'; */
EXECUTE sp_set_firewall_rule N'Example DB Rule','0.0.0.4','0.0.0.4';

When you run that command, you are executing a built-in stored procedure that exists on the master database that will insert a new record for a server-level firewall rule into the table sys.firewall_rules, creating the server level firewall rule that will allow access through the specific IP address or IP address range. In the example above, I have the same value for both the Start IP Address and End IP Address parameters, but you can just as easily set that to a range of addresses, like 10.0.5.0 for the start and 10.0.5.255 as the end. I usually prefer to do that for a user’s IP address since our systems can change the last value fairly regularly, but since my current task was to grant access for servers, I set the start and end values to the same single IP address since I’m not expecting them to change frequently.
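As an example, a rule covering that kind of range would look like this:

EXECUTE sp_set_firewall_rule N'Example Range Rule','10.0.5.0','10.0.5.255';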

Create a Database-Level Firewall Rule

If instead of granting access to all databases on the server through a server-level firewall rule you would prefer to grant access to only a single database on your Azure SQL Server instance, you can do that as well. The T-SQL command to create a database-level firewall rule is the following. Notice how it’s very similar to the server-level command above, but with “database” specified in the stored procedure name.

/* EXECUTE sp_set_database_firewall_rule N'<Rule Name>','<Start IP Address>','<End IP address>'; */
EXECUTE sp_set_database_firewall_rule N'Example DB Rule','0.0.0.4','0.0.0.4';

The parameters that are required to run this database stored procedure are the same as what’s required for the server-level procedure, so you can either specify the same Start and End IP address values to allow access through a single address, or you can specify different values for the two to give access to a range of addresses. This procedure inserts records into the system table sys.database_firewall_rules on the database you ran the command on.

Speeding up the Creation of Execution Scripts

Knowing the above commands is helpful, and even works if you only have one or two firewall rules to add, but I promised I would tell you the fastest way possible to add 40+ firewall rules to a server, so how do I do that? How do I save myself the trouble of writing the above procedure execution commands 40+ times or copy/pasting/updating the line over and over again?

The trick I use is to generate my SQL commands in Excel, since Excel can concatenate strings and then fill the same formula down across however many rows you have, producing the same command with each row’s distinct values. I use this Excel trick quite frequently, whenever I need to generate the same type of SQL query or command multiple times based on specific values.

In this use case, I was provided a list of the names of the servers that needed inbound firewall access to my Azure SQL Database along with the IP addresses they would be using. I copied that information into two columns in Excel then wrote a formula in a third column to generate the SQL command that I needed to run to make a firewall rule for that particular server and IP address.

Here is an example of how I accomplish this type of task in Excel:

Screenshot of text in Excel demonstrating how to quickly develop SQL queries for adding firewall rules

In Column A of the spreadsheet, I copied in the list of the server names I was provided. I am going to set the name of the firewall rule for each to the name of the server that is being given access. Column B then has the IP address that the server will be connecting through.

Note: all server names and IP addresses were made up for this example.

Then in Column C, I use the CONCAT function of Excel to generate the SQL query that I want, which I will then be able to copy to the database and execute to generate the firewall rule. The following is the full formula I used to generate the SQL command:

=CONCAT("EXECUTE sp_set_firewall_rule N'",A2,"','",B2,"','",B2,"';")
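
For a hypothetical row where cell A2 contains SERVER1 and B2 contains 10.0.0.10, that formula would produce the following command:

EXECUTE sp_set_firewall_rule N'SERVER1','10.0.0.10','10.0.0.10';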

After I’ve successfully made the command as I want it for the first server in the list, I then copy that same formula down the column for every row to generate the same command for each server and IP address combination. Once all the queries have been created, I copy the entire Column C into an SSMS query window:

Screenshot of SQL queries in SQL Server Management Studio for adding firewall rules to the server

I then just have to click Execute and all the commands run, creating all the new firewall rules I needed in just a couple seconds. Believe me, this method of using Excel will save you a lot of time copying and pasting and updating IP addresses in the queries.

View Existing Server-Level Firewall Rules Through SSMS

If you would like to review the list of server-level firewall rules that currently exist on your server, you can do so by running the following query on the master database of your Azure SQL Server instance.

select id, [name], start_ip_address, end_ip_address, create_date, modify_date
from sys.firewall_rules
order by id

This list will directly correspond to the list of values that you would see under “Networking” for your Azure SQL Server instance in the Azure portal.

View Existing Database-Level Firewall Rules Through SSMS

If you would like to review the list of database-level firewall rules that currently exist on your database, you can do so by running the following query on the database you would like to see the firewall rules for.

select id, [name], start_ip_address, end_ip_address, create_date, modify_date
from sys.database_firewall_rules

As far as I am aware, there is no way to view this same list of details from the Azure portal, so this should be the source of truth for database-level firewall rules for an Azure SQL Database.

How to Delete Firewall Rules Through SSMS

Just as there is a more efficient way to create multiple firewall rules through T-SQL in SSMS, you can also quickly delete a lot of firewall rules at once using a delete procedure. I haven’t yet had to delete a large number of rules at one time, but if you ever do, you can follow the same Excel process I outlined above for adding them, just with the deletion procedure instead (see the sketch after the delete commands below).

Delete Server-Level Firewall Rules

The T-SQL command you can use to delete a specified server-level firewall rule through SSMS is:

EXEC sp_delete_firewall_rule N'SERVER1';

When deleting a rule, you only need to provide the name of the firewall rule you would like to delete.

Delete Database-Level Firewall Rules

The T-SQL command you can use to delete a specified database-level firewall rule through SSMS is:

EXEC sp_delete_database_firewall_rule N'SERVER1';

When deleting a rule, you only need to provide the name of the firewall rule you would like to delete.
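
If you ever do need to remove many rules in one go, the same Excel trick from earlier applies here too. Here is a sketch of the formulas I would use, assuming the rule names to delete are listed in Column A:

=CONCAT("EXEC sp_delete_firewall_rule N'",A2,"';")
=CONCAT("EXEC sp_delete_database_firewall_rule N'",A2,"';")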

Summary

In this post, I outlined the full process I used to generate the 40+ server-level firewall rules requested for my Azure SQL Server instance. Before starting this task, I had no idea it was possible to create these firewall rules with T-SQL commands in SSMS; I only knew about adding them manually through the Azure portal. But like every good programmer, I knew there had to be a better and faster way, and I was correct. I hope this post saves you a little time as well the next time you need to add more than a couple of firewall rules to your server.

How to Squash Commits with Git Bash

I have been working with Git for the last several years, but in my current position I am having to do more of the manual work with Git to ensure my commits meet our branch policies when pushing, since my current company has stricter rules on its pipelines. One of the Git activities I’ve found myself doing nearly every week now is squashing my commits. While initially learning how to do this, I found some resources online that were somewhat helpful, but as with most documentation, the authors seemed to assume a level of basic Git understanding that I did not possess. I understand the process now that I’ve been doing it so frequently, so I want to share a concise post about how to squash commits with Git Bash.

What’s in this post

What is a squash commit?

To squash commits means to combine multiple commits into a single commit after the fact. When I code, I make commits every so often as “save points” for myself in case I royally screw something up (which I do frequently) and really want to get back to a clean, working point in my code. Then, when it comes time to push to our remote repos, I sometimes have 5+ commits for my changes, but my team has decided they only want squashed commits instead of all that commit history, which probably wouldn’t be useful to anyone after the code has been merged. That is when I need to combine and squash all of my commits into a single commit. Squashing is also useful for me because, while I am testing my code, I copy any necessary secrets and IDs directly into my code and remove them before pushing; those IDs are still saved in the commit history, though, so our repos won’t even let me push while that history is there. Squashing the old commits into a single new commit removes that bad history and allows me to push.

How to squash multiple commits

For the purpose of this post, assume I am working with a commit history that looks like this:

613f149 (HEAD -> my_working_branch) Added better formatting to the output
e1f0a67 Added functionality to get the Entra Admin for the server
9eb29fa (origin/main, origin/HEAD, main) Adding Azure role assgmts & display name for DB users

The commit with ID 9eb29fa is the most recent commit on the remote. The two commits above it are the ones I created while making my code changes, but I need to squash those two into one so that I can push to our remote repo. To do this, I will run the following Git command:

git rebase -i HEAD~2

That command tells Git I want to rebase the last two commits, everything after HEAD~2 (which here is the remote commit 9eb29fa). The -i flag indicates that we want to rebase in interactive mode, which allows us to make changes to the commits and their messages in a text editor while rebasing. When I run the command, Git opens Notepad++ (the text editor I specified for Git Bash) with a document that looks like this:

pick e1f0a67 Added functionality to get the Entra Admin for the server
pick 613f149 Added better formatting to the output

# Rebase 9eb29fa..613f149 onto 9eb29fa (2 commands)
#
# Commands:
# p, pick <commit> = use commit
# r, reword <commit> = use commit, but edit the commit message
# e, edit <commit> = use commit, but stop for amending
# s, squash <commit> = use commit, but meld into previous commit
# f, fixup [-C | -c] <commit> = like "squash" but keep only the previous
#                    commit's log message, unless -C is used, in which case
#                    keep only this commit's message; -c is same as -C but
#                    opens the editor
# x, exec <command> = run command (the rest of the line) using shell
# b, break = stop here (continue rebase later with 'git rebase --continue')
# d, drop <commit> = remove commit
# l, label <label> = label current HEAD with a name
# t, reset <label> = reset HEAD to a label
# m, merge [-C <commit> | -c <commit>] <label> [# <oneline>]
#         create a merge commit using the original merge commit's
#         message (or the oneline, if no original merge commit was
#         specified); use -c <commit> to reword the commit message
# u, update-ref <ref> = track a placeholder for the <ref> to be updated
#                       to this position in the new commits. The <ref> is
#                       updated at the end of the rebase
#
# These lines can be re-ordered; they are executed from top to bottom.
#
# If you remove a line here THAT COMMIT WILL BE LOST.
#
# However, if you remove everything, the rebase will be aborted.

The first comment in the document, # Rebase 9eb29fa..613f149 onto 9eb29fa (2 commands), gives an overview of what the command is doing. We’re rebasing the two listed commits onto the most recent commit that’s on the remote, which will give us one new commit after that remote commit in place of the two we currently have.

To rebase these commits, I will change the top two lines of that document to:

pick e1f0a67 Added functionality to get the Entra Admin for the server
squash 613f149 Added better formatting to the output

No matter how many commits you are squashing, you always want to leave the first commit in the list as “pick”, and then every other commit needs to be changed to “squash”. Once you have made that change, save the file and close it. Closing that document prompts Git to open another text document containing the previous commit messages, giving you an opportunity to amend them. This is what my commit messages look like when the document pops up:

# This is a combination of 2 commits.
# This is the 1st commit message:

Added functionality to get the Entra Admin for the server

# This is the commit message #2:

Added better formatting to the output

# Please enter the commit message for your changes. Lines starting
# with '#' will be ignored, and an empty message aborts the commit.
#
# Date:      Fri Jul 12 11:11:10 2024 -0600
#
# interactive rebase in progress; onto 9eb29fa
# Last commands done (2 commands done):
#    pick e1f0a67 Added functionality to get the Entra Admin for the server
#    squash 613f149 Added better formatting to the output
# No commands remaining.
# You are currently rebasing branch 'my_working_branch' on '9eb29fa'.
#
# Changes to be committed:
#	modified:   file1.py
#	modified:   file2.py
#	new file:   file3.py
#

I will change the file to the following so that I have a single, concise commit message (although I would make it more detailed in real commits):

Updated the files to contain the new auditing functionality.

# Please enter the commit message for your changes. Lines starting
# with '#' will be ignored, and an empty message aborts the commit.
#
# Date:      Fri Jul 12 11:11:10 2024 -0600
#
# interactive rebase in progress; onto 9eb29fa
# Last commands done (2 commands done):
#    pick e1f0a67 Added functionality to get the Entra Admin for the server
#    squash 613f149 Added better formatting to the output
# No commands remaining.
# You are currently rebasing branch 'my_working_branch' on '9eb29fa'.
#
# Changes to be committed:
#	modified:   file1.py
#	modified:   file2.py
#	new file:   file3.py
#

Once you’ve updated your commit messages as you would like, save and close the file and then push the changes like you normally would. If you would like to confirm and review your changed commits, you can use git log --oneline to see that the log now reflects your squashed commit instead of what it had previously.
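
For example, after the rebase above, the log for this branch might look something like this (the squashed commit gets a brand-new hash; the one shown here is made up):

a1b2c3d (HEAD -> my_working_branch) Updated the files to contain the new auditing functionality.
9eb29fa (origin/main, origin/HEAD, main) Adding Azure role assgmts & display name for DB users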

Note: One important rule of rebasing with Git is that you should not rebase changes that have already been pushed to a public or remote repo that others are using. It’s bad practice to try to rewrite shared history, since keeping that history is the whole point of Git and version control. Try to stick to rebasing only your own local changes.

Summary

In this post, I covered the basics of how to perform a squash commit of multiple commits using Git Bash. If this tutorial helped you, please let me know in the comments below. If you would like to read more of the details about what rebasing means, please refer to the Git documentation.


A Week in the Life - 10/7 – 10/10

Have you ever wondered what the normal work tasks of a database developer/integration engineer look like? If you have, then this is the post for you. This is a new series of posts where I simply give an overview of what I accomplished each week, giving insight into what life as a database developer looks like for those who might be curious. I also want to do these reviews for my own records and edification, because it’s always good to keep track of the things you accomplish at your job. This post reviews the week of October 7 – October 10.

What’s in this post

More SQL Server Setup

The week started with a request from one of my teammates to set up SQL Server 2022 on another Windows Server virtual machine for an ongoing project I am a part of. I have somehow made myself the SQL Server 2022 installation guru, so I’m not surprised that I was asked to complete this task.

Setting up SQL Server itself was as easy as it could be, easier than the last few I did since the application using this server had fewer configuration requirements. But after installing the server and making sure the firewall rules were in place (as I learned two weeks ago), I discovered even more setup I had missed previously that needed to be done. The piece I had missed was getting Entra ID authentication to work with the server, so one of my teammates showed me how to do that, and the new server is now set up perfectly.

Production Upgrade for an Application

My biggest feat of the week was having to do the production upgrade for the legal application project I’m on completely by myself. I was on this project with one of my teammates, but he had to be out this week for personal matters, so I was left as the sole developer working on the production upgrade we had been building up to for months. Although I was nervous about being the only one responsible if something went wrong, I stepped into the challenge and completed the upgrade. I did run into a few issues during the upgrade process, but I was able to resolve them by the end of the day by working with the application’s vendor, so I have to say the upgrade went pretty dang well given the circumstances. Pat on the back to me for surviving my first solo production upgrade.

Preparing for the Next Application Upgrade

I couldn’t celebrate this week’s successful application upgrade for too long, because I already had to be starting on the steps for the next phase of this application’s upgrade process. Thankfully, this week’s work for that next phase was only to take backups of two databases and provide that to the application vendor, so I wasn’t overly burdened with work for this part.

Writing Documentation about the Application

After completing that production upgrade and facing a few issues along the way, I knew I needed to write down everything I knew about the application and what went wrong with the upgrade, or else my team and I might forget the important but small details. None of us think this type of work is truly something we as database developers should be doing (we’re not application developers), but we have been told we need to support this application from front to back, so that is what we’re going to do. And since the territory is unfamiliar to everyone on my team, I know good documentation will be essential to continuing to support this application in the future.

Met with Colleague to Review Conference

There are only two women on my team: me and one other woman. We both attended the Women & Leadership conference two weeks ago, so we wanted to catch up with each other to review our learnings and takeaways before presenting those ideas to our manager. This conversation was really lovely. When working in a field that is 98% men like I am, it’s always a breath of fresh air to be able to sit down, relax, and catch up with other women dealing with similar tasks and issues. Our scheduled half-hour meeting turned into an hour because we were having a good time and brainstorming good ideas to present to our manager. We left with a list of items we wanted to cover with him and a plan for how to get some of his time to present our list.

Presented new Database Architecture to Division IT Team

I need to be upfront and say that I was not the one who presented the database architecture to the other team, but I was included in the meeting since my teammate who normally works with this other team is going to be out of the office for two months at the end of the year, and I need to be apprised of the work he normally does.

This meeting with a business division IT team (not the corporate IT team that I’m on) turned out to be a fantastic relearning opportunity for me to see the database architecture possibilities in Azure. The backstory is that this other team had requested we create a new database they could use for reporting purposes so they don’t bog down their main application database. My teammate who normally works with them came up with several new architecture possibilities that would fulfill the request and ran those options by the team to see which they would be most comfortable with. I had technically learned this same architecture information shortly after I started at the company, but that time is honestly a blur, so I appreciated seeing it explained again. I took lots of notes in this meeting and now feel more confident about making similar presentations and architecture decisions in the future when that will eventually be required of me.

Summary

This was a shorter week for me because I took Friday off, but I still accomplished a lot of big projects. I am looking forward to having some calmer weeks in the future where not everything is a “big stressful thing”, but I do appreciate that this week taught me a lot and helped me grow as a developer. I am slowly easing into the point of my career where I can start to be a leader instead of a follower, so I am taking each and every learning opportunity as a gift helping me to grow into that future leader.

Do you have any questions about what a database developer does day-to-day that I haven’t answered yet? Let me know in the comments below!

How to Add an Address to Google Maps

This post is going to really deviate from my normal content, except for the fact that I am still writing about technology. My husband and I recently purchased our first house, which was a new build. Because of that, Google and every other map service of course did not know that our house exists, and that was becoming annoying while trying to help other people navigate to our new address.

I did a lot of googling and reading of support forum answers trying to find out how to add my new address to Google Maps, but none of the suggestions I found online matched what I actually saw in the app; a lot of them seemed to be written for a really outdated version of the Google Maps UI. I eventually figured out how to add the address myself by poking around the app, so I thought I would share how I did it in hopes of helping someone else who was struggling to find help through other online resources.

These instructions cover how to add an address to Google Maps using the iPhone app. I’m hoping the process works for the Android version as well, or even in a browser, but I’m not sure since I haven’t been able to try either of those options.

Quick Overview

  • Open the Google Maps app
  • Press and hold on the location of the address you want to add to drop a pin
  • In the menu that opens when you drop a pin, select “Suggest an Edit”
  • In the next menu, select “Fix an Address”
  • Confirm the location you are adding using the map on the next screen. Move the map if needed. Then click “Next”.
  • Enter your new address information in the fields provided and then hit “Submit” when ready to add it.

Open Google Maps and add pin where your address is

In the iPhone app, you can press and hold on the map in Google Maps to drop a pin, and that is what you want to start with. That will bring up a menu on the bottom half of the screen; choose “Suggest an Edit”.

Select the option to “Fix an Address”

In the menu that is brought up after you select “Suggest an Edit”, choose the option to “Fix an Address” (although technically the address doesn’t exist yet).

Confirm the location of the pin is where your location is supposed to be

After selecting to “Fix an Address”, the app will bring up a map again for you to confirm the location of the address you are fixing/adding. You will need to drag the map around until the pin in the center of the screen is where you would like the address to be. When I was adding my address, I put the pin on top of my current location while at home to make sure I put it in the right spot.

Enter your new address information

The very last thing you will need to do is to enter your address correctly into the fields you see in the final screen. Double-check to make sure you don’t have any typos or any other mistakes, then click “Submit”. After you submit the address information, the suggested edit apparently goes through some sort of review process at Google, but you should get your address added to the map within a few days of submitting.

How to Set Notepad++ as Your Default Git Editor

Welcome to another coffee break post where I quickly write up something on my mind that can be written and read in less time than a coffee break takes.


When you start working with Git Bash for the first time (or you have your computer completely reimaged and have to reinstall everything again like I did recently), you will likely encounter a command line text editor called Vim to edit your commits or to do any other text editing needed for Git. And if you’re like me, you probably won’t like trying to use a keyboard-only interface for updating your commit messages. If that’s the case, then I have a quick tutorial for how to make Git use a normal text editor to more easily update your commit messages.

What is Vim?

Vim is an open-source text editor that can be used in a GUI interface or a command line interface. My only familiarity with it is with the command line interface, as the default text editor that Git comes installed with. If you don’t update your Git configuration to specify a different text editor, you will see a screen like the following whenever you need to create or update a commit message in a text editor (like when you complete a revert and Git generates a generic commit message for you then gives you the opportunity to update it, or when you want to amend an existing commit). This is what the command line editor version of Vim looks like (at least on my computer).

I personally don’t like this text editor because, to use it, you need to know specific keyboard commands for every operation, and I don’t want to learn and remember those when I can use a GUI-based text editor instead to make changes more quickly and intuitively.
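
If you do find yourself stuck in that editor before changing the configuration, the two Vim commands I end up needing most are the ones to save and exit (standard Vim commands, typed after pressing Esc):

:wq    write the commit message and quit
:q!    quit without saving your changes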

How to Change Text Editor to Notepad++

The command for changing your default text editor is super simple. I found it from this blog post: How to set Notepad++ as the Git editor instead of Vim.

git config --global core.editor "'C:/Program Files/Notepad++/notepad++.exe' -multiInst -notabbar -nosession -noPlugin"

After you execute that command in Git Bash, you can test it by running git commit --amend, which should open Notepad++ so you can update the last commit you made.

That blog post then says you should be able to double-check your Git configuration file to see that the editor has been changed, but my file doesn’t reflect what the other post says, despite Notepad++ being opened for commit messages after I ran the change command. This is what my gitconfig file currently looks like after setting the editor to Notepad++:

So your mileage may vary on that aspect. As long as the change works for me, I’m not too concerned about what the config file looks like.
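
If you would rather not dig through the config file at all, you can also ask Git directly which editor is currently configured; running the config command with no value simply prints the current setting:

git config --global core.editor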

How to Revert Text Editor to Vim

If you ever want to change back to the default text editor, you can run the following command to switch back to Vim, and once again confirm it worked using the git commit --amend statement:

git config --global core.editor "vim"

Conclusion

Changing your default text editor for Git Bash is extremely simple as long as you know where the exe file of your preferred editor is stored on your computer. It’s also simple to change back to the default Vim editor in the future if you want or need to.