Author: Emily Lahren (page 1 of 8)

Recovery Options for Azure Key Vaults

I recently published a post about how to manage deleted Azure Key Vaults: how to find them after they’ve been deleted, how to recover them if you didn’t really want them deleted, and how to purge them if you want them permanently removed. In today’s post, I am going to cover the finer details of the recovery options you can set on a key vault when creating it, which dictate the options you will have for recovering it if it gets deleted. My organization has defaults that enabled me to recover a deleted key vault, but you may not get that option unless you configure it yourself, so I will show you how!


Recovery Options for Existing Key Vaults

As of the end of 2025, when I opened an existing key vault and reviewed its “Properties” under “Settings” in the navigation menu, I could see the following settings:

  • Soft delete policy
  • Days to retain deleted vaults
  • Purge Protection policy

In the portal, that looks like this:

Screenshot of the Properties page of a Key Vault in the Azure Portal showing the retention settings available: soft delete, days to retain deleted vaults, purge protection.

For the example key vault shown, Soft Delete was enabled when it was created. That means that when the vault is deleted, it no longer shows in your main list of KVs, so it appears to be gone. However, you can still see the vault for the specified retention period in a separate section of the portal, which I covered in my previous post.

For this KV, we set the number of days to retain vaults after deletion to 90, which is generally the default value supplied during setup. The final option is to enable or disable Purge Protection, a feature that dictates whether a KV can be permanently deleted before the end of the retention period. In our case, Purge Protection was disabled because this wasn’t an important key vault, and we didn’t believe an internal bad actor could do real damage by deleting and then purging it.

One thing to note about this already-created KV is that most of the recovery options cannot be changed after creation.
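If you prefer the command line, the Azure CLI can show these same properties, and enabling Purge Protection is the one recovery setting you can still change on an existing vault. A minimal sketch, assuming a vault named my-example-kv (the name is a placeholder):

```shell
# Inspect the recovery-related properties of an existing key vault
az keyvault show --name my-example-kv \
  --query "properties.{softDelete:enableSoftDelete, retentionDays:softDeleteRetentionInDays, purgeProtection:enablePurgeProtection}"

# Purge Protection can be enabled after creation, but it can never be disabled again
az keyvault update --name my-example-kv --enable-purge-protection true
```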

Recovery Options When Creating a Key Vault

For the most part, when you create a new key vault, you have full control over its recovery options. However, starting at the beginning of 2026, Microsoft changed the settings to force soft-delete to be enabled for all key vaults. At the time I took these screenshots, at the end of 2025, I was still able to create a key vault with soft-delete disabled through the Azure CLI, PowerShell, or REST API, as noted by the “i” icon next to the title during setup, but that option has likely been removed by the time you read this:

Screenshot of an information icon on the Soft Delete setting for a new key vault, which says, "The ability to turn off soft delete via the Azure portal has been deprecated. You can create a new key vault with soft delete off for a limited time using CLI / PowerShell / REST API. The ability to create a key vault with soft delete disabled will be fully deprecated by the end of the year."

Besides the soft-delete setting, though, you have full control over the two remaining settings: the number of days to retain deleted vaults and whether Purge Protection is enabled.

Screenshot of the recovery options settings available when creating a new key vault in the azure portal

By default, the value for “Days to retain deleted vaults” is 90, but you can set it to any value from 7 to 90 days. Also by default, the Purge Protection option is set to “Disable”, but you can change that if you would like to prevent people from permanently deleting key vaults after they’ve been soft-deleted.
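The same options can be set up front when creating a vault from the command line instead of the portal. A hedged sketch with placeholder resource names:

```shell
# Create a vault with a 30-day soft-delete retention window and Purge Protection enabled
az keyvault create \
  --name my-example-kv \
  --resource-group my-example-rg \
  --location eastus \
  --retention-days 30 \
  --enable-purge-protection true
```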

Depending on the use case, my organization decides whether to enable Purge Protection. If the vault supports something business critical, it gets Purge Protection; if it’s only something we use for our own testing or development, we likely won’t enable it. There is no additional cost either way, so choose the option that best suits your security and retention requirements.

Summary

When creating a new Azure Key Vault, there are only a few retention settings you have control over, and if you’re looking to change an existing key vault, there are even fewer. That means you need to choose the correct settings for you and your organization at key vault creation time: after the vault exists, the only retention setting you can change is turning Purge Protection on, and once enabled, it cannot be disabled.

If you’re looking for more information on managing deleted key vaults, hop over here to see my post on that. Interested in learning how to create key vaults with a Bicep template so you don’t have to do it manually? I have a post about that as well!

Limitations of Azure Synapse Analytics SQL Pools

In the past couple of months, I have run into some annoying limitations and unsupported features with T-SQL in our Azure Synapse Dedicated SQL Pools, which is where my team puts our data warehouses for reporting needs across our business. These unsupported features were a bit surprising to me, which is why I’ve decided to share them in this post. This is certainly not going to be an all-inclusive list of things you can’t do in Dedicated SQL Pools that you can in a normal SQL database, but I will keep it updated as I run into more barriers in the future.


No Primary Keys

Once you understand that Dedicated SQL Pools (DSPs) are meant to be used as Online Analytical Processing (OLAP) data warehouses for analytics and reporting purposes, it makes more sense that primary keys are not allowed on tables, since enforcing them would affect performance. I did not put that concept together until I tried to create a new table on the DSP and got the following error:

Screenshot showing a T-SQL command to create a Primary Key on a table in SQL Server Management Studio and the corresponding error since the command is not supported on this type of database

The full error message is:

Enforced unique constraints are not supported. To create an unenforced unique constraint you must include the NOT ENFORCED syntax as part of your statement.

Unfortunately, there is no real workaround for this limitation. You must accept that you will not be able to enforce uniqueness on your tables in the DSP, unless you are satisfied with “uniqueness” in name only that is never actually enforced.
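As the error message hints, the closest you can get is declaring the constraint with the NOT ENFORCED syntax, which records the uniqueness intent (and can help the optimizer) without actually enforcing it. A sketch with made-up table and column names:

```sql
-- Declares uniqueness as metadata only; duplicate values can still be inserted
ALTER TABLE dbo.MyTable
ADD CONSTRAINT UQ_MyTable_Id UNIQUE (Id) NOT ENFORCED;

-- Primary keys work the same way on a dedicated pool: NONCLUSTERED and NOT ENFORCED
ALTER TABLE dbo.MyTable
ADD CONSTRAINT PK_MyTable_Id PRIMARY KEY NONCLUSTERED (Id) NOT ENFORCED;
```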

No Foreign Keys

In the same vein as above, foreign keys are also not allowed in a Dedicated SQL Pool (DSP), since that would also hinder the performance of OLAP queries if they existed. When trying to create a foreign key on a table in a DSP, you get the following error:

Screenshot showing a T-SQL command to create a Foreign Key on a table in SQL Server Management Studio and the corresponding error since the command is not supported on this type of database

The full error text is:

Parse error at line: 3, column: 1: Incorrect syntax near 'FOREIGN'.

This error is different from the one received when trying to create a primary key: here the query simply fails to parse, because the query processor doesn’t even recognize the concept of applying a foreign key constraint to tables in this type of database.

There is no workaround for this limitation beyond accepting that you will not be able to maintain relationships between tables in your database for this type of server.

No Default Constraint using a Function

This is the constraint I am most sad about not being able to apply to tables in a Dedicated SQL Pool (DSP), since I think it would be really useful for certain situations. But again, needing to apply a default value function for every row inserted into a DSP table would slow down the processing speed, which is bad for OLAP queries.

When you try to add a default constraint using a function or expression to a table in a DSP, you will get the following error:

Screenshot showing a T-SQL command to create a Default Constraint on a column in SQL Server Management Studio and the corresponding error since the command is not supported on this type of database

The full error message for that is:

An expression cannot be used with a default constraint. Specify only constants for a default constraint.

However, you are still allowed to add default constraints that have a constant value, like this:

ALTER TABLE dbo.CUSTOM_FORECAST_Raw
ADD CONSTRAINT DF_MyDefault
DEFAULT '1900-01-01 00:00:00' FOR UploadDatetime;
Screenshot showing a T-SQL command to create a default constraint with a constant value on a column in SQL Server Management Studio, which succeeds since constant defaults are supported on this type of database

My most common use of a function in a default constraint is to set an update datetime for each row, and that is precisely what I was trying to accomplish when I found this limitation. There is no direct workaround. What I did instead was populate the UpdateDatetime value further upstream in the ETL process. Hopefully, that type of workaround will be doable for you as well.
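Since GETDATE() can’t live in the DEFAULT itself, another alternative to an upstream ETL fix is to supply the value in the DML statement, which a dedicated pool does allow. A sketch (the first column name here is made up):

```sql
-- GETDATE() is fine in the statement itself, just not in a DEFAULT constraint
INSERT INTO dbo.CUSTOM_FORECAST_Raw (SomeValueColumn, UploadDatetime)
SELECT 'example value', GETDATE();
```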

No Multi-Line Inserts

This limitation is one I truly don’t understand, and I was really confused about it when I first ran into it during development. In a normal SQL Server database, you can insert multiple rows into a table at once using the syntax INSERT INTO dbo.MyTable VALUES (value1, value2...), (value3, value4...);. Running that command would insert two different rows into the table at the same time. That syntax is not allowed in Dedicated SQL Pools (DSP).

If you were to run the same command on a DSP, you would get this error:

Screenshot showing a T-SQL command to add multiple rows of data at once in SQL Server Management Studio and the corresponding error since this command format is not supported on this type of database

The exact error message for that scenario is:

Parse error at line: 2, column: 19: Incorrect syntax near ','.

This is another scenario where the query processor doesn’t even know how to approach this code, even though it would work just fine if I instead ran it on a normal SQL Server database.

The workaround for this limitation is to run a separate INSERT statement for every row of data you want to insert into your table, or to use a pipeline in the Synapse workspace to load the data instead.

INSERT INTO dbo.MyTable VALUES ('AR',5103);
INSERT INTO dbo.MyTable VALUES ('CA',6111);

I only needed to load a few test rows into a table on my DSP for testing purposes, so the multiple INSERT statements worked in a pinch. I would not want to do that if I were adding more than a few rows to the table.
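If you’d rather keep it to one statement, the INSERT ... SELECT form with UNION ALL parses fine on a dedicated pool, since the multi-row restriction only applies to the VALUES clause. A sketch using the same made-up table:

```sql
INSERT INTO dbo.MyTable
SELECT 'AR', 5103
UNION ALL
SELECT 'CA', 6111;
```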

Cannot use sp_rename

For this scenario, I’m not entirely sure of the reasoning behind removing this particular procedure (and likely others like it), but I have learned that in a Dedicated SQL Pool (DSP), you are not able to rename objects using the system stored procedure sp_rename. Instead, you either leave the object’s name alone, drop and recreate it, or use a different command, RENAME OBJECT, which has its own syntax.

When you try to run the sp_rename procedure on your DSP, you will get an error like this:

Screenshot showing T-SQL commands to rename tables using the sp_rename procedure in SQL Server Management Studio and the corresponding error since the command is not supported on this type of database

The full error message is:

An insufficient number of arguments were supplied for the procedure or function sp_rename.

This is an odd message, because it leads you to believe you’ve mistyped something rather than telling you that the procedure isn’t valid on this type of database. I’m not sure why the procedure is even installed on the database, given that it doesn’t work as it does elsewhere.

When I first got that message, I wasted time in the documentation verifying that I had the correct parameters for the procedure. The following notice is displayed at the top of the documentation page, but it is generic and not specific to the command documented on that page. Plus, the notice only warns about serverless SQL pools, not dedicated SQL pools like the one I am working with.

Screenshot of a note in a Microsoft document about the sp_rename procedure, saying that some T-SQL commands are not supported for Azure Synapse Analytics serverless SQL pools

Mostly, when it comes to this system procedure not working, I am annoyed that, from the documentation of the procedure, it seems like it should work in this scenario. That includes the existence of the green checkmark next to “Azure Synapse Analytics” at the top of the page!

Screenshot of the top of the Microsoft document for the sp_rename procedure showing the supported list of database types the page applies to, which includes Azure Synapse Analytics

How to Rename Tables on a Dedicated SQL Pool

If the main system stored procedure for renaming objects doesn’t work for DSPs, how then are we supposed to rename objects as needed? Or are we out of luck, forced to drop and recreate the table or leave its name alone? At first glance, it does seem that way, because even the “Rename” option in the right-click menu on a table is grayed out:

Screenshot of the right-click menu of a table in the Object Explorer of SQL Server Management Studio showing that the Rename option is disabled and grayed out.

There is a way to rename; it’s just a little harder to find! On this type of database, the RENAME OBJECT command will let you rename a table. The syntax looks like:

RENAME OBJECT MySchema.MyTable TO NewTableName;

That command worked successfully for me in my scenario of renaming a table that I wanted to mark for future deletion, as shown by this screenshot:

Screenshot showing a T-SQL statement to rename a table using the "rename object" command in SQL Server Management Studio with no error messages since it is supported for this type of database

Reading further in the sp_rename documentation, if you scroll all the way down to the Examples section, there is another note saying that sp_rename is currently only in “preview” for Azure Synapse Analytics and is therefore only available for objects in the dbo schema. Honestly, that’s a little silly to me. It’s also silly that you would only find that out by looking at the examples; the note isn’t in the main section of the documentation, so it isn’t easy to find.

Screenshot of the Microsoft document for the sp_rename procedure saying that the procedure is still in preview for Azure Synapse Analytics so is only supported for the dbo schema and no custom schemas

If you would like to read further on the new syntax for renaming tables in Dedicated SQL Pools, you can review the document for the rename object command.

Summary

SQL Pools in Azure Synapse Analytics give you a lot of great performance and features that are specifically tuned for data warehouse workloads, but those performance improvements come at a cost: not being able to do things exactly as you are used to with standard SQL Server databases. I’ve struggled quite a bit with database development on this style of database, since it seems like every time I want to do something, I hit a wall of differences that I didn’t expect. I have found an explanation or workaround for most of these problems, as evidenced by the information above, but it’s not a welcome interruption to my workflow. I hope that you will hit fewer walls in your own database development now that you’ve read through this post.


How to Enable Akismet Anti-Spam on WordPress

This is going to be a super quick post, inspired by my late-night searching while trying to finally enable the anti-spam feature from Akismet (Jetpack) on this website. Every time I went to the Jetpack settings page and tried to enable the Personal version of the tool, clicking the “Learn More” button landed me on a plain page that simply said, “Sorry, you are not allowed to access this page.” Searching for a fix through a normal search engine never turned up anything specific enough to help. I finally figured it out with the assistance of an AI chat, and I am so excited about fixing this issue that I needed to share it with everyone else.


How to Enable Akismet Anti-Spam from Jetpack

I don’t know why this didn’t click for me earlier, but Akismet Anti-Spam is a plugin for WordPress, so before you can enable the feature through the Jetpack main page, you need the plugin installed on your website.

Once you install the plugin on your site, it is then easy to get it running by going through the entire process as prompted by Jetpack.

Install Plugin from Plugins Page

In your WordPress admin dashboard, click “Plugins > Add Plugins” from the menu on the left side.

Screenshot from the WordPress admin console showing that you should select the subitem "Add Plugin" under menu item "Plugins"

On that page, enter “Akismet” in the search box at the top right of the screen, then click “Install Now” on the “Akismet Anti-Spam: Spam Protection” plugin:

Screenshot of the WordPress admin console page where you can install the Akismet plugin by searching for it using the search box on the top right of the screen

You will then be directed through the purchasing process for the plugin, which applies even if you are doing the free personal version of the plugin. You will be asked to set an annual price for yourself (I chose $0 since I’m not currently making money on this site), then finish the “checkout process” using your Jetpack or WordPress account information.

After finishing the checkout process, I was provided with a screen and an API key for the plugin and received an email for setting up the plugin from Jetpack. I copied the API key to a safe location from the checkout screen, then went to my email and clicked the link for setting up the plugin. That took me to a screen in my WordPress admin console where I could choose to manually enter the API key, which I did.

At that point, the plugin was installed, set up, and already protecting me from spam!

Summary

I wish the explanation of how to handle the Akismet anti-spam plugin was a little more straightforward. The “Products” page of the Jetpack section on the admin portal makes it seem like you can get the feature directly from that section of the website, and people (like me) may get confused when they end up at an error screen instead of being prompted to install the plugin.

To enable the Akismet anti-spam feature on your WordPress site, first install it from the “Plugins” page of the admin portal, then work through the process as prompted by the setup process with Jetpack.


Managing Deleted Key Vaults

Having delete protection on shared cloud resources is usually a very beneficial feature to enable, since it protects you and your organization from the disaster of someone accidentally deleting a resource by keeping that resource available in the background for restore after deletion. My team has the feature enabled on our storage accounts and some other resources, which I knew about, but I did not know that our key vaults had the same feature enabled, until I tried to create a new key vault with the same name as one I had already deleted and got an error saying a key vault with that name already existed.

In this post I will show how to find and manage deleted key vaults and how to permanently delete them if you want to. You could use this process to find and recover a key vault that was accidentally deleted, or to do what I did and get rid of one permanently so you can recreate it.


Finding Deleted Key Vaults

When running a Bicep template, which was creating a new version of a key vault I had deleted moments before, I got an error that the key vault couldn’t be created because one with the same name already existed. Confused, since I knew I had already deleted the resource, I went back out to the Azure portal and searched for the key vault the template error indicated, which was called “biceptest”. As you can see in the screenshot below, searching for that name returned no results.

Screenshot of the Azure Portal page for Key Vault resources showing that one named "biceptest" does not appear when searched for, since it has been deleted.

As I mentioned above, key vaults can be configured not to be permanently deleted immediately, instead staying alive in the background for a set amount of time so they can be restored if needed. To find any deleted key vaults that are still available for restore, click the “Manage deleted vaults” button on the top menu of the key vault list.

Screenshot of the Azure Portal page for Key Vault resources showing where to locate the "Manage deleted vaults" button

When you click that, a new pane will pop up that will let you filter and view deleted key vaults by Subscription. Choose your subscription from the dropdown menu, and you will then be given a list of deleted key vaults that are still available for restore.

Screenshot of the Azure Portal page for Key Vault resources showing the "Manage deleted vaults" pane which lists recently deleted vaults that have not yet been permanently purged

Notice in the above screenshot that the deleted vaults list shows the date each vault was deleted and the date it is scheduled to be permanently removed from Azure. In my case, I had 90 days to recover a deleted vault.

Recover a Deleted Key Vault

To recover a deleted key vault, you need to check the box next to it in the pane showing a list of deleted vaults for a subscription, then click the “Recover” button at the bottom of the screen:

Screenshot of the Azure Portal page for Key Vault resources showing the "Manage deleted vaults" pane where you can click the "Recover" button to undelete the resource.

Permanently Delete a Deleted Key Vault

If you would like to permanently get rid of a deleted key vault, perhaps to create a new vault with the same name without getting an error, you will need to click the “Purge” button at the bottom of the screen after checking the box next to the vault you want to permanently delete.

Screenshot of the Azure Portal page for Key Vault resources showing the "Manage deleted vaults" pane where you can click the "Purge" button to permanently delete the resource

Note: If the key vault was set up with purge protection enabled, you will not be able to purge (permanently delete) the vault. In that case, the vault will only be permanently deleted once the retention period has elapsed.

Summary

Choosing to delete a key vault through the Azure portal does not guarantee that the vault has been completely deleted from your system. If the vault was set up with soft delete enabled, you may be able to recover it for a set amount of time after it was deleted. If you want to permanently delete such a vault, go into “Manage deleted vaults”, choose the vault you want to completely remove, then click the option to “Purge”. Once you have done that, the key vault will be 100% gone, and you will be able to create a new one with the same name if you choose to do so.


Return the True URL for a Document in SharePoint Online Indexer for Azure Search

I am going to keep today’s post short and sweet, covering a quick change I needed to make to my SharePoint Online Indexer (still in preview, but we’re using it for our custom chat bots) so that the index of a SharePoint library returns the true URL of each source document. We feed that URL back to users so they can validate the chat bot’s answers.

It took me longer than I would like to admit to figure out how to do this, even though the metadata item is listed in the one and only Microsoft document for this tool. I was looking for the URL, while the documentation only mentioned a URI, didn’t explain what it meant by that, and gave no examples of it being used.

This index is specifically used for custom chat bots created through Azure AI Foundry, and not for those created with other AI or cognitive services within Azure. I saw a lot of documentation and forum posts about those versions of indexers, but didn’t see anything covering this topic specifically which is why I wanted to write this post.


Adding SharePoint document URL to index results

For creating my Azure Search data sources, indexes, and indexers, I have used Postman to run the API calls needed to hit the SharePoint Online (SPO) indexer service, since that is the only way to create this type of indexer (you can’t use the Azure portal wizard).

Returning the document URL in your index is very simple: you only need to add the line below to your index definition, and then rename/map it in the indexer definition if you want. I did not want to rename it, so I only changed the index definition.

{
  "name": "metadata_spo_item_weburi",
  "type": "Edm.String",
  "key": false,
  "searchable": false,
  "filterable": false,
  "sortable": false,
  "facetable": false
},

Once you add that to your index definition, send the API request through again, then reset and run the indexer related to the index. At that point, you should be able to query your index through the console and see the URL included in the results.
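For reference, resetting and running an indexer are both plain REST calls against the search service; a sketch with placeholder service name, indexer name, and API version:

```
POST https://my-search-service.search.windows.net/indexers/my-spo-indexer/reset?api-version=2024-05-01-preview
api-key: <admin-key>

POST https://my-search-service.search.windows.net/indexers/my-spo-indexer/run?api-version=2024-05-01-preview
api-key: <admin-key>
```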

Screenshot of an Azure AI Search Index resource test query demonstrating what the full URL of a SharePoint document looks like in the index

Summary

If you are using the SharePoint Online Indexer for Azure AI Search (with Azure AI Foundry) and you would like to return the full URL of a source document for a chat bot response, you can do so by adding the “metadata_spo_item_weburi” metadata field to your index definition.


Cybersecurity News Sources I am Following

If you work anywhere in the IT field, you need to regularly review news sources for announcements about cybersecurity threats, vulnerabilities, and exploits. There have never been more threats to our digital footprints than there are right now, and attackers are getting more ingenious with their attacks every day.

I used to not care about following this type of news, but then the recent SharePoint legacy server exploit hit close to home for my team. We weren’t negatively impacted, but we could have been. Thankfully, other people at my company were already keeping on top of those things so we were able to act immediately to secure our server as soon as the vulnerability was announced.

My team is also diving heavily into the AI space, which has been riddled with vulnerabilities and hacks this year, and following a few websites now lets me keep on top of those too.

To make it easier to see recent news at a glance, I decided to make myself an RSS feed (is that outdated?) to follow a handful of the top cybersecurity news sites so that I could quickly get an overview of what’s being discussed.

Sites I am following

In my RSS feed for cybersecurity news, I am following these websites:

  • Dark Reading
  • The Hacker News
  • SecurityWeek
  • Ars Technica Security

The first two on the list, Dark Reading and The Hacker News, are the ones I end up clicking through to the most, but all have good feeds that you can follow easily. I personally am using feedly.com as my RSS reader tool, but you can follow the sites however you choose. Just make sure you read regularly since new threats are coming out all the time.


You Should Make Your Writing Easier to Understand

As technical professionals, those of us in the IT field tend to lean towards writing with technical language when we are communicating with others, whether that is with other members of our own teams or with business users or customers. While we may find our technical writing easy to understand, that may not be the case for others that we are working with.

Recently I was writing an email to a business customer in another part of my company, someone we had made a custom chat bot for, trying to explain how the bot can give great answers even when the information isn’t explicitly written in the documentation we gave it for Retrieval-Augmented Generation (RAG). Writing that email turned into a 20-minute lesson in simplifying my explanation to a level a normal business user would understand. I remembered that there are online services that will check the reading level of your writing, so I put my email draft into one of them, was surprised by the outcome, and knew I needed to rewrite the email to be easier for non-technical people to understand.


Average U.S. Adult Reading Level

Based on different studies and models, it is estimated that the average reading level for an adult in the United States is around the 7th-8th grade level[1]. There are other estimates that say that 54% of adults in the U.S. have a 6th grade reading level or below[2], which is a startling statistic.

I bring those statistics into this post because they point out why we as technical professionals may need to change our writing behaviors, especially when communicating with non-technical business users or customers. We need to make sure others understand what we are trying to say. If we are writing at a college level but the average person can only comprehend up to a 7th or 8th grade level, there are going to be misunderstandings.
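For a concrete sense of where those grade numbers come from, the widely used Flesch-Kincaid grade formula needs only word, sentence, and syllable counts: 0.39 * (words/sentences) + 11.8 * (syllables/words) - 15.59. A rough sketch in Python (the syllable counter is a crude vowel-group heuristic, not what any real readability tool uses):

```python
import re

def count_syllables(word: str) -> int:
    # Crude heuristic: count groups of consecutive vowels (y included)
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def fk_grade(text: str) -> float:
    """Flesch-Kincaid grade: 0.39*(words/sentences) + 11.8*(syllables/words) - 15.59"""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return 0.39 * (len(words) / len(sentences)) + 11.8 * (syllables / len(words)) - 15.59
```

Longer sentences and longer words both push the grade up, which is why the usual advice is simply to shorten both.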

Check Reading Level Online

In the email about how our chat bot worked, once I realized it was overly complicated from a business user’s perspective, I started researching concrete changes I could make to the message to make it more understandable to a normal, non-technical person. I love being in the weeds of the technicalities, but I know most of our business customers don’t; they only want to know how to use the tool to help themselves.

The best resources I found were the “Hemingway Editor Readability Checker” and then an older PDF document from Montclair State University which walks through getting analysis of writing in Microsoft Word. Jump to the section below to learn more about the Microsoft Word method.

The “Hemingway Editor” is a simple website where you can paste in the text you’d like to check, or provide a full text document, and it will do a quick analysis, highlighting sentences that are hard to read, very hard to read, or fine as-is. It also gives a numeric “Grade” value indicating the grade level needed to read the text.

When I pasted in my original email draft, it was rated at Grade 14, with every sentence marked as either hard or very hard to read.

Screenshot of the Hemingway App readability checker showing a block of text marked with red and yellow highlights for very hard and hard-to-read sentences. The right panel displays a readability grade of 14 and notes that most of the six sentences are difficult to read

That review confirmed that I needed to rewrite the email to be less technical and easier to understand for average people. The website recommends aiming for Grade 9, which is what I tried to do. After a lot of editing, I got the score down to 10, which was as close as I could get to 9 without completely changing what I was trying to communicate.

Screenshot of the Hemingway App readability checker showing a block of text highlighted mostly in yellow, indicating hard-to-read sentences. The right panel reports a readability grade of 10 and notes that 5 of 6 sentences are hard to read, with none marked as very hard.

Check Reading Level with Microsoft Word

If you don’t want to use the online editor, you can also check your writing level through Microsoft Word. Before you can check the reading level, though, you must first enable the feature.

In Word, go to File > “Options”, then “Proofing”. Under that page, check the box for “Show readability statistics”. Click OK to save the change.

Screenshot of Microsoft Word "options" window on the "Proofing" page, demonstrating how to turn on the readability statistics function through a checkbox option.

Once you have enabled the feature, you can then go to the “Review” tab and click the button for “Spelling and Grammar”.

Screenshot of the "Review" ribbon tab at the top of the Microsoft Word screen highlighting the location of the feature "Spelling and Grammar".

When you click that, you will first get a panel that will point out any grammar issues that the program has found. If you don’t care about that, you can click the “X” at the top left of the window (next to “1 remaining”) to close those suggestions.

Screenshot of the first page of the "Spelling and Grammar" editor panel in Microsoft Word

After closing the grammar window, you will then get scores for your writing and recommendations for fixing it. There is also an option to check the similarity of the text to what can be found online, which might be useful for teachers and professors who are reviewing others’ writing and may be concerned about plagiarism.

Screenshot of the "Spelling and Grammar" editor in Microsoft Word, showing the Editor Score of 89% for the document and other corrections, refinements, and features available for checking the document.

If you keep scrolling almost to the bottom of that recommendations panel, you can click on the “Document stats” button under the “Insights” heading, which will bring up a separate window with the reading level information and other details about your writing.

Screenshot showing the bottom of the "Spelling and Grammar" editor screen in Microsoft Word, where you can find the button called "Document Stats" to get insights into your document

While there are other statistics about my writing that could be helpful in other scenarios, what I am most interested in for this example is the “Flesch-Kincaid Grade Level”. In this case, Microsoft Word reports the same reading level as the online checker did for my simplified email. Which is cool.

Screenshot of the "Readability Statistics" window in Microsoft Word, which shows the word counts, averages, and readability scores of the document. The reading level of my sample email is 10.0 in this window.
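If you’re curious what these tools are computing under the hood, the Flesch-Kincaid Grade Level is a published formula: 0.39 × (words per sentence) + 11.8 × (syllables per word) − 15.59. Here is a rough sketch in Python; the syllable counter is a crude vowel-group heuristic (real tools use pronunciation dictionaries), so expect its scores to differ somewhat from Word’s.

```python
import re

def fk_grade(text: str) -> float:
    """Approximate the Flesch-Kincaid Grade Level of a block of text."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)

    def syllables(word: str) -> int:
        # Rough heuristic: count groups of consecutive vowels and
        # drop a trailing silent 'e'. Real tools use dictionaries.
        word = word.lower().rstrip("e") or word.lower()
        return max(1, len(re.findall(r"[aeiouy]+", word)))

    total_syllables = sum(syllables(w) for w in words)
    return (0.39 * (len(words) / len(sentences))
            + 11.8 * (total_syllables / len(words))
            - 15.59)

# Longer sentences built from longer words push the grade up.
print(fk_grade("The cat sat on the mat. It was warm."))
print(fk_grade("Comprehensive organizational documentation necessitates deliberate simplification."))
```

Short, plain sentences can even score below zero with this formula, which is why the tools clamp or round what they show you.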

Reading Level for this Post

I got curious while writing this post, wondering what its reading level would be. The answer? 12.2 according to Microsoft Word and 13 according to the online Hemingway Editor.

Screenshot of the "Readability Statistics" window in Microsoft Word, which shows the word counts, averages, and readability scores of the document. The reading level of my blog post is 12.2 in this window.

Screenshot of the Hemingway Editor Readability Checker showing a score of Grade 13 for my blog post draft, with 13 of 34 sentences marked as very hard to read and 10 of 34 marked as hard to read.

I thought it would be higher, so I’m glad to see that it hopefully isn’t unreadable for your average IT professional.

Summary

Technical people are often bad communicators, especially when it comes to interacting with non-technical people. I can’t say that we’re all terrible at it, but many technical degrees require technical communication classes for a reason. I am as guilty of too-elaborate writing as others are. But I am now going to intentionally work on better summarizing myself when emailing and talking with my business users. I would never want to make someone feel dumb because I was talking at too high a level.


Change the Admin Password on an Oracle Database

Do you have an old or bad Oracle admin password that you’ve been putting off changing because you’re scared of the impacts? Has your Oracle SYS user password been through the hands of multiple generations of database developers? Or maybe you just need to start regularly rotating your admin passwords to meet auditing guidelines? If you answered yes to any of those, I am here to help you change the admin passwords on your Oracle Cloud Infrastructure (OCI) databases.

This post focuses on changing the passwords for OCI databases and pluggable databases. I have specifically done this on database versions 23.9.0.25.07 and 19.0.0.0. The process was exactly the same for both and is covered fully in this post.

What’s in this post

Why change your SYS and SYSTEM user passwords?

As we all know, password security is one of the easiest ways to increase the security of any account you own, including the admin accounts for your OCI database. There have been countless data breaches across all sectors, even in organizations you would think would know better, caused by people using passwords that are too simple, like “admin123” or “password”. We want to be better than that.

Regularly rotating your strong passwords will also improve the security posture of your system, which is another reason to consider changing the passwords of your SYS and SYSTEM users, especially once I show you how easy it is to do.

Disclaimer: This process worked for me and my systems using OCI databases; it may not work as flawlessly for you. If your overall architecture has applications using these admin accounts for access, changing the password could break those systems. Make sure you don’t have any applications, pipelines, or processes using these accounts before you start, or be aware that they will all have to be updated with the new password once you change it on the database (but don’t be that person; use service accounts or managed identities instead!).

Change the SYS User and TDE Wallet Passwords through the Console

The best and easiest way to change the password for your SYS admin account on an OCI database is to do so through the OCI console. If you navigate to the database you need to make the change for (not the Database System or the Pluggable Databases, just the Database level), you can find the option to change the passwords under the additional menu on the top right of the screen. Choose “Manage Passwords”.


That will open a pane that looks like this, which allows you to change the password for your Admin account (SYS user) or for the TDE wallet.

You can only change one of those passwords at a time. To change the admin user password, leave the “Update administrator password” option selected, then enter the new password into both boxes. When you start typing, the password requirements will be displayed.

If you enter a password that doesn’t meet those requirements and then try to save, you will get this error:

For my database, the password requirements are the following:

  • Length: 9-30 characters
  • Alphabetic characters:
    • Minimum 2 uppercase
    • Minimum 2 lowercase
  • Numeric: Minimum 2
  • Special characters:
    • Minimum 2
    • Only options are hyphen, underscore, pound

Once you click “Apply” to save the password, it will take about 2 minutes for the database to make the change. During that time, the state of the database will show as “Updating”.

If you would like to update the TDE Wallet password as well, you will need to wait for the other password change to apply first. It is just as simple to update that password as it was to update the admin password, except this time you must first specify the previous password along with the new password and confirmation.

Once again, the database will go into an “Updating” state when you click “Apply” to change the password. For me, though, the TDE Wallet password took much less time to apply.

Change the SYS Password on the Pluggable Database Level

In my situation, once I updated the SYS password at the container database (CDB) level, the same change was automatically applied to all the Pluggable Databases (PDBs) within that CDB. That was a surprise to me, since everything I read online before making the change seemed to indicate that I would need to make the change there as well.

I was able to confirm that the PDB SYS user password had been updated on all PDBs by updating my connections to them in my IDE to use the new password. Once that connection worked, I knew that the password had been updated everywhere.

Change the SYSTEM User Password on the Container Database

The console method of updating the main admin password for an OCI database unfortunately won’t update the passwords for all system users at the same time. In my case, I also needed to update the password of the SYSTEM user. (Curious how many system users there might be on your database? You can view the complete list here.)

To change the password of the user “SYSTEM” on an OCI database, you will need to connect to the container database (CDB) and run the ALTER USER command to change the password. You can do that through the terminal/command line or through an IDE. I chose to make the change through an IDE.

Since I wasn’t sure what was going to be required for updating this user, I decided to start at the pluggable database level, where I ran this command: ALTER USER SYSTEM IDENTIFIED BY "password";. I got an error when trying to run it, though:

I researched that error and found this Oracle help document, which indicated that changing the password for “common users” needs to be done at the CDB level, or the root level of the container database. Based on that, I then ran that same ALTER USER command on the CDB level and it completed without any issues.
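For reference, the commands I ran looked roughly like this sketch; the password is a placeholder, and you may not need the ALTER SESSION line if your connection already lands in the root container.

```sql
-- Common users like SYS and SYSTEM must be altered from the root container.
ALTER SESSION SET CONTAINER = CDB$ROOT;

-- The quoted password here is a placeholder; use your own value.
ALTER USER SYSTEM IDENTIFIED BY "YourNewPassword-Here_1";
```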

I’m not sure why, but the SYSTEM user then became locked (or it was locked before I changed the password and I hadn’t noticed). After changing the password for that account, I wasn’t able to log in to either the CDB or any of the PDBs with that user, so I was worried something had broken. However, by logging in with a different user, I was able to see that the SYSTEM user was locked at the CDB level but not the PDB level, so I unlocked the account and was then able to log in at both the CDB and PDB levels. That also taught me that if a user is locked out at the CDB level, they will not be able to log in to any of the PDBs either, which makes sense for security purposes.
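To check for and clear that kind of lock, queries like these (run as a user that can read the DBA views) did the trick for me:

```sql
-- See whether the account is locked.
SELECT username, account_status
FROM   dba_users
WHERE  username = 'SYSTEM';

-- If the status shows LOCKED, unlock the account from the root container.
ALTER USER SYSTEM ACCOUNT UNLOCK;
```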

Change the SYSTEM Password on the Pluggable Database Level

As with the SYS user, once the SYSTEM user password was changed on the container database (CDB) level, the password for the account was also automatically changed on the pluggable database (PDB) level without me having to do anything.

Summary

The process of changing the admin account passwords on an OCI database is simple and straightforward if you know what you need to do. To change the SYS user password, use the OCI console on the container database level. To change the SYSTEM user password, as well as any other system/common user passwords, you will need to run an ALTER USER SQL command to make the change at the container database level. While I didn’t need to update the password on the pluggable database level at all, you will need to verify the same for your own system.


Troubleshoot ALL Layers of Your Data Flows

This is going to be a short post, which I am writing as a reminder for my future self as well as any of you reading out there.

If you are troubleshooting an error with your pipeline, especially if you work in a database system that has layered views, make sure you fully dig down through all layers of your data setup before you open a ticket with Microsoft. I learned this the annoying way, spending a work week on a ticket with Microsoft for a very strange issue I was seeing in a pipeline that uses a Synapse Serverless SQL Pool. We checked so many things that week with no luck in changing how the pipeline ran, and then the pipeline simply went back to working when I ran it outside of its schedule.

What’s in this post

The Error

The error I was seeing made it look like the Serverless SQL Pool views, which use an OPENROWSET call to a parquet file, were referencing the wrong source file even though I confirmed multiple times that the view definitions were correct. For example, the view was written to use parquet files under TestDatabase/MyTable/** as the source files, but the error was making it seem like they were instead pulling data from TestDatabase/OtherRandomTable/** which was confusing to say the least. I thought that the Serverless node was broken or had a bug that was making the views look at “OtherRandomTable” files instead of the correct files.
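For context, a serverless SQL pool view over parquet files generally looks something like this sketch; the storage account, container, and object names here are placeholders, not my real ones:

```sql
CREATE VIEW dbo.MyTable AS
SELECT *
FROM OPENROWSET(
    BULK 'https://mystorageaccount.dfs.core.windows.net/datalake/TestDatabase/MyTable/**',
    FORMAT = 'PARQUET'
) AS [result];
```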

The Cause

The error happened because multiple views used a CROSS APPLY to another view tied to a parquet file in a data lake, and that parquet file was being deleted and recreated by a parallel pipeline. When the failing pipeline tried to reference its views, it couldn’t find that base view because the source file had not yet been recreated by the parallel pipeline. Makes sense and is so obvious in hindsight, but it took Microsoft support directly asking me to make me realize that I had a view referencing another view, which I needed to check the definition of.
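The hidden dependency looked roughly like the following sketch, where all object and column names are placeholders: one view reads the parquet files directly, and a second view references it through a CROSS APPLY, so the second view fails whenever the base files are mid-recreate.

```sql
-- Base view: reads parquet files that a parallel pipeline
-- deletes and recreates on its own schedule.
CREATE VIEW dbo.vw_Base AS
SELECT *
FROM OPENROWSET(
    BULK 'https://mystorageaccount.dfs.core.windows.net/datalake/TestDatabase/BaseTable/**',
    FORMAT = 'PARQUET'
) AS [b];

-- Dependent view: the CROSS APPLY ties it to vw_Base, so it breaks
-- while the base files are deleted but not yet recreated.
CREATE VIEW dbo.vw_Report AS
SELECT o.OrderId, b.LookupValue
FROM dbo.vw_Orders AS o
CROSS APPLY (
    SELECT TOP 1 bb.LookupValue
    FROM dbo.vw_Base AS bb
    WHERE bb.OrderId = o.OrderId
) AS b;
```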

The change I needed to make was to update the pipeline triggers so that the process deleting and recreating the base view’s parquet files would be finished before the second pipeline ran and tried to use those files.

If I had done my due diligence and dug through every layer of the data environment, which I am normally good at doing in other scenarios, I would have quickly and easily discovered the issue myself. But sometimes we need to learn the hard way because our brains aren’t running at full capacity. (It also helped that I finally had dedicated time set aside for this problem, so I wasn’t trying to multitask multiple work items at once.)

Summary

If you are troubleshooting ETL failures of any kind, make sure you dig down through all layers of the process to ensure you have checked everything possible related to your failure before reaching out to support. They’ll happily help you find what you missed, but it will save everyone time if you can figure it out yourself first.
