Category: Oracle

A Week in the Life- 9/23 – 9/27

Have you ever wondered what the normal work tasks of a database developer/integration engineer look like? If you have, then this is the post for you. This is a new series of posts where I simply give an overview of what I accomplished each week, giving insight into what life as a database developer looks like for those who might be curious. I also want to do these reviews for my own records and edification, because it’s always good to keep track of the things you accomplish at your job. This post reviews the week of September 23 – September 27, 2024.


Meeting with a Software Vendor

One part of my job is interacting with the various software vendors we buy products from. I’m currently on a longer-term project where we are doing a piece-by-piece upgrade of an application that our legal department uses, so this week I met with the vendor’s development team to get an overview of the next step of this upgrade process.

One of the joys of working with vendors is that sometimes you go into these meetings thinking the purpose is completely different from what it ends up being, and then you have to rethink everything you planned for the meeting on the fly. That was how this meeting went. I thought the meeting was going to be the vendor giving an overview of the product for my own learning; instead, I ended up taking the lead, driving the conversation by asking the questions we needed answered on our side before we could move forward with the next step of the upgrade. As someone newer to the company, this change was a little scary, but I handled it well and got good information for our team from the vendor.

Updating Security Certificates

This part of technology is something I’m still not very familiar with, but I had to dive into it this week. My team had an ETL break because it could no longer communicate with our customer’s software due to an outdated security certificate. To fix the broken ETL, I had to locate the correct updated certificate, install it on our ETL server, and remove the old one. After that small change, things were back to working as they normally do. This problem was a good learning opportunity and not nearly as difficult as I was expecting.

Emergency Database Refresh

I got to experience my first emergency refresh request in our Oracle database environment this week: something went wrong with one of our production databases, and the app development team wanted the database refreshed into the lower environments as quickly as possible to start troubleshooting. I think fate decided we should have a good day, because our Oracle cloning process into our test environments went off without any issues and finished in about an hour, which is almost a record low. We’ve run into numerous issues in recent months with the pluggable database cloning process in Oracle, so we are very thankful that none of those problems arose when we needed this emergency refresh.
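For anyone curious what that cloning involves, here is a minimal sketch of a pluggable database clone in Oracle SQL. All of the names and file paths here are hypothetical, and a real clone depends on your database link and storage setup:

```sql
-- Hypothetical names: clone a production PDB into the test container
-- database over a database link, remapping the datafile paths.
CREATE PLUGGABLE DATABASE app_test
  FROM app_prod@prod_cdb_link
  FILE_NAME_CONVERT = ('/u02/oradata/PROD/', '/u02/oradata/TEST/');

-- A newly cloned PDB starts in MOUNTED mode; open it for use.
ALTER PLUGGABLE DATABASE app_test OPEN;
```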

Completing Database Permissions Requests

Another piece of Oracle work I got to experience for the first time this week was updating user permissions. We received a request from one of our application developers who needed a service account granted the same level of permissions his own account had (in test) so that he could complete a task. Although the SQL needed to complete a permissions change in Oracle looks a little different from what I’m used to in SQL Server, it was overall very similar to making permissions changes with T-SQL, so it was easy to complete.
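To illustrate what I mean (with hypothetical account and object names, since I can’t share the real ones), the Oracle statements look something like this, with the closest T-SQL equivalents for comparison:

```sql
-- Oracle: grant object privileges and a role to the service account (in test)
GRANT SELECT, INSERT, UPDATE ON app_owner.orders TO svc_app;
GRANT app_read_role TO svc_app;

-- The T-SQL versions are close, though role membership is worded differently:
-- GRANT SELECT, INSERT, UPDATE ON dbo.Orders TO svc_app;
-- ALTER ROLE app_read_role ADD MEMBER svc_app;
```

The most noticeable difference is that Oracle grants roles with a plain GRANT statement, while SQL Server uses ALTER ROLE ... ADD MEMBER.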

Attending A Women’s Leadership Conference

My favorite part of this week was getting to spend Tuesday and Wednesday at the Women & Leadership Conference by the Andrus Center at Boise State. This was my first time attending this conference, and I was able to go with the handful of other women in the IT department.

As you would expect for this type of conference, the sessions focused on building soft skills rather than the hard technical skills covered at previous work conferences I’ve attended. I listened to a range of women speakers, all of whom are in various leadership roles across different fields and states. I wouldn’t say I loved every session I attended, but most of them were interesting, and I picked up tips for navigating the work environment as a woman that I hadn’t thought of before. In the near future, I will be writing a recap with more of my takeaways from the conference, if you’re interested in hearing more.

Troubleshooting a Virtual Machine (VM)/SQL Server Connection Issue

This issue was not something I had ever considered before, because I’ve never had to think about the networking setup for the VMs I’ve worked with. Normally, someone else sets up the VM and makes sure it has all the networking and firewall rules it needs before we get access to it. However, one of my project teammates and I found out that’s not always the case.

I am working on a different application upgrade project from the one I mentioned above, and I was in charge of setting up its SQL Servers last week. This week, the main application developer started his portion of setting up the VMs we put the SQL Servers on, and he found that he was unable to reach the SQL Server from the copy of SQL Server Management Studio (SSMS) on his local computer. When he messaged me about this connection issue, I had no idea where to start troubleshooting, since I had never had to think about how SSMS connects to a SQL Server on a VM before.

I had to work with our networking team to figure out the issue, which ended up being that the local firewall on the VM was not set up to allow ingress on the two ports that SSMS/SQL Server requires for connections. The networking team wrote two commands to allow that ingress, ran them on all three servers we set up last week, and then we were able to connect to the new SQL Server instances from our local SSMS apps instead of having to log in to the VMs directly.
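One query that came in handy while troubleshooting: you can ask SQL Server itself which TCP port it accepted your connection on, which tells you exactly what the firewall needs to allow (the defaults are TCP 1433 for the database engine and UDP 1434 for the SQL Server Browser service, but a named instance can land elsewhere):

```sql
-- Shows the TCP port used by the current connection.
-- Returns NULL for shared-memory connections, so run it from a
-- connection made over TCP (e.g., SSMS connecting to tcp:localhost).
SELECT local_tcp_port
FROM sys.dm_exec_connections
WHERE session_id = @@SPID;
```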

Finishing and Presenting a Python Data Analysis Script

One of the things I was most pleased and excited about this week was finally getting to demo a complicated Python script I wrote to optimize customer orders based on their previous order history, given a list of input parameters from the customer. I have spent months working on this script, going back and forth with the business users about what should be included, adding new features when requested, and even totally reworking the algorithm when their requests got more advanced. My demo of the script had an audience of the business users as well as members of my own team, including my manager.

The demo itself went really well: the script worked exactly as I wanted it to, running in under 5 seconds to optimize the customer data and provide a recommendation for what should be stocked to fulfill those orders, a massive improvement over the current process, which takes a week to calculate the best possible solutions. What did not go as expected was hearing from the business users that they are unsure whether the solution is accurate enough, so they’re afraid to move forward with my script unless I do several things that would likely mean several more months of rework. At the end of the meeting, we decided as a team that instead of implementing the solution I had worked hard on for months, we would move their current process onto a newer, beefier server in hopes that it would run faster than the current week.

No developer wants to spend months working on something just to be told that it won’t be used due to reasons that are out of the developer’s control, but that seems to be the situation I am in now. I am staying positive about it though, because it was a great Python development learning opportunity for me when I came into the company, and I have faith that with time, the business users will come around to using the faster and more modern solution when they see that the results I produced are very close to what they already get with the current solution. I might need to do a little more tweaking to get my algorithm’s results into an acceptable range compared to the current process, but I am hoping it won’t require a full rework to do so.

Summary

Sometimes I have weeks at work where I feel like I haven’t accomplished all that much. This week was one of those. But now that I have typed out everything I did like this, I am seeing that I do a LOT of work while I’m at work, and I am proud of everything I learned and accomplished this week, even if every single work item did not go as planned.

Being a database developer or data integration engineer comes with a lot of variation in work, which you can probably see by looking at this week’s and last week’s summaries. There is always something new to learn and work on, so I’m excited I’ve had another interesting week of work and I look forward to next week being interesting as well. (Although I’m technically on vacation next week, so really I mean the following week.)

Do you have any questions about what a database developer does day-to-day that I haven’t answered yet? Let me know in the comments below!

A Week in the Life- 9/16 – 9/20

Have you ever wondered what the normal work tasks of a database developer/integration engineer look like? If you have, then this is the post for you. I want to start a new series of posts where I simply give an overview of what I accomplished each week, giving insight into what life as a database developer looks like for those who might be curious. I also want to do these reviews for my own records and edification, because it’s always good to keep track of the things you accomplish at your job. This post reviews the week of September 16 – September 20, 2024.


SQL Server Administration

This week was a bit of a throwback for me because one of my main tasks was to get SQL Server 2022 and one particular database up and running on three different virtual machines (VMs) for an application upgrade project I am working on. I have not created a new SQL Server, created a database backup, or restored a database backup in years. At my previous job, there was a dedicated team of DBAs that did that sort of work, and I was not on that team. But my current role is more of a jack-of-all-database-skills role, so it was time to blow the dust off my server creation skills.

The first step, of course, was to install SQL Server 2022 on each of the Windows VMs, following some guidance from the application vendor. I have never installed a SQL Server with so many custom configurations, so that was an interesting new thing to learn and do this week. After taking too much time to install SQL Server on each of the three Windows servers, I also installed SQL Server Management Studio (SSMS) 20, the main app used to interact with SQL Servers and their databases, on the VMs. That was the easiest part of this whole thing.

Before I could restore a copy of our production database onto these new servers, I needed to copy all existing/needed logins from the current production server to each of the new test servers so that the database restoration wouldn’t fail due to a user not having a corresponding login. I also did not want to create the new logins from scratch, because then the SIDs of the restored database users wouldn’t match the SIDs of the new logins, and I would have to manually repair every one of those users. This left me wondering how I could script out all the existing production logins, including their passwords (we have a lot of SQL auth logins on this server), so I didn’t have to do any of it manually. I found this very helpful Microsoft article that includes scripts for creating two stored procedures that you can then run to generate the perfect CREATE scripts for every login on a server. I would highly recommend using these scripts if you need to migrate/copy a database to a new server.
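For reference, in the version of that article I used, the two procedures are named sp_hexadecimal and sp_help_revlogin, and once they exist on the source server, generating the scripts is a one-liner:

```sql
-- On the source (production) server, after creating the two procedures
-- from the Microsoft article:
EXEC sp_help_revlogin;

-- The output is one CREATE LOGIN statement per login that preserves the
-- original SID and hashed password, e.g. (illustrative, not real values):
-- CREATE LOGIN [app_login] WITH PASSWORD = 0x0200A1... HASHED, SID = 0x23AB...
```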

The final step was to back up the current production database and restore a copy to each of these new test servers so the app developers could do their side of the upgrade testing. I had to meet with one of the app developers so we could do our portions of the production backup at the same time, since that is important on the app side to keep all data in the system aligned. Creating the backup was just as easy as I remembered.
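For anyone following along at home, the backup and restore themselves are the straightforward part; here is a minimal sketch with hypothetical database names and file paths:

```sql
-- On production: write a compressed full backup.
BACKUP DATABASE AppDb
TO DISK = N'D:\Backups\AppDb_full.bak'
WITH COMPRESSION, STATS = 10;

-- On each new test server: restore the copy, relocating the data and
-- log files to the local drives. (Check the logical file names first
-- with RESTORE FILELISTONLY if you don't know them.)
RESTORE DATABASE AppDb
FROM DISK = N'D:\Backups\AppDb_full.bak'
WITH MOVE N'AppDb' TO N'D:\SQLData\AppDb.mdf',
     MOVE N'AppDb_log' TO N'E:\SQLLogs\AppDb_log.ldf',
     STATS = 10;
```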

Unfortunately, after I had created the 97 GB compressed backup file, I had to figure out how to copy it from the current production VM to the new test VMs, because none of these VMs are networked together. My first idea was to simply copy/paste the file from one VM to another, but that projected a 17-hour transfer for a single copy, and it kept failing after a few minutes with an unspecified error anyway (I’m guessing network issues). I could not think of a better way to get the file from one server to another, and Google wasn’t being helpful for my specific situation, so I asked my main mentor coworker what he would do, and he suggested a brilliant idea: log in to one of our Azure storage accounts from the source server and upload the backup file, then log in to the same storage account from the destination servers and download it. This was the best option for our situation (and it only works because we do not limit internet access on our Windows server VMs, unlike at my previous roles, so this method might not work for you). The upload of the 97 GB file took about 25 minutes, and the download onto each destination server took about 15 minutes. Much faster than the projected 17 hours of copying directly from one VM to the other over our VPN!

Ongoing Learning

As a part of any career, one should always be striving to learn new things and grow in their role. My current organization is fantastic at helping me do this, not only on an official level with conferences and online trainings, but also unofficially, with everyone willing to share what they know about our systems at any given time.

This week, I got to attend a casual training led by one of our Oracle app developers to tell us everything he knows about the Oracle Grid infrastructure and Automatic Storage Management (ASM), since those topics came up while we were troubleshooting an Oracle issue a few weeks ago, and I knew nothing about them. I also attended a more formal training with our Microsoft representatives to start wrapping up training we’ve been doing for a couple months on Azure AI and Machine Learning tools. Both of these training sessions were informative and useful and make me feel better prepared to handle upcoming projects and issues.

Python Development

Since I started at Boise Cascade, I’ve been doing a lot of development in Python, and I am currently working on a project to accurately calculate optimal solutions for fulfilling customers’ requested wood orders. This project has been a big struggle, not due to technical issues, but due to changing requirements from the business customers, which means I’ve had to write two totally different solutions for the same problem over the past couple of months. However, the struggle is finally almost over: I have mostly wrapped up my rewrite of the solver using a new algorithm that a colleague wrote the basis of for me. (Yay for more experienced developers helping us solve difficult problems!) I am very excited about bringing this program to a close and will hopefully demo it to the business users next week to get final feedback and move it to prod in the very near future. The users are also very excited, since this new program solves in less than 5 seconds a problem that the previous/current solution takes a literal WEEK to work through. I have found it really interesting to write this program, but I also know I will find new projects equally interesting when I finally get to move on.

Preparing for New Projects

Once I finally wrap up my wood order optimizer, I will get to move on to rewriting a different solver currently used by someone in the company who would like it automated away. To get up to speed on what the user is requesting, I met with one of my teammates this morning to get context on the solution that is currently in place and what the user wants the new solution to do differently. Basically, I needed to figure out what the heck this project was even supposed to be about. In my prep meeting, I learned that I will definitely need to meet with the user to discuss what they expect and need from the solution since they’re the one currently doing the solving manually, but I at least now have a general sense of what is being requested. I am looking forward to getting started on this project once I have the time.

Summary

That is my whole week in review! While database development usually does include a lot of writing SQL, developing ETLs to move data around, and doing other things directly with databases, that does not mean the work will only ever consist of those types of tasks. It all depends on the company you work for and how your specific role is defined. I am very thankful that my current role includes those types of tasks but also adds other work, like writing data integration scripts in Python, so that I can stretch my skills and knowledge.

Is there something specific you were hoping I would cover about my time at work that I didn’t cover in this post? Let me know in the comments below!

Oracle Cloud World 2024- Day 2 In Review

I am going to try to keep this post shorter than yesterday’s, because today was an even longer conference day for me, and I followed it up by sightseeing in Vegas for the entire evening afterwards. Overall, my sentiment of the day is that a good portion of the sessions were not what I was hoping for or needing in my Oracle journey right now.

Once again, I did learn a lot of new things, but so many of those things aren’t applicable to our Oracle systems at my current job. I think a big part of that is due to the fact that Oracle is basically a side database for us, not the main databases we are developing with the majority of the time. Many of the sessions I’ve attended in the conference so far have been targeted at developers who solely develop in the Oracle ecosystem, doing more advanced developing than anything I’ve needed to do with our Oracle databases. That’s not really a fault of the sessions, more of a mismatch of my needs and wants with the main target audience of the conference.


Keynote 1: Discover the Power of Oracle AI in Oracle Fusion Applications

This was another keynote where many incredible statements were made about the influence and reach of Oracle, specifically in relation to Large Language Models (LLMs) and other aspects of Artificial Intelligence (AI). One of my to-do items for after the conference is to complete my own research to fact-check these grandiose statements that have been made throughout the conference.

The main point of this keynote was once again to talk with large Oracle customers that are utilizing AI in their internal corporate systems or “at the edge” with their own customers. I thought a lot of the use cases presented were interesting, like how Caesars uses AI to speed up check-in, check-out, and other customer-facing features of their resorts and casinos. I also liked many of the AI agents and features Oracle has been working on, like the ability to automate a lot of the more tedious aspects of database management, or the day-to-day tasks of accountants and other parts of a business, to free up human time for better things.

Vector Search and AI

This session is the epitome of what I wish every session at the conference would be (although if they were all like this, I think I would be extremely overwhelmed with information). This was by far my favorite session of the day. The topic was vector search and how it relates to artificial intelligence, and the presenters did a fantastic job of explaining the general technology concepts while sprinkling in a few plugs for how Oracle is doing those things, and doing them better than current standards, with their new services. I cannot validate whether Oracle’s versions of vector search and vector databases are better than their competitors’, but this session was extremely informative and cleared up a lot of topics I had been unclear on when it comes to AI.

Best Practices for Oracle Integration and OCI Process Automation

This was the second-best session I attended today. What I liked most about this presentation is that it was 100% what I expected based on the title and abstract. The presenters had concise, to-the-point slides covering exactly what you need to know to make the best of your Oracle Integration and Oracle Cloud Infrastructure systems. They also used the presentation layout recommended by communications classes: first a quick overview of what they were going to cover, then the information itself, then a quick summary of what they had covered. The presentation was well organized, the presenters were funny, and the information was useful.

Keynote 2: Solving Industries’ Biggest Challenges with Applied AI

This was yet another keynote where a top leader at Oracle interviewed various customers about how they are using Oracle’s AI tools to solve their biggest problems. While hearing testimonials like these is nice, I really don’t need to hear as many of them as I have in the keynotes over the past two days. It was not the worst thing ever, but I feel like picking one of the conflicting sessions instead might have given me more useful information.

A Simple Python, Flask Application with Oracle REST APIs

This hands-on lab definitely did not go the way the presenters wanted it to, which seems to happen with a good portion of live demos. It seems the system our lab sandboxes were built on was not provisioned to handle the load we were putting on it, since there were over 20 people in the session. I think the content we were supposed to be learning was really interesting, but only one person in the class could get the system to work, so he became the unintended example for the rest of us to watch. Shoutout to that Norwegian man who had the luck of the draw in getting the web app to work.

What I realized as I was reviewing the lab code was that Flask and ORDS (Oracle REST Data Services) Python code is structured very similarly to Azure Functions code, which makes sense, since both frameworks let you work with REST APIs without having to do a lot of the more tedious work yourself. Thankfully, the leader of the workshop, who wrote all the code (it’s not his fault the sandboxes weren’t working; it seemed to be a network or VM issue), has the entire example saved on his public GitHub, so I can download it in the near future and play around with REST APIs in the Oracle workspace on my own.

Continuous Integration and Delivery (CI/CD) for the Oracle Database Developer

This was one of my least favorite sessions of the day, simply because I was expecting a session covering Oracle-specific CI/CD practices, but instead the content of the presentation was mostly generalized information about CI/CD, which I’m already familiar with. At some point in the future, I am going to need to research the standard ways people do CI/CD in the Oracle environment to see if there’s anything we can or should use in our own Oracle systems.

One tidbit mentioned during the presentation really brought me back in time about a year: hearing that the Oracle tool SQLcl is based on Liquibase, an open-source database change management tool I did a proof-of-concept project on at my last job. If you are interested in learning more about what I learned while doing a trial with Liquibase, you can read my post about it.

Summary

Today was really difficult for me to get through, because the day felt extremely long. I think about half the sessions I attended were useful to me while the others were not as useful. I have a whole list of random topics I’ve gathered from the presentations today that I want to research further, so that is the brightest side of today at the conference. Tomorrow is the last day of the conference, and only a half day, so I am going to try to make the most of it and learn as much from my last few sessions as possible.

Have you also been at Oracle Cloud World this year or have gone in previous years? If so, I would love to hear about your experiences in the comments below.

Oracle Cloud World 2024- Day 1 In Review

Today was my first day at Oracle Cloud World in Las Vegas, and also my first time ever at an Oracle conference, since I only recently started doing database administration with this RDBMS. As with any technology conference, the day was jam-packed with many different sessions. Although I obviously cannot convey all the information I learned at my sessions throughout the day, I will try to summarize the interesting and key points I took away from each.


Keynote Session 1: “Customers Winning with the Cloud and AI”

I’m not going to lie, I went into this conference and this session not having a great opinion of Oracle, due to multiple negative experiences with their database platform in the past few months. However, this keynote was not a useless session, because I did learn that MGM Resorts owns a huge number of properties and hotels in Las Vegas, which is interesting from a big data perspective, and that the CIA was Oracle’s first customer. There are a lot of rumors online about how Oracle came to be and how it may or may not relate to a CIA project codename, but I couldn’t find any reputable sources, so we’ll just leave it at the CIA being the first and one of the largest customers of Oracle.

Besides those interesting tidbits, this keynote mainly contained somewhat dry interviews with different large customers of Oracle talking about how they’ve utilized Oracle products to revolutionize their businesses and how they’ve started to use AI in their technology journeys. This was the beginning of discussions surrounding “AI” that continued throughout most of the sessions today. (On that topic, I’m starting to feel like the term “AI” is being watered down or misused at this conference to represent things that really shouldn’t fall under that term…)

“Accelerate Your IAM Modernization to Multi-cloud Deployments”

This session was not what I expected it was going to be, and it wasn’t the only one where that happened today. However, even though the content of this session wasn’t what I thought it was going to be, I did learn a few interesting things. The presenters gave many startling facts about the costs associated with data breaches as well as the causes of those breaches. The statistic that I found most interesting is their claim that 60% of breaches resulted from poor patch management.

I was hoping this presentation would cover more of the technical details of implementing and modernizing IAM within the Oracle ecosystem, but it proved to be a general overview of what everyone should be doing, which is what disappointed me about it. However, it at least gave me some topics I can research further on my own to learn more about IAM in Oracle, such as Oracle Access Manager. If I couldn’t get the technical details I wanted, at least I got some direction about what to research next.

“Create a Data Pipeline with Data Transforms in Autonomous Database Data Studio”

This presentation was more the style I was expecting from most of the sessions I chose today, but it once again covered completely different information than I thought it would. The surprise was that I was not aware “Data Transforms” is an Oracle product; I thought it was being used as a general term in the title. If I had known the presentation would be covering a specific service, I would have had a better understanding of what I was about to learn. My mismatched expectations did not make the presentation unenjoyable or uninformative, though.

What I learned from this session is that Oracle has two different ETL platforms for moving data around, similar to how Microsoft has SSIS for SQL Server (and other databases). Data Transforms was the service covered by this session, but the presenters also mentioned Oracle Data Integrator (ODI), an older ETL service. Data Transforms can move data between tons of different types of databases, not just Oracle, and it seemed to have a lot of interesting and easy-to-use ETL capabilities. It seems like Oracle is trying to make this the data flow tool of the future, especially since the presenters covered three features that are about to be added, like vector search/query capabilities. Although I haven’t had the chance to use the tool myself, I want to temper my expectations for what it can accomplish, given my own experiences with other Oracle services. Maybe it’s as fantastic and useful as they say, or maybe it’s just another sales pitch that outshines the real user experience. If you have personal experience, good or bad, with Data Transforms, I would love to hear about it in the comments below.

Keynote Session 2: “Oracle Vision and Strategy”

Of all the sessions today, I think this one had the most interesting pieces of information, although none of it was directly applicable to me or my company. The biggest downside of this keynote was that it went way over time, so I had to head out before the end. The two major topics, presented by Larry Ellison, the founder and CTO of Oracle, were the joining of Oracle Cloud with the other major cloud providers, and various topics surrounding artificial intelligence and how Oracle wants to use it to fix all the problems of the application and database development world.

While I like the idea of them putting Oracle databases into the other cloud platforms–Google Cloud Platform, AWS, and Azure–because it gives me hope that maybe one day we could migrate our Oracle databases to a less fragile ecosystem, it did leave me wondering if one day in the future there will just be one single mega-cloud system and monopoly originating from the combination of the current big four (but maybe I’ve just been reading too many dystopian novels lately).

I thought the second part of the keynote, surrounding current and potential uses of AI integration with other software systems, was more interesting and also a bit scary. Interesting in that automating mundane and error-prone processes makes our lives as database developers and administrators easier. There are so many things Mr. Ellison mentioned automating with AI that sounded great and useful to me. But it also scared me a bit, as it felt like there was an undertone of invasiveness being discussed under the guise of security. Security, on the technological and physical level, is important for individuals, groups, and even our whole country, but I personally believe security should not come at the cost of personal freedom and privacy. Some of the proposed and planned uses of AI, specifically how it relates to biometric authentication for every aspect of our lives, left me feeling a little uneasy (but once again, maybe it’s due to the large number of dystopian books I’ve read lately).

“Access Governance: The Key to Ensuring the Survival of Our Digital Lives”

I think this session was the best presentation of the day, as far as straight communication abilities go. The team of presenters was very well put together and knew their topics well without having to read off their slides at all, and I really appreciated that.

The topic of this session was once again managing who can access what, with Identity and Access Management (IAM) as one of the core topics. The presentation was led by a member of the Oracle leadership team, accompanied by three Oracle customers, including a senior security engineer from Uber. Hearing from the different customers about their experience using IAM in general, not just the Oracle services, offered a great perspective on managing access to applications and databases, and gave me some ideas to take back to my own work. The main Oracle services covered were Oracle Access Governance and the Intelligent Access Dashboard, which I’ll now need to research further myself.

“AI-Based Autoscaling with Avesha for Simplified OKE Management on OCI”

This was my last session of the day, and although I was tired and dreaming of my hotel room, I found it to be another interesting presentation, though not super applicable to my work life. The title is quite a mouthful, but what it covered was how a small company called Avesha has created four tools to help you autoscale and manage Kubernetes clusters in Oracle Cloud Infrastructure (OCI). All four tools seemed like they would be very useful for people working with Kubernetes in Oracle, since apparently the autoscaling in OCI doesn’t always work as well as people want it to (based on audience comments during the Q&A at the end of the presentation).

While I don’t think my company will be using any of Avesha’s tools anytime soon, they did seem like they could be extremely useful to other organizations. And the presenters definitely understood their own products, down to the fine details of how they work, which is always a green flag I appreciate with software vendors.

Summary

Wooh, that was a lot of information to recap and cover in a blog post! After attending these six sessions on the first day of Oracle Cloud World, I’m a little bit overwhelmed and exhausted, and not quite ready for another day and a half of info dumps. But that’s okay, because it’s what’s to be expected from conferences like this. I am hoping the sessions I picked for tomorrow are more applicable to my current role, but even if they aren’t, I’m sure I’ll learn more interesting things throughout the day.