
Oracle Cloud World 2024 - Day 2 In Review

I am going to try to keep this post shorter than yesterday’s because today was an even longer conference day for me, and I followed it up by sightseeing in Vegas for the entire evening afterwards. Overall, my sentiment for the day is that a good portion of the sessions were not what I was hoping for or needing in my Oracle journey right now.

Once again, I did learn a lot of new things, but so many of those things aren’t applicable to our Oracle systems at my current job. I think a big part of that is due to the fact that Oracle is basically a side database for us, not one of the main databases we develop against the majority of the time. Many of the sessions I’ve attended at the conference so far have been targeted at developers who solely develop in the Oracle ecosystem, doing more advanced development than anything I’ve needed to do with our Oracle databases. That’s not really a fault of the sessions; it’s more of a mismatch between my needs and wants and the main target audience of the conference.


Keynote 1: Discover the Power of Oracle AI in Oracle Fusion Applications

This was another keynote where many incredible statements were made about the influence and reach of Oracle, specifically in relation to Large Language Models (LLMs) and other aspects of Artificial Intelligence (AI). One of my to-do items for after the conference is to complete my own research to fact-check these grandiose statements that have been made throughout the conference.

The main point of this keynote was once again to talk to large customers of Oracle that are utilizing AI in their internal corporate systems or “at the edge” with their own customers. I thought a lot of the use cases presented were interesting, like how Caesars uses AI to speed up check-in, check-out, and other customer-facing features of their resorts and casinos. I also liked many of the presented AI agents and features that Oracle has been working on, like the ability to automate a lot of the more tedious aspects of database management or the day-to-day tasks of accountants and other parts of the business, freeing up human time for better things.

The next session I attended, which covered vector search and how it relates to artificial intelligence, was the epitome of what I wish every session at the conference would be, although if they were all like this, I think I would be extremely overwhelmed with information. This was by far my favorite session of the day. The presenters did a fantastic job of explaining the general technology concepts while sprinkling in a few plugs for how Oracle is doing those things, and doing them better than current standards, with their new services. I cannot validate whether or not Oracle’s versions of vector search and vector databases are better than their competitors’, but this session was extremely informative and cleared up a lot of AI topics I had been unclear on.
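
To make the vector search concept a little more concrete, here is a tiny, purely illustrative Python sketch of the general idea the presenters described: documents are stored as embedding vectors, and a query is answered by ranking those vectors by similarity. The document names and numbers below are made up by me, and this is the generic technique, not Oracle’s implementation.

```python
# Toy similarity search: rank stored "document" embeddings against a query
# embedding using cosine similarity. Vectors here are made-up placeholders;
# real embeddings come from an embedding model and have hundreds of dimensions.
import numpy as np

documents = {
    "refund policy":  np.array([0.15, 0.90, 0.25]),
    "travel guide":   np.array([0.80, 0.10, 0.55]),
    "invoice how-to": np.array([0.12, 0.85, 0.31]),
}

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

query = np.array([0.10, 0.88, 0.30])  # embedding of the user's question

# Rank documents from most to least similar to the query
for name, vector in sorted(documents.items(),
                           key=lambda kv: cosine_similarity(query, kv[1]),
                           reverse=True):
    print(name, round(cosine_similarity(query, vector), 3))
```

A vector database does essentially this same nearest-neighbor ranking at scale, with specialized indexes so it doesn’t have to compare the query against every stored vector.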

Best Practices for Oracle Integration and OCI Process Automation

This was the second-best session I attended today. What I liked most about this presentation is that it was 100% what I expected based on the title and abstract. They had concise, to-the-point slides covering exactly what you need to know to make the best of your Oracle Integration and Oracle Cloud Infrastructure systems. They also followed the presentation structure recommended by communications classes: first a quick overview of what they were going to cover, then the information itself, then a quick summary of what they had covered. The presentation was well-organized, the presenters were funny, and the information was useful.

Keynote 2: Solving Industries’ Biggest Challenges with Applied AI

This was yet another keynote where a top leader at Oracle interviewed various customers about how they are using Oracle’s AI tools to solve their biggest problems. While hearing testimonials like these is nice, I really don’t need to hear as many of them as I’ve heard from the keynotes in the past two days. It was not the worst thing ever, but I feel like picking a different, conflicting session might have provided more useful information for me.

A Simple Python, Flask Application with Oracle REST APIs

This hands-on lab definitely did not go the way the presenters wanted it to, which seems to happen with a good portion of live demos. It seems like the system our lab sandboxes were built on wasn’t robust enough to handle the load we were putting on it, since there were over 20 people in the session. I think the content of what we were supposed to be learning was really interesting, but only one person in the class could get the system to work, so he became the unintended example for the rest of us to watch. Shoutout to that Norwegian man who had the luck of the draw in getting the web app to work.

What I realized as I was reviewing the lab code was that Flask and ORDS (Oracle REST Data Services) Python code is structured very similarly to Azure Functions code, which makes sense, since both frameworks let you work with REST APIs without having to do a lot of the more tedious work yourself. Thankfully, the leader of the workshop, who wrote all the code (it’s not his fault the sandboxes weren’t working, since it seemed to be a network or VM issue), has the entire code example saved in his public GitHub, so I can download it in the near future and play around with REST APIs in the Oracle ecosystem on my own.
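
Since I haven’t pulled down his repository yet, here is my own rough sketch (not the workshop code) of what a minimal Flask app calling an ORDS REST endpoint tends to look like; the ORDS URL, schema, and resource names are hypothetical placeholders. The decorator-based routing is the part that reminded me of Azure Functions.

```python
# Rough sketch of a small Flask app backed by an ORDS (Oracle REST Data
# Services) endpoint. The ORDS base URL and resource names are hypothetical.
import requests
from flask import Flask, jsonify

app = Flask(__name__)

# Hypothetical auto-REST endpoint exposing a table's rows as JSON
ORDS_EMPLOYEES_URL = "https://example-host/ords/hr_schema/employees/"

@app.route("/employees", methods=["GET"])
def list_employees():
    # ORDS typically wraps collection results in an "items" array
    response = requests.get(ORDS_EMPLOYEES_URL, timeout=10)
    response.raise_for_status()
    return jsonify(response.json().get("items", []))

if __name__ == "__main__":
    app.run(debug=True)
```

Running this and hitting http://localhost:5000/employees would simply relay whatever rows the ORDS endpoint returns.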

Continuous Integration and Delivery (CI/CD) for the Oracle Database Developer

This was one of my least favorite sessions of the day, simply because I was expecting a session that covered Oracle-specific CI/CD practices; instead, the content of the presentation was mostly generalized information about CI/CD, which I’m already familiar with. At some point after the conference, I am going to need to research the standard ways people do CI/CD in the Oracle environment to see if there’s anything we can or should use in our own Oracle systems.

One tidbit mentioned during the presentation that really brought me back in time about a year was hearing that the Oracle tool SQLcl has change management functionality built on Liquibase, which is an open-source database change management tool I did a proof-of-concept project on at my last job. If you are interested in learning more about what I learned while doing a trial with Liquibase, you can read my post about it.

Summary

Today was really difficult for me to get through because the day felt extremely long. I think about half the sessions I attended were useful to me, while the others were not. I have a whole list of random topics I’ve gathered from today’s presentations that I want to research further, so that is the bright side of today at the conference. Tomorrow is the last day of the conference, and only a half day, so I am going to try to make the most of it and learn as much from my last few sessions as possible.

Have you also been at Oracle Cloud World this year, or have you gone in previous years? If so, I would love to hear about your experiences in the comments below.

Oracle Cloud World 2024 - Day 1 In Review

Today was my first day at Oracle Cloud World in Las Vegas, and also my first time ever at an Oracle conference, since I only recently started doing database administration with this RDBMS. As with any technology conference, the day was jam-packed with many different sessions. Although I obviously cannot convey all the information I learned at my sessions throughout the day, I will try to summarize the interesting and key points I took away from each.


Keynote Session 1: “Customers Winning with the Cloud and AI”

I’m not going to lie, I went into this conference and this session not having a great opinion of Oracle, due to the multiple negative experiences I’ve had with their database platform in the past few months. However, this keynote was not a useless session, because I did learn that MGM Resorts owns a huge number of properties and hotels in Las Vegas, which is interesting from a big data perspective, and that the CIA was Oracle’s first customer. There are a lot of rumors online about how Oracle came to be and how it may or may not relate to a CIA project codename, but I couldn’t find any reputable sources, so we’ll just leave it at the CIA being the first and one of the largest customers of Oracle.

Besides those interesting tidbits, this keynote mainly contained somewhat dry interviews with different large customers of Oracle talking about how they’ve utilized Oracle products to revolutionize their businesses and how they’ve started to use AI in their technology journeys. This was the beginning of discussions surrounding “AI” that continued throughout most of the sessions today. (On that topic, I’m starting to feel like the term “AI” is being watered down or misused at this conference to represent things that really shouldn’t fall under that term…)

“Accelerate Your IAM Modernization to Multi-cloud Deployments”

This session was not what I expected it to be, and it wasn’t the only one where that happened today. However, even though the content of this session wasn’t what I thought it would be, I did learn a few interesting things. The presenters gave many startling facts about the costs associated with data breaches, as well as the causes of those breaches. The statistic I found most interesting was their claim that 60% of breaches resulted from poor patch management.

I was hoping that this presentation would cover more of the technical details of implementing and modernizing IAM within the Oracle ecosystem, but it proved instead to be a general overview of what everyone should be doing, which is what disappointed me about it. However, it at least gave me some topics, such as Oracle Access Manager, that I can research further on my own to learn more about IAM in Oracle. If I couldn’t get the technical details I wanted, I at least got some direction about what to research next.

“Create a Data Pipeline with Data Transforms in Autonomous Database Data Studio”

This presentation was more in the style of what I was expecting from most of the sessions I chose today, but it once again covered completely different information than I thought it would. The session surprised me because I was not aware that “Data Transforms” is an Oracle product; I thought it was being used as a general term in the title. If I had known that the presentation would be covering a specific service, I would have had a better understanding of what I was about to learn. The mismatch with my expectations did not make the presentation unenjoyable or uninformative, though.

What I learned from this session is that Oracle has two different ETL platforms available to move data around, similar to how Microsoft has SSIS for SQL Server (and other databases). Data Transforms was the service covered by this session, but they did mention Oracle Data Integrator (ODI), which is another, older ETL service. Data Transforms can move data between tons of different types of databases, not just Oracle, and it seemed to have a lot of interesting and easy-to-use ETL capabilities. It seems like they are trying to make this tool the data flow tool of the future, especially since they covered three different features that are about to be added, like vector search and query capabilities. Although I haven’t had the chance to use this tool myself, I want to temper my expectations for what it can accomplish due to my own personal experiences with other Oracle services. Maybe it’s as fantastic and as useful as they say it is, or maybe it’s just another sales pitch that is better than the real user experience. If you have personal experience, good or bad, with Data Transforms, I would love to hear about it in the comments below.

Keynote Session 2: “Oracle Vision and Strategy”

Of all the sessions today, I think this one had the most interesting pieces of information, although none of it was directly applicable to me or my company. The biggest downside of this keynote was that it went way over time, so I had to head out before the end of it. The two major topics of the keynote, presented by Larry Ellison, the founder and CTO of Oracle, were the integration of Oracle Cloud with the other major cloud providers, and various topics surrounding artificial intelligence and how Oracle wants to use it to fix all the problems of the application and database development world.

While I like the idea of them putting Oracle databases into the other cloud platforms (Google Cloud Platform, AWS, and Azure), because it gives me hope that maybe one day we could migrate our Oracle databases to a less fragile ecosystem, it did leave me wondering whether one day in the future there will just be one single mega-cloud system and monopoly originating from the combination of the current big four (but maybe I’ve just been reading too many dystopian novels lately).

I thought the second part of the keynote, surrounding current and potential uses of AI integration with other software systems, was more interesting and also a bit scary. Interesting in that automating mundane and error-prone processes makes our lives as database developers and administrators easier. There are so many things Mr. Ellison mentioned automating with AI that sounded great and useful to me. But it also scared me a bit, as it felt like there was an undertone of invasiveness being discussed under the guise of security. Security, on the technological and physical level, is important for individuals, groups, and even our whole country, but I personally believe security should not come at the cost of personal freedom and privacy. Some of the proposed and planned uses of AI, specifically how it relates to biometric authentication for every aspect of our lives, left me feeling a little uneasy (but once again, maybe it’s due to the large number of dystopian books I’ve read lately).

“Access Governance: The Key to Ensuring the Survival of Our Digital Lives”

I think this session was the best presentation of the day, as far as straight communication abilities go. The team of presenters was very well put together and knew their topics well without having to read off their slides at all, and I really appreciated that.

This session was once again about managing who can access what, with Identity and Access Management (IAM) as one of the core topics. The presentation was led by a member of the Oracle leadership team, who was accompanied by three Oracle customers, including a senior security engineer from Uber. Hearing from the different customers about their experience using IAM in general, not just the Oracle services, offered a great perspective on managing access to applications and databases, and gave me some ideas to take back to my own work. The main Oracle services covered were Oracle Access Governance and the Intelligent Access Dashboard, which I’ll need to research further on my own now.

“AI-Based Autoscaling with Avesha for Simplified OKE Management on OCI”

This was my last session of the day, and although I was tired and dreaming of my hotel room, I did find it to be another interesting presentation, though not super applicable to my work life. The title is quite a mouthful, but what it covered was how a small company called Avesha has created four different tools to help you autoscale and manage Kubernetes clusters in Oracle Cloud Infrastructure (OCI). All four tools seemed like they would be very useful for people who are working with Kubernetes in Oracle, since apparently the autoscaling in OCI doesn’t always work as well as people want it to (based on comments from the audience during the Q&A at the end of the presentation).

While I don’t think my company will be using any of Avesha’s tools anytime soon, they did seem like they could be extremely useful to other organizations. And the presenters definitely understood their own products, down to the fine details of how they work, which is always a green flag I appreciate with software vendors.

Summary

Wooh, that was a lot of information to recap and cover for a blog post! After attending these six sessions on this first day of Oracle Cloud World, I’m a little bit overwhelmed and exhausted, and not quite ready for another day and a half of more info dumps. But that’s okay, because it’s what’s to be expected from conferences like this. I am hoping that the sessions I picked for tomorrow are more applicable to my current role, but even if they aren’t, I’m sure I’ll learn more interesting things throughout the day.

Highlights of PASS Data Community Summit 2023

This year I was able to attend the PASS Data Community Summit for the first time, and it was also the first time I had ever attended a professional conference in my career. Not only did I learn more than I expected about new and existing databases and tools, but I was also able to meet many interesting new people I never would have been able to connect with otherwise. Each of the people I chatted with at various points throughout the conference had a unique story to share and was more than happy to introduce themselves, share their experience, and offer me help going forward if I needed it. Going in as someone who had never been to such an event before, I was relieved to find that I was fully welcomed and not made to feel like an outsider.

While I really enjoyed meeting so many new people from all around the world (I can’t believe how far some people traveled!), the best part of the conference was learning about many new and useful topics. These are the most interesting highlights I learned across the three days of the conference:

Red-Gate Test Data Manager

I think this was the most exciting thing I learned while at the conference, because it would be the most beneficial for my job and company. Currently, my organization has a homegrown tool that pulls down a subset of test data (from our QA environment) to each developer’s local database. While this tool is useful and has been mostly sufficient for us since it was created, it’s becoming harder to maintain and less useful to developers over time. The main issue we have with our custom tool is that it’s set to only pull down 500,000 records for any given table, with no regard (for the most part) for getting useful data for testing. Often, when trying to test a complicated or totally new process, the data subset either doesn’t contain enough records or the records aren’t what is needed for testing.

That is why the Test Data Manager (TDM) from Red-Gate is so interesting to me. It solves all of the problems we currently have with our custom tool, plus more we hadn’t considered yet. It allows you to create a clone from production data in less than a minute (my organization’s tool currently takes a long time to pull down all of the data, and it pulls from our QA environment, which isn’t as good a testing set). TDM also allows you to mask any Personally Identifiable Information (PII) while keeping that data relevant, related, and useful after it has been masked (our custom tool isn’t that smart). Additionally, you can select a premade subset of data for any given table (small or large) or specify your own custom subset (e.g., only pull data for 2022 customers), and the software will ensure the data stays related and meaningful as it creates the subset.
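
To illustrate the general subset-and-mask idea, here is a toy Python sketch of the concept (this is my own illustration, not how TDM actually works under the hood): pull only the 2022 customers and mask their PII consistently. The sample rows are invented.

```python
# Toy sketch of the subset-and-mask concept (not a TDM implementation):
# keep only 2022 customers and deterministically mask PII so the same value
# always masks to the same token, preserving relationships between rows.
import hashlib

customers = [
    {"id": 1, "name": "Alice Smith", "email": "alice@example.com", "signup_year": 2022},
    {"id": 2, "name": "Bob Jones",   "email": "bob@example.com",   "signup_year": 2021},
    {"id": 3, "name": "Cara Diaz",   "email": "cara@example.com",  "signup_year": 2022},
]

def mask(value: str) -> str:
    # Same input -> same masked output, so joins on masked columns still work
    return "masked_" + hashlib.sha256(value.encode()).hexdigest()[:8]

subset = [
    {**row, "name": mask(row["name"]), "email": mask(row["email"])}
    for row in customers
    if row["signup_year"] == 2022
]

print(subset)  # two masked rows, only the 2022 customers
```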

Unit testing for databases is possible

You might think this is obvious, but the fact that unit testing can now be added to database code is crazy and exciting to me. I figured it would be possible to write unit tests for stored procedures, since that’s just code, but there’s a whole world of possibilities for other ways to unit test databases using tools already created and perfected by others. The two main tools discussed in regard to database unit testing were tSQLt, an open-source testing framework, and Red-Gate’s SQL Test, which utilizes that framework while also giving additional capabilities to your team. While I haven’t had a chance to try either of these tools yet, I look forward to exploring the world of unit testing. I would love to have a way to automatically test database code changes, since currently all testing of database-related code changes, whether SQL or SSIS, must be done manually by someone else on my team (and testing resources have been scarce lately).
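
The tSQLt and SQL Test tests themselves are written in T-SQL, but as a rough, hypothetical sketch of the general idea in Python, a database unit test boils down to: arrange some known data, execute the procedure under test, and assert on the result, rolling everything back afterward. The connection string, table, and stored procedure below are placeholders I made up.

```python
# Hypothetical database unit test using pytest and pyodbc (a stand-in for the
# tSQLt/SQL Test approach discussed in the session). The table, procedure, and
# connection string are made-up placeholders.
import pyodbc
import pytest

CONN_STR = (
    "DRIVER={ODBC Driver 18 for SQL Server};"
    "SERVER=localhost;DATABASE=TestDb;Trusted_Connection=yes;"
)

@pytest.fixture
def db_cursor():
    conn = pyodbc.connect(CONN_STR)
    try:
        yield conn.cursor()
    finally:
        conn.rollback()  # undo test data so the database is left unchanged
        conn.close()

def test_calculate_order_total(db_cursor):
    # Arrange: insert a known order row
    db_cursor.execute("INSERT INTO dbo.Orders (OrderId, Amount) VALUES (1, 25.00)")
    # Act: run the (hypothetical) procedure under test, which returns a result set
    db_cursor.execute("EXEC dbo.usp_CalculateOrderTotal @OrderId = 1")
    total = db_cursor.fetchone()[0]
    # Assert: the total matches what we expect for the data we inserted
    assert float(total) == 25.00
```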

PostgreSQL (PGSQL) is similar to, but also really different from, Microsoft SQL Server (MSSQL)

Over the course of the Summit, I went to many different sessions related to PostgreSQL, some of them more helpful than others. What I learned is that there are multiple different tools in the AWS platform that could be useful to others to help in their transition from MSSQL to PGSQL, but those tools might not be as useful to my company. We are considering a full transition to PGSQL in the coming years for cost savings, which means that we will want to completely move our current application systems from one engine to the other.

One of the AWS services I learned about that didn’t seem like it would be as helpful to us is BabelFish, which is a service that allows you to create an instance of PGSQL in AWS and still use your existing TSQL queries to interact with the database. This service seems like it would be very helpful for anyone who is looking to migrate a single database instance, or maybe up to a few, to PGSQL without having to rewrite their entire codebase, since it lets you keep working with your same application code. BabelFish probably won’t be helpful to my team in our migration efforts since we do want to do the full switch, so there would be no need to spin up BabelFish instances that we would later have to migrate fully anyway (especially since we’re looking at moving dozens of databases).

The other AWS services that were discussed for migration from MSSQL to PGSQL, the Schema Conversion Tool (SCT) and Database Migration Service (DMS), seem like they would be closer to what my organization needs for our transition to Postgres. These tools allow you to convert most of your existing TSQL codebase to Postgres without much work on your part. However, AWS doesn’t claim that the converted code is going to be aligned with PGSQL best practices or will be performant, so that could be a challenge we’ll have to deal with later. The first half of the migration process with these tools is SCT, which converts tables, schemas, functions, procedures, etc. from TSQL to PL/pgSQL. Then the second half of the toolset, DMS, helps with moving the data from one server to another. If I were to guess how my organization is going to manage the transition from one engine to another, I would say that we’ll likely use these two tools to get us started, recognizing that they won’t get us 100% converted and we’ll have to do some manual conversions ourselves as well.

The last thing I learned about how Postgres differs from Microsoft SQL Server is that the indexing system is much more customizable and specific in PGSQL than in MSSQL, which means I will need to learn a lot about indexes in the former. MSSQL has only a few main index types you can use (clustered, nonclustered, and columnstore), while PGSQL has six built-in kinds (B-tree, hash, GiST, SP-GiST, GIN, and BRIN) that are designed for very specific use cases. While it may be a bit confusing for me, a career-long Microsoft database developer, to figure out the different types of Postgres indexes available when we start the transition, I’m going to focus on what an interesting learning opportunity it is. This will be the first large database engine change of my career, and learning a new system will make me a better developer and a more marketable job candidate in the future.

AI is being added to tools many of us use daily, to help us work more efficiently so we can do the more important and interesting work

This is one of my final takeaways from the conference, but I don’t think it will be quite as useful or relevant for my current role as the other things I listed above. A large portion of the sessions at the Summit this year was focused on how AI is going to start changing how we work, and on how we are going to need to adapt and learn to use it to benefit ourselves instead of being left in the technological dust by refusing to use it. AI certainly isn’t going to be stealing our jobs in the next couple of years, but it will change how we work, so we should prepare for that.

For myself, I plan on using AI tools currently available (mainly ChatGPT) to try to help me be more efficient in my role, probably with non-coding tasks such as writing emails and documentation. AI can help you work much more efficiently so you can spend most of your time on what is most important to you and your company instead of the mundane daily tasks that hog your time. That is the most exciting takeaway from the AI discussions I attended at PASS.

In the future, when other new AI tools, such as Red-Gate’s SQL Prompt+ (which was announced at the conference), are released, I also intend to play around with them to see how they might increase my work efficiency. The demo the Red-Gate team gave showed how SQL Prompt+ can be used to highlight a query in a stored procedure and ask the tool for recommendations on how to improve that query’s performance. While I’m sure the tool is going to make some bad suggestions that a human can easily tell won’t improve anything, I also think it may provide a good starting point for investigating underperforming queries that you may not have thought of yourself. That is why I come back to my point: AI isn’t going to take our coding jobs in the near future, but it is becoming a great resource for making your job easier.

The concept of AI in general is a bit frightening (as a lover of sci-fi and horror books and movies), but it’s not likely to take over the world anytime soon. However, it may trickle into your workplace in the next few years so it’s better to confront change head-on and figure out how you can use these new tools to your advantage.

Summary

I hope that in the future I’ll be able to attend more of the PASS Summit events, since this year’s conference proved to be such a fun and relatively easy way to learn a lot of new information relevant to my career. Whether you’re newer in your career like me or an old hand in the trade, I really would recommend attending the next PASS Data Community Summit if you’re able to. You never know what you’ll learn that could improve your work life and make you a better developer.