orasetup

orasetup is one of the first utilities I wrote when I started working with Oracle databases. oraenv, in my opinion, just doesn’t have the flexibility to accommodate all of the different environment variables for the various Oracle Database utilities. You can download orasetup from GitHub here.

orasetup depends on entries in the oratab file, just like oraenv. However, it can also read comments with specific keywords to accommodate other utilities. To be honest, nowadays I mainly use it for connecting to databases, but it can do a lot more. One of the nice things about orasetup is the ability to set the $ORACLE_SID or $TWO_TASK environment variable based on the oratab entry. This can be especially useful in a CDB environment. CDB databases usually operate with the environment variable $ORACLE_SID set, but PDB databases will use $TWO_TASK. This is simple to control in the oratab file using the third field: Y for $ORACLE_SID and N for $TWO_TASK.

Since the N flag is designated for databases that should not be started by dbstart at boot time, this works out nicely for also using it to set the TWO_TASK variable, and it doesn’t interfere with any Oracle Database utilities.

The following is an example oratab file using Oracle Database 23c Free:

#
# This file is used by ORACLE utilities.  It is created by root.sh
# and updated by either Database Configuration Assistant while creating
# a database or ASM Configuration Assistant while creating ASM instance.
#
# A colon, ':', is used as the field terminator.  A new line terminates
# the entry.  Lines beginning with a pound sign, '#', are comments.
#
# Entries are of the form:
#   $ORACLE_SID:$ORACLE_HOME:<N|Y>:
#
# The first and second fields are the system identifier and home
# directory of the database respectively.  The third field indicates
# to the dbstart utility that the database should , "Y", or should not,
# "N", be brought up at system boot time.
#
# Multiple entries with the same $ORACLE_SID are not allowed.
#
#
FREE:/opt/oracle/product/23c/dbhomeFree:Y
FREEPDB1:/opt/oracle/product/23c/dbhomeFree:N

The CDB name is FREE and the PDB name is FREEPDB1. With these oratab entries, orasetup will set the $ORACLE_SID for FREE and $TWO_TASK for FREEPDB1. The following are some examples:

[oracle@localhost ~]$ . orasetup menu
 
Alias: FREE          /opt/oracle/product/23c/dbhomeFree
Alias: FREEPDB1      /opt/oracle/product/23c/dbhomeFree

[oracle@localhost ~]$ . orasetup free
[oracle@localhost ~]$ echo $ORACLE_SID
FREE
[oracle@localhost ~]$ echo $ORACLE_HOME
/opt/oracle/product/23c/dbhomeFree
[oracle@localhost ~]$ 
[oracle@localhost ~]$ . orasetup freepdb1
[oracle@localhost ~]$ echo $ORACLE_SID

[oracle@localhost ~]$ echo $TWO_TASK
FREEPDB1
[oracle@localhost ~]$ echo $ORACLE_HOME
/opt/oracle/product/23c/dbhomeFree
[oracle@localhost ~]$ 
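The selection logic shown above can be sketched in shell. This is a simplified, case-sensitive illustration; the oratab_connect helper is hypothetical, not the actual orasetup code, which does quite a bit more:

```shell
# Sketch of how an oratab entry's third field can drive the choice
# between ORACLE_SID and TWO_TASK. Hypothetical helper, not the
# actual orasetup implementation.
oratab_connect() {
  alias_wanted=$1
  oratab=${2:-/etc/oratab}
  # Read SID:HOME:FLAG entries, skipping comments and blank lines
  while IFS=: read -r sid home flag; do
    case $sid in \#*|'') continue ;; esac
    if [ "$sid" = "$alias_wanted" ]; then
      export ORACLE_HOME=$home
      if [ "$flag" = "Y" ]; then
        # CDB (or non-CDB): connect locally via ORACLE_SID
        export ORACLE_SID=$sid; unset TWO_TASK
      else
        # PDB: connect through the listener via TWO_TASK
        export TWO_TASK=$sid; unset ORACLE_SID
      fi
      return 0
    fi
  done < "$oratab"
  echo "Oracle database not found in the oratab file" >&2
  return 1
}
```

With the oratab entries shown earlier, `oratab_connect FREE` exports $ORACLE_SID while `oratab_connect FREEPDB1` exports $TWO_TASK, mirroring the session output above.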

A couple more things to note. The menu option will list the entries in the oratab file. This can be handy when you are not sure of the entries and don’t want to list the oratab file itself. There are also help screens:

[oracle@localhost ~]$ . orasetup
Oracle database not found in the oratab file
 
Usage:  . orasetup { <SID> | <utility> [alias] 
                     unset | opatch | ssh 
                     help | ? | version| menu } 
 
 
Note: The command MUST be prefaced with a ".". If it is not then
      no permanent change can be made to the user's environment.
 
[oracle@localhost ~]$ . orasetup help
 
+-+-+  orasetup, Ver 2.3b, 08/10/2017
 
 
Usage:  . orasetup { <SID> | <utility> [alias] 
                     unset | opatch | ssh 
                     help | ? | version| menu } 
 
 
Note: The command MUST be prefaced with a ".". If it is not then
      no permanent change can be made to the user's environment.
 
Options: 
         SID - Database SID based on first label in the oratab file 
         <utility> [ alias ] - recognized utility and optional alias 
           as defined by #<utility>: format in the oratab file 
         unset    - Unset all variables set by orasetup 
         opatch   - Add $ORACLE_HOME/OPatch to the PATH 
         ssh      - Runs ssh.env in $HOME/.ssh (default) to set 
           ssh equivalence 
         help | ? - Display this help message 
         version  - Display orasetup version 
         menu     - Display database and known utilities defined in 
           the oratab file 
 
[oracle@localhost ~]$ 

Finally, the following environment variables are set or unset by orasetup:

ORACLE_BASE
ORACLE_HOME
ORACLE_SID or TWO_TASK
PATH
LD_LIBRARY_PATH or SHLIB_PATH
NLS_LANG (optional)
ORA_NLS or ORA_NLS32 or ORA_NLS33
ORACLE_TERM
EPC_DISABLED

orasetup will always unset all Oracle-related environment variables first. It does this to try to ensure a “clean” environment before setting any new values. To do this, orasetup first cycles through all of the oratab entries and searches for any occurrences that match the ORACLE_HOME or ORACLE_SID/TWO_TASK settings.
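The clean-up step can be pictured as a single function that unsets the whole list before anything new is set. This is a sketch of the idea only, not the actual orasetup code; note that PATH is edited rather than unset, since unsetting it would break the shell:

```shell
# Sketch of the "clean environment" step: unset every Oracle-related
# variable before setting anything new. PATH is deliberately left
# alone here; orasetup edits it rather than unsetting it.
# SHLIB_PATH and the ORA_NLS* variants are platform-dependent.
ora_unset() {
  unset ORACLE_BASE ORACLE_HOME ORACLE_SID TWO_TASK \
        LD_LIBRARY_PATH SHLIB_PATH NLS_LANG \
        ORA_NLS ORA_NLS32 ORA_NLS33 ORACLE_TERM EPC_DISABLED
}
```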

That is really all there is to it. I will post a full guide on GitHub that I wrote quite a long time ago describing the other things that orasetup has been used for, but this should be enough information to get you started if you decide to give it a try.

Posted in Oracle Database | Comments Off on orasetup

SQL Monitor Active Reports

In Oracle Database, SQL Monitor is an easy-to-use tool for getting details about SQL statement execution. We use it all the time for Database In-Memory queries. I even wrote a technical brief that explains how to create and use SQL Monitor active reports. Maria Colgan also did a nice post about Getting the most out of Oracle SQL Monitor, and I have done a couple of Ask TOM Office Hours sessions on SQL Monitor, the latest being about the new updates that have been added.

However, many people still don’t realize that there are different report types with SQL Monitor, and they are not all equal. I will go so far as to say that the only SQL Monitor report you should ever use is the “active” report. The biggest reason for this is that only the “active” report will identify the “Activity”, or where, and more importantly how, time was spent in the execution plan steps. There is also an “Activity” section in the other report types (i.e., TEXT and HTML), but it is not populated. It is a long-standing bug that does not appear likely to ever get fixed.

With Database In-Memory, a SQL Monitor active report will differentiate CPU time spent “in-memory” and other CPU time. How this is done will be detailed in a subsequent blog post, but suffice it to say that this and other information is captured in Active Session History (ASH) in a very lightweight manner. The bottom line is that only the active report will display this information.
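For reference, an active report can be generated with DBMS_SQLTUNE.REPORT_SQL_MONITOR by passing type => 'ACTIVE'. The helper below simply emits a SQL*Plus script that spools the report for a given SQL_ID; it is my own sketch, and the gen_active_report name and spool file name are assumptions:

```shell
# Emit a SQL*Plus script that spools a SQL Monitor *active* report
# for one SQL_ID. Hypothetical helper; run the output through
# SQL*Plus against your database, e.g.
#   gen_active_report 8mqn43zubw368 | sqlplus -s / as sysdba
gen_active_report() {
  sql_id=$1
  cat <<EOF
set trimspool on long 1000000 longchunksize 1000000
set linesize 32767 pagesize 0 feedback off echo off
spool sqlmon_${sql_id}.html
select dbms_sqltune.report_sql_monitor(
         sql_id       => '${sql_id}',
         type         => 'ACTIVE',
         report_level => 'ALL')
from dual;
spool off
EOF
}
```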

Here is an example of a SQL Monitor active report with Database In-Memory CPU time shown. Notice that the color of the activity bar is different and by hovering the mouse pointer over the bar we see that this time represents “Cpu: in memory”, how many samples were observed and the overall percentage of total time.

SQL Monitor active reports do have their limitations. I pointed this out in the Ask TOM session. Since SQL Monitor is based on ASH data things like parse time and non-idle wait events are not directly attributable in a SQL Monitor active report. Session statistics are also not available. This doesn’t mean that it is not a useful tool. It just means it is one of several tools that you may need to use to properly diagnose where time is spent in a SQL execution.

Here is an example:

SQL Monitor Time and Wait section

Notice in this snippet of the Time & Wait section that the query Duration, or elapsed time, is much longer than the Database Time. Unfortunately there is no easy way in a SQL Monitor active report to determine where the additional time was spent. In this example, the additional time was spent in an “idle” wait event. An extended SQL trace file for this execution shows us a more complete time picture:

trace file time

Notice that most of the additional Duration time was spent in the SQL*Net message from client wait event.

I will admit we’ve taken a slight detour, but this was really just to show that a SQL Monitor active report does have limitations, mainly imposed by its source data, which is ASH data. The bottom line, though, is that in most cases a SQL Monitor active report is easy to generate and easy to use to visualize the SQL execution plan and where execution time was spent. And in the case of Database In-Memory, it is the only easy way to differentiate CPU time spent “in-memory” versus other CPU time.

Posted in Oracle Database, Performance Tuning | Tagged , , , | Comments Off on SQL Monitor Active Reports

Database In-Memory Summit 2023

Join me and the Database In-Memory team for the Database In-Memory Summit 2023. It is free and virtual, and there will be two sessions so people around the world can attend.

You can register here: https://asktom.oracle.com/pls/apex/asktom.search?oh=13621

Session 1: Tuesday June 13th, 8:00 am – 10:00 am PDT

Session 2: Tuesday June 13th, 8:00 pm – 10:00 pm PDT

Agenda:
• Keynote presentation
• Use Cases and Roadmap of Database In-Memory
• What’s new in Database In-Memory 21c and 23c
• Best Practices
• Customer presentations

I will be covering what’s new in 23c and there are some big features that should make managing Database In-Memory much simpler. Please register at the link above and plan on attending.

Posted in Database In-Memory | Comments Off on Database In-Memory Summit 2023

CloudWorld 2022 Is Over

I just wanted to thank everyone who attended the Database In-Memory Hands On Lab, sessions, and demo area. It was a pleasure speaking with you and being able to meet you in person. The Steve Miller concert on Wednesday night was a personal highlight, and even though it ran very late I still had a great turnout on Thursday morning for my Best Practices session.

The Hands On Lab was brand new and I think it went well. It is now based on Oracle Database 21c and includes most of the topics from the previous lab, plus I have added a JSON lab and a lab on Automatic In-Memory. The new version should be published soon on LiveLabs if you want to check it out, and then I will be expanding it with many of the Database In-Memory features that have been added since Oracle Database Release 12.2. Look for an Ask TOM Office Hours session for an official kick-off.

Posted in Database In-Memory | Comments Off on CloudWorld 2022 Is Over

New Database In-Memory Videos

I’ve posted some new videos on the Database In-Memory YouTube channel.

The first is a two-part video that talks about what Database In-Memory is and describes some of the key underlying technology. Part 1 describes the use cases for Database In-Memory (it is not a one-size-fits-all option) and gives a quick overview of how to get started. In Part 2 I focus on the underlying technology that allows Database In-Memory to be faster by an order of magnitude or more for running analytic workloads.

I’ve also posted the talk I did at the INSYNC 2021 Conference describing the new 21c Database In-Memory features.

Posted in Database In-Memory | Comments Off on New Database In-Memory Videos

Database In-Memory Star Schema Benchmark Now Available

I’ve promised this for a long time and I’ve finally delivered! All of you who have attended my Open World Hands On Lab sessions or my In-Memory By Example presentations can now easily run the lab on your own laptop. I’ve finally found a place that can handle the schema export. It’s rather large, so it’s broken up into a total of 21 compressed Data Pump export files. Hopefully the instructions that I’ve created will enable you to import it. I would recommend using the Developer Days VirtualBox VM and then importing the schema into an SSB schema. The files are available on GitHub here: https://github.com/AndyRivenes/DBIM.

The original Hands On Lab and scripts from OOW 2017 are available here on our DBIM blog “Resources” page. Just scroll down to the Technical Briefs section.

If you want more information about the Star Schema Benchmark then you can take a look at the original paper here: https://www.cs.umb.edu/~poneil/StarSchemaB.PDF

If you want to build your own SSB schema, perhaps to make it larger, a version of the dbgen utility is available here: https://github.com/electrum/ssb-dbgen

Another nice “how to” I found if you roll your own is available here: https://jorgebarbablog.wordpress.com/2016/03/21/how-to-load-the-ssb-schema-into-an-oracle-database/

Jim Czuprynski has also done some work with Dom Giles’ Swingbench to show how well Database In-Memory works. His presentation is here.

Posted in Database In-Memory | Comments Off on Database In-Memory Star Schema Benchmark Now Available

What happened to AppsDBA?

Since I joined Oracle, the appsdba.com website has been on hiatus, and now it has been forwarded here to AndyRivenes.com. The still-relevant information will be made available along with new blogging and tweeting at @AndyRivenes. The code has also been resurrected and moved from SourceForge and appsdba.com to GitHub at https://github.com/AndyRivenes.

 

Posted in Uncategorized | Comments Off on What happened to AppsDBA?

DOAG and GLOC Are Almost Here!

It’s hard to believe that it’s already May, and that means that the DOAG and GLOC conferences are just around the corner. I will be presenting Database In-Memory By Example at DOAG on Tuesday, May 15, and then Database In-Memory – Real-Time Analytics on Thursday, May 17. This is going to be an exciting trip and I’m really looking forward to it. If you’re going to be at either of the conferences, please consider stopping by to see the presentations, and if you see me wandering the hallways don’t hesitate to say hello.

Posted in Uncategorized | Comments Off on DOAG and GLOC Are Almost Here!

Oracle Logging Framework (OLF)

The Oracle Logging Framework (OLF) is an Oracle PL/SQL based logging and debugging utility that supports “levels” similar to Log4j. The key feature of the OLF is the ability to dynamically set a logging level based on almost any combination of module, action, username, sid, and instance_id. The OLF is integrated with the Instrumentation Library for Oracle (ILO) to provide task timing and extended SQL tracing, and since the OLF requires the setting of module and action to be effective, the ILO is the best-suited utility to provide that functionality.

The OLF has been moved to GitHub at https://github.com/AndyRivenes/OLF

The OLF provides the ability to support the following instrumentation categories:

  • Debugging
  • Logging
  • Runtime registration
  • Metric collection
  • Logging Levels

The Oracle Logging Framework (OLF) follows the basic Log4j logging levels and values:

  • FATAL – 50000
  • ERROR – 40000
  • WARN – 30000
  • INFO – 20000
  • DEBUG – 10000

Log4j also includes an ALL and an OFF level, and the Oracle Logging Framework (OLF) includes these levels as well as a TIMED level that is used to ensure that task timing is always logged. The levels are ordered so that ALL < DEBUG < INFO < WARN < ERROR < FATAL < TIMED < OFF. A default level of FATAL is assigned in the OLF code; it can be overridden in the dynamic configuration or set explicitly.
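That ordering is what makes the threshold check simple: a message is logged only when its level is at or above the configured level. The sketch below illustrates the rule; the numeric values for ALL, TIMED, and OFF are my own placeholders chosen to preserve the ordering, and this is not the OLF source:

```shell
# Log4j-style numeric levels in the ordering used by the OLF.
# TIMED sits above FATAL so task timing survives any threshold
# short of OFF. The ALL/TIMED/OFF values are illustrative placeholders.
level_value() {
  case $1 in
    ALL)   echo 0     ;;
    DEBUG) echo 10000 ;;
    INFO)  echo 20000 ;;
    WARN)  echo 30000 ;;
    ERROR) echo 40000 ;;
    FATAL) echo 50000 ;;
    TIMED) echo 60000 ;;
    OFF)   echo 70000 ;;
    *)     echo 0     ;;
  esac
}

# should_log MESSAGE_LEVEL CONFIGURED_LEVEL -> success if it would be logged
should_log() {
  [ "$(level_value "$1")" -ge "$(level_value "$2")" ]
}
```

With the default level of FATAL, an INFO message is suppressed while FATAL and TIMED messages are logged.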

Dynamic Control

One of the biggest features of the OLF is the ability to dynamically set log levels. Whether it’s for a single user, program, or task, you don’t want to have to stop everything, reset a configuration file, and then restart the application. You really want to be able to set the logging level on the fly. With the OLF you can do this through the dblog_config table.

Simple Example

A very simple example of a logged PL/SQL block is the following:

declare
  l_num number;
begin
  ilo_task.begin_task(
    module => 'module',
    action => 'action');
  --
  dblog.info('Before statement');
  --
  select 1 into l_num from dual;
  --
  dblog.info('After statement');
  --
  ilo_task.end_task;
exception
  when others then
    dblog.error('An error occurred');
    --
    ilo_task.end_all_tasks;
    --
    RAISE;
end;

Posted in Oracle Database | Comments Off on Oracle Logging Framework (OLF)

New Database In-Memory Features in Oracle Database 18c

Oracle Database 18c is now available and you can find more details in this blog post by Dom Giles. Of course, with Oracle Database 18c comes new features for Oracle Database In-Memory as well. In Oracle Database 12c Release 2 (12.2) Oracle added features to help manage the IM column store with Automatic Data Optimization, expand the column store with Active Data Guard and Exadata Flash Cache, and increase performance with In-Memory Expressions and Join Groups. With Oracle Database 18c Oracle has expanded on this innovation and has added features that now help to automatically manage the objects in the IM column store, improve performance even further and expand the reach of Database In-Memory by making it easier to run analytic queries on non-Oracle data. I’m excited about this release and I will be blogging about many of the new Database In-Memory features as well as other Oracle Database 18c features, but here’s a teaser to get you started thinking about Oracle Database 18c.

Automatic In-Memory

Oracle introduced Automatic Data Optimization (ADO) in Oracle Database 12.1 with Heat Map and segment based policy management. In Oracle Database 12.2 ADO was extended to support Database In-Memory and that allowed policy management of objects in the IM column store. In Oracle Database 18c Oracle added a new feature called Automatic In-Memory. This expands on the management of the IM column store by allowing you to automatically manage the contents of the IM column store based on Heat Map data. This is another step on the journey to making the IM column store self-managing. When Automatic In-Memory is enabled, if the size of the objects that have been enabled for in-memory exceeds the size of the IM column store, and the population of a new object won’t fit, the least active object(s) will be evicted to make enough room to populate the new object. All of this can happen automatically without needing any administrative intervention.

In-Memory Dynamic Scans

Building on what I call the high-performance options of 12.2, In-Memory Expressions and Join Groups, Oracle added a really innovative feature called In-Memory Dynamic Scans (IMDS). This is like supercharging Database In-Memory performance. With IMDS, when additional CPU is available Database In-Memory will automatically parallelize scans of data in the IM column store. It does this by using multiple lightweight threads of execution within a process. This allows Database In-Memory to further increase scan performance by utilizing idle CPU resources. All of this is controlled by the Resource Manager so you don’t need to be worried about impacting the performance of your other database tasks.

In-Memory Optimized Arithmetic

Another performance innovation in Oracle Database 18c is the ability to create In-Memory Optimized Arithmetic encoded numbers. Similar to what In-Memory Expressions did in 12.2 with the storing of pre-computed expressions, In-Memory Optimized Arithmetic builds on the ability to pre-optimize NUMBER format data types in the IM column store. This enables faster arithmetic calculations using SIMD hardware and can significantly improve performance by performing arithmetic operations natively right in the hardware.

In-Memory External Tables

In-Memory External Tables builds on the theme of expanding analytic queries to all data, not just Oracle native data. Oracle Database already supports accessing external data with features like External Tables and Big Data SQL to allow fast and secure SQL queries on all types of data. In-Memory External Tables allow essentially any type of data to be populated into the IM column store. This means non-native Oracle data can be analyzed with any data in Oracle Database using Oracle SQL and its rich feature set, and get the benefit of using all of the performance enhancing features of Database In-Memory.

Columnar Format Exadata Flash Cache Enhancements

Oracle Database 18c now supports accessing non-HCC objects in the Exadata Flash Cache in full Database In-Memory columnar format. In 12.2 this was restricted to just HCC objects. I blogged about this new ability here since it is dependent on Exadata storage server software as well. The bottom line is that with Exadata it is possible to easily have a multi-terabyte column store and perform analytic queries on huge sets of data.

Wrap Up

Oracle has also made a lot of other miscellaneous improvements to make Database In-Memory even better at scanning and filtering data enabling you to run your analytic queries even faster than before. Oracle Database 18c continues Oracle’s commitment to adding features and improving the performance of Database In-Memory. On the Database In-Memory blog I will be blogging in detail about the new Database In-Memory features and I will be exploring other features here as well.

If you want to get started exploring what’s new in Database In-Memory in Oracle Database 18c you can get started by taking a look at the new documentation here.

Posted in Database In-Memory | Comments Off on New Database In-Memory Features in Oracle Database 18c