DOAG and GLOC Are Almost Here!

It's hard to believe that it's already May, and that means that the DOAG and GLOC conferences are just around the corner. I will be presenting Database In-Memory By Example at DOAG on Tuesday, May 15, and then Database In-Memory - Real-Time Analytics on Thursday, May 17. This is going to be an exciting trip and I'm really looking forward to it. If you're going to be at either of the conferences, please consider stopping by and seeing the presentations, and if you see me wandering the hallways, don't hesitate to say hello.

Oracle Logging Framework (OLF)

The Oracle Logging Framework (OLF) is an Oracle PL/SQL-based logging and debugging utility that supports logging "levels" similar to Log4j. The key feature of the OLF is the ability to dynamically set a logging level based on almost any combination of module, action, username, sid, and instance_id. The OLF is integrated with the Instrumentation Library for Oracle (ILO) to provide task timing and extended SQL tracing. Since the OLF requires module and action to be set in order to be effective, the ILO is the best-suited utility to provide that functionality.

The OLF has been moved to GitHub at https://github.com/AndyRivenes/OLF

The OLF supports the following instrumentation categories:

  • Debugging
  • Logging
  • Runtime registration
  • Metric collection
  • Logging levels

The Oracle Logging Framework (OLF) follows the basic Log4j logging levels and values:

  • FATAL - 50000
  • ERROR - 40000
  • WARN - 30000
  • INFO - 20000
  • DEBUG - 10000

Log4j also includes an ALL and an OFF level, and the Oracle Logging Framework (OLF) includes these levels as well as a TIMED level that is used to ensure that task timing is always logged. The levels are ordered so that ALL < DEBUG < INFO < WARN < ERROR < FATAL < TIMED < OFF. A default level of FATAL is assigned in the OLF code; it can be overridden in the dynamic configuration or set explicitly.

Dynamic Control

One of the biggest features of the OLF is the ability to dynamically set log levels. Whether it's for a single user, program, or task, you don't want to have to stop everything, edit a configuration file, and then restart the application. You want to be able to change the logging level on the fly. With the OLF you can do this through the dblog_config table.
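As a sketch of how that dynamic configuration might look, the following inserts a row into dblog_config to raise the logging level for a single module and action. The column names here are illustrative assumptions, not the actual OLF table definition, so check the dblog_config table in your OLF installation for the real columns:

```sql
-- Hypothetical example: enable DEBUG-level logging for one module/action.
-- Column names are assumptions for illustration; consult the dblog_config
-- table definition in your OLF installation for the actual columns.
insert into dblog_config (module, action, username, log_level)
values ('module', 'action', 'SCOTT', 10000);  -- 10000 = DEBUG

commit;
```

Because the configuration lives in a table, the change takes effect without restarting the application.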

Simple Example

A very simple example of a logged PL/SQL block is the following:

declare
  l_num number;
begin
  -- Register module and action so dynamic log level settings can match
  ilo_task.begin_task(
    module => 'module',
    action => 'action');
  --
  dblog.info('Before statement');
  --
  select 1 into l_num from dual;
  --
  dblog.info('After statement');
  --
  ilo_task.end_task;
exception
  when others then
    dblog.error('An error occurred');
    --
    -- Close any open tasks before re-raising the error
    ilo_task.end_all_tasks;
    --
    raise;
end;
/

New Database In-Memory Features in Oracle Database 18c

Oracle Database 18c is now available, and you can find more details in this blog post by Dom Giles. Of course, Oracle Database 18c brings new features for Oracle Database In-Memory as well. In Oracle Database 12c Release 2 (12.2), Oracle added features to help manage the IM column store with Automatic Data Optimization, to expand the column store with Active Data Guard and Exadata Flash Cache, and to increase performance with In-Memory Expressions and Join Groups. Oracle Database 18c expands on this innovation with features that help automatically manage the objects in the IM column store, improve performance even further, and extend the reach of Database In-Memory by making it easier to run analytic queries on non-Oracle data. I'm excited about this release, and I will be blogging about many of the new Database In-Memory features as well as other Oracle Database 18c features, but here's a teaser to get you started thinking about Oracle Database 18c.

Automatic In-Memory

Oracle introduced Automatic Data Optimization (ADO) in Oracle Database 12.1 with Heat Map and segment-based policy management. In Oracle Database 12.2, ADO was extended to support Database In-Memory, which allowed policy management of objects in the IM column store. Oracle Database 18c adds a new feature called Automatic In-Memory, which manages the contents of the IM column store automatically based on Heat Map data. This is another step on the journey to making the IM column store self-managing. When Automatic In-Memory is enabled, if the size of the objects that have been enabled for in-memory exceeds the size of the IM column store, and the population of a new object won't fit, the least active object(s) will be evicted to make enough room to populate the new object. All of this happens automatically, without any administrative intervention.
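Automatic In-Memory is switched on with a single initialization parameter. A minimal sketch, assuming the 18c INMEMORY_AUTOMATIC_LEVEL parameter and an IM column store that is already configured:

```sql
-- Enable Automatic In-Memory. Requires an IM column store (a non-zero
-- INMEMORY_SIZE). In 18c the supported levels are LOW and MEDIUM;
-- OFF (the default) disables the feature.
alter system set inmemory_automatic_level = 'MEDIUM' scope = both;
```

With the level set, eviction and population decisions are driven by Heat Map data rather than by manual ALTER TABLE ... NO INMEMORY commands.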

In-Memory Dynamic Scans

Building on what I call the high-performance options of 12.2, In-Memory Expressions and Join Groups, Oracle added a really innovative feature called In-Memory Dynamic Scans (IMDS). This is like supercharging Database In-Memory performance. With IMDS, when additional CPU is available, Database In-Memory automatically parallelizes scans of data in the IM column store. It does this by using multiple lightweight threads of execution within a process. This allows Database In-Memory to further increase scan performance by utilizing idle CPU resources. All of this is controlled by the Resource Manager, so you don't need to worry about impacting the performance of your other database workloads.
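Because IMDS decisions are made by the Resource Manager, the one prerequisite is that a resource plan is active. A minimal sketch, assuming no plan is already in place and using the built-in default plan:

```sql
-- IMDS is governed by the Database Resource Manager, so a resource
-- plan must be active. The built-in default plan is sufficient; no
-- IMDS-specific configuration is required beyond this.
alter system set resource_manager_plan = 'DEFAULT_PLAN' scope = both;
```

Once a plan is active, the lightweight scan threads are started and throttled automatically based on available CPU.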

In-Memory Optimized Arithmetic

Another performance innovation in Oracle Database 18c is the ability to create In-Memory Optimized Arithmetic encoded numbers. Similar to the way In-Memory Expressions in 12.2 store pre-computed expressions, In-Memory Optimized Arithmetic stores NUMBER columns in an additional optimized encoding in the IM column store. This enables faster arithmetic calculations using SIMD hardware and can significantly improve performance by performing arithmetic operations natively in the hardware.
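Like Automatic In-Memory, this feature is enabled with an initialization parameter. A minimal sketch, assuming the 18c INMEMORY_OPTIMIZED_ARITHMETIC parameter:

```sql
-- Enable the optimized number encoding for in-memory NUMBER columns.
-- Valid values are ENABLE and DISABLE (the default). Existing in-memory
-- segments pick up the new encoding as they are repopulated.
alter system set inmemory_optimized_arithmetic = 'ENABLE' scope = both;
```

The optimized encoding is stored in addition to the standard in-memory format, so it trades some column store space for faster SIMD arithmetic.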

In-Memory External Tables

In-Memory External Tables builds on the theme of expanding analytic queries to all data, not just Oracle-native data. Oracle Database already supports accessing external data with features like External Tables and Big Data SQL to allow fast and secure SQL queries on all types of data. In-Memory External Tables allow essentially any type of data to be populated into the IM column store. This means non-native Oracle data can be analyzed together with any data in Oracle Database using Oracle SQL and its rich feature set, while benefiting from all of the performance-enhancing features of Database In-Memory.
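As a hedged sketch of how this fits together, an external table can be marked INMEMORY much like a heap table, and then populated explicitly. The table, directory, and file names below are illustrative assumptions, not objects from the source:

```sql
-- Illustrative only: table, directory, file, and column names are
-- assumptions. Mark an external table INMEMORY like any other table.
create table sales_ext (
  sale_id  number,
  amount   number
)
organization external (
  type oracle_loader
  default directory data_dir
  location ('sales.csv')
)
reject limit unlimited
inmemory;

-- External tables are populated on demand, for example with the
-- DBMS_INMEMORY package:
exec DBMS_INMEMORY.POPULATE(USER, 'SALES_EXT')
```

Once populated, the flat-file data is scanned in columnar format just like native in-memory table data.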

Columnar Format Exadata Flash Cache Enhancements

Oracle Database 18c now supports accessing non-HCC objects in the Exadata Flash cache in full Database In-Memory columnar format. In 12.2 this was restricted to just HCC objects. I blogged about this new ability here since it is dependent on Exadata storage server software as well. The bottom line is that with Exadata it is possible to easily have a multi-terabyte column store and perform analytic queries on huge sets of data.

Wrap Up

Oracle has also made a lot of other improvements to make Database In-Memory even better at scanning and filtering data, enabling you to run your analytic queries even faster than before. Oracle Database 18c continues Oracle's commitment to adding features and improving the performance of Database In-Memory. On the Database In-Memory blog I will be covering the new Database In-Memory features in detail, and I will be exploring other features here as well.

If you want to explore what's new in Database In-Memory in Oracle Database 18c, a good place to start is the new documentation here.