
Tips to tune your Oracle database


Rahul Sah


We often think that hardware limitations, such as too little RAM or a slow CPU, are the main culprits behind poor database performance. So we end up upgrading the hardware, which is not always the best solution. There is a lot we can do apart from that.

Here are some techniques that DBAs can use to enhance the performance of a sluggish database. These include optimizing a few initialization parameters and tuning the database with dedicated database tuning tools.




Optimizing the Cache Size

The Oracle database maintains a buffer cache as part of the System Global Area (SGA) in memory for each database instance (identified by its SID). This cache stores the data and index blocks that are currently in use. In the init.ora file under the Oracle installation directory there are two parameters, DB_CACHE_SIZE and DB_BLOCK_BUFFERS, that determine the size of the buffer cache. We can adjust these parameters to optimize the Cache Hit Ratio (CHR), which measures how many blocks were read from memory against how many had to be read from disk.
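Before touching init.ora, it helps to see what the cache is currently set to. The SQL*Plus commands below are a quick sketch; which of the two parameters actually appears depends on your Oracle version.

-- From SQL*Plus: display the current buffer cache settings
SHOW PARAMETER db_cache_size
SHOW PARAMETER db_block_buffers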

Reads from memory are called logical reads and reads from disk are called physical reads. A logical read occurs whenever a user requests data that is already in the buffer cache; if the data must be fetched from disk, a physical read occurs. Oracle keeps a record of both logical and physical reads in the v$sysstat table. The ratio can be calculated with the following query.




select pr.value PR, dbg.value DB_BLOCK, cg.value CONS_GETS,
       (1 - (pr.value / (dbg.value + cg.value))) * 100 HIT_RATIO
from   v$sysstat pr, v$sysstat dbg, v$sysstat cg
where  pr.name = 'physical reads'
and    dbg.name = 'db block gets'
and    cg.name = 'consistent gets';

Logical reads include 'db block gets', for blocks currently being used in memory, and 'consistent gets', for read-consistent copies of blocks reconstructed from rollback segments while other sessions are updating them. The ratio is more representative once the database has been running for several hours with normal application and user transactions taking place. If it falls below 90%, the DB_CACHE_SIZE and DB_BLOCK_BUFFERS parameters can be increased to improve performance. It is good practice to try out several values for the extra buffers so as to arrive at an optimal setting for the database workload.
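For example, the buffer cache could be enlarged by editing init.ora along the lines sketched below. The sizes are purely illustrative, not recommendations: DB_CACHE_SIZE (specified in bytes) is used on newer releases, while DB_BLOCK_BUFFERS (a count of blocks) applies to older ones, and the two should not be set together.

# Illustrative init.ora fragment -- example values only, tune for your workload
db_cache_size = 256M          # buffer cache of 256 MB on newer releases
# db_block_buffers = 32768    # older releases: number of blocks (x DB_BLOCK_SIZE bytes)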

 


Tuning the Redo Log Buffer

The Oracle database maintains a Redo Log that records the changes made to the datafiles. These changes are first written into the Redo Log Buffer in memory until they are committed; keeping them in memory reduces disk I/O operations. Before a transaction can write its changes to the Redo Log, it must acquire the Redo Allocation Latch, which controls the allocation of space for redo entries in the Redo Log Buffer. There is only one such latch, to prevent simultaneous writes to the log. If the size of a redo entry is greater than the value of LOG_SMALL_ENTRY_MAX_SIZE, the transaction instead uses a Redo Copy Latch to copy the entry into the buffer. If no latch is available, the result is an "immediate miss". Oracle records the number of gets and misses of these latches in the v$latch table. The following SQL code gives the hit ratio of the Redo Copy Latch.



select name latch_name, immediate_gets, immediate_misses,
       round(decode(immediate_gets - immediate_misses, 0, 1,
                    immediate_gets - immediate_misses)
             / decode(immediate_gets, 0, 1, immediate_gets), 3) * 100 hit_ratio
from   v$latch
where  name = 'redo copy';


If this ratio consistently falls below 99%, you should lower the value of the LOG_SMALL_ENTRY_MAX_SIZE parameter in the init.ora file under the Oracle home directory. More redo entries will then be larger than this threshold, so the Redo Copy Latches will be used more often. Alternatively, the number of Redo Copy Latches can be increased, up to twice the number of CPUs, by changing the value of LOG_SIMULTANEOUS_COPIES.
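As a rough sketch, the corresponding init.ora entries might look like the ones below. The numbers are illustrative only (they assume a two-CPU machine), and these two parameters exist only on older Oracle releases, so check that your version still supports them before relying on this.

# Illustrative init.ora fragment -- example values, assuming 2 CPUs
log_small_entry_max_size = 80    # bytes; larger redo entries use a Redo Copy Latch
log_simultaneous_copies = 4      # number of Redo Copy Latches, up to 2 x CPU count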

 



Sizing up the Shared Pool


The Shared Pool is another part of the Oracle SGA in memory. It holds the Library Cache, the area of memory that Oracle uses to store SQL statements. When a process issues a SQL statement, the text of the statement goes into the Library Cache, where it is parsed and validated.

The Shared Pool also holds the Data Dictionary Cache, which stores the definitions of tables, views, and other dictionary objects. A few indicator ratios can be used to decide when to increase the size of the shared pool, which is set by the SHARED_POOL_SIZE parameter. One such ratio is the Library Cache Hit Ratio, which shows how many SQL statements found an already parsed, shareable cursor in the shared pool. Oracle keeps track of all library cache activity in the v$librarycache table, and the ratio can be calculated with the following query.



select namespace, pinhits / pins * 100 "Hit Ratio"
from   v$librarycache;


The average hit ratio across the various namespaces should be at least 85%. If it is lower, the SHARED_POOL_SIZE initialization parameter needs to be increased. Oracle exposes all of its initialization parameters and their current values through the v$parameter view, which is a convenient place to check a parameter before changing it in init.ora or with ALTER SYSTEM.
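As a sketch, the current setting can be read from v$parameter and then raised. The 200 MB figure below is purely illustrative, and the ALTER SYSTEM form assumes a release that uses an spfile (9i or later); on older versions, edit SHARED_POOL_SIZE in init.ora instead and restart.

-- Check the current shared pool size (value is reported in bytes)
select name, value from v$parameter where name = 'shared_pool_size';

-- Raise it (illustrative size; takes effect after the next restart)
alter system set shared_pool_size = 200M scope = spfile;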

The gets and misses of individual latches give you an idea of the 'immediate misses', which indicate whether you need to increase the buffer size

 

DB tuning tools

Some freeware tools are available that help in monitoring the performance of Oracle. One such tool, DB Tuning Expert, which comes with the DB Tools Package for Oracle, can be of great use. It runs a database performance test and a diagnostic test, and then presents a comprehensive report of existing performance problems along with suggestions for solving them. Through such tools, database performance can be monitored in real time and the necessary parameter tweaks can be made in advance to keep performance optimal. We have included this tool on this month's DVD.

DB Tuning Expert performs an analysis (left) of the database and reports (right) all the shortcomings that need to be overcome to enhance the Oracle performance

Note that after altering initialization parameter values, the database needs to be restarted for the changes to be reflected in its performance.
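For reference, a restart from SQL*Plus, connected AS SYSDBA, looks like this:

-- Restart the instance so that the new parameter values take effect
SHUTDOWN IMMEDIATE
STARTUP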

Applies To: Oracle DBAs

USP: Performance enhancement of the Oracle database

Google Keywords: Database tuning

On PCQ Extreme DVD: System/Labs/dbtools5.exe
