Tuesday, June 8, 2010

Backing Up and Recovering Voting Disks

What is a voting disk and why is it needed? The voting disk records node membership information. A node must be able to access more than half of the voting disks at any time.
For example, if we have seven voting disks configured, then a node must be able to access at least four of the voting disks at any time. If a node cannot access the minimum required number of voting disks it is evicted (NODE EVICTION), or removed, from the cluster.
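The "more than half" rule is simple arithmetic; the following is an illustrative shell sketch (min_votes is a made-up helper, not an Oracle utility):

```shell
#!/bin/sh
# Minimum number of voting disks a node must be able to access:
# strictly more than half of n, i.e. floor(n/2) + 1.
min_votes() {
    echo $(( $1 / 2 + 1 ))
}

min_votes 7   # 7 configured disks -> must access at least 4
min_votes 3   # 3 configured disks -> must access at least 2
```

This is also why an odd number of voting disks is normally configured: adding a second disk to a single-disk setup raises the requirement without adding any tolerance for failure.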
Backing Up Voting Disks
When to backup voting disk ?
1)After installation

2)After adding nodes to or deleting nodes from the cluster

3)After performing voting disk add or delete operations

To make a backup copy of the voting disk, use the Linux 'dd' command. Perform this operation on every voting disk as needed, where voting_disk_name is the name of the active voting disk and backup_file_name is the name of the file to which we want to back up the voting disk contents:
dd if=voting_disk_name of=backup_file_name
If our voting disk is stored on a raw device, use the device name in place of voting_disk_name. For example:

dd if=/dev/sdd1 of=/tmp/voting.dmp
Note: When we use the dd command to back up the voting disk, the backup can be performed while the Cluster Ready Services (CRS) process is active; we do not need to stop the crsd.bin process before taking a backup of the voting disk.
Recovering Voting Disks
If a voting disk is damaged or crashes and is no longer usable by Oracle Clusterware, we can recover the voting disk if we have a backup file.
dd if=backup_file_name of=voting_disk_name
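The backup and restore cycle above can be rehearsed safely on an ordinary scratch file before touching a real device; all paths below are made up for the demonstration:

```shell
#!/bin/sh
# Rehearse the dd backup/restore cycle on a scratch file.
# /tmp/fake_voting_disk stands in for the real voting disk device.
printf 'cluster membership data' > /tmp/fake_voting_disk

# Backup: same shape as  dd if=voting_disk_name of=backup_file_name
dd if=/tmp/fake_voting_disk of=/tmp/voting.dmp 2>/dev/null

# Simulate damage, then recover from the backup.
printf 'garbage' > /tmp/fake_voting_disk
dd if=/tmp/voting.dmp of=/tmp/fake_voting_disk 2>/dev/null

# The restored copy must match the backup byte for byte.
cmp /tmp/fake_voting_disk /tmp/voting.dmp && echo "restore verified"
```

Against a real raw device the same two dd commands apply, with the device name in place of the scratch file.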

Friday, April 30, 2010

Basic Hints On RAC Architecture For Oracle 9i


Nodes
Single-CPU nodes are known as uniprocessors. In uniform memory access (UMA) configurations, each CPU has equal access to all of the memory. These systems are known as Symmetric Multi-Processor (SMP) systems. In nonuniform memory access (NUMA) systems, each CPU has a pool of local memory it uses by preference, but can also access the memory assigned to other CPUs, if needed.
Interconnect
The cluster interconnect provides a physical link between the nodes of the Cluster.
Best results are achieved using high speed interconnects such as:
-> Gigabit Ethernet
-> Virtual Interface Architecture (VIA)
-> Vendor-specific, certified proprietary interconnects.
Cluster Software
Real Application Clusters software uses all the components of a single-instance Oracle environment plus cluster software that facilitates internode communication. The operating system cluster software contains Operating System-Dependent (OSD) clusterware components.
Cluster Manager
A layer of software, known as the Cluster Manager, provides a cluster-wide view of the nodes. The Cluster Manager regulates cluster membership and manages messages sent between nodes. The Node Monitor, a component of the Cluster Manager, monitors the status of the nodes (whether they are active members of the cluster).
Real Application Clusters interacts with the Cluster Manager for its own cluster information and requirements.
Interprocess Communication
The interprocess communication (IPC) software is the second key subcomponent of the operating system-dependent (OSD) clusterware. The IPC layer controls messaging functions so the nodes can communicate with each other through the interconnect.
Real Application Clusters enlists the IPC layer to transfer messages and data blocks between instances on different nodes.
Disk Access and Shared Files
All files are shared except the redo logs, which are segregated per instance using the THREAD parameter. On some platforms, a quorum disk (also known as a voting disk on some systems) is used by the Node Monitor to manage the cluster configuration. On Windows the voting disk and configuration data share the same disk resource.
If you employ a server parameter file (spfile), this must also be shared by the instances.
Global Service Daemon
The Global Services Daemon (GSD) performs manageability tasks, such as instance startup and shutdown, for clients such as the Server Control Utility (SRVCTL), the Database Configuration Assistant (DBCA), and Oracle Enterprise Manager. You must start the GSD on all the nodes in your Real Application Clusters database so that the manageability features and these tools operate properly. However, you only need one GSD process on each node regardless of how many Real Application Clusters database installations you have.
As an example, assume you execute a SRVCTL command to stop all active instances in your Real Application Clusters database. The GSD receives your request from SRVCTL and executes the command locally on each node. Finally, the GSD returns the results to your SRVCTL session.
Background Processes
The additional processes available along with the regular background processes are as follows:
LMON : The Global Enqueue Service Monitor.
LMD : The Global Enqueue Service Daemon, typically appearing with the process name ora_lmd0_instance on a UNIX system.
LMSn : The Global Cache Service processes, where n can range from zero to nine (LMS0 to LMS9).
LCK : The lock process, typically appearing with the process name ora_lck0_instance on a UNIX system.
DIAG : The Diagnosability Daemon.


Global Enqueue Service Monitor

An enqueue is a memory structure that serializes access to database resources.
The Global Enqueue Service Monitor (LMON) is responsible for monitoring the entire cluster to manage global enqueues and resources. This process ensures that the LMD processes and the storage they require for global enqueue information are functioning correctly.

Global Enqueue Service Daemon
The current status of each global enqueue is maintained in a shared memory area of one of the active instances. The status indicates which instances, if any, have the right to use that resource.
Global Cache Service Processes
The Global Cache Service Processes (LMSn) manage requests for data access across the cluster. They ensure that images of the same block can appear in the buffer caches of two different instances only if the block contents are valid for each instance.
LMSn processes coordinate block access by sending messages from an instance requesting access to a specific block to the instance holding an image of that block, if there is one. An LMSn process on the holding instance can:
-> Build a read-consistent block image and ship it to the buffer cache of the requesting instance, or
-> Forward the current block image to the requesting instance.

LCK Process

The LCK process manages instance resource requests and cross-instance call operations. These calls are associated with coordinating access to dictionary and row cache objects.
Diagnosability Daemon (DIAG)
The Diagnosability Daemon captures diagnostic information related to process and instance failures. This information can be used by Oracle Worldwide Support to help analyze and resolve problems with your database and instances.
The DIAG process writes its diagnostic information to files in a subdirectory of the directory specified by the initialization parameter BACKGROUND_DUMP_DEST. The subdirectories are named cdmp_timestamp, where timestamp identifies when the subdirectory, and its trace information, was written. The following is an example taken from a Solaris system.

$ ls cdmp*
cdmp_20011218004549:
pln1_arco_21550.trw pln1_lmon_21525.trw ...
pln1_lmd0_21530.trw pln1_p001_21565.trw

The command was issued from the BACKGROUND_DUMP_DEST directory and shows DIAG output created at 45 minutes and 49 seconds after midnight on December 18, 2001.
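Since the suffix is just YYYYMMDDHHMISS, the timestamp can be decoded with standard shell tools; a small illustrative sketch:

```shell
#!/bin/sh
# Decode the timestamp portion of a DIAG dump directory name.
dir=cdmp_20011218004549
ts=${dir#cdmp_}                    # strip the prefix -> 20011218004549

year=$(echo "$ts"   | cut -c1-4)
month=$(echo "$ts"  | cut -c5-6)
day=$(echo "$ts"    | cut -c7-8)
hour=$(echo "$ts"   | cut -c9-10)
minute=$(echo "$ts" | cut -c11-12)
second=$(echo "$ts" | cut -c13-14)

echo "$year-$month-$day $hour:$minute:$second"   # 2001-12-18 00:45:49
```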

The Diag process starts automatically, is not tunable, and should not be disabled or removed. It can be automatically restarted by other background processes if this becomes necessary.


Wednesday, April 21, 2010

Oracle Cluster Registry (OCR) in an Oracle Cluster environment.

The Oracle Cluster Registry (OCR) is one of the required components in an Oracle Cluster environment. The easiest way to think of the OCR is as the Windows registry of the cluster: it contains all the information about the cluster environment, such as node names, IP addresses, and application resources like the listener, VIP and GSD, but also the databases/instances. Parameters such as startup settings and dependencies are stored in the OCR as well. The OCR is created during the CRS installation when the root.sh script is executed. When root.sh runs, it reads the ocr.loc file, which is created during installation and points to the OCR file/device. To make sure all the nodes in the cluster can read the OCR, the OCR location must be on shared storage.

The location of the ocr.loc file depends on the platform used.

Linux: /etc/oracle/ocr.loc

Aix: /etc/oracle/ocr.loc

Solaris : /var/opt/oracle/ocr.loc

Windows : HKEY_LOCAL_MACHINE\SOFTWARE\Oracle\OCR

If we look in the ocr.loc file, we see the following:

ocrconfig_loc=/dev/raw/ocr1

ocrmirrorconfig_loc=/dev/raw/ocr2

local_only=FALSE

A value of local_only=TRUE indicates a single-instance (non-RAC) configuration, while FALSE means the registry is used by RAC.

The ocr.loc file is where the CRS stack checks for the OCR during startup. When the OCR is found, it is read for the voting disk location and the other cluster information. If for some reason the ocr.loc file or the location it points to is not available, the cluster will not start. From Oracle 10g Release 2 onwards, it is possible to define more OCR locations (mirroring).
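The ocr.loc entries can be inspected with standard shell tools; this sketch works against a throwaway copy of the file (the /tmp path and the device values are made up for the demonstration):

```shell
#!/bin/sh
# Create a scratch copy of an ocr.loc with entries like the ones shown above.
cat > /tmp/ocr.loc <<'EOF'
ocrconfig_loc=/dev/raw/ocr1
ocrmirrorconfig_loc=/dev/raw/ocr2
local_only=FALSE
EOF

# Extract each setting the way the CRS stack would interpret it.
ocr_loc=$(grep '^ocrconfig_loc=' /tmp/ocr.loc | cut -d= -f2)
local_only=$(grep '^local_only=' /tmp/ocr.loc | cut -d= -f2)

echo "OCR device : $ocr_loc"
echo "RAC mode   : $([ "$local_only" = "FALSE" ] && echo yes || echo no)"
```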

Clients of the OCR are srvctl, css, crs, dbua, vipca and em.

Tools which can be used:

ocrconfig - configuration tool for the Oracle Cluster Registry.

ocrdump - utility to dump the contents of the OCR to a file.

ocrcheck - utility to verify the OCR integrity.

Thursday, April 1, 2010

Oracle performance tuning

When SQL statements are fired at the Oracle database, a component called the optimizer determines the most efficient execution path for the statement after considering many factors related to the objects referenced and the conditions specified in the query. In the old days a rule-based optimizer was used, but since Oracle 8i a cost-based optimizer is used most of the time.
For the Cost Based Optimizer to work efficiently, Oracle needs to have the right statistics. There is a separate document that describes how to create statistics for Oracle tables/columns.

Tracing SQL
To trace SQL statements you can also use TKPROF. Since Oracle 10g, ADDM can be used to look at executed SQL statements. In SQL*Plus, statements can be traced by using the command 'set autotrace on;'.

Tuning SQL
If SQL statements are not performing, you can do many things about it. Some solutions can be:
• Add a hint to the statement to influence the optimizer.
• Add indexes.
• Rewrite the SQL.
Oracle 10g automated performance tuning features
Oracle 10g provides some automated features:

Automatic Workload Repository (AWR)
The Automatic Workload Repository (AWR) collects, processes, and maintains performance statistics for problem detection and self-tuning purposes. This data is kept both in memory and stored in the database, and can be displayed in both reports and views. The statistics collected and processed by AWR include:
• Object statistics that determine both access and usage statistics of database segments
• Time model statistics based on time usage for activities, displayed in the V$SYS_TIME_MODEL and V$SESS_TIME_MODEL views
• Some of the system and session statistics collected in the V$SYSSTAT and V$SESSTAT views
• SQL statements that are producing the highest load on the system, based on criteria such as elapsed time and CPU time
• Active Session History (ASH) statistics, representing the history of recent session activity
AWR automatically generates snapshots of the performance data once every hour and collects the statistics in the workload repository. You can also manually create snapshots, but this is usually not necessary. The data in the snapshot interval is then analyzed by the Automatic Database Diagnostic Monitor (ADDM).
Automatic Database Diagnostic Monitor
Automatic Database Diagnostic Monitor (ADDM) analyzes the information collected by the AWR for possible performance problems with the Oracle database. An ADDM analysis is performed every time an AWR snapshot is taken and the results are saved in the database. You can view the results of the analysis using Oracle Enterprise Manager or by viewing a report in a SQL*Plus session. In most cases, ADDM output should be the first place a DBA looks when notified of a performance problem. Automatic database diagnostic monitoring is enabled by default and is controlled by the STATISTICS_LEVEL initialization parameter, which should be set to TYPICAL or ALL; the default setting is TYPICAL. Setting STATISTICS_LEVEL to BASIC disables many Oracle features, including ADDM, and is strongly discouraged. The primary interface for diagnostic monitoring is Oracle Enterprise Manager Database Control. On the Database Home page, ADDM findings for the last analysis period are displayed under Diagnostic Summary. ADDM can also be invoked in SQL*Plus by running the $ORACLE_HOME/rdbms/admin/addmrpt.sql script.

SQL Tuning Advisor
SQL Tuning Advisor provides a quick and efficient technique for optimizing SQL statements without modifying them manually. Automatic SQL tuning capabilities are exposed through a server utility called the SQL Tuning Advisor. It takes one or more SQL statements as input and invokes the Automatic Tuning Optimizer to perform SQL tuning on the statements. The output of the SQL Tuning Advisor is a set of recommendations, along with a rationale for each recommendation and its expected benefit. A recommendation may relate to collection of statistics on objects, creation of new indexes, restructuring of the SQL statement, or creation of a SQL Profile.
A user can choose to accept the recommendations to complete the tuning of the SQL statements. The recommended interface for the SQL Tuning Advisor is Oracle Enterprise Manager. The advisor may be used to tune a single SQL statement or multiple statements. When tuning multiple SQL statements, Oracle Enterprise Manager will automatically create a SQL Tuning Set (STS) from a user-defined set of SQL statements. An STS is a database object that stores SQL statements along with their execution context. While the recommended interface is Oracle Enterprise Manager Database Control, the advisor can also be administered with procedures in the DBMS_SQLTUNE package. To use the APIs, the user must be granted specific privileges. Running the SQL Tuning Advisor using the DBMS_SQLTUNE package is a multi-step process:
1. Create a SQL Tuning Set (if tuning multiple SQL statements)
2. Create a SQL tuning task
3. Execute the SQL tuning task
4. Display the results of the SQL tuning task
5. Implement recommendations as appropriate
Sample code:

-- Tuning task created for a specific statement from the AWR.
DECLARE
  l_sql_tune_task_id VARCHAR2(100);
BEGIN
  l_sql_tune_task_id := DBMS_SQLTUNE.create_tuning_task (
    begin_snap  => 743,
    end_snap    => 804,
    sql_id      => '2udx7yrn9gcf2',
    scope       => DBMS_SQLTUNE.scope_comprehensive,
    time_limit  => 60,
    task_name   => 'Edwin_AWR_tuning_task',
    description => 'Test tuning task for select distinct.');
  DBMS_OUTPUT.put_line('l_sql_tune_task_id: ' || l_sql_tune_task_id);
END;
/
EXEC DBMS_SQLTUNE.execute_tuning_task(task_name => 'Edwin_AWR_tuning_task');
SET LONG 10000
SET PAGESIZE 1000
SET LINESIZE 200
SELECT DBMS_SQLTUNE.report_tuning_task('Edwin_AWR_tuning_task') AS recommendations FROM dual;
SET PAGESIZE 24

Create statistics

Since Oracle 8i the Cost Based Optimizer (CBO) has been the preferred optimizer for Oracle. In order to make good use of the CBO, you need to create statistics for the data in the database. There are several options to create statistics.
Analyze command
The ANALYZE command is available for all versions of Oracle; however, to obtain faster and better statistics use the supplied procedures: in 7.3.4 and 8.0, DBMS_UTILITY.ANALYZE_SCHEMA, and in 8i and above, DBMS_STATS.GATHER_SCHEMA_STATS. The ANALYZE command can be used to create statistics for one table, index or cluster.
Syntax:
ANALYZE table tableName {compute | estimate | delete} statistics options
ANALYZE index indexName {compute | estimate | delete} statistics options
ANALYZE cluster clusterName {compute | estimate | delete} statistics options
Code examples:
ANALYZE table scott compute statistics;
ANALYZE table scott estimate statistics sample 25 percent;
ANALYZE table scott estimate statistics sample 1000 rows;
analyze index sc_idx compute statistics;
analyze index sc_idx validate structure;
DBMS_UTILITY.ANALYZE_SCHEMA
With DBMS_UTILITY.ANALYZE_SCHEMA you can gather all the statistics for all the tables, clusters and indexes of a schema.
Code examples:
exec DBMS_UTILITY.ANALYZE_SCHEMA('SCOTT','COMPUTE');
exec DBMS_UTILITY.ANALYZE_SCHEMA('SCOTT','ESTIMATE', estimate_rows => 1000);
exec DBMS_UTILITY.ANALYZE_SCHEMA('SCOTT','ESTIMATE', estimate_percent => 25);
exec DBMS_UTILITY.ANALYZE_SCHEMA('SCOTT','DELETE');
Note: It's also possible to analyze the whole database with the DBMS_UTILITY.ANALYZE_DATABASE('COMPUTE'); command.
DBMS_STATS.GATHER_SCHEMA_STATS
From Oracle 8i the DBMS_STATS package is the preferred method. Oracle lists a number of benefits to using it, including parallel execution, long-term storage of statistics and transfer of statistics between servers. Once again, it follows a similar format to the other methods:
Syntax:
exec DBMS_STATS.GATHER_SCHEMA_STATS(ownname, estimate_percent, block_sample, method_opt, degree, granularity, cascade, stattab, statid, options, statown, no_invalidate, gather_temp, gather_fixed);
Code examples:
exec DBMS_STATS.GATHER_SCHEMA_STATS('SCOTT', DBMS_STATS.AUTO_SAMPLE_SIZE);
exec DBMS_STATS.GATHER_SCHEMA_STATS(ownname => 'SCOTT', estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE);
EXEC DBMS_STATS.gather_schema_stats(ownname => 'SCOTT', estimate_percent => 25);
EXEC DBMS_STATS.gather_table_stats('SCOTT', 'EMPLOYEES');
EXEC DBMS_STATS.gather_index_stats('SCOTT', 'EMPLOYEES_PK');
exec DBMS_STATS.DELETE_SCHEMA_STATS('SCOTT');
Note: It's also possible to gather statistics for the whole database with the DBMS_STATS.gather_database_stats; command.
Transferring statistics between databases
It can be very handy to use production statistics on your development database, so that you can forecast the optimizer behavior. You can do this the following way:
1. Create the statistics table.
exec DBMS_STATS.CREATE_STAT_TABLE(ownname => 'SCHEMA_NAME', stat_tab => 'STATS_TABLE', tblspace => 'STATS_TABLESPACE');
Example:
exec DBMS_STATS.CREATE_STAT_TABLE(ownname => 'SYSTEM', stat_tab => 'STATS_TABLE');
2. Export statistics to the statistics table.
EXEC DBMS_STATS.EXPORT_SCHEMA_STATS('ORIGINAL_SCHEMA', 'STATS_TABLE', NULL, 'SYSTEM');
3. Import statistics into the data dictionary.
exec DBMS_STATS.IMPORT_SCHEMA_STATS('NEW_SCHEMA', 'STATS_TABLE', NULL, 'SYSTEM');
4. Drop the statistics table.
exec DBMS_STATS.DROP_STAT_TABLE('SYSTEM', 'STATS_TABLE');

Opatch Utility: The OPatch utility is located in the $ORACLE_HOME/OPatch directory. You can run it with various commands and options. The following shows the syntax for the OPatch utility:

$ORACLE_HOME/OPatch/opatch command [-options]
OPatch Commands

---------------------------

napply: Installs n number of patches (hence napply). Refer to "napply Command" for more information.

auto: Applies Oracle Clusterware patches. Refer to "auto Command" for more information.

lsinventory: Lists what is currently installed on the system. Refer to "lsinventory Command" for more information.

query: Queries a given patch for specific details. Refer to "query Command" for more information.

rollback: Removes an interim patch. Refer to "rollback Command" for more information.

nrollback: Removes n number of patches (hence nrollback). Refer to "nrollback Command" for more information.

version: Prints the current version of the patch tool. Refer to "version Command" for more information.