Thursday, December 4, 2008

Redo Logs: 2

Multiplexing Redo Logs:

Oracle Database lets you multiplex the redo log files of an instance to safeguard against damage to any single file. When redo log files are multiplexed, LGWR concurrently writes the same redo log information to multiple identical redo log files, thereby eliminating a single point of redo log failure.

Multiplexed redo log files are called groups. Each redo log file in a group is called a member. Redo01 and redo01a are both members of Group 1, redo02 and redo02a are both members of Group 2, and so forth. Each member in a group must be exactly the same size.

Whenever LGWR cannot write to a member of a group, the database marks that member as INVALID and writes an error message to the LGWR trace file and to the database alert log to indicate the problem with the inaccessible file.

Multiplexed redo log should be symmetrical: all groups of the redo log should have the same number of members. However, the database does not require that a multiplexed redo log be symmetrical. For example, one group can have only one member, and other groups can have two members. This configuration protects against disk failures that temporarily affect some redo log members but leave others intact.
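As a sketch (the file names and disk locations are hypothetical), a multiplexed group with two members on separate disks can be added like this:

```sql
-- Hypothetical paths: two members of the same group on different disks,
-- so a single disk failure cannot destroy the whole group.
ALTER DATABASE ADD LOGFILE GROUP 3
  ('/disk1/oradata/test/redo03a.log',
   '/disk2/oradata/test/redo03b.log') SIZE 500K;
```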

Creating Redo Log Groups:

To create a new redo log group, we can issue a statement like this (with Oracle Managed Files the file names can be omitted; otherwise member file names must be specified):

Alter database add logfile size 500k;
Or
Alter database add logfile group 1 size 500k;

Creating Redo Log Members:

In some cases, it might not be necessary to create a complete group of redo log files. A group could already exist, but not be complete because one or more members of the group were dropped (for example, because of a disk failure). In this case, you can add new members to an existing group.

To create redo log members, we can issue a statement like this:

Alter database add logfile member '<filename>' to group 1;

The file name must be specified, but the size need not be: the size is determined by the existing members of the group.
Renaming Redo Log:

This should be done while the database is in the MOUNT state: move the file using an operating system command such as mv, update the control file with ALTER DATABASE RENAME FILE, and then open the database for normal operations.

mv $ORACLE_HOME/oradata/test/redo01.log $ORACLE_HOME/oradata/test/redo02.log
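The full sequence, with hypothetical paths, might look like this (the mv can also be issued from the OS shell instead of HOST):

```sql
-- Sketch only: the database must be mounted, not open.
SHUTDOWN IMMEDIATE
STARTUP MOUNT
HOST mv /u01/oradata/test/redo01.log /u02/oradata/test/redo01.log
ALTER DATABASE RENAME FILE '/u01/oradata/test/redo01.log'
  TO '/u02/oradata/test/redo01.log';
ALTER DATABASE OPEN;
```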

Dropping Redo Groups:

An instance requires at least two groups of redo log files, regardless of the number of members in the groups. (A group comprises one or more members.)
You can drop a redo log group only if it is inactive. If you need to drop the current group, first force a log switch to occur.
Make sure a redo log group is archived (if archiving is enabled) before dropping it.

Drop a group like this…

Alter database drop logfile group 1;
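Before dropping, it is worth checking the group's status in v$log; a sketch:

```sql
-- Confirm the group is INACTIVE (and archived, if archiving is enabled).
SELECT group#, status, archived FROM v$log;

-- If group 1 is still CURRENT or ACTIVE, force a log switch first:
ALTER SYSTEM SWITCH LOGFILE;

ALTER DATABASE DROP LOGFILE GROUP 1;
```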

Dropping Redo Log Member:

To drop a redo log member, you must have the ALTER DATABASE system privilege.

You can drop a redo log member only if it is not part of an active or current group. If you want to drop a member of an active group, first force a log switch to occur.

Drop a member like this…

Alter database drop logfile member '<filename>';

When a redo log member is dropped from the database, the operating system file is not deleted from disk. Rather, the control files of the associated database are updated to drop the member from the database structure. After dropping a redo log file, make sure that the drop completed successfully, and then use the appropriate operating system command to delete the dropped redo log file.

Viewing Redo Log Information:

v$log --- Displays the redo log file information from the control file
v$logfile --- Identifies redo log groups and members and member status
v$log_history --- Contains log history information
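For example, the groups and their members can be listed together by joining v$log and v$logfile:

```sql
-- One row per member, with the status and sequence number of its group.
SELECT l.group#, l.sequence#, l.bytes/1024 AS size_kb, l.status, f.member
FROM   v$log l JOIN v$logfile f ON l.group# = f.group#
ORDER  BY l.group#;
```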

Redo Logs

Redo Logs:

Redo logs are most important when you are recovering your database. Every database instance has an associated redo log to protect the database in case of instance failure.

Content of Redo Log:

Redo log files are filled with redo records. A redo record, also called a redo entry, is made up of a group of change vectors, each of which is a description of a change made to a single block in the database. For example, if you change a salary value in an employee table, you generate a redo record containing change vectors that describe changes to the data segment block for the table, the undo segment data block, and the transaction table of the undo segments.

Redo entries record data that you can use to reconstruct all changes made to the database, including the undo segments. Therefore, the redo log also protects rollback data. When you recover the database using redo data, the database reads the change vectors in the redo records and applies the changes to the relevant blocks.

Redo records are buffered in a circular fashion in the redo log buffer of the SGA and are written to one of the redo log files by the Log Writer (LGWR) database background process. Whenever a transaction is committed, LGWR writes the transaction redo records from the redo log buffer of the SGA to a redo log file, and assigns a system change number (SCN) to identify the redo records for each committed transaction.

Active and Inactive Redo:

Oracle Database uses only one redo log file at a time to store redo records written from the redo log buffer. The redo log file that LGWR is actively writing to is called the current redo log file.

Redo log files that are required for instance recovery are called active redo log files. Redo log files that are no longer required for instance recovery are called inactive redo log files.

Log Switch and Sequence Number:

A log switch is the point at which the database stops writing to one redo log file and begins writing to another. Normally, a log switch occurs when the current redo log file is completely filled and writing must continue to the next redo log file. But you can configure log switches to occur at regular intervals, regardless of whether the current redo log file is completely filled. You can also force log switches manually.

Oracle Database assigns each redo log file a new log sequence number every time a log switch occurs and LGWR begins writing to it.
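A manual switch, and a time-based switch interval (the 1800-second value is only an example), look like this:

```sql
-- Force a log switch right now:
ALTER SYSTEM SWITCH LOGFILE;

-- Switch the current log at least every 30 minutes (value in seconds):
ALTER SYSTEM SET archive_lag_target = 1800;
```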

Monitoring Alert Log File

  • It is important for a DBA to check the alert log file regularly to detect problems before they become serious.
  • The alert log records internal errors such as ORA-600 and block corruption errors such as ORA-1578 or ORA-1498.
  • The location of the alert log file is given by the BACKGROUND_DUMP_DEST parameter (for example, $ORACLE_HOME/admin/TEST/bdump).
  • The alert log file consists of
    a) checkpoint start and end times
    Note: the values are written into alert log file only if LOG_CHECKPOINTS_TO_ALERT parameter is TRUE.
    b) Incomplete checkpoints
    c) Time to perform archiving
    d) Instance recovery start and complete times
    e) Deadlock and timeout errors
  • Along with the alert log file in bdump, other trace files created by the background processes help us diagnose and troubleshoot problems.
  • Tracing can be enabled at Instance Level and at Session Level.
  • Instance Level: This can be enabled or disabled with the SQL_TRACE parameter. The default value is FALSE.
  • Session Level: Following statement enables writing into a trace file
    Execute dbms_system.set_sql_trace_in_session (10, 22, TRUE);
    Where 10 and 22 are the SID and SERIAL# from the v$session view.
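Putting the two steps together (the username SCOTT is hypothetical): look up the session, enable the trace, and remember to turn it off afterwards:

```sql
-- Find the target session's SID and SERIAL#:
SELECT sid, serial# FROM v$session WHERE username = 'SCOTT';

-- Enable tracing for that session (here SID=10, SERIAL#=22), then disable:
EXECUTE dbms_system.set_sql_trace_in_session(10, 22, TRUE);
EXECUTE dbms_system.set_sql_trace_in_session(10, 22, FALSE);
```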

How to monitor tablespaces proactively from back-end in 10g

  • MMON background process monitors the free space in each tablespace and sends out the alerts.
  • First, we create a tablespace for testing:
    create tablespace test datafile '/apps/oracle/oracle/product/10.2.0/oradata/orcl/test01.dbf' size 100m extent management local uniform size 10m;

    Tablespace created.
  • This PL/SQL call sets thresholds so that an alert is raised when the tablespace is 80% full (warning) and 95% full (critical):
    execute DBMS_SERVER_ALERT.SET_THRESHOLD(-
    > DBMS_SERVER_ALERT.TABLESPACE_PCT_FULL,DBMS_SERVER_ALERT.OPERATOR_GE,'80',-
    > DBMS_SERVER_ALERT.OPERATOR_GE,'95',1,1,NULL,-
    > DBMS_SERVER_ALERT.OBJECT_TYPE_TABLESPACE,'TEST');

    PL/SQL procedure successfully completed.
  • By querying this view we can see which tablespace is the most affected:
    select reason from dba_outstanding_alerts;

    For example, if the test tablespace is more than 80% full, we can see an entry here.
  • We can add a datafile to the test tablespace so that we can avoid any further complications.
    alter tablespace test add datafile '/apps/oracle/oracle/product/10.2.0/oradata/orcl/test02.dbf' size 50m;
    Tablespace altered.
  • By querying this view again we can see that the test tablespace alert is gone (for example):
    select reason from dba_outstanding_alerts;
  • All cleared alerts show up in the dba_alert_history view:
    select reason, resolution from dba_alert_history;

Thursday, October 30, 2008

Custom Top Creation in 11.5.0

  • Log into your UNIX box as applmgr and create a directory under APPL_TOP, for example: test
  • Create another directory under test as 11.5.0 (location of test_top )
  • Create directories
    mkdir -p media forms reports sql log bin admin lib java mesg patch out html jar
  • Change your directory location to test_top/forms and create US directory
  • Similarly change your directory location to test_top/reports and create US directory
  • create sql and odf directories under test_top/admin
  • Then create customTEST.env in /apps/applmgr/visappl/
  • Edit customTEST.env
  • TEST_TOP=/apps/applmgr/visappl/test/11.5.0/
    export TEST_TOP
    save and quit
  • log into your database and create tablespace for your custom top
    For example:
    create tablespace test datafile '/apps/oracle/visdata/testdb01.dbf' size 200m;
  • create a user called test
    For example:create user test identified by welcome;
  • grant privileges
    For example: grant connect, resource to test;
  • for the user created assign a default tablespace
    For example: alter user test default tablespace test;
  • Login to your EBS with system administrator privileges.
  • Navigate Application --> Register
    Application --- custom application
    Short name --- test
    Basepath --- test_top
    Description --- custom application
  • Navigate Security --> Oracle --> Register
    Database user --- test
    Password --- welcome
    Privilege --- enabled
    Install group --- 0
    Description --- custom application user
  • Navigate Security --> Oracle --> Data Group
    Data group --- custom group
    Description --- custom application data group
    Click on copy application form and select standard data group
    Now add your custom top application:
    Application --- custom application
    Oracle ID --- apps
    Description --- custom application test group
  • Navigate to Security --> Responsibility --> Request
    Group --- custom apppl request group
    Application --- custom application
    Code --- custom
    Description --- custom application data group
  • Navigate Application --> Menu
    Menu --- custom_menu
    User menu name --- custom application user
    Menu type --- leave blank
    Description --- custom application custom menu
    Seq --- 10
    Prompt --- run requests
    Sub menu --- leave blank
    Function --- requests:submit
    Description --- submit requests
    Seq --- 20
    Prompt --- view requests
    Sub menu --- leave blank
    Function --- view all concurrent requests
    Description --- view requests
  • Navigate Security --> Responsibility --> Define
    Responsibility name --- custom application
    Application --- custom application
    Responsibility key --- customapplication
    Description --- custom application responsibility
    Available from --- oracle applications
    Data group:
    Name --- custom group
    Application --- custom application
    Request group:
    Name --- custom appl request group
    Application --- custom application
    Menu --- custom application user
  • Navigate Security --> User --> Define
    User --- test
    Password --- welcome
    Description --- custom application user
    Add the custom application responsibility to this user test

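The directory-creation steps above can be sketched as a small shell script; the scratch location /tmp/appl_top_demo stands in for the real APPL_TOP and is purely an assumption for illustration:

```shell
#!/bin/sh
# Scratch stand-in for the real APPL_TOP (assumption for illustration).
APPL_TOP=/tmp/appl_top_demo
TEST_TOP=$APPL_TOP/test/11.5.0

# Create the custom top and its standard subdirectories.
mkdir -p "$TEST_TOP"
cd "$TEST_TOP" || exit 1
mkdir -p media forms reports sql log bin admin lib java mesg patch out html jar

# US directories under forms and reports, plus sql and odf under admin.
mkdir -p forms/US reports/US admin/sql admin/odf

ls "$TEST_TOP"
```

Running it once creates the whole tree, after which TEST_TOP can be pointed at it from customTEST.env.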
Wednesday, October 29, 2008

Clearing JSP pages in R12

  • When you clear all _pages (in $INST_TOP/_pages) and restart Apache, you will get a blank page, because appsLocalLogin.jsp cannot be compiled automatically in R12.
  • This can be fixed by making a change in the orion-web.xml file: change the main_mode parameter value from "justrun" to "recompile".
  • The location of orion-web.xml in R12 is $INST_TOP/ora/10.1.3/j2ee/oacore/application-deployments/oacore/html/orion-web.xml
  • Restart middle tier services.

How to Compile Forms in R12

Compile a single form manually:
frmcmp_batch module=$AU_TOP/forms/US/testform.fmb userid=apps/apps output_file=$TEST_TOP/forms/US/testform.fmx module_type=form compile_all=special

Note: Here we are compiling a form, so we specify module_type=form. For a library (.pll) file we specify module_type=library, for example (file names hypothetical):
frmcmp_batch module=$AU_TOP/resource/testlib.pll userid=apps/apps output_file=$AU_TOP/resource/testlib.plx module_type=library compile_all=special

compile all forms and all library files:
  • log in as applmgr
  • source environment file
  • run adadmin
  • select Generate application files menu and
  • Generate form files

Friday, September 26, 2008

Configuring notification mailer in EBS R12

  1. Login to your EBS as a system administrator.
  2. Navigate to System Administration-->OAM-->Workflow
  3. You can find all the related information regarding your workflow system.
  4. Navigate to Notification Mailers and edit basic configuration
  5. details--> name workflow notification mailer
  6. Outbound Email Account (SMTP)--> Server Name--> Host Name of the machine used to send email notifications (including full domain name).
  7. Inbound Email Account (IMAP)--> Uncheck Inbound Processing --> Server Name--> Host Name of the machine to receive the email replies (including full domain name)-->Username--> Username for the Inbound Server-->Password-->Password for the Inbound Server--> Reply-to-address--> The address of the e-mail account that receives incoming messages, to which notification responses should be sent (for example: fred.smith@company.com).
  8. Apply the changes done to notification mailer
  9. Go back to the Notification Mailers main page and you can see Actions --> just start your notification mailers.
  10. Click on Workflow in your OAM--> you can see all the listeners like (Notification Mailers, Agent Listeners, Service Components) are up.
  11. This completes Workflow Notification Mailer Setup.

Saturday, September 20, 2008

Send Mail option in RHEL ES 4.7

  1. Download send mail rpm from rpmfind.net
  2. Install the rpm
  3. Go to /etc/mail; here you will find the sendmail-related files
  4. Now convert the configuration file sendmail.mc to sendmail.cf using m4 (a macro processor), after backing up sendmail.mc and sendmail.cf
  5. mail ] # m4 /etc/mail/sendmail.mc > /etc/mail/sendmail.cf
  7. Go to /etc/init.d for newaliases and restart send mail (service sendmail restart)
  8. Change permissions for sendmail.cf (chmod 700 sendmail.cf)
  9. Go to tmp (directory) to check if sendmail is running using following command
  10. tmp ] # pgrep sendmail
  11. It should return pid if it is running and return blank if it is not running
  12. To check the interface on which sendmail is listening, use the following command
  13. tmp ] # netstat -an | grep :25 | grep tcp
  14. It should show a list of tcp listeners
  15. Go to /etc/mail/sendmail.mc and modify/edit the file as follows:
    Comment DAEMON_OPTIONS to make sendmail listen on the NIC
    Comment "accept_unresolvable_domains" to avoid SPAM
    Comment "relay_based_on_MTI" to avoid mails from other PCs which are not configured on the domain.
  16. Regenerate the sendmail.cf file as in step 4
  17. Configure the sendmail in /etc/hosts file
    Ipaddress hostname.domainname
    Example: 10.0.0.143 localhost.domainname.com localhost mail www
  18. Now try sending mail as follows
    tmp ] # sendmail -v ****@domain.com
    Test text
    Test text
    . (dot)
  19. Now we can check the status or errors mail sent in /var/log/maillog
  20. If the mail is sent, the status will be shown as stat=sent; if not, stat=deferred

Tuesday, August 12, 2008

Staging Area activity for R12 (12.0.4)

Contents:
Start Disk --- 1 Disk
RDBMS --- 1 Disk
Database --- 8 Disks
Tools --- 1 Disk
Appl_top --- 2 Disks

Operating System

Red Hat Enterprise Linux 5 (2.6.18-53.el5xen)

Memory Configuration

RAM 16 GB
HDD 146 GB

Stage Path

/media/R1204start/Disk1/rapidwiz/adautostg.pl

Steps to be followed while creating Staging Area

1) Login as root user
2) Verify the Perl version (perl –version)
3) Make sure that sufficient space is available in the particular mount point where you want to create staging area.
4) Copy all the contents in Start Disk to local HDD and start executing the perl script.
5) After executing the perl script it prompts you to give a directory where the media is to be staged (for example: To create stage area at /u01/StageR12 enter /u01) : /u01
6) The stage area will be created at /u01/StageR12
7) Do you use AutoMount to mount/umount the media y/n [y] : n
8) Enter the entire mount command [mount –t iso9660 /dev/cdrom /mnt/cdrom] : (just press enter)
9) Enter the umount command [umount /mnt/cdrom] : (just press enter)
10) Choose the components you want to stage (for example : 1,2,3,4,5)
Press 1 (complete Oracle Applications 10g, Database, 10.1.2, 10.1.3, APPL & COMN)
11) Insert the valid labeled DVD as and when it prompts you.
12) By doing so Staging Area will be created.
13) The top-level staged directory will look like StageR12, with sub-directories startCD, oraApps, oraDB, oraAS, and oraAppDB.

Tips:

1) After completion of Disk 1 of appl_top, it prompts you to insert the second disk; at that time just press Enter and the staging process will continue. Later it asks for Disk 2; insert the disk labeled Appl_top Disk 2 and staging will continue as before.
2) Total Space used for Staging is 36 GB.
3) Total time taken is about one and a half hours.
4) After creating Staging Area, move all the contents into an external HDD so that it will be stored permanently to avoid data loss on local HDD.

Monday, August 4, 2008

Error: Ora-12560

Error: ORA-12560 TNS protocol adapter error...
Cause: This can happen when someone modifies the tnsnames.ora file ($TNS_ADMIN)
Action: 1) Check the connect descriptor
2) Check the host name
3) Check that the SERVICE_NAME is specified correctly
4) Check whether all the parameters are aligned properly
Example of a sample tnsnames.ora entry:
testdb =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = custprod)(PORT = 1521))
    )
    (CONNECT_DATA =
      (SERVICE_NAME = testdb.world)
    )
  )

Hot Backup for Windows

set ORACLE_SID=test
set ORACLE_HOME=D:\oracle\product\10.2.0\db_1
set PATH=D:\oracle\product\10.2.0\db_1\bin;%PATH%
H:
cd \test\hot-backup
mkdir _bkp_%date:~7,2%-%date:~4,2%-%date:~10,4%
D:
cd %ORACLE_HOME%
sqlplus sys/password@test as sysdba @D:\backupscripts\hot_test.sql
echo "end of hot backup"
-----hot_test.sql
alter database begin backup;
host copy D:\oracle\product\10.2.0\oradata\test\ H:\test\hot-backup\_bkp_%date:~7,2%-%date:~4,2%-%date:~10,4%
alter database backup controlfile to trace;
alter database end backup;
exit

Full Database Backup:Export Utility

set ORACLE_SID=test
set ORACLE_HOME=D:\oracle\product\10.2.0\db_1
exp system/welcome@test file=exp_full_test_%date:~7,2%-%date:~4,2%-%date:~10,4%.dmp log=exp_full_test_%date:~7,2%-%date:~4,2%-%date:~10,4%.log full=y
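A matching restore with the classic import utility might look like this (the file names are hypothetical, and exp/imp must be run with compatible versions):

```bat
set ORACLE_SID=test
set ORACLE_HOME=D:\oracle\product\10.2.0\db_1
rem Restore the full export taken above (hypothetical dump file name):
imp system/welcome@test file=exp_full_test.dmp log=imp_full_test.log full=y
```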

Top Ten List of Oracle Performance Mistakes

1) Bad connection Management:
The application connects and disconnects for each database interaction. This is a common problem with stateless middleware in application servers. This mistake has a two-orders-of-magnitude impact on performance, and it is totally unscalable.
2) Bad use of Cursors and Shared Pool:
Not using cursors results in repeated parses. If bind variables are not used, then hard parsing occurs for all SQL statements. This has an order of magnitude impact on performance, and it is totally unscalable. Use cursors with bind variables: open the cursor once and re-execute it many times.
3) Getting Database I/O wrong:
Many sites lay out their database poorly over the available disks. Other sites specify the number of disks incorrectly, because they configure disks by disk space and not I/O bandwidth.
4) Redo Log Setup Problems:
Many sites run with too few redo logs that are too small. Small redo logs cause frequent checkpoints, which continuously put a high load on the buffer cache and I/O system. If there are too few redo logs, then the archiver cannot keep up, and the database stalls.
5) Serialization of data blocks in the buffer cache due to lack of free lists, free list groups, transaction slots (INITRANS), or shortage of rollback segments:
This is particularly common on INSERT-heavy applications, in applications that have raised the block size to 8K or 16K, or in applications with large numbers of active users and few rollback segments.
6) Long Full Table Scans:
Long full table scans for high volume or interactive online operations could indicate poor transaction design, missing indexes, or poor SQL optimization. Long table scans, by nature, are I/O intensive and unscalable.
7) In disk Sorting:
In disk sorts for online operations could indicate poor transaction design, missing indexes, or poor SQL optimization. In disk sorts, by nature, are I/O intensive and unscalable.
8) High amounts of Recursive (SYS) SQL:
Large amounts of recursive SQL executed by SYS could indicate space management activities, such as extent allocations, taking place. This is unscalable and impacts user response time. Recursive SQL executed under another user ID is probably SQL and PL/SQL, and this is not a problem.
9) Schema Errors and Optimizer Problems:
In many cases, an application uses too many resources because the schema owning the tables has not been successfully migrated from the development environment or from an older implementation. Examples of this are missing indexes or incorrect stats. These errors can lead to sub-optimal execution plans and poor interactive user performance.
10) Use of Nonstandard Initialization Parameters:
These might have been implemented based on poor advice or incorrect assumption. In particular, parameters associated with spin count on latches and undocumented optimizer features can cause a great deal of problems that can require considerable investigation.
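Mistake 2 above can be illustrated with a small PL/SQL sketch (the emp table and loop bounds are hypothetical): the first loop builds a new literal statement on every iteration and hard parses each one, while the second parses one statement and re-executes it with a bind variable:

```sql
-- Bad: 1000 distinct statements, each hard parsed.
BEGIN
  FOR i IN 1 .. 1000 LOOP
    EXECUTE IMMEDIATE 'UPDATE emp SET sal = sal WHERE empno = ' || i;
  END LOOP;
END;
/

-- Good: one shared statement, parsed once, executed 1000 times.
BEGIN
  FOR i IN 1 .. 1000 LOOP
    EXECUTE IMMEDIATE 'UPDATE emp SET sal = sal WHERE empno = :1' USING i;
  END LOOP;
END;
/
```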

Sample DB Script

connect sys/oracle as sysdba
startup nomount pfile=/dbs/inittest.ora
create database "test"
datafile '/oradata/system01.dbf' size 200M
sysaux datafile '/oradata/sysaux.dbf' size 100M
logfile '/oradata/redo01.log' size 100M, '/oradata/redo02.log' size 100M
character set "WE8ISO8859P1"
national character set "UTF8"
extent management local
undo tablespace undotbs1 datafile '/oradata/undotbs1.dbf' size 100M
default temporary tablespace temp tempfile '/oradata/temp01.dbf' size 100M;
Running Data Dictionary Scripts:
1) catalog.sql (ORACLE_HOME/rdbms/admin)
--- Needed for all normal data dictionary views
2) catproc.sql (ORACLE_HOME/rdbms/admin)
--- Needed to load the initial PL/SQL environment
3) pupbld.sql (ORACLE_HOME/sqlplus/admin)
--- Script to install the SQL*Plus PRODUCT_USER_PROFILE tables
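In SQL*Plus the three scripts are typically run like this ('?' is SQL*Plus shorthand for ORACLE_HOME; pupbld.sql is usually run as SYSTEM, whose password here is hypothetical):

```sql
CONNECT / AS SYSDBA
@?/rdbms/admin/catalog.sql
@?/rdbms/admin/catproc.sql

CONNECT system/manager
@?/sqlplus/admin/pupbld.sql
```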