Thursday, July 11, 2013

to_char or to_date: which one to use

Our developers often ask me which one to use between to_char and to_date. I think they should:
  • Use to_char to DISPLAY a date datatype 
  • Use to_date to STORE a date datatype in a row
to_char converts a date to a varchar, and to_date converts a varchar to a date.
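
A minimal sketch of both directions (assuming a table employees with a DATE column hire_date; the names are made up for illustration):

-- DISPLAY: to_char turns the DATE column into a VARCHAR2 for output
SELECT to_char(hire_date, 'DD-MON-YYYY HH24:MI') AS hired
FROM employees;

-- STORE: to_date turns a VARCHAR2 literal into a DATE for the row
INSERT INTO employees (employee_id, hire_date)
VALUES (42, to_date('11-JUL-2013', 'DD-MON-YYYY'));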

Tuesday, June 11, 2013

Disadvantages of enabling row movement on Oracle tables


The shrink operation that row movement enables will necessarily consume processing resources on your machine while running: it will read the table, it will delete/insert the rows at the bottom of the table to move them up, it will generate redo, and it will generate undo.
 
If you had densely packed blocks organized by index key value, alongside lots of pretty much empty blocks, then spraying those rows about the top of the table could hurt the clustering factor.
 
Do you have any application that expects the rowid of a row to be constant? If so, that application is wrong.
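
For context, the delete/insert activity described above comes from a segment shrink, which is the usual reason to enable row movement in the first place. A minimal sketch (sales is a hypothetical table):

ALTER TABLE sales ENABLE ROW MOVEMENT;

-- Rows at the bottom of the segment are deleted and re-inserted
-- nearer the top, generating the redo and undo mentioned above,
-- and every moved row gets a new rowid
ALTER TABLE sales SHRINK SPACE;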

Monday, June 10, 2013

Oracle Streams vs Advanced Replication

Streams is a much more efficient architecture. Rather than synchronously capturing changes in materialized view logs which adds overhead to the transaction, Streams mines the redo logs that are generated by the change in order to generate the change vector to send to the target system. That also allows the change to be replicated to the other system(s) much more quickly.

Streams is a much more flexible architecture. You can write your own custom apply processes which makes it relatively easy to inject additional processing or logging, to ignore certain changes, etc. You can send a Streams logical change record (LCR) to a JMS queue or to a non-Oracle system (though, of course, you'd need to write code to apply the LCR on the non-Oracle system).
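
As a sketch of that flexibility (the table, audit table, and procedure names here are made up), a custom DML handler can log each LCR before applying it:

-- Handler procedure: receives the LCR wrapped in an ANYDATA
CREATE OR REPLACE PROCEDURE log_orders_lcr (in_any IN ANYDATA) IS
  lcr SYS.LCR$_ROW_RECORD;
  rc  PLS_INTEGER;
BEGIN
  rc := in_any.GETOBJECT(lcr);                   -- unwrap the row LCR
  INSERT INTO lcr_audit (command_type, logged_at)
  VALUES (lcr.GET_COMMAND_TYPE(), SYSTIMESTAMP);  -- custom logging step
  lcr.EXECUTE(TRUE);                             -- then apply the change normally
END;
/

-- Register the handler for inserts on the replicated table
BEGIN
  DBMS_APPLY_ADM.SET_DML_HANDLER(
    object_name    => 'app.orders',
    object_type    => 'TABLE',
    operation_name => 'INSERT',
    error_handler  => FALSE,
    user_procedure => 'strmadmin.log_orders_lcr');
END;
/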

Oracle Streams does not support the following:
  • Replication of changes to tables with columns of the following data types: BFILE, ROWID, and user-defined types (including object types, REFs, varrays, and nested tables)
  • Synchronous replication
If your current Advanced Replication environment uses these features, then these elements of the environment cannot be migrated to Oracle Streams. In this case, you might decide not to migrate the environment to Oracle Streams now, or you might decide to modify the environment so that it can be migrated to Oracle Streams.

When to Replicate Data with Materialized Views

Unlike Oracle Streams replication, materialized views do not continuously replicate data at all times. A materialized view is a replica of a table or a subset of a table that can be refreshed to a transactionally consistent point in time. During a refresh, only the final values of the changed rows are pulled down and applied to the materialized view, no matter how many updates were applied to the master table. This reduces the amount of time that the remote site must be connected to the master site.
Materialized views are especially useful for locations with limited connectivity to the master site. Updatable materialized views allow these locations to function autonomously, even when connectivity is unavailable. When updates are allowed at multiple locations, ownership is typically partitioned in some manner between the locations to prevent conflicting updates. When conflicts are possible, Oracle provides conflict resolution methods that can resolve these conflicts automatically.
In addition to supporting disconnected computing, organizations can also use materialized views to improve performance and scalability by providing local access to data and by off loading processing at the primary location. For example, one or more materialized views might be used to off load reporting activity from an order-entry system.
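
A minimal sketch of that pattern (orders and master_db are assumed names): a materialized view log on the master captures changes, and the remote replica pulls only the net changes at each refresh:

-- On the master site: the log that makes fast refresh possible
CREATE MATERIALIZED VIEW LOG ON orders WITH PRIMARY KEY;

-- On the remote site: a replica refreshed hourly over a database link
CREATE MATERIALIZED VIEW orders_mv
REFRESH FAST NEXT SYSDATE + 1/24
AS SELECT * FROM orders@master_db;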

Gather statistics on a standby database and import them on the primary (PROD)

Statistics can be exported and imported from the data dictionary to user-owned tables, enabling you to create multiple versions of statistics for the same schema. You can also copy statistics from one database to another database. You may want to do this to copy the statistics from a snapshot standby database to a production database.

Before exporting statistics, you first need to create a table for holding the statistics. This statistics table is created using the procedure DBMS_STATS.CREATE_STAT_TABLE. After this table is created, then you can export statistics from the data dictionary into your statistics table using the DBMS_STATS.EXPORT_*_STATS procedures. The statistics can then be imported using the DBMS_STATS.IMPORT_*_STATS procedures.

In order to move statistics from a snapshot standby database to a primary database, you must first export the statistics on the first database, then copy the statistics table to the second database, using the EXP and IMP utilities or other mechanisms, and finally import the statistics into the second database.
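Put together, a sketch of the three steps (SCOTT and MY_STATS are assumed names):

-- 1. On the snapshot standby: create a statistics table and export into it
BEGIN
  DBMS_STATS.CREATE_STAT_TABLE(ownname => 'SCOTT', stattab => 'MY_STATS');
  DBMS_STATS.EXPORT_SCHEMA_STATS(ownname => 'SCOTT', stattab => 'MY_STATS');
END;
/

-- 2. Copy SCOTT.MY_STATS to the primary, e.g. with exp/imp or a database link

-- 3. On the primary: import from the statistics table into the dictionary
BEGIN
  DBMS_STATS.IMPORT_SCHEMA_STATS(ownname => 'SCOTT', stattab => 'MY_STATS');
END;
/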


Thursday, May 23, 2013

How to tune redo apply on a standby database

To maintain close synchronization with the primary database, it is recommended that you enable real time apply on the standby database. This enables the standby to apply redo changes as they are received, rather than waiting for a log switch on the primary database. Real time apply is the default if you have created your configuration using the Data Guard broker; it must be enabled manually if the configuration was created using SQL*Plus.
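
On a SQL*Plus-managed physical standby, real time apply is typically enabled like this (a sketch for a pre-12c standby, assuming standby redo logs are already in place):

-- Stop managed recovery, then restart it with real time apply
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFILE DISCONNECT;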
 
Query the V$DATAGUARD_STATS view to determine if Redo Apply has recovered all redo that has been received from the primary. For example:
 
SELECT * FROM V$DATAGUARD_STATS WHERE NAME='apply lag';
 
NAME       VALUE           TIME_COMPUTED
---------  --------------  --------------------
apply lag  +00 00:00:04    15-MAY-2005 10:32:49
 
 
The query indicates that Redo Apply has not applied the last 4 seconds of redo received on the standby database.
If the apply lag indicates that the standby is behind the primary by a significant amount, this could mean that a gap exists between the standby and the primary.
A gap occurs when events have prevented the successful transmission of a complete redo log file. If this has occurred, Redo Apply cannot proceed until the gap has been resolved, either automatically by Data Guard or, should other complications prevent the automated process from completing, manually by copying the missing log files and registering them at the standby database. To detect whether a gap exists, run the following query on both the primary and standby databases and compare the results:
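
For example, one such comparison checks the highest archived sequence number for each thread on both databases; on the standby, V$ARCHIVE_GAP then reports any missing range:

SELECT THREAD#, MAX(SEQUENCE#) FROM V$ARCHIVED_LOG GROUP BY THREAD#;

-- On the standby only: the missing range, if a gap exists
SELECT THREAD#, LOW_SEQUENCE#, HIGH_SEQUENCE# FROM V$ARCHIVE_GAP;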