Wednesday, March 22, 2017

How to identify High volume tables / Empty tables in Oracle


To maintain the database and to improve performance, we need to keep an eye on huge-volume tables, and we may also need to delete unused tables ( zero-record tables ). 

We can identify huge tables or empty tables easily using NUM_ROWS


What is NUM_ROWS?

NUM_ROWS is a column available in the Oracle built-in data dictionary view  " ALL_TABLES "

The view "ALL_TABLES"  holds key summary information about all the tables accessible to the current user.


Simply, to identify the tables which have more than 1 million records :-

Below is the query to execute.  It retrieves the table name, the number of rows and much more information.


 SELECT *
 FROM   ALL_TABLES
 WHERE  OWNER = 'GLOGOWNER'
 AND    NUM_ROWS > 1000000;




To identify top huge volume tables :-

SELECT table_name, tablespace_name, num_rows
FROM   dba_tables
WHERE  owner = 'GLOGOWNER'
ORDER  BY num_rows DESC;


The above statement lists the table names in order from the highest record count to the lowest. 
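If only the top few tables are needed, the ordered query can be wrapped in an inline view and limited with ROWNUM ( a sketch; the owner GLOGOWNER and the limit of 10 are just examples ):

 SELECT *
 FROM  ( SELECT table_name, tablespace_name, num_rows
         FROM   dba_tables
         WHERE  owner = 'GLOGOWNER'
         ORDER  BY num_rows DESC )
 WHERE ROWNUM <= 10;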


To identify list of empty tables :-



 SELECT *
 FROM   ALL_TABLES
 WHERE  OWNER = 'GLOGOWNER'
 AND    NUM_ROWS = 0;



 

NOTE:-  The row counts returned by the above queries do not exactly match the live number of records, because NUM_ROWS is updated only when statistics are gathered on the table, either by the ANALYZE statement or by the DBMS_STATS package.  These are normally executed by DBAs as part of maintenance activity. 

So the count will not exactly match the current data count, but it matches approximately ( around 95% )


To get the count to match exactly :-

Execute either of the below statements for a particular table to gather statistics. Once this is done, the NUM_ROWS column will be updated with the exact row count of the table. ( DBMS_STATS is the method Oracle recommends; ANALYZE ... COMPUTE STATISTICS is retained only for backward compatibility. )
 
 ANALYZE TABLE SHIPMENT COMPUTE STATISTICS; 
                          [OR]
  EXEC DBMS_STATS.gather_table_stats('GLOGOWNER', 'SHIPMENT');


GLOGOWNER =  Schema or User name
SHIPMENT     =  Table name
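
To confirm the refreshed statistic, NUM_ROWS can then be compared against a live COUNT(*) ( a sketch, reusing the GLOGOWNER / SHIPMENT example names from above ):

 SELECT num_rows
 FROM   all_tables
 WHERE  owner = 'GLOGOWNER' AND table_name = 'SHIPMENT';

 SELECT COUNT(*) FROM glogowner.shipment;

Right after a fresh gather, the two numbers should match.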



Click the link and Like this FB page  to get more articles like this.

Monday, March 20, 2017

SQL Loader - Utility to load data from flat files to Oracle tables - Quick & Detailed view



It is a very common requirement to move custom data available in external files into particular tables in an Oracle DB.  SQL*Loader is one option, and it is flexible enough to handle many different requirements when moving the data. 


First of all, what is SQL*Loader?  - A quick view. 

SQL*Loader is a utility by which we can move data from external flat files such as CSV, TXT etc., to Oracle tables. 


How to achieve this? 

Very simple. We need to create two files first:   1. a control file    and  2. a source data file. 


What is a control file?

A control file is a plain-text file saved with the extension  " .ctl " ; it holds the specifications about the flat file: the data, table name, delimiter etc., 


Sample & Simple control file will be as following one:-


 load data
 infile 'c:\data\item_data.csv'     BADFILE 'mydata.bad'  DISCARDFILE 'mydata.dis'
 into table item
 fields terminated by "," optionally enclosed by '"'    
 ( item_gid, item_desc, item_type, domain_name)

==> infile holds the external source file name and its path details
       
The bad file and discard files both contain rejected rows, but they are rejected for different reasons:  Both are optional.
  • Bad file:  The bad file contains rows that were rejected because of errors.  These errors might include bad datatypes or referential integrity constraints.
  • Discard file:  The discard file contains rows that were discarded because they were filtered out because of a statement in the SQL*Loader control file.
==> into table clause holds the < table name > 
==> terminated by "," means the fields are separated by a comma.  " # " would mean the fields are separated by #.
==> optionally enclosed by  '  " '  means that a value within double quotes, even one containing spaces, is treated as a single field.
For example : " Logitech Pen Drive " will be loaded as the value of a single column.  
==> last line in the above example holds the list of column names of the table ITEM.
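
For completeness, a minimal ITEM table matching the column list above could look like this ( a hypothetical definition; the column names come from the control file, but the datatypes are assumptions ):

 CREATE TABLE item (
   item_gid    VARCHAR2(50),
   item_desc   VARCHAR2(100),
   item_type   VARCHAR2(50),
   domain_name VARCHAR2(50)
 );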

Sample Source File :-

the source file  item_data.csv  may look like this:
10001,"Laptop HP","Electronic","Sales"
10002,"Logitech Pen drive","Electronic","Service"

Once we have created the above two files, we execute one of the below SQLLDR commands from the command prompt. Upon execution, the data will be loaded into the table ITEM.


sqlldr username/password@server control=loader.ctl
sqlldr userid=username/password@server control=loader.ctl

==> sqlldr = keyword
==> username, password & server names are related to the DB where we want to connect and load the data. 
==> control = keyword
==> loader.ctl  is the name of control file. 


More options on SQL Loader?  - A Detailed view. 

Load fixed length data ( instead of comma separator ). 

In this case, control file will be as follows.


load data
 infile 'c:\data\item_data.txt'
 into table item
 (  item_gid  position (01:04) char(4),
    item_desc position (08:27) char(20)
 )

item_data.txt will be like this:-


1234   Laptop
1111   Mouse


Can we have the data in the control file itself ( no need of a source file )? 

Yes we can .. as follows

Please note that    infile *    and   begindata   are the keywords that change in this control file compared with the earlier one. 


load data
 infile *
 replace
 into table item_data
 (  item_gid  position (01:04) char(4),
    item_desc position (07:26) char(20)
 )
begindata
1111  APPLE IPHONE 7S
2222  LENOVA LAPTOP
3333  LOGITECH MOUSE V2
4444  NIKON CAMERA 



Can we modify the data on the fly while loading it?

Yes we can.. by using the below sample control file.


LOAD DATA
  INFILE *
  INTO TABLE modified_data
  (  rec_no                      "my_db_sequence.nextval",
     region                      CONSTANT '31',
     time_loaded                 "to_char(SYSDATE, 'HH24:MI')",
     data1        POSITION(1:5)  ":data1/100",
     data2        POSITION(6:15) "upper(:data2)",
     data3        POSITION(16:22)"to_date(:data3, 'YYMMDD')"
  )
BEGINDATA
11111AAAAAAAAAA991201

LOAD DATA
  INFILE 'mail_orders.txt'
  BADFILE 'bad_orders.txt'
  APPEND
  INTO TABLE mailing_list
  FIELDS TERMINATED BY ","
  (  addr,
     city,
     state,
     zipcode,
     mailing_addr   "decode(:mailing_addr, null, :addr, :mailing_addr)",
     mailing_city   "decode(:mailing_city, null, :city, :mailing_city)",
     mailing_state,
     move_date      "substr(:move_date, 3, 2) || substr(:move_date, 7, 2)"
  )


Can we upload data from multiple source files ? 

Yes we can. Please refer to the below sample control file. 


LOAD DATA
  INFILE file1.dat
  INFILE file2.dat
  INFILE file3.dat
  APPEND
  INTO TABLE emp
  ( empno  POSITION(1:4)   INTEGER EXTERNAL,
    ename  POSITION(6:15)  CHAR,
    deptno POSITION(17:18) CHAR,
    mgr    POSITION(20:23) INTEGER EXTERNAL
  )


Can we upload into multiple tables ? 

Yes we can. Please refer to the below sample control file.



LOAD DATA
  INFILE 'mydata.dat'
  REPLACE
  INTO TABLE emp
  WHEN empno != ' '
  ( empno  POSITION(1:4)   INTEGER EXTERNAL,
    ename  POSITION(6:15)  CHAR,
    deptno POSITION(17:18) CHAR,
    mgr    POSITION(20:23) INTEGER EXTERNAL
  )
  INTO TABLE proj
  WHEN projno != ' '
  ( projno POSITION(25:27) INTEGER EXTERNAL,
    empno  POSITION(1:4)   INTEGER EXTERNAL
  )


Can we upload only selected data from source file ?

Yes we can. Please refer to the below sample control file. 


LOAD DATA
  INFILE  'mydata.dat' BADFILE  'mydata.bad' DISCARDFILE 'mydata.dis'
  APPEND
  INTO TABLE my_selective_table
  WHEN (01) <> 'H' and (01) <> 'T'
  (
     region              CONSTANT '31',
     service_key         POSITION(01:11)   INTEGER EXTERNAL,
     call_b_no           POSITION(12:29)   CHAR
  )
  INTO TABLE my_selective_table
  WHEN (30:37) = '20031217'
  (
     region              CONSTANT '31',
     service_key     POSITION(01:11)   INTEGER EXTERNAL,
     call_b_no         POSITION(12:29)   CHAR
  )


Can we load images, sound clips, BLOB data and documents ? 

Yes, we can, by using the below sample control file. 


Consider the following table is created.

CREATE TABLE image_table (
       image_id   NUMBER(5),
       file_name  VARCHAR2(30),
       image_data BLOB);

Control File:
LOAD DATA
INFILE *
INTO TABLE image_table
REPLACE
FIELDS TERMINATED BY ','
(
 image_id   INTEGER(5),
 file_name  CHAR(30),
 image_data LOBFILE (file_name) TERMINATED BY EOF
)
BEGINDATA
001,image1.gif
002,image2.jpg
003,image3.jpg


We are at the end of this session on SQL*Loader. Now you can proudly call yourself a SQL*Loader expert.  Try these out to gain more experience.. 



Saturday, March 11, 2017

FLASHBACK - How to get deleted records after commit & recover dropped tables

Accidentally, or by human error, we might delete and commit, truncate some records, or drop a table that we were not supposed to touch.

Have you done this ?  No worries, we can recover them using the FLASHBACK feature in Oracle. 

Let us see in detail below. 


How to Recover Deleted Records using AS OF TIMESTAMP :-


Imagine you had deleted some records ( and committed ) from the table SHIPMENT. Here is how we get those records back with the below queries. 

select * from SHIPMENT as of timestamp sysdate-1/24 ;

will show you the table as it was 1 hour ago, including the records deleted within the last hour.

[or]

select * from SHIPMENT AS OF TIMESTAMP 
      TO_TIMESTAMP('2017-03-07 10:00:00', 'YYYY-MM-DD HH24:MI:SS');


will give you the result of how the table was exactly to the particular time.

[or]

insert into SHIPMENT
  ( select * from SHIPMENT as of timestamp sysdate-1/24
    minus
    select * from SHIPMENT );

will insert only the deleted records back ( the MINUS keeps the rows that still exist from being duplicated ).
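
Alternatively, the whole table can be rewound in place with FLASHBACK TABLE ... TO TIMESTAMP ( a sketch; note that row movement must be enabled on the table first ):

 ALTER TABLE shipment ENABLE ROW MOVEMENT;

 FLASHBACK TABLE shipment TO TIMESTAMP SYSDATE - 1/24;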


How to recover dropped table:


From Oracle 10g, the recycle bin stores a dropped table under another name, so that it can be recovered later. 

Imagine the SHIPMENT_STATUS table is dropped. Now, the below statement helps you to recover the table as it was before the drop.

FLASHBACK table SHIPMENT_STATUS to before drop;

We can even recover the dropped table with new table name. 

FLASHBACK table SHIPMENT_STATUS to before drop rename to SHIPMENT_STATUS_BKP;


Below are the some of the key points to understand about recovering the data and table :-


1. The flashback feature depends on how much undo retention time you have specified. If you have set the UNDO_RETENTION parameter to 6 hours, Oracle will try not to overwrite the data in the undo tablespace, even after a commit, until 6 hours have passed. Users can recover from mistakes made within the last 6 hours only.

To check the current value, the below query will give you the answer in seconds.

SELECT TUNED_UNDORETENTION FROM V$UNDOSTAT ;


2. Recycle bin space is reclaimed by Oracle when it needs room for newly dropped objects. Hence, a table dropped long ago is not guaranteed to still be available in the recycle bin. 

The  SHOW RECYCLEBIN;  command will list the tables that are still recoverable. 

3. When we recover a dropped table, all the indexes, triggers and constraints associated with the table are also recovered, but BITMAP indexes won't be, since BITMAP indexes are not stored in the recycle bin when the table is dropped. 

4. If the table has been altered using DDL statements, like adding a new column or changing a column's data type, then the deleted records cannot be retrieved. 

I wish you never need to use FLASHBACK in real time :)

Thursday, March 2, 2017

Top 10 Awesome New features of Oracle 12c



1. Invisible Columns ( Reference purpose ):-

We can create columns as INVISIBLE; they will not be shown when we do SELECT * FROM ..
These columns can be used to keep some data for reference purposes. No need for a separate custom table to keep data out of sight - a good one.
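
A minimal sketch ( the table and column names here are hypothetical ):

 CREATE TABLE orders (
   order_id NUMBER,
   notes    VARCHAR2(100) INVISIBLE
 );

 -- SELECT * will not return NOTES; it appears only when named explicitly:
 SELECT order_id, notes FROM orders;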


2. Easy Database Archiving :-

Need to keep full data in a table, and keep performance as the data grows?   Prior to 12c, the options were partitioning and purging.
  
In 12c, we have the amazing option of marking old records as inactive ( In-Database Archiving ), so that those records are not considered during fetching, parsing and data scans. No need for backup, history tables or a purging process to deal with performance issues. 
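
A minimal sketch of In-Database Archiving ( the table name is hypothetical; the ROW ARCHIVAL clause and the ORA_ARCHIVE_STATE column are the actual 12c mechanism ):

 CREATE TABLE orders (
   order_id NUMBER
 ) ROW ARCHIVAL;

 -- Mark a row as archived; normal queries no longer see it:
 UPDATE orders SET ora_archive_state = '1' WHERE order_id = 100;

 -- Make archived rows visible again for the current session:
 ALTER SESSION SET ROW ARCHIVAL VISIBILITY = ALL;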


3. Temporary UNDO or Staging UNDO :-

Prior to 12c, undo records generated by operations on temporary tables were stored in the undo tablespace. With the temporary undo feature in 12c, that undo can be stored in the temporary tablespace instead of the undo tablespace. The benefit is reduced undo tablespace usage and reduced redo log generation. 



4. Online migration of tables ( now partitioned tables also )

Prior to 12c, only non-partitioned tables could be migrated online from one tablespace to another.  From 12c onwards, any partition or sub-partition of a table can also be migrated from one tablespace to another ( online, if the ONLINE clause is specified for the partition ).


5. More than one index for a single column  ( Switch option )


Prior to 12c, only one index could be created on a given column.  Going forward, we can create multiple indexes on the same column ( for example, a B-tree as well as a bitmap index ), but only one of them can be visible/usable at a time. The indexes can be switched so that any one of them is used on a needy basis. 


6. Auto Increment Primary Key - ( Similar to Sequence )

From 12c, we can create a column with the GENERATED AS IDENTITY clause. This is equivalent to creating a separate sequence and doing a sequence.NEXTVAL for each row.   No need to create a separate SEQUENCE object just for incrementing.  I love this feature most. 
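
A minimal sketch ( the table is hypothetical ):

 CREATE TABLE customers (
   customer_id NUMBER GENERATED ALWAYS AS IDENTITY,
   name        VARCHAR2(50)
 );

 -- customer_id is assigned automatically:
 INSERT INTO customers (name) VALUES ('Sathish');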


7. Recovering a table is easy through RMAN  ( Good one )


Prior to 12c, if we had to restore a particular table, we had to do all sorts of things, like restoring a tablespace and/or doing an export and import. The new restore command in RMAN simplifies this task: we can simply restore/recover a table.  A lovely feature.


8. Masking Data for specific users ( REDACTION )

REDACTION is nothing but masking.  We can now mask a field/column of a table for a specific schema/user. When we do SELECT * FROM.. it will show that particular column as masked. 

From SQL Developer we can do this by going to the table:  <TableName> -> right-click -> Security Policy -> New -> Redaction Policy -> enter <ColumnName>


9. Inline Procedures and Functions  ( Great feature among all )


The inline feature is extended in Oracle 12c. In addition to views, we can now have PL/SQL procedures and functions as inline code, defined in the WITH clause of a query. 


The query can be written as if it were calling a real stored function; however, the function does not actually exist in the database. 


We will not be able to find it in ALL_OBJECTS. I personally feel this will be a very good feature for PL/SQL developers to explore, as there is no code that needs to be compiled and managed separately.
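
A minimal sketch of an inline function in the WITH clause ( the function itself is hypothetical; in SQL*Plus the statement is terminated with / ):

 WITH
   FUNCTION double_it(n NUMBER) RETURN NUMBER IS
   BEGIN
     RETURN n * 2;
   END;
 SELECT double_it(21) AS answer FROM dual;
 /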


10. Top 'N' SELECT and FETCH option ( Long-pending feature )


I call this a long-pending feature because non-Oracle DBs have had it for a while.


In 12c, we have new SQL syntax to simplify fetching the first few rows: "FETCH FIRST n ROWS ONLY". No need for inline views, ROWNUM filters and so on.  A big relief for SQL writers. 
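
A minimal sketch ( the table and column names are hypothetical ):

 SELECT   *
 FROM     shipment
 ORDER BY insert_date DESC
 FETCH FIRST 10 ROWS ONLY;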


Hope this is useful. If yes, kindly share.

Sathish Chandran