DBMS_COMPRESSION can be run in parallel, at least from 11.2.0.4

I was trying to use DBMS_COMPRESSION on an 11gR2 (11.2.0.4 on RHEL 6.3) database the other day and was helped by this article from Neil Johnson. I was having some trouble with permissions until I read that article which nicely summarises everything you need to know – thanks Neil!

One thing I did notice is that Neil stated you can't parallelise calls to the advisor, since it uses the same named objects each time and this would cause conflicts (and problems). Neil illustrated the calls the advisor makes, based on tracing the sessions…

create table "ACME".DBMS_TABCOMP_TEMP_UNCMP tablespace "SCRATCH" nologging
 as select /*+ DYNAMIC_SAMPLING(0) FULL("ACME"."ACCS") */ *
 from "ACME"."ACCS" sample block( 99) mytab
 
create table "ACME".DBMS_TABCOMP_TEMP_CMP organization heap 
 tablespace "SCRATCH" compress for all operations nologging
 as select /*+ DYNAMIC_SAMPLING(0) */ *
 from "ACME".DBMS_TABCOMP_TEMP_UNCMP mytab

Because I kept hitting permissions issues I was repeatedly running the advisor, and I ended up in a situation where one of the transient objects (above, or so I thought) had been left in place; the next rerun then complained that the object already existed. I can't reproduce this as I can't remember all the steps I took and I wasn't recording my session at the time – it's not really the point of this blog in any case, rather the knowledge it led to. Since the error was that the object existed, I figured I just needed to find the object and drop it and I'd be good to carry on. Naturally I looked at the above code fragments and searched for the two objects in question (DBMS_TABCOMP_TEMP_UNCMP and DBMS_TABCOMP_TEMP_CMP) but found nothing. I then searched for DBMS_TABCOMP% and again found nothing.

Somewhat confused, I then looked for the latest object created and found that the objects were actually called something completely different and of the form CMPx$yyyyyyyy. I think this must have changed since Neil wrote his article (it is from 2013 after all).

I can't work out what "x" is – at first I thought it was the RAC instance number, but that was just a coincidence: I happened to see a 3 while on instance 3 of a RAC cluster. In fact on a single-instance database (test below) I saw numbers higher than 1, so it's not the RAC instance number and I can't work out what it is. "yyyyyyyy" is definitely the OBJECT_ID of the data object, confirmed by cross-referencing the data dictionary.

Given this naming standard is therefore object specific, it suggests that you could execute these advisor calls in parallel.
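Since the scratch-object name encodes the OBJECT_ID, you can work out which source table a leftover object belongs to from the name alone. A quick sketch of that parsing – in Python rather than SQL, purely as an illustration of the CMPx$yyyyyyyy scheme observed above:

```python
import re

def parse_cmp_name(obj_name):
    """Split an advisor scratch-object name like CMP3$87401 into its
    unknown 'x' part and the OBJECT_ID of the source table."""
    m = re.fullmatch(r"CMP(\d+)\$(\d+)", obj_name)
    if m is None:
        return None  # not an advisor scratch object
    return {"x": int(m.group(1)), "object_id": int(m.group(2))}

print(parse_cmp_name("CMP3$87401"))  # {'x': 3, 'object_id': 87401}
```

You could then look the object_id up in DBA_OBJECTS to find the table being analysed.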

Just to be clear, I’m not sure what version of 11g Neil was using but I am using 11.2.0.4:

SQL> select * from v$version;

BANNER
--------------------------------------------------------------------------------
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production
PL/SQL Release 11.2.0.4.0 - Production
CORE 11.2.0.4.0 Production
TNS for Linux: Version 11.2.0.4.0 - Production
NLSRTL Version 11.2.0.4.0 - Production

A little test using Neil's code (and a bit more, as I don't have an acme user on my database):

First create a script called dbms_comp.sql with the following content:

set serveroutput on
set feedback on
set verify off
 
declare
 blkcnt_cmp BINARY_integer;
 blkcnt_uncmp BINARY_integer;
 row_cmp BINARY_integer;
 row_uncmp BINARY_integer;
 cmp_ratio number;
 comptype_str varchar2(60);
begin
 dbms_compression.get_compression_ratio(
 scratchtbsname => upper('&3.')
 , ownname => upper('&1.')
 , tabname => upper('&2.')
 , partname => null
 , comptype => dbms_compression.comp_for_oltp
 , blkcnt_cmp => blkcnt_cmp
 , blkcnt_uncmp => blkcnt_uncmp
 , row_cmp => row_cmp
 , row_uncmp => row_uncmp
 , cmp_ratio => cmp_ratio
 , comptype_str => comptype_str
 , subset_numrows => &4.
 );
 DBMS_OUTPUT.PUT_LINE('Block count compressed = ' || blkcnt_cmp);
 DBMS_OUTPUT.PUT_LINE('Block count uncompressed = ' || blkcnt_uncmp);
 DBMS_OUTPUT.PUT_LINE('Row count per block compressed = ' || row_cmp);
 DBMS_OUTPUT.PUT_LINE('Row count per block uncompressed = ' || row_uncmp);
 --DBMS_OUTPUT.PUT_LINE('Compression type = ' ||comptype_str);
 DBMS_OUTPUT.PUT_LINE('Compression ratio = '||round(blkcnt_uncmp/blkcnt_cmp,1)||' to 1');
 DBMS_OUTPUT.PUT_LINE('Compression % benefit = '||round((blkcnt_uncmp-blkcnt_cmp)/blkcnt_uncmp*100,1));
 --DBMS_OUTPUT.PUT_LINE('Compression ratio org= '||cmp_ratio);
end;
/
set verify on
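The ratio and percentage-benefit lines at the end of the block are simple block-count arithmetic. For clarity, here is the same calculation sketched in Python (the block counts below are just illustrative numbers, not from a real run):

```python
def compression_summary(blkcnt_uncmp, blkcnt_cmp):
    """Mirror the last two DBMS_OUTPUT lines: ratio to 1, and % benefit."""
    ratio = round(blkcnt_uncmp / blkcnt_cmp, 1)
    benefit_pct = round((blkcnt_uncmp - blkcnt_cmp) / blkcnt_uncmp * 100, 1)
    return ratio, benefit_pct

# e.g. 4000 uncompressed blocks shrinking to 1000 compressed blocks:
print(compression_summary(4000, 1000))  # (4.0, 75.0)
```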

Then create another script called setup.sql with the following content – I’m using auditing (thanks Tim!) to see the statements rather than tracing in this instance:

conn sys as sysdba
drop user acme cascade;
drop user nj cascade;
drop tablespace acme including contents and datafiles;
drop tablespace scratch including contents and datafiles;
drop role nj_dba;
create user acme identified by acme;
grant create session,create table to acme;
create tablespace acme datafile '/u01/app/oracle/oradata/db11g/acme01.dbf' size 2G;
alter user acme quota unlimited on acme;
create tablespace scratch datafile '/u01/app/oracle/oradata/db11g/scratch01.dbf' size 2G;
create role nj_dba;
create user nj identified by nj;
REM Use auditing instead of tracing to identify the statements run:
audit all by nj by access;
audit create table by nj by access;
grant create session, create any table, drop any table, select any table to nj_dba;
grant execute on sys.dbms_monitor to nj_dba;
grant nj_dba to nj;
alter user acme quota unlimited on scratch;
alter user nj quota unlimited on scratch;
grant ANALYZE ANY to NJ_DBA;

Now start SQL*Plus with sqlplus /nolog and run setup.sql, which should show this:

SQL> @setup
Enter password:
Connected.

User dropped.

User dropped.

Tablespace dropped.

Tablespace dropped.

Role dropped.

User created.

Grant succeeded.

Tablespace created.

User altered.

Tablespace created.

Role created.

User created.

Audit succeeded.

Audit succeeded.

Grant succeeded.

Grant succeeded.

Grant succeeded.

User altered.

User altered.

Grant succeeded.

Now log in as acme and create the subject table for the compression advisor:

conn acme/acme
create table test tablespace acme as select lpad(TO_CHAR(ROWNUM),2000,'x') char_col from dual connect by level < 300000;

Table created.

Now check the compression using the advisor (ignore the actual compression results as we’re not interested in those at this time):

conn nj/nj
@dbms_comp acme test scratch 200000
Block count compressed = 2048
Block count uncompressed = 2048
Row count per block compressed = 3
Row count per block uncompressed = 3
Compression ratio = 1 to 1
Compression % benefit = 0

PL/SQL procedure successfully completed.

Now check the audit trail to find the statements run:

conn sys as sysdba
column username format a8 
column obj_name format a30
column timestamp format a30
column action_name format a20
set linesize 200
set pagesize 1000
SELECT USERNAME, action_name,obj_name, to_char(extended_TIMESTAMP,'DD-MON-YYYY HH24:MI:SSxFF') timestamp FROM DBA_AUDIT_TRAIL WHERE USERNAME='NJ' and action_name='CREATE TABLE';

USERNAME ACTION_NAME OBJ_NAME TIMESTAMP
-------- -------------------- ------------------------------ --------------------
NJ CREATE TABLE CMP3$87401 10-DEC-2017 18:01:15.119890
NJ CREATE TABLE CMP4$87401 10-DEC-2017 18:01:15.177518

(Abridged to remove non-relevant tests)

Now check the dictionary to see the OBJECT_ID:

select object_name from dba_objects where object_id=87401;

OBJECT_NAME
--------------------------------------------------------------------------------------------------------------------------------
TEST

1 row selected.

OK, how about the parallelism? Let’s create a second table called TEST2 in ACME:

conn acme/acme
create table test2 tablespace acme as select lpad(TO_CHAR(ROWNUM),2000,'x') char_col from dual connect by level < 300000;

Table created.

Now run two parallel sessions – I did it by firing off the calls manually in separate SQL*Plus sessions rather than being clever:

In session 1:

conn nj/nj
@dbms_comp acme test scratch 200000

In session 2:

conn nj/nj
@dbms_comp acme test2 scratch 200000

 

First one gives:

Block count compressed = 1920
Block count uncompressed = 1920
Row count per block compressed = 3
Row count per block uncompressed = 3
Compression ratio = 1 to 1
Compression % benefit = 0

PL/SQL procedure successfully completed.

Second one gives:

Block count compressed = 2432
Block count uncompressed = 2432
Row count per block compressed = 3
Row count per block uncompressed = 3
Compression ratio = 1 to 1
Compression % benefit = 0

PL/SQL procedure successfully completed.

Both ran at the same time and neither failed.

Now check the audit trail:

conn sys as sysdba
column username format a8 
column obj_name format a30
column timestamp format a30
column action_name format a20
set linesize 200
set pagesize 1000
SELECT USERNAME, action_name,obj_name, to_char(extended_TIMESTAMP,'DD-MON-YYYY HH24:MI:SSxFF') timestamp FROM DBA_AUDIT_TRAIL WHERE USERNAME='NJ' and action_name='CREATE TABLE';
USERNAME ACTION_NAME OBJ_NAME TIMESTAMP
-------- -------------------- ------------------------------ --------------------
NJ CREATE TABLE CMP3$87401 10-DEC-2017 18:01:15.119890
NJ CREATE TABLE CMP4$87401 10-DEC-2017 18:01:15.177518 
NJ CREATE TABLE CMP3$87408 10-DEC-2017 18:12:18.114321
NJ CREATE TABLE CMP3$87409 10-DEC-2017 18:12:18.114353
NJ CREATE TABLE CMP4$87408 10-DEC-2017 18:12:22.730715
NJ CREATE TABLE CMP4$87409 10-DEC-2017 18:12:22.735908
(Abridged to remove non-relevant tests)

And from the dictionary:

select object_name from dba_objects where object_id IN(87408,87409);

OBJECT_NAME
--------------------------------------------------------------------------------------------------------------------------------
TEST
TEST2

2 rows selected.

So, it appears to allow parallel running without issue.

If anyone works out what the “x” part of the object names is (3 and 4 in the above example), please shout out in the comments…

Using AWR to summarise SQL operations activity

I was asked recently how much “use” a database had – a none-too-specific question, but basically the person asking wanted to know how many queries, DML statements, DDL statements and so on were run against the database on a daily basis… kind of like the queries-per-hour metrics that are sometimes quoted on TPC benchmarks, I guess.

I didn’t have an answer as to where to get this information so I figured it warranted a ramble through the AWR tables to try and come up with something…

This isn’t going to be a description of how the AWR works – there are manuals for that. Instead, I’ll just give you the salient points from my exploration.

The AWR data, including its history, is accessible through a collection of DBA views named DBA_HIST%.

Part of the AWR processing captures Active Session History (ASH) data which is then accessible via the DBA_HIST_ACTIVE_SESS_HISTORY view. From this view we can obtain, for any given snap point in time, the sessions which were active, together with details of what they were doing and the statistics relating to the performance of that activity.

On this view is a column called SQL_OPCODE which tells us the type of SQL operation being performed by the session, with various codes indicating different things. I didn't manage to find a definitive list of these SQL_OPCODE values in a table or view anywhere so, by trial and error, I worked them out as follows:

SQL_OPCODE    SQL Operation
1             DDL
2             INSERT
3             Query
6             UPDATE
7             DELETE
47            PL/SQL package call
50            Explain Plan
189           MERGE

…I’m sure this isn’t definitive…someone really smart out there will probably know where this list is actually stored and will, perhaps, put it on a comment to this post 😉
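For scripting purposes, the trial-and-error mapping above drops straight into a lookup table. A sketch in Python (containing only the opcodes identified above – anything else is flagged as unknown, since the list isn't definitive):

```python
# Mapping of DBA_HIST_ACTIVE_SESS_HISTORY.SQL_OPCODE values to SQL
# operation types, as worked out by trial and error in the post above.
SQL_OPCODE_MAP = {
    1: "DDL",
    2: "INSERT",
    3: "Query",
    6: "UPDATE",
    7: "DELETE",
    47: "PL/SQL package call",
    50: "Explain Plan",
    189: "MERGE",
}

def opcode_name(opcode):
    """Return the operation type for an opcode, or mark it as unknown."""
    return SQL_OPCODE_MAP.get(opcode, "Unknown (%s)" % opcode)

print(opcode_name(3))   # Query
print(opcode_name(99))  # Unknown (99)
```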

OK, now to code up something that goes back over this AWR data for the last seven days, say, and groups it by SQL operation so we can see what our activity has been. I used a ROW_NUMBER analytic (cos they rock) to make sure we only get the latest snapshot row from the AWR table – if a SQL operation runs for a long time it may appear in the AWR view more than once, and we don't want to double count it as far as executions go.

I’ve grouped up those sessions which don’t have an active SQL statement associated with them (i.e. SQL_ID is NULL) into groups based on their Object Type and I’ve then used ROLLUP to get the results for each SQL operation by day and also the summed up totals by operation across the seven days and a grand total.

I’m not actually sure why some rows in DBA_HIST_ACTIVE_SESS_HISTORY don’t have a SQL_ID against them – if they were not actually running a statement in the session then that sounds fair enough…but some of them had values in PLSQL_ENTRY_OBJECT_ID indicating (I think) that they were running a specific package (DBMS_STATS or DBMS_SPACE in many cases) so the fact they didn’t have a SQL_ID was confusing to me – perhaps it means they’re in that package but not actually running a query at the time of the snap – either way, they’re grouped up separately.
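The ROW_NUMBER trick – keeping only the most recent sample per (SQL_ID, child number) so long-running statements aren't counted once per snapshot – is easy to mimic outside the database. A sketch over made-up sample rows, just to show the dedup logic:

```python
def latest_per_statement(samples):
    """samples: list of dicts with sql_id, child, sample_time, stmt_type.
    Emulates ROW_NUMBER() OVER (PARTITION BY sql_id, child
    ORDER BY sample_time DESC) = 1: keep the newest row per statement."""
    best = {}
    for s in samples:
        key = (s["sql_id"], s["child"])
        if key not in best or s["sample_time"] > best[key]["sample_time"]:
            best[key] = s
    return list(best.values())

samples = [
    {"sql_id": "a1", "child": 0, "sample_time": 1, "stmt_type": "Query"},
    {"sql_id": "a1", "child": 0, "sample_time": 2, "stmt_type": "Query"},  # same query, later snap
    {"sql_id": "b2", "child": 0, "sample_time": 1, "stmt_type": "DML"},
]
print(len(latest_per_statement(samples)))  # 2 – the long-running query is counted once
```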

Here is a SQL*Plus spool of the code and results from a test system:

x_j4134[543/9757]@AED52> l
1 WITH ash AS
2 (
3 SELECT TRUNC(ash.sample_time) sample_day
4 , (CASE WHEN ash.sql_opcode = 47
5 THEN 'PL/SQL'
6 WHEN ash.sql_opcode IN(1)
7 THEN 'DDL'
8 WHEN ash.sql_opcode IN(2,6,7,189)
9 THEN 'DML'
10 WHEN ash.sql_opcode IN(50)
11 THEN 'Explain Plan'
12 WHEN ash.sql_opcode IN(3)
13 THEN 'Query'
14 ELSE 'No Statement ID; In object type: '||NVL(o.object_type,'Not Specified')
15 END) statement_type
16 , ROW_NUMBER() OVER(PARTITION BY ash.sql_id,ash.sql_child_number ORDER BY ash.sample_time DESC) rn
17 FROM dba_hist_snapshot s
18 , dba_hist_active_sess_history ash
19 , dba_objects o
20 WHERE s.snap_id = ash.snap_id(+)
21 AND s.dbid = ash.dbid(+)
22 AND s.instance_number = ash.instance_number(+)
23 AND ash.plsql_entry_object_id = o.object_id(+)
24 AND TRUNC(ash.sample_time) BETWEEN TRUNC(SYSDATE-6) AND TRUNC(SYSDATE+1) -- all within last 7 days
25 )
26 SELECT sample_day
27 , statement_type
28 , COUNT(1)
29 FROM ash
30 WHERE rn = 1
31 GROUP BY ROLLUP(sample_day)
32 , ROLLUP(statement_type)
33 ORDER BY sample_day
34* , statement_type
x_j4134[543/9757]@AED52> /

SAMPLE_DAY STATEMENT_TYPE COUNT(1)
-------------------- ---------------------------------------------------- ----------
02-FEB-2007 00:00:00 DDL 112
02-FEB-2007 00:00:00 DML 49
02-FEB-2007 00:00:00 No Statement ID; In object type: Not Specified 292
02-FEB-2007 00:00:00 No Statement ID; In object type: PACKAGE 7
02-FEB-2007 00:00:00 No Statement ID; In object type: PROCEDURE 131
02-FEB-2007 00:00:00 PL/SQL 301
02-FEB-2007 00:00:00 Query 181
02-FEB-2007 00:00:00 1073
03-FEB-2007 00:00:00 DDL 20
03-FEB-2007 00:00:00 DML 26
03-FEB-2007 00:00:00 No Statement ID; In object type: Not Specified 91
03-FEB-2007 00:00:00 No Statement ID; In object type: PACKAGE 2
03-FEB-2007 00:00:00 No Statement ID; In object type: PROCEDURE 84
03-FEB-2007 00:00:00 PL/SQL 166
03-FEB-2007 00:00:00 Query 12
03-FEB-2007 00:00:00 401
04-FEB-2007 00:00:00 DDL 127
04-FEB-2007 00:00:00 DML 14
04-FEB-2007 00:00:00 No Statement ID; In object type: Not Specified 410
04-FEB-2007 00:00:00 No Statement ID; In object type: PROCEDURE 305
04-FEB-2007 00:00:00 PL/SQL 306
04-FEB-2007 00:00:00 Query 14
04-FEB-2007 00:00:00 1176
05-FEB-2007 00:00:00 DDL 115
05-FEB-2007 00:00:00 DML 81
05-FEB-2007 00:00:00 Explain Plan 1
05-FEB-2007 00:00:00 No Statement ID; In object type: Not Specified 261
05-FEB-2007 00:00:00 No Statement ID; In object type: PACKAGE 16
05-FEB-2007 00:00:00 No Statement ID; In object type: PROCEDURE 161
05-FEB-2007 00:00:00 PL/SQL 315
05-FEB-2007 00:00:00 Query 360
05-FEB-2007 00:00:00 1310
06-FEB-2007 00:00:00 DDL 98
06-FEB-2007 00:00:00 DML 86
06-FEB-2007 00:00:00 Explain Plan 2
06-FEB-2007 00:00:00 No Statement ID; In object type: Not Specified 212
06-FEB-2007 00:00:00 No Statement ID; In object type: PACKAGE 16
06-FEB-2007 00:00:00 No Statement ID; In object type: PROCEDURE 108
06-FEB-2007 00:00:00 PL/SQL 299
06-FEB-2007 00:00:00 Query 439
06-FEB-2007 00:00:00 1260
07-FEB-2007 00:00:00 DDL 98
07-FEB-2007 00:00:00 DML 162
07-FEB-2007 00:00:00 Explain Plan 1
07-FEB-2007 00:00:00 No Statement ID; In object type: Not Specified 210
07-FEB-2007 00:00:00 No Statement ID; In object type: PACKAGE 24
07-FEB-2007 00:00:00 No Statement ID; In object type: PROCEDURE 96
07-FEB-2007 00:00:00 PL/SQL 337
07-FEB-2007 00:00:00 Query 348
07-FEB-2007 00:00:00 1276
08-FEB-2007 00:00:00 DDL 112
08-FEB-2007 00:00:00 DML 420
08-FEB-2007 00:00:00 Explain Plan 1
08-FEB-2007 00:00:00 No Statement ID; In object type: Not Specified 311
08-FEB-2007 00:00:00 No Statement ID; In object type: PACKAGE 25
08-FEB-2007 00:00:00 No Statement ID; In object type: PROCEDURE 119
08-FEB-2007 00:00:00 PL/SQL 572
08-FEB-2007 00:00:00 Query 493
08-FEB-2007 00:00:00 2053
DDL 682
DML 838
Explain Plan 5
No Statement ID; In object type: Not Specified 1787
No Statement ID; In object type: PACKAGE 90
No Statement ID; In object type: PROCEDURE 1004
PL/SQL 2296
Query 1847
8549

68 rows selected.

Elapsed: 00:09:49.44

It was just an idea…if anyone has anything to add/improve it feel free to comment.

Partition exchange loading and column transposing issue

I came across an interesting issue yesterday whereby a partition exchange load routine wasn’t working for my colleague Jon. The ensuing investigation seems to point at the possibility of a bug, whereby after exchanging a partition Oracle may have inadvertently transposed columns in the target table. I’ve created a test script to illustrate the problem and logged an SR with Metalink to see what they make of it.

In light of my recent post on DBMS_APPLICATION_INFO I’d better state that we found the problem on 10.2.0.2 on HP-UX and then tested it against 9.2.0.6 on HP-UX and found the same results so it seems the issue has been there a while in terms of Oracle releases.

The investigation started down a different avenue – as they often do – with the inability to complete a partition exchange due to the Oracle error:

ORA-14098: index mismatch for tables in ALTER TABLE EXCHANGE PARTITION

Reissuing the exchange partition command with the EXCLUDING INDEXES clause allowed the exchange to proceed so we figured there was definitely a problem with the indexes.

Next we tried dropping all the indexes on both sides of the exchange and recreating them…but when we retried the exchange partition it failed again with the same error.

Right, drastic measures time…we dropped all the indexes on both sides and ran the exchange which, with no indexes around, worked fine (with INCLUDING INDEXES clause in place as it would normally be on this process).

Next we proceeded to add the indexes one by one, starting with the b trees for the primary key and unique keys. This worked without a problem so we moved on to the bitmap indexes which seemed to be fine for the first few but then after adding a few of them it suddenly stopped working. Through a process of trial and error we were able to determine that if two specific bitmap indexes were on the tables then the exchange would fail with ORA-14098.

So, what was so special about those two bitmaps and the columns they were on? We selected the details from dba_tab_columns for the two columns involved in these bitmap indexes and realised that the only difference was that they were at different column positions (COLUMN_ID) on the two tables – which begged the question: if these two tables are not quite the same, how can we be allowed to exchange them?

Well, I guess, in our case, we were able to say that the two objects being exchanged had the same columns, just not necessarily in the same order (those of you who love classic comedy will, I'm sure, be recalling Morecambe and Wise with André Previn right about now)… so should this mean we can swap the tables or not?

To try and illustrate the scenario I built a test script… the output from a run follows – sorry it's a bit long, but it does prove some useful conclusions (I think):

> @test_pel_bug_mismatch_columns
> REM Drop the tables…
> DROP TABLE jeff_test
/

Table dropped.

> DROP TABLE jeff_test_ptn
/

Table dropped.

> REM Create the tables…
> CREATE TABLE jeff_test_ptn(partition_key_col NUMBER NOT NULL
2 ,column1 NUMBER NOT NULL
3 ,column3 NUMBER NOT NULL
4 ,column2 NUMBER NOT NULL
5 ,column4 NUMBER NOT NULL
6 )
7 PARTITION BY RANGE (partition_key_col)
8 (
9 PARTITION p001 VALUES LESS THAN (1000001)
10 ,PARTITION p002 VALUES LESS THAN (MAXVALUE)
11 )
12 /

Table created.

> CREATE TABLE jeff_test(partition_key_col NUMBER NOT NULL
2 ,column1 NUMBER NOT NULL
3 ,column2 NUMBER NOT NULL
4 ,column3 NUMBER NOT NULL
5 ,column4 NUMBER NOT NULL
6 )
7 /

Table created.

> REM Populate the table…
> INSERT /*+ APPEND */
2 INTO jeff_test_ptn(partition_key_col
3 ,column1
4 ,column2
5 ,column3
6 ,column4
7 )
8 SELECT idx
9 , MOD(idx,2)
10 , MOD(idx,3)
11 , MOD(idx,4)
12 , MOD(idx,2)
13 FROM (SELECT LEVEL idx FROM dual CONNECT BY LEVEL <= 36000)
14 /

36000 rows created.

> COMMIT
2 /

Commit complete.

> INSERT /*+ APPEND */
2 INTO jeff_test(partition_key_col
3 ,column1
4 ,column2
5 ,column3
6 ,column4
7 )
8 SELECT idx
9 , MOD(idx,2)
10 , MOD(idx,5)
11 , MOD(idx,6)
12 , MOD(idx,2)
13 FROM (SELECT LEVEL idx FROM dual CONNECT BY LEVEL <= 36000)
14 /

36000 rows created.

> COMMIT
2 /

Commit complete.

> REM Count the rows in COLUMN2 and COLUMN3 for both tables…
>
> REM Should see 3 rows…all NUMBERS
> SELECT column2,COUNT(1) FROM jeff_test_ptn GROUP BY column2;

COLUMN2 COUNT(1)
---------- ----------
1 12000
2 12000
0 12000

3 rows selected.

>
> REM Should see 4 rows…all NUMBERS
> SELECT column3,COUNT(1) FROM jeff_test_ptn GROUP BY column3;

COLUMN3 COUNT(1)
---------- ----------
1 9000
2 9000
3 9000
0 9000

4 rows selected.

>
> REM Should see 5 rows…all NUMBERS
> SELECT column2,COUNT(1) FROM jeff_test GROUP BY column2;

COLUMN2 COUNT(1)
---------- ----------
1 7200
2 7200
4 7200
3 7200
0 7200

5 rows selected.

>
> REM Should see 6 rows…all NUMBERS
> SELECT column3,COUNT(1) FROM jeff_test GROUP BY column3;

COLUMN3 COUNT(1)
---------- ----------
1 6000
2 6000
4 6000
5 6000
3 6000
0 6000

6 rows selected.

>
> REM Now lets try and swap the partition and the table…
> ALTER TABLE jeff_test_ptn
2 EXCHANGE PARTITION p001
3 WITH TABLE jeff_test
4 INCLUDING INDEXES
5 WITHOUT VALIDATION
6 /

Table altered.

>
> REM Surprisingly, it lets us do the above operation without complaining.
> REM Even worse…it transposes the values in COLUMN2 and COLUMN3…
>
> REM Should see 5 rows…but we see 6 rows
> SELECT column2,COUNT(1) FROM jeff_test_ptn GROUP BY column2;

COLUMN2 COUNT(1)
---------- ----------
1 6000
2 6000
4 6000
5 6000
3 6000
0 6000

6 rows selected.

>
> REM Should see 6 rows…but we see 5 rows
> SELECT column3,COUNT(1) FROM jeff_test_ptn GROUP BY column3;

COLUMN3 COUNT(1)
---------- ----------
1 7200
2 7200
4 7200
3 7200
0 7200

5 rows selected.

>
> REM Should see 3 rows…but we see 4 rows
> SELECT column2,COUNT(1) FROM jeff_test GROUP BY column2;

COLUMN2 COUNT(1)
---------- ----------
1 9000
2 9000
3 9000
0 9000

4 rows selected.

>
> REM Should see 4 rows…but we see 3 rows
> SELECT column3,COUNT(1) FROM jeff_test GROUP BY column3;

COLUMN3 COUNT(1)
---------- ----------
1 12000
2 12000
0 12000

3 rows selected.

>
> REM Now, lets try again but with COLUMN2 and COLUMN3
> REM being of different datatypes…
>
> REM Drop the tables…
> DROP TABLE jeff_test
2 /

Table dropped.

> DROP TABLE jeff_test_ptn
2 /

Table dropped.

> REM Create the tables…
> CREATE TABLE jeff_test_ptn(partition_key_col NUMBER NOT NULL
2 ,column1 NUMBER NOT NULL
3 ,column3 NUMBER NOT NULL
4 ,column2 DATE NOT NULL
5 ,column4 NUMBER NOT NULL
6 )
7 PARTITION BY RANGE (partition_key_col)
8 (
9 PARTITION p001 VALUES LESS THAN (1000001)
10 ,PARTITION p002 VALUES LESS THAN (MAXVALUE)
11 )
12 /

Table created.

> CREATE TABLE jeff_test(partition_key_col NUMBER NOT NULL
2 ,column1 NUMBER NOT NULL
3 ,column2 DATE NOT NULL
4 ,column3 NUMBER NOT NULL
5 ,column4 NUMBER NOT NULL
6 )
7 /

Table created.

> REM Populate the table…
> INSERT /*+ APPEND */
2 INTO jeff_test_ptn(partition_key_col
3 ,column1
4 ,column2
5 ,column3
6 ,column4
7 )
8 SELECT idx
9 , MOD(idx,2)
10 , SYSDATE + MOD(idx,3)
11 , MOD(idx,4)
12 , MOD(idx,2)
13 FROM (SELECT LEVEL idx FROM dual CONNECT BY LEVEL <= 36000)
14 /

36000 rows created.

> COMMIT
2 /

Commit complete.

> INSERT /*+ APPEND */
2 INTO jeff_test(partition_key_col
3 ,column1
4 ,column2
5 ,column3
6 ,column4
7 )
8 SELECT idx
9 , MOD(idx,2)
10 , SYSDATE + MOD(idx,5)
11 , MOD(idx,6)
12 , MOD(idx,2)
13 FROM (SELECT LEVEL idx FROM dual CONNECT BY LEVEL <= 36000)
14 /

36000 rows created.

> COMMIT
2 /

Commit complete.

> REM Count the rows in COLUMN2 and COLUMN3 for both tables…
>
> REM Should see 3 rows…all DATES
> SELECT column2,COUNT(1) FROM jeff_test_ptn GROUP BY column2;

COLUMN2 COUNT(1)
-------------------- ----------
27-SEP-2006 11:49:03 12000
28-SEP-2006 11:49:03 12000
26-SEP-2006 11:49:03 12000

3 rows selected.

>
> REM Should see 4 rows…all NUMBERS
> SELECT column3,COUNT(1) FROM jeff_test_ptn GROUP BY column3;

COLUMN3 COUNT(1)
---------- ----------
1 9000
2 9000
3 9000
0 9000

4 rows selected.

>
> REM Should see 5 rows…all DATES
> SELECT column2,COUNT(1) FROM jeff_test GROUP BY column2;

COLUMN2 COUNT(1)
-------------------- ----------
30-SEP-2006 11:49:03 7200
27-SEP-2006 11:49:03 7200
28-SEP-2006 11:49:03 7200
29-SEP-2006 11:49:03 7200
26-SEP-2006 11:49:03 7200

5 rows selected.

>
> REM Should see 6 rows…all NUMBERS
> SELECT column3,COUNT(1) FROM jeff_test GROUP BY column3;

COLUMN3 COUNT(1)
---------- ----------
1 6000
2 6000
4 6000
5 6000
3 6000
0 6000

6 rows selected.

>
> REM Now lets try and swap the partition and the table…
> REM It will fail with error…
> REM ORA-14097: column type or size mismatch in ALTER TABLE EXCHANGE PARTITION
>
> ALTER TABLE jeff_test_ptn
2 EXCHANGE PARTITION p001
3 WITH TABLE jeff_test
4 INCLUDING INDEXES
5 WITHOUT VALIDATION
6 /
ALTER TABLE jeff_test_ptn
*
ERROR at line 1:
ORA-14097: column type or size mismatch in ALTER TABLE EXCHANGE PARTITION

>
> REM So, it only fails when the columns have the same datatypes.
>
> REM Now, lets say they are the same datatype and look at how bitmap indexes
> REM are affected in the PARTITION EXCHANGE process…
>
> REM First lets recreate the tables with COLUMN2 and COLUMN3
> REM having the same datatype…
> REM Drop the tables…
> DROP TABLE jeff_test
2 /

Table dropped.

> DROP TABLE jeff_test_ptn
2 /

Table dropped.

> REM Create the tables…
> CREATE TABLE jeff_test_ptn(partition_key_col NUMBER NOT NULL
2 ,column1 NUMBER NOT NULL
3 ,column3 NUMBER NOT NULL
4 ,column2 NUMBER NOT NULL
5 ,column4 NUMBER NOT NULL
6 )
7 PARTITION BY RANGE (partition_key_col)
8 (
9 PARTITION p001 VALUES LESS THAN (1000001)
10 ,PARTITION p002 VALUES LESS THAN (MAXVALUE)
11 )
12 /

Table created.

> CREATE TABLE jeff_test(partition_key_col NUMBER NOT NULL
2 ,column1 NUMBER NOT NULL
3 ,column2 NUMBER NOT NULL
4 ,column3 NUMBER NOT NULL
5 ,column4 NUMBER NOT NULL
6 )
7 /

Table created.

> REM Populate the table…
> INSERT /*+ APPEND */
2 INTO jeff_test_ptn(partition_key_col
3 ,column1
4 ,column2
5 ,column3
6 ,column4
7 )
8 SELECT idx
9 , MOD(idx,2)
10 , MOD(idx,3)
11 , MOD(idx,4)
12 , MOD(idx,2)
13 FROM (SELECT LEVEL idx FROM dual CONNECT BY LEVEL <= 36000)
14 /

36000 rows created.

> COMMIT
2 /

Commit complete.

> INSERT /*+ APPEND */
2 INTO jeff_test(partition_key_col
3 ,column1
4 ,column2
5 ,column3
6 ,column4
7 )
8 SELECT idx
9 , MOD(idx,2)
10 , MOD(idx,5)
11 , MOD(idx,6)
12 , MOD(idx,2)
13 FROM (SELECT LEVEL idx FROM dual CONNECT BY LEVEL <= 36000)
14 /

36000 rows created.

> COMMIT
2 /

Commit complete.

>
> REM Now lets create a bitmap index on COLUMN1 on both tables…
> CREATE BITMAP INDEX jtp1 ON jeff_test_ptn(column1) LOCAL
2 /

Index created.

> CREATE BITMAP INDEX jt1 ON jeff_test(column1)
2 /

Index created.

>
> REM …and now try PARTITION EXCHANGE…
> ALTER TABLE jeff_test_ptn
2 EXCHANGE PARTITION p001
3 WITH TABLE jeff_test
4 INCLUDING INDEXES
5 WITHOUT VALIDATION
6 /

Table altered.

> REM It works fine.
>
> REM Now lets create a bitmap index on COLUMN4 on both tables…
> CREATE BITMAP INDEX jtp4 ON jeff_test_ptn(column4) LOCAL
2 /

Index created.

> CREATE BITMAP INDEX jt4 ON jeff_test(column4)
2 /

Index created.

>
> REM …and now try PARTITION EXCHANGE…
> ALTER TABLE jeff_test_ptn
2 EXCHANGE PARTITION p001
3 WITH TABLE jeff_test
4 INCLUDING INDEXES
5 WITHOUT VALIDATION
6 /

Table altered.

>
> REM It works fine.
>
> REM Now lets create a bitmap index on COLUMN2 on both tables…
> CREATE BITMAP INDEX jtp2 ON jeff_test_ptn(column2) LOCAL
2 /

Index created.

> CREATE BITMAP INDEX jt2 ON jeff_test(column2)
2 /

Index created.

>
> REM …and now try PARTITION EXCHANGE…
> ALTER TABLE jeff_test_ptn
2 EXCHANGE PARTITION p001
3 WITH TABLE jeff_test
4 INCLUDING INDEXES
5 WITHOUT VALIDATION
6 /
WITH TABLE jeff_test
*
ERROR at line 3:
ORA-14098: index mismatch for tables in ALTER TABLE EXCHANGE PARTITION

Right, now what can we conclude from the above?

1. If you don’t have indexes on your tables then you can do an exchange even if the columns are in a different order – but the column values are silently transposed. No errors are given, and I think this is a bug.
2. You will only hit the problem in 1 if the transposed columns are of the same datatype – I’ve not checked scale/precision/length; it may be, for example, that transposing a VARCHAR2(20) and a VARCHAR2(10) would raise an ORA-14097 error.
3. If you have indexes on columns which are transposed then the exchange fails with ORA-14098. I don’t know whether this is specific to bitmap indexes as I’ve not tested it any further.
4. If you only have indexes on columns which are not transposed then the exchange succeeds with no errors – but your data is, again, silently transposed.
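The heart of conclusion 1 is that the exchange is positional, not name-based: the partition and the table swap segments, so row values stay in physical column order while the data dictionary keeps each table's own column names. A sketch of that effect with toy rows (Python, purely illustrative – the real exchange is a dictionary operation, this just shows why the values appear transposed):

```python
def exchange_by_position(row_values, source_cols, target_cols):
    """Row values travel by position; column names come from the target
    table's own definition, mimicking the positional exchange."""
    assert len(source_cols) == len(target_cols) == len(row_values)
    return dict(zip(target_cols, row_values))

# jeff_test declares (column1, column2, column3); jeff_test_ptn declares
# the same columns in the order (column1, column3, column2).
row = {"column1": 1, "column2": 5, "column3": 3}
source = ["column1", "column2", "column3"]
target = ["column1", "column3", "column2"]
moved = exchange_by_position([row[c] for c in source], source, target)
print(moved)  # column2's old value now appears under column3, and vice versa
```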