Category: Oracle

Using partitioning to colocate data for optimal multiple index access paths

I came across a situation the other day where a client was accessing rows from a table via one of several different indexes and getting poor performance on all of them.

The table in question had about five million rows and four indexes. The process in question hit each of those indexes at one point or another.

I noticed that the clustering factor for each of the indexes was quite poor – a value closer to the number of rows in the table than to the number of blocks in the table. This was leading to a high number of logical IOs to get the rows needed by each query in the process, because the rows pointed to by the index were all in different data blocks, scattered far and wide. As I recall, Jonathan gives a great explanation of how the clustering factor can affect the performance of a query in Chapter 5 of his CBO Fundamentals book, and it was directly relevant to this scenario.
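
For anyone wanting to do the same check, here’s a rough sketch of the sort of query I used to compare each index’s clustering factor with the table’s blocks and rows (the table name is illustrative):

SELECT i.index_name
,      i.clustering_factor
,      t.blocks
,      t.num_rows
FROM   user_indexes i
,      user_tables  t
WHERE  i.table_name = t.table_name
AND    t.table_name = 'MY_TABLE'
/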

A fix for a poor clustering factor is to sort the data in the table into the order which matches that of the index in question. So, if INDEX_A is on columns A then B, you should order the data in the table by column A, then B…perhaps by removing it all and then reinserting it using an ORDER BY on the insert.
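
As a sketch of what I mean (table and column names made up), one way is to rebuild the table with a CTAS:

CREATE TABLE my_table_ordered AS
SELECT *
FROM   my_table
ORDER  BY col_a, col_b
/
DROP TABLE my_table
/
RENAME my_table_ordered TO my_table
/
-- ...then recreate the indexes, constraints and grants on the renamed table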

This, of course, only works for one order – you can’t order data in multiple different ways within the same table to suit all the indexes, so for one index you might be able to order the data perfectly, but that ordering might be terrible for another index which wants the data ordered by, say, column C, then D…or does it?

Well, in this particular case, the data required by each query hitting different indexes on this single table was all of a different type – by this I mean, the table itself was modelled from a Supertype Entity, implemented physically as a single table with a Supertype differentiator column (PRODUCT_TYPE in this case). Query one in the process wanted data for PRODUCT_TYPE “X”, and wanted to get this via Index IDX1 on COL1. Query two would then want data of PRODUCT_TYPE “Y” and get it via index IDX2 on COL2 etc…

I figured that if we split the table into partitions based on the PRODUCT_TYPE then we’d be able to order the data within each partition by the appropriate order for the index which would commonly be used to access data of that PRODUCT_TYPE.

It will take them a while to test and implement the solution and gain some empirical results, however a simple test script I knocked up seemed to indicate that the solution could work:

The script…


CLEAR BREAKS
CLEAR COLUMNS
COLUMN index_name HEADING "IndexName" FORMAT A10
COLUMN partition_name HEADING "PartitionName" FORMAT A10
COLUMN leaf_blocks HEADING "LeafBlocks" FORMAT 999
COLUMN distinct_keys HEADING "DistinctKeys" FORMAT 999,999
COLUMN num_rows HEADING "NumRows" FORMAT 999,999
COLUMN clustering_factor HEADING "ClusteringFactor" FORMAT 999,999
COLUMN clfpct HEADING "Clu FactPercent" FORMAT 999

DROP TABLE jeff_unpartitioned PURGE
/
CREATE TABLE jeff_unpartitioned
AS
SELECT l pk_col
, MOD(l,4) partitioning_key_col
, (dbms_random.value * 1000) index_col1
, (dbms_random.value * 1000) index_col2
, (dbms_random.value * 1000) index_col3
, (dbms_random.value * 1000) index_col4
FROM (SELECT level l FROM dual CONNECT BY LEVEL < 100001)
ORDER BY l
/
CREATE UNIQUE INDEX ju_pk ON jeff_unpartitioned(pk_col)
/
CREATE INDEX ju_idx1 ON jeff_unpartitioned(index_col1)
/
CREATE INDEX ju_idx2 ON jeff_unpartitioned(index_col2)
/
CREATE INDEX ju_idx3 ON jeff_unpartitioned(index_col3)
/
CREATE INDEX ju_idx4 ON jeff_unpartitioned(index_col4)
/
exec DBMS_STATS.GATHER_TABLE_STATS (ownname=>USER,tabname=>'JEFF_UNPARTITIONED');

SELECT i.index_name
, i.leaf_blocks
, i.distinct_keys
, i.clustering_factor
, (DECODE((t.num_rows - t.blocks),0,0,(i.clustering_factor) / (t.num_rows - t.blocks))) * 100 clfpct
FROM all_indexes i
, all_tables t
WHERE i.table_name = t.table_name
AND i.table_owner = t.owner
AND t.table_name = 'JEFF_UNPARTITIONED'
AND t.owner = USER
/

DROP TABLE jeff_partitioned PURGE
/
CREATE TABLE jeff_partitioned
(pk_col number
,partitioning_key_col number
,index_col1 number
,index_col2 number
,index_col3 number
,index_col4 number
)
PARTITION BY LIST(partitioning_key_col)
(PARTITION p1 VALUES (1)
,PARTITION p2 VALUES (2)
,PARTITION p3 VALUES (3)
,PARTITION p4 VALUES (4)
)
/
INSERT /*+ APPEND */
INTO jeff_partitioned(pk_col,partitioning_key_col,index_col1,index_col2,index_col3,index_col4)
SELECT l
, 1
, ROUND(l,100)
, (dbms_random.value * 1000)
, (dbms_random.value * 1000)
, (dbms_random.value * 1000)
FROM (SELECT level l FROM dual CONNECT BY LEVEL < 25001)
ORDER BY 3
/
COMMIT
/
INSERT /*+ APPEND */
INTO jeff_partitioned(pk_col,partitioning_key_col,index_col1,index_col2,index_col3,index_col4)
SELECT l
, 2
, (dbms_random.value * 1000)
, ROUND(l,100)
, (dbms_random.value * 1000)
, (dbms_random.value * 1000)
FROM (SELECT level l FROM dual CONNECT BY LEVEL < 25001)
ORDER BY 4
/
COMMIT
/
INSERT /*+ APPEND */
INTO jeff_partitioned(pk_col,partitioning_key_col,index_col1,index_col2,index_col3,index_col4)
SELECT l
, 3
, (dbms_random.value * 1000)
, (dbms_random.value * 1000)
, ROUND(l,100)
, (dbms_random.value * 1000)
FROM (SELECT level l FROM dual CONNECT BY LEVEL < 25001)
ORDER BY 5
/
COMMIT
/
INSERT /*+ APPEND */
INTO jeff_partitioned(pk_col,partitioning_key_col,index_col1,index_col2,index_col3,index_col4)
SELECT l
, 4
, (dbms_random.value * 1000)
, (dbms_random.value * 1000)
, (dbms_random.value * 1000)
, ROUND(l,100)
FROM (SELECT level l FROM dual CONNECT BY LEVEL < 25001)
ORDER BY 6
/
COMMIT
/
CREATE UNIQUE INDEX jp_pk ON jeff_partitioned(pk_col,partitioning_key_col) LOCAL
/
CREATE INDEX jp_idx1 ON jeff_partitioned(index_col1,partitioning_key_col) LOCAL
/
CREATE INDEX jp_idx2 ON jeff_partitioned(index_col2,partitioning_key_col) LOCAL
/
CREATE INDEX jp_idx3 ON jeff_partitioned(index_col3,partitioning_key_col) LOCAL
/
CREATE INDEX jp_idx4 ON jeff_partitioned(index_col4,partitioning_key_col) LOCAL
/
exec DBMS_STATS.GATHER_TABLE_STATS (ownname=>USER,tabname=>'JEFF_PARTITIONED');

SELECT i.index_name
, i.leaf_blocks
, i.distinct_keys
, i.clustering_factor
, (DECODE((t.num_rows - t.blocks),0,0,(i.clustering_factor) / (t.num_rows - t.blocks))) * 100 clfpct
FROM all_indexes i
, all_tables t
WHERE i.table_name = t.table_name
AND i.table_owner = t.owner
AND t.table_name = 'JEFF_PARTITIONED'
AND t.owner = USER
ORDER BY i.index_name
/

SELECT i.index_name
, ip.partition_name
, ip.leaf_blocks
, ip.distinct_keys
, ip.clustering_factor
, (DECODE((t.num_rows - t.blocks),0,0,(ip.clustering_factor) / (t.num_rows - t.blocks))) * 100 clfpct
FROM all_indexes i
, all_ind_partitions ip
, all_tables t
WHERE i.table_name = t.table_name
AND i.table_owner = t.owner
AND i.index_name = ip.index_name
AND i.owner = ip.index_owner
AND t.table_name = 'JEFF_PARTITIONED'
AND t.owner = USER
ORDER BY i.index_name
, ip.partition_name
/

And now the results…


Table dropped.

Table created.

Index created.

Index created.

Index created.

Index created.

Index created.

PL/SQL procedure successfully completed.

Index Leaf Distinct Clustering Clu Fact
Name Blocks Keys Factor Percent
———- —— ——– ———- ——–
JU_IDX4 226 100,000 99,830 101
JU_IDX3 226 100,000 99,880 101
JU_IDX2 226 100,000 99,872 101
JU_IDX1 226 100,000 99,833 101
JU_PK 103 100,000 685 1

5 rows selected.

Table dropped.

Table created.

25000 rows created.

Commit complete.

25000 rows created.

Commit complete.

25000 rows created.

Commit complete.

25000 rows created.

Commit complete.

Index created.

Index created.

Index created.

Index created.

Index created.

PL/SQL procedure successfully completed.

Index Leaf Distinct Clustering Clu Fact
Name Blocks Keys Factor Percent
———- —— ——– ———- ——–
JP_IDX1 218 100,000 74,593 75
JP_IDX2 218 100,000 74,624 75
JP_IDX3 218 100,000 74,609 75
JP_IDX4 218 100,000 74,627 75
JP_PK 124 100,000 568 1

5 rows selected.

Index Partition Leaf Distinct Clustering Clu Fact
Name Name Blocks Keys Factor Percent
———- ———- —— ——– ———- ——–
JP_IDX1 P1 32 25,000 142 0
JP_IDX1 P2 62 25,000 24,826 25
JP_IDX1 P3 62 25,000 24,811 25
JP_IDX1 P4 62 25,000 24,814 25
JP_IDX2 P1 62 25,000 24,812 25
JP_IDX2 P2 32 25,000 142 0
JP_IDX2 P3 62 25,000 24,835 25
JP_IDX2 P4 62 25,000 24,835 25
JP_IDX3 P1 62 25,000 24,802 25
JP_IDX3 P2 62 25,000 24,826 25
JP_IDX3 P3 32 25,000 142 0
JP_IDX3 P4 62 25,000 24,839 25
JP_IDX4 P1 62 25,000 24,830 25
JP_IDX4 P2 62 25,000 24,832 25
JP_IDX4 P3 62 25,000 24,823 25
JP_IDX4 P4 32 25,000 142 0
JP_PK P1 31 25,000 142 0
JP_PK P2 31 25,000 142 0
JP_PK P3 31 25,000 142 0
JP_PK P4 31 25,000 142 0

20 rows selected.

From the results we can see that when the table is unpartitioned and, in this instance, ordered by the primary key, all four indexes have high clustering factors.

We could order the data by one of the columns in one of the indexes and make that index get a good (low) clustering factor, but the other indexes would still remain with high values – you can’t make them all have low clustering factors whilst the table is one big amorphous mass of data.

In the partitioned table we are able to order the data in the most efficient way for each partition, insofar as each partition maps to an access path via a specific index.

So, partition P1, where users would commonly access the data via index JP_IDX1, has a low clustering factor because we ordered the data by INDEX_COL1 on insert.

Partition P2 has the lowest clustering factor for index JP_IDX2 and P3 for JP_IDX3 etc…

Some caveats to bear in mind for this approach:

1. You should ensure that your queries actually use the partitioning key column when accessing the table, otherwise you’ll end up scanning extra index partitions unnecessarily (see the example query after these caveats). You will also make life extremely difficult, if not impossible, for the optimizer: if it can’t identify that the query will only touch one partition, then the CBO is forced to use the global stats on the index/table rather than the specific ones for a single partition. As you can see from the results, the global stats still show a high clustering factor, because they are an aggregate which hides the detail that one specific partition – the one we really want – has a low clustering factor.

2. If you want LOCAL indexes, you will need to add the partitioning key column to that index – if it is not already present, of course. As Jonathan points out in the first comment on this post, this only applies if the index in question is unique and supports a PK/unique constraint. If the index does not support a PK/unique constraint then it can be LOCAL without the need to have the partitioning key column(s) in it.

3. If you are using GLOBAL indexes on the partitioned table then they will still have reasonably high clustering factors, due to the partitions where the data is not ordered by the columns of that index, but the data itself will be ordered within the partition we access via the partitioning key column, so we should still get good performance.

4. If you want ordered data in a table then you need to factor that into the way you populate the table – if it’s a batch process then you can do this as part of that process, but if the table is built up over time, in an OLTP fashion, then you’ll need periodic “rebuilds” of the table to take the data out and put it back in, reordered. This is something you would need to do whether or not it was partitioned, of course.
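
To illustrate caveat 1 using the test tables above, here’s a sketch of the kind of predicate I mean (the literal values are made up); including the partitioning key lets the optimizer prune to the single partition and use that partition’s much lower clustering factor:

SELECT *
FROM   jeff_partitioned
WHERE  partitioning_key_col = 1           -- prunes to partition P1
AND    index_col1 BETWEEN 100 AND 110     -- served well by JP_IDX1 within P1
/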

I’m sure there are more caveats.

Your mileage may, of course, vary.

Is ASSM evil in a DW?

I saw an interesting article on ASSM from Howard the other day. I wanted to comment on it but, unless I’m being a bit thick, it doesn’t seem to be a blog post and therefore I couldn’t – so instead I thought I’d do it here.

As I said, I found the article interesting and informative but I did have a few “But what about…?” issues that popped into my head as I read through it.

It probably won’t surprise anyone who reads my blog that my queries relate to the use of ASSM in a data warehouse environment, given my interest in that area.

Firstly, Howard asserted that ASSM wouldn’t be helpful in scenarios where you received sorted data from a warehouse source which you wanted to load into a target table so that you could gain a performance benefit when creating an index on that data by taking advantage of the NOSORT option – thereby speeding up the index creation considerably. Nothing Howard said was untrue, although there are a number of things that could be discussed in relation to this point…
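
For reference, here’s a hedged sketch of the kind of index build Howard was describing (the table and index names are made up); if the rows really are stored in key order, the sort can be skipped:

CREATE INDEX stg_sales_load_date_idx ON stg_sales_load(load_date) NOSORT
/
-- raises ORA-01409 if the rows turn out not to be in ascending order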

For one thing, in my experience most warehouse processing uses Oracle parallel execution for performance – but if one were to load data into a target table using parallel processing, then even if it were sorted in the source, the use of parallel processing would result in the target being unsorted and hence this advantage would be lost. Of course you could resort to serial processing, but would that be more efficient than doing the work in parallel and then sorting the data for the index build, again using parallel execution? Each process is probably different, but I guess what I’m saying is that the NOSORT issue is not necessarily relevant in all cases, especially those on a DW using parallel execution.

There are a number of caveats with the use of the NOSORT option such as:

You cannot specify REVERSE with this clause.
You cannot use this clause to create a cluster index, a partitioned index or a bitmap index.
You cannot specify this clause for a secondary index on an index-organized table.
Read more here

The use of direct path operations on a DW might also mean that the space wastage doesn’t occur, given that Oracle constructs full blocks from clean ones rather than via freelist management or the ASSM equivalent.

Howard suggested that tests he has undertaken showed a 1.5% excess space utilisation when ASSM was involved on a 1Tb table. A long time ago I would have been horrified to lose 15Gb, but these days I probably wouldn’t be as concerned, particularly as I often find ways to save that kind of storage just as easily in other areas, e.g. ensuring we use compression appropriately for tables and indexes and ensuring we only have the indexes we need, of the right type and structure. If the solution to save that 1.5% were simple then I’d consider that too – not using ASSM sounds like a simple solution, but I guess it depends on what the DBA standards are for the organisation, amongst other factors.

Howard mentioned that with ASSM “wasting” space and using more blocks to store the same number of rows, a full scan would need to read more blocks for a given table, and that the numerous, repeatedly accessed and therefore “hot” ASSM bitmap blocks could lead to other, “warm” data being aged out sooner than would otherwise be the case. It’s a reasonable hypothesis, and if we were full scanning a 1Tb table that had 1.5%, or 15Gb, of “waste” – I’m making a big assumption due to my lack of research here, in guessing that the 15Gb is largely made up of these ASSM bitmap blocks – then yes, 15Gb of blocks could have a considerable impact on a buffer cache. But that assumes you’d be reading the whole 1Tb table via a full scan – one would imagine the table is partitioned by some reasonable key (hopefully including time) and that you’d therefore only be reading a much smaller subset, with a consequently reduced effect on the buffer cache. The effect is there, but its impact would be system/query dependent.

Since the read (full scan) of the table itself would result in blocks going on the LRU end of the LRU list for the buffer cache, that wouldn’t affect the caching of hot/warm data at all – it’s just a question of whether the ASSM bitmap blocks would be aging out other “warm” data to the detriment of overall system performance.

Whilst it raised a number of queries for me, the article Howard posted has certainly sparked quite a debate within the DBA team on the warehouse I’m currently working on…it should be fun this week as we contemplate performing a number of benchmarks to try and ascertain which way we should jump – if any!

On another note, even though he’s moved on to pastures new, I think Doug Burns must have had something to do with the latest Sky TV adverts as the phone number they wanted you to call was 08702 42 42 42 – now that’s what I call leaving a lasting impression!

Changing the subpartition template and pre-existing partitions

My colleague Tank could not insert some records into a subpartitioned table the other day as he kept getting an ORA-14400 error and couldn’t work out why – nor could I straight away until he mentioned the possibility that the subpartition template had changed on the table…which it had.

Let’s try and work through the problem…first let’s create some test tables…

DROP TABLE j4134_test1 PURGE
/
CREATE TABLE j4134_test1
(
col1  NUMBER        NOT NULL
,col2  NUMBER        NOT NULL
,col3  VARCHAR2(254) NOT NULL
)
PARTITION BY RANGE (col1)
SUBPARTITION BY LIST(col2)
SUBPARTITION TEMPLATE
(SUBPARTITION s1 VALUES (1),
 SUBPARTITION s2 VALUES (2),
 SUBPARTITION s3 VALUES (3)
)
(
PARTITION p1 VALUES LESS THAN (1000)
,PARTITION p2 VALUES LESS THAN (2000)
,PARTITION p3 VALUES LESS THAN (3000)
,PARTITION p4 VALUES LESS THAN (4000)
)
/
ALTER TABLE j4134_test1 SET SUBPARTITION TEMPLATE
(SUBPARTITION s1 VALUES (1),
 SUBPARTITION s2 VALUES (2)
)
/
ALTER TABLE j4134_test1 ADD PARTITION p5 VALUES LESS THAN(5000)
/
DROP TABLE j4134_test2 PURGE
/
CREATE TABLE j4134_test2 (
col1  NUMBER        NOT NULL
,col2  NUMBER        NOT NULL
,col3  VARCHAR2(254) NOT NULL
)
PARTITION BY LIST (col2)
(
PARTITION s1 VALUES (1)
,PARTITION s2 VALUES (2)
,PARTITION s3 VALUES (3)
)
/

OK, that’s everything set up…now let’s just have a look at what partitions and subpartitions we have:

ae_aml[522/313]@AED52> SELECT partition_name
2  ,      subpartition_name
3  FROM   user_tab_subpartitions
4  WHERE  table_name='J4134_TEST1'
5  /

Partition            Sub Part
Name                 Name
-------------------- ------------
P1                   P1_S1
P1                   P1_S2
P1                   P1_S3
P2                   P2_S1
P2                   P2_S2
P2                   P2_S3
P3                   P3_S1
P3                   P3_S2
P3                   P3_S3
P4                   P4_S1
P4                   P4_S2
P4                   P4_S3
P5                   P5_S1
P5                   P5_S2

14 rows selected.

Notice that Partition P5 has only two subpartitions whilst the other partitions all have three subpartitions.

Now let’s do a couple of tests…

First let’s try and replicate the original problem…

ae_aml[522/313]@AED52> SET ECHO ON
ae_aml[522/313]@AED52> INSERT INTO j4134_test1(col1,col2,col3)
2  VALUES(3500,3,'TEST')
3  /

1 row created.

This worked – because the subpartition template used for the Partition P4 where the COL1 value 3500 would be inserted, included a subpartition for COL2 with values of 3 – no problem.

Let’s try again…

ae_aml[522/313]@AED52> INSERT INTO j4134_test1(col1,col2,col3)
2  VALUES(4500,3,'TEST')
3  /
INSERT INTO j4134_test1(col1,col2,col3)
*
ERROR at line 1:
ORA-14400: inserted partition key does not map to any partition

Aha – now it fails…because the COL1 value of 4500 would result in Partition P5 being used – but this was created after the subpartition template was changed to only have subpartitions for values 1 and 2 – the value of 3 for COL2 does not have a “home” to go to, so the statement fails.

This is interesting because it means that Oracle allows us to have Partitions setup with different subpartition layouts under them depending on what the template was at the time of partition creation.

When I first thought about this I figured it wasn’t possible/correct to do this and that the subpartitions had to be uniform across the partitions, but when you think about it, that would make life difficult when changing the template – what should Oracle do with the “missing” or “extra” or “modified” subpartitions in the pre-existing partitions? Create/drop/modify them? Ignore them? If it tried to create missing ones there would be all sorts of questions, like which tablespace to use and what storage characteristics to set.

As the documentation says, it leaves what exists already as is and just uses the new template for partitions created in the future…which brings me to another point…that piece of code I put up on this post needed updating because I didn’t know about this subpartition template issue. Now the code checks that, if we are exchanging a partitioned table with a partition of a composite partitioned table, the source table is partitioned in the same way (LIST/HASH) as the subpartitioning on the target partition, and that there are the same number of partitions in the source table as there are subpartitions in the target table partition – if not, the process fails…like this:

ae_aml[522/313]@AED52> ALTER TABLE j4134_test1 EXCHANGE PARTITION p1 WITH TABLE j4134_test2
2  /

Table altered.

ae_aml[522/313]@AED52> REM Now put it back...
ae_aml[522/313]@AED52> ALTER TABLE j4134_test1 EXCHANGE PARTITION p1 WITH TABLE j4134_test2
2  /

Table altered.

ae_aml[522/313]@AED52> REM now try with the mismatched one - fails!
ae_aml[522/313]@AED52> ALTER TABLE j4134_test1 EXCHANGE PARTITION p5 WITH TABLE j4134_test2
2  /
ALTER TABLE j4134_test1 EXCHANGE PARTITION p5 WITH TABLE j4134_test2
                                                       *
ERROR at line 1:
ORA-14294: Number of partitions does not match number of subpartitions

I’ve updated the code to check for this scenario and it now reports it.

Using AWR to summarise SQL operations activity

I was asked recently how much “use” a database had – a none too specific question, but basically the person asking wanted to know how many queries, DML statements, DDL statements etc. were run against the database on a daily basis…kind of like the queries-per-hour metrics that are sometimes quoted on TPC benchmarks, I guess.

I didn’t have an answer as to where to get this information so I figured it warranted a ramble through the AWR tables to try and come up with something…

This isn’t going to be a description of how the AWR works – there are manuals for that. Instead, I’ll just give you the salient points from my exploration.

The AWR data, including its history, is accessible by looking at a collection of DBA views named DBA_HIST%.
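
If you want to see exactly what’s there, a quick (and admittedly trivial) sketch:

SELECT view_name
FROM   dba_views
WHERE  view_name LIKE 'DBA_HIST%'
ORDER  BY view_name
/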

Part of the AWR processing captures Active Session History (ASH) data which is then accessible via the DBA_HIST_ACTIVE_SESS_HISTORY view. From this view we can obtain, for any given snap point in time, the sessions which were active, together with details of what they were doing and the statistics relating to the performance of that activity.

On this view is a column called SQL_OPCODE which tells us the type of SQL operation being performed by the session, with various codes indicating different things. I didn’t manage to find a definitive list of these SQL_OPCODES in a table/view anywhere so, by trial and error I worked them out as follows:

SQL_OPCODE    SQL Operation
1             DDL
2             INSERT
3             Query
6             UPDATE
7             DELETE
47            PL/SQL package call
50            Explain Plan
189           MERGE

…I’m sure this isn’t definitive…someone really smart out there will probably know where this list is actually stored and will, perhaps, put it on a comment to this post 😉
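
One possibility, and this is a guess I haven’t verified rather than a definitive answer, is that these codes line up with the command type codes used elsewhere (e.g. V$SQL.COMMAND_TYPE), which can be decoded via the AUDIT_ACTIONS dictionary view, along the lines of:

SELECT action
,      name
FROM   audit_actions
WHERE  action IN (1,2,3,6,7,47,50,189)
ORDER  BY action
/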

OK, now to code up something that goes back over this AWR data for the last, say, seven days and groups it up by SQL operation so we can see what our activity has been – I used a ROW_NUMBER analytic (cos they rock) to make sure we only get the latest snapshot row from the AWR table – if a SQL operation runs for a long time then it may appear in the AWR view more than once…and we don’t want to double count it as far as executions go.

I’ve grouped up those sessions which don’t have an active SQL statement associated with them (i.e. SQL_ID is NULL) into groups based on their Object Type and I’ve then used ROLLUP to get the results for each SQL operation by day and also the summed up totals by operation across the seven days and a grand total.

I’m not actually sure why some rows in DBA_HIST_ACTIVE_SESS_HISTORY don’t have a SQL_ID against them – if they were not actually running a statement in the session then that sounds fair enough…but some of them had values in PLSQL_ENTRY_OBJECT_ID indicating (I think) that they were running a specific package (DBMS_STATS or DBMS_SPACE in many cases) so the fact they didn’t have a SQL_ID was confusing to me – perhaps it means they’re in that package but not actually running a query at the time of the snap – either way, they’re grouped up separately.

Here is a SQL*Plus spool of the code and results from a test system:

x_j4134[543/9757]@AED52> l
1 WITH ash AS
2 (
3 SELECT TRUNC(ash.sample_time) sample_day
4 , (CASE WHEN ash.sql_opcode = 47
5 THEN 'PL/SQL'
6 WHEN ash.sql_opcode IN(1)
7 THEN 'DDL'
8 WHEN ash.sql_opcode IN(2,6,7,189)
9 THEN 'DML'
10 WHEN ash.sql_opcode IN(50)
11 THEN 'Explain Plan'
12 WHEN ash.sql_opcode IN(3)
13 THEN 'Query'
14 ELSE 'No Statement ID; In object type: '||NVL(o.object_type,'Not Specified')
15 END) statement_type
16 , ROW_NUMBER() OVER(PARTITION BY ash.sql_id,ash.sql_child_number ORDER BY ash.sample_time DESC) rn
17 FROM dba_hist_snapshot s
18 , dba_hist_active_sess_history ash
19 , dba_objects o
20 WHERE s.snap_id = ash.snap_id(+)
21 AND s.dbid = ash.dbid(+)
22 AND s.instance_number = ash.instance_number(+)
23 AND ash.plsql_entry_object_id = o.object_id(+)
24 AND TRUNC(ash.sample_time) BETWEEN TRUNC(SYSDATE-6) AND TRUNC(SYSDATE+1) -- all within last 7 days
25 )
26 SELECT sample_day
27 , statement_type
28 , COUNT(1)
29 FROM ash
30 WHERE rn = 1
31 GROUP BY ROLLUP(sample_day)
32 , ROLLUP(statement_type)
33 ORDER BY sample_day
34* , statement_type
x_j4134[543/9757]@AED52> /

SAMPLE_DAY STATEMENT_TYPE COUNT(1)
——————– —————————————————- ———-
02-FEB-2007 00:00:00 DDL 112
02-FEB-2007 00:00:00 DML 49
02-FEB-2007 00:00:00 No Statement ID; In object type: Not Specified 292
02-FEB-2007 00:00:00 No Statement ID; In object type: PACKAGE 7
02-FEB-2007 00:00:00 No Statement ID; In object type: PROCEDURE 131
02-FEB-2007 00:00:00 PL/SQL 301
02-FEB-2007 00:00:00 Query 181
02-FEB-2007 00:00:00 1073
03-FEB-2007 00:00:00 DDL 20
03-FEB-2007 00:00:00 DML 26
03-FEB-2007 00:00:00 No Statement ID; In object type: Not Specified 91
03-FEB-2007 00:00:00 No Statement ID; In object type: PACKAGE 2
03-FEB-2007 00:00:00 No Statement ID; In object type: PROCEDURE 84
03-FEB-2007 00:00:00 PL/SQL 166
03-FEB-2007 00:00:00 Query 12
03-FEB-2007 00:00:00 401
04-FEB-2007 00:00:00 DDL 127
04-FEB-2007 00:00:00 DML 14
04-FEB-2007 00:00:00 No Statement ID; In object type: Not Specified 410
04-FEB-2007 00:00:00 No Statement ID; In object type: PROCEDURE 305
04-FEB-2007 00:00:00 PL/SQL 306
04-FEB-2007 00:00:00 Query 14
04-FEB-2007 00:00:00 1176
05-FEB-2007 00:00:00 DDL 115
05-FEB-2007 00:00:00 DML 81
05-FEB-2007 00:00:00 Explain Plan 1
05-FEB-2007 00:00:00 No Statement ID; In object type: Not Specified 261
05-FEB-2007 00:00:00 No Statement ID; In object type: PACKAGE 16
05-FEB-2007 00:00:00 No Statement ID; In object type: PROCEDURE 161
05-FEB-2007 00:00:00 PL/SQL 315
05-FEB-2007 00:00:00 Query 360
05-FEB-2007 00:00:00 1310
06-FEB-2007 00:00:00 DDL 98
06-FEB-2007 00:00:00 DML 86
06-FEB-2007 00:00:00 Explain Plan 2
06-FEB-2007 00:00:00 No Statement ID; In object type: Not Specified 212
06-FEB-2007 00:00:00 No Statement ID; In object type: PACKAGE 16
06-FEB-2007 00:00:00 No Statement ID; In object type: PROCEDURE 108
06-FEB-2007 00:00:00 PL/SQL 299
06-FEB-2007 00:00:00 Query 439
06-FEB-2007 00:00:00 1260
07-FEB-2007 00:00:00 DDL 98
07-FEB-2007 00:00:00 DML 162
07-FEB-2007 00:00:00 Explain Plan 1
07-FEB-2007 00:00:00 No Statement ID; In object type: Not Specified 210
07-FEB-2007 00:00:00 No Statement ID; In object type: PACKAGE 24
07-FEB-2007 00:00:00 No Statement ID; In object type: PROCEDURE 96
07-FEB-2007 00:00:00 PL/SQL 337
07-FEB-2007 00:00:00 Query 348
07-FEB-2007 00:00:00 1276
08-FEB-2007 00:00:00 DDL 112
08-FEB-2007 00:00:00 DML 420
08-FEB-2007 00:00:00 Explain Plan 1
08-FEB-2007 00:00:00 No Statement ID; In object type: Not Specified 311
08-FEB-2007 00:00:00 No Statement ID; In object type: PACKAGE 25
08-FEB-2007 00:00:00 No Statement ID; In object type: PROCEDURE 119
08-FEB-2007 00:00:00 PL/SQL 572
08-FEB-2007 00:00:00 Query 493
08-FEB-2007 00:00:00 2053
DDL 682
DML 838
Explain Plan 5
No Statement ID; In object type: Not Specified 1787
No Statement ID; In object type: PACKAGE 90
No Statement ID; In object type: PROCEDURE 1004
PL/SQL 2296
Query 1847
8549

68 rows selected.

Elapsed: 00:09:49.44

It was just an idea…if anyone has anything to add/improve it feel free to comment.

Good to be back in the 21st Century…and identifying PEL mismatches

Well, I can honestly say that moving house over the last few months has been an experience I don’t wish to repeat for a good few years – if not decades! I’ve only just got my broadband enabled for the new place – hence my first post in ages – and that was a miracle given the state of what they very loosely call “customer service” at BT.

A few things I’ve learned in the process of moving house:

  • Never choose BT to assist you with any communications infrastructure – phone line, VOIP, Broadband etc…the list of problems I’ve encountered whilst getting a simple telephone line and broadband installed has been literally staggering.
  • Never move in with your in-laws when you are “between houses”…unless they are a nice bunch of people, in which case for a short period of time (we spent 5 weeks there) it can be quite workable, even if there were some opinions at polar extremes to negotiate around at times – how exactly do you get a pro-hunting person and a vegetarian to live happily under one roof?! Thankfully the Christmas spirit got us all through it and we’re happily ensconced in our new place.
  • Don’t even think about using BT to get a phone line installed – it will take at least 3 weeks and they will probably lose your order at least twice.
  • Never trust Estate Agents – ours misinformed us about one particular aspect of our purchase and very nearly stopped it happening because of it – miscommunication is a highly dangerous thing.
  • Never ask BT to install your broadband – I won’t go into massive details but basically they lost my first order and then handled my second one in an inept, dismissive and unprofessional manner…and believe me, if you’ve ever tried to use their IVR system to get through to the right person you’ll understand just how frustrating it can be to be passed from pillar to post, explain your situation for five minutes only to be told that you have been talking to the wrong department (Exactly how I managed to get through to a Residential Sales person when I called a Business Sales line is quite beyond me!)
  • Take more care when you pack stuff away…otherwise you’ll end up turning the new house upside down looking for things you can’t find, only to find they are tucked away in an incorrectly marked box…in hindsight it would have helped if we’d been more involved with the packing people from the removals company.

Now, I’m not going to just post a social thing as I quite understand the points being made recently on other blogs about “Sorry I’ve not posted in ages but…”, so I thought I’d say something about problems when you’re partition exchange loading. I built this routine a while back to help diagnose why we were unable to exchange partitions during a Partition Exchange Load process – it doesn’t check every scenario but does cover a number of common ones such as:

  • IOTs with different overflow characteristics can’t be exchanged
  • Hakan factors must be the same
  • Columns must be the same – in my view, that includes column names – this script will find issues like this one – even though it won’t actually stop the PEL.
  • Constraints must be the same
  • Indexes must be the same – at least they need to be if you are INCLUDING INDEXES in the PEL operation.

I’m sure it’s not perfect but if anyone spots any issues with it then feel free to let me know and I’ll check and fix the code.

We’ve found it quite useful – your mileage, as they say, may vary.

There are a number of links on this blog site that were pointing at my old web hosting site (www.oramoss.demon.co.uk) – that’s dead now – I made the mistake of cancelling that and moving to BT – I mentioned that right? Anyway, I’ve fixed some of the posts and the presentation links – I’ll fix anything else as I come across it.

Partition exchange loading and column transposing issue

I came across an interesting issue yesterday whereby a partition exchange load routine wasn’t working for my colleague Jon. The ensuing investigation seems to point at the possibility of a bug, whereby after exchanging a partition Oracle may have inadvertently transposed columns in the target table. I’ve created a test script to illustrate the problem and logged an SR with Metalink to see what they make of it.

In light of my recent post on DBMS_APPLICATION_INFO I’d better state that we found the problem on 10.2.0.2 on HP-UX and then tested it against 9.2.0.6 on HP-UX and found the same results so it seems the issue has been there a while in terms of Oracle releases.

The investigation started down a different avenue – as they often do – with the inability to complete a partition exchange due to the Oracle error:

ORA-14098: index mismatch for tables in ALTER TABLE EXCHANGE PARTITION

Reissuing the exchange partition command with the EXCLUDING INDEXES clause allowed the exchange to proceed so we figured there was definitely a problem with the indexes.
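
i.e. something along these lines (the object names here are illustrative rather than the real ones from the client system):

ALTER TABLE target_table
  EXCHANGE PARTITION p001
  WITH TABLE staging_table
  EXCLUDING INDEXES
/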

Next we tried dropping all the indexes on both sides of the exchange and recreating them…but when we retried the exchange partition it failed again with the same error.

Right, drastic measures time…we dropped all the indexes on both sides and ran the exchange which, with no indexes around, worked fine (with INCLUDING INDEXES clause in place as it would normally be on this process).

Next we proceeded to add the indexes one by one, starting with the b-trees for the primary key and unique keys. This worked without a problem, so we moved on to the bitmap indexes, which seemed fine for the first few, but after adding a few more the exchange suddenly stopped working. Through a process of trial and error we were able to determine that if two specific bitmap indexes were on the tables then the exchange would fail with ORA-14098.

So, what was so special about those two bitmaps and the columns they were on? We selected the details from dba_tab_columns for the two columns involved in these bitmap indexes and realised that the only difference was that they were at a different column position (COLUMN_ID) in the two tables – which begged the question “If these two tables are not quite the same then how can we be allowed to exchange them?”
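
For reference, a sketch of the kind of check we ran – the table and column names here are those from the test case below rather than the real system:

SELECT table_name
,      column_name
,      column_id
,      data_type
FROM   dba_tab_columns
WHERE  table_name IN ('JEFF_TEST','JEFF_TEST_PTN')
AND    column_name IN ('COLUMN2','COLUMN3')
ORDER  BY column_name
,         table_name
/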

Well, I guess, in our case, we were able to say that the two objects being exchanged had the same columns but just not necessarily in the same order (those of you who love classic comedy will I’m sure be recalling Morecambe and Wise with Andre Previn right about now)…should this mean we can swap the tables or not ?

To try and illustrate the scenario I built a test script…the output from a run follows…sorry it’s a bit long…but it does prove some useful conclusions (I think)

> @test_pel_bug_mismatch_columns
> REM Drop the tables…
> DROP TABLE jeff_test
/

Table dropped.

> DROP TABLE jeff_test_ptn
/

Table dropped.

> REM Create the tables…
> CREATE TABLE jeff_test_ptn(partition_key_col NUMBER NOT NULL
2 ,column1 NUMBER NOT NULL
3 ,column3 NUMBER NOT NULL
4 ,column2 NUMBER NOT NULL
5 ,column4 NUMBER NOT NULL
6 )
7 PARTITION BY RANGE (partition_key_col)
8 (
9 PARTITION p001 VALUES LESS THAN (1000001)
10 ,PARTITION p002 VALUES LESS THAN (MAXVALUE)
11 )
12 /

Table created.

> CREATE TABLE jeff_test(partition_key_col NUMBER NOT NULL
2 ,column1 NUMBER NOT NULL
3 ,column2 NUMBER NOT NULL
4 ,column3 NUMBER NOT NULL
5 ,column4 NUMBER NOT NULL
6 )
7 /

Table created.

> REM Populate the table…
> INSERT /*+ APPEND */
2 INTO jeff_test_ptn(partition_key_col
3 ,column1
4 ,column2
5 ,column3
6 ,column4
7 )
8 SELECT idx
9 , MOD(idx,2)
10 , MOD(idx,3)
11 , MOD(idx,4)
12 , MOD(idx,2)
13 FROM (SELECT LEVEL idx FROM dual CONNECT BY LEVEL <= 36000)
14 /

36000 rows created.

> COMMIT
2 /

Commit complete.

> INSERT /*+ APPEND */
2 INTO jeff_test(partition_key_col
3 ,column1
4 ,column2
5 ,column3
6 ,column4
7 )
8 SELECT idx
9 , MOD(idx,2)
10 , MOD(idx,5)
11 , MOD(idx,6)
12 , MOD(idx,2)
13 FROM (SELECT LEVEL idx FROM dual CONNECT BY LEVEL <= 36000)
14 /

36000 rows created.

> COMMIT
2 /

Commit complete.

> REM Count the rows in COLUMN2 and COLUMN3 for both tables…
>
> REM Should see 3 rows…all NUMBERS
> SELECT column2,COUNT(1) FROM jeff_test_ptn GROUP BY column2;

COLUMN2 COUNT(1)
———- ———-
1 12000
2 12000
0 12000

3 rows selected.

>
> REM Should see 4 rows…all NUMBERS
> SELECT column3,COUNT(1) FROM jeff_test_ptn GROUP BY column3;

COLUMN3 COUNT(1)
———- ———-
1 9000
2 9000
3 9000
0 9000

4 rows selected.

>
> REM Should see 5 rows…all NUMBERS
> SELECT column2,COUNT(1) FROM jeff_test GROUP BY column2;

COLUMN2 COUNT(1)
———- ———-
1 7200
2 7200
4 7200
3 7200
0 7200

5 rows selected.

>
> REM Should see 6 rows…all NUMBERS
> SELECT column3,COUNT(1) FROM jeff_test GROUP BY column3;

COLUMN3 COUNT(1)
———- ———-
1 6000
2 6000
4 6000
5 6000
3 6000
0 6000

6 rows selected.

>
> REM Now lets try and swap the partition and the table…
> ALTER TABLE jeff_test_ptn
2 EXCHANGE PARTITION p001
3 WITH TABLE jeff_test
4 INCLUDING INDEXES
5 WITHOUT VALIDATION
6 /

Table altered.

>
> REM Surprisingly, it lets us do the above operation without complaining.
> REM Even worse…it transposes the values in COLUMN2 and COLUMN3…
>
> REM Should see 5 rows…but we see 6 rows
> SELECT column2,COUNT(1) FROM jeff_test_ptn GROUP BY column2;

COLUMN2 COUNT(1)
———- ———-
1 6000
2 6000
4 6000
5 6000
3 6000
0 6000

6 rows selected.

>
> REM Should see 6 rows…but we see 5 rows
> SELECT column3,COUNT(1) FROM jeff_test_ptn GROUP BY column3;

COLUMN3 COUNT(1)
———- ———-
1 7200
2 7200
4 7200
3 7200
0 7200

5 rows selected.

>
> REM Should see 3 rows…but we see 4 rows
> SELECT column2,COUNT(1) FROM jeff_test GROUP BY column2;

COLUMN2 COUNT(1)
———- ———-
1 9000
2 9000
3 9000
0 9000

4 rows selected.

>
> REM Should see 4 rows…but we see 3 rows
> SELECT column3,COUNT(1) FROM jeff_test GROUP BY column3;

COLUMN3 COUNT(1)
———- ———-
1 12000
2 12000
0 12000

3 rows selected.

>
> REM Now, lets try again but with COLUMN2 and COLUMN3
> REM being of different datatypes…
>
> REM Drop the tables…
> DROP TABLE jeff_test
2 /

Table dropped.

> DROP TABLE jeff_test_ptn
2 /

Table dropped.

> REM Create the tables…
> CREATE TABLE jeff_test_ptn(partition_key_col NUMBER NOT NULL
2 ,column1 NUMBER NOT NULL
3 ,column3 NUMBER NOT NULL
4 ,column2 DATE NOT NULL
5 ,column4 NUMBER NOT NULL
6 )
7 PARTITION BY RANGE (partition_key_col)
8 (
9 PARTITION p001 VALUES LESS THAN (1000001)
10 ,PARTITION p002 VALUES LESS THAN (MAXVALUE)
11 )
12 /

Table created.

> CREATE TABLE jeff_test(partition_key_col NUMBER NOT NULL
2 ,column1 NUMBER NOT NULL
3 ,column2 DATE NOT NULL
4 ,column3 NUMBER NOT NULL
5 ,column4 NUMBER NOT NULL
6 )
7 /

Table created.

> REM Populate the table…
> INSERT /*+ APPEND */
2 INTO jeff_test_ptn(partition_key_col
3 ,column1
4 ,column2
5 ,column3
6 ,column4
7 )
8 SELECT idx
9 , MOD(idx,2)
10 , SYSDATE + MOD(idx,3)
11 , MOD(idx,4)
12 , MOD(idx,2)
13 FROM (SELECT LEVEL idx FROM dual CONNECT BY LEVEL <= 36000)
14 /

36000 rows created.

> COMMIT
2 /

Commit complete.

> INSERT /*+ APPEND */
2 INTO jeff_test(partition_key_col
3 ,column1
4 ,column2
5 ,column3
6 ,column4
7 )
8 SELECT idx
9 , MOD(idx,2)
10 , SYSDATE + MOD(idx,5)
11 , MOD(idx,6)
12 , MOD(idx,2)
13 FROM (SELECT LEVEL idx FROM dual CONNECT BY LEVEL <= 36000)
14 /

36000 rows created.

> COMMIT
2 /

Commit complete.

> REM Count the rows in COLUMN2 and COLUMN3 for both tables…
>
> REM Should see 3 rows…all DATES
> SELECT column2,COUNT(1) FROM jeff_test_ptn GROUP BY column2;

COLUMN2 COUNT(1)
——————– ———-
27-SEP-2006 11:49:03 12000
28-SEP-2006 11:49:03 12000
26-SEP-2006 11:49:03 12000

3 rows selected.

>
> REM Should see 4 rows…all NUMBERS
> SELECT column3,COUNT(1) FROM jeff_test_ptn GROUP BY column3;

COLUMN3 COUNT(1)
———- ———-
1 9000
2 9000
3 9000
0 9000

4 rows selected.

>
> REM Should see 5 rows…all DATES
> SELECT column2,COUNT(1) FROM jeff_test GROUP BY column2;

COLUMN2 COUNT(1)
——————– ———-
30-SEP-2006 11:49:03 7200
27-SEP-2006 11:49:03 7200
28-SEP-2006 11:49:03 7200
29-SEP-2006 11:49:03 7200
26-SEP-2006 11:49:03 7200

5 rows selected.

>
> REM Should see 6 rows…all NUMBERS
> SELECT column3,COUNT(1) FROM jeff_test GROUP BY column3;

COLUMN3 COUNT(1)
———- ———-
1 6000
2 6000
4 6000
5 6000
3 6000
0 6000

6 rows selected.

>
> REM Now lets try and swap the partition and the table…
> REM It will fail with error…
> REM ORA-14097: column type or size mismatch in ALTER TABLE EXCHANGE PARTITION
>
> ALTER TABLE jeff_test_ptn
2 EXCHANGE PARTITION p001
3 WITH TABLE jeff_test
4 INCLUDING INDEXES
5 WITHOUT VALIDATION
6 /
ALTER TABLE jeff_test_ptn
*
ERROR at line 1:
ORA-14097: column type or size mismatch in ALTER TABLE EXCHANGE PARTITION

>
> REM So, it only fails when the columns have the same datatypes.
>
> REM Now, lets say they are the same datatype and look at how bitmap indexes
> REM are affected in the PARTITION EXCHANGE process…
>
> REM First lets recreate the tables with COLUMN2 and COLUMN3
> REM having the same datatype…
> REM Drop the tables…
> DROP TABLE jeff_test
2 /

Table dropped.

> DROP TABLE jeff_test_ptn
2 /

Table dropped.

> REM Create the tables…
> CREATE TABLE jeff_test_ptn(partition_key_col NUMBER NOT NULL
2 ,column1 NUMBER NOT NULL
3 ,column3 NUMBER NOT NULL
4 ,column2 NUMBER NOT NULL
5 ,column4 NUMBER NOT NULL
6 )
7 PARTITION BY RANGE (partition_key_col)
8 (
9 PARTITION p001 VALUES LESS THAN (1000001)
10 ,PARTITION p002 VALUES LESS THAN (MAXVALUE)
11 )
12 /

Table created.

> CREATE TABLE jeff_test(partition_key_col NUMBER NOT NULL
2 ,column1 NUMBER NOT NULL
3 ,column2 NUMBER NOT NULL
4 ,column3 NUMBER NOT NULL
5 ,column4 NUMBER NOT NULL
6 )
7 /

Table created.

> REM Populate the table…
> INSERT /*+ APPEND */
2 INTO jeff_test_ptn(partition_key_col
3 ,column1
4 ,column2
5 ,column3
6 ,column4
7 )
8 SELECT idx
9 , MOD(idx,2)
10 , MOD(idx,3)
11 , MOD(idx,4)
12 , MOD(idx,2)
13 FROM (SELECT LEVEL idx FROM dual CONNECT BY LEVEL <= 36000)
14 /

36000 rows created.

> COMMIT
2 /

Commit complete.

> INSERT /*+ APPEND */
2 INTO jeff_test(partition_key_col
3 ,column1
4 ,column2
5 ,column3
6 ,column4
7 )
8 SELECT idx
9 , MOD(idx,2)
10 , MOD(idx,5)
11 , MOD(idx,6)
12 , MOD(idx,2)
13 FROM (SELECT LEVEL idx FROM dual CONNECT BY LEVEL <= 36000)
14 /

36000 rows created.

> COMMIT
2 /

Commit complete.

>
> REM Now lets create a bitmap index on COLUMN1 on both tables…
> CREATE BITMAP INDEX jtp1 ON jeff_test_ptn(column1) LOCAL
2 /

Index created.

> CREATE BITMAP INDEX jt1 ON jeff_test(column1)
2 /

Index created.

>
> REM …and now try PARTITION EXCHANGE…
> ALTER TABLE jeff_test_ptn
2 EXCHANGE PARTITION p001
3 WITH TABLE jeff_test
4 INCLUDING INDEXES
5 WITHOUT VALIDATION
6 /

Table altered.

> REM It works fine.
>
> REM Now lets create a bitmap index on COLUMN4 on both tables…
> CREATE BITMAP INDEX jtp4 ON jeff_test_ptn(column4) LOCAL
2 /

Index created.

> CREATE BITMAP INDEX jt4 ON jeff_test(column4)
2 /

Index created.

>
> REM …and now try PARTITION EXCHANGE…
> ALTER TABLE jeff_test_ptn
2 EXCHANGE PARTITION p001
3 WITH TABLE jeff_test
4 INCLUDING INDEXES
5 WITHOUT VALIDATION
6 /

Table altered.

>
> REM It works fine.
>
> REM Now lets create a bitmap index on COLUMN2 on both tables…
> CREATE BITMAP INDEX jtp2 ON jeff_test_ptn(column2) LOCAL
2 /

Index created.

> CREATE BITMAP INDEX jt2 ON jeff_test(column2)
2 /

Index created.

>
> REM …and now try PARTITION EXCHANGE…
> ALTER TABLE jeff_test_ptn
2 EXCHANGE PARTITION p001
3 WITH TABLE jeff_test
4 INCLUDING INDEXES
5 WITHOUT VALIDATION
6 /
WITH TABLE jeff_test
*
ERROR at line 3:
ORA-14098: index mismatch for tables in ALTER TABLE EXCHANGE PARTITION

Right, now what can we conclude from the above ?

1. If you don’t have indexes on your table then you can do an exchange even if the columns are in a different order – but you will silently transpose the columns. No errors are given and I think this is a bug.
2. You will only hit the problem in 1 if the columns which are transposed are of the same datatype – I’ve not checked for scale/precision/length – it may be, for example, that a VARCHAR2(20) and VARCHAR2(10) being transposed would raise an ORA-14097 error.
3. If you have indexes on columns which are transposed then the exchange will fail with ORA-14098. I don’t know whether this is a bitmap index specific thing as I’ve not tested it any further.
4. If you only have indexes on columns which are not transposed then you can do the exchange and there will be no errors – but your data is obviously transposed silently.

DBMS_APPLICATION_INFO and V$SESSION

Tom noted on his blog recently that DBMS_APPLICATION_INFO is something we should be using in our code to provide instrumentation and monitoring information – something I fully agree with and have been doing for some time now.

The post showed how, if we’ve called DBMS_APPLICATION_INFO in our code, we can retrieve the values of the MODULE/ACTION/PROGRAM_ID by querying V$SQL with a nice example to illustrate the point.

After reading it I recalled that some of this kind of information is available on V$SESSION from 10.2.0.2 (Thanks to Yas for identifying the specific version) so I created a simple supplementary example to illustrate this:

First I created a package – I’m using a package in order to demonstrate a secondary point which will become clearer as we progress…


create or replace package pk1 as
  procedure p3;
  procedure p4;
end pk1;
/

create or replace package body pk1 as
procedure p3 is
begin
  dbms_application_info.set_module( 'PK1 module', 'Startup' );
  dbms_application_info.set_action( 'Running P3' );
  for x in ( select * from dual look_for_me_1 ) loop
    null;
  end loop;
  for x in ( select * from dual look_for_me_2 ) loop
    null;
  end loop;
  dbms_lock.sleep(20);
end p3;

procedure p4 is
begin
  dbms_application_info.set_module( 'PK1 module', 'Startup' );
  dbms_application_info.set_action( 'Running P4' );
  for x in ( select * from dual look_for_me_2 ) loop
    null;
  end loop;
  dbms_lock.sleep(20);
end p4;

end pk1;
/

 

I’ve slightly changed the calls to DBMS_APPLICATION_INFO to show how the SET_ACTION works and also because if I don’t do it that way, the module name gets set by the first subprogram called in PK1 and should therefore not be specific to a subprogram.

Now let’s get back the object name and ID – I’ve specifically excluded public synonyms in order to not cloud things:


select object_name
,      object_type
,      object_id
from   all_objects
where  object_name IN('PK1','DBMS_LOCK')
and    owner != 'PUBLIC' -- don't get public synonyms
/

 

This gives us something like:


OBJECT_NAME  OBJECT_TYPE   OBJECT_ID
------------ ------------- ---------
DBMS_LOCK    PACKAGE BODY  4368    
DBMS_LOCK    PACKAGE       4265    
PK1          PACKAGE BODY  76404   
PK1          PACKAGE       76403   

Now, in another SQL*Plus session, I can run the P4 procedure in package PK1 followed by the P3 procedure:


exec pk1.p4;
exec pk1.p3;

After both calls complete I switch to my main session and issue a slightly modified version of the query Tom wrote:


select sql_text
,      action
,      module
,      program_id
,      program_line#
from   v$sql
where  sql_text like '%LOOK\_FOR\_ME\_%' escape '\'
/

SQL_TEXT                         ACTION     MODULE     PROGRAM_ID PROGRAM_LINE#
-------------------------------- ---------- ---------- ---------- -------------
SELECT * FROM DUAL LOOK_FOR_ME_1 Running P3 PK1 module 76404      7          
SELECT * FROM DUAL LOOK_FOR_ME_2 Running P4 PK1 module 76404      20        

Which gives us the two cursors that have been run and as Tom indicated would happen, in my example, the LOOK_FOR_ME_2 cursor is first issued in the call to P4 so the Action is “Running P4” even though the cursor is also issued by P3 subsequently.

Note that the PROGRAM_ID has the OBJECT_ID of the PK1 Package Body in V$SQL.

Now I can rerun the procedures again in the same order and because there is a small wait during the processing I can switch back to my main session to issue the following query against V$SESSION:


select plsql_entry_object_id
,      plsql_entry_subprogram_id
,      plsql_object_id
,      plsql_subprogram_id
,      module
,      action
from   v$session
where  sid = 150  -- this was the SID of my other session!
/

During the call to P4 I see this:


PLSQL_ENTRY_OBJECT_ID PLSQL_ENTRY_SUBPROGRAM_ID PLSQL_OBJECT_ID PLSQL_SUBPROGRAM_ID MODULE     ACTION  
--------------------- ------------------------- --------------- ------------------- ---------- ----------
76403                 2                         4265            8                   PK1 module Running P4

…and then whilst P3 is running I see this:


PLSQL_ENTRY_OBJECT_ID PLSQL_ENTRY_SUBPROGRAM_ID PLSQL_OBJECT_ID PLSQL_SUBPROGRAM_ID MODULE     ACTION  
--------------------- ------------------------- --------------- ------------------- ---------- ----------
76403                 1                         4265            8                   PK1 module Running P3

So, the PLSQL_ENTRY* columns are showing that we’re running the second subprogram of the PLSQL package with OBJECT_ID 76403 – which is the PK1 Package Spec – so slightly different to the example from Tom with V$SQL – but understandable information nevertheless.

From the other two columns, we can also see the currently executing subprogram – 8 in the package with OBJECT_ID 4265 – which is the DBMS_LOCK Package Spec. If we DESCribe DBMS_LOCK we would see that the 8th subprogram is SLEEP – which tallies up with what we’re executing.

We can also see these subprogram_id values in the ALL_PROCEDURES view:


select object_name
,      object_type
,      subprogram_id
,      procedure_name
from   all_procedures
where  object_name IN('PK1','DBMS_LOCK')
order by object_name
,        subprogram_id
/

OBJECT_NAME OBJECT_TYPE SUBPROGRAM_ID PROCEDURE_NAME
----------- ----------- ------------- ---------------
DBMS_LOCK   PACKAGE     0           
DBMS_LOCK   PACKAGE     1             ALLOCATE_UNIQUE
DBMS_LOCK   PACKAGE     2             REQUEST
DBMS_LOCK   PACKAGE     3             REQUEST
DBMS_LOCK   PACKAGE     4             CONVERT
DBMS_LOCK   PACKAGE     5             CONVERT
DBMS_LOCK   PACKAGE     6             RELEASE
DBMS_LOCK   PACKAGE     7             RELEASE
DBMS_LOCK   PACKAGE     8             SLEEP
PK1         PACKAGE     0           
PK1         PACKAGE     1             P3
PK1         PACKAGE     2             P4

So, V$SQL gives us useful information but we can also use V$SESSION in addition to get information by session including the subprogram within a package that we are executing.

As I said at the start, I’ve been using this for years and I’ve no idea why I don’t see its use more often. Hopefully the “Tom” effect will help to resolve this!

Using a trigger to grant access on new objects in end user schemae

End users on our warehouse log in using their own, personal Oracle account and have the ability to create objects in their own schema – so they can test out analysis ideas without harming anyone else. I had to tune one of the pieces of SQL a user was running the other day and it involved links between some tables the user had created in their own schema and those in the public warehouse schemae and, unfortunately, when I went to get an execution plan using EXPLAIN PLAN it stopped me in my tracks because I couldn’t see the objects in the end user schema – I don’t have SELECT ANY TABLE privileges in my personal Oracle account.

So, the options I thought of were:

1. Get SELECT ANY TABLE privilege granted to me so that I could do the tuning for this query and, indeed, any query that I will come across. Not something the security people would be happy about which I can understand.

2. Get the end user to grant me SELECT access privilege on the objects – also possible, but a bit painful if there are lots of objects and I’d need to go through this hassle every time I had a tuning requirement.

3. Get the login details of the user and log in as them – a security minefield and not really practical.

So, no brilliant solutions there, until my team leader Tank suggested that we create a trigger AFTER CREATE ON SCHEMA that would do the necessary grant(s). Phil from the DBA team had a look at it and found some code from Tom to do this, and it works great.
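
The trigger ends up looking something like the sketch below (a simplified version, not our exact production code; DWH_SUPPORT is a made-up role name). You can’t issue DDL directly inside a DDL trigger, so the grant is deferred via DBMS_JOB and runs just after the CREATE completes:

CREATE OR REPLACE TRIGGER grant_select_after_create
AFTER CREATE ON SCHEMA
DECLARE
   l_job NUMBER;
BEGIN
   -- only bother with tables; the grant itself is queued as a job
   IF ora_dict_obj_type = 'TABLE' THEN
      DBMS_JOB.SUBMIT(job  => l_job
                     ,what => 'execute immediate ''grant select on "'
                              || ora_dict_obj_name || '" to DWH_SUPPORT'';');
   END IF;
END;
/

Because the trigger is created on the end user’s schema, the job runs as the object owner, so the grant itself needs no special privileges.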

It interested me as a solution because I remember reading recently on Tom’s blog how much he hates triggers and that he’d love to have them removed from the database. He did say that “triggers are so abused – and used so inappropriately” but there are some occasions when they are useful and perhaps this is a good example.

Why you should skim read deep stuff first!

Seems that my recent posting about constraint generated predicates is already thoroughly covered in Jonathan’s latest book (Ch 6, pp 145-6)…including the bit about specifying a “NOT NULL” predicate to get around the issue that arises when the column in question is declared as “NULL” rather than “NOT NULL”.

Doug said to me recently that it was one of those books you probably should skim read first in order to get your brain used to what content is in it…so when you need to know about a particular topic you’ll (hopefully) remember that it was covered somewhere in such and such book…I think he’s probably spot on there.

Help the Cost Based Optimizer – add constraints – but beware the NULLS

If you use a function on a column in a predicate then the optimizer can’t use an index on that column, right?

Not quite.

You can of course use a Function Based index…but that’s not the subject of this post…so what else can we use in some circumstances?
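
For the record, the function-based index alternative would look something like this (on a hypothetical table and column; it simply indexes the expression so the predicate can use it directly):

create index some_table_upper_idx on some_table(UPPER(some_col));

…but, as I said, that’s not what this post is about.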

Well, I attended the Scottish Oracle User Group conference in Glasgow on Monday and enjoyed the Masterclass Jonathan Lewis gave on the CBO. After recently reading his book, the course had a degree of familiarity in terms of the slide content, but it was still a very worthwhile experience as Jonathan is a good presenter and I find it sinks in perhaps easier than just reading the book.

One of the things Jonathan said was that if you had a predicate such as this:

WHERE UPPER(col1) = 'ABC'

…then the CBO can choose to ignore the presence of the UPPER() function if there happens to be a constraint defined on that column that can effectively substitute for that function.

I’d never heard of this so I decided to investigate…

First I created a table:

create table t1(id number
,v1 varchar2(40) null
,v2 varchar2(40) not null
,constraint t1_ck_v1 check(v1=UPPER(v1))
,constraint t1_ck_v2 check(v2=UPPER(v2))
);

Note the presence of two character columns – one NULLable and the other mandatory. I’ve added check constraints enforcing the uppercase content of both these character columns also.

…next I create indexes on these character columns:

create index t1_i1 on t1(v1);
create index t1_i2 on t1(v2);

…insert some data and analyse the table:

insert into t1(id,v1,v2)
select l
,      'THIS IS ROW: '||TO_CHAR(l)
,      'THIS IS ROW: '||TO_CHAR(l)
from   (select level l from dual connect by level<500001);

commit;

exec DBMS_STATS.GATHER_TABLE_STATS(ownname=>USER,tabname=>'T1',estimate_percent=>100,cascade=>TRUE);

 

(NOTE – The data in columns V1 and V2 is an actual value in each row, i.e. there are no NULLs. This will be important later).
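
Just to illustrate that note, a quick sanity check (not part of the original test script) confirms it; COUNT(column) only counts non-NULL values, so all three counts come back the same:

select count(*)  total_rows
,      count(v1) v1_not_null
,      count(v2) v2_not_null
from   t1;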

…now let’s turn autotrace on:

set autotrace on

…and try a query against the table using the optional column:


select * from t1
where upper(v1)='THIS IS ROW: 1';

…which gives us (abridged for clarity/succinctness):

ID V1              V2
-- --------------- ---------------
 1 THIS IS ROW: 1  THIS IS ROW: 1

1 row selected.

Elapsed: 00:00:00.81

Execution Plan
----------------------------------------------------------
Plan hash value: 3617692013

--------------------------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      |  5000 |   214K|   789   (5)| 00:00:10 |
|*  1 |  TABLE ACCESS FULL| T1   |  5000 |   214K|   789   (5)| 00:00:10 |
--------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter(UPPER("V1")='THIS IS ROW: 1')

 
As we can see, with the UPPER() function involved the CBO decided that a plan using the index was not possible, and so it chose to do a full table scan – which was not what I was expecting.

I must admit I looked at it for some time, trying to understand why it wasn’t doing what Jonathan had indicated it would. I then called in my colleague Anthony to discuss it and, after much thought, he came up with the answer: it was the V1 column being declared NULLable that was stopping the CBO from using the index. Rows where V1 is NULL would have no entry in the (B-tree) index, so, given the information at its disposal, the CBO deemed it impossible to answer the query via the index alone, since it could potentially have missed a NULL value.
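
In this case, assuming the data genuinely never contains NULLs, another fix would simply have been to declare the column NOT NULL in the first place, e.g.:

alter table t1 modify v1 not null;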

Given this information, I then rebuilt my test table to include the V2 column as per the above definition and then ran the query against the V2 column which was declared as NOT NULL:

select * from t1
where upper(v2)='THIS IS ROW: 1';

gives us:

ID V1              V2
-- --------------- ---------------
 1 THIS IS ROW: 1  THIS IS ROW: 1

1 row selected.

Elapsed: 00:00:00.03

Execution Plan
----------------------------------------------------------
Plan hash value: 965905564

-------------------------------------------------------------------------------------
| Id  | Operation                   | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |       |     1 |    44 |     4   (0)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID| T1    |     1 |    44 |     4   (0)| 00:00:01 |
|*  2 |   INDEX RANGE SCAN          | T1_I2 |     1 |       |     3   (0)| 00:00:01 |
-------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("V2"='THIS IS ROW: 1')
       filter(UPPER("V2")='THIS IS ROW: 1')

 

So, for the mandatory column, the CBO determines that the index can be used as an access path to obtain all of the relevant rows and, given that it’s more efficient to do so, it uses the index T1_I2 accordingly. This is what I was expecting to see in the first place…but obviously the NULLability of the V1 column had led me astray.

So, what happens if we add another predicate to the first query to tell the CBO that we are not looking for any NULL values – will it be clever enough to combine this fact with the information from the constraint and come up with an index access path?

select * from t1
where upper(v1)='THIS IS ROW: 1'
and v1 is not null;

which gives us:

 

ID V1              V2
-- --------------- ---------------
 1 THIS IS ROW: 1  THIS IS ROW: 1

1 row selected.

Elapsed: 00:00:00.01

Execution Plan
----------------------------------------------------------
Plan hash value: 1429545322

-------------------------------------------------------------------------------------
| Id  | Operation                   | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |       |     1 |    44 |     4   (0)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID| T1    |     1 |    44 |     4   (0)| 00:00:01 |
|*  2 |   INDEX RANGE SCAN          | T1_I1 |     1 |       |     3   (0)| 00:00:01 |
-------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("V1"='THIS IS ROW: 1')
       filter(UPPER("V1")='THIS IS ROW: 1' AND "V1" IS NOT NULL)

So, yes, it can derive from the additional predicate stating that we are only looking for rows where V1 IS NOT NULL, in conjunction with the check constraint T1_CK_V1, that the UPPER() function can be ignored; the index access path is then available and, given it’s more efficient, the CBO chooses to use it.

Quite clever really but I’m glad Anthony was around to help me see the wood for the trees on this one.

I spoke with Jonathan about this testing and he said he was aware of the need for the NOT NULL constraint in order for this to work and that, from memory, he thinks it was somewhere in the middle of the 9.2 releases that this requirement came in, to address a bug in the transformation.