Tag: tuning

DBA_SEGMENTS misleading PARTITION_NAME column

Whilst writing some code that referenced the DBA_SEGMENTS dictionary view today, I realised that the PARTITION_NAME column actually contains the name of the subpartition when the table is subpartitioned…a script and results to illustrate:

drop table jeff_unpartitioned purge
/
drop table jeff_partitioned purge
/
drop table jeff_subpartitioned purge
/
create table jeff_unpartitioned(col1 number,col2 number)
/
create table jeff_partitioned(col1 number,col2 number)
partition by range(col1)
(partition p1 values less than(100)
,partition p2 values less than(maxvalue)
)
/
create table jeff_subpartitioned(col1 number,col2 number)
partition by range(col1)
subpartition by list(col2)
subpartition template
(subpartition sub1 values(1)
,subpartition sub2 values(2)
)
(partition p1 values less than(100)
,partition p2 values less than(maxvalue)
)
/
select segment_name,partition_name,segment_type
from   user_segments
where  segment_name like 'JEFF%'
order by segment_name
/

 

…gives the following results…

Table dropped.

Table dropped.


Table dropped.


Table created.


Table created.


Table created.


                         Partition
SEGMENT_NAME             Name                 SEGMENT_TYPE
------------------------ -------------------- ------------------
JEFF_PARTITIONED         P2                   TABLE PARTITION
JEFF_PARTITIONED         P1                   TABLE PARTITION
JEFF_SUBPARTITIONED      P1_SUB1              TABLE SUBPARTITION
JEFF_SUBPARTITIONED      P2_SUB2              TABLE SUBPARTITION
JEFF_SUBPARTITIONED      P1_SUB2              TABLE SUBPARTITION
JEFF_SUBPARTITIONED      P2_SUB1              TABLE SUBPARTITION
JEFF_UNPARTITIONED                            TABLE

7 rows selected.

 

As you can see, the subpartitioned table shows the subpartition name in the PARTITION_NAME column. It’s not a big deal and I only noticed because I assumed there would be a SUBPARTITION_NAME column whilst I was writing my code…the failure in compilation led me to track this slightly erroneous situation down.

 

Why does this occur?

 

Well, if you delve into the code behind DBA_SEGMENTS you'll see it refers to another view in the SYS schema called SYS_DBA_SEGS. The SQL behind SYS_DBA_SEGS selects all levels (table, partition and subpartition) from SYS.OBJ$, but then "loses" the partitions when joining to SYS.SYS_OBJECTS (the OBJ# from SYS.OBJ$ does not map to any row in SYS.SYS_OBJECTS via the OBJECT_ID column). The SQL then "loses" the table when joining to SYS.SEG$. Exactly why it does this I don't fully understand, but I'm guessing it's because those components of the composite object don't have a segment of their own in which to physically store anything, since the storage belongs to the lower levels - in this case the subpartitions.

 

In any event, it's a bit of a gotcha and the column could do with being renamed, to SUB_SEGMENT_NAME perhaps.
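If you need the owning partition as well as the subpartition, one workaround is to join the subpartition-level segments back to DBA_TAB_SUBPARTITIONS - a minimal sketch, assuming the 10g dictionary column names:

SELECT s.segment_name
,      sp.partition_name        owning_partition
,      s.partition_name         sub_segment_name   -- really the subpartition
,      s.segment_type
FROM   dba_segments          s
,      dba_tab_subpartitions sp
WHERE  s.segment_type       = 'TABLE SUBPARTITION'
AND    sp.table_owner       = s.owner
AND    sp.table_name        = s.segment_name
AND    sp.subpartition_name = s.partition_name
ORDER  BY s.segment_name
,      sp.partition_name
/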

Using partitioning to colocate data for optimal multiple index access paths

I came across a situation the other day where a client was accessing rows from a table via one of several different indexes and getting poor performance on all of them.

The table in question had about five million rows and about four indexes. For the process in question, the table was hitting each of the indexes on this single table at one point or another.

I noticed that the clustering factor for each of the indexes was quite poor - a value closer to the number of rows in the table than to the number of blocks in the table. This was leading to a high number of logical IOs to get the rows needed on each query in the process, because the rows pointed to by the index were all in different data blocks, scattered far and wide. As I recall, Jonathan gives a great explanation of how the clustering factor can affect the performance of a query in Chapter 5 of his CBO Fundamentals book, and it was directly relevant to this scenario.

A fix for a poor clustering factor is to sort the data in the table in the order which matches that of the index in question. So, if INDEX_A is on columns A then B, you should order the data in the table by column A, then B…perhaps by removing it all and then reinserting it using an ORDER BY on the insert.
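A minimal sketch of that rebuild, using hypothetical names (MY_TABLE with INDEX_A on columns A and B) - constraints, grants and any other indexes would of course need recreating too:

CREATE TABLE my_table_sorted AS
SELECT * FROM my_table ORDER BY a, b
/
DROP TABLE my_table PURGE
/
RENAME my_table_sorted TO my_table
/
CREATE INDEX index_a ON my_table(a, b)
/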

This, of course, only works for one order - you can't order data in multiple different ways within the same table to suit all the indexes, so for one index you might be able to order the data perfectly, but that order might be terrible for another index which wants the data ordered by, say, column C then D…or does it?

Well, in this particular case, the data required by each query hitting different indexes on this single table was all of a different type – by this I mean, the table itself was modelled from a Supertype Entity, implemented physically as a single table with a Supertype differentiator column (PRODUCT_TYPE in this case). Query one in the process wanted data for PRODUCT_TYPE “X”, and wanted to get this via Index IDX1 on COL1. Query two would then want data of PRODUCT_TYPE “Y” and get it via index IDX2 on COL2 etc…

I figured that if we split the table into partitions based on the PRODUCT_TYPE then we’d be able to order the data within each partition by the appropriate order for the index which would commonly be used to access data of that PRODUCT_TYPE.

It will take them a while to test and implement the solution and gain some empirical results; however, a simple test script I knocked up seemed to indicate that the solution could work:

The script…


CLEAR BREAKS
CLEAR COLUMNS
COLUMN index_name HEADING "IndexName" FORMAT A10
COLUMN partition_name HEADING "PartitionName" FORMAT A10
COLUMN leaf_blocks HEADING "LeafBlocks" FORMAT 999
COLUMN distinct_keys HEADING "DistinctKeys" FORMAT 999,999
COLUMN num_rows HEADING "NumRows" FORMAT 999,999
COLUMN clustering_factor HEADING "ClusteringFactor" FORMAT 999,999
COLUMN clfpct HEADING "Clu FactPercent" FORMAT 999

DROP TABLE jeff_unpartitioned PURGE
/
CREATE TABLE jeff_unpartitioned
AS
SELECT l pk_col
, MOD(l,4) partitioning_key_col
, (dbms_random.value * 1000) index_col1
, (dbms_random.value * 1000) index_col2
, (dbms_random.value * 1000) index_col3
, (dbms_random.value * 1000) index_col4
FROM (SELECT level l FROM dual CONNECT BY LEVEL < 100001)
ORDER BY l
/
CREATE UNIQUE INDEX ju_pk ON jeff_unpartitioned(pk_col)
/
CREATE INDEX ju_idx1 ON jeff_unpartitioned(index_col1)
/
CREATE INDEX ju_idx2 ON jeff_unpartitioned(index_col2)
/
CREATE INDEX ju_idx3 ON jeff_unpartitioned(index_col3)
/
CREATE INDEX ju_idx4 ON jeff_unpartitioned(index_col4)
/
exec DBMS_STATS.GATHER_TABLE_STATS (ownname=>USER,tabname=>'JEFF_UNPARTITIONED');

SELECT i.index_name
, i.leaf_blocks
, i.distinct_keys
, i.clustering_factor
, (DECODE((t.num_rows - t.blocks),0,0,(i.clustering_factor) / (t.num_rows - t.blocks))) * 100 clfpct
FROM all_indexes i
, all_tables t
WHERE i.table_name = t.table_name
AND i.table_owner = t.owner
AND t.table_name = 'JEFF_UNPARTITIONED'
AND t.owner = USER
/

DROP TABLE jeff_partitioned PURGE
/
CREATE TABLE jeff_partitioned
(pk_col number
,partitioning_key_col number
,index_col1 number
,index_col2 number
,index_col3 number
,index_col4 number
)
PARTITION BY LIST(partitioning_key_col)
(PARTITION p1 VALUES (1)
,PARTITION p2 VALUES (2)
,PARTITION p3 VALUES (3)
,PARTITION p4 VALUES (4)
)
/
INSERT /*+ APPEND */
INTO jeff_partitioned(pk_col,partitioning_key_col,index_col1,index_col2,index_col3,index_col4)
SELECT l
, 1
, ROUND(l,100)
, (dbms_random.value * 1000)
, (dbms_random.value * 1000)
, (dbms_random.value * 1000)
FROM (SELECT level l FROM dual CONNECT BY LEVEL < 25001)
ORDER BY 3
/
COMMIT
/
INSERT /*+ APPEND */
INTO jeff_partitioned(pk_col,partitioning_key_col,index_col1,index_col2,index_col3,index_col4)
SELECT l
, 2
, (dbms_random.value * 1000)
, ROUND(l,100)
, (dbms_random.value * 1000)
, (dbms_random.value * 1000)
FROM (SELECT level l FROM dual CONNECT BY LEVEL < 25001)
ORDER BY 4
/
COMMIT
/
INSERT /*+ APPEND */
INTO jeff_partitioned(pk_col,partitioning_key_col,index_col1,index_col2,index_col3,index_col4)
SELECT l
, 3
, (dbms_random.value * 1000)
, (dbms_random.value * 1000)
, ROUND(l,100)
, (dbms_random.value * 1000)
FROM (SELECT level l FROM dual CONNECT BY LEVEL < 25001)
ORDER BY 5
/
COMMIT
/
INSERT /*+ APPEND */
INTO jeff_partitioned(pk_col,partitioning_key_col,index_col1,index_col2,index_col3,index_col4)
SELECT l
, 4
, (dbms_random.value * 1000)
, (dbms_random.value * 1000)
, (dbms_random.value * 1000)
, ROUND(l,100)
FROM (SELECT level l FROM dual CONNECT BY LEVEL < 25001)
ORDER BY 6
/
COMMIT
/
CREATE UNIQUE INDEX jp_pk ON jeff_partitioned(pk_col,partitioning_key_col) LOCAL
/
CREATE INDEX jp_idx1 ON jeff_partitioned(index_col1,partitioning_key_col) LOCAL
/
CREATE INDEX jp_idx2 ON jeff_partitioned(index_col2,partitioning_key_col) LOCAL
/
CREATE INDEX jp_idx3 ON jeff_partitioned(index_col3,partitioning_key_col) LOCAL
/
CREATE INDEX jp_idx4 ON jeff_partitioned(index_col4,partitioning_key_col) LOCAL
/
exec DBMS_STATS.GATHER_TABLE_STATS (ownname=>USER,tabname=>'JEFF_PARTITIONED');

SELECT i.index_name
, i.leaf_blocks
, i.distinct_keys
, i.clustering_factor
, (DECODE((t.num_rows - t.blocks),0,0,(i.clustering_factor) / (t.num_rows - t.blocks))) * 100 clfpct
FROM all_indexes i
, all_tables t
WHERE i.table_name = t.table_name
AND i.table_owner = t.owner
AND t.table_name = 'JEFF_PARTITIONED'
AND t.owner = USER
ORDER BY i.index_name
/

SELECT i.index_name
, ip.partition_name
, ip.leaf_blocks
, ip.distinct_keys
, ip.clustering_factor
, (DECODE((t.num_rows - t.blocks),0,0,(ip.clustering_factor) / (t.num_rows - t.blocks))) * 100 clfpct
FROM all_indexes i
, all_ind_partitions ip
, all_tables t
WHERE i.table_name = t.table_name
AND i.table_owner = t.owner
AND i.index_name = ip.index_name
AND i.owner = ip.index_owner
AND t.table_name = 'JEFF_PARTITIONED'
AND t.owner = USER
ORDER BY i.index_name
, ip.partition_name
/

And now the results…


Table dropped.

Table created.

Index created.

Index created.

Index created.

Index created.

Index created.

PL/SQL procedure successfully completed.

Index      Leaf   Distinct Clustering Clu Fact
Name       Blocks Keys     Factor     Percent
---------- ------ -------- ---------- --------
JU_IDX4 226 100,000 99,830 101
JU_IDX3 226 100,000 99,880 101
JU_IDX2 226 100,000 99,872 101
JU_IDX1 226 100,000 99,833 101
JU_PK 103 100,000 685 1

5 rows selected.

Table dropped.

Table created.

25000 rows created.

Commit complete.

25000 rows created.

Commit complete.

25000 rows created.

Commit complete.

25000 rows created.

Commit complete.

Index created.

Index created.

Index created.

Index created.

Index created.

PL/SQL procedure successfully completed.

Index      Leaf   Distinct Clustering Clu Fact
Name       Blocks Keys     Factor     Percent
---------- ------ -------- ---------- --------
JP_IDX1 218 100,000 74,593 75
JP_IDX2 218 100,000 74,624 75
JP_IDX3 218 100,000 74,609 75
JP_IDX4 218 100,000 74,627 75
JP_PK 124 100,000 568 1

5 rows selected.

Index      Partition  Leaf   Distinct Clustering Clu Fact
Name       Name       Blocks Keys     Factor     Percent
---------- ---------- ------ -------- ---------- --------
JP_IDX1 P1 32 25,000 142 0
JP_IDX1 P2 62 25,000 24,826 25
JP_IDX1 P3 62 25,000 24,811 25
JP_IDX1 P4 62 25,000 24,814 25
JP_IDX2 P1 62 25,000 24,812 25
JP_IDX2 P2 32 25,000 142 0
JP_IDX2 P3 62 25,000 24,835 25
JP_IDX2 P4 62 25,000 24,835 25
JP_IDX3 P1 62 25,000 24,802 25
JP_IDX3 P2 62 25,000 24,826 25
JP_IDX3 P3 32 25,000 142 0
JP_IDX3 P4 62 25,000 24,839 25
JP_IDX4 P1 62 25,000 24,830 25
JP_IDX4 P2 62 25,000 24,832 25
JP_IDX4 P3 62 25,000 24,823 25
JP_IDX4 P4 32 25,000 142 0
JP_PK P1 31 25,000 142 0
JP_PK P2 31 25,000 142 0
JP_PK P3 31 25,000 142 0
JP_PK P4 31 25,000 142 0

20 rows selected.

From the results we can see that when the table is unpartitioned and, in this instance, ordered by the primary key, all four indexes have high clustering factors.

We could order the data by one of the columns in one of the indexes and make that index get a good (low) clustering factor, but the other indexes would still remain with high values – you can’t make them all have low clustering factors whilst the table is one big amorphous mass of data.

In the partitioned table we are able to order the data in the most efficient way for each partition, insofar as each partition maps to an access path via a specific index.

So, partition P1, where users would commonly access the data via index JP_IDX1, has a low clustering factor because we ordered the data by INDEX_COL1 on insert.

Partition P2 has the lowest clustering factor for index JP_IDX2 and P3 for JP_IDX3 etc…

Some caveats to bear in mind for this approach:

1. You should ensure that your queries do use the partitioning key column when accessing the table, otherwise you'll end up scanning extra index partitions unnecessarily. You will also make life extremely difficult, if not impossible, for the optimizer: if it can't identify that the query will only touch one partition, the CBO is forced to use the global statistics on the index/table rather than the specific ones for a single partition. As the results above show, the global figures still have a high clustering factor because they are an aggregate, which hides the fact that the one partition we really want has a low clustering factor (see the example query after this list).

2. If you want LOCAL indexes, you will need to add the partitioning key column to that index - if it is not already present, of course. As Jonathan points out in the first comment on this post, this only applies if the index in question is unique and supports a PK/unique constraint. If the index does not support a PK/unique constraint then it can be LOCAL without needing the partitioning key column(s) in it.

3. If you are using GLOBAL indexes on the partitioned table then they will still have reasonably high clustering factors, because of the partitions where the data is not ordered by the columns of that index; but the data itself will still be ordered within the partition that we access via the partitioning key column, so we should get good performance.

4. If you want to use ordered data in a table then you need to factor that into the way you populate the table - if it's a batch process then you can do this as part of that process, but if the data builds up over time, in an OLTP fashion, then you'll need periodic "rebuilds" of the table to take the data out and put it back in, reordered. This is something you would need to do whether or not it was partitioned, of course.
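As an illustration of caveat 1, against the test tables above a query shaped like the following (the predicate values are made up) lets the optimizer prune to a single partition and use that partition's statistics; leave out PARTITIONING_KEY_COL and it has to fall back on the misleading global figures:

SELECT *
FROM   jeff_partitioned
WHERE  partitioning_key_col = 1                -- prunes to partition P1
AND    index_col1 BETWEEN 100 AND 110          -- well served by local index JP_IDX1
/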

I’m sure there are more caveats.

Your mileage may, of course, vary.

Changing the subpartition template and pre-existing partitions

My colleague Tank could not insert some records into a subpartitioned table the other day as he kept getting an ORA-14400 error and couldn't work out why - nor could I at first, until he mentioned the possibility that the subpartition template had changed on the table…which it had.

Let's try and work through the problem…first let's create some test tables…

DROP TABLE j4134_test1 PURGE
/
CREATE TABLE j4134_test1
(
col1  NUMBER        NOT NULL
,col2  NUMBER        NOT NULL
,col3  VARCHAR2(254) NOT NULL
)
PARTITION BY RANGE (col1)
SUBPARTITION BY LIST(col2)
SUBPARTITION TEMPLATE
(SUBPARTITION s1 VALUES (1),
 SUBPARTITION s2 VALUES (2),
 SUBPARTITION s3 VALUES (3)
)
(
PARTITION p1 VALUES LESS THAN (1000)
,PARTITION p2 VALUES LESS THAN (2000)
,PARTITION p3 VALUES LESS THAN (3000)
,PARTITION p4 VALUES LESS THAN (4000)
)
/
ALTER TABLE j4134_test1 SET SUBPARTITION TEMPLATE
(SUBPARTITION s1 VALUES (1),
 SUBPARTITION s2 VALUES (2)
)
/
ALTER TABLE j4134_test1 ADD PARTITION p5 VALUES LESS THAN(5000)
/
DROP TABLE j4134_test2 PURGE
/
CREATE TABLE j4134_test2 (
col1  NUMBER        NOT NULL
,col2  NUMBER        NOT NULL
,col3  VARCHAR2(254) NOT NULL
)
PARTITION BY LIST (col2)
(
PARTITION s1 VALUES (1)
,PARTITION s2 VALUES (2)
,PARTITION s3 VALUES (3)
)
/

OK, that's everything set up…now let's just have a look at what partitions and subpartitions we have:

ae_aml[522/313]@AED52> SELECT partition_name
2  ,      subpartition_name
3  FROM   user_tab_subpartitions
4  WHERE  table_name='J4134_TEST1'
5  /

Partition            Sub Part
Name                 Name
-------------------- ------------
P1                   P1_S1
P1                   P1_S2
P1                   P1_S3
P2                   P2_S1
P2                   P2_S2
P2                   P2_S3
P3                   P3_S1
P3                   P3_S2
P3                   P3_S3
P4                   P4_S1
P4                   P4_S2
P4                   P4_S3
P5                   P5_S1
P5                   P5_S2

14 rows selected.

Notice that Partition P5 has only two subpartitions whilst the other partitions all have three subpartitions.

Now let's do a couple of tests…

First let's try and replicate the original problem…

ae_aml[522/313]@AED52> SET ECHO ON
ae_aml[522/313]@AED52> INSERT INTO j4134_test1(col1,col2,col3)
2  VALUES(3500,3,'TEST')
3  /

1 row created.

This worked - because the subpartition template used for partition P4, where the COL1 value 3500 would be inserted, included a subpartition for COL2 value 3 - no problem.

Let's try again…

ae_aml[522/313]@AED52> INSERT INTO j4134_test1(col1,col2,col3)
2  VALUES(4500,3,'TEST')
3  /
INSERT INTO j4134_test1(col1,col2,col3)
*
ERROR at line 1:
ORA-14400: inserted partition key does not map to any partition

Aha - now it fails…because the COL1 value of 4500 maps to partition P5, but P5 was created after the subpartition template was changed to have subpartitions only for values 1 and 2. The value of 3 for COL2 does not have a "home" to go to, so the statement fails.
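As an aside, one way to give that value a "home" after the event would be to add the missing subpartition to P5 by hand - a hedged sketch, leaving tablespace and storage to the defaults:

ALTER TABLE j4134_test1
  MODIFY PARTITION p5
  ADD SUBPARTITION p5_s3 VALUES (3)
/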

This is interesting because it means that Oracle allows us to have partitions set up with different subpartition layouts underneath them, depending on what the template was at the time each partition was created.

When I first thought about this I figured it wasn't possible/correct to do this and that the subpartitions had to be uniform across the partitions, but when you think about it, enforcing that would make changing the template difficult - what should Oracle do with the "missing", "extra" or "modified" subpartitions in the pre-existing partitions? Create/drop/modify them? Ignore them? If it tried to create missing ones there would be all sorts of questions, like which tablespace to use and what storage characteristics to set.

As the documentation says, it leaves what already exists as is and just uses the new template for partitions created in the future…which brings me to another point: that piece of code I put up on this post needed updating because I didn't know about this subpartition template issue. The code now checks that, if we are exchanging a partitioned table with a partition of a composite partitioned table, the source table is partitioned in the same way (LIST/HASH) as the subpartitioning on the target partition, and that the number of partitions in the source table matches the number of subpartitions in the target partition - if not, the process fails…like this:

ae_aml[522/313]@AED52> ALTER TABLE j4134_test1 EXCHANGE PARTITION p1 WITH TABLE j4134_test2
2  /

Table altered.

ae_aml[522/313]@AED52> REM Now put it back...
ae_aml[522/313]@AED52> ALTER TABLE j4134_test1 EXCHANGE PARTITION p1 WITH TABLE j4134_test2
2  /

Table altered.

ae_aml[522/313]@AED52> REM now try with the mismatched one - fails!
ae_aml[522/313]@AED52> ALTER TABLE j4134_test1 EXCHANGE PARTITION p5 WITH TABLE j4134_test2
2  /
ALTER TABLE j4134_test1 EXCHANGE PARTITION p5 WITH TABLE j4134_test2
                                                       *
ERROR at line 1:
ORA-14294: Number of partitions does not match number of subpartitions

I’ve updated the code to check for this scenario and it now reports it.
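The essence of the check is straightforward - this is a sketch of the sort of comparison involved for the P5 example above (not the actual code, and it ignores the LIST/HASH method check):

SELECT (SELECT COUNT(*)
        FROM   user_tab_partitions
        WHERE  table_name = 'J4134_TEST2')   source_partitions
,      (SELECT COUNT(*)
        FROM   user_tab_subpartitions
        WHERE  table_name     = 'J4134_TEST1'
        AND    partition_name = 'P5')        target_subpartitions
FROM   dual
/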

Using AWR to summarise SQL operations activity

I was asked recently how much "use" a database had - a none too specific question, but basically the person asking wanted to know how many queries, DMLs, DDLs etc. were run against the database on a daily basis…kind of like the queries-per-hour metrics that are sometimes quoted on TPC benchmarks, I guess.

I didn’t have an answer as to where to get this information so I figured it warranted a ramble through the AWR tables to try and come up with something…

This isn’t going to be a description of how the AWR works – there are manuals for that. Instead, I’ll just give you the salient points from my exploration.

The AWR data, including its history, is accessible by looking at a collection of DBA views named DBA_HIST%.

Part of the AWR processing captures Active Session History (ASH) data which is then accessible via the DBA_HIST_ACTIVE_SESS_HISTORY view. From this view we can obtain, for any given snap point in time, the sessions which were active, together with details of what they were doing and the statistics relating to the performance of that activity.

On this view is a column called SQL_OPCODE which tells us the type of SQL operation being performed by the session, with various codes indicating different things. I didn't manage to find a definitive list of these SQL_OPCODEs in a table/view anywhere, so by trial and error I worked them out as follows:

SQL_OPCODE    SQL Operation
1             DDL
2             INSERT
3             Query
6             UPDATE
7             DELETE
47            PL/SQL package call
50            Explain Plan
189           MERGE

…I’m sure this isn’t definitive…someone really smart out there will probably know where this list is actually stored and will, perhaps, put it on a comment to this post 😉
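In the meantime, a quick way to survey which opcodes actually turn up on your own system, and grab a sample SQL_ID for each so you can look the statement up, is something like this (a hedged sketch; for what it's worth the codes seem to line up with the statement command types you see in V$SESSION.COMMAND and AUDIT_ACTIONS, though I haven't checked every one):

SELECT sql_opcode
,      MIN(sql_id) sample_sql_id
,      COUNT(*)    samples
FROM   dba_hist_active_sess_history
GROUP  BY sql_opcode
ORDER  BY sql_opcode
/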

OK, now to code up something that goes back over this AWR data for the last, say, seven days and groups it up by SQL operation so we can see what our activity has been. I used a ROW_NUMBER analytic (cos they rock) to make sure we only get the latest snapshot row from the AWR table - if a SQL operation runs for a long time it may appear in the AWR view more than once, and we don't want to double count it as far as executions go.

I’ve grouped up those sessions which don’t have an active SQL statement associated with them (i.e. SQL_ID is NULL) into groups based on their Object Type and I’ve then used ROLLUP to get the results for each SQL operation by day and also the summed up totals by operation across the seven days and a grand total.

I’m not actually sure why some rows in DBA_HIST_ACTIVE_SESS_HISTORY don’t have a SQL_ID against them – if they were not actually running a statement in the session then that sounds fair enough…but some of them had values in PLSQL_ENTRY_OBJECT_ID indicating (I think) that they were running a specific package (DBMS_STATS or DBMS_SPACE in many cases) so the fact they didn’t have a SQL_ID was confusing to me – perhaps it means they’re in that package but not actually running a query at the time of the snap – either way, they’re grouped up separately.

Here is a SQL*Plus spool of the code and results from a test system:

x_j4134[543/9757]@AED52> l
1 WITH ash AS
2 (
3 SELECT TRUNC(ash.sample_time) sample_day
4 , (CASE WHEN ash.sql_opcode = 47
5 THEN 'PL/SQL'
6 WHEN ash.sql_opcode IN(1)
7 THEN 'DDL'
8 WHEN ash.sql_opcode IN(2,6,7,189)
9 THEN 'DML'
10 WHEN ash.sql_opcode IN(50)
11 THEN 'Explain Plan'
12 WHEN ash.sql_opcode IN(3)
13 THEN 'Query'
14 ELSE 'No Statement ID; In object type: '||NVL(o.object_type,'Not Specified')
15 END) statement_type
16 , ROW_NUMBER() OVER(PARTITION BY ash.sql_id,ash.sql_child_number ORDER BY ash.sample_time DESC) rn
17 FROM dba_hist_snapshot s
18 , dba_hist_active_sess_history ash
19 , dba_objects o
20 WHERE s.snap_id = ash.snap_id(+)
21 AND s.dbid = ash.dbid(+)
22 AND s.instance_number = ash.instance_number(+)
23 AND ash.plsql_entry_object_id = o.object_id(+)
24 AND TRUNC(ash.sample_time) BETWEEN TRUNC(SYSDATE-6) AND TRUNC(SYSDATE+1) -- all within last 7 days
25 )
26 SELECT sample_day
27 , statement_type
28 , COUNT(1)
29 FROM ash
30 WHERE rn = 1
31 GROUP BY ROLLUP(sample_day)
32 , ROLLUP(statement_type)
33 ORDER BY sample_day
34* , statement_type
x_j4134[543/9757]@AED52> /

SAMPLE_DAY           STATEMENT_TYPE                                         COUNT(1)
-------------------- ---------------------------------------------------- ----------
02-FEB-2007 00:00:00 DDL 112
02-FEB-2007 00:00:00 DML 49
02-FEB-2007 00:00:00 No Statement ID; In object type: Not Specified 292
02-FEB-2007 00:00:00 No Statement ID; In object type: PACKAGE 7
02-FEB-2007 00:00:00 No Statement ID; In object type: PROCEDURE 131
02-FEB-2007 00:00:00 PL/SQL 301
02-FEB-2007 00:00:00 Query 181
02-FEB-2007 00:00:00 1073
03-FEB-2007 00:00:00 DDL 20
03-FEB-2007 00:00:00 DML 26
03-FEB-2007 00:00:00 No Statement ID; In object type: Not Specified 91
03-FEB-2007 00:00:00 No Statement ID; In object type: PACKAGE 2
03-FEB-2007 00:00:00 No Statement ID; In object type: PROCEDURE 84
03-FEB-2007 00:00:00 PL/SQL 166
03-FEB-2007 00:00:00 Query 12
03-FEB-2007 00:00:00 401
04-FEB-2007 00:00:00 DDL 127
04-FEB-2007 00:00:00 DML 14
04-FEB-2007 00:00:00 No Statement ID; In object type: Not Specified 410
04-FEB-2007 00:00:00 No Statement ID; In object type: PROCEDURE 305
04-FEB-2007 00:00:00 PL/SQL 306
04-FEB-2007 00:00:00 Query 14
04-FEB-2007 00:00:00 1176
05-FEB-2007 00:00:00 DDL 115
05-FEB-2007 00:00:00 DML 81
05-FEB-2007 00:00:00 Explain Plan 1
05-FEB-2007 00:00:00 No Statement ID; In object type: Not Specified 261
05-FEB-2007 00:00:00 No Statement ID; In object type: PACKAGE 16
05-FEB-2007 00:00:00 No Statement ID; In object type: PROCEDURE 161
05-FEB-2007 00:00:00 PL/SQL 315
05-FEB-2007 00:00:00 Query 360
05-FEB-2007 00:00:00 1310
06-FEB-2007 00:00:00 DDL 98
06-FEB-2007 00:00:00 DML 86
06-FEB-2007 00:00:00 Explain Plan 2
06-FEB-2007 00:00:00 No Statement ID; In object type: Not Specified 212
06-FEB-2007 00:00:00 No Statement ID; In object type: PACKAGE 16
06-FEB-2007 00:00:00 No Statement ID; In object type: PROCEDURE 108
06-FEB-2007 00:00:00 PL/SQL 299
06-FEB-2007 00:00:00 Query 439
06-FEB-2007 00:00:00 1260
07-FEB-2007 00:00:00 DDL 98
07-FEB-2007 00:00:00 DML 162
07-FEB-2007 00:00:00 Explain Plan 1
07-FEB-2007 00:00:00 No Statement ID; In object type: Not Specified 210
07-FEB-2007 00:00:00 No Statement ID; In object type: PACKAGE 24
07-FEB-2007 00:00:00 No Statement ID; In object type: PROCEDURE 96
07-FEB-2007 00:00:00 PL/SQL 337
07-FEB-2007 00:00:00 Query 348
07-FEB-2007 00:00:00 1276
08-FEB-2007 00:00:00 DDL 112
08-FEB-2007 00:00:00 DML 420
08-FEB-2007 00:00:00 Explain Plan 1
08-FEB-2007 00:00:00 No Statement ID; In object type: Not Specified 311
08-FEB-2007 00:00:00 No Statement ID; In object type: PACKAGE 25
08-FEB-2007 00:00:00 No Statement ID; In object type: PROCEDURE 119
08-FEB-2007 00:00:00 PL/SQL 572
08-FEB-2007 00:00:00 Query 493
08-FEB-2007 00:00:00 2053
DDL 682
DML 838
Explain Plan 5
No Statement ID; In object type: Not Specified 1787
No Statement ID; In object type: PACKAGE 90
No Statement ID; In object type: PROCEDURE 1004
PL/SQL 2296
Query 1847
8549

68 rows selected.

Elapsed: 00:09:49.44

It was just an idea…if anyone has anything to add/improve it feel free to comment.

Why you should skim read deep stuff first!

Seems that my recent posting about constraint-generated predicates is already thoroughly covered in Jonathan's latest book (Ch 6, pp 145-6)…including the bit about specifying a "NOT NULL" predicate to get around the issue when the column in question is declared as NULLable rather than NOT NULL.

Doug said to me recently that it was one of those books you probably should skim read first in order to get your brain used to what content is in it…so when you need to know about a particular topic you'll (hopefully) remember that it was covered somewhere in such and such book…I think he's probably spot on there.

Help the Cost Based Optimizer – add constraints – but beware the NULLS

If you use a function on a column then the optimizer can't use an index, right?

Not quite.

You can of course use a Function Based index…but that’s not the subject of this post…so what else can we use in some circumstances ?

Well, I attended the Scottish Oracle User Group conference in Glasgow on Monday and enjoyed the Masterclass Jonathan Lewis gave on the CBO. After recently reading his book, the course had a degree of familiarity in terms of the slide content, but it was still a very worthwhile experience as Jonathan is a good presenter and I find it sinks in perhaps easier than just reading the book.

One of the things Jonathan said was that if you had a predicate such as this:

WHERE UPPER(col1) = 'ABC'

…then the CBO can choose to ignore the presence of the UPPER() function if there happens to be a constraint defined on that column that can effectively substitute for that function.

I’d never heard of this so I decided to investigate…

First I created a table:

create table t1(id number
,v1 varchar2(40) null
,v2 varchar2(40) not null
,constraint t1_ck_v1 check(v1=UPPER(v1))
,constraint t1_ck_v2 check(v2=UPPER(v2))
);

Note the presence of two character columns – one NULLable and the other mandatory. I’ve added check constraints enforcing the uppercase content of both these character columns also.

…next I create indexes on these character columns:

create index t1_i1 on t1(v1);
create index t1_i2 on t1(v2);

…insert some data and analyse the table:

insert into t1(id,v1,v2)
select l
,      'THIS IS ROW: '||TO_CHAR(l)
,      'THIS IS ROW: '||TO_CHAR(l)
from   (select level l from dual connect by level<500001);

commit;

exec DBMS_STATS.GATHER_TABLE_STATS(ownname=>USER,tabname=>'T1',estimate_percent=>100,cascade=>TRUE);

 

(NOTE – The data in columns V1 and V2 is an actual value in each row, i.e. there are no NULLs. This will be important later).

…now lets turn autotrace on:

set autotrace on

…and try a query against the table using the optional column:


select * from t1
where upper(v1)='THIS IS ROW: 1';

…which gives us (abridged for clarity/succinctness):

ID V1              V2
-- --------------- ---------------
 1 THIS IS ROW: 1  THIS IS ROW: 1

1 row selected.

Elapsed: 00:00:00.81

Execution Plan
----------------------------------------------------------
Plan hash value: 3617692013

--------------------------------------------------------------------------
| Id  | Operation         | Name | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      |  5000 |   214K|   789   (5)| 00:00:10 |
|*  1 |  TABLE ACCESS FULL| T1   |  5000 |   214K|   789   (5)| 00:00:10 |
--------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter(UPPER("V1")='THIS IS ROW: 1')

 
As we can see, it decided that with the UPPER() function involved, a plan using the index was not possible and so chose to do a full table scan – which was not what I was expecting.

I must admit I looked at it for some time trying to understand why it wasn't doing what Jonathan had indicated it would. I then called in my colleague Anthony to discuss it and, after much thought, he came up with the answer: it was the definition of the V1 column as NULLable that stopped the CBO from using the index. NULLs are not stored in (B-tree) indexes, so given the information at its disposal the CBO deemed it impossible to answer the query via the index, since it could potentially have missed a NULL value.

Given this information, I then rebuilt my test table to include the V2 column as per the above definition and then ran the query against the V2 column which was declared as NOT NULL:

select * from t1
where upper(v2)='THIS IS ROW: 1';

gives us:

ID V1              V2
-- --------------- ---------------
 1 THIS IS ROW: 1  THIS IS ROW: 1

1 row selected.

Elapsed: 00:00:00.03

Execution Plan
----------------------------------------------------------
Plan hash value: 965905564

-------------------------------------------------------------------------------------
| Id  | Operation                   | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |       |     1 |    44 |     4   (0)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID| T1    |     1 |    44 |     4   (0)| 00:00:01 |
|*  2 |   INDEX RANGE SCAN          | T1_I2 |     1 |       |     3   (0)| 00:00:01 |
-------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("V2"='THIS IS ROW: 1')
       filter(UPPER("V2")='THIS IS ROW: 1')

 

So, for the mandatory column, the CBO determines that the index can be used as an access path to obtain all of the relevant rows and given that it’s more efficient to do so it uses the index T1_I2 accordingly. This is what I was expecting to see in the first place…but obviously the NULLability of the V1 column had led me astray.

So, what happens if we add another predicate to the first query to inform the CBO that we are not looking for any NULL values - will it be clever enough to combine this fact with the information from the constraint and come up with an index access path?

select * from t1
where upper(v1)='THIS IS ROW: 1'
and v1 is not null;

which gives us:

 

ID V1              V2
-- --------------- ---------------
 1 THIS IS ROW: 1  THIS IS ROW: 1

1 row selected.

Elapsed: 00:00:00.01

Execution Plan
----------------------------------------------------------
Plan hash value: 1429545322

-------------------------------------------------------------------------------------
| Id  | Operation                   | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |       |     1 |    44 |     4   (0)| 00:00:01 |
|   1 |  TABLE ACCESS BY INDEX ROWID| T1    |     1 |    44 |     4   (0)| 00:00:01 |
|*  2 |   INDEX RANGE SCAN          | T1_I1 |     1 |       |     3   (0)| 00:00:01 |
-------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("V1"='THIS IS ROW: 1')
       filter(UPPER("V1")='THIS IS ROW: 1' AND "V1" IS NOT NULL)

So, yes, it can derive from the additional predicate stating that we are only looking for rows where V1 IS NOT NULL, in conjunction with the check constraint T1_CK_V1, that the UPPER() function can be ignored and that the index access path is now available; given it's more efficient, it chooses to use it.

Quite clever really but I’m glad Anthony was around to help me see the wood for the trees on this one.
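Of course, if the data in V1 genuinely never contains NULLs, a simpler fix than sprinkling IS NOT NULL predicates through the queries is to tell the optimizer so by declaring the column NOT NULL - a minimal sketch, assuming the data allows it:

ALTER TABLE t1 MODIFY (v1 NOT NULL)
/

After that, the first query should be able to use T1_I1 without the extra predicate, just as the V2 version did.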

I spoke with Jonathan about this testing and he said he was aware of the need for the NOT NULL constraint in order for this to work and that from memory he thinks it was somewhere in the middle of 9.2 that this requirement came in to address a bug in transformation.

10gR2: OBJECT_NAME missing from V$SQL_PLAN for INSERT statement

I'd written a query the other day to try and identify SQL statements that had been run wanting a certain degree of parallelism (DOP) but had either run with fewer than the requested number of PX slaves or had run serially, so that we could identify whether the (poor) performance of a query was potentially due to it not getting the (PX) resources we expected it to.

I posted the query on this entry here but have updated it since and it’s on this link now…in case I change it again. It needs more work as it currently doesn’t fully understand how many PX slaves will be used for a given piece of SQL, e.g. Sorts and Set operations (UNION etc…) all add to the requirements…I’m trying to determine the rules for this so I can improve the query in this respect.

Anyway, it seemed to work reasonably well – particularly at finding things that were supposed to run parallel but actually ran serially.

I did, however, notice that even for certain (INSERT) statements that didn't involve sorts/set operations, it was showing records where the required DOP (what I thought would be the maximum DOP it was going to request) was actually less than the achieved DOP. This didn't make sense until I delved deeper and found that the OBJECT_NAME column in V$SQL_PLAN appears to be NULL on the line relating to the INSERT operation in the plan. Running an EXPLAIN PLAN for the statement showed the OBJECT_NAME correctly; only when the statement is executed and V$SQL_PLAN is checked does the column turn out, unexpectedly, to be NULL.

With the OBJECT_NAME being NULL it meant that my script was not including the DOP on the target table when determining the object with the highest DOP thereby sometimes getting an incorrect value and resulting in rows where the Required DOP is less than the Achieved DOP.
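For reference, the check itself is just a matter of pulling the plan lines for the statement's SQL_ID out of V$SQL_PLAN after it has run and looking at OBJECT_NAME on the INSERT/LOAD line - a minimal sketch, with :sql_id as a placeholder for the value found in V$SQL:

SELECT id
,      operation
,      options
,      object_owner
,      object_name
FROM   v$sql_plan
WHERE  sql_id = :sql_id
ORDER  BY id
/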

I created the obligatory test case to illustrate the point before contacting support to raise an SR (5686332.993). It turns out that this is all down to bug 5204810 which is (allegedly) fixed in 11g. The bug is a little unclear in that it only talks about this issue when dealing with conventional inserts and not Direct Path inserts which were those where I'd experienced the problem.

The bug suggests that 11g will add a new line to the execution plans such that it will look like this:


------------------------------------------------
| Id  | Operation                 | Name       |
------------------------------------------------
|   0 | INSERT STATEMENT          |            |
|   1 |  LOAD TABLE CONVENTIONAL  | TABLE_X    |
------------------------------------------------


The ID 1 line will be visible in 11g, the bug suggests, and it will then start to show the OBJECT_NAME (TABLE_X in this instance) correctly on this new line rather than a NULL.

Oracle support suggested that although it wasn’t directly mentioned in this bug, it was still the same issue causing the problem for my Direct Path inserts…although I’m not entirely convinced since Direct Path inserts already have an extra line (LOAD AS SELECT) under the INSERT STATEMENT line in their plans…perhaps the fix, whilst including the line for LOAD TABLE CONVENTIONAL for conventional inserts, has also ensured that the OBJECT_NAME is correctly displayed when it’s a Direct Path INSERT and the Operation is LOAD AS SELECT…I’ll try to remember to test it when 11g becomes available.

Scalar Subqueries and their effect on execution plans

Scalar subqueries…I remember Tom extolling their virtues at the UKOUG last year in one of his presentations. They seem like a neat idea except that they have an unpleasant side effect which I came across the other day on a production system.

We had a situation where a piece of DML was running slowly and I was asked to take a look at it to see what it was doing. I was told that the query had undergone some modifications recently (can you hear the alarm bells ringing !?) but that the execution plan had been checked and it was fine.

I first ran a little script called get_query_progress.sql, which I wrote to get the execution plan from V$SQL_PLAN and link it to the work areas and long ops V$ views to give a better picture of where a query is at in its execution plan…hopefully giving an indication of how far through it is. The results showed that the plan was as I had expected for this piece of DML - it's a big operation using parallel query, full table scans and hash joins to process lots of rows in the most efficient way…so why was it going slowly if the execution plan looked good? Well, the progress script wasn't really much use since it just showed the execution plan and didn't have any long ops to speak of, so I couldn't tell where exactly in the plan it had reached…time for another tack.

I then ran a query against V$SESSION_WAIT_HISTORY to see if that would show what we were waiting for. Expecting to see scattered reads and perhaps reads/writes on temp as it did the hash joins, I was surprised to see lots of sequential reads, which suggested the use of an index - but the execution plan from above did not involve any indexes, so this seemed odd.
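A hedged sketch of the kind of wait-history query involved, with :sid as the session running the DML - for "db file sequential read" waits, P1 and P2 are the file# and block#:

SELECT event
,      p1          file#
,      p2          block#
,      wait_time
FROM   v$session_wait_history
WHERE  sid = :sid
ORDER  BY seq#
/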

Next I figured I’d try to determine what the sequential reads were actually reading by using the File# and Block# from the waits information using this query:

SELECT segment_name, segment_type, owner, tablespace_name
FROM   dba_extents
WHERE  file_id = <file#>
AND    <block#> BETWEEN block_id AND (block_id + blocks - 1)
/

Which showed that the sequential reads were in fact reading an index on one of the tables – but how could this be when the plan showed no signs of this step ?

I took the DML and did an EXPLAIN PLAN on it to see what this thought the intended plan would be and it showed no signs of this index read so that’s EXPLAIN PLAN and V$SQL_PLAN (i.e. the actual plan it’s using) both showing no sign of this index read and yet it was definitely happening.

Looking at the changes to the DML, my colleague Anthony and I noted that the recent modifications had involved adding some new code, including some scalar subqueries on some additional tables - tables which were not showing in the intended/actual plans either. We decided to rewrite the query using an outer join approach rather than the scalar subquery, and the plans then showed up the access of these new tables and that the access was via a full table scan. Executing the DML then resulted in an elapsed time in line with expectations.

I did set up a simple test case and then ran a 10053 trace on it to see what was in there:

DROP TABLE test1
/
CREATE TABLE test1(col1 number
,col2 number)
/
DROP TABLE test2
/
CREATE TABLE test2(col1 number
,col2 number)
/
BEGIN
FOR r IN 1..100000 LOOP
INSERT INTO test1 VALUES(r,100000+r);
IF MOD(r,2)=0 THEN
INSERT INTO test2 VALUES(100000+r,200000+r);
END IF;
END LOOP;
COMMIT;
END;
/
exec dbms_stats.gather_table_stats(ownname => USER,tabname => 'TEST1',estimate_percent => 100);
exec dbms_stats.gather_table_stats(ownname => USER,tabname => 'TEST2',estimate_percent => 100);

SELECT /*+ OUTER JOIN APPROACH */
COUNT(1) FROM (
SELECT t1.col1
, t1.col2
, t2.col2
FROM test1 t1
, test2 t2
WHERE t1.col2 = t2.col1(+)
ORDER BY t1.col1
)
/
alter session set events '10053 trace name context forever';

SELECT /*+ SCALAR SUBQUERY APPROACH */
COUNT(1) FROM (
SELECT t1.col1
, t1.col2
, (SELECT t2.col2
FROM test2 t2
WHERE t1.col2 = t2.col1
) col2
FROM test1 t1
ORDER BY t1.col1
)
/

In the trace file we first see the scalar subquery being evaluated…

****************
QUERY BLOCK TEXT
****************
SELECT t2.col2
FROM test2 t2
WHERE t1.col2 = t2.col1

*********************
QUERY BLOCK SIGNATURE
*********************
qb name was generated
signature (optimizer): qb_name=SEL$3 nbfros=1 flg=0
fro(0): flg=0 objn=55825 hint_alias="T2"@"SEL$3"

… then the costs for this subquery:

*********************************
Number of join permutations tried: 1
*********************************
Final - First Rows Plan: Best join order: 1
Cost: 27.6270 Degree: 1 Card: 1.0000 Bytes: 9
Resc: 27.6270 Resc_io: 26.0000 Resc_cpu: 10783378
Resp: 27.6270 Resp_io: 26.0000 Resc_cpu: 10783378

…then it works on the whole query:

****************
QUERY BLOCK TEXT
****************
SELECT /*+ SCALAR SUBQUERY APPROACH */
COUNT(1) FROM (
SELECT t1.col1
, t1.col2
, (SELECT t2.col2
FROM test2 t2
WHERE t1.col2 = t2.col1
) col2
FROM test1 t1
ORDER BY t1.col1
)
*********************
QUERY BLOCK SIGNATURE
*********************
qb name was generated
signature (optimizer): qb_name=SEL$51F12574 nbfros=1 flg=0
fro(0): flg=0 objn=55824 hint_alias="T1"@"SEL$2"

…the costs for which are:

*********************************
Number of join permutations tried: 1
*********************************
Final - All Rows Plan: Best join order: 1
Cost: 57.8271 Degree: 1 Card: 100000.0000 Bytes: 500000
Resc: 57.8271 Resc_io: 55.0000 Resc_cpu: 18737631
Resp: 57.8271 Resp_io: 55.0000 Resc_cpu: 18737631

And the (incomplete) plan it thinks it’s running:

============
Plan Table
============
--------------------------------------+-----------------------------------+
| Id  | Operation            | Name   | Rows  | Bytes | Cost  | Time      |
--------------------------------------+-----------------------------------+
| 0   | SELECT STATEMENT     |        |       |       |    58 |           |
| 1   |  SORT AGGREGATE      |        |     1 |     5 |       |           |
| 2   |   TABLE ACCESS FULL  | TEST1  |   98K |  488K |    58 |  00:00:01 |
--------------------------------------+-----------------------------------+

So, I guess you get more information in the 10053 trace file, including the breakdown for the subquery(ies) - so in the case I was working on, that would have highlighted the use of this new table and its access path being (inefficiently) via an index. It still gets the overall plan wrong though.

After a bit of reading, Anthony discovered that Tom had mentioned this issue of execution plans not reflecting actual processing, in his Effective Oracle by Design book (p504 – 514). Tom remarked that it was an issue in 9iR2 and that he hoped it would be fixed in a future release – well, that release is not 10gR2 since we’re still seeing this anomaly.

Parallel downgrades to serial after an OWB 10.1.0.4 upgrade

For the last few days we’ve encountered a problem with our OWB mappings running with wildly varying performance from one day/mapping to the next and we were a little confused as to why…figured it out now and thought I’d blog what we found…

The problem we found was that we were getting lots of "Parallel operations downgraded to serial" - which, given that we're not using adaptive multi user whilst running our batch suite, is indicative of mappings starting up and finding that there are no parallel slaves left whatsoever to play with. This was further confirmed by the fact that there were, in comparison, not many occurrences of downgrades by 25/50/75%…

select name
, value
from v$sysstat
where name like 'Parallel%'
/

Name                                        Value
------------------------------------------- -------
Parallel operations not downgraded 18,694
Parallel operations downgraded to serial 476,364
Parallel operations downgraded 75 to 99 pct 15
Parallel operations downgraded 50 to 75 pct 137
Parallel operations downgraded 25 to 50 pct 214
Parallel operations downgraded 1 to 25 pct 85

So why would that be then when the batch suite has been working fine (i.e. getting finished in good time) for quite some time now ?

The obvious thing to ask would be "what has changed?", but we thought we'd try and look at it from an analysis of database metrics - we've got Enterprise Manager and AWR and ASH stuff to look at. So we tried to find where it tells us whether a given job (i.e. a piece of DML SQL in an ETL mapping) ran parallel or not, and, more importantly, with what degree of parallelism. Now, we may be novice EM users, but basically we couldn't find what we wanted, which was essentially the contents of V$PX_SESSION at the point in time that the query ran, so we could determine the DEGREE (actually achieved) and the REQ_DEGREE (what DOP was asked for) and see if there is any difference, or whether there is no record at all in V$PX_SESSION, indicating that the query ran serially.

I’m sure there is probably a way of getting this information from EM/AWR/ASH but I can’t find it so after a fruitless search I decided to build a simple capture process which recorded the contents of V$PX_SESSION in a history table and we scheduled an EM job to run it periodically (repeated every minute in our case).
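For completeness, a rough sketch of the sort of capture this involves - the DDL here is illustrative rather than the actual code, but the history table ends up with a RUNDATE column plus the V$PX_SESSION columns used by the view below:

CREATE TABLE mgmt_t_v$px_session_history AS
SELECT SYSDATE rundate
,      p.*
FROM   v$px_session p
WHERE  1 = 0
/

-- the scheduled job then simply runs this every minute:
INSERT INTO mgmt_t_v$px_session_history
SELECT SYSDATE
,      p.*
FROM   v$px_session p
/
COMMIT
/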

After we’d recorded this information we were able to analyse it by creating a simple view which linked this history data (in MGMT_T_V$PX_SESSION_HISTORY) to some of the ASH views…

CREATE OR REPLACE VIEW mgmt_v_sql_dop_analysis AS
WITH sql_dop AS
(
SELECT /*+ NO_MERGE MATERIALIZE */
DISTINCT hsp.sql_id
FROM dba_hist_sql_plan hsp
, dba_tables t
WHERE hsp.object_owner = t.owner
AND hsp.object_name = t.table_name
AND hsp.object_owner IS NOT NULL
AND t.degree IS NOT NULL
AND LTRIM(RTRIM(t.degree)) != '1'
)
SELECT /*+ ORDERED USE_HASH(dhs,sd,ps) */
hsh.sql_id
, hsh.sample_time
, TRUNC(hsh.sample_time) ash_sample_date
, dhs.sql_text
, (CASE WHEN sd.sql_id IS NULL
THEN 'SERIAL JOB'
ELSE 'PARALLEL JOB'
END) job_type
, (CASE WHEN ps.saddr IS NULL
THEN 'NOPARALLEL'
ELSE 'PARALLEL'
END) ran_parallel_flag
, ps.rundate,ps.saddr,ps.sid
, ps.serial#
, ps.qcsid
, ps.qcserial#
, ps.degree
, ps.req_degree
FROM dba_hist_active_sess_history hsh
, dba_hist_sqltext dhs
, sql_dop sd
, mgmt_t_v$px_session_history ps
WHERE ps.qcsid(+) = hsh.session_id
AND ps.qcserial#(+) = hsh.session_serial#
AND hsh.sql_id = dhs.sql_id
AND hsh.sql_id = sd.sql_id(+)
-- we started collecting mgmt_t_v$px_session_history
-- at this point
AND hsh.sample_time >=
TO_DATE('15-JUL-2006','DD-MON-YYYY')
/

…and then run this query:


SELECT a.sql_id
, a.sample_time
, a.ash_sample_date
, a.sql_text
, a.rundate
, a.job_type
, a.ran_parallel_flag
, a.rundate
, a.saddr
, a.sid
, a.serial#
, a.qcsid
, a.qcserial#
, a.degree
, a.req_degree
FROM mgmt_v_sql_dop_analysis a
where ash_sample_date = '21-JUL-2007'
and job_type='PARALLEL JOB'
and (ran_parallel_flag = 'NOPARALLEL'
or req_degree != degree)

/

which finds jobs which involve tables with a DOP set on them and therefore should be using parallel, but which actually ran either without any parallel slaves (i.e. downgraded to serial), and hence were probably really slow, or ran with less than the requested number of parallel slaves (i.e. downgraded by some percentage - 25/50/75), and hence were probably slower than we'd have liked.
This highlighted lots of jobs which were not getting any DOP (as per the high figures of “Parallel operations downgraded to serial”) and a small number of jobs which were being downgraded somewhat – the same information as from V$SYSSTAT but more detailed in terms of which jobs had been affected and how.

So, we’ve identified which jobs were getting downgraded and it was pretty much in line with the elapsed times of those jobs – the more the downgrade the slower they went. Problem is that doesn’t specifically identify why except we noticed that it looked like lots of the mappings were running with far higher DOP than we’d expected (24 v 8) and hence probably why we were exhausting the supply of slaves.

We couldn’t work out why some of these mappings were getting DOP of 24 instead of the 8 we were expecting and had set on the tables. It pointed to the fact that the table level DOP was being ignored or overridden which in turn pointed the finger of suspicion on the mapping code.

So, back to the original question of “What’s changed recently ?”…

Well, we'd undergone an upgrade of OWB from 10.1.0.2 to 10.1.0.4 very recently and, to cut a long story short, it seems that the problem has occurred as a result of that. The reason is that 10.1.0.2 OWB corrupted the PARALLEL hints in the mappings, so the DML SQL resorted to using the table level degree, which was 8. Unfortunately, the 10.1.0.4 version has fixed this hint corruption issue, and some new mappings recently created in this version are now using the hints the mapping specifies, which are of the form:

PARALLEL(table_alias DEFAULT)

…which on our box with CPU_COUNT = 24 and PARALLEL_THREADS_PER_CPU = 1 means those mappings now get DOP of 24 instead of the table level DOP 8.

The issue didn’t affect the mappings which were originally created in 10.1.0.2 as they had the corrupted hints still in place but new mappings created in OWB 10.1.0.4 suffered from the problem.

What happens then is that those new mappings when they run in the schedule, ask for and get, far more of the available pool of px slaves than we anticipated and thus the pool of slaves is depleted and later running mappings have no slaves to use thereby running serially.

The solution was to go through all the mappings and remove the PARALLEL hints – instead we’ll be working off table level parallel degree settings exclusively and our batch suite should settle down again.
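To illustrate the difference (the INSERT below is just the shape of a mapping statement, with made-up table names): the DEFAULT keyword in the hint asks for the system default DOP, which on this box is CPU_COUNT * PARALLEL_THREADS_PER_CPU = 24, and it overrides the table-level degree of 8.

-- shape of a mapping statement carrying the hints:
INSERT /*+ APPEND PARALLEL(tgt DEFAULT) */ INTO target_table tgt
SELECT /*+ PARALLEL(src DEFAULT) */ *
FROM   source_table src
/

-- with the hints removed, the table-level setting applies instead:
ALTER TABLE target_table PARALLEL (DEGREE 8)
/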

Compression and ITL

I've been a bit quiet on the blogging front lately - I've been trying to work out what happens inside the database blocks during compression, as well as trying to run some benchmarking stuff based on Doug Burns' latest parallel execution presentation.

To help me with the compression internals, Jonathan Lewis advised me a while ago to look at Julian Dyke's website and go over his block dumping material, which has proved very interesting and useful. I also found a presentation Julian did on compression, based on Oracle 9iR2, which had some stuff I'd covered in my last presentation as well as plenty more detailed stuff, as you'd expect from Julian.

Julian has a great picture of the structure and breakdown of the inside of a compressed block in his presentation which I’ve been trying to explore in more detail by testing with different block sizes and data. One of the things that has come to light is that there are quite a few factors involved in determining the compression that will be realized when using data segment compression. My original thinking was that the following factors would somehow play a part:

  • Block size
  • Number of rows
  • Length of data values
  • Number of repeats
  • Ordering of data being pushed into the target data segment

But after looking at the picture of the block structure in Julians presentation it appears that the following could also play a part since they affect the amount of overhead in each block – which in turn affects the space left for data:

  • ITL (Interested Transaction Lists – set by value of INITRANS on table create)
  • Number of columns in the data segment

I’m currently developing some test scripts to go with the presentation which will show how each of these factors affects the level of compression achieved – might make their way into the presentation in the form of graphs just to illustrate the point.
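To give a flavour of the sort of test involved, here is a hedged sketch (SOURCE_DATA and REPEATED_COL are hypothetical names): build the same data with and without COMPRESS, sorted to maximise repeats within each block, and compare the block counts.

CREATE TABLE comp_test_nocompress AS
SELECT * FROM source_data
/
CREATE TABLE comp_test_compress COMPRESS AS
SELECT * FROM source_data ORDER BY repeated_col
/
exec DBMS_STATS.GATHER_TABLE_STATS(ownname=>USER,tabname=>'COMP_TEST_NOCOMPRESS');
exec DBMS_STATS.GATHER_TABLE_STATS(ownname=>USER,tabname=>'COMP_TEST_COMPRESS');

SELECT table_name
,      blocks
,      num_rows
FROM   user_tables
WHERE  table_name LIKE 'COMP_TEST%'
/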

I did have a TAR (sorry SR!) open with Oracle to see if they’d give me more details on the actual algorithm that is used during compression but after much delay and deliberation they (development) decided it was something they didn’t want to divulge.

Funniest thing I’ve found so far is that Julian shows the compressed block header for a block in a 9.2.0.1 database clearly showing “9ir2” literals – you’d think they’d change when you move to “10gr2” wouldn’t you ? Think again – it still shows “9ir2” in the 10gr2 block dump trace files!